Power of tests for comparing trend curves with application to national immunization survey (NIS).
Zhao, Zhen
2011-02-28
To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and to compare the statistical power of the proposed tests under different trend-curve data, three statistical tests were proposed. For large sample sizes, assuming independent normal estimates among strata and across consecutive time points, Z and chi-square test statistics were developed; these are functions of the outcome estimates and their standard errors at each study time point for the two strata. For small sample sizes under the independent normal assumption, an F-test statistic was derived as a function of the sample sizes of the two strata and the parameters estimated across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the chi-square and F-tests. If the two trend curves cross with low interaction, the power of the Z-test is higher than or equal to that of both the chi-square and F-tests; at high interaction, however, the chi-square and F-tests are more powerful than the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to comparing trend curves of vaccination coverage estimates for the standard vaccine series using National Immunization Survey (NIS) 2000-2007 data.
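The paper gives the exact forms of these statistics; purely as an illustration (an assumed construction, not taken from the paper), the sketch below pools per-time-point Z scores for a parallel-shift alternative and sums their squares for a chi-square-type test sensitive to crossing curves:

import numpy as np
from scipy import stats

def trend_curve_tests(est1, se1, est2, se2):
    """Compare two trend curves given point estimates and standard errors
    at T consecutive time points (hedged reading of the large-sample
    Z and chi-square constructions described in the abstract)."""
    est1, se1, est2, se2 = map(np.asarray, (est1, se1, est2, se2))
    z_t = (est1 - est2) / np.sqrt(se1**2 + se2**2)  # per-time-point Z
    T = len(z_t)
    # Pooled Z: sensitive to a consistent (parallel) separation
    z_pooled = z_t.sum() / np.sqrt(T)
    p_z = 2 * stats.norm.sf(abs(z_pooled))
    # Chi-square: sums squared deviations, picks up crossing curves
    chi2 = np.sum(z_t**2)
    p_chi2 = stats.chi2.sf(chi2, df=T)
    return z_pooled, p_z, chi2, p_chi2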
Detecting higher spin fields through statistical anisotropy in the CMB and galaxy power spectra
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Kehagias, Alex; Liguori, Michele; Riotto, Antonio; Shiraishi, Maresuke; Tansella, Vittorio
2018-01-01
Primordial inflation may represent the most powerful collider to test high-energy physics models. In this paper we study the impact on the inflationary power spectrum of the comoving curvature perturbation in the specific model where massive higher spin fields are rendered effectively massless during a de Sitter epoch through suitable couplings to the inflaton field. In particular, we show that such fields with spin s induce a distinctive statistical anisotropic signal on the power spectrum, in such a way that not only the usual g_{2M} statistical anisotropy coefficients, but also higher-order ones (i.e., g_{4M}, g_{6M}, ..., g_{(2s-2)M} and g_{(2s)M}) are nonvanishing. We examine their imprints in the cosmic microwave background and galaxy power spectra. Our Fisher matrix forecasts indicate that the detectability of g_{LM} depends very weakly on L: all coefficients could be detected in the near future if their magnitudes are bigger than about 10^{-3}.
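For context, statistical-anisotropy searches of this kind conventionally parametrize the power spectrum as (standard form, not quoted from the paper):

\[ P(\mathbf{k}) = P_0(k)\left[1 + \sum_{L>0}\sum_{M=-L}^{L} g_{LM}\, Y_{LM}(\hat{k})\right], \]

with a spin-s field sourcing nonvanishing coefficients up to L = 2s.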
A new u-statistic with superior design sensitivity in matched observational studies.
Rosenbaum, Paul R
2011-09-01
In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.
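To make the power setup concrete, a minimal Monte Carlo sketch of the randomization-test case (Gamma = 1) follows; note that the 0.08 vs 0.66 figures in the abstract refer to power under a sensitivity analysis allowing unobserved bias (Gamma > 1), which additionally requires Rosenbaum's sensitivity bounds and the new u-statistic, neither of which is shown here:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_pairs, n_sims, alpha = 250, 2000, 0.05

rejections = 0
for _ in range(n_sims):
    d = rng.normal(0.5, 1.0, n_pairs)   # pair differences N(1/2, 1)
    stat, p = stats.wilcoxon(d)          # signed rank test, H0: zero effect
    rejections += (p < alpha)
print("power at Gamma = 1:", rejections / n_sims)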
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. Numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) replacement of the cumbersome solution of high-order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •Modified power method is applied to continuous energy Monte Carlo simulation. •Transfer matrix is introduced to generalize the modified power method. •All mode based population control is applied to get the higher eigenmodes. •Statistical fluctuations can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
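A deterministic analogue of the transfer-matrix idea can be sketched in a few lines of linear algebra; the Monte Carlo specifics (weight cancellation, population control, tally accumulation) are outside this sketch and the operator A is an arbitrary stand-in:

import numpy as np

def modified_power_method(A, m, iters=100, seed=0):
    """Hedged sketch: iterate a block of m vectors under A and
    eigendecompose the small transfer matrix to separate the first
    m eigenmodes, in the spirit of the strategy described above."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = rng.standard_normal((n, m))
    for _ in range(iters):
        Y = A @ X
        # transfer matrix: least-squares map from old to new iterates
        T, *_ = np.linalg.lstsq(X, Y, rcond=None)
        w, V = np.linalg.eig(T)          # eigen decomposition of transfer matrix
        order = np.argsort(-np.abs(w))
        X = Y @ V[:, order]              # rotate block onto separated modes
        X /= np.linalg.norm(X, axis=0)   # crude population-control analogue
    return w[order], X

# example: dominant and first higher modes of a random symmetric operator
rng = np.random.default_rng(3)
M = rng.standard_normal((50, 50)); M = M + M.T
vals, vecs = modified_power_method(M, m=3)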
Gene-Based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions.
Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y; Chen, Wei
2016-02-01
Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, here we develop Cox proportional hazards models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models in which the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox FR LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and with the sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than, or power similar to, the Cox SKAT LRT except when 50%/50% of causal variants have negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than the Cox BT LRT. The models and related test statistics can be useful in whole-genome and whole-exome association studies. An age-related macular degeneration dataset was analyzed as an example.
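The LRT construction referred to here is the standard one: with ℓ denoting the Cox partial log-likelihood,

\[ \mathrm{LRT} = 2\left\{\ell(\text{covariates} + \text{genetic basis terms}) - \ell(\text{covariates only})\right\} \;\dot\sim\; \chi^2_{q}, \]

where q is the number of basis coefficients used in the functional expansion of the genetic effect.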
Van Wynsberge, Simon; Gilbert, Antoine; Guillemot, Nicolas; Heintz, Tom; Tremblay-Boyer, Laura
2017-07-01
Extensive biological field surveys are costly and time consuming. To optimize sampling and ensure regular monitoring on the long term, identifying informative indicators of anthropogenic disturbances is a priority. In this study, we derived 1800 candidate indicators by combining metrics measured from coral, fish, and macro-invertebrate assemblages surveyed from 2006 to 2012 in the vicinity of an ongoing mining project in the Voh-Koné-Pouembout lagoon, New Caledonia. We performed a power analysis to identify a subset of indicators which would best discriminate temporal changes due to a simulated chronic anthropogenic impact. Only 4% of tested indicators were likely to detect a 10% annual decrease of values with sufficient power (>0.80). Corals generally provided higher statistical power than macro-invertebrates and fishes because of lower natural variability and higher occurrence. For the same reasons, higher taxonomic ranks provided higher power than lower taxonomic ranks. Nevertheless, a number of families of common sedentary or sessile macro-invertebrates and fishes also performed well in detecting changes: Echinometridae, Isognomidae, Muricidae, Tridacninae, Arcidae, and Turbinidae for macro-invertebrates and Pomacentridae, Labridae, and Chaetodontidae for fishes. Interestingly, these families did not provide high power in all geomorphological strata, suggesting that the ability of indicators to detect anthropogenic impacts was closely linked to reef geomorphology. This study provides a first operational step toward identifying statistically relevant indicators of anthropogenic disturbances in New Caledonia's coral reefs, which can be useful in similar tropical reef ecosystems where little information is available regarding the responses of ecological indicators to anthropogenic disturbances.
Jiang, Wei; Yu, Weichuan
2017-02-15
In genome-wide association studies (GWASs) of common diseases/traits, we often analyze multiple GWASs with the same phenotype together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular for jointly analyzing datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method based on controlling the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods. Also, our method discovers more associations than meta-analysis methods from empirical datasets of four phenotypes. The R package is available at http://bioinformatics.ust.hk/Jlfdr.html.
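The joint local false discovery rate underlying the method is, in the usual local-fdr notation (definition paraphrased; π₀ is the null proportion, f₀ and f the null and marginal densities of the vector of study-wise summary statistics):

\[ \mathrm{Jlfdr}(z_1,\dots,z_n) = \Pr(H_0 \mid z_1,\dots,z_n) = \frac{\pi_0 f_0(z_1,\dots,z_n)}{f(z_1,\dots,z_n)}. \]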
2009-01-01
In high-dimensional studies such as genome-wide association studies, the correction for multiple testing needed to control total type I error results in decreased power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there is no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects.
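A common form of the higher criticism statistic (the Donoho-Jin formulation; the workshop paper may use a variant) can be computed from genome-wide p-values as follows:

import numpy as np

def higher_criticism(pvals, frac=0.5):
    """Donoho-Jin higher criticism from a vector of p-values."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(frac * n))        # maximize over the smallest p-values
    return hc[:k].max()

# example: null uniform p-values give HC near its null range
rng = np.random.default_rng(0)
print(higher_criticism(rng.uniform(size=10000)))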
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
Seo, Jong-Geun; Kang, Kyunghun; Jung, Ji-Young; Park, Sung-Pa; Lee, Maan-Gee; Lee, Ho-Won
2014-12-01
In this pilot study, we analyzed relationships between quantitative EEG measurements and clinical parameters in idiopathic normal pressure hydrocephalus patients, along with differences in these quantitative EEG markers between cerebrospinal fluid tap test responders and nonresponders. Twenty-six idiopathic normal pressure hydrocephalus patients (9 cerebrospinal fluid tap test responders and 17 cerebrospinal fluid tap test nonresponders) constituted the final group for analysis. The resting EEG was recorded and relative powers were computed for seven frequency bands. Cerebrospinal fluid tap test nonresponders, when compared with responders, showed a statistically significant increase in alpha2 band power at the right frontal and centrotemporal regions. Higher delta2 band powers in the frontal, central, parietal, and occipital regions and lower alpha1 band powers in the right temporal region significantly correlated with poorer cognitive performance. Higher theta1 band powers in the left parietal and occipital regions significantly correlated with gait dysfunction. Higher delta1 band powers in the right frontal regions significantly correlated with urinary disturbance. Our findings may encourage further research using quantitative EEG in patients with ventriculomegaly as a potential electrophysiological marker for predicting cerebrospinal fluid tap test responders. This study additionally suggests that the delta, theta, and alpha bands are statistically correlated with the severity of symptoms in idiopathic normal pressure hydrocephalus patients.
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, show that the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to a partial-duration flood series, and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to an annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher, and more conservative dam designs and land-use restrictions may be required.
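A minimal sketch of the power-law recurrence-interval estimate on a partial-duration series (synthetic data; the gauge records analyzed in the paper are not reproduced here):

import numpy as np

# If power-law statistics apply, the annual exceedance frequency scales as
# N(>q) ~ C * q^(-alpha), so the recurrence interval is T(q) = 1 / N(>q).
q = np.sort(np.random.default_rng(2).pareto(1.5, 400) + 1.0)[::-1]
years = 60.0
n_exceed = np.arange(1, len(q) + 1) / years       # floods/year above q
coef = np.polyfit(np.log10(q), np.log10(n_exceed), 1)
alpha, logC = -coef[0], coef[1]

def recurrence_interval(discharge):
    return 1.0 / (10**logC * discharge**(-alpha))

print(recurrence_interval(q.max() * 2))           # extrapolated T for a severe flood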
Properties of different selection signature statistics and a new strategy for combining them.
Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H
2015-11-01
Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics such as composite likelihood ratio and cross population extended haplotype homozygosity have the highest power when fixation of the selected allele is reached, while integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. Then, we apply it to scan selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
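The DCMS construction combines the single statistics, converted to p-values, with weights that discount correlated statistics; a hedged sketch of that construction follows (consult the paper for the exact published formula):

import numpy as np

def dcms(pvals):
    """De-correlated composite of multiple signals: combine n selection
    statistics (as p-values, SNPs x statistics) while down-weighting
    correlated statistics."""
    P = np.asarray(pvals)                    # shape (n_snps, n_stats)
    logit = np.log((1 - P) / P)              # signal strength per statistic
    R = np.corrcoef(P, rowvar=False)         # correlation among statistics
    weights = 1.0 / np.abs(R).sum(axis=1)    # de-correlation weights
    return logit @ weights                   # one composite score per SNP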
A global goodness-of-fit statistic for Cox regression models.
Parzen, M; Lipsitz, S R
1999-06-01
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect if interactions or higher order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
Saniova, Beata; Drobny, Michal; Drobna, Eva; Hamzik, Julian; Bakosova, Erika; Fischer, Martin
2016-01-01
The main objective was to determine whether general anaesthesia (GA) provides sufficient inhibition to prevent negative experiences during GA. We investigated a group of patients (n = 17, mean age 63.59 years; 9 male, mean age 65.78 years; 8 female, mean age 61.13 years) during GA for open thorax surgery and analyzed the EEG signal by power spectrum (pEEG) for delta (DR) and gamma (GR) rhythms. EEG was recorded at OP0, the day before surgery, and in surgery phases OP1-OP5 during GA: OP1 = after premedication, OP2 = surgery onset, OP3 = surgery with one-side lung ventilation, OP4 = end of surgery with both-sides ventilation, OP5 = end of GA. pEEG was registered in the left frontal region (Fp1-A1 montage) in 17 right-handed persons. Mean DR power in OP2 was significantly higher than in OP5, and mean DR power in OP3 was higher than in OP5. One-lung ventilation did not change the minimal alveolar concentration, so the anaesthetic gases should not have accelerated the decrease in mean DR power. The higher mean GR power in OP0 than in OP3 was statistically significant. Mean GR power in OP3 was statistically significantly lower than in OP4, consistent with the same gas concentrations in OP3 and OP4. Our results showed that DR power decreased from OP2 until the end of GA, suggesting that the inhibition represented by the steadily decreasing DR power is sufficient for GA depth. The decay of GR power associated with working memory could reduce conscious cognition and unpleasant explicit experience during GA.
2013-01-01
Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9).
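A generic percentile-bootstrap sketch of such an RV procedure (not the authors' code; group labels stand for the clinically defined patient groups):

import numpy as np
from scipy import stats

def rv_bootstrap_ci(groups, comparator, reference, n_boot=1000, seed=0):
    """Percentile bootstrap CI for RV = F_comparator / F_reference,
    resampling patients with replacement."""
    rng = np.random.default_rng(seed)
    groups = np.asarray(groups)
    comparator, reference = np.asarray(comparator), np.asarray(reference)
    labels = np.unique(groups)

    def f_stat(y, g):
        return stats.f_oneway(*(y[g == lab] for lab in labels)).statistic

    rvs, n = [], len(groups)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        g = groups[idx]
        if len(np.unique(g)) < len(labels):
            continue                     # every clinical group must be present
        rvs.append(f_stat(comparator[idx], g) / f_stat(reference[idx], g))
    return np.percentile(rvs, [2.5, 97.5])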
Higher order statistical moment application for solar PV potential analysis
NASA Astrophysics Data System (ADS)
Basri, Mohd Juhari Mat; Abdullah, Samizee; Azrulhisham, Engku Ahmad; Harun, Khairulezuan
2016-10-01
Solar photovoltaic energy is an alternative to fossil fuels, which are depleting and pose a global warming problem. However, this renewable resource is variable and intermittent, so knowledge of the energy potential at a site is essential before building a solar photovoltaic power generation system. Here, a higher-order statistical moment model is applied to data collected from a 5 MW grid-connected photovoltaic system. Because the skewness and kurtosis of the AC power and solar irradiance distributions of the solar farm change dynamically, the Pearson system, in which the probability distribution is selected by matching theoretical moments to the empirical moments of the data, is suitable for this purpose. Taking advantage of the Pearson system in MATLAB, a software program was developed for distribution fitting and potential analysis, enabling future projection of AC power and solar irradiance availability.
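A minimal sketch of the moment inputs such a Pearson-system fit consumes; the data file name is hypothetical, and MATLAB's pearsrnd-style tools take exactly these four moments:

import numpy as np
from scipy import stats

# Hypothetical measurement file of AC power from the 5 MW farm
ac_power = np.loadtxt("ac_power_kw.txt")
moments = {
    "mean": np.mean(ac_power),
    "variance": np.var(ac_power),
    "skewness": stats.skew(ac_power),
    "kurtosis": stats.kurtosis(ac_power, fisher=False),
}
print(moments)  # feed these into a Pearson-type selection / distribution fit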
Influences of Atmospheric Stability State on Wind Turbine Aerodynamic Loadings
NASA Astrophysics Data System (ADS)
Vijayakumar, Ganesh; Lavely, Adam; Brasseur, James; Paterson, Eric; Kinzel, Michael
2011-11-01
Wind turbine power and loadings are influenced by the structure of atmospheric turbulence and thus by the stability state of the atmosphere. Statistical differences in loadings with atmospheric stability could impact controls, blade design, etc. Large-eddy simulations (LES) of the neutral and moderately convective atmospheric boundary layer (NBL, MCBL) are used as inflow to the NREL FAST advanced blade-element momentum theory code to predict wind turbine rotor power, sectional lift and drag, blade bending moments and shaft torque. Using horizontal homogeneity, we combine time and ensemble averages to obtain converged statistics equivalent to "infinite" time averages over a single turbine. The MCBL required longer effective time periods to obtain converged statistics than the NBL. Variances and correlation coefficients among wind velocities, turbine power and blade loadings were higher in the MCBL than in the NBL. We conclude that the stability state of the ABL strongly influences wind turbine performance. Supported by NSF and DOE.
The statistical overlap theory of chromatography using power law (fractal) statistics.
Schure, Mark R; Davis, Joe M
2011-12-30
The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby below a critical saturation value no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. A small scale range (small β) is shown to generate more uniform chromatograms, whereas a large scale range (large β) gives occasional large excursions of retention times; this is a property of power laws, where "wild" behavior occasionally occurs. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant model of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics.
Non-gaussian statistics of pencil beam surveys
NASA Technical Reports Server (NTRS)
Amendola, Luca
1994-01-01
We study the effect of the non-Gaussian clustering of galaxies on the statistics of pencil beam surveys. We derive the probability distribution of power spectrum peaks by means of an Edgeworth expansion and find that the higher order moments of the galaxy distribution play a dominant role. The probability of obtaining the 128 Mpc/h periodicity found in pencil beam surveys is raised by more than one order of magnitude, up to 1%. Further data are needed to decide if non-Gaussian clustering alone is sufficient to explain the 128 Mpc/h periodicity, or if extra large-scale power is necessary.
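The Edgeworth expansion referred to corrects the Gaussian density with higher-order moments via Hermite polynomials, in the standard form

\[ f(\delta) \simeq \phi(\delta)\left[1 + \frac{S_3}{6}H_3(\delta) + \frac{S_4}{24}H_4(\delta) + \cdots\right], \]

where φ is the Gaussian density and S₃, S₄ are the skewness and kurtosis of the galaxy distribution.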
Low Energy Dissipation Nano Device Research
NASA Astrophysics Data System (ADS)
Yu, Jenny
2015-03-01
Research on energy dissipation has developed rapidly in the energy-efficiency area. A nano-material power FET operated as an RF power amplifier offers ballistic transport, limited noise, and minimized power dissipation. The goal is green energy savings by developing graphene and carbon nanotube microwave and high-performance devices. Higher-performing RF amplifiers can have impact on a broad field, for example communication equipment (such as mobile phones and RADAR); higher power density and lower power dissipation will improve spectral efficiency, which translates into higher system-level bandwidth and capacity for communications equipment. Thus, fundamental studies of the power-handling capabilities of new RF (nano)technologies can have broad, sweeping impact. Because it is critical to maximize the power-handling ability of graphene and carbon nanotube FETs, the initial task focuses on measuring and understanding the mechanism of electrical breakdown. We aim specifically to determine how the breakdown voltage in graphene and nanotubes is related to the source-drain spacing, electrode material and thickness, and substrate, and thus to develop reliable statistics on the breakdown mechanism and probability.
Statistical power as a function of Cronbach alpha of instrument questionnaire items.
Heo, Moonseong; Kim, Namhee; Faith, Myles S
2015-10-14
In countless clinical trials, outcome measurement relies on instrument questionnaire items, which often suffer from measurement error that in turn affects the statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume a fixed true-score variance, as opposed to the usual fixed total variance. That assumption is critical and practically relevant: it implies that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be increasing functions of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research designs or settings, in order to increase statistical power, the development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation structures reflecting more real-world situations would be a valuable future study.
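The qualitative conclusion can be reproduced with a normal-approximation sketch in which classical test theory attenuates the standardized effect by √C(α); this illustrates the direction of the result only, not the paper's closed-form power functions:

import numpy as np
from scipy import stats

def power_two_sample(d_true, alpha_cronbach, n_per_group, sig=0.05):
    """Approximate power of a two-sample mean comparison on a scale
    score with reliability C(alpha)."""
    d_obs = d_true * np.sqrt(alpha_cronbach)     # attenuated effect size
    ncp = d_obs * np.sqrt(n_per_group / 2.0)     # noncentrality parameter
    z_crit = stats.norm.ppf(1 - sig / 2)
    return stats.norm.sf(z_crit - ncp)

for a in (0.5, 0.7, 0.9):
    print(a, round(power_two_sample(0.5, a, 64), 3))  # power rises with C(alpha)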
Relationship Power, Sexual Decision Making, and HIV Risk Among Midlife and Older Women.
Altschuler, Joanne; Rhee, Siyon
2015-01-01
The number of midlife and older women with HIV/AIDS is high and increasing, especially among women of color. This article addresses these demographic realities by reporting on findings about self-esteem, relationship power, and HIV risk from a pilot study of midlife and older women. A purposive sample (N = 110) of ethnically, economically, and educationally diverse women 40 years and older from the Greater Los Angeles Area was surveyed to determine their levels of self-esteem, general relationship power, sexual decision-making power, safer sex behaviors, and HIV knowledge. Women with higher levels of self-esteem exercised greater power in their relationships with their partner. Women with higher levels of general relationship power and self-esteem tend to exercise greater power in sexual decision making, such as having sex and choosing sexual acts. Income and sexual decision-making power were statistically significant in predicting the use of condoms. Implications and recommendations for future HIV/AIDS research and intervention targeting midlife and older women are presented.
Pounds, Stan; Cao, Xueyuan; Cheng, Cheng; Yang, Jun; Campana, Dario; Evans, William E.; Pui, Ching-Hon; Relling, Mary V.
2010-01-01
Powerful methods for integrated analysis of multiple biological data sets are needed to maximize interpretation capacity and acquire meaningful knowledge. We recently developed Projection Onto the Most Interesting Statistical Evidence (PROMISE), a statistical procedure that incorporates prior knowledge about the biological relationships among endpoint variables into an integrated analysis of microarray gene expression data with multiple biological and clinical endpoints. Here, PROMISE is adapted to the integrated analysis of pharmacologic, clinical, and genome-wide genotype data, incorporating knowledge about the biological relationships among pharmacologic and clinical response data. An efficient permutation-testing algorithm is introduced so that statistical calculations are computationally feasible in this higher-dimension setting. The new method is applied to a pediatric leukemia data set. The results clearly indicate that PROMISE is a powerful statistical tool for identifying genomic features that exhibit a biologically meaningful pattern of association with multiple endpoint variables.
Wong, Wing-Cheong; Ng, Hong-Kiat; Tantoso, Erwin; Soong, Richie; Eisenhaber, Frank
2018-02-12
Though earlier works on modelling transcript abundance from vertebrates to lower eukaryotes have specifically singled out Zipf's law, the observed distributions often deviate from a single power-law slope. In hindsight, while power laws of critical phenomena are derived asymptotically under the condition of infinite observations, real-world observations are finite, where finite-size effects set in to force a power-law distribution into an exponential decay that manifests as a curvature (i.e., varying exponent values) in a log-log plot. If transcript abundance is truly power-law distributed, the varying exponent signifies changing mathematical moments (e.g., mean, variance) and creates heteroskedasticity, which compromises statistical rigor in analysis. The impact of this deviation from the asymptotic power law on sequencing count data has never truly been examined and quantified. The anecdotal description of transcript abundance as almost Zipf's-law-like distributed can be conceptualized as the imperfect mathematical rendition of the Pareto power-law distribution when subjected to finite-size effects in the real world; this holds regardless of advances in sequencing technology, since sampling is finite in practice. Our conceptualization agrees well with our empirical analysis of two modern-day NGS (next-generation sequencing) datasets: an in-house generated dilution miRNA study of two gastric cancer cell lines (NUGC3 and AGS) and publicly available spike-in miRNA data. Firstly, the finite-size effects cause the deviations of sequencing count data from Zipf's law and issues of reproducibility in sequencing experiments. Secondly, they manifest as heteroskedasticity among experimental replicates, bringing about statistical woes. Surprisingly, a straightforward power-law correction that restores the distribution distortion to a single exponent value can dramatically reduce data heteroskedasticity, invoking an instant increase in signal-to-noise ratio by 50% and in statistical/detection sensitivity by as much as 30%, regardless of the downstream mapping and normalization methods. Most importantly, the power-law correction improves concordance in significant calls among different normalization methods of a data series by 22% on average. When presented with a higher sequencing depth (4-fold difference), the improvement in concordance is asymmetrical (32% for the higher sequencing depth instance versus 13% for the lower instance), demonstrating that the simple power-law correction can increase significant detection at higher sequencing depths. Finally, the correction dramatically enhances the statistical conclusions and elucidates the metastasis potential of the NUGC3 cell line against AGS in our dilution analysis. The finite-size effects due to undersampling generally plague transcript count data with reproducibility issues but can be minimized through a simple power-law correction of the count distribution. This distribution correction has direct implications for the biological interpretation of the study and the rigor of the scientific findings. This article was reviewed by Oliviero Carugo, Thomas Dandekar and Sandor Pongor.
NASA Astrophysics Data System (ADS)
Chung, Moo K.; Kim, Seung-Goo; Schaefer, Stacey M.; van Reekum, Carien M.; Peschke-Schmitz, Lara; Sutterer, Matthew J.; Davidson, Richard J.
2014-03-01
The sparse regression framework has been widely used in medical image processing and analysis. However, it has been rarely used in anatomical studies. We present a sparse shape modeling framework using the Laplace-Beltrami (LB) eigenfunctions of the underlying shape and show its improvement of statistical power. Traditionally, the LB-eigenfunctions are used as a basis for intrinsically representing surface shapes as a form of Fourier descriptors. To reduce high frequency noise, only the first few terms are used in the expansion and higher frequency terms are simply thrown away. However, some lower frequency terms may not necessarily contribute significantly in reconstructing the surfaces. Motivated by this idea, we present a LB-based method to filter out only the significant eigenfunctions by imposing a sparse penalty. For dense anatomical data such as deformation fields on a surface mesh, the sparse regression behaves like a smoothing process, which will reduce the error of incorrectly detecting false negatives. Hence the statistical power improves. The sparse shape model is then applied in investigating the influence of age on amygdala and hippocampus shapes in the normal population. The advantage of the LB sparse framework is demonstrated by showing the increased statistical power.
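The selection step can be illustrated with any orthogonal basis plus an L1 penalty; the sketch below substitutes a cosine basis for true LB eigenfunctions (which require a mesh eigensolver) and uses scikit-learn's Lasso:

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_vertices, n_basis = 500, 60
t = np.linspace(0, 1, n_vertices)
# stand-in basis for LB eigenfunctions on a shape
basis = np.column_stack([np.cos(np.pi * k * t) for k in range(n_basis)])
signal = 2.0 * basis[:, 2] - 1.0 * basis[:, 7]       # two active components
y = signal + rng.normal(0, 0.5, n_vertices)          # noisy surface measure

model = Lasso(alpha=0.01).fit(basis, y)
kept = np.flatnonzero(model.coef_)                   # sparse set of eigenfunctions
print("selected basis indices:", kept)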
Pathway analysis with next-generation sequencing data.
Zhao, Jinying; Zhu, Yun; Boerwinkle, Eric; Xiong, Momiao
2015-04-01
Although pathway analysis methods have been developed and successfully applied to association studies of common variants, the statistical methods for pathway-based association analysis of rare variants have not been well developed. Many investigators observed highly inflated false-positive rates and low power in pathway-based tests of association of rare variants. The inflated false-positive rates and low true-positive rates of the current methods are mainly due to their lack of ability to account for gametic phase disequilibrium. To overcome these serious limitations, we develop a novel statistic that is based on the smoothed functional principal component analysis (SFPCA) for pathway association tests with next-generation sequencing data. The developed statistic has the ability to capture position-level variant information and account for gametic phase disequilibrium. By intensive simulations, we demonstrate that the SFPCA-based statistic for testing pathway association with either rare or common or both rare and common variants has the correct type 1 error rates. Also the power of the SFPCA-based statistic and 22 additional existing statistics are evaluated. We found that the SFPCA-based statistic has a much higher power than other existing statistics in all the scenarios considered. To further evaluate its performance, the SFPCA-based statistic is applied to pathway analysis of exome sequencing data in the early-onset myocardial infarction (EOMI) project. We identify three pathways significantly associated with EOMI after the Bonferroni correction. In addition, our preliminary results show that the SFPCA-based statistic has much smaller P-values to identify pathway association than other existing methods.
Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut
2015-01-01
The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid seventies. Originally, HC statistics were used in connection with goodness of fit (GOF) tests but they recently gained some attention in the context of testing the global null hypothesis in high dimensional data. The continuing interest for HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails, hence it is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency in the asymptotic and finite behavior of the HC statistic prompts us to provide a new HC test that has better finite properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zipkin, Elise F.; Kinlan, Brian P.; Sussman, Allison; Rypkema, Diana; Wimer, Mark; O'Connell, Allan F.
2015-01-01
Estimating patterns of habitat use is challenging for marine avian species because seabirds tend to aggregate in large groups and it can be difficult to locate both individuals and groups in vast marine environments. We developed an approach to estimate the statistical power of discrete survey events to identify species-specific hotspots and coldspots of long-term seabird abundance in marine environments. We illustrate our approach using historical seabird data from survey transects in the U.S. Atlantic Ocean Outer Continental Shelf (OCS), an area that has been divided into “lease blocks” for proposed offshore wind energy development. For our power analysis, we examined whether discrete lease blocks within the region could be defined as hotspots (3 × mean abundance in the OCS) or coldspots (1/3 ×) for individual species within a given season. For each of 74 species/season combinations, we determined which of eight candidate statistical distributions (ranging in their degree of skewedness) best fit the count data. We then used the selected distribution and estimates of regional prevalence to calculate and map statistical power to detect hotspots and coldspots, and estimate the p-value from Monte Carlo significance tests that specific lease blocks are in fact hotspots or coldspots relative to regional average abundance. The power to detect species-specific hotspots was higher than that of coldspots for most species because species-specific prevalence was relatively low (mean: 0.111; SD: 0.110). The number of surveys required for adequate power (> 0.6) was large for most species (tens to hundreds) using this hotspot definition. Regulators may need to accept higher proportional effect sizes, combine species into groups, and/or broaden the spatial scale by combining lease blocks in order to determine optimal placement of wind farms. Our power analysis approach provides a general framework for both retrospective analyses and future avian survey design and is applicable to a broad range of research and conservation problems.
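A stripped-down Monte Carlo version of such a hotspot power calculation, assuming negative binomial counts (one of the skewed candidate distributions) and purely illustrative parameter values:

import numpy as np

def hotspot_power(mean_abund, n_surveys, factor=3.0, disp=0.5,
                  n_sims=2000, sig=0.05, seed=0):
    """Power to flag a lease block as a hotspot (factor x regional mean)
    from repeated count surveys, via a Monte Carlo significance test."""
    rng = np.random.default_rng(seed)
    p_null = disp / (disp + mean_abund)              # NB parameterization
    p_hot = disp / (disp + factor * mean_abund)
    # null distribution of the mean count over n_surveys
    null_means = rng.negative_binomial(disp, p_null,
                                       (n_sims, n_surveys)).mean(axis=1)
    crit = np.quantile(null_means, 1 - sig)
    hot_means = rng.negative_binomial(disp, p_hot,
                                      (n_sims, n_surveys)).mean(axis=1)
    return (hot_means > crit).mean()

print(hotspot_power(mean_abund=0.5, n_surveys=50))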
Advanced statistical energy analysis
NASA Astrophysics Data System (ADS)
Heron, K. H.
1994-09-01
A high-frequency theory, advanced statistical energy analysis (ASEA), is developed which takes account of the mechanism of tunnelling; it uses a ray theory approach to track the power flowing around a plate or a beam network and then uses statistical energy analysis (SEA) to take care of any residual power. ASEA divides the energy of each sub-system into energy that is freely available for transfer to other sub-systems and energy that is fixed within the sub-system. ASEA can be interpreted as a series of mathematical models, the first of which is identical to standard SEA; subsequent higher-order models converge on an accurate prediction. Using a structural assembly of six rods as an example, ASEA is shown to converge onto the exact results while SEA is shown to overpredict by up to 60 dB.
Relationship Power and Sexual Violence Among HIV-Positive Women in Rural Uganda
Tsai, Alexander C.; Clark, Gina M.; Boum, Yap; Hatcher, Abigail M.; Kawuma, Annet; Hunt, Peter W.; Martin, Jeffrey N.; Bangsberg, David R.; Weiser, Sheri D.
2016-01-01
Gender-based power imbalances place women at significant risk for sexual violence, however, little research has examined this association among women living with HIV/AIDS. We performed a cross-sectional analysis of relationship power and sexual violence among HIV-positive women on anti-retroviral therapy in rural Uganda. Relationship power was measured using the Sexual Relationship Power Scale (SRPS), a validated measure consisting of two subscales: relationship control (RC) and decision-making dominance. We used multivariable logistic regression to test for associations between the SRPS and two dependent variables: recent forced sex and transactional sex. Higher relationship power (full SRPS) was associated with reduced odds of forced sex (AOR = 0.24; 95 % CI 0.07–0.80; p = 0.020). The association between higher relationship power and transactional sex was strong and in the expected direction, but not statistically significant (AOR = 0.47; 95 % CI 0.18–1.22; p = 0.119). Higher RC was associated with reduced odds of both forced sex (AOR = 0.18; 95 % CI 0.06–0.59; p < 0.01) and transactional sex (AOR = 0.38; 95 % CI 0.15–0.99; p = 0.048). Violence prevention interventions with HIV-positive women should consider approaches that increase women’s power in their relationships. PMID:27052844
Empathy, Sense of Power, and Personality: Do They Change During Pediatric Residency?
Greenberg, Larrie; Agrawal, Dewesh; Toto, Regina; Blatt, Benjamin
2015-08-01
Empathy is a critical competency in medicine. Prior studies demonstrate a longitudinal decrease in empathy during residency; however, they have not included pediatric residents. The relations among the expression of empathy, sense of power (ability to influence other's behavior), and personality traits in residents also have not been addressed. Lastly, there are no data on how residents compare with the general nonmedical population in their expression of empathy. The purposes of our study were to assess whether empathy, sense of power, and personality type were statistically correlated; if resident empathy declines over time; and how resident empathy compares with that of nonmedical peers. In 2010, a cohort of individuals entering pediatric residency were given three validated survey instruments at the beginning of their first and third years of training to explore longitudinal changes in empathy, sense of power, and major personality traits. We found no decrease in resident empathy in 2 years of pediatric training, no changes in their sense of power, and no statistically significant correlation between empathetic tendencies and sense of power. When compared with the general nonmedical population, pediatric residents rated themselves higher in empathy. As expected, the two components of empathy (empathic concern and perspective taking) were moderately correlated. Of the major personality traits, only agreeableness showed significant correlation with empathy. Pediatric resident empathy did not decrease longitudinally, unlike studies in other residents. There was no inverse relation between self-perceptions of sense of power and empathy as is present in the business literature. Finally, pediatric resident empathy was significantly higher when compared with a general nonmedical population.
NASA Astrophysics Data System (ADS)
Hinterreiter, J.; Veronig, A. M.; Thalmann, J. K.; Tschernitz, J.; Pötzi, W.
2018-03-01
A statistical study of the chromospheric ribbon evolution in Hα two-ribbon flares was performed. The data set consists of 50 confined (62%) and eruptive (38%) flares that occurred from June 2000 to June 2015. The flares were selected homogeneously over the Hα and Geostationary Operational Environmental Satellite (GOES) classes, with an emphasis on including powerful confined flares and weak eruptive flares. Hα filtergrams from the Kanzelhöhe Observatory in combination with Michelson Doppler Imager (MDI) and Helioseismic and Magnetic Imager (HMI) magnetograms were used to derive the ribbon separation, the ribbon-separation velocity, the magnetic-field strength, and the reconnection electric field. We find that eruptive flares reveal statistically larger ribbon separation and higher ribbon-separation velocities than confined flares. In addition, the ribbon separation of eruptive flares correlates with the GOES SXR flux, whereas no clear dependence was found for confined flares. The maximum ribbon-separation velocity is not correlated with the GOES flux, but eruptive flares reveal on average a higher ribbon-separation velocity (by ≈ 10 km s-1). The local reconnection electric field of confined (cc=0.50 ±0.02) and eruptive (cc=0.77 ±0.03) flares correlates with the GOES flux, indicating that more powerful flares involve stronger reconnection electric fields. In addition, eruptive flares with higher electric-field strengths tend to be accompanied by faster coronal mass ejections.
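The local reconnection electric field used in studies of this kind is conventionally estimated as

\[ E_{\mathrm{rec}} = v_r B_n, \]

the product of the ribbon-separation velocity v_r and the normal component of the photospheric magnetic field B_n swept by the ribbon (the standard construction; the paper states its exact estimator).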
Higher-order cumulants and spectral kurtosis for early detection of subterranean termites
NASA Astrophysics Data System (ADS)
de la Rosa, Juan José González; Moreno Muñoz, Antonio
2008-02-01
This paper deals with termite detection in unfavorable SNR scenarios via signal processing using higher-order statistics. The results could be extrapolated to all impulse-like insect emissions; the situation involves non-destructive termite detection. Fourth-order cumulants in time and frequency domains enhance the detection and complete the characterization of termite emissions, which are non-Gaussian in essence. Sliding higher-order cumulants offer distinctive time instances even for low-amplitude impulses, complementing the sliding variance, which only reveals power excesses in the signal. The spectral kurtosis reveals non-Gaussian characteristics (the peakedness of the probability density function) associated with these non-stationary measurements, especially in the near-ultrasound frequency band. Contrasted estimators have been used to compute the higher-order statistics. These novel findings are illustrated via graphical examples.
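A minimal sketch of a sliding fourth-order cumulant detector (zero-mean case, c₄ = m₄ − 3m₂²; window parameters are illustrative):

import numpy as np

def sliding_fourth_cumulant(x, win=256, hop=64):
    """Sliding fourth-order cumulant to flag impulse-like,
    non-Gaussian emissions buried in Gaussian noise."""
    out = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        seg = seg - np.mean(seg)
        m2, m4 = np.mean(seg**2), np.mean(seg**4)
        out.append(m4 - 3 * m2**2)      # vanishes for Gaussian noise
    return np.asarray(out)

# example: an impulse burst lights up in c4 but barely moves the variance
rng = np.random.default_rng(1)
x = rng.normal(0, 1, 8192)
x[4000:4010] += 6.0                     # impulse-like emission
print(sliding_fourth_cumulant(x).max())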
Effects of Muslims praying (Salat) on EEG gamma activity.
Doufesh, Hazem; Ibrahim, Fatimah; Safari, Mohammad
2016-08-01
This study investigates the difference in mean gamma EEG power between actual and mimic Salat practices in twenty healthy Muslim subjects. In the actual Salat practice, the participants were asked to recite and perform the physical steps in all four stages of Salat, whereas in the mimic Salat practice, they were instructed to perform only the physical steps without recitation. The gamma power during actual Salat was statistically higher than during mimic Salat in the frontal and parietal regions in all stages. In the actual Salat practice, the left hemisphere exhibited significantly higher mean gamma power in all cerebral regions and all stages, except the central-parietal region in the sitting position and the frontal area in the bowing position. Increased gamma power during Salat, possibly related to an increase in cognitive and attentional processing, supports the concept of Salat as a focused-attention meditation.
Planck 2015 results. XXII. A map of the thermal Sunyaev-Zeldovich effect
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Arnaud, M.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartlett, J. G.; Bartolo, N.; Battaner, E.; Battye, R.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Challinor, A.; Chiang, H. C.; Christensen, P. R.; Churazov, E.; Clements, D. L.; Colombo, L. P. L.; Combet, C.; Comis, B.; Coulais, A.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Désert, F.-X.; Dickinson, C.; Diego, J. M.; Dolag, K.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Fraisse, A. A.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Génova-Santos, R. T.; Giard, M.; González-Nuevo, J.; Górski, K. M.; Gregorio, A.; Gruppuso, A.; Gudmundsson, J. E.; Hansen, F. K.; Harrison, D. L.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Holmes, W. A.; Hornstrup, A.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jones, W. C.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kneissl, R.; Knoche, J.; Kunz, M.; Kurki-Suonio, H.; Lacasa, F.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Macías-Pérez, J. F.; Maffei, B.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; Melchiorri, A.; Melin, J.-B.; Migliaccio, M.; Miville-Deschênes, M.-A.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Noviello, F.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Pasian, F.; Patanchon, G.; Perdereau, O.; Perotto, L.; Pettorino, V.; Piacentini, F.; Piat, M.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Pratt, G. W.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Reinecke, M.; Remazeilles, M.; Renault, C.; Renzi, A.; Ristorcelli, I.; Rocha, G.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Sauvé, A.; Savelainen, M.; Savini, G.; Scott, D.; Spencer, L. D.; Stolyarov, V.; Stompor, R.; Sunyaev, R.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tramonte, D.; Tristram, M.; Tucci, M.; Tuovinen, J.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Wehus, I. K.; Yvon, D.; Zacchei, A.; Zonca, A.
2016-09-01
We have constructed all-sky Compton parameter maps (y-maps) of the thermal Sunyaev-Zeldovich (tSZ) effect by applying specifically tailored component separation algorithms to the 30 to 857 GHz frequency channel maps from the Planck satellite. These reconstructed y-maps are delivered as part of the Planck 2015 release. The y-maps are characterized in terms of noise properties and residual foreground contamination, mainly thermal dust emission at large angular scales, and cosmic infrared background and extragalactic point sources at small angular scales. Specific masks are defined to minimize foreground residuals and systematics. Using these masks, we compute the y-map angular power spectrum and higher order statistics. From these we conclude that the y-map is dominated by tSZ signal in the multipole range 20 < ℓ < 600. We compare the measured tSZ power spectrum and higher order statistics to various physically motivated models and discuss the implications of our results in terms of cluster physics and cosmology.
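As an illustration only (not the Planck pipeline), a minimal sketch of a masked angular power spectrum estimate with healpy; the file names and the simple fsky debiasing are assumptions.

```python
# Sketch: angular power spectrum of a masked y-map with healpy.
# "ymap.fits" and "mask.fits" are hypothetical HEALPix files; the fsky
# correction below is the crudest possible debiasing, not the full
# mask deconvolution used in the Planck analysis.
import healpy as hp
import numpy as np

ymap = hp.read_map("ymap.fits")
mask = hp.read_map("mask.fits")     # 1 = keep, 0 = reject

fsky = mask.mean()                  # fraction of sky retained
cl = hp.anafast(ymap * mask, lmax=1000) / fsky

ell = np.arange(cl.size)
dl = ell * (ell + 1) * cl / (2 * np.pi)   # conventional D_ell scaling
print(dl[20:600:100])               # tSZ-dominated range 20 < ell < 600
```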
Fully moderated T-statistic for small sample size gene expression arrays.
Yu, Lianbo; Gulati, Parul; Fernandez, Soledad; Pennell, Michael; Kirschner, Lawrence; Jarjoura, David
2011-09-15
Gene expression microarray experiments with few replications lead to great variability in estimates of gene variances. Several Bayesian methods have been developed to reduce this variability and to increase power. Thus far, moderated t methods assumed a constant coefficient of variation (CV) for the gene variances. We provide evidence against this assumption, and extend the method by allowing the CV to vary with gene expression. Our CV varying method, which we refer to as the fully moderated t-statistic, was compared to three other methods (ordinary t, and two moderated t predecessors). A simulation study and a familiar spike-in data set were used to assess the performance of the testing methods. The results showed that our CV varying method had higher power than the other three methods, identified a greater number of true positives in spike-in data, fit simulated data under varying assumptions very well, and in a real data set better identified higher expressing genes that were consistent with functional pathways associated with the experiments.
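For orientation, a minimal sketch of the variance-shrinkage idea that moderated t-statistics build on; the prior degrees of freedom d0 and prior variance s02 are fixed placeholders here, whereas moderated-t methods estimate the prior from the data and the paper's extension further lets it vary with expression level.

```python
# Sketch of a variance-shrinkage ("moderated") t-statistic for a two-group
# gene expression comparison. d0 and s02 are illustrative constants, not
# the authors' estimated, expression-dependent prior.
import numpy as np

def moderated_t(x, y, d0=4.0, s02=0.05):
    """x, y: (genes x replicates) arrays for the two groups."""
    nx, ny = x.shape[1], y.shape[1]
    df = nx + ny - 2
    sp2 = ((nx - 1) * x.var(axis=1, ddof=1) +
           (ny - 1) * y.var(axis=1, ddof=1)) / df   # pooled per-gene variance
    s2_post = (d0 * s02 + df * sp2) / (d0 + df)     # shrink toward the prior
    se = np.sqrt(s2_post * (1.0 / nx + 1.0 / ny))
    return (x.mean(axis=1) - y.mean(axis=1)) / se   # moderated t, d0 + df df

rng = np.random.default_rng(0)
t = moderated_t(rng.normal(size=(1000, 3)), rng.normal(size=(1000, 3)))
```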
Wave energy resource of Brazil: An analysis from 35 years of ERA-Interim reanalysis data.
Espindola, Rafael Luz; Araújo, Alex Maurício
2017-01-01
This paper presents a characterization of the wave power resource and an analysis of the wave power output for three different wave energy converters (WECs): AquaBuoy, Pelamis, and Wave Dragon, over the Brazilian offshore area. To do so, a 35-year reanalysis database from the ERA-Interim project was used. Annual and seasonal statistical analyses of significant height and energy period were performed, and the directional variability of the incident waves was evaluated. The wave power resource was characterized in terms of the statistical parameters of mean, maximum, 95th percentile, and standard deviation, and in terms of the temporal variability coefficients COV, SV, and MV. From these analyses, the total annual wave power resource available over the Brazilian offshore area was estimated at 89.97 GW, with the largest mean wave power of 20.63 kW/m in the southernmost part of the study area. The analysis of the three WECs was based on the annual wave energy output and on the capacity factor. The highest capacity factor was 21.85%, for the Pelamis device at the southern region of the study area. PMID:28817731
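For context, kW/m resource figures like those above come from the standard deep-water wave-energy-flux relation P = ρ g² Hs² Te / (64π); a worked sketch with illustrative Hs/Te values (not ERA-Interim data) follows.

```python
# Sketch: deep-water wave energy flux per metre of crest,
#   P = rho * g^2 * Hs^2 * Te / (64*pi),
# the standard estimator behind figures such as the 20.63 kW/m quoted for
# southern Brazil. Hs/Te values below are illustrative, not from the dataset.
import numpy as np

RHO, G = 1025.0, 9.81           # seawater density (kg/m^3), gravity (m/s^2)

def wave_power_kw_per_m(hs, te):
    """hs: significant wave height (m); te: energy period (s)."""
    return RHO * G**2 * np.asarray(hs)**2 * np.asarray(te) / (64 * np.pi) / 1e3

hs = np.array([1.5, 2.0, 2.5])   # e.g., one value per season
te = np.array([7.0, 8.0, 9.0])
p = wave_power_kw_per_m(hs, te)
print(p.mean(), p.std() / p.mean())   # mean resource and a COV-style ratio
```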
Zhu, Yun; Fan, Ruzong; Xiong, Momiao
2017-01-01
Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information for achieving a deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple phenotype association analysis paradigm lacks breadth (the number of phenotypes and genetic variants jointly analyzed at the same time) and depth (the hierarchical structure of phenotypes and genotypes). A key issue for high dimensional pleiotropic analysis is to effectively extract informative internal representations and features from high dimensional genotype and phenotype data. To explore the correlation information of genetic variants, effectively reduce data dimensions, and overcome critical barriers in advancing the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new statistical method, referred to as quadratically regularized functional CCA (QRFCCA), for association analysis. It combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis, and (3) canonical correlation analysis (CCA). Large-scale simulations show that QRFCCA has much higher power than the ten competing statistics while retaining appropriate type I error rates. To further evaluate performance, QRFCCA and the ten other statistics are applied to the whole genome sequencing dataset from the TwinsUK study. We identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits using QRFCCA. The results show that QRFCCA substantially outperforms the ten other statistics. PMID:29040274
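QRFCCA itself is not reproduced here; as a loose sketch of its CCA core, the snippet below pairs a low-rank reduction (standing in for the quadratically regularized factorization and functional smoothing) with classical CCA. All data shapes are hypothetical.

```python
# Very loose sketch of the CCA step at the heart of pleiotropy methods such
# as QRFCCA: reduce genotype and phenotype matrices to low-rank scores, then
# find maximally correlated linear combinations. TruncatedSVD is a stand-in
# for the paper's regularized factorization, not the authors' algorithm.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(500, 200)).astype(float)  # genotypes (0/1/2)
P = rng.normal(size=(500, 46))                         # 46 phenotypes

G_low = TruncatedSVD(n_components=10).fit_transform(G)
P_low = TruncatedSVD(n_components=10).fit_transform(P)

cca = CCA(n_components=2).fit(G_low, P_low)
U, V = cca.transform(G_low, P_low)
r1 = np.corrcoef(U[:, 0], V[:, 0])[0, 1]   # first canonical correlation
```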
Buu, Anne; Williams, L Keoki; Yang, James J
2018-03-01
We propose a new genome-wide association test for mixed binary and continuous phenotypes that uses an efficient numerical method to estimate the empirical distribution of Fisher's combination statistic under the null hypothesis. Our simulation study shows that the proposed method controls the type I error rate and also maintains its power at the level of the permutation method. More importantly, the computational efficiency of the proposed method is much higher than that of the permutation method. The simulation results also indicate that the power of the test increases when the genetic effect increases, the minor allele frequency increases, and the correlation between responses decreases. The statistical analysis of the database of the Study of Addiction: Genetics and Environment demonstrates that the proposed method combining multiple phenotypes can increase the power of identifying markers that may not otherwise be chosen using marginal tests.
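The combination statistic itself is standard; below is a sketch of Fisher's statistic with its independence null, plus the slow permutation null that the authors' numerical method is designed to replace. The `pheno_tests` callback is a hypothetical placeholder.

```python
# Sketch: Fisher's combination statistic T = -2 * sum(log p_i) across
# per-phenotype tests. Under independence T ~ chi2 with 2k df; with
# correlated phenotypes that null is wrong, hence the need for an
# empirical null distribution.
import numpy as np
from scipy import stats

def fisher_T(pvals):
    """Fisher's combination statistic over per-phenotype p-values."""
    return -2.0 * np.sum(np.log(pvals))

T = fisher_T(np.array([0.01, 0.20, 0.50]))
p_indep = stats.chi2.sf(T, df=2 * 3)        # independence reference only

def perm_null(genotype, pheno_tests, n_perm=1000, seed=0):
    """pheno_tests: function mapping a genotype vector to per-phenotype p-values."""
    rng = np.random.default_rng(seed)
    g = genotype.copy()
    out = np.empty(n_perm)
    for b in range(n_perm):
        rng.shuffle(g)                      # break the genotype-phenotype link
        out[b] = fisher_T(pheno_tests(g))
    return out                              # empirical p = mean(out >= T_obs)
```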
Analysis of broadcasting satellite service feeder link power control and polarization
NASA Technical Reports Server (NTRS)
Sullivan, T. M.
1982-01-01
Statistical analyses of carrier to interference power ratios (C/Is) were performed in assessing 17.5 GHz feeder links using (1) fixed power and power control, and (2) orthogonal linear and orthogonal circular polarizations. The analysis methods and attenuation/depolarization data base were based on CCIR findings to the greatest possible extent. Feeder links using adaptive power control were found to neither cause nor suffer significant C/I degradation relative to that for fixed power feeder links having similar or less stringent availability objectives. The C/Is for sharing between orthogonal linearly polarized feeder links were found to be significantly higher than those for circular polarization only in links to nominally colocated satellites from nominally colocated Earth stations in high attenuation environments.
Fraley, R. Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF)—the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
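A sketch of the underlying power computation, assuming a two-group design with 52 per cell (median total N of 104) and a typical effect of d ≈ 0.41; under these assumptions the result lands near the 50% figure reported.

```python
# Sketch: the power calculation behind an N-pact-style ranking. The effect
# size d = 0.41 is an assumed "field-typical" value for illustration.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower().power(effect_size=0.41, nobs1=52, alpha=0.05, ratio=1.0)
print(f"power ~ {power:.2f}")   # roughly 0.5
```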
Jenkins, Martin
2016-01-01
Objective. In clinical trials of RA, it is common to assess effectiveness using end points based upon dichotomized continuous measures of disease activity, which classify patients as responders or non-responders. Although dichotomization generally loses statistical power, there are good clinical reasons to use these end points; for example, to allow for patients receiving rescue therapy to be assigned as non-responders. We adopt a statistical technique called the augmented binary method to make better use of the information provided by these continuous measures and account for how close patients were to being responders. Methods. We adapted the augmented binary method for use in RA clinical trials. We used a previously published randomized controlled trial (Oral SyK Inhibition in Rheumatoid Arthritis-1) to assess its performance in comparison to a standard method treating patients purely as responders or non-responders. The power and error rate were investigated by sampling from this study. Results. The augmented binary method reached similar conclusions to standard analysis methods but was able to estimate the difference in response rates to a higher degree of precision. Results suggested that CI widths for ACR responder end points could be reduced by at least 15%, which could equate to reducing the sample size of a study by 29% to achieve the same statistical power. For other end points, the gain was even higher. Type I error rates were not inflated. Conclusion. The augmented binary method shows considerable promise for RA trials, making more efficient use of patient data whilst still reporting outcomes in terms of recognized response end points. PMID:27338084
On use of the multistage dose-response model for assessing laboratory animal carcinogenicity
Nitcheva, Daniella; Piegorsch, Walter W.; West, R. Webster
2007-01-01
We explore how well a statistical multistage model describes dose-response patterns in laboratory animal carcinogenicity experiments from a large database of quantal response data. The data are collected from the U.S. EPA’s publicly available IRIS data warehouse and examined statistically to determine how often higher-order values in the multistage predictor yield significant improvements in explanatory power over lower-order values. Our results suggest that the addition of a second-order parameter to the model only improves the fit about 20% of the time, while adding even higher-order terms apparently does not contribute to the fit at all, at least with the study designs we captured in the IRIS database. Also included is an examination of statistical tests for assessing significance of higher-order terms in a multistage dose-response model. It is noted that bootstrap testing methodology appears to offer greater stability for performing the hypothesis tests than a more-common, but possibly unstable, “Wald” test. PMID:17490794
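A sketch of the model in question, assuming a two-stage form and illustrative (non-IRIS) data: fit P(d) = 1 − exp(−(b0 + b1 d + b2 d²)) by maximum likelihood and measure the gain from the second-order term.

```python
# Sketch: fitting a two-stage multistage model to quantal tumor-incidence
# data by maximum likelihood, then comparing against the one-stage fit with
# a likelihood-ratio-style statistic (the paper notes a bootstrap version
# is more stable than the Wald test). Doses/counts are illustrative.
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 0.5, 1.0, 2.0])
tumors = np.array([2, 5, 10, 24])
n = np.array([50, 50, 50, 50])

def negloglik(beta):
    rate = np.polyval(beta[::-1], dose)     # b0 + b1*d + b2*d^2, beta >= 0
    p = np.clip(1.0 - np.exp(-rate), 1e-12, 1 - 1e-12)
    return -np.sum(tumors * np.log(p) + (n - tumors) * np.log(1 - p))

bounds = [(0, None)] * 3
fit2 = minimize(negloglik, x0=[0.05, 0.1, 0.1], bounds=bounds)
fit1 = minimize(lambda b: negloglik(np.r_[b, 0.0]), x0=[0.05, 0.1],
                bounds=bounds[:2])
lr = 2 * (fit1.fun - fit2.fun)              # gain from the second-order term
```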
Relationship Power and Sexual Violence Among HIV-Positive Women in Rural Uganda.
Conroy, Amy A; Tsai, Alexander C; Clark, Gina M; Boum, Yap; Hatcher, Abigail M; Kawuma, Annet; Hunt, Peter W; Martin, Jeffrey N; Bangsberg, David R; Weiser, Sheri D
2016-09-01
Gender-based power imbalances place women at significant risk for sexual violence; however, little research has examined this association among women living with HIV/AIDS. We performed a cross-sectional analysis of relationship power and sexual violence among HIV-positive women on anti-retroviral therapy in rural Uganda. Relationship power was measured using the Sexual Relationship Power Scale (SRPS), a validated measure consisting of two subscales: relationship control (RC) and decision-making dominance. We used multivariable logistic regression to test for associations between the SRPS and two dependent variables: recent forced sex and transactional sex. Higher relationship power (full SRPS) was associated with reduced odds of forced sex (AOR = 0.24; 95 % CI 0.07-0.80; p = 0.020). The association between higher relationship power and transactional sex was strong and in the expected direction, but not statistically significant (AOR = 0.47; 95 % CI 0.18-1.22; p = 0.119). Higher RC was associated with reduced odds of both forced sex (AOR = 0.18; 95 % CI 0.06-0.59; p < 0.01) and transactional sex (AOR = 0.38; 95 % CI 0.15-0.99; p = 0.048). Violence prevention interventions with HIV-positive women should consider approaches that increase women's power in their relationships.
Prevalence of diseases and statistical power of the Japan Nurses' Health Study.
Fujita, Toshiharu; Hayashi, Kunihiko; Katanoda, Kota; Matsumura, Yasuhiro; Lee, Jung Su; Takagi, Hirofumi; Suzuki, Shosuke; Mizunuma, Hideki; Aso, Takeshi
2007-10-01
The Japan Nurses' Health Study (JNHS) is a long-term, large-scale cohort study investigating the effects of various lifestyle factors and healthcare habits on the health of Japanese women. Based on currently limited statistical data regarding the incidence of disease among Japanese women, our initial sample size was tentatively set at 50,000 during the design phase. The actual number of women who agreed to participate in follow-up surveys was approximately 18,000. Taking into account the actual sample size and new information on disease frequency obtained during the baseline component, we established the prevalence of past diagnoses of target diseases, predicted their incidence, and calculated the statistical power for JNHS follow-up surveys. For all diseases except ovarian cancer, the prevalence of a past diagnosis increased markedly with age, and incidence rates could be predicted based on the degree of increase in prevalence between two adjacent 5-yr age groups. The predicted incidence rate for uterine myoma, hypercholesterolemia, and hypertension was ≥3.0 (per 1,000 women per year), while the rate of thyroid disease, hepatitis, gallstone disease, and benign breast tumor was predicted to be ≥1.0. For these diseases, the statistical power to detect risk factors with a relative risk of 1.5 or more within ten years was 70% or higher.
Evaluating and Reporting Statistical Power in Counseling Research
ERIC Educational Resources Information Center
Balkin, Richard S.; Sheperis, Carl J.
2011-01-01
Despite recommendations from the "Publication Manual of the American Psychological Association" (6th ed.) to include information on statistical power when publishing quantitative results, authors seldom include analysis or discussion of statistical power. The rationale for discussing statistical power is addressed, approaches to using "G*Power" to…
Power through Struggle in Introductory Statistics
ERIC Educational Resources Information Center
Autin, Melanie; Bateiha, Summer; Marchionda, Hope
2013-01-01
Traditional classroom instruction consists of teacher-centered learning in which the instructor presents course material through lectures. A recent trend in higher education is the implementation of student-centered learning in which students take a more active role in the learning process. The purpose of this article is to describe the discomfort…
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.
Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P
2017-08-23
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered, some very seriously so, but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al. PMID:28706080
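A sketch of the mixture-modeling step, with simulated placeholder data in place of the Button et al. power estimates; a model selection criterion (BIC here) chooses the number of subcomponents rather than summarizing the distribution with a single median.

```python
# Sketch: fit Gaussian mixtures to per-study power estimates and let BIC
# pick the number of subcomponents. The data are simulated stand-ins for
# the 730 study-level power estimates analyzed in the paper.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)
power = np.concatenate([rng.normal(0.1, 0.04, 400),   # very low-powered studies
                        rng.normal(0.5, 0.10, 200),
                        rng.normal(0.9, 0.05, 130)]).clip(0.01, 1).reshape(-1, 1)

fits = [GaussianMixture(n_components=k, random_state=0).fit(power)
        for k in range(1, 5)]
best = min(fits, key=lambda m: m.bic(power))
print(best.n_components, best.means_.ravel())
```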
ERIC Educational Resources Information Center
Sinharay, Sandip
2017-01-01
Karabatsos compared the power of 36 person-fit statistics using receiver operating characteristic curves and found the "H^T" statistic to be the most powerful in identifying aberrant examinees. He found three statistics, "C", "MCI", and "U3", to be the next most powerful. These four statistics,…
Pc-5 wave power in the plasmasphere and trough: CRRES observations
NASA Astrophysics Data System (ADS)
Hartinger, M.; Moldwin, M.; Angelopoulos, V.; Takahashi, K.; Singer, H. J.; Anderson, R. R.
2009-12-01
The CRRES (Combined Release and Radiation Effects Satellite) mission provides an opportunity to study the distribution of MHD wave power in the inner magnetosphere both inside the high-density plasmasphere and in the low-density trough. We present a statistical survey of Pc-5 wave power using CRRES magnetometer and plasma wave data separated into plasmasphere and trough intervals. Using a database of plasmapause crossings, we examined differences in power spectral density between the plasmasphere and trough regions. We found significant differences between the plasmasphere and trough in the radial profiles of Pc-5 wave power. On average, wave power was higher in the trough, but the difference in power depended on magnetic local time. Our study shows that determining the plasmapause location is important for understanding and modeling the MHD wave environment in the Pc-5 frequency band.
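For illustration, a Welch-periodogram estimate of power integrated over the Pc-5 band (roughly 1.7 to 6.7 mHz); the sampling rate and synthetic signal are assumptions, not CRRES specifics.

```python
# Sketch: Pc-5 band power from a magnetometer time series via a Welch
# power-spectral-density estimate, the kind of quantity binned into
# plasmasphere vs. trough intervals in the survey.
import numpy as np
from scipy.signal import welch

fs = 1.0                                  # 1 Hz sampling (assumed)
t = np.arange(0, 7200, 1 / fs)            # two hours of data
b = (np.sin(2 * np.pi * 3e-3 * t)         # 3 mHz ULF wave
     + 0.5 * np.random.default_rng(3).normal(size=t.size))

f, psd = welch(b, fs=fs, nperseg=2048)
pc5 = (f >= 1.7e-3) & (f <= 6.7e-3)
band_power = psd[pc5].sum() * (f[1] - f[0])   # integrated Pc-5 power
```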
TATES: Efficient Multivariate Genotype-Phenotype Analysis for Genome-Wide Association Studies
van der Sluis, Sophie; Posthuma, Danielle; Dolan, Conor V.
2013-01-01
To date, the genome-wide association study (GWAS) is the primary tool to identify genetic variants that cause phenotypic variation. As GWAS analyses are generally univariate in nature, multivariate phenotypic information is usually reduced to a single composite score. This practice often results in loss of statistical power to detect causal variants. Multivariate genotype–phenotype methods do exist but attain maximal power only in special circumstances. Here, we present a new multivariate method that we refer to as TATES (Trait-based Association Test that uses Extended Simes procedure), inspired by the GATES procedure proposed by Li et al (2011). For each component of a multivariate trait, TATES combines p-values obtained in standard univariate GWAS to acquire one trait-based p-value, while correcting for correlations between components. Extensive simulations, probing a wide variety of genotype–phenotype models, show that TATES's false positive rate is correct, and that TATES's statistical power to detect causal variants explaining 0.5% of the variance can be 2.5–9 times higher than the power of univariate tests based on composite scores and 1.5–2 times higher than the power of the standard MANOVA. Unlike other multivariate methods, TATES detects both genetic variants that are common to multiple phenotypes and genetic variants that are specific to a single phenotype, i.e. TATES provides a more complete view of the genetic architecture of complex traits. As the actual causal genotype–phenotype model is usually unknown and probably phenotypically and genetically complex, TATES, available as an open source program, constitutes a powerful new multivariate strategy that allows researchers to identify novel causal variants, while the complexity of traits is no longer a limiting factor. PMID:23359524
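The extended Simes procedure is the key ingredient; a sketch of plain Simes follows, with the caveat that TATES replaces the raw counts m and j by effective numbers of tests derived from the phenotype correlation matrix (that correction is omitted here).

```python
# Sketch: a Simes-type combination of per-phenotype GWAS p-values into one
# trait-based p-value per variant. Plain Simes: p = min_j( m * p_(j) / j ).
import numpy as np

def simes(pvals):
    p = np.sort(np.asarray(pvals))
    m = p.size
    return np.min(m * p / np.arange(1, m + 1))

print(simes([0.002, 0.04, 0.30, 0.75]))   # -> 0.008
```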
NASA Technical Reports Server (NTRS)
Carrier, J.; Land, S.; Buysse, D. J.; Kupfer, D. J.; Monk, T. H.
2001-01-01
The effects of age and gender on sleep EEG power spectral density were assessed in a group of 100 subjects aged 20 to 60 years. We propose a new statistical strategy (mixed-model using fixed-knot regression splines) to analyze quantitative EEG measures. The effect of gender varied according to frequency, but no interactions emerged between age and gender, suggesting that the aging process does not differentially influence men and women. Women had higher power density than men in delta, theta, low alpha, and high spindle frequency range. The effect of age varied according to frequency and across the night. The decrease in power with age was not restricted to slow-wave activity, but also included theta and sigma activity. With increasing age, the attenuation over the night in power density between 1.25 and 8.00 Hz diminished, and the rise in power between 12.25 and 14.00 Hz across the night decreased. Increasing age was associated with higher power in the beta range. These results suggest that increasing age may be related to an attenuation of homeostatic sleep pressure and to an increase in cortical activation during sleep.
Siddiqi, Ariba; Arjunan, Sridhar P; Kumar, Dinesh K
2016-08-01
Age-associated changes in the surface electromyogram (sEMG) of the Tibialis Anterior (TA) muscle can be attributed to neuromuscular alterations that precede strength loss. We used our sEMG model of the Tibialis Anterior to interpret the age-related changes and compared the model output with the experimental sEMG. Eighteen young (20-30 years) and 18 older (60-85 years) participants performed isometric dorsiflexion at 6 different percentage levels of maximum voluntary contraction (MVC), and their sEMG from the TA muscle was recorded. Six different age-related changes in the neuromuscular system were simulated using the sEMG model at the same MVCs as the experiment. The maximal power of the spectrum and the Gaussianity and Linearity test statistics were computed from the simulated and experimental sEMG. A correlation analysis at α=0.05 was performed between the simulated and experimental age-related changes in the sEMG features. The results show that the loss of motor units was distinguished by the Gaussianity and Linearity test statistics, while the maximal power of the PSD distinguished between the muscular factors. The simulated condition of 40% loss of motor units combined with a halving of the number of fast fibers correlated best with the age-related change observed in the experimental sEMG higher-order statistical features. The simulated aging condition found by this study corresponds with the moderate motor unit remodelling and negligible strength loss reported in the literature for cohorts aged 60-70 years.
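A sketch of two of the three features (maximal PSD power and a Gaussianity statistic) on a synthetic signal; the bispectrum-based linearity test is omitted, and the kurtosis test shown is one standard Gaussianity check, not necessarily the authors' exact statistic.

```python
# Sketch: sEMG features of the kind compared in the study, computed from a
# raw signal. Sampling rate and signal are illustrative placeholders.
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosistest

rng = np.random.default_rng(4)
semg = rng.normal(size=4096) * (1 + 0.3 * np.sin(2 * np.pi * np.arange(4096) / 512))

f, psd = welch(semg, fs=1024.0, nperseg=512)   # 1024 Hz sampling assumed
max_power = psd.max()                          # maximal power of the PSD
z, p_gauss = kurtosistest(semg)                # departure from Gaussianity
```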
Output power distributions of terminals in a 3G mobile communication network.
Persson, Tomas; Törnevik, Christer; Larsson, Lars-Eric; Lovén, Jan
2012-05-01
The objective of this study was to examine the distribution of the output power of mobile phones and other terminals connected to a 3G network in Sweden. It is well known that 3G terminals can operate with very low output power, particularly for voice calls. Measurements of terminal output power were conducted in the Swedish TeliaSonera 3G network in November 2008 by recording network statistics. In the analysis, discrimination was made between rural, suburban, urban, and dedicated indoor networks. In addition, information about terminal output power was possible to collect separately for voice and data traffic. Information from six different Radio Network Controllers (RNCs) was collected during at least 1 week. In total, more than 800000 h of voice calls were collected and in addition to that a substantial amount of data traffic. The average terminal output power for 3G voice calls was below 1 mW for any environment including rural, urban, and dedicated indoor networks. This is <1% of the maximum available output power. For data applications the average output power was about 6-8 dB higher than for voice calls. For rural areas the output power was about 2 dB higher, on average, than in urban areas. Copyright © 2011 Wiley Periodicals, Inc.
Dong, Jian-Jun; Li, Qing-Liang; Yin, Hua; Zhong, Cheng; Hao, Jun-Guang; Yang, Pan-Fei; Tian, Yu-Hong; Jia, Shi-Ru
2014-10-15
Sensory evaluation is regarded as a necessary procedure to ensure a reproducible quality of beer. Meanwhile, high-throughput analytical methods provide a powerful tool to analyse various flavour compounds, such as higher alcohols and esters. In this study, the relationship between flavour compounds and sensory evaluation was established by non-linear models such as partial least squares (PLS), genetic algorithm back-propagation neural network (GA-BP), and support vector machine (SVM). It was shown that SVM with a Radial Basis Function (RBF) kernel had better prediction accuracy for both the calibration set (94.3%) and the validation set (96.2%) than the other models. Relatively lower prediction abilities were observed for GA-BP (52.1%) and PLS (31.7%). In addition, the kernel function played an essential role in model training: the prediction accuracy of SVM with a polynomial kernel function was only 32.9%. As a powerful multivariate statistical method, SVM holds great potential for assessing beer quality. Copyright © 2014 Elsevier Ltd. All rights reserved.
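A sketch of an RBF-kernel SVM fit, framed here as regression (SVR) on placeholder flavour features; the paper's accuracy figures suggest its task may be framed as classification, and in practice the hyperparameters would be tuned by cross-validation.

```python
# Sketch: relating flavour-compound concentrations to sensory scores with an
# RBF-kernel support vector machine. Features and data are placeholders.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(60, 8))          # e.g., higher alcohols, esters (mg/L)
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(scale=0.2, size=60)  # sensory score

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, gamma="scale"))
model.fit(X[:45], y[:45])             # calibration set
print(model.score(X[45:], y[45:]))    # validation R^2
```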
Entropy Based Genetic Association Tests and Gene-Gene Interaction Tests
de Andrade, Mariza; Wang, Xin
2011-01-01
In the past few years, several entropy-based tests have been proposed for testing either single SNP association or gene-gene interaction. These tests are mainly based on Shannon entropy and have higher statistical power when compared to standard χ2 tests. In this paper, we extend some of these tests using a more generalized entropy definition, Rényi entropy, of which Shannon entropy is a special case of order 1. The order λ (>0) of Rényi entropy weights the events (genotype/haplotype) according to their probabilities (frequencies). Higher λ places more emphasis on higher probability events, while smaller λ (close to 0) tends to assign weights more equally. Thus, by properly choosing λ, one can potentially increase the power of the tests or the p-value level of significance. We conducted simulation as well as real data analyses to assess the impact of the order λ and the performance of these generalized tests. The results showed that for the dominant model the order-2 test was more powerful, while for the multiplicative model the order-1 and order-2 tests had similar power. The analyses indicate that the choice of λ depends on the underlying genetic model and that Shannon entropy is not necessarily the most powerful entropy measure for constructing genetic association or interaction tests. PMID:23089811
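The entropy itself is easy to state: H_λ = log(Σ p_i^λ) / (1 − λ), recovering Shannon entropy as λ → 1. A small sketch follows; the case/control test construction built on these entropies is omitted.

```python
# Sketch: Rényi entropy of order lam for a genotype frequency vector.
import numpy as np

def renyi_entropy(p, lam):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(lam, 1.0):                 # Shannon limit as lam -> 1
        return -np.sum(p * np.log(p))
    return np.log(np.sum(p ** lam)) / (1.0 - lam)

freqs = [0.49, 0.42, 0.09]                   # genotype frequencies AA, Aa, aa
print([renyi_entropy(freqs, lam) for lam in (0.5, 1.0, 2.0)])
```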
Testing higher-order Lagrangian perturbation theory against numerical simulation. 1: Pancake models
NASA Technical Reports Server (NTRS)
Buchert, T.; Melott, A. L.; Weiss, A. G.
1993-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of quasi-linear scales. The Lagrangian theory of gravitational instability of an Einstein-de Sitter dust cosmogony, investigated and solved up to the third order, is compared with numerical simulations. In this paper we study the dynamics of pancake models as a first step. In previous work the accuracy of several analytical approximations for the modeling of large-scale structure in the mildly non-linear regime was analyzed in the same way, allowing for direct comparison of the accuracy of various approximations. In particular, the Zel'dovich approximation (hereafter ZA) as a subclass of the first-order Lagrangian perturbation solutions was found to provide an excellent approximation to the density field in the mildly non-linear regime (i.e. up to a linear r.m.s. density contrast of sigma approximately 2). The performance of ZA in hierarchical clustering models can be greatly improved by truncating the initial power spectrum (smoothing the initial data). We here explore whether this approximation can be further improved with higher-order corrections in the displacement mapping from homogeneity. We study a single pancake model (truncated power spectrum with power-index n = -1) using cross-correlation statistics employed in previous work. We find that for all statistical methods used the higher-order corrections improve the results obtained for the first-order solution up to the stage when sigma (linear theory) is approximately 1. While this improvement can be seen on all spatial scales, later stages retain this feature only above a certain scale which increases with time. However, third order is not much of an improvement over second order at any stage. The total breakdown of the perturbation approach is observed at the stage where sigma (linear theory) is approximately 2, which corresponds to the onset of hierarchical clustering. This success is found at a considerably higher non-linearity than is usual for perturbation theory. Whether a truncation of the initial power spectrum in hierarchical models retains this improvement will be analyzed in forthcoming work.
Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
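A sketch of the IPTW weighting step on simulated data; the weighted time-to-event analysis used in the paper's simulations is not reproduced here.

```python
# Sketch: inverse probability of treatment weighting with a propensity
# score. Weights are 1/e(x) for treated and 1/(1-e(x)) for controls.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
X = rng.normal(size=(2000, 4))                        # confounders
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # non-random assignment

ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
w = np.where(treat == 1, 1.0 / ps, 1.0 / (1.0 - ps))  # IPTW weights
print(w.mean())   # near 2.0 when the propensity model is well specified
```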
HOS network-based classification of power quality events via regression algorithms
NASA Astrophysics Data System (ADS)
Palomares Salas, José Carlos; González de la Rosa, Juan José; Sierra Fernández, José María; Pérez, Agustín Agüera
2015-12-01
This work compares seven regression algorithms implemented in artificial neural networks (ANNs), supported by 14 power-quality features based on higher-order statistics. Combining time- and frequency-domain estimators to deal with non-stationary measurement sequences, the final goal of the system is implementation in the future smart grid to guarantee compatibility between all connected equipment. The principal results are based on spectral kurtosis measurements, which easily adapt to the impulsive nature of power quality events. These results verify that the proposed technique is capable of offering interesting results for power quality (PQ) disturbance classification. The best results are obtained using radial basis networks, generalized regression, and multilayer perceptron, mainly due to the non-linear nature of the data.
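A sketch of one spectral-kurtosis-style feature: kurtosis of the STFT power across time frames, per frequency bin, on a synthetic 50 Hz signal with a brief sag. The sampling rate and window length are assumptions, not the paper's settings.

```python
# Sketch: impulsive power-quality disturbances raise the kurtosis of the
# short-time spectrum in the affected bins; feature vectors like this feed
# the ANN regressors compared in the paper.
import numpy as np
from scipy.signal import stft
from scipy.stats import kurtosis

fs = 3200.0                                   # samples/s (assumed)
t = np.arange(0, 1, 1 / fs)
v = np.sin(2 * np.pi * 50 * t)                # 50 Hz mains voltage
v[1600:1680] *= 0.4                           # brief voltage sag

f, _, Z = stft(v, fs=fs, nperseg=256)
sk = kurtosis(np.abs(Z) ** 2, axis=1)         # one kurtosis value per bin
features = np.r_[sk.max(), sk.mean()]         # simple HOS feature vector
```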
Statistical Power in Meta-Analysis
ERIC Educational Resources Information Center
Liu, Jin
2015-01-01
Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
Hinterreiter, J; Veronig, A M; Thalmann, J K; Tschernitz, J; Pötzi, W
2018-01-01
A statistical study of the chromospheric ribbon evolution in Hα two-ribbon flares was performed. The data set consists of 50 confined (62%) and eruptive (38%) flares that occurred from June 2000 to June 2015. The flares were selected homogeneously over the Hα and Geostationary Operational Environmental Satellite (GOES) classes, with an emphasis on including powerful confined flares and weak eruptive flares. Hα filtergrams from the Kanzelhöhe Observatory in combination with Michelson Doppler Imager (MDI) and Helioseismic and Magnetic Imager (HMI) magnetograms were used to derive the ribbon separation, the ribbon-separation velocity, the magnetic-field strength, and the reconnection electric field. We find that eruptive flares reveal statistically larger ribbon separation and higher ribbon-separation velocities than confined flares. In addition, the ribbon separation of eruptive flares correlates with the GOES SXR flux, whereas no clear dependence was found for confined flares. The maximum ribbon-separation velocity is not correlated with the GOES flux, but eruptive flares reveal on average a higher ribbon-separation velocity (by ≈10 km s⁻¹). The local reconnection electric field of both confined and eruptive flares correlates with the GOES flux, indicating that more powerful flares involve stronger reconnection electric fields. In addition, eruptive flares with higher electric-field strengths tend to be accompanied by faster coronal mass ejections. The online version of this article (10.1007/s11207-018-1253-1) contains supplementary material, which is available to authorized users.
KS and AD Statistical Power via Monte Carlo Simulation
2016-12-01
Statistical power is the probability of correctly rejecting the null hypothesis when the... real-world data to test the accuracy of the simulation. Statistical comparison of these metrics can be necessary when making such a determination.
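A sketch of the Monte Carlo power estimation this fragment describes: draw samples from an alternative distribution, test against a fully specified N(0,1) null, and count rejections.

```python
# Sketch: power of the Kolmogorov-Smirnov and Anderson-Darling tests by
# Monte Carlo, under a heavy-tailed (Student t, df=3) alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, reps, alpha = 50, 2000, 0.05
ks_rej = ad_rej = 0
for _ in range(reps):
    x = rng.standard_t(df=3, size=n)
    if stats.kstest(x, "norm").pvalue < alpha:
        ks_rej += 1
    ad = stats.anderson(x, dist="norm")       # compare to 5% critical value
    if ad.statistic > ad.critical_values[2]:
        ad_rej += 1
print(ks_rej / reps, ad_rej / reps)           # AD is usually stronger in the tails
```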
The Role of Margin in Link Design and Optimization
NASA Technical Reports Server (NTRS)
Cheung, K.
2015-01-01
Link analysis is a system engineering process in the design, development, and operation of communication systems and networks. Link models are mathematical abstractions representing the useful signal power and the undesirable noise and attenuation effects (including weather effects if the signal path traverses the atmosphere); they are integrated into the link budget calculation that provides the estimates of signal power and noise power at the receiver. The link margin is then applied, which attempts to counteract the fluctuations of the signal and noise power to ensure reliable data delivery from transmitter to receiver. (The link margin is dictated by the link margin policy or requirements.) A simple link budgeting approach, which assumes link parameters to be deterministic values, typically adopts a rule-of-thumb policy of 3 dB link margin. This policy works for most S- and X-band links due to their insensitivity to weather effects, but for higher frequency links like Ka-band, Ku-band, and optical communication links, it is unclear whether a 3 dB link margin would guarantee link closure. Statistical link analysis, which adopts a 2-sigma or 3-sigma link margin, incorporates link uncertainties in the sigma calculation. (The Deep Space Network (DSN) link margin policies are 2-sigma for downlink and 3-sigma for uplink.) The link reliability can therefore be quantified statistically even for higher frequency links. However, in the current statistical link analysis approach, link reliability is only expressed as the likelihood of exceeding the signal-to-noise ratio (SNR) threshold that corresponds to a given bit-error-rate (BER) or frame-error-rate (FER) requirement. The method does not provide the true BER or FER estimate of the link with margin, or the required SNR that would meet the BER or FER requirement in the statistical sense. In this paper, we perform in-depth analysis on the relationship between BER/FER requirement, operating SNR, and coding performance curve, in the case when the channel coherence time of link fluctuation is comparable to or larger than the time duration of a codeword. We compute the "true" SNR design point that would meet the BER/FER requirement by taking into account the fluctuation of signal power and noise power at the receiver, and the shape of the coding performance curve. This analysis yields a number of valuable insights on the design choices of coding scheme and link margin for the reliable data delivery of a communication system - space and ground. We illustrate the aforementioned analysis using a number of standard NASA error-correcting codes.
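To make the margin discussion concrete, a toy dB-domain budget with a k-sigma margin; every number below is an illustrative placeholder, not a DSN parameter.

```python
# Sketch: a dB-domain link budget where the margin is set as k sigma of the
# combined link uncertainties (k = 2 or 3 in the policies cited), rather
# than a flat 3 dB. All values are illustrative.
import math

eirp_dbw  = 60.0      # transmitter EIRP
path_db   = -210.0    # free-space + atmospheric loss
gt_dbk    = 35.0      # receiver G/T
boltz_dbw = -228.6    # 10*log10(Boltzmann constant)

sigmas_db = [0.5, 0.8, 0.4]                       # per-term uncertainties (dB)
sigma_tot = math.sqrt(sum(s**2 for s in sigmas_db))

cn0_db = eirp_dbw + path_db + gt_dbk - boltz_dbw  # C/N0 in dB-Hz
margin = 3 * sigma_tot                            # 3-sigma policy
print(f"C/N0 = {cn0_db:.1f} dB-Hz, design point = {cn0_db - margin:.1f} dB-Hz")
```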
NASA Astrophysics Data System (ADS)
Chen, Lin; Abbey, Craig K.; Boone, John M.
2013-03-01
Previous research has demonstrated that a parameter extracted from a power function fit to the anatomical noise power spectrum, β, may be predictive of breast mass lesion detectability in x-ray based medical images of the breast. In this investigation, the value of β was compared with a number of other more widely used parameters, in order to determine the relationship between β and these other parameters. This study made use of breast CT data sets, acquired on two breast CT systems developed in our laboratory. A total of 185 breast data sets in 183 women were used, and only the unaffected breast was used (where no lesion was suspected). The anatomical noise power spectrum, computed from two-dimensional regions of interest (ROIs), was fit to a power function (NPS(f) = α f^(−β)), and the exponent parameter (β) was determined using log/log linear regression. Breast density for each of the volume data sets was characterized in previous work. The breast CT data sets analyzed in this study were part of a previous study which evaluated the receiver operating characteristic (ROC) curve performance using simulated spherical lesions and a pre-whitened matched filter computer observer. This ROC information was used to compute the detectability index as well as the sensitivity at 95% specificity. The fractal dimension was computed from the same ROIs which were used for the assessment of β. The value of β was compared to breast density, detectability index, sensitivity, and fractal dimension, and the slope of these relationships was investigated to assess statistical significance from zero slope. A statistically significant non-zero slope was considered to be a positive association in this investigation. All comparisons between β and breast density, detectability index, sensitivity at 95% specificity, and fractal dimension demonstrated statistically significant association with p < 0.001 in all cases. The value of β was also found to be associated with patient age and breast diameter, parameters both related to breast density. In all associations between other parameters, lower values of β were associated with increased breast cancer detection performance. Specifically, lower values of β were associated with lower breast density, higher detectability index, higher sensitivity, and lower fractal dimension values. While causality was not and probably cannot be demonstrated, the strong, statistically significant association between the β metric and the other more widely used parameters suggests that β may be considered as a surrogate measure for breast cancer detection performance. These findings are specific to breast parenchymal patterns and mass lesions only.
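The β extraction is a one-liner once NPS samples exist; a sketch with synthetic values follows (the exponent chosen below is an illustrative placeholder in the range typical of breast anatomy).

```python
# Sketch: extract beta from NPS(f) = alpha * f^(-beta) by log/log linear
# regression, as described for the breast CT ROIs. NPS samples are synthetic.
import numpy as np

rng = np.random.default_rng(8)
f = np.linspace(0.05, 1.0, 60)               # spatial frequency (cycles/mm)
nps = 2.0 * f ** -3.1 * np.exp(rng.normal(scale=0.1, size=f.size))

slope, intercept = np.polyfit(np.log(f), np.log(nps), 1)
beta, alpha = -slope, np.exp(intercept)
print(f"beta ~ {beta:.2f}")
```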
Refining the Use of Linkage Disequilibrium as a Robust Signature of Selective Sweeps
Jacobs, Guy S.; Sluckin, Timothy J.; Kivisild, Toomas
2016-01-01
During a selective sweep, characteristic patterns of linkage disequilibrium can arise in the genomic region surrounding a selected locus. These have been used to infer past selective sweeps. However, the recombination rate is known to vary substantially along the genome for many species. We here investigate the effectiveness of current (Kelly’s ZnS and ωmax) and novel statistics at inferring hard selective sweeps based on linkage disequilibrium distortions under different conditions, including a human-realistic demographic model and recombination rate variation. When the recombination rate is constant, Kelly’s ZnS offers high power, but is outperformed by a novel statistic that we test, which we call Zα. We also find this statistic to be effective at detecting sweeps from standing variation. When recombination rate fluctuations are included, there is a considerable reduction in power for all linkage disequilibrium-based statistics. However, this can largely be reversed by appropriately controlling for expected linkage disequilibrium using a genetic map. To further test these different methods, we perform selection scans on well-characterized HapMap data, finding that all three statistics—ωmax, Kelly’s ZnS, and Zα—are able to replicate signals at regions previously identified as selection candidates based on population differentiation or the site frequency spectrum. While ωmax replicates most candidates when recombination map data are not available, the ZnS and Zα statistics are more successful when recombination rate variation is controlled for. Given both this and their higher power in simulations of selective sweeps, these statistics are preferred when information on local recombination rate variation is available. PMID:27516617
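For reference, Kelly's ZnS is the mean pairwise r² across polymorphic sites in a window; a minimal sketch follows, without the window handling or the recombination-map control the paper recommends.

```python
# Sketch: Kelly's ZnS on a (haplotypes x sites) 0/1 genotype matrix.
import numpy as np

def kelly_zns(geno):
    r = np.corrcoef(geno.T)                  # site-by-site correlation matrix
    iu = np.triu_indices_from(r, k=1)
    return np.mean(r[iu] ** 2)               # mean pairwise r^2

rng = np.random.default_rng(9)
haps = (rng.random((40, 12)) < 0.3).astype(float)
print(kelly_zns(haps))
```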
Raja, R; Nayak, A K; Shukla, A K; Rao, K S; Gautam, Priyanka; Lal, B; Tripathi, R; Shahid, M; Panda, B B; Kumar, A; Bhattacharyya, P; Bardhan, G; Gupta, S; Patra, D K
2015-11-01
Thermal power stations, apart from being a source of energy supply, cause soil pollution, leading to degradation of soil fertility and contamination. Fine particle and trace element emissions from energy production in coal-fired thermal power plants are associated with significant adverse effects on human, animal, and soil health. Contamination of soil with cadmium, nickel, copper, lead, arsenic, chromium, and zinc can be a primary route of human exposure to these potentially toxic elements. The environmental evaluation of the soil surrounding thermal power plants in Odisha may serve as a model study to gain insight into the hazards they are causing. The study investigates the impact of fly ash-fugitive dust (FAFD) deposition from coal-fired thermal power plant emissions on soil properties including trace element concentration, pH, and soil enzymatic activities. Higher FAFD deposition was found in close proximity to the power plants, which led to high pH and greater accumulation of heavy metals. Among the three power plants, higher concentrations of soil organic carbon and nitrogen were observed in the vicinity of NALCO, whereas higher phosphorus content was recorded in the proximity of NTPC. Multivariate statistical analysis of the different variables and their associations indicated that FAFD deposition and soil properties were influenced by the source of emissions and the distance from it. Pollution in soil profiles and high-risk areas were detected and visualized using surface maps based on kriging interpolation. The concentrations of chromium and arsenic were higher in the soil where FAFD deposition was greater. The relatively high concentrations of heavy metals like cadmium, lead, nickel, and arsenic, together with low enzymatic activity in proximity to the emission source, indicated a possible link with anthropogenic emissions.
ERIC Educational Resources Information Center
What Works Clearinghouse, 2013
2013-01-01
This study examined whether attending a Knowledge is Power Program (KIPP) middle school improved students' reading, math, social studies, and science achievement for up to 4 years following enrollment. The study reported that students attending KIPP middle schools scored statistically significantly higher than matched students on all of the state…
Invisible Ink: An Analysis of Meaning Contained in Gender, Race, Performance, and Power Discourses
ERIC Educational Resources Information Center
Griggs, Susan A.
2012-01-01
The number of females in senior level leadership positions in higher education is substantially fewer than males. Yet female students in these same institutions represent over half the population (National Center for Educational Statistics, 2010). The leadership gender gap is a phenomenon that has undergone numerous studies in search of reasons…
Clinical nutrition managers have access to sources of empowerment.
Mislevy, J M; Schiller, M R; Wolf, K N; Finn, S C
2000-09-01
To ascertain perceived access of dietitians to power in the workplace. The conceptual framework was Kanter's theory of organizational power. The Conditions for Work Effectiveness Questionnaire was used to measure perceived access to sources of power: information, support, resources, and opportunities. Demographic data were collected to identify factors that may enhance empowerment. The questionnaire was sent to a random sample of 348 dietitians chosen from members of the Clinical Nutrition Management dietetic practice group of the American Dietetic Association. Blank questionnaires were returned by 99 (28.4%) people not working as clinical nutrition managers, which left 249 in the sample. Descriptive statistics were used to organize and summarize data. One-way analysis of variance and t tests were performed to identify differences in responses based on levels of education, work setting, and information technology skills. Usable questionnaires were received from 178 people (71.5%). On a 5-point scale, scores for access to information (mean ± standard deviation [SD] = 3.8 ± 0.7), opportunity (mean ± SD = 3.6 ± 0.7), support (mean ± SD = 3.2 ± 0.9), and resources (mean ± SD = 3.1 ± 0.8) demonstrated that clinical nutrition managers perceived themselves as having substantial access to sources of empowerment. Those having higher levels of education, working in larger hospitals, having better-developed information technology skills, and using information technology more frequently had statistically significantly higher empowerment scores (P ≤ .05) than contrasting groups in each category. Clinical nutrition managers are empowered and able to assume leadership roles in today's health care settings. Their power may be enhanced by asserting more pressure to gain greater access to sources of power: support, information, resources, and opportunities.
The Effects of Sweet, Bitter, Salty and Sour Stimuli on Alpha Rhythm. A Meg Study.
Kotini, Athanasia; Anninos, Photios; Gemousakakis, Triandafillos; Adamopoulos, Adam
2016-09-01
The possible differences in processing gustatory stimuli in healthy subjects were investigated by magnetoencephalography (MEG). MEG recordings were evaluated for 10 healthy volunteers (3 men within the age range 20-46 years, 7 women within the age range 10-28 years), with four different gustatory stimuli: sweet, bitter, sour, and salty. Fast Fourier transform was performed on MEG epochs recorded for the above conditions, and the effect of each kind of stimulus on alpha rhythm was examined. A significantly higher percentage of alpha power was found irrespective of hemispheric side in all gustatory states, located mainly at the occipital and the left and right parietal lobes. One female volunteer showed no statistically significant difference when comparing normal with salty and sour taste, respectively. Two female volunteers exhibited no statistically significant difference when comparing their normal with their salty taste. One male volunteer showed no statistically significant difference when comparing the normal-bitter and normal-salty states, correspondingly. All the other subjects showed statistically significant changes in alpha power for the 4 gustatory stimuli. The pattern of activation caused by the four stimuli indicated elevated gustatory processing mechanisms. This cortical activation might have applicability in modulation of brain status.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J.
2014-01-01
It is widely believed that sensory systems are optimized for processing stimuli occurring in the natural environment. However, it remains unknown whether this principle applies to the vestibular system, which contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. Here we quantified, for the first time, the statistics of natural vestibular inputs experienced by freely moving human subjects during typical everyday activities. Although previous studies have found that the power spectra of natural signals across sensory modalities decay as a power law (i.e., as 1/f^α), we found that this did not apply to natural vestibular stimuli. Instead, power decreased slowly at lower and more rapidly at higher frequencies for all motion dimensions. We further establish that this unique stimulus structure is the result of active motion as well as passive biomechanical filtering occurring before any neural processing. Notably, the transition frequency (i.e., the frequency at which power starts to decrease rapidly) was lower when subjects passively experienced sensory stimulation than when they actively controlled stimulation through their own movement. In contrast to signals measured at the head, the spectral content of externally generated (i.e., passive) environmental motion did follow a power law. Specifically, transformations caused by both motor control and biomechanics shape the statistics of natural vestibular stimuli before neural processing. We suggest that the unique structure of natural vestibular stimuli will have important consequences on the neural coding strategies used by this essential sensory system to represent self-motion in everyday life. PMID:24920638
Monitoring the impact of Bt maize on butterflies in the field: estimation of required sample sizes.
Lang, Andreas
2004-01-01
The monitoring of genetically modified organisms (GMOs) after deliberate release is important in order to assess and evaluate possible environmental effects. Concerns have been raised that the transgenic crop, Bt maize, may affect butterflies occurring in field margins. Therefore, a monitoring of butterflies was suggested accompanying the commercial cultivation of Bt maize. In this study, baseline data on the butterfly species and their abundance in maize field margins are presented together with implications for butterfly monitoring. The study was conducted in Bavaria, South Germany, between 2000 and 2002. A total of 33 butterfly species was recorded in field margins. A small number of species dominated the community, and the butterflies observed were mostly common species. Observation duration was the most important factor influencing the monitoring results. Field margin size affected butterfly abundance, and habitat diversity had a tendency to influence species richness. Sample size and statistical power analyses indicated that a sample size in the range of 75 to 150 field margins for treatment (transgenic maize) and control (conventional maize) would detect (power of 80%) effects larger than 15% in species richness and in butterfly abundance pooled across species. However, a much higher number of field margins must be sampled in order to achieve a higher statistical power, to detect smaller effects, and to monitor single butterfly species.
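A minimal sketch of the kind of power analysis described above, estimated by Monte Carlo simulation of a two-sample comparison of field-margin abundances. The lognormal abundance model, the control mean, and the coefficient of variation are assumptions chosen for illustration; the study's own calculations were based on its observed baseline data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def power_sim(n_margins, mean_ctrl=20.0, effect=0.15, cv=0.5,
              alpha=0.05, n_sim=2000):
    """Monte Carlo power of a t-test on log abundances for detecting a
    proportional drop of `effect`. All parameter values are illustrative."""
    mean_trt = mean_ctrl * (1 - effect)
    sigma = np.sqrt(np.log(1 + cv**2))   # lognormal sigma for the given CV
    hits = 0
    for _ in range(n_sim):
        ctrl = rng.lognormal(np.log(mean_ctrl) - sigma**2 / 2, sigma, n_margins)
        trt = rng.lognormal(np.log(mean_trt) - sigma**2 / 2, sigma, n_margins)
        hits += stats.ttest_ind(np.log(ctrl), np.log(trt)).pvalue < alpha
    return hits / n_sim

for n in (75, 100, 150):
    print(n, power_sim(n))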
Topography and Higher Order Corneal Aberrations of the Fellow Eye in Unilateral Keratoconus.
Aksoy, Sibel; Akkaya, Sezen; Özkurt, Yelda; Kurna, Sevda; Açıkalın, Banu; Şengör, Tomris
2017-10-01
Comparison of topography and corneal higher order aberrations (HOA) data of fellow normal eyes of unilateral keratoconus patients with keratoconus eyes and control group. The records of 196 patients with keratoconus were reviewed. Twenty patients were identified as unilateral keratoconus. The best corrected visual acuity (BCVA), topography and aberration data of the unilateral keratoconus patients' normal eyes were compared with their contralateral keratoconus eyes and with control group eyes. For statistical analysis, flat and steep keratometry values, average corneal power, cylindrical power, surface regularity index (SRI), surface asymmetry index (SAI), inferior-superior ratio (I-S), keratoconus prediction index, and elevation-depression power (EDP) and diameter (EDD) topography indices were selected. Mean age of the unilateral keratoconus patients was 26.05±4.73 years and that of the control group was 23.6±8.53 years (p>0.05). There was no statistical difference in BCVA between normal and control eyes (p=0.108), whereas BCVA values were significantly lower in eyes with keratoconus (p=0.001). Comparison of quantitative topographic indices between the groups showed that all indices except the I-S ratio were significantly higher in the normal group than in the control group (p<0.05). The most obvious differences were in the SRI, SAI, EDP, and EDD values. All topographic indices were higher in the keratoconus eyes compared to the normal fellow eyes. There was no difference between normal eyes and the control group in terms of spherical aberration, while coma, trefoil, irregular astigmatism, and total HOA values were higher in the normal eyes of unilateral keratoconus patients (p<0.05). All HOA values were higher in keratoconus eyes than in the control group. According to our study, SRI, SAI, EDP, EDD values, and HOA other than spherical aberration were higher in the clinically and topographically normal fellow eyes of unilateral keratoconus patients when compared to a control group. This finding may be due to the mild asymmetric and morphologic changes in the subclinical stage of keratoconus leading to deterioration in the indicators of corneal irregularity and elevation changes. Therefore, these eyes may be exhibiting the early form of the disease.
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
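As a hedged illustration of power calculations of this general kind (not the authors' MSSN formulas or their pleiotropic threshold model), the sketch below approximates the power of the 2 x 3 genotype test from assumed genotype frequencies and penetrances via the chi-square noncentrality parameter.

import numpy as np
from scipy.stats import ncx2, chi2

def genotype_test_power(geno_freq, penetrance, n_cases, n_controls, alpha=5e-8):
    """Approximate power of the 2x3 genotype association test.
    geno_freq: population genotype frequencies (AA, Aa, aa);
    penetrance: P(affected | genotype). Illustrative assumptions only."""
    geno_freq = np.asarray(geno_freq, float)
    pen = np.asarray(penetrance, float)
    k = geno_freq @ pen                        # disease prevalence
    p_case = geno_freq * pen / k               # genotype freqs in cases
    p_ctrl = geno_freq * (1 - pen) / (1 - k)   # and in controls
    n = n_cases + n_controls
    p_bar = (n_cases * p_case + n_controls * p_ctrl) / n
    # Noncentrality of the Pearson chi-square statistic with 2 df
    ncp = sum(n_cases * (p_case[g] - p_bar[g])**2 / p_bar[g] +
              n_controls * (p_ctrl[g] - p_bar[g])**2 / p_bar[g]
              for g in range(3))
    crit = chi2.ppf(1 - alpha, df=2)
    return ncx2.sf(crit, df=2, nc=ncp)

# Example: minor allele frequency 0.3 under Hardy-Weinberg equilibrium
q = 0.3
freqs = [(1 - q)**2, 2 * q * (1 - q), q**2]
print(genotype_test_power(freqs, [0.05, 0.065, 0.08], 3000, 3000))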
Explanation of Two Anomalous Results in Statistical Mediation Analysis.
Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P
2012-01-01
Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.
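For readers unfamiliar with the quantity being tested, the sketch below simulates a simple mediation model and applies a plain percentile bootstrap to the indirect effect a*b; the bias-corrected variant discussed above additionally shifts the percentiles using the fraction of bootstrap estimates falling below the full-sample estimate. Path sizes and sample size are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Simulated mediation data: X -> M -> Y with paths a and b (assumed sizes)
n, a, b = 100, 0.39, 0.14
X = rng.standard_normal(n)
M = a * X + rng.standard_normal(n)
Y = b * M + rng.standard_normal(n)

def indirect(idx):
    x, m, y = X[idx], M[idx], Y[idx]
    a_hat = np.polyfit(x, m, 1)[0]                   # slope of M on X
    b_hat = np.linalg.lstsq(np.column_stack([x, m, np.ones_like(x)]),
                            y, rcond=None)[0][1]     # slope of Y on M given X
    return a_hat * b_hat

boot = [indirect(rng.integers(0, n, n)) for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"95% percentile bootstrap CI for a*b: ({lo:.3f}, {hi:.3f})")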
Rubio-Arias, Jacobo Ángel; Ramos-Campo, Domingo Jesús; Peña Amaro, José; Esteban, Paula; Mendizábal, Susana; Jiménez, José Fernando
2017-11-01
The purpose of this study was to analyse gender differences in neuromuscular behaviour of the gastrocnemius and vastus lateralis during the take-off phase of a countermovement jump (CMJ), using direct measures (ground reaction forces, muscle activity and dynamic ultrasound). Sixty-four young adults (aged 18-25 years) participated voluntarily in this study, 35 men and 29 women. The firing of the trigger allowed synchronized collection of vertical ground reaction forces (GRF), surface electromyography (sEMG) activity and dynamic ultrasound of the gastrocnemius of both legs. Statistically significant gender differences were observed in jump performance, which appear to be based on differences in muscle architecture and the electrical activation of the gastrocnemius muscles and vastus lateralis. While men developed greater peak power, take-off velocities, jump heights and jump kinetics than women, women required higher electrical activity to develop lower power values. Additionally, men had higher pennation angles and muscle thickness than women. Men show higher performance in the jump test than women, owing to statistically significant differences in the values of muscle architecture (pennation angle and muscle thickness), a lower Neural Efficiency Index and a higher amount of sEMG activity per second during the take-off phase of a CMJ. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.
Luo, Li; Zhu, Yun
2012-01-01
Genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low frequency, and rare variants. The current strategies for association studies are well developed for identifying association of common variants with the common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the current variant-by-variant analysis paradigm for GWAS of common variants to the collective test of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with the diseases. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole genome low coverage pilot data in the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T2, the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ2 test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets. PMID:22651812
Luo, Li; Zhu, Yun; Xiong, Momiao
2012-06-01
Genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low frequency, and rare variants. The current strategies for association studies are well developed for identifying association of common variants with the common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze collective frequency differences between cases and controls have shifted the current variant-by-variant analysis paradigm for GWAS of common variants to the collective test of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with the diseases. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole genome low coverage pilot data in the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T2, the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ2 test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets.
Sarshar, Mohammad; Wong, Winson T.; Anvari, Bahman
2014-01-01
Optical tweezers have become an important instrument in force measurements associated with various physical, biological, and biophysical phenomena. Quantitative use of optical tweezers relies on accurate calibration of the stiffness of the optical trap. Using the same optical tweezers platform operating at 1064 nm and beads with two different diameters, we present a comparative study of viscous drag force, equipartition theorem, Boltzmann statistics, and power spectral density (PSD) as methods for calibrating the stiffness of a single-beam gradient-force optical trap at trapping laser powers in the range of 0.05 to 1.38 W at the focal plane. The equipartition theorem and Boltzmann statistics methods demonstrate a linear stiffness with trapping laser powers up to 355 mW, when used in conjunction with video position sensing means. The PSD of a trapped particle's Brownian motion or measurements of the particle displacement against known viscous drag forces can be reliably used for stiffness calibration of an optical trap over a greater range of trapping laser powers. The viscous drag stiffness calibration method produces results relevant to applications where the trapped particle undergoes large displacements, and, at a given position sensing resolution, can be used for stiffness calibration at higher trapping laser powers than the PSD method. PMID:25375348
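The equipartition method mentioned above reduces to a one-line estimate: the trap stiffness is k = k_B T / var(x), with x the tracked bead position. The sketch below applies it to simulated positions; the temperature and the true stiffness are assumed values, not ones from the study.

import numpy as np

# Equipartition calibration: k_trap * <x^2> = k_B * T, so the stiffness
# follows from the variance of the bead's position alone.
k_B, T = 1.380649e-23, 295.0    # J/K and K, assumed room temperature
true_k = 2.0e-5                 # N/m, illustrative stiffness

# Simulated equilibrium positions (real data: video or photodiode tracking, m)
rng = np.random.default_rng(3)
x = rng.normal(0.0, np.sqrt(k_B * T / true_k), 100_000)

k_est = k_B * T / np.var(x)
print(f"estimated stiffness = {k_est:.3e} N/m")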
Kovač, Marko; Bauer, Arthur; Ståhl, Göran
2014-01-01
Background, Materials and Methods: To meet the demands of sustainable forest management and international commitments, European nations have designed a variety of forest-monitoring systems for specific needs. While the majority of countries are committed to independent, single-purpose inventorying, a minority of countries have merged their single-purpose forest inventory systems into integrated forest resource inventories. The statistical efficiencies of the Bavarian, Slovene and Swedish integrated forest resource inventory designs are investigated with various statistical parameters of the variables of growing stock volume, shares of damaged trees, and deadwood volume. The parameters are derived by using the estimators for the given inventory designs. The required sample sizes are derived via the general formula for non-stratified independent samples and via statistical power analyses. The cost effectiveness of the designs is compared via two simple cost-effectiveness ratios. Results: In terms of precision, the most illustrative parameters of the variables are relative standard errors; their values range between 1% and 3% if the variables' variations are low (s% < 80%) and are higher in the case of higher variations. A comparison of the actual and required sample sizes shows that the actual sample sizes were deliberately set high to provide precise estimates for the majority of variables and strata. In turn, the successive inventories are statistically efficient, because they allow detecting the mean changes of variables with powers higher than 90%; the highest precision is attained for the changes of growing stock volume and the lowest for the changes of the shares of damaged trees. Two indicators of cost effectiveness also show that the time input spent for measuring one variable decreases with the complexity of inventories. Conclusion: There is an increasing need for credible information on forest resources to be used for decision making and national and international policy making. Such information can be cost-efficiently provided through integrated forest resource inventories. PMID:24941120
Low statistical power in biomedical science: a review of three human research domains.
Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R
2017-02-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
Low statistical power in biomedical science: a review of three human research domains
Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois
2017-01-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409
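The notion of power used in the review above can be made concrete with a short calculation: take the meta-analytic effect size as the best estimate of the true effect and compute the power of a two-group study at various sample sizes. The sketch below does this for a standardized mean difference; the effect size of 0.3 is an illustrative assumption, not a value from the review.

from statsmodels.stats.power import TTestIndPower

# Power of a two-group comparison at a fixed standardized effect size,
# across the per-group sample sizes typical of small primary studies.
analysis = TTestIndPower()
for n_per_group in (10, 20, 50, 200):
    pw = analysis.power(effect_size=0.3, nobs1=n_per_group,
                        alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group:>3} per group -> power = {pw:.2f}")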
Toward "Constructing" the Concept of Statistical Power: An Optical Analogy.
ERIC Educational Resources Information Center
Rogers, Bruce G.
This paper presents a visual analogy that may be used by instructors to teach the concept of statistical power in statistical courses. Statistical power is mathematically defined as the probability of rejecting a null hypothesis when that null is false, or, equivalently, the probability of detecting a relationship when it exists. The analogy…
Bon-EV: an improved multiple testing procedure for controlling false discovery rates.
Li, Dongmei; Xie, Zidian; Zand, Martin; Fogg, Thomas; Dye, Timothy
2017-01-03
Stability of multiple testing procedures, defined as the standard deviation of the total number of discoveries, can be used as an indicator of variability of multiple testing procedures. Improving the stability of multiple testing procedures can help to increase the consistency of findings from replicated experiments. Benjamini-Hochberg's and Storey's q-value procedures are two commonly used multiple testing procedures for controlling false discoveries in genomic studies. Storey's q-value procedure has higher power and lower stability than Benjamini-Hochberg's procedure. To improve upon the stability of Storey's q-value procedure and maintain its high power in genomic data analysis, we propose a new multiple testing procedure, named Bon-EV, to control the false discovery rate (FDR) based on Bonferroni's approach. Simulation studies show that our proposed Bon-EV procedure can maintain the high power of Storey's q-value procedure and also result in better FDR control and higher stability than Storey's q-value procedure for samples of large size (30 in each group) and medium size (15 in each group) for either independent, somewhat correlated, or highly correlated test statistics. When sample size is small (5 in each group), our proposed Bon-EV procedure has performance between the Benjamini-Hochberg procedure and Storey's q-value procedure. Examples using RNA-Seq data show that the Bon-EV procedure has higher stability than Storey's q-value procedure while maintaining equivalent power, and higher power than the Benjamini-Hochberg procedure. For medium or large sample sizes, the Bon-EV procedure has improved FDR control and stability compared with Storey's q-value procedure and improved power compared with the Benjamini-Hochberg procedure. The Bon-EV multiple testing procedure is available as the BonEV package in R for download at https://CRAN.R-project.org/package=BonEV.
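Bon-EV itself is distributed as the BonEV R package linked above; for orientation, the sketch below implements the baseline Benjamini-Hochberg step-up procedure that it builds on, applied to a mixture of null and non-null p values (the mixture proportions are assumptions chosen for illustration).

import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Plain Benjamini-Hochberg step-up procedure; returns a boolean
    array of discoveries at FDR level q."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True        # reject the k smallest p values
    return reject

rng = np.random.default_rng(4)
p = np.concatenate([rng.uniform(size=900),           # true nulls
                    rng.beta(0.1, 5.0, size=100)])   # non-nulls, small p
print("discoveries:", benjamini_hochberg(p).sum())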
Overweight and pregnancy complications.
Abrams, B; Parker, J
1988-01-01
The association between increased prepregnancy weight for height and seven pregnancy complications was studied in a multi-racial sample of more than 4100 recent deliveries. Body mass indices were calculated and used to classify women as average weight (90-119 percent of ideal or BMI 19.21-25.60), moderately overweight (120-135 percent of ideal or BMI 25.61-28.90), and very overweight (greater than 135 percent of ideal or BMI greater than 28.91) prior to pregnancy. Compared to women of average weight for height, very overweight women had a higher risk of diabetes, hypertension, pregnancy-induced hypertension and primary cesarean section delivery. Moderately overweight women were also at higher risk than average for diabetes, pregnancy-induced hypertension and primary cesarean deliveries, but the relative risks were of a smaller magnitude than for very overweight women. With women of average prepregnancy body mass as reference, moderately elevated but not statistically significant relative risks were found for perinatal mortality in the very overweight group and for urinary tract infections in both overweight groups, and a decreased risk for anemia was found in the very overweight group. However, post-hoc power analyses indicated that the number of overweight women in the sample did not allow adequate statistical power to detect these small differences in risk. To overcome limitations associated with low statistical power, the results of three recent studies of these outcomes in very overweight pregnant women were combined and summarized using Mantel-Haenszel techniques. This second, larger analysis suggested that very overweight women are at significantly higher risk for all seven outcomes studied. Summary results for moderately overweight women could not be calculated, since only two of the studies had evaluated moderately overweight women separately. These latter results support other findings that both moderate overweight and very overweight are risk factors during pregnancy, with the highest risk occurring in the heaviest group. Although these results indicate that moderate overweight is a risk factor during pregnancy, additional studies are needed to confirm the impact of being 20-35 percent above ideal weight prior to pregnancy. The results of this analysis also imply that, since the baseline incidence of many perinatal complications is low, studies relating overweight and pregnancy complications should include large enough samples of overweight women so that there is adequate statistical power to reliably detect differences in complication rates.
Scintillation statistics measured in an earth-space-earth retroreflector link
NASA Technical Reports Server (NTRS)
Bufton, J. L.
1977-01-01
Scintillation was measured in a vertical path from a ground-based laser transmitter to the Geos 3 satellite and back to a ground-based receiver telescope, and the experimental results were compared with analytical results presented in a companion paper (Bufton, 1977). The normalized variance, the probability density function and the power spectral density of scintillation were all measured. Moments of the satellite scintillation data in terms of normalized variance were lower than expected. The power spectrum analysis suggests that there were scintillation components at frequencies higher than the 250 Hz bandwidth available in the experiment.
Improved Statistics for Genome-Wide Interaction Analysis
Ueki, Masao; Cordell, Heather J.
2012-01-01
Recently, Wu and colleagues [1] proposed two novel statistics for genome-wide interaction analysis using case/control or case-only data. In computer simulations, their proposed case/control statistic outperformed competing approaches, including the fast-epistasis option in PLINK and logistic regression analysis under the correct model; however, reasons for its superior performance were not fully explored. Here we investigate the theoretical properties and performance of Wu et al.'s proposed statistics and explain why, in some circumstances, they outperform competing approaches. Unfortunately, we find minor errors in the formulae for their statistics, resulting in tests that have higher than nominal type 1 error. We also find minor errors in PLINK's fast-epistasis and case-only statistics, although theory and simulations suggest that these errors have only negligible effect on type 1 error. We propose adjusted versions of all four statistics that, both theoretically and in computer simulations, maintain correct type 1 error rates under the null hypothesis. We also investigate statistics based on correlation coefficients that maintain similar control of type 1 error. Although designed to test specifically for interaction, we show that some of these previously-proposed statistics can, in fact, be sensitive to main effects at one or both loci, particularly in the presence of linkage disequilibrium. We propose two new “joint effects” statistics that, provided the disease is rare, are sensitive only to genuine interaction effects. In computer simulations we find, in most situations considered, that highest power is achieved by analysis under the correct genetic model. Such an analysis is unachievable in practice, as we do not know this model. However, generally high power over a wide range of scenarios is exhibited by our joint effects and adjusted Wu statistics. We recommend use of these alternative or adjusted statistics and urge caution when using Wu et al.'s originally-proposed statistics, on account of the inflated error rate that can result. PMID:22496670
Biomonitoring of organochlorines in women with benign and malignant breast disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siddiqui, M.K.J.; Anand, M.; Mehrotra, P.K.
2005-06-01
Established risk factors for breast cancer explain breast cancer risk only partially. Organochlorines are considered to be a possible cause for hormone-dependent cancers. A hospital-based case-control study, the first from India, was conducted among 50 women undergoing surgery for breast disease to examine the association between organochlorine exposure and breast cancer risk. Blood, tumor, and surrounding adipose tissue of the breast were collected from the subjects with benign (control) and malignant breast (study) lesions and analyzed to determine organochlorine insecticides using a gas-liquid chromatograph equipped with an electron capture detector. The α, β, γ, and δ isomers of hexachlorocyclohexane (HCH), p,p'-dichlorodiphenyltrichloroethane (DDT), o,p'-DDT, p,p'-dichlorodiphenyldichloroethylene, and p,p'-dichlorodiphenyldichloroethane were frequently detected in the three specimens. Total HCH and total DDT levels were higher in the blood of the study group (25 cases) than in those of the controls (25 cases), with only γ-HCH being significantly different (P < 0.05). However, both total HCH and total DDT were higher in the tumor tissues of the controls than in those of the study group; γ-HCH was significantly different (P < 0.05). The level of total HCH (α-HCH was significantly different, P < 0.05) was higher in the breast adipose tissue of the study group, whereas total DDT was higher in the breast adipose tissue of the control group. The distribution of known confounders of breast cancer including age, body mass index, age at menarche and menopause, duration of breast feeding, and family history related to breast disease did not differ significantly between benign and malignant groups. This pilot study with limited statistical power does not support a positive association between exposure to organochlorines and risk of breast cancer but paves the way for a larger Indian study with greater statistical power encompassing different regions of the country to enable statistically sound conclusions.
Carriot, Jérome; Jamali, Mohsen; Chacron, Maurice J; Cullen, Kathleen E
2014-06-11
It is widely believed that sensory systems are optimized for processing stimuli occurring in the natural environment. However, it remains unknown whether this principle applies to the vestibular system, which contributes to essential brain functions ranging from the most automatic reflexes to spatial perception and motor coordination. Here we quantified, for the first time, the statistics of natural vestibular inputs experienced by freely moving human subjects during typical everyday activities. Although previous studies have found that the power spectra of natural signals across sensory modalities decay as a power law (i.e., as 1/f^α), we found that this did not apply to natural vestibular stimuli. Instead, power decreased slowly at lower and more rapidly at higher frequencies for all motion dimensions. We further establish that this unique stimulus structure is the result of active motion as well as passive biomechanical filtering occurring before any neural processing. Notably, the transition frequency (i.e., frequency at which power starts to decrease rapidly) was lower when subjects passively experienced sensory stimulation than when they actively controlled stimulation through their own movement. In contrast to signals measured at the head, the spectral content of externally generated (i.e., passive) environmental motion did follow a power law. Specifically, transformations caused by both motor control and biomechanics shape the statistics of natural vestibular stimuli before neural processing. We suggest that the unique structure of natural vestibular stimuli will have important consequences on the neural coding strategies used by this essential sensory system to represent self-motion in everyday life. Copyright © 2014 the authors.
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
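A minimal sketch of the a priori calculation described above: fix the test implied by the outcome's scale of measurement and the research design, the standardized effect size (which folds in the variance of the effect), the significance level, and the target power, then solve for the remaining component, the sample size. The medium effect size below is an illustrative assumption.

from statsmodels.stats.power import TTestIndPower

# Solve for n per group given the other components of the calculation.
n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                power=0.80, ratio=1.0,
                                alternative='two-sided')
print(f"required n per group: {n:.1f}")   # ~64 for a medium effect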
The Statistical Power of Planned Comparisons.
ERIC Educational Resources Information Center
Benton, Roberta L.
Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
Willis, Brian H; Riley, Richard D
2017-09-20
An important question for clinicians appraising a meta-analysis is whether the findings are likely to be valid in their own practice: does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
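The mechanics of the leave-one-out idea can be sketched as follows for a fixed-effect pooled estimate: each study is held out in turn and compared with the estimate pooled from the remainder. This illustrates the cross-validation machinery only; the paper's statistic Vn has its own standardization and derived distribution, and the effect sizes and variances below are assumed values.

import numpy as np

yi = np.array([0.42, 0.18, 0.30, 0.55, 0.21, 0.38])   # study effects (assumed)
vi = np.array([0.04, 0.02, 0.05, 0.06, 0.03, 0.04])   # within-study variances

for i in range(len(yi)):
    keep = np.arange(len(yi)) != i
    w = 1.0 / vi[keep]                                # inverse-variance weights
    pooled = np.sum(w * yi[keep]) / np.sum(w)
    # Standardized deviation of the held-out study from the pooled estimate
    z = (yi[i] - pooled) / np.sqrt(vi[i] + 1.0 / np.sum(w))
    print(f"study {i}: z = {z:+.2f}")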
Error, Power, and Blind Sentinels: The Statistics of Seagrass Monitoring
Schultz, Stewart T.; Kruschel, Claudia; Bakran-Petricioli, Tatjana; Petricioli, Donat
2015-01-01
We derive statistical properties of standard methods for monitoring of habitat cover worldwide, and criticize them in the context of mandated seagrass monitoring programs, as exemplified by Posidonia oceanica in the Mediterranean Sea. We report the novel result that cartographic methods with non-trivial classification errors are generally incapable of reliably detecting habitat cover losses less than about 30 to 50%, and the field labor required to increase their precision can be orders of magnitude higher than that required to estimate habitat loss directly in a field campaign. We derive a universal utility threshold of classification error in habitat maps that represents the minimum habitat map accuracy above which direct methods are superior. Widespread government reliance on blind-sentinel methods for monitoring seafloor can obscure the gradual and currently ongoing losses of benthic resources until the time has long passed for meaningful management intervention. We find two classes of methods with very high statistical power for detecting small habitat cover losses: 1) fixed-plot direct methods, which are over 100 times as efficient as direct random-plot methods in a variable habitat mosaic; and 2) remote methods with very low classification error such as geospatial underwater videography, which is an emerging, low-cost, non-destructive method for documenting small changes at millimeter visual resolution. General adoption of these methods and their further development will require a fundamental cultural change in conservation and management bodies towards the recognition and promotion of requirements of minimal statistical power and precision in the development of international goals for monitoring these valuable resources and the ecological services they provide. PMID:26367863
Melody and pitch processing in five musical savants with congenital blindness.
Pring, Linda; Woolf, Katherine; Tadic, Valerie
2008-01-01
We examined absolute-pitch (AP) and short-term musical memory abilities of five musical savants with congenital blindness, seven musicians, and seven non-musicians with good vision and normal intelligence in two experiments. In the first, short-term memory for musical phrases was tested and the savants and musicians performed statistically indistinguishably, both significantly outperforming the non-musicians and remembering more material from the C major scale sequences than random trials. In the second experiment, participants learnt associations between four pitches and four objects using a non-verbal paradigm. This experiment approximates to testing AP ability. Low statistical power meant the savants were not statistically better than the musicians, although only the savants scored statistically higher than the non-musicians. The results are evidence for a musical module, separate from general intelligence; they also support the anecdotal reporting of AP in musical savants, which is thought to be necessary for the development of musical-savant skill.
Image statistics underlying natural texture selectivity of neurons in macaque V4
Okazawa, Gouki; Tajima, Satohiro; Komatsu, Hidehiko
2015-01-01
Our daily visual experiences are inevitably linked to recognizing the rich variety of textures. However, how the brain encodes and differentiates a plethora of natural textures remains poorly understood. Here, we show that many neurons in macaque V4 selectively encode sparse combinations of higher-order image statistics to represent natural textures. We systematically explored neural selectivity in a high-dimensional texture space by combining texture synthesis and efficient-sampling techniques. This yielded parameterized models for individual texture-selective neurons. The models provided parsimonious but powerful predictors for each neuron's preferred textures using a sparse combination of image statistics. As a whole population, the neuronal tuning was distributed in a way suitable for categorizing textures and quantitatively predicts human ability to discriminate textures. Together, we suggest that the collective representation of visual image statistics in V4 plays a key role in organizing natural texture perception. PMID:25535362
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
Austin, Peter C.
2017-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
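As a hedged illustration of testing the proportional hazards assumption in practice (using the lifelines library's scaled-Schoenfeld-style test, rather than the two specific methods compared in the study above), the sketch below simulates a binary covariate whose effect varies over time and checks the fitted Cox model; all data-generating values are assumptions.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(5)
n = 500
x = rng.integers(0, 2, n)
# Different Weibull shapes per group give a hazard ratio that changes
# over time, i.e., a built-in violation of proportional hazards.
t = np.where(x == 1,
             rng.weibull(0.7, n) * 10,    # decreasing-hazard group
             rng.weibull(1.5, n) * 10)    # increasing-hazard group
df = pd.DataFrame({"T": t, "E": np.ones(n, dtype=int), "x": x})

cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
result = proportional_hazard_test(cph, df, time_transform="rank")
result.print_summary()   # a small p-value flags the violation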
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four…
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
Effect of gear ratio on peak power and time to peak power in BMX cyclists.
Rylands, Lee P; Roberts, Simon J; Hurst, Howard T
2017-03-01
The aim of this study was to ascertain whether gear ratio selection has an effect on peak power and time to peak power production in elite Bicycle Motocross (BMX) cyclists. Eight male elite BMX riders volunteered for the study. Each rider performed three 10-s maximal sprints on an Olympic standard indoor BMX track. The riders' bicycles were fitted with a portable SRM power meter. Each rider performed the three sprints using gear ratios of 41/16, 43/16 and 45/16 teeth. The results for the 41/16 and 45/16 gear ratios were compared to the current standard 43/16 gear ratio. Statistically significant differences were found between the gear ratios for peak power (F(2,14) = 6.448; p = .010) and peak torque (F(2,14) = 4.777; p = .026), but no significant difference was found for time to peak power (F(2,14) = 0.200; p = .821). When comparing gear ratios, the results showed that the 45/16 gear ratio elicited the highest peak power, 1658 ± 221 W, compared to 1436 ± 129 W and 1380 ± 56 W for the 43/16 and 41/16 ratios, respectively. For time to peak power, the 41/16 gear ratio reached peak power 0.01 s sooner and the 45/16 0.22 s later than the 43/16. The findings of this study suggest that gear ratio choice has a significant effect on peak power production, though time to peak power output is not significantly affected. Therefore, selecting a higher gear ratio results in riders attaining higher power outputs without reducing their start time.
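The gearing arithmetic behind these comparisons is straightforward: the gear ratio is chainring teeth divided by sprocket teeth, and the rollout (distance travelled per crank revolution) is that ratio times the wheel circumference. The wheel diameter below is a typical BMX race value assumed for illustration, not one reported in the study.

import math

wheel_diameter_m = 0.52   # assumed overall diameter of a 20-inch race wheel

for chainring, sprocket in [(41, 16), (43, 16), (45, 16)]:
    ratio = chainring / sprocket
    rollout = ratio * math.pi * wheel_diameter_m
    print(f"{chainring}/{sprocket}: ratio = {ratio:.3f}, "
          f"rollout = {rollout:.2f} m/rev")

# A higher ratio moves the bike further per pedal stroke, consistent with
# the higher peak power (and slightly later peak) observed for 45/16.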
NASA Astrophysics Data System (ADS)
Flanigan, D.; McCarrick, H.; Jones, G.; Johnson, B. R.; Abitbol, M. H.; Ade, P.; Araujo, D.; Bradford, K.; Cantor, R.; Che, G.; Day, P.; Doyle, S.; Kjellstrand, C. B.; Leduc, H.; Limon, M.; Luu, V.; Mauskopf, P.; Miller, A.; Mroczkowski, T.; Tucker, C.; Zmuidzinas, J.
2016-02-01
We report photon-noise limited performance of horn-coupled, aluminum lumped-element kinetic inductance detectors at millimeter wavelengths. The detectors are illuminated by a millimeter-wave source that uses an active multiplier chain to produce radiation between 140 and 160 GHz. We feed the multiplier with either amplified broadband noise or a continuous-wave tone from a microwave signal generator. We demonstrate that the detector response over a 40 dB range of source power is well-described by a simple model that considers the number of quasiparticles. The detector noise-equivalent power (NEP) is dominated by photon noise when the absorbed power is greater than approximately 1 pW, which corresponds to NEP ≈ 2 × 10^-17 W Hz^-1/2, referenced to absorbed power. At higher source power levels, we observe the relationships between noise and power expected from the photon statistics of the source signal: NEP ∝ P for broadband (chaotic) illumination and NEP ∝ P^1/2 for continuous-wave (coherent) illumination.
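The quoted numbers can be reproduced from the standard photon-noise approximation NEP^2 = 2 h nu P for coherent illumination, with an additional bunching term of roughly 2 P^2 / delta_nu for chaotic (broadband) light; the optical bandwidth below is an assumed value, not one taken from the paper.

import numpy as np

h = 6.62607015e-34    # Planck constant, J s
nu = 150e9            # Hz, mid-band of the 140-160 GHz source
d_nu = 20e9           # Hz, assumed optical bandwidth for the bunching term

def nep_photon(P, chaotic=True):
    """Approximate photon-noise NEP referenced to absorbed power P (W).
    The shot term scales as sqrt(P); the bunching term dominates for
    bright chaotic illumination, giving NEP proportional to P."""
    nep_sq = 2 * h * nu * P
    if chaotic:
        nep_sq += 2 * P**2 / d_nu
    return np.sqrt(nep_sq)

for P in (1e-12, 1e-11, 1e-10):
    print(f"P = {P:.0e} W: coherent {nep_photon(P, False):.2e}, "
          f"chaotic {nep_photon(P, True):.2e} W/sqrt(Hz)")
# At P = 1 pW the coherent value is ~1.4e-17 W/sqrt(Hz), matching the
# order of magnitude quoted above.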
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Electric Power Annual presents a summary of electric utility statistics at national, regional and State levels. The objective of the publication is to provide industry decisionmakers, government policymakers, analysts and the general public with historical data that may be used in understanding US electricity markets. The Electric Power Annual is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels; Energy Information Administration (EIA); US Department of Energy. "The US Electric Power Industry at a Glance" section presents a profile of the electric power industry ownership and performance, and a review of key statistics for the year. Subsequent sections present data on generating capability, including proposed capability additions; net generation; fossil-fuel statistics; retail sales; revenue; financial statistics; environmental statistics; electric power transactions; demand-side management; and nonutility power producers. In addition, the appendices provide supplemental data on major disturbances and unusual occurrences in US electricity power systems. Each section contains related text and tables and refers the reader to the appropriate publication that contains more detailed data on the subject matter. Monetary values in this publication are expressed in nominal terms.
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.
Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
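The mechanism driving these sample size increases, attenuation of the pretest-posttest correlation under direct truncation, is easy to demonstrate by simulation; the population correlation of 0.7 and the selection fractions below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(6)

# Bivariate normal pretest/posttest scores with population correlation 0.7
pre, post = rng.multivariate_normal([0, 0], [[1, .7], [.7, 1]], 100_000).T

def corr_after_selection(q):
    keep = pre < np.quantile(pre, q)   # direct truncation on the pretest
    return np.corrcoef(pre[keep], post[keep])[0, 1]

for q in (1.0, 0.5, 0.25, 0.10):
    print(f"bottom {q:.0%} selected: r = {corr_after_selection(q):.2f}")

# A lower r means the pretest covariate removes less error variance from
# the posttest analysis, which is what drives the large sample size
# increases reported above for directly truncated selection measures.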
New powerful statistics for alignment-free sequence comparison under a pattern transfer model.
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S; Sun, Fengzhu
2011-09-07
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2* and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2* and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. Copyright © 2011 Elsevier Ltd. All rights reserved.
New Powerful Statistics for Alignment-free Sequence Comparison Under a Pattern Transfer Model
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S.; Sun, Fengzhu
2011-01-01
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2∗ and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2∗ and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. PMID:21723298
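For orientation, the plain (uncentred) D2 statistic is just the sum over all k-mers of the product of their occurrence counts in the two sequences; the centred and standardized variants D2* and D2s subtract expected counts under a background model before combining. A minimal sketch follows, with toy sequences and a word length chosen for illustration.

from collections import Counter

def d2(seq_a, seq_b, k=5):
    """Plain D2 statistic: sum over shared k-mers w of X_w * Y_w, where
    X_w and Y_w are the occurrence counts in the two sequences."""
    counts_a = Counter(seq_a[i:i + k] for i in range(len(seq_a) - k + 1))
    counts_b = Counter(seq_b[i:i + k] for i in range(len(seq_b) - k + 1))
    return sum(counts_a[w] * counts_b[w]
               for w in counts_a.keys() & counts_b.keys())

a = "ACGTACGTGGTCATCGATCGTACGATCGATCG"
b = "TTACGTACGTGGTCATAAGCGCGCGCTATATA"
print(d2(a, b, k=5))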
Learning Hierarchical Feature Extractors for Image Recognition
2012-09-01
…parameter is the set of pools over which the summary statistic is computed. We propose locality in feature configuration space as a natural criterion for devising better pools. Finally, we propose ways to make coding faster and more powerful through fast convolutional…
[Figure residue: pooling (dotted lines) is consistently higher than average pooling (solid lines), but the gap is much less significant with the intersection kernel.]
Investigating market efficiency through a forecasting model based on differential equations
NASA Astrophysics Data System (ADS)
de Resende, Charlene C.; Pereira, Adriano C. M.; Cardoso, Rodrigo T. N.; de Magalhães, A. R. Bosco
2017-05-01
A new differential equation based model for stock price trend forecast is proposed as a tool to investigate efficiency in an emerging market. Its predictive power was shown statistically to be higher than that of a completely random model, signaling the presence of arbitrage opportunities. Conditions under which accuracy is enhanced are investigated, and application of the model as part of a trading strategy is discussed.
The geographic mosaic of Ecuadorian Y-chromosome ancestry.
Toscanini, U; Gaviria, A; Pardo-Seco, J; Gómez-Carballa, A; Moscoso, F; Vela, M; Cobos, S; Lupero, A; Zambrano, A K; Martinón-Torres, F; Carabajo-Marcillo, A; Yunga-León, R; Ugalde-Noritz, N; Ordoñez-Ugalde, A; Salas, A
2018-03-01
Ecuadorians originated from a complex mixture of Native American indigenous people with Europeans and Africans. We analyzed Y-chromosome STRs (Y-STRs) in a sample of 415 Ecuadorians (145 using the AmpFlSTR® Yfiler™ system [Life Technologies, USA] and 270 using the PowerPlex® Y23 system [Promega Corp., USA]; hereafter Yfiler and PPY23, respectively) representing three main ecological continental regions of the country, namely Amazon rainforest, Andes, and Pacific coast. Diversity values are high in the three regions, and the PPY23 exhibits higher discrimination power than the Yfiler set. While summary statistics, AMOVA, and R_ST distances show low to moderate levels of population stratification, inferred ancestry derived from Y-STRs reveals clear patterns of geographic variation. The major ancestry in Ecuadorian males is European (61%), followed by an important Native American component (34%), whereas the African ancestry (5%) is mainly concentrated in the Northwest corner of the country. We conclude that classical procedures for measuring population stratification do not have the desirable sensitivity. Statistical inference of ancestry from Y-STRs is a satisfactory alternative for revealing patterns of spatial variation that would pass unnoticed when using popular statistical summary indices. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.
2015-02-01
The effectiveness of combining nuclear power plants equipped with water-cooled water-moderated power-generating reactors (VVER) with other sources of energy within unified power-generating complexes is analyzed. The use of such power-generating complexes makes it possible to achieve the necessary load pickup capability and flexibility in performing the mandatory selective primary and emergency control of load, as well as participation in passing the night minimums of electric load curves, while retaining high values of the capacity utilization factor of the entire power-generating complex at higher levels of efficiency of the steam-turbine part. Versions involving combined use of nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. In view of the fact that hydrogen is an unsafe energy carrier, the use of which introduces additional elements of risk, a procedure for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants is proposed. A risk-accounting technique based on statistical data is considered, including the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected intensities of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. In estimating the damage inflicted by events (fires and explosions) occurring in nuclear power plant turbine buildings, US statistical data were used. Conservative scenarios of fires and explosions of hydrogen-air mixtures in nuclear power plant turbine buildings are presented. Results from calculations of the ratio of the introduced annual risk to the attained net annual profit in commensurable versions are given. This ratio can be used in selecting projects characterized by the most technically attainable and socially acceptable safety.
Better prognostic marker in ICU - APACHE II, SOFA or SAP II!
Naqvi, Iftikhar Haider; Mahmood, Khalid; Ziaullaha, Syed; Kashif, Syed Mohammad; Sharif, Asim
2016-01-01
This study was designed to determine the comparative efficacy of different scoring systems in assessing the prognosis of critically ill patients. This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU), Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. Predicted mortality for APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model. A total of 96 patients with equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97±8.53) was higher than in survivors (15.82±8.79), with a statistically significant p value (<0.001). The average SOFA score in non-survivors (9.68±4.88) was higher than in survivors (5.63±3.63), with a statistically significant p value (<0.001). The average SAP II score in non-survivors (53.71±19.05) was higher than in survivors (30.18±16.24), with a statistically significant p value (<0.001). All three tested scoring models (APACHE II, SAP II and SOFA) would be accurate enough for a general description of our ICU patients. APACHE II showed better calibration and discrimination power than SAP II and SOFA.
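Discrimination of such scores is conventionally summarized by the area under the ROC curve. The sketch below computes an AUC from synthetic APACHE II scores drawn to mimic the reported group means and standard deviations; the group split and normality are illustrative assumptions, not the study's patient data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic scores mimicking the reported summaries: survivors 15.82 +/- 8.79,
# non-survivors 27.97 +/- 8.53; a 60/36 split of the 96 patients is assumed.
survivors = rng.normal(15.82, 8.79, 60)
nonsurvivors = rng.normal(27.97, 8.53, 36)
scores = np.concatenate([survivors, nonsurvivors])
died = np.concatenate([np.zeros(60), np.ones(36)])
print("discrimination (AUC):", roc_auc_score(died, scores))
```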
Cubison, M. J.; Jimenez, J. L.
2015-06-05
Least-squares fitting of overlapping peaks is often needed to separately quantify ions in high-resolution mass spectrometer data. A statistical simulation approach is used to assess the statistical precision of the retrieved peak intensities. The sensitivity of the fitted peak intensities to statistical noise due to ion counting is probed for synthetic data systems consisting of two overlapping ion peaks whose positions are pre-defined and fixed in the fitting procedure. The fitted intensities are sensitive to imperfections in the m/Q calibration. These propagate as a limiting precision in the fitted intensities that may greatly exceed the precision arising from counting statistics. The precision on the fitted peak intensity falls into one of three regimes. In the "counting-limited regime" (regime I), above a peak separation χ ~ 2 to 3 half-widths at half-maximum (HWHM), the intensity precision is similar to that due to counting error for an isolated ion. For smaller χ and higher ion counts (~1000 and higher), the intensity precision rapidly degrades as the peak separation is reduced (the "calibration-limited regime", regime II). Alternatively, for χ < 1.6 but lower ion counts (e.g. 10–100), the intensity precision is dominated by the additional ion count noise from the overlapping ion and is not affected by the imprecision in the m/Q calibration (the "overlapping-limited regime", regime III). The transition between the counting- and m/Q calibration-limited regimes is shown to be weakly dependent on resolving power and data spacing and can thus be approximated by a simple parameterisation based only on peak intensity ratios and separation. A simple equation can be used to find potentially problematic ion pairs when evaluating results from fitted spectra containing many ions. Longer integration times can improve the precision in regimes I and III, but a given ion pair can only be moved out of regime II through increased spectrometer resolving power. As a result, studies presenting data obtained from least-squares fitting procedures applied to mass spectral peaks should explicitly consider these limits on statistical precision.
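The fitting setup described, overlapping peaks with positions fixed and only intensities free, can be sketched as follows. The Gaussian peak shape, counts, and separation below are illustrative assumptions; the paper's synthetic systems are parameterized more carefully.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA = 1.0
HWHM = SIGMA * np.sqrt(2 * np.log(2))
MU1, MU2 = 0.0, 1.5 * HWHM            # separation chi = 1.5 HWHM

def peak(x, a, mu):
    return a * np.exp(-0.5 * ((x - mu) / SIGMA) ** 2)

def model(x, a1, a2):                 # positions pre-defined and fixed in the fit
    return peak(x, a1, MU1) + peak(x, a2, MU2)

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 7.0, 400)
y = rng.poisson(model(x, 1000.0, 300.0))   # ion-counting (Poisson) noise
popt, pcov = curve_fit(model, x, y, p0=[900.0, 200.0])
print("intensities:", popt, "1-sigma:", np.sqrt(np.diag(pcov)))
```

Repeating such simulations many times and comparing the spread of fitted intensities with the isolated-ion Poisson expectation is the essence of the statistical-precision assessment.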
Replication Unreliability in Psychology: Elusive Phenomena or “Elusive” Statistical Power?
Tressoldi, Patrizio E.
2012-01-01
The focus of this paper is to analyze whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Under Null Hypothesis Statistical Testing (NHST), still the most widely used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as examples the results of meta-analyses related to four different controversial phenomena - subliminal semantic priming, incubation effect for problem solving, unconscious thought theory, and non-local perception - it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size (ES) of the typical study is low or very low. The low power in most studies undermines the use of NHST to study phenomena with moderate or low ESs. We conclude by providing some suggestions on how to increase the statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or to refute the reality of a phenomenon with small ES. PMID:22783215
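For a concrete sense of the numbers involved, the power of a typical two-group study can be computed directly. The sketch below uses statsmodels with an illustrative sample size and a small expected effect; the values are hypothetical, not the meta-analytic figures of the paper.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical "typical study": n = 20 per group, expected effect size d = 0.25.
power = TTestIndPower().power(effect_size=0.25, nobs1=20, ratio=1.0, alpha=0.05)
print(f"power to detect d = 0.25 with n = 20 per group: {power:.2f}")  # about 0.11
```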
Mysid (Mysidopsis bahia) life-cycle test: Design comparisons and assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lussier, S.M.; Champlin, D.; Kuhn, A.
1996-12-31
This study examines ASTM Standard E1191-90, "Standard Guide for Conducting Life-cycle Toxicity Tests with Saltwater Mysids," 1990, using Mysidopsis bahia, by comparing several test designs to assess growth, reproduction, and survival. The primary objective was to determine the most labor-efficient and statistically powerful test design for the measurement of statistically detectable effects on biologically sensitive endpoints. Five different test designs were evaluated, varying compartment size, number of organisms per compartment, and sex ratio. Results showed that while paired organisms in the ASTM design had the highest rate of reproduction among designs tested, no individual design had greater statistical power to detect differences in reproductive effects. Reproduction was not statistically different between organisms paired in the ASTM design and those with randomized sex ratios using larger test compartments. These treatments had numerically higher reproductive success and lower within-tank replicate variance than treatments using smaller compartments where organisms were randomized, or had a specific sex ratio. In this study, survival and growth were not statistically different among designs tested. Within-tank replicate variability can be reduced by using many exposure compartments with pairs, or few compartments with many organisms in each. While this improves variance within replicate chambers, it does not strengthen the power of detection among treatments in the test. An increase in the number of true replicates (exposure chambers) to eight will have the effect of reducing the percent detectable difference by a factor of two.
Gene Level Meta-Analysis of Quantitative Traits by Functional Linear Models.
Fan, Ruzong; Wang, Yifan; Boehnke, Michael; Chen, Wei; Li, Yun; Ren, Haobo; Lobach, Iryna; Xiong, Momiao
2015-08-01
Meta-analysis of genetic data must account for differences among studies including study designs, markers genotyped, and covariates. The effects of genetic variants may differ from population to population, i.e., heterogeneity. Thus, meta-analysis of combining data of multiple studies is difficult. Novel statistical methods for meta-analysis are needed. In this article, functional linear models are developed for meta-analyses that connect genetic data to quantitative traits, adjusting for covariates. The models can be used to analyze rare variants, common variants, or a combination of the two. Both likelihood-ratio test (LRT) and F-distributed statistics are introduced to test association between quantitative traits and multiple variants in one genetic region. Extensive simulations are performed to evaluate empirical type I error rates and power performance of the proposed tests. The proposed LRT and F-distributed statistics control the type I error very well and have higher power than the existing methods of the meta-analysis sequence kernel association test (MetaSKAT). We analyze four blood lipid levels in data from a meta-analysis of eight European studies. The proposed methods detect more significant associations than MetaSKAT and the P-values of the proposed LRT and F-distributed statistics are usually much smaller than those of MetaSKAT. The functional linear models and related test statistics can be useful in whole-genome and whole-exome association studies. Copyright © 2015 by the Genetics Society of America.
Riley, Richard D.
2017-01-01
An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945
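Riley's Vn has its own derivation and reference distribution; the sketch below shows only the generic leave-one-out idea it builds on, using a fixed-effect summary and hypothetical inputs, and should not be read as the paper's exact statistic.

```python
import numpy as np

def loo_z_scores(y, v):
    """For each study i: summarize the remaining studies by inverse-variance
    weighting, then z-score the left-out estimate against that summary using
    the predictive variance v_i + Var(summary)."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    z = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        w = 1.0 / v[keep]
        mu = np.sum(w * y[keep]) / np.sum(w)
        z.append((y[i] - mu) / np.sqrt(v[i] + 1.0 / np.sum(w)))
    return np.array(z)

# Hypothetical log odds ratios and within-study variances from six studies.
print(loo_z_scores([0.30, 0.12, 0.45, 0.20, 0.05, 0.38],
                   [0.02, 0.05, 0.04, 0.03, 0.06, 0.02]))
```

Large left-out z-scores flag studies for which the summary estimate would not be statistically valid, which is the link to heterogeneity discussed above.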
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However, when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow for the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
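The commensurate-prior machinery elaborates the traditional power prior, whose Gaussian algebra is simple enough to sketch. Below, the historical likelihood is raised to a fixed discounting weight a0; in the paper's adaptive approach the analogous weight is driven by the commensurability of the two data sources rather than fixed by hand. The data and variance are hypothetical.

```python
import numpy as np

def power_prior_posterior(y_hist, y_curr, sigma2, a0):
    """Posterior for a Gaussian mean (known variance sigma2, flat initial
    prior) when the historical likelihood is raised to a0 in [0, 1]:
    a0 = 0 discards the historical data, a0 = 1 pools it fully."""
    n0, n = len(y_hist), len(y_curr)
    mean = (a0 * n0 * np.mean(y_hist) + n * np.mean(y_curr)) / (a0 * n0 + n)
    var = sigma2 / (a0 * n0 + n)
    return mean, var

rng = np.random.default_rng(2)
hist = rng.normal(0.0, 1.0, 50)   # historical controls (hypothetical)
curr = rng.normal(0.4, 1.0, 25)   # current data with drift -> prior-data conflict
for a0 in (0.0, 0.5, 1.0):
    m, v = power_prior_posterior(hist, curr, 1.0, a0)
    print(f"a0 = {a0}: posterior mean {m:+.3f}, sd {np.sqrt(v):.3f}")
```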
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
Tsuboyama, Takahiro; Jost, Gregor; Pietsch, Hubertus; Tomiyama, Noriyuki
2017-09-01
The aim of this study was to compare power versus manual injection in bolus shape and image quality on contrast-enhanced magnetic resonance angiography (CE-MRA). Three types of CE-MRA (head-neck 3-dimensional [3D] MRA with a test-bolus technique, thoracic-abdominal 3D MRA with a bolus-tracking technique, and thoracic-abdominal time-resolved 4-dimensional [4D] MRA) were performed after power and manual injection of gadobutrol (0.1 mmol/kg) at 2 mL/s in 12 pigs (6 sets of power and manual injections for each type of CE-MRA). For the quantitative analysis, the signal-to-noise ratio was measured on ascending aorta, descending aorta, brachiocephalic trunk, common carotid artery, and external carotid artery on the 6 sets of head-neck 3D MRA, and on ascending aorta, descending aorta, brachiocephalic trunk, abdominal aorta, celiac trunk, and renal artery on the 6 sets of thoracic-abdominal 3D MRA. Bolus shapes were evaluated on the 6 sets each of test-bolus scans and 4D MRA. For the qualitative analysis, arterial enhancement, superimposition of nontargeted enhancement, and overall image quality were evaluated on 3D MRA. Visibility of bolus transition was assessed on 4D MRA. Intraindividual comparison between power and manual injection was made by paired t test, Wilcoxon rank sum test, and analysis of variance by ranks. Signal-to-noise ratio on 3D MRA was statistically higher with power injection than with manual injection (P < 0.001). Bolus shapes (test-bolus, 4D MRA) were represented by a characteristic standard bolus curve (sharp first-pass peak followed by a gentle recirculation peak) in all the 12 scans with power injection, but only in 1 of the 12 scans with manual injection. Standard deviations of time-to-peak enhancement were smaller in power injection than in manual injection. Qualitatively, although both injection methods achieved diagnostic quality on 3D MRA, power injection exhibited significantly higher image quality than manual injection (P = 0.001) due to significantly higher arterial enhancement (P = 0.031) and less superimposition of nontargeted enhancement (P = 0.001). Visibility of bolus transition on 4D MRA was significantly better with power injection than with manual injection (P = 0.031). Compared with manual injection, power injection provides more standardized bolus shapes and higher image quality due to higher arterial enhancement and less superimposition of nontargeted vessels.
NASA Astrophysics Data System (ADS)
Besset, M.; Anthony, E.; Sabatier, F.
2016-12-01
The influence of physical processes on river deltas has long been identified, mainly on the basis of delta morphology. A cuspate delta is considered as wave-dominated, a delta with finger-like extensions is characterized as river-dominated, and a delta with estuarine re-entrants is considered tide-dominated (Galloway, 1975). The need for a more quantitative classification is increasingly recognized, and is achievable through quantified combinations, a good example being Syvitski and Saito (2007) wherein the joint influence of marine power - wave and tides - is compared to that of river influence. This need is further justified as deltas become more and more vulnerable. Going forward from the Syvitski and Saito (2007) approach, we confront, from a large database on 60 river deltas, the maximum potential power of waves and the tidal range (both representing marine power), and the specific stream power and river sediment supply reflecting an increasingly human-impacted river influence. The results show that 45 deltas (75%) have levels of marine power that are significantly higher than those of specific stream power. Five deltas have sufficient stream power to counterbalance marine power but a present sediment supply inadequate for them to be statistically considered as river-dominated. Six others have a sufficient sediment supply but a specific stream power that is not high enough for them to be statistically river-dominated. A major manifestation of the interplay of these parameters is accelerated delta erosion worldwide, shifting the balance towards marine power domination. Deltas currently eroding are mainly influenced by marine power (93%), and small deltas (< 300 km2 of deltaic protuberance) are the most vulnerable (82%). These high levels of erosion domination, compounded by accelerated subsidence, are related to human-induced sediment supply depletion and changes in water discharge in the face of the sediment-dispersive capacity of waves and currents.
Cavalcante, Y L; Hauser-Davis, R A; Saraiva, A C F; Brandão, I L S; Oliveira, T F; Silveira, A M
2013-01-01
This paper compared and evaluated seasonal variations in physico-chemical parameters and metals at a hydroelectric power station reservoir by applying Multivariate Analyses and Artificial Neural Networks (ANN) statistical techniques. A Factor Analysis was used to reduce the number of variables: the first factor was composed of elements Ca, K, Mg and Na, and the second by Chemical Oxygen Demand. The ANN showed 100% correct classifications in training and validation samples. Physico-chemical analyses showed that water pH values were not statistically different between the dry and rainy seasons, while temperature, conductivity, alkalinity, ammonia and DO were higher in the dry period. TSS, hardness and COD, on the other hand, were higher during the rainy season. The statistical analyses showed that Ca, K, Mg and Na are directly connected to the Chemical Oxygen Demand, which indicates a possibility of their input into the reservoir system by domestic sewage and agricultural run-offs. These statistical applications, thus, are also relevant in cases of environmental management and policy decision-making processes, to identify which factors should be further studied and/or modified to recover degraded or contaminated water bodies. Copyright © 2012 Elsevier B.V. All rights reserved.
KNOWLEDGE OF PUERPERAL MOTHERS ABOUT THE GUTHRIE TEST.
Arduini, Giovanna Abadia Oliveira; Balarin, Marly Aparecida Spadotto; Silva-Grecco, Roseane Lopes da; Marqui, Alessandra Bernadete Trovó de
2017-01-01
This study aimed to assess the knowledge of puerperal mothers about the Guthrie test. A total of 75 mothers who sought primary care between October 2014 and February 2015 were investigated. The form was applied by the main researcher, and the data were analyzed using descriptive statistics with the Microsoft Office Excel and Statistical Package for the Social Sciences (SPSS) programs. Association tests and statistical power were applied. Among the 75 mothers, 47 (62.7%) would have liked to receive more information about newborn screening, especially regarding the correct sample collection period, followed by the screened morbidities. Most participants (n=55; 85.9%) took their children to be tested between the third and the seventh day after birth, as recommended by the Brazilian Health Ministry. Fifty-four women (72%) were unable to name the morbidities screened by the test in Minas Gerais, and they were also unaware that most have a genetic etiology. The health professional who informed the mother about the Guthrie test was mainly the physician. This information was given prenatally to 57% of the cases, and to 43% at the time of discharge from the hospital. The association test showed that mothers with higher education have more knowledge about the purpose and importance of the Guthrie test. The statistical power was 83.5%. Maternal knowledge about the Guthrie test is superficial and may reflect the health team's usual practice.
Verhoek-Miller, Nancy; Miller, Duane I; Shirachi, Miyoko; Hoda, Nicholas
2002-08-01
Two studies investigated teachers' and principals' power styles as related to college students' retrospective ratings of satisfaction and peers' abusive behavior. One study also investigated retrospective self-perception as related to students' sensitivity to the occurrence of physical and psychological abuse in the school environment. Among the findings were positive correlations between subjects' perceptions that their typical elementary school teacher used referent, legitimate, or expert power styles and subjects' reported satisfaction with their elementary school experience. Small but statistically significant correlations were found suggesting that principals' power style was weakly associated with ratings of psychological abuse in elementary school and physical abuse in middle school. Also, students who rated themselves as intelligent, sensitive, attractive, and depressive had higher ratings of perceived psychological and physical abuse at school. It was concluded that parameters of leaders' power styles and subjects' vigilance might be useful for understanding school climates. Experimentally designed studies are required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst-case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity of finding a radiated power probability density function.
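The two-state Markov chain model for the incident data signal can be sketched directly; the transition probabilities below are illustrative, not values from the report.

```python
import numpy as np

def simulate_bits(n, p01, p10, rng):
    """Two-state Markov chain for a data line: p01 = P(0 -> 1), p10 = P(1 -> 0).
    The stationary probability of a 1 is p01 / (p01 + p10)."""
    bits = np.empty(n, dtype=int)
    bits[0] = rng.random() < p01 / (p01 + p10)   # start from stationarity
    for i in range(1, n):
        p_one = p01 if bits[i - 1] == 0 else 1.0 - p10
        bits[i] = rng.random() < p_one
    return bits

rng = np.random.default_rng(3)
bits = simulate_bits(100_000, 0.3, 0.3, rng)
print("fraction of ones:", bits.mean())   # about 0.5 for symmetric transitions
```

Spectra of windowed pulse trains built from such bit sequences can then be superposed, as the abstract postulates, to bound the radiated power statistically.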
Statistical characteristics of dynamics for population migration driven by the economic interests
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui
2016-06-01
Population migration typically occurs under some constraints, which can deeply affect the structure of a society and some other related aspects. Therefore, it is critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes, or of the relative migration strength introduced here, is dominated by a shifted power-law relation. To reveal the mechanism that creates the aforementioned distributions, a dynamic model is proposed based on the population migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, considering the migration cost as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest that using a power law to fit a distribution may not always be suitable. Additionally, from the modeling framework, one can infer that it is randomness and determinacy that jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, which is evidenced by the increasing structural entropy.
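A shifted power law of this kind can be fitted with ordinary least squares on a parametric form. The functional form (x + x0)^(-gamma) below is a common reading of "shifted power law" and the binned frequencies are hypothetical, standing in for the Yearbook data rather than reproducing the paper's exact model.

```python
import numpy as np
from scipy.optimize import curve_fit

def shifted_power_law(x, c, x0, gamma):
    """p(x) proportional to (x + x0)^(-gamma)."""
    return c * (x + x0) ** (-gamma)

# Hypothetical binned relative-migration-strength frequencies.
x = np.array([1, 2, 4, 8, 16, 32, 64, 128], dtype=float)
y = np.array([0.30, 0.21, 0.13, 0.075, 0.04, 0.02, 0.009, 0.004])
popt, _ = curve_fit(shifted_power_law, x, y, p0=[0.5, 5.0, 1.5])
print("c, x0, gamma =", popt)
```

The shift parameter x0 is what distinguishes this form from a pure power law, which underlies the caution above that pure power-law fits are not always suitable.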
NASA Astrophysics Data System (ADS)
Zhao, H.; Baker, D. N.; Jaynes, A. N.; Li, X.; Kanekal, S. G.; Blum, L. W.; Schiller, Q. A.; Leonard, T. W.; Elkington, S. R.
2017-12-01
The electron energy spectra, as an important characteristic of radiation belt electrons, provide valuable information on the physical mechanisms affecting different electron populations. Based on the measurements of 30 keV - 10 MeV electrons from the MagEIS and REPT instruments on the Van Allen Probes, case studies and statistical analysis of the characterization and evolution of radiation belt electron energy spectra have been performed. Generally, the radiation belt electron energy spectra can be represented by one of three types of distributions: exponential, power law, and bump-on-tail. Statistical analysis shows that exponential spectra are usually dominant in the outer radiation belt; as geomagnetic storms occur, energy spectra in the outer belt soften at first due to injection of lower-energy electrons and loss of higher-energy electrons, and gradually harden due to loss of lower-energy electrons and delayed enhancement of higher-energy electron fluxes. Power law spectra generally dominate the inner belt and the higher L region (L>6) during injections. Bump-on-tail spectra commonly exist inside the plasmasphere following geomagnetic storms and/or compression of the plasmasphere; the energy of the flux maximum is usually 1.8 MeV as the bump-on-tail spectra form and gradually moves to higher energies as the spectra evolve, with the ratio of flux maxima to minima up to >10. A detailed event study indicates that the appearance of bump-on-tail spectra is mainly due to energy-dependent losses caused by plasmaspheric hiss wave scattering, while the disappearance of these spectra can be attributed to fast flux enhancements of lower-energy electrons during storms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunturu, A.K.; Kugler, E.L.; Cropley, J.B.
A statistically designed set of experiments was run in a recycle reactor to evaluate the kinetics of the formation of higher-molecular-weight alcohols (higher alcohols) and total hydrocarbon byproducts from synthesis gas (hydrogen and carbon monoxide) in a range of experimental conditions that mirrors the limits of commercial production. The alkali-promoted, C-supported Co-Mo sulfide catalyst that was employed in this study is well known for its sulfur resistance. The reaction was carried out in a gradientless Berty-type recycle reactor. A two-level fractional-factorial set consisting of 16 experiments was performed. Five independent variables were selected for this study, namely, temperature, partial pressure of carbon monoxide, partial pressure of hydrogen, partial pressure of inerts, and methanol concentration in the feed. The major oxygenated products were linear alcohols up to n-butanol, but alcohols of higher carbon number were also detected, and analysis of the liquid product revealed the presence of trace amounts of ethers also. Yields of hydrocarbons were non-negligible. The alcohol product followed an Anderson-Schultz-Flory distribution. From the results of the factorial experiments, a preliminary power-law model was developed, and the statistically significant variables in the rate expression for the production of each alcohol were found. Based on the results of the power-law models, rate expressions of the Langmuir-Hinshelwood type were fitted. The observed kinetics are consistent with the rate-limiting step for the production of each higher alcohol being a surface reaction of the alcohol of next-lower carbon number. All other steps, including CO-insertion, H2-cleavage, and hydrogenation steps, do not appear to affect the rate correlations.
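The Anderson-Schulz-Flory (ASF) distribution mentioned for the alcohol product has a simple closed form worth stating; the chain-growth probability below is an assumed value for illustration only, not a fitted result from these experiments.

```python
def asf_mole_fraction(n, alpha):
    """Anderson-Schulz-Flory: mole fraction of chain length n is
    x_n = (1 - alpha) * alpha**(n - 1) for chain-growth probability alpha."""
    return (1.0 - alpha) * alpha ** (n - 1)

alpha = 0.3                      # hypothetical chain-growth probability
for n in range(1, 6):            # methanol through pentanol
    print(f"C{n} alcohol mole fraction: {asf_mole_fraction(n, alpha):.3f}")
```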
Gene-environment studies: any advantage over environmental studies?
Bermejo, Justo Lorenzo; Hemminki, Kari
2007-07-01
Gene-environment studies have been motivated by the likely existence of prevalent low-risk genes that interact with common environmental exposures. The present study assessed the statistical advantage of the simultaneous consideration of genes and environment to investigate the effect of environmental risk factors on disease. In particular, we contemplated the possibility that several genes modulate the environmental effect. Environmental exposures, genotypes and phenotypes were simulated according to a wide range of parameter settings. Different models of gene-gene-environment interaction were considered. For each parameter combination, we estimated the probability of detecting the main environmental effect, the power to identify the gene-environment interaction and the frequency of environmentally affected individuals at which environmental and gene-environment studies show the same statistical power. The proportion of cases in the population attributable to the modeled risk factors was also calculated. Our data indicate that environmental exposures with weak effects may account for a significant proportion of the population prevalence of the disease. A general result was that, if the environmental effect was restricted to rare genotypes, the power to detect the gene-environment interaction was higher than the power to identify the main environmental effect. In other words, when few individuals contribute to the overall environmental effect, individual contributions are large and result in easily identifiable gene-environment interactions. Moreover, when multiple genes interacted with the environment, the statistical benefit of gene-environment studies was limited to those studies that included major contributors to the gene-environment interaction. The advantage of gene-environment over plain environmental studies also depends on the inheritance mode of the involved genes, on the study design and, to some extent, on the disease prevalence.
Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph
2015-01-01
We investigated whether low-level processed image properties that are shared by natural scenes and artworks - but not veridical face photographs - affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess - compared to face images - a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope - in contrast to the other tested image properties - did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
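The slope measure used throughout these studies can be computed in a few lines; the implementation details below (FFT size, radial binning, fit range) are reasonable defaults rather than the authors' exact pipeline.

```python
import numpy as np

def fourier_slope(img):
    """Slope of the radially averaged Fourier power spectrum in log-log
    coordinates for a 2-D grayscale array."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    f = np.arange(1, min(h, w) // 2)      # skip DC, stay below Nyquist
    return np.polyfit(np.log(f), np.log(radial[f]), 1)[0]

rng = np.random.default_rng(4)
print(fourier_slope(rng.random((128, 128))))   # white noise: slope near 0
```

Natural scenes give distinctly negative slopes; a relatively shallow (less negative) slope corresponds to the increased high spatial frequency power described above.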
Characterizing Wildfire Regimes and Risk in the USA
NASA Astrophysics Data System (ADS)
Malamud, B. D.; Millington, J. D.; Perry, G. L.
2004-12-01
Over the last decade, high profile wildfires have resulted in numerous fatalities and loss of infrastructure. Wildfires also have a significant impact on climate and ecosystems, with recent authors emphasizing the need for regional-level examinations of wildfire-regime dynamics and change, and the factors driving them. With implications for hazard management, climate studies, and ecosystem research, there is therefore significant interest in appropriate analysis of historical wildfire databases. Insightful studies using wildfire database statistics exist, but are often hampered by the low spatial and/or temporal resolution of their datasets. In this paper, we use a high-resolution dataset consisting of 88,855 USFS wildfires over the time period 1970--2000, and consider wildfire occurrence across the conterminous USA as a function of ecoregion (land units classified by climate, vegetation, and topography), ignition source (anthropogenic vs. lightning), and decade (1970--1979, 1980--1989, 1990--1999). We find that for the conterminous USA (a) wildfires exhibit robust frequency-area power-law behavior in 17 different ecoregions, (b) normalized power-law exponents may be used to compare the scaling of wildfire burned areas between regions, (c) power-law exponents change systematically from east to west, (d) wildfires in 75% of the conterminous USA (particularly the east) have higher power-law exponents for anthropogenic vs. lightning ignition sources, and (e) recurrence intervals for wildfires of a given burned area or larger for each ecoregion can be assessed, allowing for the classification of wildfire regimes for probabilistic hazard estimation in the same vein as is now used for earthquakes. By examining wildfire statistics in a spatially and temporally explicit manner, we are able to present resultant wildfire regime summary statistics and conclusions, along with a probabilistic hazard assessment of wildfire risk at the ecoregion division level across the conterminous USA.
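The frequency-area power-law exponent estimated for each ecoregion can be obtained by maximum likelihood; the synthetic burned areas below are drawn from a known exponent to check the estimator, and are not USFS data.

```python
import numpy as np

def powerlaw_mle(areas, a_min):
    """Continuous power-law exponent by maximum likelihood:
    beta_hat = 1 + n / sum(ln(a / a_min)) for burned areas a >= a_min."""
    a = np.asarray(areas, float)
    a = a[a >= a_min]
    beta = 1.0 + len(a) / np.sum(np.log(a / a_min))
    return beta, (beta - 1.0) / np.sqrt(len(a))   # estimate, standard error

rng = np.random.default_rng(5)
u = rng.random(5000)
areas = 0.1 * (1.0 - u) ** (-1.0 / 0.4)   # inverse-CDF draw, true beta = 1.4
print(powerlaw_mle(areas, 0.1))
```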
Seven ways to increase power without increasing N.
Hansen, W B; Collins, L M
1994-01-01
Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straightforward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
Relative risk estimates from spatial and space-time scan statistics: Are they biased?
Prates, Marcos O.; Kulldorff, Martin; Assunção, Renato M.
2014-01-01
The purely spatial and space-time scan statistics have been successfully used by many scientists to detect and evaluate geographical disease clusters. Although the scan statistic has high power in correctly identifying a cluster, no study has considered the estimates of the cluster relative risk in the detected cluster. In this paper we evaluate whether there is any bias in these estimated relative risks. Intuitively, one may expect that the estimated relative risks have an upward bias, since the scan statistic cherry-picks high-rate areas to include in the cluster. We show that this intuition is correct for clusters with low statistical power, but with medium to high power the bias becomes negligible. The same behaviour is not observed for the prospective space-time scan statistic, where there is an increasing conservative downward bias of the relative risk as the power to detect the cluster increases. PMID:24639031
NASA Technical Reports Server (NTRS)
Mei, Chuh; Dhainaut, Jean-Michel
2000-01-01
The Monte Carlo simulation method in conjunction with the finite element large deflection modal formulation are used to estimate fatigue life of aircraft panels subjected to stationary Gaussian band-limited white-noise excitations. Ten loading cases varying from 106 dB to 160 dB OASPL with bandwidth 1024 Hz are considered. For each load case, response statistics are obtained from an ensemble of 10 response time histories. The finite element nonlinear modal procedure yields time histories, probability density functions (PDF), power spectral densities and higher statistical moments of the maximum deflection and stress/strain. The method of moments of PSD with Dirlik's approach is employed to estimate the panel fatigue life.
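The method of moments of the PSD reduces the response spectrum to a handful of spectral moments, which are the inputs to Dirlik's rainflow approximation. A minimal sketch of the moment computation follows, with an arbitrary flat band-limited PSD standing in for the panel stress response; the level and units are hypothetical.

```python
import numpy as np

def spectral_moments(f, psd, orders=(0, 1, 2, 4)):
    """Moments of a one-sided stress PSD: m_k = integral of f**k * G(f) df."""
    return {k: np.trapz(f ** k * psd, f) for k in orders}

f = np.linspace(1.0, 1024.0, 2048)    # band-limited to 1024 Hz, as in the text
psd = np.full_like(f, 1e-3)           # flat PSD level (hypothetical units)
m = spectral_moments(f, psd)
nu0 = np.sqrt(m[2] / m[0])            # zero up-crossing rate
nup = np.sqrt(m[4] / m[2])            # expected peak rate
print(m, "irregularity factor:", nu0 / nup)
```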
Bień-Barkowska, Katarzyna; Doroszkiewicz, Halina; Bień, Barbara
2017-01-01
The aim of this article was to identify the best predictors of distress suffered by family carers (FCs) of geriatric patients. A cross-sectional study of 100 FC-geriatric patient dyads was conducted. The negative impact of care (NIoC) subscale of the COPE index was dichotomized to identify lower stress (score of ≤15 on the scale) and higher stress (score of ≥16 on the scale) exerted on FCs by the process of providing care. The set of explanatory variables comprised a wide range of sociodemographic and care-related attributes, including patient-related results from comprehensive geriatric assessments and disease profiles. The best combination of explanatory variables that provided the highest predictive power for distress among FCs in the multiple logistic regression (LR) model was determined according to statistical information criteria. The statistical robustness of the observed relationships and the discriminative power of the model were verified with the cross-validation method. The mean age of FCs was 57.2 (±10.6) years, whereas that of geriatric patients was 81.7 (±6.4) years. Despite the broad initial set of potential explanatory variables, only five predictors were jointly selected for the best statistical model. A higher level of distress was independently predicted by lower self-evaluation of health; worse self-appraisal of coping well as a caregiver; lower sense of general support; more hours of care per week; and the motor retardation of the cared-for person measured with the speed of the Timed Up and Go (TUG) test. Worse performance on the TUG test was the only patient-related predictor of distress among the variables examined as contributors to the higher NIoC. Enhancing the mobility of geriatric patients through suitably tailored kinesitherapeutic methods during their hospital stay may mitigate the burden endured by FCs.
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
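A compact version of the proposed approach, written here in Python rather than the R package bmem, nests a percentile-bootstrap test of the indirect effect inside a Monte Carlo loop; the path values and sample size are illustrative.

```python
import numpy as np

def mediation_power(a, b, n, n_sims=300, n_boot=300, alpha=0.05, seed=0):
    """Monte Carlo power of a percentile-bootstrap test of the indirect effect
    a*b in the simple model X -> M -> Y (unit-variance errors, no direct path)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        ab = np.empty(n_boot)
        for j in range(n_boot):
            i = rng.integers(0, n, n)
            a_hat = np.polyfit(x[i], m[i], 1)[0]              # X -> M path
            dm = np.column_stack([np.ones(n), m[i], x[i]])    # Y on M and X
            b_hat = np.linalg.lstsq(dm, y[i], rcond=None)[0][1]
            ab[j] = a_hat * b_hat
        lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        rejections += (lo > 0) or (hi < 0)                    # CI excludes 0
    return rejections / n_sims

print(mediation_power(a=0.39, b=0.39, n=100))
```

Nonnormal data can be accommodated by swapping the error draws for skewed or heavy-tailed generators, which is the scenario the proposed method is designed for.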
Quantum fluctuation theorems and power measurements
NASA Astrophysics Data System (ADS)
Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter
2015-07-01
Work in the paradigm of the quantum fluctuation theorems of Crooks and Jarzynski is determined by projective measurements of energy at the beginning and end of the force protocol. In analogy to classical systems, we consider an alternative definition of work given by the integral of the supplied power determined by integrating up the results of repeated measurements of the instantaneous power during the force protocol. We observe that such a definition of work, in spite of taking account of the process dependence, has different possible values and statistics from the work determined by the conventional two energy measurement approach (TEMA). In the limit of many projective measurements of power, the system’s dynamics is frozen in the power measurement basis due to the quantum Zeno effect leading to statistics only trivially dependent on the force protocol. In general the Jarzynski relation is not satisfied except for the case when the instantaneous power operator commutes with the total Hamiltonian at all times. We also consider properties of the joint statistics of power-based definition of work and TEMA work in protocols where both values are determined. This allows us to quantify their correlations. Relaxing the projective measurement condition, weak continuous measurements of power are considered within the stochastic master equation formalism. Even in this scenario the power-based work statistics is in general not able to reproduce qualitative features of the TEMA work statistics.
The Halo Occupation Distribution of Active Galactic Nuclei
NASA Astrophysics Data System (ADS)
Chatterjee, Suchetana; Nagai, D.; Richardson, J.; Zheng, Z.; Degraf, C.; DiMatteo, T.
2011-05-01
We investigate the halo occupation distribution of active galactic nuclei (AGN) using a state-of-the-art cosmological hydrodynamic simulation that self-consistently incorporates the growth and feedback of supermassive black holes and the physics of galaxy formation (DiMatteo et al. 2008). We show that the mean occupation function can be modeled as a softened step function for central AGN and a power law for the satellite population. The satellite occupation is consistent with weak redshift evolution and a power law index of unity. The number of satellite black holes at a given halo mass follows a Poisson distribution. We show that at low redshifts (z=1.0) feedback from AGN is responsible for higher suppression of black hole growth in higher mass halos. This effect introduces a bias in the correlation between instantaneous AGN luminosity and the host halo mass, making AGN clustering depend weakly on luminosity at low redshifts. We show that the radial distribution of AGN follows a power law, which is fundamentally different from those of galaxies and dark matter. The best-fit power law index is -2.26 ± 0.23. The power law exponent does not show any evolution with redshift, host halo mass, or AGN luminosity within statistical limits. Incorporating the environmental dependence of supermassive black hole accretion and feedback, our formalism provides the most complete theoretical tool for interpreting current and future measurements of AGN clustering.
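The fitted occupation forms have simple closed expressions; the parameter values below are hypothetical placeholders, not the simulation's best-fit numbers.

```python
import numpy as np
from scipy.special import erf

def mean_occupation(m, m_min=1e12, sigma=0.5, m1=5e13, alpha=1.0):
    """Softened step for central AGN plus a power law for satellites."""
    n_cen = 0.5 * (1.0 + erf(np.log10(m / m_min) / sigma))
    n_sat = (m / m1) ** alpha
    return n_cen, n_sat

for m in (1e12, 1e13, 1e14):
    nc, ns = mean_occupation(m)
    print(f"M = {m:.0e}: <N_cen> = {nc:.2f}, <N_sat> = {ns:.3f}")
```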
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Sudeep; Louis, Thibaut; Calabrese, Erminia
2014-04-01
We present the temperature power spectra of the cosmic microwave background (CMB) derived from the three seasons of data from the Atacama Cosmology Telescope (ACT) at 148 GHz and 218 GHz, as well as the cross-frequency spectrum between the two channels. We detect and correct for contamination due to the Galactic cirrus in our equatorial maps. We present the results of a number of tests for possible systematic error and conclude that any effects are not significant compared to the statistical errors we quote. Where they overlap, we cross-correlate the ACT and the South Pole Telescope (SPT) maps and show they are consistent. The measurements of higher-order peaks in the CMB power spectrum provide an additional test of the ΛCDM cosmological model, and help constrain extensions beyond the standard model. The small angular scale power spectrum also provides constraining power on the Sunyaev-Zel'dovich effects and extragalactic foregrounds. We also present a measurement of the CMB gravitational lensing convergence power spectrum at 4.6σ detection significance.
Statistical Power of Psychological Research: What Have We Gained in 20 Years?
ERIC Educational Resources Information Center
Rossi, Joseph S.
1990-01-01
Calculated power for 6,155 statistical tests in 221 journal articles published in 1982 volumes of "Journal of Abnormal Psychology,""Journal of Consulting and Clinical Psychology," and "Journal of Personality and Social Psychology." Power to detect small, medium, and large effects was .17, .57, and .83, respectively. Concluded that power of…
NASA Astrophysics Data System (ADS)
Donges, J. F.; Schleussner, C.-F.; Siegmund, J. F.; Donner, R. V.
2016-05-01
Studying event time series is a powerful approach for analyzing the dynamics of complex dynamical systems in many fields of science. In this paper, we describe the method of event coincidence analysis to provide a framework for quantifying the strength, directionality and time lag of statistical interrelationships between event series. Event coincidence analysis allows one to formulate and test null hypotheses on the origin of the observed interrelationships, including tests based on Poisson processes or, more generally, stochastic point processes with a prescribed inter-event time distribution and other higher-order properties. Applying the framework to country-level observational data yields evidence that flood events have acted as triggers of epidemic outbreaks globally since the 1950s. Facing projected future changes in the statistics of climatic extreme events, statistical techniques such as event coincidence analysis will be relevant for investigating the impacts of anthropogenic climate change on human societies and ecosystems worldwide.
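The basic trigger coincidence rate of event coincidence analysis counts how many events in one series are followed by at least one event of the other within a time window. A minimal sketch with hypothetical event days:

```python
import numpy as np

def coincidence_rate(triggers, responses, delta_t, tau=0.0):
    """Fraction of trigger events with at least one response event in the
    window (t + tau, t + tau + delta_t]."""
    responses = np.asarray(responses, float)
    hits = sum(np.any((responses > t + tau) & (responses <= t + tau + delta_t))
               for t in np.asarray(triggers, float))
    return hits / len(triggers)

rng = np.random.default_rng(6)
floods = np.sort(rng.uniform(0, 3650, 40))                 # hypothetical days
epidemics = np.sort(np.concatenate([floods[:15] + rng.uniform(1, 20, 15),
                                    rng.uniform(0, 3650, 10)]))
print("trigger coincidence rate:", coincidence_rate(floods, epidemics, 30.0))
```

Significance is then assessed against the rate expected under a null model, for example independent Poisson processes with the same event counts.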
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
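As a worked example of the kind of calculation discussed, the group size for a one-sided two-sample comparison can be solved directly; the effect size and error rates below are illustrative choices, not recommendations from the paper.

```python
from statsmodels.stats.power import TTestIndPower

# Animals per group for a one-sided two-sample t-test to detect a one-SD
# difference (d = 1.0) with 80% power at alpha = 0.05.
n = TTestIndPower().solve_power(effect_size=1.0, power=0.8, alpha=0.05,
                                ratio=1.0, alternative='larger')
print(f"required animals per group: {n:.1f}")   # about 13, round up to 14
```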
Current Mode Neutron Noise Measurements in the Zero Power Reactor CROCUS
NASA Astrophysics Data System (ADS)
Pakari, O.; Lamirand, V.; Perret, G.; Braun, L.; Frajtag, P.; Pautz, A.
2018-01-01
The present article is an overview of developments and results regarding neutron noise measurements in current mode at the CROCUS zero power facility. Neutron noise measurements offer a non-invasive method to determine kinetic reactor parameters such as the prompt decay constant at criticality α = βeff / λ, the effective delayed neutron fraction βeff, and the mean generation time λ for code validation efforts. At higher detection rates, i.e. above 2×10⁴ cps in the used configuration at 0.1 W, the previously employed pulse charge amplification electronics with BF3 detectors yielded erroneous results due to dead time effects. Future experimental needs call for higher sensitivity in detectors, higher detection rates or higher reactor powers, and thus a generally more versatile measurement system. We, therefore, explored detectors operated with current mode acquisition electronics to accommodate the need. We approached the matter in two ways: 1) by using the two compensated 10B-coated ionization chambers available in CROCUS as operational monitors; the compensated current signal of these chambers was extracted from core-monitoring output channels; and 2) by developing a new current mode amplification station to be used with other available detectors in core. Characteristics and first noise measurements of the new current system are presented. We implemented post-processing of the current signals from 1) and 2) with the APSD/CPSD method to determine α. At two critical states (0.5 and 1.5 W), using the 10B ionization chambers and their CPSD estimate, the prompt decay constant was measured after 1.5 hours to be α = (156.9 ± 4.3) s⁻¹ (1σ). This result is within 1σ of statistical uncertainties of previous experiments and MCNPv5-1.6 predictions using the ENDF/B-7.1 library. The new system connected to a CFUL01 fission chamber, using the APSD estimate at 100 mW, yielded after 33 min α = (160.8 ± 6.3) s⁻¹, also within 1σ agreement. The improvements to previous neutron noise measurements include shorter measurement durations that can achieve comparable statistical uncertainties, and measurements at higher detection rates.
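The APSD method extracts α from the break frequency of the current noise spectrum. A sketch of that fit follows, assuming the standard point-kinetics Lorentzian shape and synthetic data generated around the reported α of about 157 s⁻¹; the amplitude, background, and noise level are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def apsd(f, A, alpha, B):
    """Point-kinetics APSD of detector current noise: a Lorentzian breaking
    at the prompt decay constant alpha (s^-1), plus a flat background."""
    return A / (alpha ** 2 + (2 * np.pi * f) ** 2) + B

rng = np.random.default_rng(7)
f = np.linspace(1.0, 200.0, 400)                       # Hz
data = apsd(f, 5e5, 157.0, 1.0) * rng.normal(1.0, 0.05, f.size)
popt, pcov = curve_fit(apsd, f, data, p0=[1e5, 100.0, 0.5])
print(f"alpha = {popt[1]:.1f} +/- {np.sqrt(pcov[1, 1]):.1f} s^-1")
```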
Statistical power analysis of cardiovascular safety pharmacology studies in conscious rats.
Bhatt, Siddhartha; Li, Dingzhou; Flynn, Declan; Wisialowski, Todd; Hemkens, Michelle; Steidl-Nichols, Jill
2016-01-01
Cardiovascular (CV) toxicity and related attrition are a major challenge for novel therapeutic entities, and identifying CV liability early is critical for effective derisking. CV safety pharmacology studies in rats are a valuable tool for early investigation of CV risk. Thorough understanding of data analysis techniques and statistical power of these studies is currently lacking and is imperative for enabling sound decision-making. Data from 24 crossover and 12 parallel design CV telemetry rat studies were used for statistical power calculations. Average values of telemetry parameters (heart rate, blood pressure, body temperature, and activity) were logged every 60 s (from 1 h pre-dose to 24 h post-dose) and reduced to 15-min mean values. These data were subsequently binned into super intervals for statistical analysis. A repeated measures analysis of variance was used for statistical analysis of crossover studies and a repeated measures analysis of covariance was used for parallel studies. Statistical power analysis was performed to generate power curves and establish relationships between detectable CV (blood pressure and heart rate) changes and statistical power. Additionally, data from a crossover CV study with phentolamine at 4, 20 and 100 mg/kg are reported as a representative example of data analysis methods. Phentolamine produced a CV profile characteristic of alpha adrenergic receptor antagonism, evidenced by a dose-dependent decrease in blood pressure and reflex tachycardia. Detectable blood pressure changes at 80% statistical power for crossover studies (n=8) were 4-5 mmHg. For parallel studies (n=8), detectable changes at 80% power were 6-7 mmHg. Detectable heart rate changes for both study designs were 20-22 bpm. Based on our results, the conscious rat CV model is a sensitive tool to detect and mitigate CV risk in early safety studies. Furthermore, these results will enable informed selection of appropriate models and study design for early stage CV studies. Copyright © 2016 Elsevier Inc. All rights reserved.
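A simulation-based sketch of how such power curves can be generated, treating the crossover comparison as a simple paired design; the baseline level and within-animal SD are assumed placeholders, not values from the study:

```python
import numpy as np
from scipy.stats import ttest_rel

def paired_power(delta, sd, n, n_sim=5000, alpha=0.05, seed=1):
    # Fraction of simulated studies in which a paired t-test detects delta.
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        baseline = rng.normal(100.0, sd, n)            # vehicle condition
        treated = baseline + rng.normal(delta, sd, n)  # drug condition
        if ttest_rel(treated, baseline).pvalue < alpha:
            hits += 1
    return hits / n_sim

for delta in (2, 4, 6):   # candidate detectable blood pressure changes [mmHg]
    print(delta, paired_power(delta, sd=4.0, n=8))
```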
Cheng, Q R; Shen, H J; Tu, W J; Zhang, Q F; Dong, X
2016-12-02
Objective: To compare brain electrical activity during cognitive tasks and brain development between children aged 7 to 12 years with attention deficit hyperactivity disorder (ADHD) and normal children. Method: A prospective case-control study was used. A total of 110 children with ADHD (63 boys and 47 girls) and 116 normal children (66 boys and 50 girls) were enrolled in this study. The electroencephalogram (EEG) was recorded while attention tasks were conducted; the EEG power was extracted from the original data, and the absolute power (θ, α, β spectra) and relative power (θ/total, α/total, θ/α, θ/β) were comparatively analyzed. Result: (1) Absolute power: θ absolute power in children with ADHD was higher than that of normal children at the Pz lead ((52±28) vs. (40±30) μV², t=3.906, P<0.05), with statistical significance. (2) Relative power: θ/total, θ/α and θ/β in ADHD were higher than in normal children (0.23±0.07 vs. 0.20±0.05, 1.35±0.76 vs. 1.00±0.56, 4.75±2.49 vs. 3.56±2.08; t=2.900, 3.954 and 3.901; P=0.004, 0.000 and 0.000), while α/total in ADHD was lower (0.21±0.09 vs. 0.24±0.10, t=-2.517, P=0.013). (3) The comparative study of the development of the EEG power ratio θ/β showed an age-related correlation in both groups (r=-0.378 and -0.398, P=0.000 for both). Conclusion: The EEG power of children with ADHD in the slow spectrum was higher than that of normal children, and this was more significant in the parietal region than in the frontal region. With increasing age, the θ relative power in both ADHD and normal children gradually declined; in normal children the decline was linearly related to age, but in ADHD there was no significant regularity. θ/β can be used as a sensitive index to assess the cognitive function of children with ADHD.
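For illustration, the absolute and relative band powers analyzed above can be computed from a single channel with a Welch periodogram; the sampling rate and signal below are placeholders:

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(freqs, psd, lo, hi):
    # Integrate the PSD over a frequency band [lo, hi) to get band power.
    m = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[m], freqs[m])

fs = 250.0                                        # sampling rate [Hz], assumed
eeg = np.random.default_rng(0).standard_normal(int(60 * fs))  # placeholder channel

freqs, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))

theta = band_power(freqs, psd, 4.0, 8.0)
alpha = band_power(freqs, psd, 8.0, 13.0)
beta = band_power(freqs, psd, 13.0, 30.0)
total = band_power(freqs, psd, 0.5, 30.0)

print(f"theta/total = {theta / total:.3f}")
print(f"theta/alpha = {theta / alpha:.3f}")
print(f"theta/beta  = {theta / beta:.3f}")
```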
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics, which capture the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
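A generic sketch of the idea (not the authors' implementation): choose bin boundaries over the gray-level histogram by dynamic programming so that the total count-weighted within-bin variance is minimized, in O(K n^2) time for K bins over n levels:

```python
import numpy as np

def dp_binning(levels, counts, n_bins):
    """Bin boundaries minimizing total count-weighted within-bin variance."""
    n = len(levels)
    W = np.concatenate(([0.0], np.cumsum(counts)))
    S1 = np.concatenate(([0.0], np.cumsum(counts * levels)))
    S2 = np.concatenate(([0.0], np.cumsum(counts * levels ** 2)))

    def sse(i, j):  # weighted sum of squared deviations of levels i..j
        w = W[j + 1] - W[i]
        if w <= 0.0:
            return 0.0
        s1 = S1[j + 1] - S1[i]
        return (S2[j + 1] - S2[i]) - s1 * s1 / w

    cost = np.full((n_bins, n), np.inf)
    cut = np.zeros((n_bins, n), dtype=int)
    for j in range(n):
        cost[0, j] = sse(0, j)
    for k in range(1, n_bins):
        for j in range(k, n):
            for i in range(k, j + 1):        # bin k covers levels i..j
                c = cost[k - 1, i - 1] + sse(i, j)
                if c < cost[k, j]:
                    cost[k, j], cut[k, j] = c, i

    bounds, j = [], n - 1                    # backtrack the boundaries
    for k in range(n_bins - 1, 0, -1):
        i = cut[k, j]
        bounds.append(levels[i])
        j = i - 1
    return sorted(bounds)

# Toy example: a bimodal histogram over 256 coarse gray levels, binned to 8.
levels = np.arange(256, dtype=float)
counts = 100.0 * (np.exp(-(levels - 60) ** 2 / 200.0)
                  + np.exp(-(levels - 180) ** 2 / 800.0))
print(dp_binning(levels, counts, n_bins=8))
```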
How Many Studies Do You Need? A Primer on Statistical Power for Meta-Analysis
ERIC Educational Resources Information Center
Valentine, Jeffrey C.; Pigott, Therese D.; Rothstein, Hannah R.
2010-01-01
In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide…
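A hedged sketch of the prospective fixed-effect calculation the article outlines: the combined effect estimate has variance 1/Σ(1/v_i) across the k studies, and power follows from a two-sided normal test (study variances below are illustrative):

```python
import numpy as np
from scipy.stats import norm

def fixed_effect_power(true_effect, variances, alpha=0.05):
    # Standard error of the inverse-variance weighted combined effect.
    se = np.sqrt(1.0 / np.sum(1.0 / np.asarray(variances)))
    z_crit = norm.ppf(1.0 - alpha / 2.0)
    lam = true_effect / se                       # noncentrality parameter
    return norm.sf(z_crit - lam) + norm.cdf(-z_crit - lam)

# Ten studies, each with effect-size variance 0.04 (roughly n = 50 per arm),
# testing a true standardized mean difference of 0.2.
print(f"power: {fixed_effect_power(0.2, [0.04] * 10):.2f}")
```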
Samples in applied psychology: over a decade of research in review.
Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S
2011-09-01
This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved
Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert
2016-01-01
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert
2016-11-28
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.
Felix, Leonardo Bonato; Miranda de Sá, Antonio Mauricio Ferreira Leite; Infantosi, Antonio Fernando Catelli; Yehia, Hani Camille
2007-03-01
The presence of cerebral evoked responses can be tested by using objective response detectors. They are statistical tests that provide a threshold above which responses can be assumed to have occurred. The detection power depends on the signal-to-noise ratio (SNR) of the response and the amount of data available. However, the correlation within the background noise could also affect the power of such detectors. For a fixed SNR, the detection can only be improved at the expense of using a longer stretch of signal. This can constitute a limitation, for instance, in monitored surgeries. Alternatively, multivariate objective response detection (MORD) could be used. This work applies two MORD techniques (multiple coherence and multiple component synchrony measure) to EEG data collected during intermittent photic stimulation. They were evaluated through Monte Carlo simulations, which also allowed us to verify that correlation in the background reduces the detection rate. Considering the N EEG derivations as close as possible to the primary visual cortex, if N = 4, 6 or 8, multiple coherence leads to a statistically significant higher detection rate in comparison with multiple component synchrony measure. With the former, the best performance was obtained with six signals (O1, O2, T5, T6, P3 and P4).
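For context, the univariate building block works as follows: the magnitude-squared coherence between the stimulus and one EEG channel, estimated over M disjoint segments, is compared against the critical value 1 - α^(1/(M-1)) commonly used in this literature. A toy sketch (signals and parameters assumed):

```python
import numpy as np
from scipy.signal import coherence

fs, M, nseg = 256.0, 24, 512          # assumed sampling rate and segmentation
rng = np.random.default_rng(0)
stim = np.sin(2 * np.pi * 8.0 * np.arange(M * nseg) / fs)   # 8 Hz photic drive
eeg = 0.2 * stim + rng.standard_normal(M * nseg)            # noisy response

# M disjoint (non-overlapping) segments of length nseg.
f, Cxy = coherence(stim, eeg, fs=fs, nperseg=nseg, noverlap=0)
threshold = 1.0 - 0.05 ** (1.0 / (M - 1))   # critical value at alpha = 0.05

k = np.argmin(np.abs(f - 8.0))              # bin at the stimulation frequency
print(f"coherence at 8 Hz: {Cxy[k]:.3f}, threshold: {threshold:.3f}")
```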
NASA Astrophysics Data System (ADS)
Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert
2016-11-01
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008-2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0.
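A schematic sketch of the two-step screen-then-test idea (not the published COMBI implementation, which is available in GWASpi 2.0): a linear SVM ranks SNPs by absolute weight, and only the top k are chi-square tested with a correction for k tests. Data, k, and the threshold rule are placeholders:

```python
import numpy as np
from scipy.stats import chi2_contingency
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, p, k = 400, 1000, 20
X = rng.integers(0, 3, size=(n, p)).astype(float)   # genotypes coded 0/1/2
y = rng.integers(0, 2, size=n)                      # case/control phenotype

# Step 1: screen SNPs by absolute SVM weight.
svm = LinearSVC(C=0.1, dual=False).fit(X, y)
candidates = np.argsort(-np.abs(svm.coef_[0]))[:k]

# Step 2: test only the candidates, Bonferroni-corrected for k tests.
for j in candidates:
    table = np.array([[np.sum((X[:, j] == g) & (y == c)) for g in (0, 1, 2)]
                      for c in (0, 1)]) + 1          # +1 avoids empty cells
    p_val = chi2_contingency(table)[1]
    if p_val < 0.05 / k:
        print(f"SNP {j}: p = {p_val:.2e}")
```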
Monitoring Statistics Which Have Increased Power over a Reduced Time Range.
ERIC Educational Resources Information Center
Tang, S. M.; MacNeill, I. B.
1992-01-01
The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)
McBride, Jeffrey M; Kirby, Tyler J; Haines, Tracie L; Skinner, Jared
2010-12-01
The purpose of the current investigation was to determine the relationship between relative net vertical impulse (VI) and jump height in the jump squat (JS) performed to different squat depths and with various loads. Ten males with two years of jumping experience participated in this investigation (Age: 21.8 ± 1.9 y; Height: 176.9 ± 5.2 cm; Body Mass: 79.0 ± 7.1 kg, 1RM: 131.8 ± 29.5 kg, 1RM/BM: 1.66 ± 0.27). Subjects performed a series of static jumps (SJS) and countermovement jumps (CMJJS) with various loads (Body Mass, 20% of 1RM, 40% of 1RM) in a randomized fashion to a depth of 0.15, 0.30, 0.45, 0.60, and 0.75 m and a self-selected depth. During the concentric phase of each JS, peak force (PF), peak power (PP), jump height (JH) and relative VI were recorded and analyzed. Increasing squat depth corresponded to a decrease in PF and an increase in JH and relative VI for both SJS and CMJJS during all loads. Across all squat depths and loading conditions relative VI was statistically significantly correlated with JH in the SJS (r = .8956, P < .0001, power = 1.000) and CMJJS (r = .6007, P < .0001, power = 1.000). Across all squat depths and loading conditions PF was statistically nonsignificantly correlated with JH in the SJS (r = -0.1010, P = .2095, power = 0.2401) and CMJJS (r = -0.0594, P = .4527, power = 0.1131). Across all squat depths and loading conditions peak power (PP) was significantly correlated with JH during both the SJS (r = .6605, P < .0001, power = 1.000) and the CMJJS (r = .6631, P < .0001, power = 1.000). PP was statistically significantly higher at BM in comparison with 20% of 1RM and 40% of 1RM in the SJS and CMJJS across all squat depths. Results indicate that relative VI and PP can be used to predict JS performance, regardless of squat depth and loading condition. However, relative VI may be the best predictor of JS performance, with PF being the worst predictor of JS performance.
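For readers unfamiliar with the impulse-momentum derivation used here: net vertical impulse divided by body mass gives take-off velocity, and jump height follows as v^2/(2g). A sketch with a synthetic force trace (all values illustrative):

```python
import numpy as np
from scipy.integrate import trapezoid

g = 9.81                                       # gravity [m/s^2]
fs = 1000.0                                    # force plate sampling rate [Hz]
mass = 79.0                                    # body mass [kg]

t = np.arange(0.0, 0.4, 1.0 / fs)              # concentric phase, assumed 0.4 s
force = mass * g + 800.0 * np.sin(np.pi * t / 0.4)   # placeholder GRF [N]

net_impulse = trapezoid(force - mass * g, t)   # integral of net force [N*s]
v_takeoff = net_impulse / mass                 # take-off velocity [m/s]
jump_height = v_takeoff ** 2 / (2.0 * g)

print(f"relative net VI: {net_impulse / mass:.2f} m/s")
print(f"predicted jump height: {jump_height:.3f} m")
```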
Statistics based sampling for controller and estimator design
NASA Astrophysics Data System (ADS)
Tenne, Dirk
The purpose of this research is the development of statistical design tools for robust feed-forward/feedback controllers and nonlinear estimators. This dissertation is threefold and addresses the aforementioned topics: nonlinear estimation, target tracking and robust control. To develop statistically robust controllers and nonlinear estimation algorithms, research has been performed to extend existing techniques, which propagate the statistics of the state, to achieve higher order accuracy. The so-called unscented transformation has been extended to capture higher order moments. Furthermore, higher order moment update algorithms based on a truncated power series have been developed. The proposed techniques are tested on various benchmark examples. Furthermore, the unscented transformation has been utilized to develop a three dimensional geometrically constrained target tracker. The proposed planar circular prediction algorithm has been developed in a local coordinate framework, which is amenable to extension of the tracking algorithm to three dimensional space. This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection. This combined prediction can be updated with the subsequent measurement using a linear estimator. The proposed technique is illustrated on a 3D benchmark trajectory, which includes coordinated turns and straight line maneuvers. The third part of this dissertation addresses the design of controllers which include knowledge of parametric uncertainties and their distributions. The parameter distributions are approximated by a finite set of points which are calculated by the unscented transformation. This set of points is used to design robust controllers which minimize a statistical performance measure of the plant over the domain of uncertainty, consisting of a combination of the mean and variance. The proposed technique is illustrated on three benchmark problems. The first relates to the design of prefilters for a linear and a nonlinear spring-mass-dashpot system, and the second applies a feedback controller to a hovering helicopter. Lastly, the statistically robust controller design is applied to a concurrent feed-forward/feedback controller structure for a high-speed low-tension tape drive.
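The standard (second-order) unscented transformation that the dissertation extends can be sketched in a few lines; the example and the κ value are illustrative:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    # Propagate mean and covariance through a nonlinear f via sigma points.
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)
    sigma = np.vstack([mean, mean + L.T, mean - L.T])      # 2n+1 points
    w = np.full(2 * n + 1, 0.5 / (n + kappa))
    w[0] = kappa / (n + kappa)
    y = np.array([f(s) for s in sigma])
    y_mean = w @ y
    y_cov = (w * (y - y_mean).T) @ (y - y_mean)
    return y_mean, y_cov

# Example: polar-to-Cartesian conversion, a classic UT test case.
mean = np.array([1.0, np.pi / 4])             # (range, bearing)
cov = np.diag([0.01, 0.05])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
print(unscented_transform(mean, cov, f, kappa=1.0))
```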
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses
Liu, Ruijie; Holik, Aliaksei Z.; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E.; Asselin-Labat, Marie-Liesse; Smyth, Gordon K.; Ritchie, Matthew E.
2015-01-01
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean–variance relationship of the log-counts-per-million using ‘voom’. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source ‘limma’ package. PMID:25925576
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-09-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage.
NASA Astrophysics Data System (ADS)
Spicher, A.; Miloch, W.; Moen, J. I.; Clausen, L. B. N.
2015-12-01
Small-scale plasma irregularities and turbulence are common phenomena in the F layer of the ionosphere, both in the equatorial and polar regions. Power spectra are a common approach for analyzing data from experiments on space and ionospheric plasma irregularities. Power spectra give no information about the phases of the waveforms, and thus do not allow one to determine whether some of the phases are correlated or whether they exhibit a random character. The former case would imply the presence of nonlinear wave-wave interactions, while the latter suggests a more turbulent-like process. Discerning between these mechanisms is crucial for understanding high latitude plasma irregularities and can be addressed with bispectral analysis and higher order statistics. In this study, we use higher order spectra and statistics to analyze electron density data observed with the ICI-2 sounding rocket experiment at a meter-scale resolution. The main objective of ICI-2 was to investigate plasma irregularities in the cusp in the F layer ionosphere. We study in detail two regions intersected during the rocket flight which are characterized by large density fluctuations: a trailing edge of a cold polar cap patch, and a density enhancement subject to cusp auroral particle precipitation. While these two regions exhibit similar power spectra, our analysis reveals that their internal structure is different. The structures on the edge of the polar cap patch are characterized by significant coherent mode coupling and intermittency, while the plasma enhancement associated with precipitation exhibits stronger random characteristics. This indicates that particle precipitation may play a fundamental role in ionospheric plasma structuring by creating turbulent-like structures.
NASA Technical Reports Server (NTRS)
Melott, A. L.; Buchert, T.; Weiß, A. G.
1995-01-01
We present results showing an improvement of the accuracy of perturbation theory as applied to cosmological structure formation for a useful range of scales. The Lagrangian theory of gravitational instability of Friedmann-Lemaitre cosmogonies is compared with numerical simulations. We study the dynamics of hierarchical models as a second step. In the first step we analyzed the performance of the Lagrangian schemes for pancake models, the difference being that in the latter models the initial power spectrum is truncated. This work probed the quasi-linear and weakly non-linear regimes. We here explore whether the results found for pancake models carry over to hierarchical models which are evolved deeply into the non-linear regime. We smooth the initial data by using a variety of filter types and filter scales in order to determine the optimal performance of the analytical models, as has been done for the 'Zel'dovich-approximation' - hereafter TZA - in previous work. We find that for spectra with negative power-index the second-order scheme performs considerably better than TZA in terms of statistics which probe the dynamics, and slightly better in terms of low-order statistics like the power-spectrum. However, in contrast to the results found for pancake models, where the higher-order schemes get worse than TZA at late non-linear stages and on small scales, we here find that the second-order model is as robust as TZA, retaining the improvement at later stages and on smaller scales. In view of these results we expect that the second-order truncated Lagrangian model is especially useful for the modelling of standard dark matter models such as Hot-, Cold-, and Mixed-Dark-Matter.
Lei, Yi; Li, Jianqiang; Wu, Rui; Fan, Yuting; Fu, Songnian; Yin, Feifei; Dai, Yitang; Xu, Kun
2017-06-01
Based on the observed random fluctuation phenomenon of the speckle pattern across the multimode fiber (MMF) facet and the received optical power distribution across three output ports, we experimentally investigate the statistical characteristics of a 3×3 radio frequency multiple-input multiple-output (MIMO) channel enabled by mode division multiplexing in a conventional 50 µm MMF using non-mode-selective three-dimensional waveguide photonic lanterns as mode multiplexer and demultiplexer. The impacts of mode coupling on the MIMO channel coefficients, channel matrix, and channel capacity have been analyzed over different fiber lengths. The results indicate that spatial multiplexing benefits from greater fiber length with stronger mode coupling, despite a higher optical loss.
Martinez-Murcia, Francisco Jesús; Lai, Meng-Chuan; Górriz, Juan Manuel; Ramírez, Javier; Young, Adam M H; Deoni, Sean C L; Ecker, Christine; Lombardo, Michael V; Baron-Cohen, Simon; Murphy, Declan G M; Bullmore, Edward T; Suckling, John
2017-03-01
Neuroimaging studies have reported structural and physiological differences that could help understand the causes and development of Autism Spectrum Disorder (ASD). Many of them rely on multisite designs, with the recruitment of larger samples increasing statistical power. However, recent large-scale studies have put some findings into question, considering the results to be strongly dependent on the database used, and demonstrating the substantial heterogeneity within this clinically defined category. One major source of variance may be the acquisition of the data in multiple centres. In this work we analysed the differences found in the multisite, multi-modal neuroimaging database from the UK Medical Research Council Autism Imaging Multicentre Study (MRC AIMS) in terms of both diagnosis and acquisition sites. Since the dissimilarities between sites were higher than between diagnostic groups, we developed a technique called Significance Weighted Principal Component Analysis (SWPCA) to reduce the undesired intensity variance due to acquisition site and to increase the statistical power in detecting group differences. After eliminating site-related variance, statistically significant group differences were found, including Broca's area and the temporo-parietal junction. However, discriminative power was not sufficient to classify diagnostic groups, yielding accuracies close to random. Our work supports recent claims that ASD is a highly heterogeneous condition that is difficult to globally characterize by neuroimaging, and therefore different (and more homogenous) subgroups should be defined to obtain a deeper understanding of ASD. Hum Brain Mapp 38:1208-1223, 2017. © 2016 Wiley Periodicals, Inc.
Design of portable ultraminiature flow cytometers for medical diagnostics
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Design of portable microfluidic flow/image cytometry devices for measurements in the field (e.g. initial medical diagnostics) requires careful design in terms of power requirements and weight to allow for realistic portability. True portability with high-throughput microfluidic systems also requires sampling systems without the need for sheath hydrodynamic focusing both to avoid the need for sheath fluid and to enable higher volumes of actual sample, rather than sheath/sample combinations. Weight/power requirements dictate use of super-bright LEDs with top-hat excitation beam architectures and very small silicon photodiodes or nanophotonic sensors that can both be powered by small batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. Microfluidic cytometry also requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically in less than 15 minutes) initial medical decisions for patients in the field. This is not something conventional cytometry traditionally worries about, but is very important for development of small, portable microfluidic devices with small-volume throughputs. It also provides a more reasonable alternative to conventional tubes of blood when sampling geriatric and newborn patients for whom a conventional peripheral blood draw can be problematical. Instead one or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the doctor's office or field.
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of a statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time to event trials although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests. Copyright © 2017 John Wiley & Sons, Ltd.
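As a simple illustration of combined-test inference, SciPy implements Fisher's and Stouffer's combination rules; note these assume independent P values, whereas the paper's method controls the type I error for correlated statistics computed on the same data:

```python
from scipy.stats import combine_pvalues

# Illustrative P values, e.g., from log-rank, Wilcoxon, and Cox-based tests.
p_values = [0.04, 0.20, 0.11]

stat_f, p_fisher = combine_pvalues(p_values, method='fisher')
stat_s, p_stouffer = combine_pvalues(p_values, method='stouffer')
print(f"Fisher:   p = {p_fisher:.4f}")
print(f"Stouffer: p = {p_stouffer:.4f}")
```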
Wicks, J
2000-01-01
The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent-and thus produce a more powerful test of linkage, for ASPs, than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs. PMID:10788332
Wicks, J
2000-06-01
The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent-and thus produce a more powerful test of linkage, for ASPs, than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs.
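For reference, the classic TDT statistic underlying this discussion counts transmissions (b) and non-transmissions (c) of the candidate allele from heterozygous parents; (b - c)^2/(b + c) is asymptotically chi-square with 1 df under the null. A minimal sketch with illustrative counts:

```python
from scipy.stats import chi2

def tdt(b, c):
    # McNemar-type statistic on transmission counts from heterozygous parents.
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, df=1)

stat, p = tdt(b=62, c=38)       # illustrative transmission counts
print(f"TDT = {stat:.2f}, p = {p:.4f}")
```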
Wang, S C; Ding, M M; Wei, X L; Zhang, T; Yao, F
2016-06-01
Objective: To recognize the possibility of Y fragment deletion of the Amelogenin gene intuitively and simply according to the genotyping graphs. By calculating ratios of total peak heights in genotyping graphs, the equilibrium of distributions between the Amelogenin and D3S1358 loci, between the Amelogenin X and Y genes, and among different alleles of the D3S1358 locus was statistically analyzed for 1 968 individuals after amplification with the PowerPlex® 21 detection kit. The sum of peak heights of the Amelogenin X allele was not less than 60% of that of the D3S1358 alleles in 90.8% of female samples, and not higher than 70% of that of the D3S1358 alleles in 94.9% of male samples. The genotyping results after amplification with the PowerPlex® 21 detection kit show that the possibility of Y fragment deletion should be considered when only the Amelogenin X gene is detected and the peak height of the Amelogenin X gene is not higher than 70% of the total peak height of the D3S1358 alleles. Copyright© by the Editorial Department of Journal of Forensic Medicine
Chang, Moon-Young; Kim, Hwan-Hee; Kim, Kyeong-Mi; Oh, Jae-Seop; Jang, Chel; Yoon, Tae-Hyung
2017-01-01
[Purpose] The purpose of this study was to examine what changes occur in brain waves when patients with stroke receive mirror therapy intervention. [Subjects and Methods] The subjects of this study were 14 patients with stroke (6 females and 8 males). The subjects were assessed by measuring the alpha and beta waves of the EEG (QEEG-32 system CANS 3000). The mirror therapy intervention was delivered over the course of four weeks (a total of 20 sessions). [Results] Relative alpha power showed statistically significant differences in the F3, F4, O1, and O2 channels in the comparison between conditions, being higher for hand observation than for mirror observation. Relative beta power showed statistically significant differences in the F3, F4, C3, and C4 channels. [Conclusion] This study analyzed the activity of the brain in each area when patients with stroke observed movements reflected in a mirror; future research on diverse tasks and stimuli to heighten activity of the brain should be carried out. PMID:28210035
Jarosz, Jessica; Mecê, Pedro; Conan, Jean-Marc; Petit, Cyril; Paques, Michel; Meimon, Serge
2017-04-01
We formed a database gathering the wavefront aberrations of 50 healthy eyes measured with an original custom-built Shack-Hartmann aberrometer at a temporal frequency of 236 Hz, with 22 lenslets across a 7-mm diameter pupil, for a duration of 20 s. With this database, we draw statistics on the spatial and temporal behavior of the dynamic aberrations of the eye. Dynamic aberrations were studied on a 5-mm diameter pupil and on a 3.4 s sequence between blinks. We noted that, on average, temporal wavefront variance exhibits an n^-2 power law with radial order n and temporal spectra follow an f^-1.5 power law with temporal frequency f. From these statistics, we then extract guidelines for designing an adaptive optics system. For instance, we show the residual wavefront error evolution as a function of the number of corrected modes and of the adaptive optics loop frame rate. In particular, we infer that adaptive optics performance rapidly increases with the loop frequency up to 50 Hz, with gain being more limited at higher rates.
Jarosz, Jessica; Mecê, Pedro; Conan, Jean-Marc; Petit, Cyril; Paques, Michel; Meimon, Serge
2017-01-01
We formed a database gathering the wavefront aberrations of 50 healthy eyes measured with an original custom-built Shack-Hartmann aberrometer at a temporal frequency of 236 Hz, with 22 lenslets across a 7-mm diameter pupil, for a duration of 20 s. With this database, we draw statistics on the spatial and temporal behavior of the dynamic aberrations of the eye. Dynamic aberrations were studied on a 5-mm diameter pupil and on a 3.4 s sequence between blinks. We noted that, on average, temporal wavefront variance exhibits an n−2 power-law with radial order n and temporal spectra follow an f−1.5 power-law with temporal frequency f. From these statistics, we then extract guidelines for designing an adaptive optics system. For instance, we show the residual wavefront error evolution as a function of the number of corrected modes and of the adaptive optics loop frame rate. In particular, we infer that adaptive optics performance rapidly increases with the loop frequency up to 50 Hz, with gain being more limited at higher rates. PMID:28736657
Chang, Moon-Young; Kim, Hwan-Hee; Kim, Kyeong-Mi; Oh, Jae-Seop; Jang, Chel; Yoon, Tae-Hyung
2017-01-01
[Purpose] The purpose of this study was to examine what changes occur in brain waves when patients with stroke receive mirror therapy intervention. [Subjects and Methods] The subjects of this study were 14 patients with stroke (6 females and 8 males). The subjects were assessed by measuring the alpha and beta waves of the EEG (QEEG-32 system CANS 3000). The mirror therapy intervention was delivered over the course of four weeks (a total of 20 sessions). [Results] Relative alpha power showed statistically significant differences in the F3, F4, O1, and O2 channels in the comparison between conditions, being higher for hand observation than for mirror observation. Relative beta power showed statistically significant differences in the F3, F4, C3, and C4 channels. [Conclusion] This study analyzed the activity of the brain in each area when patients with stroke observed movements reflected in a mirror; future research on diverse tasks and stimuli to heighten activity of the brain should be carried out.
NASA Technical Reports Server (NTRS)
Burnett, T. H.; Dake, S.; Derrickson, J. H.; Fountain, W. F.; Fuki, M.; Gregory, J. C.; Hayashi, T.; Holynski, R.; Iwai, J.; Jones, W. V.
1985-01-01
The composition and energy spectra of charge groups (C - O), (Ne - S), and (Z approximately 17) above 500 GeV/nucleon from the experiments of the JACEE series of balloonborne emulsion chambers are reported. Studies of cosmic ray elemental composition at higher energies provide information on propagation through interstellar space, acceleration mechanisms, and their sources. One of the present interests is the elemental composition at energies above 100 GeV/nucleon. Statistically sufficient data in this energy region can be decisive in judgment of propagation models from the ratios of SECONDARY/PRIMARY and source spectra (acceleration mechanism), as well as speculative contributions of different sources from the ratios of PRIMARY/PRIMARY. At much higher energies, i.e., around 10^15 eV, data from direct observation will give hints on the knee problem, as to whether they favor an escape effect possibly governed by magnetic rigidity above 10^16 eV.
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has a frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has a frequency above 0.2 and the number of variants is above five.
Alonso, Angeles M; Domínguez, Cristina; Guillén, Dominico A; Barroso, Carmelo G
2002-05-22
A new method for measuring the antioxidant power of wine has been developed based on the accelerated electrochemical oxidation of 2,2'-azino-bis(3-ethylbenzthiazoline-6-sulfonic acid) (ABTS). The calibration (R = 0.9922) and repeatability study (RSD = 7%) have provided good statistical parameters. The method is easy and quick to apply and gives reliable results, requiring only the monitoring of time and absorbance. It has been applied to various red and white wines of different origins. The results have been compared with those obtained by the total antioxidant status (TAS) method. Both methods reveal that the more antioxidant wines are those with higher polyphenolic content. From the HPLC study of the polyphenolic content of the same samples, it is confirmed that there is a positive correlation between the resveratrol content of a wine and its antioxidant power.
Siddiqi, Ariba; Arjunan, Sridhar Poosapadi; Kumar, Dinesh Kant
2016-01-01
Age-related neuromuscular change of the Tibialis Anterior (TA) is a leading cause of muscle strength decline among the elderly. This study has established the baseline for age-associated changes in the sEMG of TA at different levels of voluntary contraction. We have investigated the use of Gaussianity and the maximal power of the power spectral density (PSD) as suitable features to identify age-associated changes in the surface electromyogram (sEMG). Eighteen younger (20-30 years) and 18 older (60-85 years) participants completed two trials of isometric dorsiflexion at four different force levels between 10% and 50% of the maximal voluntary contraction. Gaussianity and the maximal power of the PSD of the sEMG were determined. Results show a significant increase in the sEMG's maximal power of the PSD and Gaussianity with increase in force for both cohorts. It was also observed that the older cohort had higher maximal power of the PSD and lower Gaussianity. These age-related differences observed in the PSD and Gaussianity could be due to motor unit remodelling. This can be useful for noninvasive tracking of age-associated neuromuscular changes.
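A hedged sketch of extracting the two features studied above from one epoch; here excess kurtosis stands in as the Gaussianity measure (the paper's exact estimator may differ), and the maximal power is simply the peak of the Welch PSD:

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

fs = 2000.0                                   # sampling rate [Hz], assumed
rng = np.random.default_rng(0)
semg = rng.standard_normal(int(10 * fs))      # placeholder sEMG epoch

excess_kurt = kurtosis(semg, fisher=True)     # near 0 for a Gaussian signal
freqs, psd = welch(semg, fs=fs, nperseg=1024)
max_power = psd.max()

print(f"excess kurtosis: {excess_kurt:.3f}")
print(f"maximal PSD power: {max_power:.3e} (at {freqs[np.argmax(psd)]:.0f} Hz)")
```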
Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz
2015-03-01
FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
Groundwater nitrate contamination: Factors and indicators
Wick, Katharina; Heumesser, Christine; Schmid, Erwin
2012-01-01
Identifying significant determinants of groundwater nitrate contamination is critical in order to define sensible agri-environmental indicators that support the design, enforcement, and monitoring of regulatory policies. We use data from approximately 1200 Austrian municipalities to provide a detailed statistical analysis of (1) the factors influencing groundwater nitrate contamination and (2) the predictive capacity of the Gross Nitrogen Balance, one of the most commonly used agri-environmental indicators. We find that the percentage of cropland in a given region correlates positively with nitrate concentration in groundwater. Additionally, environmental characteristics such as temperature and precipitation are important co-factors. Higher average temperatures result in lower nitrate contamination of groundwater, possibly due to increased evapotranspiration. Higher average precipitation dilutes nitrates in the soil, further reducing groundwater nitrate concentration. Finally, we assess whether the Gross Nitrogen Balance is a valid predictor of groundwater nitrate contamination. Our regression analysis reveals that the Gross Nitrogen Balance is a statistically significant predictor for nitrate contamination. We also show that its predictive power can be improved if we account for average regional precipitation. The Gross Nitrogen Balance predicts nitrate contamination in groundwater more precisely in regions with higher average precipitation. PMID:22906701
Ho, Lindsey A; Lange, Ethan M
2010-12-01
Genome-wide association (GWA) studies are a powerful approach for identifying novel genetic risk factors associated with human disease. A GWA study typically requires the inclusion of thousands of samples to have sufficient statistical power to detect single nucleotide polymorphisms that are associated with only modest increases in risk of disease given the heavy burden of a multiple test correction that is necessary to maintain valid statistical tests. Low statistical power and the high financial cost of performing a GWA study remains prohibitive for many scientific investigators anxious to perform such a study using their own samples. A number of remedies have been suggested to increase statistical power and decrease cost, including the utilization of free publicly available genotype data and multi-stage genotyping designs. Herein, we compare the statistical power and relative costs of alternative association study designs that use cases and screened controls to study designs that are based only on, or additionally include, free public control genotype data. We describe a novel replication-based two-stage study design, which uses free public control genotype data in the first stage and follow-up genotype data on case-matched controls in the second stage that preserves many of the advantages inherent when using only an epidemiologically matched set of controls. Specifically, we show that our proposed two-stage design can substantially increase statistical power and decrease cost of performing a GWA study while controlling the type-I error rate that can be inflated when using public controls due to differences in ancestry and batch genotype effects.
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1 and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically, as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
Powerlaw: a Python package for analysis of heavy-tailed distributions.
Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar
2014-01-01
Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible.
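Basic usage of the package, following the commands described in the paper (the toy data are a Pareto sample):

```python
import numpy as np
import powerlaw

# Toy heavy-tailed sample standing in for empirical data.
data = np.random.default_rng(0).pareto(2.5, size=10_000) + 1.0

fit = powerlaw.Fit(data)                 # xmin is estimated automatically
print(fit.power_law.alpha, fit.power_law.xmin)

# Loglikelihood ratio R and p value comparing power law vs. lognormal:
# R > 0 favors the power law, with significance given by p.
R, p = fit.distribution_compare('power_law', 'lognormal')
print(R, p)
```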
Reconstructing Information in Large-Scale Structure via Logarithmic Mapping
NASA Astrophysics Data System (ADS)
Szapudi, Istvan
We propose to develop a new method to extract information from large-scale structure data combining two-point statistics and non-linear transformations; before, this information was available only with substantially more complex higher-order statistical methods. Initially, most of the cosmological information in large-scale structure lies in two-point statistics. With non-linear evolution, some of that useful information leaks into higher-order statistics. The PI and group have shown in a series of theoretical investigations how that leakage occurs, and explained the Fisher information plateau at smaller scales. This plateau means that even as more modes are added to the measurement of the power spectrum, the total cumulative information (loosely speaking the inverse errorbar) is not increasing. Recently we have shown in Neyrinck et al. (2009, 2010) that a logarithmic (and a related Gaussianization or Box-Cox) transformation on the non-linear Dark Matter or galaxy field reconstructs a surprisingly large fraction of this missing Fisher information of the initial conditions. This was predicted by the earlier wave mechanical formulation of gravitational dynamics by Szapudi & Kaiser (2003). The present proposal is focused on working out the theoretical underpinning of the method to a point that it can be used in practice to analyze data. In particular, one needs to deal with the usual real-life issues of galaxy surveys, such as complex geometry, discrete sampling (Poisson or sub-Poisson noise), bias (linear, or non-linear, deterministic, or stochastic), redshift distortions, projection effects for 2D samples, and the effects of photometric redshift errors. We will develop methods for weak lensing and Sunyaev-Zeldovich power spectra as well, the latter specifically targeting Planck. In addition, we plan to investigate the question of residual higher-order information after the non-linear mapping, and possible applications for cosmology. Our aim will be to work out practical methods, with the ultimate goal of cosmological parameter estimation. We will quantify with standard MCMC and Fisher methods (including the DETF Figure of Merit when applicable) the efficiency of our estimators, comparing with the conventional method, that uses the un-transformed field. Preliminary results indicate that the increase for NASA's WFIRST in the DETF Figure of Merit would be 1.5-4.2 using a range of pessimistic to optimistic assumptions, respectively.
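A toy illustration of the mechanics of the proposed mapping (a sketch only, using a lognormal mock field rather than an N-body density field; for a lognormal field, A = log(1 + δ) re-Gaussianizes exactly, and normalizations are arbitrary):

```python
import numpy as np

def isotropic_pk(field):
    # Spherically averaged power spectrum in grid units (arbitrary norm).
    n = field.shape[0]
    fk = np.fft.rfftn(field) / field.size
    p3d = np.abs(fk) ** 2
    kx = np.fft.fftfreq(n) * n
    kz = np.fft.rfftfreq(n) * n
    kmag = np.sqrt(kx[:, None, None] ** 2
                   + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    kbin = np.rint(kmag).astype(int).ravel()
    pk = np.bincount(kbin, weights=p3d.ravel()) / np.bincount(kbin)
    return pk[1:n // 2]                      # drop k=0, keep resolved modes

rng = np.random.default_rng(0)
g = rng.standard_normal((64, 64, 64))        # Gaussian field stand-in
delta = np.exp(g - g.var() / 2.0) - 1.0      # lognormal overdensity, mean ~ 0

pk_delta = isotropic_pk(delta)
pk_log = isotropic_pk(np.log(1.0 + delta))   # the proposed mapping
print(pk_delta[:5] / pk_log[:5])             # shape comparison of the spectra
```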
Are there gender differences in quality of life and symptomatology between fibromyalgia patients?
Aparicio, Virginia A; Ortega, Francisco B; Carbonell-Baeza, Ana; Femia, Pedro; Tercedor, Pablo; Ruiz, Jonatan R; Delgado-Fernández, Manuel
2012-07-01
The purpose of this study is to examine gender differences in quality of life (QoL) and symptomatology in fibromyalgia (FM) patients. A total of 20 men (48.0 ± 8.0 years) and 78 women (49.8 ± 7.2 years) with FM participated in the study (age range 31-63 years). Health-related QoL and FM impact were assessed by means of the Spanish versions of the Short-Form-36 Health Survey (SF36) and the Fibromyalgia Impact Questionnaire (FIQ), respectively. Comparisons in QoL were performed using one-way analysis of covariance adjusted by age and body mass index (BMI), and comparisons in FIQ dimensions were performed using Mann-Whitney test. Overall FM impact, as measured by FIQ-total score (p = .01) and FIQ-physical impairment (p = .02) was higher in men, whereas women presented higher values of FIQ-fatigue and FIQ-morning tiredness (p = .04) and less SF36-vitality (p = .02). Therefore, women appear to feel more fatigue, whereas men present higher FM overall impact. Due to the small number of men included in this study and the consequent small statistical power, these results should be taken as preliminary. Higher powered studies are warranted to further address gender differences in FM in order to design more successful treatments.
Order statistics applied to the most massive and most distant galaxy clusters
NASA Astrophysics Data System (ADS)
Waizmann, J.-C.; Ettori, S.; Bartelmann, M.
2013-06-01
In this work, we present an analytic framework for calculating the individual and joint distributions of the nth most massive or nth highest redshift galaxy cluster for a given survey characteristic allowing us to formulate Λ cold dark matter (ΛCDM) exclusion criteria. We show that the cumulative distribution functions steepen with increasing order, giving them a higher constraining power with respect to the extreme value statistics. Additionally, we find that the order statistics in mass (being dominated by clusters at lower redshifts) is sensitive to the matter density and the normalization of the matter fluctuations, whereas the order statistics in redshift is particularly sensitive to the geometric evolution of the Universe. For a fixed cosmology, both order statistics are efficient probes of the functional shape of the mass function at the high-mass end. To allow a quick assessment of both order statistics, we provide fits as a function of the survey area that allow percentile estimation with an accuracy better than 2 per cent. Furthermore, we discuss the joint distributions in the two-dimensional case and find that for the combination of the largest and the second largest observation, it is most likely to find them to be realized with similar values with a broadly peaked distribution. When combining the largest observation with higher orders, it is more likely to find a larger gap between the observations and when combining higher orders in general, the joint probability density function peaks more strongly. Having introduced the theory, we apply the order statistical analysis to the South Pole Telescope (SPT) massive cluster sample and the Meta-Catalogue of X-ray detected Clusters of galaxies (MCXC) and find that the 10 most massive clusters in the sample are consistent with ΛCDM and the Tinker mass function. For the order statistics in redshift, we find a discrepancy between the data and the theoretical distributions, which could in principle indicate a deviation from the standard cosmology. However, we attribute this deviation to the uncertainty in the modelling of the SPT survey selection function. In turn, by assuming the ΛCDM reference cosmology, order statistics can also be utilized for consistency checks of the completeness of the observed sample and of the modelling of the survey selection function.
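The order-statistics machinery underlying this framework is compact. The helper below is a generic sketch (not the paper's mass-function pipeline): it evaluates the distribution of the nth largest of N independent draws, using the fact that the nth largest is below x exactly when at most n − 1 draws exceed x.

```python
from scipy.stats import binom

def cdf_nth_largest(F_x, N, n):
    """P(n-th largest of N i.i.d. draws <= x), given the parent CDF value
    F_x = F(x); the count of draws exceeding x is Binomial(N, 1 - F_x)."""
    return binom.cdf(n - 1, N, 1.0 - F_x)

# Example: with N = 1000 draws, the 3rd largest exceeds the parent 99.9th
# percentile with probability 1 - cdf_nth_largest(0.999, 1000, 3).
print(1.0 - cdf_nth_largest(0.999, 1000, 3))
```

With F_x supplied by, e.g., the cumulative cluster mass function above a survey's selection threshold, the same expression yields the exclusion probabilities discussed in the abstract.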
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing a high-dimensional vector H0: θ = 0 against sparse alternatives where the null hypothesis is violated only by a few components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low powers due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing the factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
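In sketch form, the screening construction can be written as below; the threshold rate and constants are illustrative assumptions rather than the paper's exact choices.

```python
import numpy as np

def power_enhancement(theta_hat, se, n_obs):
    """Screening-type power enhancement term (illustrative form): only
    components whose standardized estimates pass a slowly growing threshold
    contribute, so under the null the screened set is empty with high
    probability and the term adds nothing to the test's size."""
    theta_hat, se = np.asarray(theta_hat), np.asarray(se)
    p = len(theta_hat)
    delta = np.log(np.log(n_obs)) * np.sqrt(np.log(p))  # assumed threshold rate
    z = theta_hat / se
    screened = np.abs(z) > delta
    return np.sqrt(p) * np.sum(z[screened] ** 2)
```

The enhanced test then adds this term to an asymptotically pivotal statistic, leaving the null distribution unchanged while diverging under sparse alternatives.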
Detecting rater bias using a person-fit statistic: a Monte Carlo simulation study.
Aubin, André-Sébastien; St-Onge, Christina; Renaud, Jean-Sébastien
2018-04-01
With the Standards voicing concern for the appropriateness of response processes, we need to explore strategies that would allow us to identify inappropriate rater response processes. Although certain statistics can be used to help detect rater bias, their use is complicated by either a lack of data about their actual power to detect rater bias or the difficulty related to their application in the context of health professions education. This exploratory study aimed to establish the worthiness of pursuing the use of lz to detect rater bias. We conducted a Monte Carlo simulation study to investigate the power of a specific detection statistic, namely the standardized likelihood lz person-fit statistic (PFS). Our primary outcome was the detection rate of biased raters, namely raters whom we manipulated into being either stringent (giving lower scores) or lenient (giving higher scores), using the lz statistic while controlling for the number of biased raters in a sample (6 levels) and the rate of bias per rater (6 levels). Overall, stringent raters (M = 0.84, SD = 0.23) were easier to detect than lenient raters (M = 0.31, SD = 0.28). More biased raters were easier to detect than less biased raters (60% bias: M = 0.62, SD = 0.37; 10% bias: M = 0.43, SD = 0.36). The PFS lz seems to offer an interesting potential to identify biased raters. We observed detection rates as high as 90% for stringent raters, for whom we manipulated more than half their checklist. Although we observed very interesting results, we cannot generalize these results to the use of PFS with estimated item/station parameters or real data. Such studies should be conducted to assess the feasibility of using PFS to identify rater bias.
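For dichotomous scores the lz statistic itself is short to compute. The sketch below assumes the model-implied response probabilities p are already available from an IRT/Rasch calibration (estimating them is the harder part in practice):

```python
import numpy as np

def lz_statistic(u, p):
    """Standardized log-likelihood person-fit statistic l_z for a 0/1 score
    vector u and model-implied success probabilities p."""
    u, p = np.asarray(u, float), np.asarray(p, float)
    l0 = np.sum(u * np.log(p) + (1 - u) * np.log(1 - p))    # observed
    e = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))     # expectation
    v = np.sum(p * (1 - p) * np.log(p / (1 - p)) ** 2)      # variance
    return (l0 - e) / np.sqrt(v)

# Large negative l_z flags misfitting (e.g., biased) response patterns.
print(lz_statistic([1, 0, 1, 1, 0], [0.9, 0.2, 0.8, 0.7, 0.4]))
```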
Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell
2014-05-01
The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
Wagner, Tyler; Irwin, Brian J.; Bence, James R.; Hayes, Daniel B.
2016-01-01
Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
Statistical Analysis of Large-Scale Structure of Universe
NASA Astrophysics Data System (ADS)
Tugay, A. V.
While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected reliably only in the most recent works. For example, extragalactic filaments have been traced with velocity fields and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations become available for most of the volume of the Universe, some integral statistical parameters can be used to describe it. Methods such as the galaxy correlation function, power spectrum, statistical moments, and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation, and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of hydrodynamical parameters. As a result, we obtain a power-law relation for the matter power spectrum.
Lee, Bum Ju; Kim, Jong Yeol
2015-09-01
Serum high-density lipoprotein (HDL) and low-density lipoprotein (LDL) cholesterol levels are associated with risk factors for various diseases and are related to anthropometric measures. However, controversy remains regarding the best anthropometric indicators of the HDL and LDL cholesterol levels. The objectives of this study were to identify the best predictors of HDL and LDL cholesterol using statistical analyses and two machine learning algorithms and to compare the predictive power of combined anthropometric measures in Korean adults. A total of 13,014 subjects participated in this study. The anthropometric measures were assessed with binary logistic regression (LR) to evaluate statistically significant differences between the subjects with normal and high LDL cholesterol levels and between the subjects with normal and low HDL cholesterol levels. LR and the naive Bayes algorithm (NB), which provides more reasonable and reliable results, were used in the analyses of the predictive power of individual and combined measures. The best predictor of HDL was the rib to hip ratio (p < 0.0001; odds ratio (OR) = 1.895; area under curve (AUC) = 0.681) in women and the waist to hip ratio (WHR) (p < 0.0001; OR = 1.624; AUC = 0.633) in men. In women, the strongest indicator of LDL was age (p < 0.0001; OR = 1.662; AUC by NB = 0.653; AUC by LR = 0.636). Among the anthropometric measures, the body mass index (BMI), WHR, forehead to waist ratio, forehead to rib ratio, and forehead to chest ratio were the strongest predictors of LDL; these measures had similar predictive powers. The strongest predictor in men was BMI (p < 0.0001; OR = 1.369; AUC by NB = 0.594; AUC by LR = 0.595). The predictive power of almost all individual anthropometric measures was higher for HDL than for LDL, and the predictive power for both HDL and LDL in women was higher than for men. A combination of anthropometric measures slightly improved the predictive power for both HDL and LDL cholesterol. The best indicator for HDL and LDL might differ according to the type of cholesterol and the gender. In women, but not men, age was the variable that strongly predicted HDL and LDL cholesterol levels. Our findings provide new information for the development of better initial screening tools for HDL and LDL cholesterol.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Rong; Li, Yongdong; Liu, Chunliang
2016-07-15
The output power fluctuations caused by weights of macro particles used in particle-in-cell (PIC) simulations of a backward wave oscillator and a travelling wave tube are statistically analyzed. It is found that the velocities of electrons passed a specific slow-wave structure form a specific electron velocity distribution. The electron velocity distribution obtained in PIC simulation with a relatively small weight of macro particles is considered as an initial distribution. By analyzing this initial distribution with a statistical method, estimates of the output power fluctuations caused by different weights of macro particles are obtained. The statistical method is verified by comparing the estimates with the simulation results. The fluctuations become stronger with increasing weight of macro particles, which can also be determined reversely from estimates of the output power fluctuations. With the weights of macro particles optimized by the statistical method, the output power fluctuations in PIC simulations are relatively small and acceptable.
Sassani, Farrokh
2014-01-01
The simulation results for electromagnetic energy harvesters (EMEHs) under broad band stationary Gaussian random excitations indicate the importance of both a high transformation factor and a high mechanical quality factor to achieve favourable mean power, mean square load voltage, and output spectral density. The optimum load is different for random vibrations and for sinusoidal vibration. Reducing the total damping ratio under band-limited random excitation yields a higher mean square load voltage. Reduced bandwidth resulting from decreased mechanical damping can be compensated by increasing the electrical damping (transformation factor) leading to a higher mean square load voltage and power. Nonlinear EMEHs with a Duffing spring and with linear plus cubic damping are modeled using the method of statistical linearization. These nonlinear EMEHs exhibit approximately linear behaviour under low levels of broadband stationary Gaussian random vibration; however, at higher levels of such excitation the central (resonant) frequency of the spectral density of the output voltage shifts due to the increased nonlinear stiffness and the bandwidth broadens slightly. Nonlinear EMEHs exhibit lower maximum output voltage and central frequency of the spectral density with nonlinear damping compared to linear damping. Stronger nonlinear damping yields broader bandwidths at stable resonant frequency. PMID:24605063
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
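The entropy contrast at the heart of such a statistic can be sketched as follows; the frequencies are made up for illustration, and the published test adds a variance normalization that is omitted here.

```python
import numpy as np

def shannon_entropy(freqs):
    """Shannon entropy of an allele/haplotype frequency vector."""
    freqs = np.asarray(freqs, float)
    freqs = freqs[freqs > 0]
    return -np.sum(freqs * np.log(freqs))

# Hypothetical haplotype frequencies in cases vs. controls.
cases = np.array([0.55, 0.25, 0.15, 0.05])
controls = np.array([0.40, 0.30, 0.20, 0.10])
print(shannon_entropy(cases) - shannon_entropy(controls))
```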
The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles
ERIC Educational Resources Information Center
Schimmack, Ulrich
2012-01-01
Cohen (1962) pointed out the importance of statistical power for psychology as a science, but statistical power of studies has not increased, while the number of studies in a single article has increased. It has been overlooked that multiple studies with modest power have a high probability of producing nonsignificant results because power…
The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2010-01-01
This study uses simulation techniques to examine the statistical power of the group- randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models
ERIC Educational Resources Information Center
Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles
2012-01-01
This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
Analysis on development situation and tendency of international Qinghai-Tibet Plateau studies
NASA Astrophysics Data System (ADS)
Wang, X.
2015-12-01
Qinghai-Tibet Plateau is one of the hotspots of international earth science studies. The number of related research papers has proliferated, especially since the 21st century. Using the latest bibliometric indicators, a statistical analysis of the quantity and quality of the Qinghai-Tibet Plateau literature indexed by SCIE between 1900 and 2012 was carried out, focusing on publication years, journals, countries, cities, research institutes, international cooperation, and subjects. Some statistical results are displayed and analyzed in depth using the tools of mapping knowledge domains (MKD) and geographic information systems (GIS). The results of the bibliometric analysis indicate that the publication and citation of QTP research jumped after the start of the 21st century. China, USA, India, Canada and France are the main countries engaged in Qinghai-Tibet Plateau studies. Knowledge mapping shows that the USA, UK and France have long-lasting academic influence in QTP research, indicating that these countries began to study the QTP early and that their papers have long-lasting impacts. On the other hand, China, India and Japan have gained higher academic influence in recent years, indicating that these countries have more publications and higher impacts recently. The disciplines of QTP research mainly focus on geology, geochemistry & geophysics, environmental sciences & ecology, and so on. The spatial analysis indicates that different countries emphasize different disciplines. These results are intended to integrate the new knowledge and reveal the development tendency of QTP research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baers, L.B.; Gutierrez, T.R.; Mendoza, R.A.
1993-08-01
The second (conventional variance or Campbell signal) ⟨x²⟩, the third ⟨x³⟩, and the modified fourth-order ⟨x⁴⟩ − 3⟨x²⟩², etc., central signal moments associated with the amplified (K) and filtered currents [i₁, i₂, x = K·(i₂ − ⟨i₂⟩)] from two electrodes of an ex-core neutron-sensitive fission detector have been measured versus the reactor power of the 1 MW TRIGA reactor in Mexico City. Two channels of a high-speed (400 kHz) multiplexing data sampler and A/D converter with 12-bit resolution and a one-megaword buffer memory were used. The data were further retrieved into a PC, and estimates for auto- and cross-correlation moments up to the fifth order, coherence (⟨i₁i₂⟩/√(⟨i₁²⟩⟨i₂²⟩)), skewness (⟨x³⟩/(√⟨x²⟩)³), excess (⟨x⁴⟩/⟨x²⟩² − 3), and similar quantities were calculated off-line. A five-mode operation of the detector was achieved, including the conventional counting rates and currents, in agreement with the theory and the authors' previous results with analogue techniques. The signals were proportional to the neutron flux and reactor power in some flux ranges. The suppression of background noise is improved and the lower limit of the measurement range is extended as the order of the moment is increased, in agreement with the theory. On the other hand, the statistical uncertainty increases. At increasing flux levels it was statistically more difficult to obtain flux estimates based on the higher-order (≥3) moments.
Fractal properties of background noise and target signal enhancement using CSEM data
NASA Astrophysics Data System (ADS)
Benavides, Alfonso; Everett, Mark E.; Pierce, Carl; Nguyen, Cam
2003-09-01
Controlled-source electromagnetic (CSEM) spatial profiles and 2-D conductivity maps were obtained on the Brazos Valley, TX floodplain to study the fractal statistics of geological signals and the effects of man-made conductive targets using Geonics EM34, EM31 and EM63 instruments. Using target-free areas, a consistent power-law power spectrum (|A(k)| ∼ k^−β) for the profiles was found, with β values typical of fractional Brownian motion (fBm). This means that the spatial variation of conductivity does not correspond to Gaussian statistics; rather, there are spatial correlations at different scales. The presence of targets tends to flatten the power-law power spectrum (PS) at small wavenumbers. Detection and localization of targets can be achieved using the short-time Fourier transform (STFT). The presence of targets is enhanced because the signal energy is spread to higher wavenumbers (small scale numbers) at the positions occupied by the targets. In the case of poor spatial sampling or a small amount of data, the information available from the power spectrum is not enough to separate spatial correlations from target signatures. Advantages are gained by using the spatial correlations of the fBm in order to reject the background response and to enhance the signals from highly conductive targets. This approach was tested for the EM31 using a pre-processing step that combines apparent conductivity readings from two perpendicular transmitter-receiver orientations at each station. The response obtained using time-domain CSEM is influenced to a lesser degree by geological noise, and the target response can be processed to recover target features. The homotopy method is proposed to solve the inverse problem using a set of possible target models and a dynamic library of responses used to optimize the starting model.
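Both ingredients of this analysis, a global power-law slope fit and a windowed transform that localizes targets, can be sketched as follows. The synthetic profile, the exponent β = 1.8, and the high-wavenumber band edge are illustrative assumptions.

```python
import numpy as np
from scipy.signal import welch, stft

rng = np.random.default_rng(3)
n, beta = 4096, 1.8                        # assumed background exponent

# Synthesize an fBm-like profile whose power spectrum follows k^(-beta).
spec = np.fft.rfft(rng.standard_normal(n))
k = np.arange(1, len(spec))
spec[1:] /= k ** (beta / 2.0)
spec[0] = 0.0
profile = np.fft.irfft(spec, n)

# Global slope estimate from the Welch power spectrum.
f, S = welch(profile, nperseg=1024)
slope = -np.polyfit(np.log(f[1:]), np.log(S[1:]), 1)[0]
print(f"recovered beta ~ {slope:.2f}")

# STFT: targets would show up as energy spread to higher wavenumbers
# at their along-profile positions (band edge 0.25 is illustrative).
f2, x2, Z = stft(profile, nperseg=256)
band_energy = (np.abs(Z[f2 > 0.25, :]) ** 2).sum(axis=0)
print(band_energy.argmax())                # window with most high-k energy
```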
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
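One convenient way to realize the variance-to-mean power relationship is through a gamma interval family, as in the sketch below; the gamma choice and parameter values are illustrative, one of several families compatible with the relationship.

```python
import numpy as np

def sample_isis(rate, n, scale=0.05, exponent=1.5, rng=None):
    """Draw interspike intervals with Var = scale * Mean**exponent by
    matching the first two moments of a gamma distribution."""
    rng = rng or np.random.default_rng()
    mean = 1.0 / rate
    var = scale * mean ** exponent
    shape = mean ** 2 / var                # gamma shape parameter
    theta = var / mean                     # gamma scale parameter
    return rng.gamma(shape, theta, size=n)

isis = sample_isis(rate=10.0, n=50_000)
print(isis.mean(), isis.var())             # check the imposed relation
```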
Joint probability of statistical success of multiple phase III trials.
Zhang, Jianliang; Zhang, Jenny J
2013-01-01
In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k₀ (k₀ ≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
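A small Monte Carlo makes the correlation argument concrete: both future trials share the same uncertain true effect inferred from the earlier data, so the joint predictive power exceeds the naive product of the individual predictive powers. All numbers below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
d_hat, se2 = 0.30, 0.10   # phase II effect estimate and its standard error
se3 = 0.08                # standard error of each phase III trial
z_crit = 1.96

# Draw the uncertain true effect once per replicate; both trials share it.
sims = 200_000
true_d = rng.normal(d_hat, se2, sims)
win1 = rng.normal(true_d, se3) / se3 > z_crit
win2 = rng.normal(true_d, se3) / se3 > z_crit

print("predictive power, one trial:", win1.mean())
print("naive product:              ", win1.mean() * win2.mean())
print("joint predictive power:     ", (win1 & win2).mean())
```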
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
Statistical inference methods for two crossing survival curves: a comparison of methods.
Li, Huimin; Han, Dong; Hou, Yawen; Chen, Huilin; Chen, Zheng
2015-01-01
A common problem that is encountered in medical applications is the overall homogeneity of survival distributions when two survival curves cross each other. A survey demonstrated that under this condition, which was an obvious violation of the assumption of proportional hazard rates, the log-rank test was still used in 70% of studies. Several statistical methods have been proposed to solve this problem. However, in many applications, it is difficult to specify the types of survival differences and choose an appropriate method prior to analysis. Thus, we conducted an extensive series of Monte Carlo simulations to investigate the power and type I error rate of these procedures under various patterns of crossing survival curves with different censoring rates and distribution parameters. Our objective was to evaluate the strengths and weaknesses of tests in different situations and for various censoring rates and to recommend an appropriate test that will not fail for a wide range of applications. Simulation studies demonstrated that adaptive Neyman's smooth tests and the two-stage procedure offer higher power and greater stability than other methods when the survival distributions cross at early, middle or late times. Even for proportional hazards, both methods maintain acceptable power compared with the log-rank test. In terms of the type I error rate, Renyi and Cramér-von Mises tests are relatively conservative, whereas the statistics of the Lin-Xu test exhibit apparent inflation as the censoring rate increases. Other tests produce results close to the nominal 0.05 level. In conclusion, adaptive Neyman's smooth tests and the two-stage procedure are found to be the most stable and feasible approaches for a variety of situations and censoring rates. Therefore, they are applicable to a wider spectrum of alternatives compared with other tests.
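A minimal version of such a power simulation under crossing hazards, restricted to the log-rank test and with illustrative Weibull parameters, might look like this (using the lifelines package; censoring is omitted for brevity):

```python
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(5)

def crossing_sample(n):
    # Weibull times with different shape parameters yield survival
    # curves that cross; parameters are illustrative.
    a = 1.0 * rng.weibull(0.8, n)
    b = 1.2 * rng.weibull(1.6, n)
    return a, b

reps, hits = 1000, 0
for _ in range(reps):
    a, b = crossing_sample(100)
    hits += logrank_test(a, b).p_value < 0.05
print("log-rank power under crossing hazards:", hits / reps)
```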
Nasseri, Simin; Monazzam, Mohammadreza; Beheshti, Meisam; Zare, Sajad; Mahvi, Amirhosein
2013-12-20
New environmental pollutants interfere with the environment and human life along with technology development. One of these pollutants is the electromagnetic field. This study determines the vertical microwave radiation pattern of different types of Base Transceiver Station (BTS) antennae in the Hashtgerd city, the capital of Savojbolagh County, Alborz Province of Iran. The basic data, including the geographical location of the BTS antennae in the city, brand, operator type, installation and height, were collected from the radio communication office, and the measurements were then carried out according to IEEE STD 95.1 with the SPECTRAN 4060. The statistical analyses were carried out in SPSS 16 using the Kolmogorov-Smirnov test and multiple regression. Results indicated that for both operators, Irancell and Hamrah-e-Aval (First Operator), the power density rose with an increase in measurement height or a decrease in the vertical distance from the broadcasting antenna. With a mixed-model test, a significant statistical relationship was observed between measurement height and average power density for both operators. With increasing measurement height, power density increased for both operators. The study showed that installing antennae in a crowded area needs more care because of higher radiation emission. More rigid surfaces and mobile users are two important factors in crowded areas that can increase wave density and hence raise public microwave exposure.
Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.
Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E
2015-09-03
Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
Statistical Measurement of the Gamma-Ray Source-count Distribution as a Function of Energy
NASA Astrophysics Data System (ADS)
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; Fornengo, Nicolao; Regis, Marco
2016-08-01
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. We employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. The index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of 2.2^{+0.7}_{-0.3} in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain 83^{+7}_{-13}% (81^{+52}_{-19}%) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). The method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
Improving accuracy and power with transfer learning using a meta-analytic database.
Schwartz, Yannick; Varoquaux, Gaël; Pallier, Christophe; Pinel, Philippe; Poline, Jean-Baptiste; Thirion, Bertrand
2012-01-01
Typical cohorts in brain imaging studies are not large enough for systematic testing of all the information contained in the images. To build testable working hypotheses, investigators thus rely on analysis of previous work, sometimes formalized in a so-called meta-analysis. In brain imaging, this approach underlies the specification of regions of interest (ROIs) that are usually selected on the basis of the coordinates of previously detected effects. In this paper, we propose to use a database of images, rather than coordinates, and frame the problem as transfer learning: learning a discriminant model on a reference task to apply it to a different but related new task. To facilitate statistical analysis of small cohorts, we use a sparse discriminant model that selects predictive voxels on the reference task and thus provides a principled procedure to define ROIs. The benefits of our approach are twofold. First it uses the reference database for prediction, i.e., to provide potential biomarkers in a clinical setting. Second it increases statistical power on the new task. We demonstrate on a set of 18 pairs of functional MRI experimental conditions that our approach gives good prediction. In addition, on a specific transfer situation involving different scanners at different locations, we show that voxel selection based on transfer learning leads to higher detection power on small cohorts.
ERIC Educational Resources Information Center
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
ERIC Educational Resources Information Center
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben
2016-01-01
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Pasaniuc, Bogdan; Zaitlen, Noah; Lettre, Guillaume; Chen, Gary K; Tandon, Arti; Kao, W H Linda; Ruczinski, Ingo; Fornage, Myriam; Siscovick, David S; Zhu, Xiaofeng; Larkin, Emma; Lange, Leslie A; Cupples, L Adrienne; Yang, Qiong; Akylbekova, Ermeg L; Musani, Solomon K; Divers, Jasmin; Mychaleckyj, Joe; Li, Mingyao; Papanicolaou, George J; Millikan, Robert C; Ambrosone, Christine B; John, Esther M; Bernstein, Leslie; Zheng, Wei; Hu, Jennifer J; Ziegler, Regina G; Nyante, Sarah J; Bandera, Elisa V; Ingles, Sue A; Press, Michael F; Chanock, Stephen J; Deming, Sandra L; Rodriguez-Gil, Jorge L; Palmer, Cameron D; Buxbaum, Sarah; Ekunwe, Lynette; Hirschhorn, Joel N; Henderson, Brian E; Myers, Simon; Haiman, Christopher A; Reich, David; Patterson, Nick; Wilson, James G; Price, Alkes L
2011-04-01
While genome-wide association studies (GWAS) have primarily examined populations of European ancestry, more recent studies often involve additional populations, including admixed populations such as African Americans and Latinos. In admixed populations, linkage disequilibrium (LD) exists both at a fine scale in ancestral populations and at a coarse scale (admixture-LD) due to chromosomal segments of distinct ancestry. Disease association statistics in admixed populations have previously considered SNP association (LD mapping) or admixture association (mapping by admixture-LD), but not both. Here, we introduce a new statistical framework for combining SNP and admixture association in case-control studies, as well as methods for local ancestry-aware imputation. We illustrate the gain in statistical power achieved by these methods by analyzing data of 6,209 unrelated African Americans from the CARe project genotyped on the Affymetrix 6.0 chip, in conjunction with both simulated and real phenotypes, as well as by analyzing the FGFR2 locus using breast cancer GWAS data from 5,761 African-American women. We show that, at typed SNPs, our method yields an 8% increase in statistical power for finding disease risk loci compared to the power achieved by standard methods in case-control studies. At imputed SNPs, we observe an 11% increase in statistical power for mapping disease loci when our local ancestry-aware imputation framework and the new scoring statistic are jointly employed. Finally, we show that our method increases statistical power in regions harboring the causal SNP in the case when the causal SNP is untyped and cannot be imputed. Our methods and our publicly available software are broadly applicable to GWAS in admixed populations.
Galaxy bias and primordial non-Gaussianity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Assassi, Valentin; Baumann, Daniel; Schmidt, Fabian, E-mail: assassi@ias.edu, E-mail: D.D.Baumann@uva.nl, E-mail: fabians@MPA-Garching.MPG.DE
2015-12-01
We present a systematic study of galaxy biasing in the presence of primordial non-Gaussianity. For a large class of non-Gaussian initial conditions, we define a general bias expansion and prove that it is closed under renormalization, thereby showing that the basis of operators in the expansion is complete. We then study the effects of primordial non-Gaussianity on the statistics of galaxies. We show that the equivalence principle enforces a relation between the scale-dependent bias in the galaxy power spectrum and that in the dipolar part of the bispectrum. This provides a powerful consistency check to confirm the primordial origin of any observed scale-dependent bias. Finally, we also discuss the imprints of anisotropic non-Gaussianity as motivated by recent studies of higher-spin fields during inflation.
Kinetic and kinematic differences between squats performed with and without elastic bands.
Israetel, Michael A; McBride, Jeffrey M; Nuzzo, James L; Skinner, Jared W; Dayne, Andrea M
2010-01-01
The purpose of this investigation was to compare kinetic and kinematic variables between squats performed with and without elastic bands equalized for total work. Ten recreationally weight trained males completed 1 set of 5 squats without (Wht) and with (Band) elastic bands as resistance. Squats were completed while standing on a force platform with bar displacement measured using 2 potentiometers. Electromyography (EMG) was obtained from the vastus lateralis. Average force-time, velocity-time, power-time, and EMG-time graphs were generated and statistically analyzed for mean differences in values between the 2 conditions during the eccentric and concentric phases. The Band condition resulted in significantly higher forces in comparison to the Wht condition during the first 25% of the eccentric phase and the last 10% of the concentric phase (p ≤ 0.05). However, the Wht condition resulted in significantly higher forces during the last 5% of the eccentric phase and the first 5% of the concentric phase in comparison to the Band condition. The Band condition resulted in significantly higher power and velocity values during the first portion of the eccentric phase and the latter portion of the concentric phase. Vastus lateralis muscle activity during the Band condition was significantly greater during the first portion of the eccentric phase and latter portion of the concentric phase as well. This investigation indicates that squats equalized for total work with and without elastic bands significantly alter the force-time, power-time, velocity-time, and EMG-time curves associated with the movements. Specifically, elastic bands seem to increase force, power, and muscle activity during the early portions of the eccentric phase and latter portions of the concentric phase.
Uddameri, Venkatesh; Singaraju, Sreeram; Hernandez, E Annette
2018-02-21
Seasonal and cyclic trends in nutrient concentrations at four agricultural drainage ditches were assessed using a dataset generated from a multivariate, multiscale, multiyear water quality monitoring effort in the agriculturally dominant Lower Rio Grande Valley (LRGV) River Watershed in South Texas. An innovative bootstrap sampling-based power analysis procedure was developed to evaluate the ability of Mann-Whitney and Noether tests to discern trends and to guide future monitoring efforts. The Mann-Whitney U test was able to detect significant changes between summer and winter nutrient concentrations at sites with lower depths and unimpeded flows. Pollutant dilution, non-agricultural loadings, and in-channel flow structures (weirs) masked the effects of seasonality. The detection of cyclical trends using the Noether test was highest in the presence of vegetation mainly for total phosphorus and oxidized nitrogen (nitrite + nitrate) compared to dissolved phosphorus and reduced nitrogen (total Kjeldahl nitrogen-TKN). Prospective power analysis indicated that while increased monitoring can lead to higher statistical power, the effect size (i.e., the total number of trend sequences within a time-series) had a greater influence on the Noether test. Both Mann-Whitney and Noether tests provide complementary information on seasonal and cyclic behavior of pollutant concentrations and are affected by different processes. The results from these statistical tests when evaluated in the context of flow, vegetation, and in-channel hydraulic alterations can help guide future data collection and monitoring efforts. The study highlights the need for long-term monitoring of agricultural drainage ditches to properly discern seasonal and cyclical trends.
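The bootstrap-style prospective power calculation can be sketched as follows; the resampling scheme and the lognormal placeholder concentrations are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)

def bootstrap_power(summer, winter, n_future, reps=2000, alpha=0.05):
    """Resample observed seasonal concentrations to a planned sample size
    and record how often the Mann-Whitney test rejects."""
    hits = 0
    for _ in range(reps):
        s = rng.choice(summer, n_future, replace=True)
        w = rng.choice(winter, n_future, replace=True)
        hits += mannwhitneyu(s, w, alternative="two-sided").pvalue < alpha
    return hits / reps

# Placeholder concentrations (e.g., mg/L) standing in for monitoring data.
summer = rng.lognormal(0.0, 0.5, 40)
winter = rng.lognormal(0.4, 0.5, 40)
print(bootstrap_power(summer, winter, n_future=60))
```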
Analysis of postoperative complications for superficial liposuction: a review of 2398 cases.
Kim, Youn Hwan; Cha, Sang Myun; Naidu, Shenthilkumar; Hwang, Weon Jung
2011-02-01
Superficial liposuction has found its application in maximizing and creating a lifting effect to achieve a better aesthetic result. Due to initial high complication rates, these procedures were generally accepted as risky. In a response to the increasing concerns over the safety and efficacy of superficial liposuction, the authors describe their 14-year experience of performing superficial liposuction and analysis of postoperative complications associated with these procedures. From March of 1995 to December of 2008, the authors performed superficial liposuction on 2398 patients. Three subgroups were incorporated according to liposuction methods as follows: power-assisted liposuction alone (subgroup 1), power-assisted liposuction combined with ultrasound energy (subgroup 2), and power-assisted liposuction combined with external ultrasound and postoperative Endermologie (subgroup 3). Statistical analyses for complications were performed among subgroups. The mean age was 42.8 years, mean body mass index was 27.9 kg/m2, and mean volume of total aspiration was 5045 cc. Overall complication rate was 8.6 percent (206 patients). Four cases of skin necroses and two cases of infections were included. The most common complication was postoperative contour irregularity. Power-assisted liposuction combined with external ultrasound with or without postoperative Endermologie was seen to decrease the overall complication rate, contour irregularity, and skin necrosis. There were no statistical differences regarding other complications. Superficial liposuction has potential risks for higher complications compared with conventional suction techniques, especially postoperative contour irregularity, which can be minimized with proper selection of candidates for the procedure, avoiding overzealous suctioning of superficial layer, and using a combination of ultrasound energy techniques.
Testing parity-violating physics from cosmic rotation power reconstruction
Namikawa, Toshiya
2017-02-22
We study the reconstruction of the cosmic rotation power spectrum produced by parity-violating physics, with an eye to ongoing and near-future cosmic microwave background (CMB) experiments such as BICEP Array, CMBS4, LiteBIRD and Simons Observatory. In addition to the inflationary gravitational waves and gravitational lensing, measurements of various other effects on CMB polarization open a new window into the early Universe. One of these is anisotropy of the cosmic polarization rotation, which probes the Chern-Simons term generally predicted by string theory. Anisotropies of the cosmic rotation are also generated by primordial magnetism and in the Standard Model extension framework. The cosmic rotation anisotropies can be reconstructed as quadratic in CMB anisotropies. However, the power of the reconstructed cosmic rotation is a CMB four-point correlation and is not directly related to the cosmic-rotation power spectrum. Understanding all contributions in the four-point correlation is required to extract the cosmic rotation signal. Here, assuming inflationary-motivated cosmic-rotation models, we employ simulation to quantify each contribution to the four-point correlation and find that (1) a secondary contraction of the trispectrum increases the total signal-to-noise, (2) a bias from the lensing-induced trispectrum is significant compared to the statistical errors in, e.g., LiteBIRD and CMBS4-like experiments, (3) the use of a realization-dependent estimator decreases the statistical errors by 10%-20%, depending on experimental specifications, and (4) other higher-order contributions are negligible at least for near-future experiments.
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
Decoding English Alphabet Letters Using EEG Phase Information
Wang, YiYan; Wang, Pingxiao; Yu, Yuguo
2018-01-01
Increasing evidence indicates that the phase pattern and power of the low frequency oscillations of brain electroencephalograms (EEG) contain significant information during the human cognition of sensory signals such as auditory and visual stimuli. Here, we investigate whether and how the letters of the alphabet can be directly decoded from EEG phase and power data. In addition, we investigate how different band oscillations contribute to the classification and determine the critical time periods. An English letter recognition task was assigned, and statistical analyses were conducted to decode the EEG signal corresponding to each letter visualized on a computer screen. We applied support vector machine (SVM) with gradient descent method to learn the potential features for classification. It was observed that the EEG phase signals have a higher decoding accuracy than the oscillation power information. Low-frequency theta and alpha oscillations have phase information with higher accuracy than do other bands. The decoding performance was best when the analysis period began from 180 to 380 ms after stimulus presentation, especially in the lateral occipital and posterior temporal scalp regions (PO7 and PO8). These results may provide a new approach for brain-computer interface techniques (BCI) and may deepen our understanding of EEG oscillations in cognition. PMID:29467615
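A linear SVM trained by stochastic gradient descent on phase features can be sketched as follows; the array shapes and the sine/cosine encoding of circular phases are assumptions for illustration, and random data stand in for EEG.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)

# Stand-in data: 20 trials per letter, channels x time phase features.
phases = rng.uniform(-np.pi, np.pi, (26 * 20, 64 * 20))
X = np.hstack([np.sin(phases), np.cos(phases)])   # circular -> rectilinear
y = np.repeat(np.arange(26), 20)                  # letter labels

# Hinge loss + SGD is a linear SVM fit by gradient descent.
clf = make_pipeline(StandardScaler(),
                    SGDClassifier(loss="hinge", alpha=1e-4, max_iter=2000))
print(cross_val_score(clf, X, y, cv=5).mean())    # chance level is 1/26 here
```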
NASA Astrophysics Data System (ADS)
Hu, Dewen; Wang, Yucheng; Liu, Yadong; Li, Ming; Liu, Fayi
2010-05-01
An automated method is presented for artery-vein separation in cerebral cortical images recorded with optical imaging of the intrinsic signal. The vessel-type separation method is based on the fact that the spectral distribution of intrinsic physiological oscillations varies from arterial regions to venous regions. In arterial regions, the spectral power is higher in the heartbeat frequency (HF), whereas in venous regions, the spectral power is higher in the respiration frequency (RF). The separation method was begun by extracting the vascular network and its centerline. Then the spectra of the optical intrinsic signals were estimated by the multitaper method. A standard F-test was performed on each discrete frequency point to test the statistical significance at the given level. Four periodic physiological oscillations were examined: HF, RF, and two other eigenfrequencies termed F1 and F2. The separation of arteries and veins was implemented with the fuzzy c-means clustering method and the region-growing approach by utilizing the spectral amplitudes and power-ratio values of the four eigenfrequencies on the vasculature. Subsequently, independent spectral distributions in the arteries, veins, and capillary bed were estimated for comparison, which showed that the spectral distributions of the intrinsic signals were very distinct between the arterial and venous regions.
Effects of cold plasma treatment on alfalfa seed growth under simulated drought stress
NASA Astrophysics Data System (ADS)
Jinkui, FENG; Decheng, WANG; Changyong, SHAO; Lili, ZHANG; Xin, TANG
2018-03-01
The effect of different cold plasma treatments on the germination and seedling growth of alfalfa (Medicago sativa L.) seeds under simulated drought stress was investigated. Polyethylene glycol 6000 (PEG 6000) at mass fractions of 0% (purified water), 5%, 10%, and 15% was applied to simulate the drought environment. The alfalfa seeds were treated at 15 different power levels ranging from 0 to 280 W for 15 s. Germination potential, germination rate, germination index, seedling root length, seedling height, and vigor index were measured. Results indicated significant differences between seeds treated at a suitable power and untreated seeds. As treatment power increased, the indexes above generally followed bimodal curves. Under the different mass fractions of PEG 6000, lower powers led to increased germination and seedlings that adapted well to the different drought conditions, whereas higher power levels resulted in a decreased germination rate. Seeds treated at 40 W showed higher germination potential, germination rate, seedling height, root length, and vigor index. Vigor indexes of the treated seeds under the different PEG 6000 stresses increased by 38.68%, 43.91%, 74.34%, and 39.20%, respectively, compared with CK0-0, CK5-0, CK10-0, and CK15-0 (the control samples under 0%, 5%, 10%, and 15% PEG 6000). Therefore, 40 W was regarded as the best treatment in this research. Although the trend indexes of alfalfa seeds treated at the same power were statistically the same under the different PEG 6000 stresses, cold plasma treatment had a significant effect on the adaptability of alfalfa seeds to different drought environments. Thus, this kind of treatment is worth implementing to promote seed growth under drought conditions.
Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls
Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.
2013-01-01
As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method, Tango's statistic, to genomic sequence data. An advantage of Tango's method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango's statistic, which we call the "Kernel Distance" statistic, took approximately half the time to compute compared with the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff's scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
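The flavor of a Tango-style statistic is easy to convey: weight case-control differences at each position by a closeness kernel so that nearby excesses reinforce each other. The sketch below is a generic illustration under assumed positions, counts, and kernel width, not the authors' Kernel Distance implementation.

```python
# Minimal sketch (not the authors' exact implementation) of a Tango-style
# kernel statistic: differences between case and control allele frequencies
# at each variant position are smoothed by a Gaussian closeness matrix.
# Positions, counts, and the kernel width are all illustrative assumptions.
import numpy as np

pos = np.array([120, 150, 400, 410, 430, 900])      # variant positions (bp)
cases = np.array([5, 3, 9, 7, 8, 1], dtype=float)   # minor-allele counts, cases
ctrls = np.array([4, 4, 2, 1, 2, 2], dtype=float)   # minor-allele counts, controls

diff = cases / cases.sum() - ctrls / ctrls.sum()

tau = 100.0  # kernel width in bp (assumption)
K = np.exp(-((pos[:, None] - pos[None, :]) ** 2) / (2 * tau ** 2))

T = diff @ K @ diff  # large values suggest spatial clustering of risk variants
print(f"Kernel distance statistic: {T:.4f}")
```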
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large-scale structure (LSS) in astrophysics. The DWT is used in all aspects of structure identification, including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and measuring deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Lyα) forests, and on observational data of the Lyα forests. This technique can detect clustering in the Lyα clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data are determined, and it is shown that clusters exist on scales as large as at least 20 h⁻¹ Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method, the spectra of Lyα forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules for how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities for determining the merging history. We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}^{2·2}. Scale-scale correlations are computed for two samples of QSO Lyα forest absorption spectra. Lastly, higher-order statistics are developed to detect deviations from Gaussian behavior. These higher-order statistics are necessary to fully characterize the Lyα forests because the usual second-order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Lyα clouds, it is found that different popular models of structure formation have different spectra, while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
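A minimal sketch of a DWT band-power estimate, assuming the PyWavelets package; the random test field and wavelet choice are illustrative and not taken from the thesis.

```python
# Sketch of a DWT-based power estimate per scale: decompose a 1D field
# and take the mean squared detail coefficient at each level.
import numpy as np
import pywt

rng = np.random.default_rng(0)
field = rng.standard_normal(1024)          # stand-in for a 1D density field

coeffs = pywt.wavedec(field, 'db4', level=6)
# coeffs[0] is the coarsest approximation; coeffs[1:] are detail scales.
for j, d in enumerate(coeffs[1:], start=1):
    band_power = np.mean(np.asarray(d) ** 2)   # mean squared wavelet coefficient
    print(f"scale level {j}: power = {band_power:.3f}")
```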
Temperature rise induced by some light emitting diode and quartz-tungsten-halogen curing units.
Asmussen, Erik; Peutzfeldt, Anne
2005-02-01
Because of the risk of thermal damage to the pulp, the temperature rise induced by light-curing units should not be too high. LED (light-emitting diode) curing units emit mainly in the blue range and have been reported to generate less heat than QTH (quartz-tungsten-halogen) curing units. This study had two aims: first, to measure the temperature rise induced by ten LED and three QTH curing units; and, second, to relate the measured temperature rise to the power density of the curing units. The light-induced temperature rise was measured by means of a thermocouple embedded in a small cylinder of resin composite. The power density was measured using a dental radiometer. For LED units, the temperature rise increased with increasing power density in a statistically significant manner. Two of the three QTH curing units investigated produced a higher temperature rise than LED curing units of the same power density. The previous finding that LED curing units induce less temperature rise than QTH units therefore does not hold true in general.
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
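The three estimators are easy to compare on simulated data. The sketch below builds one trial with a deliberate baseline imbalance and fits all three models with statsmodels; all parameter values are illustrative assumptions. ANCOVA recovers the true effect, while ANOVA and CSA are biased in opposite directions.

```python
# Minimal sketch: simulate one pretest-posttest trial with baseline
# imbalance and estimate the treatment effect by ANOVA on posttest
# scores, change-score analysis (CSA), and ANCOVA.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n, effect, rho = 100, 0.5, 0.6                        # assumptions
group = np.repeat([0, 1], n)
pre = rng.standard_normal(2 * n) + 0.3 * group        # built-in baseline imbalance
post = rho * pre + np.sqrt(1 - rho**2) * rng.standard_normal(2 * n) + effect * group
df = pd.DataFrame({"group": group, "pre": pre, "post": post, "change": post - pre})

anova = smf.ols("post ~ group", df).fit()             # ignores baseline
csa = smf.ols("change ~ group", df).fit()             # subtracts baseline
ancova = smf.ols("post ~ group + pre", df).fit()      # adjusts for baseline
for name, m in [("ANOVA", anova), ("CSA", csa), ("ANCOVA", ancova)]:
    print(f"{name:7s} effect = {m.params['group']:+.3f} (SE {m.bse['group']:.3f})")
```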
Turbulence in planetary occultations. IV - Power spectra of phase and intensity fluctuations
NASA Technical Reports Server (NTRS)
Haugstad, B. S.
1979-01-01
Power spectra of phase and intensity scintillations during occultation by turbulent planetary atmospheres are significantly affected by the inhomogeneous background upon which the turbulence is superimposed. Such coupling is particularly pronounced in the intensity, where there is also a marked difference in spectral shape between a central and grazing occultation. While the former has its structural features smoothed by coupling to the inhomogeneous background, such features are enhanced in the latter. Indeed, the latter power spectrum peaks around the characteristic frequency that is determined by the size of the free-space Fresnel zone and the ray velocity in the atmosphere; at higher frequencies strong fringes develop in the power spectrum. A confrontation between the theoretical scintillation spectra computed here and those calculated from the Mariner 5 Venus mission by Woo et al. (1974) is inconclusive, mainly because of insufficient statistical resolution. Phase and/or intensity power spectra computed from occultation data may be used to deduce characteristics of the turbulence and to distinguish turbulence from other perturbations in the refractive index. Such determinations are facilitated if observations are made at two or more frequencies (radio occultation) or in two or more colors (stellar occultation).
NASA Astrophysics Data System (ADS)
Kato, Takeyoshi; Minagata, Atsushi; Suzuoki, Yasuo
This paper discusses the influence of the mass installation of home co-generation systems (H-CGS) using polymer electrolyte fuel cells (PEFC) on the voltage profile of a power distribution system in a residential area. The influence of H-CGS is compared with that of photovoltaic (PV) power generation systems. The operation pattern of H-CGS is assumed based on the electricity and hot-water demand observed in 10 households over a year. The main results are as follows. With clustered H-CGS, the voltage of each bus is higher by about 1-3% compared with the conventional system without any distributed generators. Because H-CGS tends to increase its output during the early evening, it helps recover the voltage drop during that period, resulting in smaller voltage variation of the distribution system throughout the day. Because of the small rated power output of about 1 kW, the influence of clustered H-CGS on the voltage profile is smaller than that of clustered PV systems. The highest voltage during the daytime is not as high as in a distribution system with clustered PV systems, even if reverse power flow from H-CGS is allowed.
2016-01-01
Age-related neuromuscular change of Tibialis Anterior (TA) is a leading cause of muscle strength decline among the elderly. This study has established the baseline for age-associated changes in sEMG of TA at different levels of voluntary contraction. We have investigated the use of Gaussianity and maximal power of the power spectral density (PSD) as suitable features to identify age-associated changes in the surface electromyogram (sEMG). Eighteen younger (20–30 years) and 18 older (60–85 years) cohorts completed two trials of isometric dorsiflexion at four different force levels between 10% and 50% of the maximal voluntary contraction. Gaussianity and maximal power of the PSD of sEMG were determined. Results show a significant increase in sEMG's maximal power of the PSD and Gaussianity with increase in force for both cohorts. It was also observed that older cohorts had higher maximal power of the PSD and lower Gaussianity. These age-related differences observed in the PSD and Gaussianity could be due to motor unit remodelling. This can be useful for noninvasive tracking of age-associated neuromuscular changes. PMID:27610379
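Both features are straightforward to compute with SciPy. The sketch below uses Welch's PSD estimate and excess kurtosis as the Gaussianity measure on a surrogate signal; the sampling rate and signal are assumptions, and the paper's exact Gaussianity statistic may differ.

```python
# Sketch of the two features used in the study, computed on a surrogate
# signal: maximal power of the Welch PSD and a Gaussianity measure
# (here, excess kurtosis).
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
fs = 1000.0                                          # sampling rate in Hz (assumption)
emg = rng.standard_normal(5000) * np.hanning(5000)   # surrogate sEMG burst

f, pxx = welch(emg, fs=fs, nperseg=512)
print(f"maximal PSD power: {pxx.max():.4e} at {f[np.argmax(pxx)]:.1f} Hz")
print(f"excess kurtosis (0 for Gaussian): {kurtosis(emg):.3f}")
```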
On the structure and phase transitions of power-law Poissonian ensembles
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Oshanin, Gleb
2012-10-01
Power-law Poissonian ensembles are Poisson processes that are defined on the positive half-line and that are governed by power-law intensities. Power-law Poissonian ensembles are stochastic objects of fundamental significance: they display a distinctive array of fractal features and they underpin a range of important applications. In this paper we apply three different methods (oligarchic analysis, Lorenzian analysis and heterogeneity analysis) to explore power-law Poissonian ensembles. The amalgamation of these analyses, combined with the topology of power-law Poissonian ensembles, establishes a detailed and multi-faceted picture of the statistical structure and the statistical phase transitions of these elemental ensembles.
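Such an ensemble is simple to simulate. Assuming an intensity λ(x) = c·x^(−1−α) on [x_min, ∞), the total mass is c·x_min^(−α)/α, so one can draw a Poisson number of points and place them by inverse-CDF sampling; all parameter values below are illustrative.

```python
# Sketch of sampling a power-law Poissonian ensemble: a Poisson process
# on [x_min, inf) with intensity c * x**(-1 - alpha), alpha > 0.
import numpy as np

rng = np.random.default_rng(7)
c, alpha, x_min = 50.0, 1.5, 1.0            # illustrative parameters

mass = c * x_min**(-alpha) / alpha          # total intensity, finite for alpha > 0
n = rng.poisson(mass)                       # number of points in the ensemble
u = rng.random(n)
points = x_min * u**(-1.0 / alpha)          # inverse-CDF sampling of positions
points.sort()

print(f"{n} points; largest few (the 'oligarchs'): {points[-3:][::-1]}")
```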
NASA Astrophysics Data System (ADS)
Asal, F. F.
2012-07-01
Digital elevation data obtained from different engineering surveying techniques are used to generate a Digital Elevation Model (DEM), which is employed in many engineering and environmental applications. Such data are usually in discrete point format, making it necessary to apply an interpolation approach to create the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods and tends to neglect visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying power values in order to examine its potential for assessing the effects of the IDW power on DEM quality. Real elevation data were collected in the field using a total station instrument in corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques for this purpose. Visual analysis showed that smoothing of the DEM decreases as the power value increases up to a power of four; however, increasing the power beyond four does not leave noticeable changes in the 2D and 3D views of the DEM. The statistical analysis supported these results, in that the standard deviation (SD) of the DEM increased with increasing power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), and changing to powers of three and four gave 60% and 75%, respectively. This reflects the decrease in DEM smoothing as the IDW power increases. The study also showed that visual methods supported by statistical analysis have good potential for DEM quality assessment.
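A minimal IDW sketch showing how the power parameter trades smoothing for locality; the synthetic survey points and query locations are assumptions.

```python
# Minimal inverse-distance-weighting (IDW) interpolation sketch.
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d + eps) ** power          # higher power -> more local, less smooth
    return (w * z_known).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
pts = rng.random((50, 2)) * 100.0                        # surveyed (x, y) positions
z = np.sin(pts[:, 0] / 15.0) * 5.0 + pts[:, 1] * 0.1     # synthetic elevations

grid = np.array([[25.0, 40.0], [60.0, 10.0]])            # query locations
for p in (1, 2, 4, 10):
    print(f"power {p:2d}: {idw(pts, z, grid, power=p).round(2)}")
```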
NASA Astrophysics Data System (ADS)
Chang, Xiaoyen Y.; Sewell, Thomas D.; Raff, Lionel M.; Thompson, Donald L.
1992-11-01
The possibility of utilizing different types of power spectra obtained from classical trajectories as a diagnostic tool to identify the presence of nonstatistical dynamics is explored by using the unimolecular bond-fission reactions of 1,2-difluoroethane and the 2-chloroethyl radical as test cases. In previous studies, the reaction rates for these systems were calculated by using a variational transition-state theory and classical trajectory methods. A comparison of the results showed that 1,2-difluoroethane is a nonstatistical system, while the 2-chloroethyl radical behaves statistically. Power spectra for these two systems have been generated under various conditions. The characteristics of these spectra are as follows: (1) The spectra for the 2-chloroethyl radical are always broader and more coupled to other modes than is the case for 1,2-difluoroethane. This is true even at very low levels of excitation. (2) When an internal energy near or above the dissociation threshold is initially partitioned into a local C-H stretching mode, the power spectra for 1,2-difluoroethane broaden somewhat, but discrete and somewhat isolated bands are still clearly evident. In contrast, the analogous power spectra for the 2-chloroethyl radical exhibit a near complete absence of isolated bands. The general appearance of the spectrum suggests a very high level of mode-to-mode coupling, large intramolecular vibrational energy redistribution (IVR) rates, and global statistical behavior. (3) The appearance of the power spectrum for the 2-chloroethyl radical is unaltered regardless of whether the initial C-H excitation is in the CH2 or the CH2Cl group. This result also suggests statistical behavior. These results are interpreted to mean that power spectra may be used as a diagnostic tool to assess the statistical character of a system. The presence of a diffuse spectrum exhibiting a nearly complete loss of isolated structures indicates that the dissociation dynamics of the molecule will be well described by statistical theories. If, however, the power spectrum maintains its discrete, isolated character, as is the case for 1,2-difluoroethane, the opposite conclusion is suggested. Since power spectra are very easily computed, this diagnostic method may prove to be useful.
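Computing such a power spectrum from a trajectory is essentially one FFT. The sketch below uses a synthetic two-mode "bond length" series; frequencies, amplitudes, and the time step are assumptions, and a genuinely statistical system would show broad, mutually coupled bands rather than isolated peaks.

```python
# Sketch of the diagnostic itself: the power spectrum of a coordinate
# time series from a (here, synthetic) trajectory.
import numpy as np

dt = 1.0e-16                                  # time step in seconds (assumption)
t = np.arange(2**14) * dt
# Surrogate "bond length" with two weakly coupled modes plus noise:
q = np.sin(2 * np.pi * 9.0e13 * t) + 0.3 * np.sin(2 * np.pi * 4.5e13 * t) \
    + 0.05 * np.random.default_rng(0).standard_normal(t.size)

spec = np.abs(np.fft.rfft(q - q.mean()))**2   # power spectrum
freq = np.fft.rfftfreq(t.size, d=dt)
print(f"dominant frequency: {freq[np.argmax(spec)]:.3e} Hz")
```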
Estimating the vibration level of an L-shaped beam using power flow techniques
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.; Mccollum, M.; Rassineux, J. L.; Gilbert, T.
1986-01-01
The response of one component of an L-shaped beam, with point force excitation on the other component, is estimated using the power flow method. The transmitted power from the source component to the receiver component is expressed in terms of the transfer and input mobilities at the excitation point and the joint. The response is estimated both in narrow frequency bands, using the exact geometry of the beams, and as a frequency averaged response using infinite beam models. The results using this power flow technique are compared to the results obtained using finite element analysis (FEA) of the L-shaped beam for the low frequency response and to results obtained using statistical energy analysis (SEA) for the high frequencies. The agreement between the FEA results and the power flow method results at low frequencies is very good. SEA results are in terms of frequency averaged levels and these are in perfect agreement with the results obtained using the infinite beam models in the power flow method. The narrow frequency band results from the power flow method also converge to the SEA results at high frequencies. The advantage of the power flow method is that detail of the response can be retained while reducing computation time, which will allow the narrow frequency band analysis of the response to be extended to higher frequencies.
MIDAS: Regionally linear multivariate discriminative statistical mapping.
Varol, Erdem; Sotiras, Aristeidis; Davatzikos, Christos
2018-07-01
Statistical parametric maps formed via voxel-wise mass-univariate tests, such as the general linear model, are commonly used to test hypotheses about regionally specific effects in neuroimaging cross-sectional studies where each subject is represented by a single image. Despite being informative, these techniques remain limited as they ignore multivariate relationships in the data. Most importantly, the commonly employed local Gaussian smoothing, which is important for accounting for registration errors and making the data follow Gaussian distributions, is usually chosen in an ad hoc fashion. Thus, it is often suboptimal for the task of detecting group differences and correlations with non-imaging variables. Information mapping techniques, such as searchlight, which use pattern classifiers to exploit multivariate information and obtain more powerful statistical maps, have become increasingly popular in recent years. However, existing methods may lead to important interpretation errors in practice (i.e., misidentifying a cluster as informative, or failing to detect truly informative voxels), while often being computationally expensive. To address these issues, we introduce a novel efficient multivariate statistical framework for cross-sectional studies, termed MIDAS, seeking highly sensitive and specific voxel-wise brain maps, while leveraging the power of regional discriminant analysis. In MIDAS, locally linear discriminative learning is applied to estimate the pattern that best discriminates between two groups, or predicts a variable of interest. This pattern is equivalent to local filtering by an optimal kernel whose coefficients are the weights of the linear discriminant. By composing information from all neighborhoods that contain a given voxel, MIDAS produces a statistic that collectively reflects the contribution of the voxel to the regional classifiers as well as the discriminative power of the classifiers. Critically, MIDAS efficiently assesses the statistical significance of the derived statistic by analytically approximating its null distribution without the need for computationally expensive permutation tests. The proposed framework was extensively validated using simulated atrophy in structural magnetic resonance imaging (MRI) and further tested using data from a task-based functional MRI study as well as a structural MRI study of cognitive performance. The performance of the proposed framework was evaluated against standard voxel-wise general linear models and other information mapping methods. The experimental results showed that MIDAS achieves relatively higher sensitivity and specificity in detecting group differences. Together, our results demonstrate the potential of the proposed approach to efficiently map effects of interest in both structural and functional data. Copyright © 2018. Published by Elsevier Inc.
Geometry of tracer trajectories in turbulent rotating convection
NASA Astrophysics Data System (ADS)
Alards, Kim; Rajaei, Hadi; Kunnen, Rudie; Toschi, Federico; Clercx, Herman
2016-11-01
In Rayleigh-Bénard convection rotation is known to cause transitions in flow structures and to change the level of anisotropy close to the horizontal plates. To analyze this effect of rotation, we collect curvature and torsion statistics of passive tracer trajectories in rotating Rayleigh-Bénard convection, using both experiments and direct numerical simulations. In previous studies, focusing on homogeneous isotropic turbulence (HIT), curvature and torsion PDFs are found to reveal pronounced power laws. In the center of the convection cell, where the flow is closest to HIT, we recover these power laws, regardless of the rotation rate. However, near the top plate, where we expect the flow to be anisotropic, the scaling of the PDFs deviates from the HIT prediction for lower rotation rates. This indicates that anisotropy clearly affects the geometry of tracer trajectories. Another effect of rotation is observed as a shift of curvature and torsion PDFs towards higher values. We expect this shift to be related to the length scale of typical flow structures. Using curvature and torsion statistics, we can characterize how these typical length scales evolve under rotation and moreover analyze the effect of rotation on more complicated flow characteristics, such as anisotropy.
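Curvature and torsion follow from the first three derivatives of the trajectory, κ = |r′ × r″|/|r′|³ and τ = (r′ × r″)·r‴/|r′ × r″|², which the sketch below evaluates by finite differences on a helix (where both are constant, giving a built-in check); the test trajectory is illustrative.

```python
# Sketch of extracting curvature and torsion statistics from a sampled
# trajectory via finite-difference derivatives.
import numpy as np

dt = 1e-3
t = np.arange(0, 10, dt)
r = np.stack([np.cos(t), np.sin(t), 0.5 * t], axis=1)   # helix, radius 1, pitch 0.5

v = np.gradient(r, dt, axis=0)                # r'
a = np.gradient(v, dt, axis=0)                # r''
j = np.gradient(a, dt, axis=0)                # r'''

cva = np.cross(v, a)
speed = np.linalg.norm(v, axis=1)
curvature = np.linalg.norm(cva, axis=1) / speed**3
torsion = np.einsum('ij,ij->i', cva, j) / np.linalg.norm(cva, axis=1)**2

print(f"mean curvature: {curvature.mean():.4f} (helix: 1/1.25 = 0.8)")
print(f"mean torsion:   {torsion.mean():.4f} (helix: 0.5/1.25 = 0.4)")
```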
A powerful approach for association analysis incorporating imprinting effects
Xia, Fan; Zhou, Ji-Yuan; Fung, Wing Kam
2011-01-01
Motivation: For a diallelic marker locus, the transmission disequilibrium test (TDT) is a simple and powerful design for genetic studies. The TDT was originally proposed for use in families with both parents available (complete nuclear families) and has further been extended to 1-TDT for use in families with only one of the parents available (incomplete nuclear families). Currently, the increasing interest of the influence of parental imprinting on heritability indicates the importance of incorporating imprinting effects into the mapping of association variants. Results: In this article, we extend the TDT-type statistics to incorporate imprinting effects and develop a series of new test statistics in a general two-stage framework for association studies. Our test statistics enjoy the nature of family-based designs that need no assumption of Hardy–Weinberg equilibrium. Also, the proposed methods accommodate complete and incomplete nuclear families with one or more affected children. In the simulation study, we verify the validity of the proposed test statistics under various scenarios, and compare the powers of the proposed statistics with some existing test statistics. It is shown that our methods greatly improve the power for detecting association in the presence of imprinting effects. We further demonstrate the advantage of our methods by the application of the proposed test statistics to a rheumatoid arthritis dataset. Contact: wingfung@hku.hk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21798962
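For reference, the classical TDT that these methods extend reduces to a McNemar-type chi-square on transmissions from heterozygous parents. The counts below are illustrative; the imprinting-aware statistics of the paper are not reproduced here.

```python
# Sketch of the classical TDT for a diallelic locus: among heterozygous
# parents of affected children, compare transmissions of the two alleles.
from scipy.stats import chi2

b, c = 62, 38                     # transmissions of allele A1 vs A2 (assumed counts)
tdt = (b - c) ** 2 / (b + c)      # McNemar-type statistic, 1 df
p = chi2.sf(tdt, df=1)
print(f"TDT chi-square = {tdt:.2f}, p = {p:.4f}")
```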
Rogala, Maja M; Danielewska, Monika E; Antończyk, Agnieszka; Kiełbowicz, Zdzisław; Rogowska, Marta E; Kozuń, Marta; Detyna, Jerzy; Iskander, D Robert
2017-09-01
The aim was to ascertain whether the characteristics of the corneal pulse (CP) measured in-vivo in a rabbit eye change after short-term artificial increase of intraocular pressure (IOP) and whether they correlate with corneal biomechanics assessed in-vitro. Eight New Zealand white rabbits were included in this study and were anesthetized. In-vivo experiments included simultaneous measurements of the CP signal, registered with a non-contact method, IOP, intra-arterial blood pressure, and blood pulse (BPL), at the baseline and short-term elevated IOP. Afterwards, thickness of post-mortem corneas was determined and then uniaxial tensile tests were conducted leading to estimates of their Young's modulus (E). At the baseline IOP, backward stepwise regression analyses were performed in which successively the ocular biomechanical, biometric and cardiovascular predictors were separately taken into account. Results of the analysis revealed that the 3rd CP harmonic can be statistically significantly predicted by E and central corneal thickness (Models: R² = 0.662, p < 0.005 and R² = 0.832, p < 0.001 for the signal amplitude and power, respectively). The 1st CP harmonic can be statistically significantly predicted by the amplitude and power of the 1st BPL harmonic (Models: R² = 0.534, p = 0.015 and R² = 0.509, p < 0.018, respectively). For elevated IOP, non-parametric analysis indicated significant differences for the power of the 1st CP harmonic (Kruskal-Wallis test; p = 0.031) and for the mean, systolic and diastolic blood pressures (p = 0.025, p = 0.019, p = 0.033, respectively). In conclusion, for the first time, the association between parameters of the CP signal in-vivo and corneal biomechanics in-vitro was confirmed. In particular, spectral analysis revealed that higher amplitude and power of the 3rd CP harmonic indicates higher corneal stiffness, while the 1st CP harmonic correlates positively with the corresponding harmonic of the BPL signal. Copyright © 2017 Elsevier Ltd. All rights reserved.
Difference of refraction values between standard autorefractometry and Plusoptix.
Bogdănici, Camelia Margareta; Săndulache, Codrina Maria; Vasiliu, Rodica; Obadă, Otilia
2016-01-01
Aim: Comparison between the objective refraction measurements determined with the Topcon KR-8900 standard autorefractometer and the Plusoptix A09 photorefractometer in children. Material and methods: A prospective transversal study was performed in the Department of Ophthalmology of "Sf. Spiridon" Hospital in Iași on 90 eyes of 45 pediatric patients, with a mean age of 8.82 ± 3.52 years, examined with noncycloplegic measurements provided by the Plusoptix A09 and cycloplegic and noncycloplegic measurements provided by the Topcon KR-8900 standard autorefractometer. The clinical parameters compared were the following: spherical equivalent (SE), spherical and cylindrical values, and cylinder axis. Astigmatism was recorded and evaluated with the cylindrical value on minus after transposition. The statistical calculation was performed with paired t-tests and Pearson's correlation analysis. All the data were analyzed with the SPSS statistical package 19 (SPSS for Windows, Chicago, IL). Results: Plusoptix A09 noncycloplegic values were relatively equal between the eyes, with slightly lower values compared to noncycloplegic autorefractometry. Mean (± SD) measurements provided by the Plusoptix A09 were the following: spherical power 1.11 ± 1.52, cylindrical power 0.80 ± 0.80, and spherical equivalent 0.71 ± 1.39. The noncycloplegic autorefractometer mean (± SD) measurements were spherical power 1.12 ± 1.63, cylindrical power 0.79 ± 0.77, and spherical equivalent 0.71 ± 1.58. The cycloplegic autorefractometer mean (± SD) measurements were spherical power 2.08 ± 1.95, cylindrical power 0.82 ± 0.85, and spherical equivalent 1.68 ± 1.87. 32% of the eyes were hyperopic, 2.67% were myopic, 65.33% had astigmatism, and 30% of the eyes had amblyopia. Conclusions: Noncycloplegic objective refraction values were similar to those determined by autorefractometry. Plusoptix had an important role in ophthalmological screening, but did not detect higher refractive errors, justifying cycloplegic autorefractometry.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okumura, Teppei; Seljak, Uroš; Desjacques, Vincent, E-mail: teppei@ewha.ac.kr, E-mail: useljak@berkeley.edu, E-mail: dvince@physik.uzh.ch
It was recently shown that the power spectrum in redshift space can be written as a sum of cross-power spectra between number weighted velocity moments, of which the lowest are density and momentum density. We investigate numerically the properties of these power spectra for simulated galaxies and dark matter halos and compare them to the dark matter power spectra, generalizing the concept of the bias in density-density power spectra. Because all of the quantities are number weighted, this approach is well defined even for sparse systems such as massive halos. This contrasts with previous approaches to RSD, where velocity correlations have been explored, but where the velocity field is a poorly defined concept for sparse systems. We find that the number density weighting leads to a strong scale dependence of the bias terms for the momentum density auto-correlation and its cross-correlation with density. This trend becomes more significant for the more biased halos and leads to an enhancement of RSD power relative to the linear theory. Fingers-of-god effects, which in this formalism come from the correlations of the higher order moments beyond the momentum density, lead to smoothing of the power spectrum and can reduce this enhancement of power from the scale dependent bias, but are relatively small for halos with no small scale velocity dispersion. In comparison, for a more realistic galaxy sample with satellites, the small scale velocity dispersion generated by satellite motions inside the halos leads to a larger power suppression on small scales, but this depends on the satellite fraction and on the details of how the satellites are distributed inside the halo. We investigate several statistics such as the two-dimensional power spectrum P(k,μ), where μ is the cosine of the angle between the Fourier mode and the line of sight, its multipole moments, its expansion in powers of μ², and configuration space statistics. Overall we find that the nonlinear effects in realistic galaxy samples such as luminous red galaxies affect the redshift space clustering on very large scales: for example, the quadrupole moment is affected by 10% for k < 0.1 h Mpc⁻¹, which means that these effects need to be understood if we want to extract cosmological information from the redshift space distortions.
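For readers unfamiliar with the notation, the multipole moments referred to here are the standard Legendre projections of P(k, μ); the formulas below state that convention and are not reproduced from the paper.

```latex
% Standard definitions assumed by this kind of analysis: the Legendre
% multipoles of the anisotropic power spectrum, with L_ell the Legendre
% polynomials and mu the cosine of the angle to the line of sight.
P_\ell(k) = \frac{2\ell + 1}{2} \int_{-1}^{1} P(k,\mu)\, \mathcal{L}_\ell(\mu)\, d\mu,
\qquad
P(k,\mu) = \sum_{\ell} P_\ell(k)\, \mathcal{L}_\ell(\mu).
```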
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
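A minimal sketch of the precision-weighting idea: combine per-subject summary statistics with inverse-variance weights, as in a fixed-effects combination. This is one simple variant under assumed synthetic data, not necessarily the exact estimator the paper recommends, and it ignores between-subject variance.

```python
# Sketch: precision-weighted group-level test on per-subject means,
# in contrast to a naive t-test that ignores within-subject variance.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
means, variances = [], []
for _ in range(12):                                   # 12 subjects (assumption)
    n_trials = int(rng.integers(20, 200))             # unequal trial counts
    x = rng.standard_normal(n_trials) + 0.2           # true subject-level effect 0.2
    means.append(x.mean())
    variances.append(x.var(ddof=1) / n_trials)        # variance of the subject mean

means, variances = np.asarray(means), np.asarray(variances)
w = 1.0 / variances                                   # precision weights
pooled = np.sum(w * means) / np.sum(w)                # weighted group-level effect
se = np.sqrt(1.0 / np.sum(w))
z = pooled / se
print(f"weighted effect = {pooled:.3f}, z = {z:.2f}, p = {2 * norm.sf(abs(z)):.4f}")
```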
On the bispectra of very massive tracers in the Effective Field Theory of Large-Scale Structure
Nadler, Ethan O.; Perko, Ashley; Senatore, Leonardo
2018-02-01
The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a consistent perturbative framework for describing the statistical distribution of cosmological large-scale structure. In a previous EFTofLSS calculation that involved the one-loop power spectra and tree-level bispectra, it was shown that the k-reach of the prediction for biased tracers is comparable for all investigated masses if suitable higher-derivative biases, which are less suppressed for more massive tracers, are added. However, it is possible that the non-linear biases grow faster with tracer mass than the linear bias, implying that loop contributions could be the leading correction to the bispectra. To check this, we include the one-loop contributions in a fit to numerical data in the limit of strongly enhanced higher-order biases. Here, we show that the resulting one-loop power spectra and higher-derivative plus leading one-loop bispectra fit the two- and three-point functions respectively up to k ≃ 0.19 h Mpc⁻¹ and k ≃ 0.14 h Mpc⁻¹ at the percent level. We find that the higher-order bias coefficients are not strongly enhanced, and we argue that the gain in perturbative reach due to the leading one-loop contributions to the bispectra is relatively small. Thus, we conclude that higher-derivative biases provide the leading correction to the bispectra for tracers of a very wide range of masses.
Sagari, Shitalkumar G; Babannavar, Roopa; Lohra, Abhishek; Kodgi, Ashwin; Bapure, Sunil; Rao, Yogesh; J, Arun; Malghan, Manjunath
2014-12-01
Biomonitoring provides a useful tool to estimate the genetic risk from exposure to genotoxic agents. The aim of this study was to evaluate the frequencies of micronuclei (MN) and other nuclear abnormalities (NA) in exfoliated oral mucosal cells from Nuclear Power Station (NPS) workers. Micronucleus frequencies in oral exfoliated cells were determined for individuals not known to be exposed to either environmental or occupational carcinogens (Group I). Samples were likewise obtained from full-time NPS workers free of leukemia and any other malignancy (Group II) and from workers diagnosed with leukemia and undergoing treatment (Group III). There was a statistically significant difference between Group I, Group II, and Group III. MN and NA frequencies in the leukemic patients were significantly higher than those in the exposed workers and control groups (p < 0.05). MN and other NA reflect genetic changes and events associated with malignancies. Therefore, there is a need to educate those who work in NPS about the potential hazard of occupational exposure and the importance of using protective measures.
NASA Astrophysics Data System (ADS)
Petri, Andrea; May, Morgan; Haiman, Zoltán
2016-09-01
Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w . When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm,w ,σ8) for a LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. We find that redshift tomography with the power spectrum reduces the area of the 1 σ confidence interval in (Ωm,w ) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm,w ) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. We find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the Japanese Journal of Psychology in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition, or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even meaningless effects could be detected. This implies that researchers who cannot obtain large enough effect sizes tend to use larger samples to obtain significant results.
Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions
ERIC Educational Resources Information Center
Yormaz, Seha; Sünbül, Önder
2017-01-01
This study aims to determine the Type I error rates and power of S[subscript 1] , S[subscript 2] indices and kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how copying groups are created in order to calculate how kappa statistics affect Type I error rates and power. In this study,…
ERIC Educational Resources Information Center
Texeira, Antonio; Rosa, Alvaro; Calapez, Teresa
2009-01-01
This article presents statistical power analysis (SPA) based on the normal distribution using Excel, adopting textbook and SPA approaches. The objective is to present the latter in a comparative way within a framework that is familiar to textbook level readers, as a first step to understand SPA with other distributions. The analysis focuses on the…
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
Précis of statistical significance: rationale, validity, and utility.
Chow, S L
1998-04-01
The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I
2013-05-01
When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suited to modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy, and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets with continuous phenotype data (height) and with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes. That is, if the power benefit is small, so that the change in the distribution of the test statistic under the alternative is relatively small, there is a greater probability of obtaining a smaller test statistic. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation could be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
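The dosage approach mentioned above reduces each imputed genotype distribution to its mean. A minimal sketch with illustrative posterior probabilities:

```python
# Sketch contrasting the two uses of imputed genotypes: given posterior
# probabilities for the three genotypes of a diallelic SNP, the dosage
# approach keeps only their mean, whereas the mixture approach would keep
# the full distribution in the likelihood. Probabilities are illustrative.
import numpy as np

probs = np.array([
    [0.81, 0.18, 0.01],     # P(g = 0), P(g = 1), P(g = 2) for each sibling
    [0.10, 0.60, 0.30],
    [0.25, 0.50, 0.25],
])
dosage = probs @ np.array([0.0, 1.0, 2.0])   # expected allele count per person
print(f"imputed dosages: {dosage}")          # e.g. used as a covariate in a LMM
```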
TRMM/LIS and PR Observations and Thunderstorm Activity
NASA Astrophysics Data System (ADS)
Ohita, S.; Morimoto, T.; Kawasaki, Z. I.; Ushio, T.
2005-12-01
Thunderstorms observed by TRMM/PR and LIS have been investigated, and the Lightning Research Group of Osaka University (LRG-OU) has unveiled several interesting features. The correlation between lightning activity and the snow depth of convective clouds may follow the power-five law, which states that the flash density is a function of the snow depth to the fifth power. The snow depth is defined as the height of the cloud tops detectable by TRMM/PR above the climatological freezing level, and it may be equivalent to the depth of the layer in which solid-phase precipitation particles exist. This result was obtained by examining more than one million convective clouds, and we conclude that the power-five law should be universal from a statistical standpoint. Three thunderstorm-active areas are well known as the "three world chimneys": Central Africa, the Amazon in South America, and Southeast Asia. Thunderstorm activity in these areas is expected to contribute to the redistribution of thermal energy from the equator to mid-latitude regions. Moreover, thunderstorm activity in the tropics is believed to be related to the average temperature of planet Earth, which is why long-term monitoring of lightning activity is required. Since the launch of TRMM we have accumulated seven years of LIS observations, and statistics for the three world chimneys have been obtained. We have also recognized an additional lightning-active area, around Lake Maracaibo in Venezuela, which we attribute to the geographical features of the lake and the continuous easterly trade wind. Lightning activity during El Niño periods is another interesting subject. LRG-OU studies thunderstorm occurrence over western Indonesia and southern China and investigates the influence of El Niño on lightning by comparing statistics between El Niño and non-El Niño periods. We find that lightning activity during El Niño periods is higher than during non-El Niño periods, despite less precipitation reaching the ground during El Niño. Since a strong correlation between precipitation and lightning activity is expected, this result seems to contradict conventional wisdom. However, the analyses for these two areas show no contradictions; the results are statistically identical. A full meteorological explanation is still lacking.
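In symbols, the power-five law described above can be written as follows; the notation is ours, not the group's.

```latex
% Power-five law relating flash density F to snow depth H (notation assumed):
F \propto H^{5},
\qquad
H = z_{\mathrm{cloud\ top}} - z_{\mathrm{freezing\ level}}.
```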
New heterogeneous test statistics for the unbalanced fixed-effect nested design.
Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming
2011-05-01
When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by the approximate test statistics (Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than the conventional F test under various conditions. The proposed test statistics are therefore recommended for their robustness and easy implementation. ©2010 The British Psychological Society.
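The abstract does not reproduce the authors' four statistics, but the Welch-type heteroscedastic one-way test that motivates them can be sketched as follows (textbook Welch ANOVA formulas applied to synthetic unbalanced, unequal-variance groups):

```python
import numpy as np
from scipy.stats import f

def welch_anova(groups):
    """Welch's heteroscedastic one-way test; equal variances are not assumed."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                                   # precision weights
    grand = np.sum(w * m) / np.sum(w)
    num = np.sum(w * (m - grand) ** 2) / (k - 1)
    lam = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1)) / (k ** 2 - 1)
    stat = num / (1 + 2 * (k - 2) * lam)
    df2 = 1 / (3 * lam)
    return stat, (k - 1, df2), f.sf(stat, k - 1, df2)

rng = np.random.default_rng(1)
groups = [rng.normal(mu, sd, n) for mu, sd, n in [(0, 1, 10), (0.5, 2, 15), (1, 4, 8)]]
stat, dof, p = welch_anova(groups)
print(f"Welch F = {stat:.2f}, df = ({dof[0]}, {dof[1]:.1f}), p = {p:.3f}")
```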
Correlation techniques and measurements of wave-height statistics
NASA Technical Reports Server (NTRS)
Guthart, H.; Taylor, W. C.; Graf, K. A.; Douglas, D. G.
1972-01-01
Statistical measurements of wave height fluctuations have been made in a wind wave tank. The power spectral density function of temporal wave height fluctuations evidenced second-harmonic components and an f^(-5) power-law decay beyond the second harmonic. The observations of second-harmonic effects agreed very well with a theoretical prediction. From the wave statistics, surface drift currents were inferred and compared to experimental measurements with satisfactory agreement. Measurements were made of the two-dimensional correlation coefficient at 15 deg increments in angle with respect to the wind vector. An estimate of the two-dimensional spatial power spectral density function was also made.
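A minimal sketch of this kind of spectral estimate, using Welch's method on a synthetic wave-height record with a fundamental and a weaker second harmonic (illustrative frequencies and sampling rate, not the tank data):

```python
import numpy as np
from scipy.signal import welch

fs = 100.0                                      # assumed sampling rate [Hz]
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(2)
eta = (np.sin(2 * np.pi * 1.0 * t)              # fundamental at 1 Hz
       + 0.3 * np.sin(2 * np.pi * 2.0 * t)      # second harmonic at 2 Hz
       + 0.05 * rng.normal(size=t.size))        # measurement noise

freq, psd = welch(eta, fs=fs, nperseg=8192)
for target in (1.0, 2.0):
    i = np.argmin(np.abs(freq - target))
    print(f"PSD near {target:.0f} Hz: {psd[i]:.3e}")
```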
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
Objectives: To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Design: Statistical review. Data sources: Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Review methods: Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. Results: The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and were not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. Conclusions: The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of experiment-wise Type I error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
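For orientation, the benchmark power levels discussed above can be reproduced for a single simple case, the two-sample t-test, at Cohen's conventional effect sizes (an assumed per-group n of 30, purely illustrative, using statsmodels rather than the authors' review methodology):

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sided, two-sample t-test at alpha = .05 for Cohen's
# benchmark effect sizes, with an assumed 30 participants per group.
analysis = TTestIndPower()
for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
    power = analysis.power(effect_size=d, nobs1=30, alpha=0.05)
    print(f"{label} effect (d = {d}): power = {power:.2f}")
```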
Ecological statistics of Gestalt laws for the perceptual organization of contours.
Elder, James H; Goldberg, Richard M
2002-01-01
Although numerous studies have measured the strength of visual grouping cues for controlled psychophysical stimuli, little is known about the statistical utility of these various cues for natural images. In this study, we conducted experiments in which human participants trace perceived contours in natural images. These contours are automatically mapped to sequences of discrete tangent elements detected in the image. By examining relational properties between pairs of successive tangents on these traced curves, and between randomly selected pairs of tangents, we are able to estimate the likelihood distributions required to construct an optimal Bayesian model for contour grouping. We employed this novel methodology to investigate the inferential power of three classical Gestalt cues for contour grouping: proximity, good continuation, and luminance similarity. The study yielded a number of important results: (1) these cues, when appropriately defined, are approximately uncorrelated, suggesting a simple factorial model for statistical inference; (2) moderate image-to-image variation of the statistics indicates the utility of general probabilistic models for perceptual organization; (3) these cues differ greatly in their inferential power, proximity being by far the most powerful; and (4) statistical modeling of the proximity cue indicates a scale-invariant power law in close agreement with prior psychophysics.
Vajawat, Mayuri; Deepika, P. C.; Kumar, Vijay; Rajeshwari, P.
2015-01-01
Aim: To compare the efficacy of powered toothbrushes in improving gingival health and reducing salivary red complex counts, as compared to manual toothbrushes, among autistic individuals. Materials and Methods: Forty autistic individuals were selected. The test group received powered toothbrushes, and the control group received manual toothbrushes. Plaque index and gingival index were recorded. Unstimulated saliva was collected for analysis of red complex organisms using polymerase chain reaction. Results: A statistically significant reduction in plaque scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.002 for controls). This reduction was statistically more significant in the test group (P = 0.024). A statistically significant reduction in gingival scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.001 for controls). This reduction was statistically more significant in the test group (P = 0.042). No statistically significant reduction in the detection rate of red complex organisms was seen at 4 weeks in either group. Conclusion: Powered toothbrushes result in a significant overall improvement in gingival health when constant reinforcement of oral hygiene instructions is given. PMID:26681855
NASA Astrophysics Data System (ADS)
Schroeder, C. B.; Fawley, W. M.; Esarey, E.
2003-07-01
We investigate the statistical properties (e.g., shot-to-shot power fluctuations) of the radiation from a high-gain free-electron laser (FEL) operating in the nonlinear regime. We consider the case of an FEL amplifier reaching saturation whose shot-to-shot fluctuations in input radiation power follow a gamma distribution. We analyze the corresponding output power fluctuations at and beyond saturation, including beam energy spread effects, and find that there are well-characterized values of undulator length for which the fluctuations reach a minimum.
NASA Astrophysics Data System (ADS)
Ahn, Hyunjun; Jung, Younghun; Om, Ju-Seong; Heo, Jun-Haeng
2014-05-01
Selecting an appropriate probability distribution is very important in statistical hydrology. A goodness-of-fit test is a statistical method for selecting an appropriate probability model for a given data set. The probability plot correlation coefficient (PPCC) test, one such goodness-of-fit test, was originally developed for the normal distribution; since then, it has been widely applied to other probability models. The PPCC test is regarded as one of the best goodness-of-fit tests because it shows higher rejection power than its competitors. In this study, we focus on PPCC tests for the GEV distribution, which is widely used around the world. For the GEV model, several plotting position formulas have been suggested; however, PPCC statistics have been derived only for the plotting position formulas (Goel and De, In-na and Nguyen, and Kim et al.) in which the skewness coefficient (or shape parameter) is included. Regression equations are then derived as a function of the shape parameter and sample size for a given significance level. In addition, the rejection powers of these formulas are compared using Monte Carlo simulation. Keywords: Goodness-of-fit test, Probability plot correlation coefficient test, Plotting position, Monte Carlo simulation. ACKNOWLEDGEMENTS: This research was supported by a grant, 'Establishing Active Disaster Management System of Flood Control Structures by using 3D BIM Technique' [NEMA-12-NH-57], from the Natural Hazard Mitigation Research Group, National Emergency Management Agency of Korea.
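A minimal sketch of a PPCC statistic for the GEV distribution follows; it uses the Cunnane plotting position as an illustrative stand-in for the shape-dependent formulas the study compares, and note that scipy's sign convention for the GEV shape parameter differs from the hydrological one:

```python
import numpy as np
from scipy.stats import genextreme, pearsonr

def ppcc_gev(sample, shape):
    """PPCC: correlation between ordered data and fitted GEV quantiles."""
    x = np.sort(sample)
    n = len(x)
    p = (np.arange(1, n + 1) - 0.4) / (n + 0.2)   # Cunnane plotting position
    q = genextreme.ppf(p, shape)                  # theoretical quantiles
    r, _ = pearsonr(x, q)
    return r

rng = np.random.default_rng(3)
sample = genextreme.rvs(-0.1, size=50, random_state=rng)
print(f"PPCC = {ppcc_gev(sample, -0.1):.4f}")     # near 1 when the model fits
```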
Transformation of general binary MRF minimization to the first-order case.
Ishikawa, Hiroshi
2011-06-01
We introduce a transformation of a general higher-order Markov random field with binary labels into a first-order one that has the same minima as the original. Moreover, we formalize a framework for approximately minimizing higher-order multi-label MRF energies that combines the new reduction with the fusion-move and QPBO algorithms. While many computer vision problems today are formulated as energy minimization problems, they have mostly been limited to first-order energies, which consist of unary and pairwise clique potentials, with a few exceptions that consider triples. This is because of the lack of efficient algorithms to optimize energies with higher-order interactions. Our algorithm challenges this restriction, which limits the representational power of the models, so that higher-order energies can be used to capture the rich statistics of natural scenes. We also show that some minimization methods can be considered special cases of the present framework, and we compare the new method experimentally with other such techniques.
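Ishikawa's construction generalizes a classic first-order reduction; as a toy illustration (not the paper's full transformation), the standard identity for a negative-coefficient cubic term can be verified by brute force over all binary assignments:

```python
from itertools import product

# Classic reduction of a negative-coefficient cubic pseudo-Boolean term:
#   a*x*y*z == min over binary w of  a*w*(x + y + z - 2),   for a < 0.
# The right-hand side is quadratic in (x, y, z, w), so a first-order MRF
# with one auxiliary variable reproduces the higher-order term's minima.
a = -3
for x, y, z in product((0, 1), repeat=3):
    lhs = a * x * y * z
    rhs = min(a * w * (x + y + z - 2) for w in (0, 1))
    assert lhs == rhs, (x, y, z)
print("cubic-to-quadratic reduction verified on all 8 assignments")
```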
Less Physician Practice Competition Is Associated With Higher Prices Paid For Common Procedures.
Austin, Daniel R; Baker, Laurence C
2015-10-01
Concentration among physician groups has been steadily increasing, which may affect prices for physician services. We assessed the relationship in 2010 between physician competition and prices paid by private preferred provider organizations for fifteen common, high-cost procedures to understand whether higher concentration of physician practices, and the accompanying increased market power, was associated with higher prices for services. Using county-level measures of the concentration of physician practices and county average prices, and statistically controlling for a range of other regional characteristics, we found that physician practice concentration and prices were significantly associated for twelve of the fifteen procedures we studied. For these procedures, counties with the highest average physician concentrations had prices 8-26 percent higher than prices in the lowest-concentration counties. We concluded that the degree of physician competition is frequently associated with the prices paid for procedures. Policies that would influence physician practice organization should take this into consideration. Project HOPE—The People-to-People Health Foundation, Inc.
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
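G*Power itself is a standalone GUI program; as a hedged illustration of one listed procedure, the power of a two-sided test that a single correlation is zero can be approximated with the textbook Fisher z method (an approximation, not G*Power's exact routine):

```python
import numpy as np
from scipy.stats import norm

def corr_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 via Fisher's z."""
    z_r = np.arctanh(r) * np.sqrt(n - 3)        # noncentrality under H1
    z_a = norm.ppf(1 - alpha / 2)
    return norm.sf(z_a - z_r) + norm.cdf(-z_a - z_r)

# Classic benchmark: n = 84 gives roughly 80% power to detect r = 0.3.
print(f"power to detect r = 0.3 with n = 84: {corr_power(0.3, 84):.2f}")
```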
Statistical properties of Chinese phonemic networks
NASA Astrophysics Data System (ADS)
Yu, Shuiyuan; Liu, Haitao; Xu, Chunshan
2011-04-01
The study of properties of speech sound systems is of great significance in understanding the human cognitive mechanism and the working principles of speech sound systems. Some properties of speech sound systems, such as the listener-oriented feature and the talker-oriented feature, have been unveiled with the statistical study of phonemes in human languages and the research of the interrelations between human articulatory gestures and the corresponding acoustic parameters. With all the phonemes of speech sound systems treated as a coherent whole, our research, which focuses on the dynamic properties of speech sound systems in operation, investigates some statistical parameters of Chinese phoneme networks based on real text and dictionaries. The findings are as follows: phonemic networks have high connectivity degrees and short average distances; the degrees obey normal distribution and the weighted degrees obey power law distribution; vowels enjoy higher priority than consonants in the actual operation of speech sound systems; the phonemic networks have high robustness against targeted attacks and random errors. In addition, for investigating the structural properties of a speech sound system, a statistical study of dictionaries is conducted, which shows the higher frequency of shorter words and syllables and the tendency that the longer a word is, the shorter the syllables composing it are. From these structural properties and dynamic properties one can derive the following conclusion: the static structure of a speech sound system tends to promote communication efficiency and save articulation effort while the dynamic operation of this system gives preference to reliable transmission and easy recognition. In short, a speech sound system is an effective, efficient and reliable communication system optimized in many aspects.
ERIC Educational Resources Information Center
Amundson, Vickie E.; Bernstein, Ira H.
1973-01-01
Authors note that Fehrer and Biederman's two statistical tests were not of equal power and that their conclusion could be a statistical artifact of both the lesser power of the verbal report comparison and the insensitivity of their particular verbal report indicator. (Editor)
Power laws in citation distributions: evidence from Scopus.
Brzezinski, Michal
Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002, drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off, and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in "Physics and Astronomy". Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off, or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account only for a very small fraction of the published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
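The likelihood-ratio comparison described above can be sketched with the third-party Python package `powerlaw` (Alstott et al.), here applied to synthetic heavy-tailed counts standing in for the Scopus data:

```python
import numpy as np
import powerlaw  # third-party package: pip install powerlaw

rng = np.random.default_rng(4)
citations = rng.zipf(3.5, size=20_000)          # synthetic citation counts

fit = powerlaw.Fit(citations, discrete=True)
R, p = fit.distribution_compare('power_law', 'lognormal')
print(f"xmin = {fit.xmin}, alpha = {fit.power_law.alpha:.2f}")
print(f"log-likelihood ratio R = {R:.2f}, p = {p:.3f}")  # R > 0 favors the power law
```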
Statistical measurement of the gamma-ray source-count distribution as a function of energy
Zechlin, Hannes-S.; Cuoco, Alessandro; Donato, Fiorenza; ...
2016-07-29
Statistical properties of photon count maps have recently been proven as a new tool to study the composition of the gamma-ray sky with high precision. Here, we employ the 1-point probability distribution function of six years of Fermi-LAT data to measure the source-count distribution dN/dS and the diffuse components of the high-latitude gamma-ray sky as a function of energy. To that aim, we analyze the gamma-ray emission in five adjacent energy bands between 1 and 171 GeV. It is demonstrated that the source-count distribution as a function of flux is compatible with a broken power law up to energies of ~50 GeV. Furthermore, the index below the break is between 1.95 and 2.0. For higher energies, a simple power law fits the data, with an index of $2.2^{+0.7}_{-0.3}$ in the energy band between 50 and 171 GeV. Upper limits on further possible breaks as well as the angular power of unresolved sources are derived. We find that point-source populations probed by this method can explain $83^{+7}_{-13}\%$ ($81^{+52}_{-19}\%$) of the extragalactic gamma-ray background between 1.04 and 1.99 GeV (50 and 171 GeV). Our method has excellent capabilities for constraining the gamma-ray luminosity function and the spectra of unresolved blazars.
Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.
Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao
2016-04-01
To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of complex diseases. Despite their importance in uncovering the genetic structure of complex traits, statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single-trait interaction analysis by a single-variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has much higher power to detect interaction than the interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes.
Aldemir, Ramazan; Demirci, Esra; Per, Huseyin; Canpolat, Mehmet; Özmen, Sevgi; Tokmakçı, Mahmut
2018-04-01
To investigate the frequency domain effects and changes in electroencephalography (EEG) signals in children diagnosed with attention deficit hyperactivity disorder (ADHD). The study included 40 children, all between the ages of 7 and 12 years. Participants were classified into four groups: ADHD (n=20), ADHD-I (ADHD-Inattentive type) (n=10), ADHD-C (ADHD-Combined type) (n=10), and control (n=20). In this study, the frequency domain of EEG signals for the ADHD, subtype, and control groups was analyzed and compared using Matlab software. The mean age was 8.7 years in the ADHD group and 9.1 years in the control group. Spectral analysis of mean power (μV²) and relative mean power (%) was carried out for four frequency bands: delta (0-4 Hz), theta (4-8 Hz), alpha (8-13 Hz), and beta (13-32 Hz). The ADHD group and the ADHD-I and ADHD-C subtypes had higher average delta- and theta-band power than the control group; this was not the case for the alpha and beta bands. A statistically significant increase in the delta/beta ratio was found only between the ADHD-I and control groups, whereas statistically significant differences in both the delta/beta and theta/delta ratios were found between the ADHD-C and control groups. EEG analyses can be used as an alternative method when ADHD subgroups are identified.
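A minimal sketch of the band-power computation underlying such analyses (Welch PSD integrated over the standard bands; a synthetic signal and an assumed sampling rate stand in for the clinical recordings):

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

BANDS = {"delta": (0, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 32)}

def band_powers(eeg, fs):
    """Absolute and relative band power from a Welch PSD estimate."""
    f, psd = welch(eeg, fs=fs, nperseg=int(4 * fs))
    total = trapezoid(psd[f <= 32], f[f <= 32])
    out = {}
    for name, (lo, hi) in BANDS.items():
        m = (f >= lo) & (f < hi)
        p = trapezoid(psd[m], f[m])
        out[name] = (p, p / total)
    return out

rng = np.random.default_rng(5)
fs = 256.0                                      # assumed sampling rate [Hz]
eeg = rng.normal(size=int(60 * fs))             # synthetic 1-minute trace
for band, (p, rel) in band_powers(eeg, fs).items():
    print(f"{band}: power = {p:.4f}, relative = {rel:.1%}")
```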
Muzyka-Woźniak, Maria; Oleszko, Adam
2018-04-26
To compare measurements of axial length (AL), corneal curvature (K), anterior chamber depth (ACD) and white-to-white (WTW) distance on a new device combining a Scheimpflug camera and partial coherence interferometry (Pentacam AXL) with a reference optical biometer (IOL Master 500), and to evaluate differences between IOL power calculations based on the two biometers. Ninety-seven eyes of 97 consecutive cataract or refractive lens exchange patients were examined preoperatively on IOL Master 500 and Pentacam AXL units. Measurements of AL, K, ACD and WTW were compared between the two devices. Intraocular lens (IOL) power targeting emmetropia was calculated with the SRK/T and Haigis formulas on both devices and compared. There were statistically significant differences between the two devices for all measured parameters (P < 0.05) except ACD (P = 0.36). Corneal curvature measured with the Pentacam AXL was significantly flatter than with the IOL Master. The mean difference in AL was clinically insignificant (0.01 mm; 95% LoA 0.16 mm). The Pentacam AXL yielded higher IOL power in 75% of eyes for the Haigis formula and in 62% of eyes for the SRK/T formula, with a mean difference within ±0.5 D for 72 and 86% of eyes, respectively. There were statistically significant differences between the AL, K and WTW measurements obtained with the compared biometers. The flatter corneal curvature measurements on the Pentacam AXL necessitate formula optimisation for this device.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Namikawa, Toshiya
We study the reconstruction of the cosmic rotation power spectrum produced by parity-violating physics, with an eye to ongoing and near-future cosmic microwave background (CMB) experiments such as BICEP Array, CMBS4, LiteBIRD and Simons Observatory. In addition to the inflationary gravitational waves and gravitational lensing, measurements of various other effects on CMB polarization open a new window into the early Universe. One of these is the anisotropy of the cosmic polarization rotation, which probes the Chern-Simons term generally predicted by string theory. Anisotropies of the cosmic rotation are also generated by primordial magnetism and in the Standard Model extension framework. The cosmic rotation anisotropies can be reconstructed from quantities quadratic in the CMB anisotropies. However, the power of the reconstructed cosmic rotation is a CMB four-point correlation and is not directly related to the cosmic-rotation power spectrum. Understanding all contributions to the four-point correlation is required to extract the cosmic rotation signal. Here, assuming inflation-motivated cosmic-rotation models, we employ simulation to quantify each contribution to the four-point correlation and find that (1) a secondary contraction of the trispectrum increases the total signal-to-noise, (2) a bias from the lensing-induced trispectrum is significant compared to the statistical errors in, e.g., LiteBIRD and CMBS4-like experiments, (3) the use of a realization-dependent estimator decreases the statistical errors by 10%-20%, depending on experimental specifications, and (4) other higher-order contributions are negligible, at least for near-future experiments.
Alignment-free sequence comparison (II): theoretical power of comparison statistics.
Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S
2010-11-01
Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy-to-use program. The power is assessed under two alternative hidden Markov models; the first assumes that the two sequences share a common motif, whereas the second is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and are independent of each other. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically, D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power of all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
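The D2 statistic itself reduces to a sum over shared k-tuples of the product of their counts; a minimal sketch on toy sequences (D2* and D2S additionally centralize and standardize these counts):

```python
from collections import Counter

def d2(seq1, seq2, k=5):
    """D2: sum over shared k-tuples of the product of their occurrence counts."""
    c1 = Counter(seq1[i:i + k] for i in range(len(seq1) - k + 1))
    c2 = Counter(seq2[i:i + k] for i in range(len(seq2) - k + 1))
    return sum(c1[w] * c2[w] for w in c1.keys() & c2.keys())

s1 = "ACGTACGTGGCATTACGT"   # toy sequences, not JASPAR motifs
s2 = "TTACGTACGCATGACGTA"
print(f"D2 with k = 5: {d2(s1, s2)}")
```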
Got power? A systematic review of sample size adequacy in health professions education research.
Cook, David A; Hatala, Rose
2015-03-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
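One way to appreciate the headline finding is to invert the power calculation: given roughly the reported median sample size (about 13 per arm for the no-intervention comparisons), the smallest standardized mean difference detectable at 80% power is large. A hedged statsmodels sketch, not the authors' computation:

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the effect size detectable with ~13 participants per arm
# (median total n of 25), two-sided alpha = .05, 80% power.
detectable = TTestIndPower().solve_power(nobs1=13, alpha=0.05, power=0.8)
print(f"minimum detectable SMD ≈ {detectable:.2f}")
```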
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
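A minimal sketch of the simulation approach, under strong simplifying assumptions (one synthetic exposure series, a plain Poisson GLM with no adjustment for time trends or meteorology, and an assumed log-rate coefficient):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

def simulate_power(n_days=365, base_count=40, log_rr_per_unit=0.02,
                   n_sims=500, alpha=0.05):
    """Estimate power by simulating Poisson daily counts tied to a pollutant."""
    pollutant = rng.gamma(4.0, 2.0, size=n_days)      # synthetic exposure series
    mu = base_count * np.exp(log_rr_per_unit * (pollutant - pollutant.mean()))
    X = sm.add_constant(pollutant)
    hits = 0
    for _ in range(n_sims):
        y = rng.poisson(mu)                           # simulated daily counts
        res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        hits += res.pvalues[1] < alpha
    return hits / n_sims

print(f"estimated power: {simulate_power():.2f}")
```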
Rollins, Derrick K; Teh, Ailing
2010-12-17
Microarray data sets provide relative expression levels for thousands of genes for a small number, in comparison, of different experimental conditions called assays. Data mining techniques are used to extract specific information about genes as they relate to the assays. The multivariate statistical technique of principal component analysis (PCA) has proven useful in providing effective data mining methods. This article extends the PCA approach of Rollins et al. to the development of a method for ranking the genes of a microarray data set that are expressed most differently between two biologically different groupings of assays. This method is evaluated on real and simulated data and compared to a current approach on the basis of false discovery rate (FDR) and statistical power (SP), which is the ability to correctly identify important genes. This work developed and evaluated two new test statistics based on PCA and compared them to a popular method that is not PCA based. Both test statistics were found to be effective as evaluated in three case studies: (i) exposing E. coli cells to two different ethanol levels; (ii) application of myostatin to two groups of mice; and (iii) a simulated data study derived from the properties of (ii). The proposed method (PM) effectively identified critical genes in these studies based on comparison with the current method (CM). The simulation study supports higher identification accuracy for PM over CM for both proposed test statistics when the gene variance is constant, and for one of the test statistics when the gene variance is non-constant. PM compares quite favorably to CM in terms of lower FDR and much higher SP. Thus, PM can be quite effective in producing accurate signatures from large microarray data sets for differential expression between assay groups identified in a preliminary step of the PCA procedure and is, therefore, recommended for use in these applications.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
Webster, R J; Williams, A; Marchetti, F; Yauk, C L
2018-07-01
Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and a mixed effect model sampling between two and four siblings per family). Assumptions were based on parameters from the existing literature, such as the mutation-by-paternal-age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal-age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4 to 28 four-sibling families per treatment group when the increase in mutations ranges from 40% down to 10%, respectively. Modeling family variability using mixed effect models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Perles, Stephanie J.; Wagner, Tyler; Irwin, Brian J.; Manning, Douglas R.; Callahan, Kristina K.; Marshall, Matthew R.
2014-01-01
Forests are socioeconomically and ecologically important ecosystems that are exposed to a variety of natural and anthropogenic stressors. As such, monitoring forest condition and detecting temporal changes therein remain critical to sound public and private forestland management. The National Park Service's Vital Signs monitoring program collects information on many forest health indicators, including species richness, cover by exotics, browse pressure, and forest regeneration. We applied a mixed-model approach to partition variability in data for 30 forest health indicators collected from several national parks in the eastern United States. We then used the estimated variance components in a simulation model to evaluate trend detection capabilities for each indicator. We investigated the extent to which the following factors affected the ability to detect trends: (a) sample design: using a simple panel versus a connected panel design; (b) effect size: increasing trend magnitude; (c) sample size: varying the number of plots sampled each year; and (d) stratified sampling: post-stratifying plots into vegetation domains. Statistical power varied among indicators; however, indicators that measured the proportion of a total yielded higher power than indicators that measured absolute or average values. In addition, the total variability of an indicator appeared to influence the power to detect temporal trends more than how the total variance was partitioned among spatial and temporal sources. Based on these analyses and the monitoring objectives of the Vital Signs program, the current sampling design is likely overly intensive for detecting a 5% trend per year for all indicators, and it is appropriate for detecting a 1% trend per year in most indicators.
Comparison of Time-to-First Event and Recurrent Event Methods in Randomized Clinical Trials.
Claggett, Brian; Pocock, Stuart; Wei, L J; Pfeffer, Marc A; McMurray, John J V; Solomon, Scott D
2018-03-27
Background: Most Phase-3 trials feature time-to-first-event endpoints for their primary and/or secondary analyses. In chronic diseases where a clinical event can occur more than once, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared to conventional "time-to-first" methods. Methods: To better characterize factors that influence the statistical properties of recurrent-events and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active vs control therapy, with a true patient-level risk reduction of 20% (i.e., RR=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied (a) the degree of between-patient heterogeneity of risk and (b) the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk was increased, both time-to-first and recurrent-events methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-events analyses continued to estimate the true RR=0.80 as heterogeneity increased, while the Cox model produced estimates that were attenuated. The power of recurrent-events methods declined as the rate of study drug discontinuation post-event increased. Recurrent-events methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% following a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials in chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that the increased statistical power from recurrent-events methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-events and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with the simulation studies, confirming that the greatest statistical gains from use of recurrent-events methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
A Note on Comparing the Power of Test Statistics at Low Significance Levels.
Morris, Nathan; Elston, Robert
2011-01-01
It is an obvious fact that the power of a test statistic depends upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as α = 5 × 10^-8, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies that use alpha levels that will not be used in practice.
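The degrees-of-freedom point can be made concrete with noncentral chi-square power at a traditional versus a genome-wide alpha (the noncentrality value is illustrative, not taken from the note):

```python
from scipy.stats import chi2, ncx2

nc = 30.0                                        # assumed noncentrality
for alpha in (0.05, 5e-8):
    for df in (1, 2):
        crit = chi2.ppf(1 - alpha, df)           # critical value under H0
        power = ncx2.sf(crit, df, nc)            # power under H1
        print(f"alpha = {alpha:g}, df = {df}: power = {power:.3f}")
# The power gap between df = 1 and df = 2 changes with alpha, so rankings
# at alpha = 0.05 need not carry over to genome-wide levels.
```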
Brown, Alaina J; Shen, Megan Johnson; Urbauer, Diana; Taylor, Jolyn; Parker, Patricia A; Carmack, Cindy; Prescott, Lauren; Kolawole, Elizabeth; Rosemore, Carly; Sun, Charlotte; Ramondetta, Lois; Bodurka, Diane C
2016-09-01
The goals of this study were: (1) to evaluate patients' knowledge regarding advance directives and completion rates of advance directives among gynecologic oncology patients and (2) to examine the association between death anxiety, disease symptom burden, and patient initiation of advance directives. 110 gynecologic cancer patients were surveyed regarding their knowledge and completion of advance directives. Patients also completed the MD Anderson Symptom Inventory (MDASI) scale and Templer's Death Anxiety Scale (DAS). Descriptive statistics were utilized to examine characteristics of the sample. Fisher's exact tests and 2-sample t-tests were utilized to examine associations between key variables. Most patients were white (76.4%) and had ovarian (46.4%) or uterine cancer (34.6%). Nearly half (47.0%) had recurrent disease. The majority of patients had heard about advance directives (75%). Only 49% had completed a living will or medical power of attorney. Older patients and those with a higher level of education were more likely to have completed an advance directive (p<0.01). Higher MDASI Interference Score (higher symptom burden) was associated with patients being less likely to have a living will or medical power of attorney (p=0.003). Higher DAS score (increased death anxiety) was associated with patients being less likely to have completed a living will or medical power of attorney (p=0.03). Most patients were familiar with advance directives, but less than half had created these documents. Young age, lower level of education, disease-related interference with daily activities, and a higher level of death anxiety were associated with decreased rates of advance directive completion, indicating these may be barriers to advance care planning documentation. Young patients, less educated patients, patients with increased disease symptom burden, and patients with increased death anxiety should be targeted for advance care planning discussions as they may be less likely to engage in advance care planning. Copyright © 2016. Published by Elsevier Inc.
Model-independent test for scale-dependent non-Gaussianities in the cosmic microwave background.
Räth, C; Morfill, G E; Rossmanith, G; Banday, A J; Górski, K M
2009-04-03
We present a model-independent method to test for scale-dependent non-Gaussianities, in combination with scaling indices as test statistics. To this end, surrogate data sets are generated in which the power spectrum of the original data is preserved, while the higher-order correlations are partly randomized by applying a scale-dependent shuffling procedure to the Fourier phases. We apply this method to the Wilkinson Microwave Anisotropy Probe data of the cosmic microwave background and find signatures of non-Gaussianities on large scales. Further tests are required to elucidate the origin of the detected anomalies.
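A basic phase-randomization surrogate, preserving the power spectrum while scrambling Fourier phases, can be sketched in one dimension as follows; the paper's procedure additionally shuffles phases in a scale-dependent way on the sphere, which this toy version omits:

```python
import numpy as np

def phase_shuffled_surrogate(data, rng):
    """Surrogate with the same power spectrum but randomized Fourier phases."""
    F = np.fft.rfft(data)
    phases = rng.uniform(0, 2 * np.pi, size=F.size)
    phases[0] = 0.0                              # keep the mean real
    if data.size % 2 == 0:
        phases[-1] = 0.0                         # keep the Nyquist bin real
    return np.fft.irfft(np.abs(F) * np.exp(1j * phases), n=data.size)

rng = np.random.default_rng(7)
x = np.cumsum(rng.normal(size=1024))             # toy 1-D "map"
s = phase_shuffled_surrogate(x, rng)
# Power spectra agree away from the DC and Nyquist bins:
print(np.allclose(np.abs(np.fft.rfft(x))[1:-1], np.abs(np.fft.rfft(s))[1:-1]))
```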
Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.
1998-01-01
Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm²) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or for Chironomidae and Musculium in both strata, given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpson's) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.
PPM/NAR 8.4-GHz noise temperature statistics for DSN 64-meter antennas, 1982-1984
NASA Technical Reports Server (NTRS)
Slobin, S. D.; Andres, E. M.
1986-01-01
From August 1982 through November 1984, X-band downlink (8.4-GHz) system noise temperature measurements were made on the DSN 64-m antennas during tracking periods. Statistics of these noise temperature values are needed by the DSN and by spacecraft mission planners to assess antenna, receiving, and telemetry system needs, present performance, and future performance. These measurements were made using the DSN Mark III precision power monitor noise-adding radiometers located at each station. It is found that for DSS 43 and DSS 63, at the 90% cumulative distribution level, equivalent zenith noise temperature values fall between those presented in the earlier (1977) and present (1983) versions of DSN/Flight Project design documents. Noise temperatures measured for DSS 14 (Goldstone) are higher than those given in existing design documents and this disagreement will be investigated as a diagnostic of possible PPM or receiving system performance problems.
New insights into old methods for identifying causal rare variants.
Wang, Haitian; Huang, Chien-Hsun; Lo, Shaw-Hwa; Zheng, Tian; Hu, Inchi
2011-11-29
The advance of high-throughput next-generation sequencing technology makes possible the analysis of rare variants. However, the investigation of rare variants in unrelated-individuals data sets faces the challenge of low power, and most methods circumvent the difficulty by using various collapsing procedures based on genes, pathways, or gene clusters. We suggest a new way to identify causal rare variants using the F-statistic and sliced inverse regression. The procedure is tested on the data set provided by the Genetic Analysis Workshop 17 (GAW17). After preliminary data reduction, we ranked markers according to their F-statistic values. Top-ranked markers were then subjected to sliced inverse regression, and those with higher absolute coefficients in the most significant sliced inverse regression direction were selected. The procedure yields good false discovery rates for the GAW17 data and thus is a promising method for future study on rare variants.
Goldberg, J M; Lindblom, U
1979-01-01
Vibration threshold determinations were made by means of an electromagnetic vibrator at three sites (carpal, tibial, and tarsal), which were primarily selected for examining patients with polyneuropathy. Because of the vast variation demonstrated for both vibrator output and tissue damping, the thresholds were expressed in terms of amplitude of stimulator movement measured by means of an accelerometer, instead of applied voltage, which is commonly used. Statistical analysis revealed a higher power of discrimination for amplitude measurements at all three stimulus sites. Digital read-out gave the best statistical result and was also most practical. Reference values obtained from 110 healthy males, 10 to 74 years of age, were highly correlated with age for both upper and lower extremities. The variance of the vibration perception threshold was less than that of the disappearance threshold, and determination of the perception threshold alone may be sufficient in most cases. PMID:501379
Vorticity and divergence in the solar photosphere
NASA Technical Reports Server (NTRS)
Wang, YI; Noyes, Robert W.; Tarbell, Theodore D.; Title, Alan M.
1995-01-01
We have studied an outstanding sequence of continuum images of the solar granulation from Pic du Midi Observatory. We have calculated the horizontal vector flow field using a correlation tracking algorithm, and from this determined three scalar fields: the vertical component of the curl; the horizontal divergence; and the horizontal flow speed. The divergence field has a substantially longer coherence time and more power than does the curl field. Statistically, curl is better correlated with regions of negative divergence - that is, the vertical vorticity is higher in downflow regions, suggesting excess vorticity in intergranular lanes. The average value of the divergence is largest (i.e., outflow is largest) where the horizontal speed is large; we associate these regions with exploding granules. A numerical simulation of general convection also shows similar statistical differences between curl and divergence. Some individual small bright points in the granulation pattern show large local vorticities.
Bodapati, Rohan K; Kizer, Jorge R; Kop, Willem J; Kamel, Hooman; Stein, Phyllis K
2017-07-21
Heart rate variability (HRV) characterizes cardiac autonomic functioning. The association of HRV with stroke is uncertain. We examined whether 24-hour HRV added predictive value to the Cardiovascular Health Study clinical stroke risk score (CHS-SCORE), previously developed at the baseline examination. N=884 stroke-free CHS participants (age 75.3±4.6), with 24-hour Holters adequate for HRV analysis at the 1994-1995 examination, had 68 strokes over ≤8-year follow-up (median 7.3 [interquartile range 7.1-7.6] years). The value of adding HRV to the CHS-SCORE was assessed with stepwise Cox regression analysis. The CHS-SCORE predicted incident stroke (HR=1.06 per unit increment, P=0.005). Two HRV parameters, decreased coefficient of variance of NN intervals (CV%, P=0.031) and decreased power law slope (SLOPE, P=0.033), also entered the model, but these did not significantly improve the c-statistic (P=0.47). In a secondary analysis, dichotomization of CV% (LOWCV% ≤12.8%) was found to maximally stratify higher-risk participants after adjustment for CHS-SCORE. Similarly, dichotomizing SLOPE (LOWSLOPE <-1.4) maximally stratified higher-risk participants. When these HRV categories were combined (e.g., HIGHCV% with HIGHSLOPE), the c-statistic for the model with the CHS-SCORE and combined HRV categories was 0.68, significantly higher than 0.61 for the CHS-SCORE alone (P=0.02). In this sample of older adults, 2 HRV parameters, CV% and power law slope, emerged as significantly associated with incident stroke when added to a validated clinical risk score. After each parameter was dichotomized based on its optimal cut point in this sample, their composite significantly improved prediction of incident stroke during ≤8-year follow-up. These findings will require validation in separate, larger cohorts. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
1990-03-01
[OCR-damaged fragment from a 1990 report on statistical energy analysis (SEA); only partial text is recoverable.] The text derives an SEA equation using the procedure indicated in equation (13) [8, 9], and cites, among others: J. M. Cuschieri, 'Power flow as a complement to statistical energy analysis' (12th International Congress on Acoustics, Toronto, July 24-31, 1986, paper D6-1); 'Random response of identical one-dimensional subsystems', Journal of Sound and Vibration, 1980, Vol. 70, pp. 343-353; and R. H. Lyon, Statistical Energy Analysis of…
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
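A toy sketch of the two estimator families discussed above (not the HRMS hardware design), assuming exponentially distributed spectral power bins: an order-statistic (median) estimator, and a single-pass threshold-and-count estimator that inverts the exceedance fraction.

    # Order-statistic vs. threshold-and-count noise power estimation,
    # assuming exponentially distributed power spectrum bins (illustrative).
    import numpy as np

    rng = np.random.default_rng(1)
    true_power = 2.5
    x = rng.exponential(true_power, size=4096)   # simulated power spectrum bins

    # Order statistic: for an exponential, median = p * ln(2)
    p_order = np.median(x) / np.log(2)

    # Threshold-and-count: P(X > T) = exp(-T/p)  =>  p = -T / ln(count/N)
    T = 3.0
    frac = np.count_nonzero(x > T) / x.size
    p_count = -T / np.log(frac)

    print(f"median-based: {p_order:.2f}, threshold-and-count: {p_count:.2f}")

Several such threshold-and-count devices with different thresholds T can be run in parallel to cover a wider dynamic range, which is the idea the abstract describes.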
Palmisano, Aldo N.; Elder, N.E.
2001-01-01
We examined, under standardized conditions, seawater survival of chinook salmon Oncorhynchus tshawytscha at the smolt stage to evaluate the experimental hatchery practices applied to their rearing. The experimental rearing practices included rearing fish at different densities; attempting to control bacterial kidney disease with broodstock segregation, erythromycin injection, and an experimental diet; rearing fish on different water sources; and freeze branding the fish. After application of experimental rearing practices in hatcheries, smolts were transported to a rearing facility for about 2-3 months of seawater rearing. Of 16 experiments, 4 yielded statistically significant differences in seawater survival. In general we found that high variability among replicates, plus the low numbers of replicates available, resulted in low statistical power. We recommend including four or five replicates and using α = 0.10 in one-tailed tests of hatchery experiments to try to increase the statistical power to 0.80.
Jin, Meihua; Jung, Ji-Young; Lee, Jung-Ryun
2016-10-12
With the arrival of the era of the Internet of Things (IoT), Wi-Fi Direct is becoming an emerging wireless technology that allows mobile devices to communicate through a direct connection anytime, anywhere. In Wi-Fi Direct-based IoT networks, every device is categorized as either group owner (GO) or client. Since portability is emphasized in Wi-Fi Direct devices, it is essential to control the energy consumption of a device very efficiently. In order to avoid unnecessary power consumption by the GO, the Wi-Fi Direct standard defines two power-saving methods: the Opportunistic and Notice of Absence (NoA) power-saving methods. In this paper, we suggest an algorithm to enhance the energy efficiency of Wi-Fi Direct power saving, considering the characteristics of multimedia video traffic. The proposed algorithm utilizes the statistical distribution of video frame sizes and dynamically adjusts the lengths of the awake intervals within a beacon interval. In addition, considering the inter-dependency among video frames, the proposed algorithm ensures that a video frame having high priority is transmitted with higher probability than other frames having low priority. Simulation results show that the proposed method outperforms the traditional NoA method in terms of average delay and energy efficiency.
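A sketch of the central idea under stated assumptions (the constants, names, and quantile policy below are hypothetical, not from the paper or the Wi-Fi Direct standard): size the GO's awake window from a quantile of the empirical video frame-size distribution, with a higher service quantile for high-priority frames.

    # Hypothetical awake-window sizing from a frame-size distribution.
    import numpy as np

    rng = np.random.default_rng(2)
    frame_bytes = rng.lognormal(mean=9.0, sigma=0.6, size=1000)  # simulated frames

    LINK_RATE_BPS = 54e6                         # assumed PHY rate
    BEACON_INTERVAL_MS = 100.0
    SERVICE_QUANTILE = {'I': 0.99, 'PB': 0.90}   # I-frames served with higher prob.

    def awake_window_ms(priority):
        """Awake window long enough to send the q-quantile frame at the link rate."""
        bits = 8.0 * np.quantile(frame_bytes, SERVICE_QUANTILE[priority])
        return min(BEACON_INTERVAL_MS, 1e3 * bits / LINK_RATE_BPS)

    print(f"I-frame window:   {awake_window_ms('I'):.3f} ms")
    print(f"P/B-frame window: {awake_window_ms('PB'):.3f} ms")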
Marino, Michael J
2018-05-01
There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
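The arithmetic behind this argument can be made explicit with the standard false-discovery-rate calculation (the 10% prior probability of a true hypothesis below is an illustrative assumption):

    # Expected FDR given alpha, power, and the prior probability that a
    # tested hypothesis is true (prior_true = 0.10 is illustrative).
    def false_discovery_rate(alpha, power, prior_true):
        """Expected false positives / expected total positives."""
        false_pos = alpha * (1.0 - prior_true)
        true_pos = power * prior_true
        return false_pos / (false_pos + true_pos)

    for pwr in (0.20, 0.50, 0.80):
        fdr = false_discovery_rate(alpha=0.05, power=pwr, prior_true=0.10)
        print(f"power={pwr:.2f} -> expected FDR={fdr:.0%}")

Under these assumptions the expected FDR falls from about 69% at 20% power to about 36% at 80% power, which is why the abstract ties the correction to a minimum-power requirement.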
NASA Astrophysics Data System (ADS)
Kumar, Jagadish; Ananthakrishna, G.
2018-01-01
Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands, with the exponent values increasing with increasing strain rate. The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands, decreasing for types B and A. We further show that the acoustic emission signals associated with the Lüders-like band also exhibit a power-law distribution and multifractality.
Statistical issues on the analysis of change in follow-up studies in dental research.
Blance, Andrew; Tu, Yu-Kang; Baelum, Vibeke; Gilthorpe, Mark S
2007-12-01
To provide an overview of the problems in study design and associated analyses of follow-up studies in dental research, particularly addressing three issues: treatment-baseline interactions; statistical power; and nonrandomization. Our previous work has shown that many studies purport an interaction between change (from baseline) and baseline values, which is often based on inappropriate statistical analyses. A priori power calculations are essential for randomized controlled trials (RCTs), but in the pre-test/post-test RCT design it is not well known to dental researchers that the choice of statistical method affects power, and that power is affected by treatment-baseline interactions. A common (good) practice in the analysis of RCT data is to adjust for baseline outcome values using ANCOVA, thereby increasing statistical power. However, an important requirement for ANCOVA is that there be no interaction between the groups and baseline outcome (i.e. effective randomization); the patient-selection process should not cause differences in mean baseline values across groups. This assumption is often violated for nonrandomized (observational) studies and the use of ANCOVA is thus problematic, potentially giving biased estimates, invoking Lord's paradox and leading to difficulties in the interpretation of results. Baseline interaction issues can be overcome by the use of statistical methods not widely practised in dental research: Oldham's method and multilevel modelling; the latter is preferred for its greater flexibility to deal with more than one follow-up occasion as well as additional covariates. To illustrate these three key issues, hypothetical examples are considered from the fields of periodontology, orthodontics, and oral implantology. Caution needs to be exercised when considering the design and analysis of follow-up studies. ANCOVA is generally inappropriate for nonrandomized studies, and causal inferences from observational data should be avoided.
Robust Statistical Detection of Power-Law Cross-Correlation.
Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert
2016-06-02
We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram.
Fractal analysis of the short time series in a visibility graph method
NASA Astrophysics Data System (ADS)
Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan
2016-05-01
The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, time series of fractional Brownian motion (fBm), characterized by different Hurst exponents H, are simulated and then mapped into scale-free visibility graphs, whose degree distributions show the power-law form. The maximum likelihood estimation (MLE) is applied to estimate power-law indexes of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the performance of the estimation of the power-law index, aiming to avoid the influence of the drooping head and heavy tail in the degree distribution. As a result, we find that the MLE gives an optimal estimation of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE and KS statistics is proposed to estimate Hurst exponents from short time series. Lastly, this paper also offers an example to verify the effectiveness of the combined method. In addition, the corresponding results show that the VG can provide a reliable estimation of Hurst exponents.
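A condensed sketch of this pipeline (simplified in two ways: a continuous-variable MLE is applied to the discrete degrees, and the KS scan takes the global rather than the first local minimum):

    import numpy as np

    def visibility_degrees(x):
        """Degree sequence of the natural visibility graph of series x."""
        n = len(x)
        deg = np.zeros(n, dtype=int)
        for a in range(n):
            for b in range(a + 1, n):
                t = np.arange(a + 1, b)
                # (a, b) are visible if no intermediate point reaches the chord
                if b == a + 1 or np.all(x[t] < x[a] + (x[b] - x[a]) * (t - a) / (b - a)):
                    deg[a] += 1
                    deg[b] += 1
        return deg

    def fit_power_law(deg):
        """MLE of the power-law index, scanning k_min with the KS statistic."""
        best = (np.inf, np.nan, 0)
        for k_min in np.unique(deg[deg > 0]):
            tail = np.sort(deg[deg >= k_min]).astype(float)
            s = np.sum(np.log(tail / k_min))
            if len(tail) < 10 or s == 0.0:
                continue
            alpha = 1.0 + len(tail) / s
            cdf_emp = np.arange(1, len(tail) + 1) / len(tail)
            cdf_fit = 1.0 - (tail / k_min) ** (1.0 - alpha)
            ks = np.max(np.abs(cdf_emp - cdf_fit))
            if ks < best[0]:
                best = (ks, alpha, k_min)
        return best

    rng = np.random.default_rng(3)
    x = np.cumsum(rng.normal(size=500))        # Brownian motion, H = 0.5
    ks, alpha, k_min = fit_power_law(visibility_degrees(x))
    print(f"power-law index {alpha:.2f} at k_min={k_min} (KS={ks:.3f})")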
Statistics Report on TEQSA Registered Higher Education Providers, 2017
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2017
2017-01-01
The "Statistics Report on TEQSA Registered Higher Education Providers" ("the Statistics Report") is the fourth release of selected higher education sector data held by is the fourth release of selected higher education sector data held by the Australian Government Tertiary Education Quality and Standards Agency (TEQSA) for its…
Ventura, Bruna V; Wang, Li; Ali, Shazia F; Koch, Douglas D; Weikert, Mitchell P
2015-08-01
To evaluate and compare the performance of a point-source color light-emitting diode (LED)-based topographer (color-LED) in measuring anterior corneal power and aberrations with that of a Placido-disk topographer and a combined Placido and dual Scheimpflug device. Cullen Eye Institute, Baylor College of Medicine, Houston, Texas, USA. Retrospective observational case series. Normal eyes and post-refractive-surgery eyes were consecutively measured using the color-LED, Placido, and dual Scheimpflug devices. The main outcome measures were anterior corneal power, astigmatism, and higher-order aberrations (HOAs) (6.0 mm pupil), which were compared using the t test. There were no statistically significant differences in corneal power measurements in normal and post-refractive surgery eyes, or in astigmatism magnitude in post-refractive surgery eyes, between the color-LED device and the Placido or dual Scheimpflug devices (all P > .05). In normal eyes, there were no statistically significant differences in 3rd-order coma and 4th-order spherical aberration between the color-LED and Placido devices, or in HOA root mean square, 3rd-order coma, 3rd-order trefoil, 4th-order spherical aberration, and 4th-order secondary astigmatism between the color-LED and dual Scheimpflug devices (all P > .05). In post-refractive surgery eyes, the color-LED device agreed with the Placido and dual Scheimpflug devices regarding 3rd-order coma and 4th-order spherical aberration (all P > .05). In normal and post-refractive surgery eyes, all 3 devices were comparable with respect to corneal power. The agreement in corneal aberrations varied. Drs. Wang, Koch, and Weikert are consultants to Ziemer Ophthalmic Systems AG. Dr. Koch is a consultant to Abbott Medical Optics, Inc., Alcon Surgical, Inc., and i-Optics Corp. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Petri, Andrea; May, Morgan; Haiman, Zoltán
2016-09-30
Weak gravitational lensing is becoming a mature technique for constraining cosmological parameters, and future surveys will be able to constrain the dark energy equation of state w. When analyzing galaxy surveys, redshift information has proven to be a valuable addition to angular shear correlations. We forecast parameter constraints on the triplet (Ωm, w, σ8) for an LSST-like photometric galaxy survey, using tomography of the shear-shear power spectrum, convergence peak counts and higher convergence moments. Here we find that redshift tomography with the power spectrum reduces the area of the 1σ confidence interval in (Ωm, w) space by a factor of 8 with respect to the case of the single highest redshift bin. We also find that adding non-Gaussian information from the peak counts and higher-order moments of the convergence field and its spatial derivatives further reduces the constrained area in (Ωm, w) by factors of 3 and 4, respectively. When we add cosmic microwave background parameter priors from Planck to our analysis, tomography improves power spectrum constraints by a factor of 3. Adding moments yields an improvement by an additional factor of 2, and adding both moments and peaks improves by almost a factor of 3 over power spectrum tomography alone. We evaluate the effect of uncorrected systematic photometric redshift errors on the parameter constraints. In conclusion, we find that different statistics lead to different bias directions in parameter space, suggesting the possibility of eliminating this bias via self-calibration.
Variability of ULF wave power at the magnetopause: a study at low latitude with Cluster data
NASA Astrophysics Data System (ADS)
Cornilleau-Wehrlin, N.; Grison, B.; Belmont, G.; Rezeau, L.; Chanteur, G.; Robert, P.; Canu, P.
2012-04-01
Strong ULF wave activity has been observed at magnetopause crossings for a long time. These turbulent-like waves are possible contributors to particle penetration from the solar wind to the magnetosphere through the magnetopause. Statistical studies have been performed to understand under which conditions the ULF wave power is most intense and thus the waves most efficient for particle transport from one region to the other. Clearly the solar wind pressure organizes the data: the stronger the pressure, the higher the ULF power (Attié et al., 2008). A Double Star-Cluster comparison has shown that ULF wave power is stronger at low latitude than at high latitude (Cornilleau-Wehrlin et al., 2008). The different studies performed have not, up to now, shown a stronger power in the vicinity of local noon. Nevertheless, under identical activity conditions, the variability of this power, even at a given location in latitude and local time, is very high. The present work aims to understand this variability by means of the multi-spacecraft Cluster mission. The data used are from spring 2008, when Cluster was crossing the magnetopause at low latitude, in particularly quiet solar wind conditions. The first region of interest of this study is the vicinity of the sub-solar point, where long-wavelength surface-wave effects are most unlikely.
ASSESSMENT OF OXIDATIVE STRESS IN EARLY AND LATE ONSET PRE-ECLAMPSIA AMONG GHANAIAN WOMEN.
Tetteh, P W; Adu-Bonsaffoh, K; Antwi-Boasiako, C; Antwi, D A; Gyan, B; Obed, S A
2015-01-01
Pre-eclampsia is a multisystem pregnancy-related disorder with multiple theories regarding its aetiology, resulting in a lack of reliable screening tests and well-established measures for primary prevention. However, oxidative stress is increasingly being implicated in the pathogenesis of pre-eclampsia, although conflicting findings have been reported. To determine and compare the levels of oxidative stress in early and late onset pre-eclampsia by measuring urinary excretion of isoprostane and total antioxidant power (TAP) in a cohort of pre-eclamptic women at Korle Bu Teaching Hospital. This was a cross-sectional study conducted at Korle-Bu Teaching Hospital, Accra, Ghana involving pre-eclamptic women between the ages of 18 and 45 years who gave written informed consent. Urinary isoprostane levels were determined using an enzyme-linked immunosorbent assay (ELISA) kit, whereas the total antioxidant power in urine samples was determined using a Total Antioxidant Power Colorimetric Microplate Assay kit. The data obtained were analyzed using the MEGASTAT statistical software package. We included 102 pre-eclamptic women comprising 68 (66.7%) and 34 (33.3%) with early-onset and late-onset pre-eclampsia respectively. There were no statistically significant differences between the mean maternal age, haematological indices, serum ALT, AST, albumin, urea, creatinine, uric acid and total protein at the time of diagnosis. The mean gestational ages at diagnosis of early and late onset pre-eclampsia were 31.65 ± 0.41 and 38.03 ± 0.21 weeks respectively (p < 0.001). Also, there were statistically significant differences between the diastolic blood pressure (BP), systolic BP and mean arterial pressure (MAP) at diagnosis of pre-eclampsia in the two categories. The mean urinary isoprostane excretion was significantly higher in the early onset pre-eclamptic group (3.04 ± 0.34 ng/mg Cr) compared to that of the late onset pre-eclamptic group (2.36 ± 0.45 ng/mg Cr) (p=0.019). Urinary total antioxidant power (TAP) in early onset PE (1.64 ± 0.06) was lower but not significantly different from that of late onset PE (1.74 ± 0.09), with p = 0.369. Significantly increased urinary isoprostane excretion was detected in early onset pre-eclampsia compared to late onset pre-eclampsia, suggestive of increased oxidative stress in the former. However, there was no significant difference in total antioxidant power between the two categories of pre-eclampsia women, although there was a tendency toward reduced total antioxidant power in the women with early onset pre-eclampsia.
Colegrave, Nick
2017-01-01
A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term into the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing, on the basis of fitting the data to a previous, more complicated model, provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912
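A small simulation in the spirit of this argument (a sketch, not the paper's code): in a 2 x 2 design with no true effects, the interaction term is pooled into error whenever it is non-significant, and the rejection rate for the main effect of A is compared with the nominal 5%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(4)
    reps, n_cell = 2000, 5
    hits = 0
    for _ in range(reps):
        y = rng.normal(size=(2, 2, n_cell))        # 2x2 design, all effects null
        cell = y.mean(axis=2)
        grand = y.mean()
        ss_a = 2 * n_cell * np.sum((cell.mean(axis=1) - grand) ** 2)
        ss_ab = n_cell * np.sum((cell - cell.mean(axis=1, keepdims=True)
                                 - cell.mean(axis=0, keepdims=True) + grand) ** 2)
        ss_err = np.sum((y - cell[..., None]) ** 2)
        df_err = 2 * 2 * (n_cell - 1)
        # Test-qualified pooling: drop the interaction when non-significant
        p_ab = stats.f.sf(ss_ab / (ss_err / df_err), 1, df_err)
        if p_ab > 0.05:
            ss_err, df_err = ss_err + ss_ab, df_err + 1
        p_a = stats.f.sf(ss_a / (ss_err / df_err), 1, df_err)
        hits += p_a < 0.05
    print(f"empirical type I error for the main effect of A: {hits / reps:.3f}")

Any drift of the printed rate away from 0.050 illustrates the reliability cost the authors describe, while the small size of the drift illustrates how little power is gained.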
Zhang, Guosheng; Huang, Kuan-Chieh; Xu, Zheng; Tzeng, Jung-Ying; Conneely, Karen N; Guan, Weihua; Kang, Jian; Li, Yun
2016-05-01
DNA methylation is a key epigenetic mark involved in both normal development and disease progression. Recent advances in high-throughput technologies have enabled genome-wide profiling of DNA methylation. However, DNA methylation profiling often employs different designs and platforms with varying resolution, which hinders joint analysis of methylation data from multiple platforms. In this study, we propose a penalized functional regression model to impute missing methylation data. By incorporating functional predictors, our model utilizes information from nonlocal probes to improve imputation quality. Here, we compared the performance of our functional model to linear regression and the best single probe surrogate in real data and via simulations. Specifically, we applied different imputation approaches to an acute myeloid leukemia dataset consisting of 194 samples and our method showed higher imputation accuracy, manifested, for example, by a 94% relative increase in information content and up to 86% more CpG sites passing post-imputation filtering. Our simulated association study further demonstrated that our method substantially improves the statistical power to identify trait-associated methylation loci. These findings indicate that the penalized functional regression model is a convenient and valuable imputation tool for methylation data, and it can boost statistical power in downstream epigenome-wide association study (EWAS). © 2016 WILEY PERIODICALS, INC.
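A deliberately simplified sketch of imputation from neighboring probes (ridge regression stands in for the paper's penalized functional regression; all data are simulated):

    # Impute a missing probe from flanking probes; the penalty lets many
    # nonlocal neighbors contribute while controlling overfitting.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(8)
    n, p = 194, 41                       # samples x probes in a local window
    base = rng.normal(size=(n, 1))
    X = base + 0.5 * rng.normal(size=(n, p))   # spatially correlated methylation
    y = X[:, p // 2]                     # target probe (to be imputed)
    X = np.delete(X, p // 2, axis=1)     # its neighbors

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = Ridge(alpha=10.0).fit(X_tr, y_tr)
    print(f"imputation R^2 on held-out samples: {model.score(X_te, y_te):.2f}")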
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
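A simplified sketch of the filtering idea (the rank-sum scoring below is a stand-in for the published filter statistic, which uses eigenvalue-aware association p-values): score each gene set by how strongly its members load on the top sample principal components, and filter out sets that look no different from random genes.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(5)
    n_samples, n_genes = 50, 1000
    X = rng.normal(size=(n_samples, n_genes))
    X[:, :40] += rng.normal(size=(n_samples, 1)) * 2.0   # genes 0-39 share a factor

    # Sample PCs via SVD of the centered expression matrix
    Xc = X - X.mean(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    loadings = (Vt[:3] ** 2).sum(axis=0)       # energy of each gene on top 3 PCs

    def filter_pvalue(gene_idx):
        """Rank-sum test: are the set's PC loadings larger than other genes'?"""
        others = np.setdiff1d(np.arange(n_genes), gene_idx)
        return stats.mannwhitneyu(loadings[gene_idx], loadings[others],
                                  alternative='greater').pvalue

    print(f"structured set: p={filter_pvalue(np.arange(40)):.3g}")
    print(f"random set:     p={filter_pvalue(rng.choice(n_genes, 40, False)):.3g}")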
A statistical spatial power spectrum of the Earth's lithospheric magnetic field
NASA Astrophysics Data System (ADS)
Thébault, E.; Vervelidou, F.
2015-05-01
The magnetic field of the Earth's lithosphere arises from rock magnetization contrasts that were shaped over geological times. The field can be described mathematically in spherical harmonics or with distributions of magnetization. We exploit this dual representation and assume that the lithospheric field is induced by spatially varying susceptibility values within a shell of constant thickness. By introducing a statistical assumption about the power spectrum of the susceptibility, we then derive a statistical expression for the spatial power spectrum of the crustal magnetic field for spatial scales ranging from 60 to 2500 km. This expression depends on the mean induced magnetization, the thickness of the shell, and a power-law exponent for the power spectrum of the susceptibility. We test the relevance of this form with a misfit analysis to the observational NGDC-720 lithospheric magnetic field model power spectrum. This allows us to estimate, at the 95 per cent confidence level, a mean global apparent induced magnetization value between 0.3 and 0.6 A m-1, a mean magnetic crustal thickness value between 23 and 30 km, and a root mean square field value between 190 and 205 nT. These estimates are in good agreement with independent models of the crustal magnetization and of the seismic crustal thickness. We carry out the same analysis in the continental and oceanic domains separately. We complement the misfit analyses with a Kolmogorov-Smirnov goodness-of-fit test and conclude that the observed power spectrum can in each case be regarded as a sample of the statistical one.
Statistical power and effect sizes of depression research in Japan.
Okumura, Yasuyuki; Sakamoto, Shinji
2011-06-01
Few studies have been conducted on the rationales for using interpretive guidelines for effect size, and most previous statistical power surveys have covered broad research domains. The present study aimed to estimate the statistical power of, and to obtain realistic target effect sizes for, depression research in Japan. We systematically reviewed 18 leading journals of psychiatry and psychology in Japan and identified 974 depression studies that were mentioned in 935 articles published between 1990 and 2006. In 392 studies, logistic regression analyses revealed that using clinical populations was independently associated with a statistical power of <0.80 (odds ratio 5.9, 95% confidence interval 2.9-12.0) and of <0.50 (odds ratio 4.9, 95% confidence interval 2.3-10.5). Of the studies using clinical populations, 80% did not achieve a power of 0.80 or more, and 44% did not achieve a power of 0.50 or more to detect medium population effect sizes. A predictive model for the proportion of variance explained was developed using a linear mixed-effects model. The model was then used to obtain realistic target effect sizes for defined study characteristics. In the face of a real difference or correlation in the population, many depression researchers are less likely to give a valid result than simply tossing a coin. It is important to educate depression researchers in order to enable them to conduct an a priori power analysis. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
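The a priori power analysis the authors call for can be as short as the following sketch, here for detecting a medium correlation (r = 0.3, an assumed target) via the Fisher z-transformation:

    import numpy as np
    from scipy.stats import norm

    def n_for_correlation(r, alpha=0.05, power=0.80):
        """Sample size to detect correlation r with a two-sided test."""
        z_r = np.arctanh(r)                      # Fisher z of the target effect
        z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return int(np.ceil(((z_a + z_b) / z_r) ** 2 + 3))

    print(f"required N: {n_for_correlation(0.3)}")   # 85 participants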
Targeted On-Demand Team Performance App Development
2016-10-01
[OCR-damaged fragment from a 2016 progress report; only partial text is recoverable.] Data were collected from three sites; preliminary analysis indicates a larger-than-estimated effect size, and the study is sufficiently powered for generalizable outcomes. Statistical analyses will examine any resulting qualitative data for trends or connections to statistical outcomes. Project status: on schedule.
Effects of Laser Treatment on the Bond Strength of Differently Sintered Zirconia Ceramics.
Dede, Doğu Ömür; Yenisey, Murat; Rona, Nergiz; Öngöz Dede, Figen
2016-07-01
The purpose of this study was to investigate the effects of carbon dioxide (CO2) and erbium-doped yttrium aluminum garnet (Er:YAG) laser irradiation on the shear bond strength (SBS) of differently sintered zirconia ceramics to resin cement. Eighty zirconia specimens were prepared, sintered for two different periods (short = Ss, long = Ls), and divided into four treatment groups (n = 10 each). These groups were (a) untreated (control), (b) Er:YAG laser irradiated with 6 W power for 5 sec, (c) CO2 laser with 2 W power for 10 sec, and (d) CO2 laser with 4 W power for 10 sec. Scanning electron microscope (SEM) images were recorded for each of the eight groups. Eighty composite resin discs (3 × 3 mm) were fabricated and cemented to the ceramic specimens with an adhesive resin cement. After the specimens were stored in water for 24 h, the SBS test was performed with a universal testing machine at a crosshead speed of 1 mm/min. Data were statistically analyzed with two-way analysis of variance (ANOVA) and the Tukey honest significant difference (HSD) test (α = 0.05). According to the ANOVA, the sintering time, the surface treatments and their interaction were statistically significant (p < 0.05). Although each of the laser-irradiated groups showed significantly higher SBS than the control groups, there were no statistically significant differences among the laser-irradiated groups (p > 0.05). Variation in sintering time from 2.5 to 5.0 h may have influenced the SBS of yttrium-stabilized tetragonal zirconia polycrystalline (Y-TZP) ceramics. As CO2 and Er:YAG laser irradiation techniques may increase the SBS values of both tested zirconia ceramics, they are recommended to clinicians as an alternative pretreatment method.
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity
Beasley, T. Mark
2013-01-01
Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase in the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches, because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
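A Monte Carlo sketch of the phenomenon (simulated standardized paths; a normal-theory Sobel-type test stands in for the product tests discussed): with b fixed, raising a first helps and then hurts power, because the collinearity between X and M inflates the standard error of the b path.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(6)
    n, reps, b = 100, 2000, 0.3

    def sobel_significant(a):
        x = rng.normal(size=n)
        m = a * x + np.sqrt(1 - a**2) * rng.normal(size=n)   # standardized M
        y = b * m + np.sqrt(1 - b**2) * rng.normal(size=n)   # standardized Y
        a_hat = x @ m / (x @ x)                              # a path: m ~ x
        se_a = np.sqrt(np.sum((m - a_hat * x) ** 2) / (n - 1) / (x @ x))
        X = np.column_stack([m, x])                          # b path: y ~ m + x
        coef = np.linalg.lstsq(X, y, rcond=None)[0]
        sigma2 = np.sum((y - X @ coef) ** 2) / (n - 2)
        se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
        z = (a_hat * coef[0]) / np.sqrt(a_hat**2 * se_b**2 + coef[0]**2 * se_a**2)
        return abs(z) > norm.ppf(0.975)

    for a in (0.3, 0.6, 0.9):
        power = np.mean([sobel_significant(a) for _ in range(reps)])
        print(f"a={a}: empirical power ~ {power:.2f}")

Under these assumptions the empirical power rises from a = 0.3 to a = 0.6 and then drops sharply at a = 0.9, reproducing the paradoxical decline in miniature.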
A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.
Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L
2014-01-01
We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
An application of an optimal statistic for characterizing relative orientations
NASA Astrophysics Data System (ADS)
Jow, Dylan L.; Hill, Ryley; Scott, Douglas; Soler, J. D.; Martin, P. G.; Devlin, M. J.; Fissel, L. M.; Poidevin, F.
2018-02-01
We present the projected Rayleigh statistic (PRS), a modification of the classic Rayleigh statistic, as a test for non-uniform relative orientation between two pseudo-vector fields. In the application here, this gives an effective way of investigating whether polarization pseudo-vectors (spin-2 quantities) are preferentially parallel or perpendicular to filaments in the interstellar medium. There are other potential applications in astrophysics, e.g., when comparing small-scale orientations with larger-scale shear patterns. We compare the efficiency of the PRS against histogram binning methods that have previously been used for characterizing the relative orientations of gas column density structures with the magnetic field projected on the plane of the sky. We examine data for the Vela C molecular cloud, where the column density is inferred from Herschel submillimetre observations, and the magnetic field from observations by the Balloon-borne Large-Aperture Submillimetre Telescope in the 250-, 350- and 500-μm wavelength bands. We find that the PRS has greater statistical power than approaches that bin the relative orientation angles, as it makes more efficient use of the information contained in the data. In particular, the use of the PRS to test for preferential alignment results in a higher statistical significance, in each of the four Vela C regions, with the greatest increase being by a factor of 1.3 in the South-Nest region in the 250-μm band.
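A minimal sketch of the statistic in the form commonly quoted for spin-2 data (under the uniform null, Z is approximately standard normal; the von Mises sample below is only an illustration of a preferentially parallel field):

    import numpy as np

    def projected_rayleigh(theta):
        """theta: relative orientation angles in radians; doubled for spin-2.
        Z >> 0 indicates mostly-parallel alignment, Z << 0 mostly perpendicular."""
        n = len(theta)
        return np.sum(np.cos(2 * theta)) / np.sqrt(n / 2.0)

    rng = np.random.default_rng(7)
    uniform = rng.uniform(0, np.pi, 5000)          # no preferred orientation
    parallel = rng.vonmises(0, 2.0, 5000) / 2.0    # angles clustered near 0
    print(f"uniform:  Z = {projected_rayleigh(uniform):+.2f}")
    print(f"parallel: Z = {projected_rayleigh(parallel):+.2f}")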
Degree of conversion of two lingual retainer adhesives cured with different light sources.
Usümez, Serdar; Büyükyilmaz, Tamer; Karaman, Ali Ihya; Gündüz, Beniz
2005-04-01
The aim of this study was to evaluate the degree of conversion (DC) of two lingual retainer adhesives, Transbond Lingual Retainer (TLR) and Light Cure Retainer (LCR), cured with a fast halogen light, a plasma arc light and a light-emitting diode (LED) at various curing times. A conventional halogen light served as the control. One hundred adhesive samples (five per group) were cured for 5, 10 or 15 seconds with an Optilux 501 (fast halogen light), for 3, 6 or 9 seconds with a Power Pac (plasma arc light), or for 10, 20 or 40 seconds with an Elipar Freelight (LED). Samples cured for 40 seconds with the conventional halogen lamp were used as the controls. Absorbance peaks were recorded using Fourier transform infrared (FT-IR) spectroscopy. DC values were calculated. Data were analysed using Kruskal-Wallis and Mann-Whitney U-tests. For the TLR, the highest DC values were achieved in 6 and 9 seconds with the plasma arc light. Curing with the fast halogen light for 15 seconds and with the LED for 40 seconds produced statistically similar DC values, but these were lower than those with the plasma arc light. All of these light exposures yielded a statistically significantly higher DC than 40 seconds of conventional halogen light curing. The highest DC value for the LCR was achieved in 15 seconds with the fast halogen light, followed by plasma arc light curing for 6 seconds. These two combinations produced a statistically significantly higher DC when compared with 40 seconds of conventional halogen light curing. The lowest DC for the LCR was achieved with 10 seconds of LED curing. The overall DC of the LCR was significantly higher than that of the TLR. The results suggest that a DC similar to or higher than the control values could be achieved in 6-9 seconds by plasma arc curing, in 10-15 seconds by fast halogen curing or in 20 seconds by LED curing.
Magnification Bias in Gravitational Arc Statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caminha, G. B.; Estrada, J.; Makler, M.
2013-08-29
The statistics of gravitational arcs in galaxy clusters is a powerful probe of cluster structure and may provide complementary cosmological constraints. Despite recent progress, discrepancies still remain among modelling and observations of arc abundance, especially regarding the redshift distribution of strong lensing clusters. Besides, fast "semi-analytic" methods still have to incorporate the success obtained with simulations. In this paper we discuss the contribution of the magnification in gravitational arc statistics. Although lensing conserves surface brightness, the magnification increases the signal-to-noise ratio of the arcs, enhancing their detectability. We present an approach to include this and other observational effects in semi-analytic calculations for arc statistics. The cross section for arc formation (σ) is computed through a semi-analytic method based on the ratio of the eigenvalues of the magnification tensor. Using this approach we obtained the scaling of σ with respect to the magnification, and other parameters, allowing for a fast computation of the cross section. We apply this method to evaluate the expected number of arcs per cluster using an elliptical Navarro-Frenk-White matter distribution. Our results show that the magnification has a strong effect on the arc abundance, enhancing the fraction of arcs, moving the peak of the arc fraction to higher redshifts, and softening its decrease at high redshifts. We argue that the effect of magnification should be included in arc statistics modelling and that it could help to reconcile arc statistics predictions with the observational data.
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We are investigating after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process boiling down to order 1; this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start; (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Hill, Timothy; Chocholek, Melanie; Clement, Robert
2017-06-01
Eddy covariance (EC) continues to provide invaluable insights into the dynamics of Earth's surface processes. However, despite its many strengths, spatial replication of EC at the ecosystem scale is rare. High equipment costs are likely to be partially responsible. This contributes to the low sampling, and even lower replication, of ecoregions in Africa, Oceania (excluding Australia) and South America. The level of replication matters as it directly affects statistical power. While the ergodicity of turbulence and temporal replication allow an EC tower to provide statistically robust flux estimates for its footprint, these principles do not extend to larger ecosystem scales. Despite the challenge of spatially replicating EC, it is clearly of interest to be able to use EC to provide statistically robust flux estimates for larger areas. We ask: How much spatial replication of EC is required for statistical confidence in our flux estimates of an ecosystem? We provide the reader with tools to estimate the number of EC towers needed to achieve a given statistical power. We show that for a typical ecosystem, around four EC towers are needed to have 95% statistical confidence that the annual flux of an ecosystem is nonzero. Furthermore, if the true flux is small relative to instrument noise and spatial variability, the number of towers needed can rise dramatically. We discuss approaches for improving statistical power and describe one solution: an inexpensive EC system that could help by making spatial replication more affordable. However, we note that diverting limited resources from other key measurements in order to allow spatial replication may not be optimal, and a balance needs to be struck. While individual EC towers are well suited to providing fluxes from the flux footprint, we emphasize that spatial replication is essential for statistically robust fluxes if a wider ecosystem is being studied. © 2016 The Authors Global Change Biology Published by John Wiley & Sons Ltd.
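A sketch of the kind of replication calculation described above (the flux value and the tower-to-tower standard deviation are illustrative assumptions, not the paper's numbers): the smallest number of towers for which a one-sample t-test on the annual flux attains the requested power.

    import numpy as np
    from scipy import stats

    def towers_needed(flux, sd_spatial, alpha=0.05, power=0.80):
        """Smallest n with Pr(reject H0: mean flux = 0) >= power (one-sample t)."""
        for n in range(2, 1000):
            ncp = flux / (sd_spatial / np.sqrt(n))        # noncentrality parameter
            t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
            if 1 - stats.nct.cdf(t_crit, df=n - 1, nc=ncp) >= power:
                return n
        return None

    # e.g. a net flux of 100 g C m-2 yr-1 with a 60 g C m-2 yr-1 spatial SD
    print(f"towers needed: {towers_needed(flux=100.0, sd_spatial=60.0)}")

As the abstract notes, when the true flux shrinks relative to sd_spatial the returned n grows rapidly, which is why small net fluxes demand far more replication.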
System Study: Emergency Power System 1998-2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2015-12-01
This report presents an unreliability evaluation of the emergency power system (EPS) at 104 U.S. commercial nuclear power plants. Demand, run hours, and failure data from fiscal year 1998 through 2014 for selected components were obtained from the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The unreliability results are trended for the most recent 10-year period, while yearly estimates for system unreliability are provided for the entire active period. An extremely statistically significant increasing trend was observed for EPS system unreliability for an 8-hour mission. A statistically significant increasing trend was observed for EPS system start-only unreliability.
The association between major depression prevalence and sex becomes weaker with age.
Patten, Scott B; Williams, Jeanne V A; Lavorato, Dina H; Wang, Jian Li; Bulloch, Andrew G M; Sajobi, Tolulope
2016-02-01
Women have a higher prevalence of major depressive episodes (MDE) than men, and the annual prevalence of MDE declines with age. Age by sex interactions may occur (a weakening of the sex effect with age), but are easily overlooked since individual studies lack the statistical power to detect interactions. The objective of this study was to evaluate age by sex interactions in MDE prevalence. In Canada, a series of 10 national surveys conducted between 1996 and 2013 assessed MDE prevalence in respondents over the age of 14. Treating age as a continuous variable, binomial and linear regression were used to model age by sex interactions in each survey. To increase power, the survey-specific interaction coefficients were then pooled using meta-analytic methods. The estimated interaction terms were homogeneous. In the binomial regression model, I² was 31.2% and was not statistically significant (Q statistic = 13.1, df = 9, p = 0.159). The pooled estimate (-0.004) was significant (z = 3.13, p = 0.002), indicating that the effect of sex became weaker with increasing age. This resulted in near disappearance of the sex difference in the 75+ age group. This finding was also supported by an examination of age- and sex-specific estimates pooled across the surveys. The association of MDE prevalence with sex becomes weaker with age. The interaction may reflect biological effect modification. Investigators should test for, and consider inclusion of, age by sex interactions in epidemiological analyses of MDE prevalence.
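A sketch of the fixed-effect pooling step described above (the coefficients and standard errors are illustrative stand-ins chosen to echo the reported pooled value, not the study's data):

    import numpy as np
    from scipy.stats import norm

    betas = np.array([-0.0102, 0.0009, -0.0071, -0.0012, -0.0088,
                      0.0021, -0.0075, -0.0039, -0.0060, 0.0017])
    ses = np.full(10, 0.0040)               # per-survey standard errors

    w = 1.0 / ses**2                        # inverse-variance weights
    pooled = np.sum(w * betas) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    z = pooled / se_pooled
    Q = np.sum(w * (betas - pooled) ** 2)   # Cochran's Q (homogeneity)
    I2 = max(0.0, (Q - (len(betas) - 1)) / Q) * 100
    print(f"pooled beta = {pooled:.4f}, z = {z:.2f}, "
          f"p = {2 * norm.sf(abs(z)):.3f}, Q = {Q:.1f}, I^2 = {I2:.0f}%")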
Influence of different types of astigmatism on visual acuity.
Remón, Laura; Monsoriu, Juan A; Furlan, Walter D
To investigate the change in visual acuity (VA) produced by different types of astigmatism (on the basis of the refractive power and position of the principal meridians) in normal accommodating eyes. The lens-induced method was employed to simulate a set of 28 astigmatic blur conditions on different healthy emmetropic eyes. Additionally, 24 values of spherical defocus were simulated on the same eyes for comparison. VA was measured in each case and the results, expressed in logMAR units, were plotted against the modulus of the dioptric power vector (blur strength). LogMAR VA varies in a linear fashion with increasing astigmatic blur, with the slope of the line depending on the accommodative demand for each type of astigmatism. However, in each case, we found no statistically significant differences between the three axes investigated (0°, 45°, 90°). Nor were statistically significant differences found between the VA achieved with spherical myopic defocus (MD) and with mixed astigmatism (MA). VA with simple hyperopic astigmatism (SHA) was higher than with simple myopic astigmatism (SMA); however, in this case the results were inconclusive in terms of statistical significance. The VA achieved with imposed compound hyperopic astigmatism (CHA) was highly influenced by the eye's accommodative response. VA is correlated with blur strength in a different way for each type of astigmatism, depending on the accommodative demand. VA is better when one of the focal lines lies on the retina, irrespective of the axis orientation; accommodation favors this situation. Copyright © 2016 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
Gambling Risk Groups are Not All the Same: Risk Factors Amongst Sports Bettors.
Russell, Alex M T; Hing, Nerilee; Li, En; Vitartas, Peter
2018-03-20
Sports betting is increasing worldwide, with an associated increase in sports betting-related problems. Previous studies have examined risk factors for problem gambling amongst sports bettors and have identified demographic, behavioural, marketing, normative and impulsiveness factors. These studies have generally compared those in problem gambling, or a combination of moderate risk and problem gambling, groups to non-problem gamblers, often due to statistical power issues. However, recent evidence suggests that, at a population level, the bulk of gambling-related harm stems from low risk and moderate risk gamblers, rather than problem gamblers. Thus it is essential to understand the risk factors for each level of gambling-related problems (low risk, moderate risk, problem) separately. The present study used a large sample (N = 1813) to compare each gambling risk group to non-problem gamblers, first using bivariate and then multivariate statistical techniques. A range of demographic, behavioural, marketing, normative and impulsiveness variables were included as possible risk factors. The results indicated that some variables, such as gambling expenditure, number of accounts with different operators, number of different types of promotions used and impulsiveness were significantly higher for all risk groups, while others such as some normative factors, age, gender and particular sports betting variables only applied to those with the highest level of gambling-related problems. The results generally supported findings from previous literature for problem gamblers, and extended these findings to low risk and moderate risk groups. In the future, where statistical power allows, risk factors should be assessed separately for all levels of gambling problems.
NASA Astrophysics Data System (ADS)
Franz, T. E.; Avery, W. A.; Finkenbiner, C. E.; Wang, T.; Brocca, L.
2014-12-01
Approximately 40% of global food production comes from irrigated agriculture. With the increasing demand for food, even greater pressures will be placed on water resources within these systems. In this work we aimed to characterize the spatial and temporal patterns of soil moisture at the field scale (~500 m) using the newly developed cosmic-ray neutron rover near Waco, NE. Here we mapped soil moisture of 144 quarter-section fields (a mix of maize, soybean, and natural areas) each week during the 2014 growing season (May to September). The 11 × 11 km study domain also contained 3 stationary cosmic-ray neutron probes for independent validation of the rover surveys. Basic statistical analysis of the domain indicated a strong inverted parabolic relationship between the mean and variance of soil moisture. The relationships between the mean and higher-order moments were not as strong. Geostatistical analysis indicated that the range of the soil moisture semi-variogram was significantly shorter during periods of heavy irrigation as compared to non-irrigated periods. Scaling analysis indicated strong power-law behavior between the variance of soil moisture and averaging area, with the slope of the power-law function depending only minimally on mean soil moisture. Statistical relationships derived from the rover dataset offer a novel set of observations that will be useful in: 1) calibrating and validating land surface models, 2) calibrating and validating crop models, 3) providing soil moisture covariance estimates for statistical downscaling of remote sensing products such as SMOS and SMAP, and 4) providing center-pivot-scale mean soil moisture data for optimal irrigation timing and volume amounts.
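The scaling analysis described above reduces to a log-log regression of soil moisture variance against averaging area. A minimal sketch of that fit follows; the numbers are illustrative stand-ins, not the study's rover data.

```python
import numpy as np

# Fit a power law Var(s) ~ c * A^b between soil moisture variance and
# averaging area A. All values below are hypothetical illustrations.
area = np.array([0.25, 1.0, 4.0, 16.0, 64.0])                  # km^2
variance = np.array([8.1e-3, 4.9e-3, 3.0e-3, 1.8e-3, 1.1e-3])  # (m^3/m^3)^2

# A power law is linear in log-log space: log Var = log c + b log A.
b, log_c = np.polyfit(np.log(area), np.log(variance), 1)
print(f"scaling exponent b = {b:.2f}, prefactor c = {np.exp(log_c):.3e}")
```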
Petaloti, Christina; Triantafyllou, Athanasios; Kouimtzis, Themistoklis; Samara, Constantini
2006-12-01
Total suspended particle (TSP) concentrations were determined in the Eordea basin (western Macedonia, Greece), an area with intensive lignite burning for power generation. The study was conducted over a one-year period (November 2000-November 2001) at 10 sites located at variable distances from the power plants. Ambient TSP samples were analyzed for 27 major, minor and trace elements. Annual means of TSP concentrations ranged between 47±33 μg m⁻³ and 110±50 μg m⁻³ at 9 out of the 10 sites. Only the site closest to the power stations and the lignite conveyor belts exhibited annual TSP levels (210±97 μg m⁻³) exceeding the European standard (150 μg m⁻³, 80/779/EEC). Concentrations of TSP and almost all elemental components exhibited significant spatial variations; however, the elemental profiles of TSP were quite similar among all sites, suggesting that they are affected by similar source types. At all sites, statistical analysis indicated no statistically significant seasonal variation in TSP concentrations (at the P < 0.05 level). Some elements (Cl, As, Pb, Br, Se, S, Cd) exhibited significantly higher concentrations at certain sites during the cold period, suggesting more intense emissions from traffic, domestic heating and other combustion sources. On the contrary, concentrations significantly higher in the warm period were found at other sites mainly for crustal elements (Ti, Mn, K, P, Cr, etc.), suggesting stronger influence from soil resuspension and/or fly ash in the warm months. The most enriched elements against local soil or road dust were S, Cl, Cu, As, Se, Br, Cd and Pb, whereas negligible enrichment was found for Ti, Mn, Mg, Al, Si, P, Cr. At most sites, highest concentrations of TSP and elemental components were associated with low- to moderate-speed winds favoring accumulation of emissions from local sources. Influences from the power generation were likely at those sites located closest to the power plants and mining activities.
Characteristic correlation study of UV disinfection performance for ballast water treatment
NASA Astrophysics Data System (ADS)
Ba, Te; Li, Hongying; Osman, Hafiiz; Kang, Chang-Wei
2016-11-01
Characteristic correlations between ultraviolet disinfection performance and operating parameters, including ultraviolet transmittance (UVT), lamp power and water flow rate, were studied by numerical and experimental methods. A three-stage model was developed to simulate the fluid flow, UV radiation and the trajectories of microorganisms. The Navier-Stokes equations with a k-epsilon turbulence model were solved to model the fluid flow, while a discrete ordinates (DO) radiation model and a discrete phase model (DPM) were used to introduce UV radiation and microorganism trajectories into the model, respectively. The statistical distribution of UV dose over the microorganisms was found to shift to higher values with increasing UVT and lamp power, but to lower values with increasing water flow rate. Further investigation shows that the fluence rate increases exponentially with UVT but linearly with lamp power. The average and minimum residence times decrease linearly with the water flow rate, while the maximum residence time decreases rapidly over a certain range. The current study can be used as a digital design and performance-evaluation tool for UV reactors for ballast water treatment.
The Energy Spectrum of Solar Energetic Electrons
NASA Astrophysics Data System (ADS)
Wang, L.; Yang, L.; Krucker, S.; Wimmer-Schweingruber, R. F.; Bale, S. D.
2015-12-01
Here we present a statistical survey of the energy spectrum of solar energetic electron events (SEEs) observed by the WIND 3DP instrument from 1995 through 2014. For SEEs with the minimum energy below 10 keV and the maximum energy above 100 keV, ~85% (~2%) have a double-power-law energy spectrum with a steepening (hardening) above the break energy, while ~13% have a single-power-law energy spectrum at all energies. The average spectral index is ~2.4 below the energy break and ~4.0 above it. SEEs detected only at energies <10 keV (>20 keV) generally show a single-power-law spectrum with an average index of ~3.0 (~3.3). The spectrum of SEEs detected only below 10 keV appears to get harder with increasing solar activity, but the spectrum of SEEs with higher-energy electrons shows no clear correlation with solar activity. We will also investigate whether the observed energy spectrum of SEEs at 1 AU mainly reflects electron acceleration at the Sun or electron transport in the interplanetary medium.
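A double-power-law spectrum of the kind reported here is characterized by a break energy and two spectral indices. The sketch below recovers those quantities by scanning candidate break points and least-squares fitting each segment in log-log space; the data, the break-scan approach, and the function name are illustrative assumptions rather than the survey's actual fitting procedure.

```python
import numpy as np

def broken_power_law_fit(E, flux):
    """Fit F(E) ~ E^-a below and E^-b above a break energy by scanning
    candidate breaks and fitting each segment in log-log space. A
    simplified sketch: real spectra would be fit with uncertainties
    and instrument response included."""
    logE, logF = np.log10(E), np.log10(flux)
    best = None
    for i in range(2, len(E) - 2):        # keep >= 2 points per segment
        a, ca = np.polyfit(logE[:i], logF[:i], 1)
        b, cb = np.polyfit(logE[i:], logF[i:], 1)
        resid = (np.sum((logF[:i] - (a * logE[:i] + ca))**2) +
                 np.sum((logF[i:] - (b * logE[i:] + cb))**2))
        if best is None or resid < best[0]:
            best = (resid, E[i], -a, -b)  # spectral indices = -slopes
    return best[1:]                        # (break energy, index below, index above)

# Illustrative spectrum with indices ~2.4 / ~4.0 around a 60 keV break
E = np.logspace(0, 2.5, 20)                # 1 to ~300 keV
flux = np.where(E < 60, E**-2.4, 60**(4.0 - 2.4) * E**-4.0)
print(broken_power_law_fit(E, flux))
```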
Babannavar, Roopa; Lohra, Abhishek; Kodgi, Ashwin; Bapure, Sunil; Rao, Yogesh; J., Arun; Malghan, Manjunath
2014-01-01
Aim: Biomonitoring provides a useful tool to estimate the genetic risk from exposure to genotoxic agents. The aim of this study was to evaluate the frequencies of micronuclei (MN) and other nuclear abnormalities (NA) in exfoliated oral mucosal cells from Nuclear Power Station (NPS) workers. Materials and Methods: Micronucleus frequencies in oral exfoliated cells were determined in individuals not known to be exposed to either environmental or occupational carcinogens (Group I). Similarly, samples were obtained from full-time NPS workers free of leukemia and any other malignancy (Group II) and from workers diagnosed as leukemic patients and undergoing treatment (Group III). Results: There was a statistically significant difference between Groups I, II, and III. MN and NA frequencies in leukemic patients were significantly higher than those in exposed workers and control groups (p < 0.05). Conclusion: MN and other NA reflect genetic changes, events associated with malignancies. Therefore, there is a need to educate those who work in NPS about the potential hazard of occupational exposure and the importance of using protective measures. PMID:25654022
Infrared Thermal Imaging During Ultrasonic Aspiration of Bone
NASA Astrophysics Data System (ADS)
Cotter, D. J.; Woodworth, G.; Gupta, S. V.; Manandhar, P.; Schwartz, T. H.
Ultrasonic surgical aspirator tips target removal of bone in approaches to tumors or aneurysms. Low-profile angled tips provide increased visualization and safety in many high-risk surgical situations that were commonly approached using a high-speed rotary drill. Utilization of the ultrasonic aspirator for bone removal raised questions about the relative amounts of local and transmitted heat energy. In the sphenoid wing of a cadaver section, ultrasonic bone aspiration yielded lower thermal rise in precision bone removal than rotary mechanical drills, with a maximum temperature of 31 °C versus 69 °C for fluted and 79 °C for diamond drill bits. Mean ultrasonic fragmentation power was about 8 W. Statistical studies using tenacious porcine cranium yielded mean power levels of about 4.5 W to 11 W and mean temperatures of less than 41.1 °C. Excessively loading the tip yielded momentarily higher power; however, mean thermal rise was less than 8 °C, with bone removal starting near body temperature of about 37 °C. Precision bone removal and thermal management were possible with the conditions tested for ultrasonic bone aspiration.
Nonlinear GARCH model and 1 / f noise
NASA Astrophysics Data System (ADS)
Kononovicius, A.; Ruseckas, J.
2015-06-01
Auto-regressive conditionally heteroskedastic (ARCH) family models are still used by practitioners in business and economic policy making as conditional volatility forecasting models, and they continue to attract research interest. In this contribution we consider the well-known GARCH(1,1) process and its nonlinear modifications, reminiscent of the NGARCH model. We investigate the possibility of reproducing power-law statistics, i.e., the probability density function and the power spectral density, using ARCH family models. For this purpose we derive stochastic differential equations from the GARCH processes under consideration. We find the obtained equations to be similar to a general class of stochastic differential equations known to reproduce power-law statistics. We show that the linear GARCH(1,1) process has a power-law distribution, but its power spectral density is Brownian-noise-like. The nonlinear modifications, however, exhibit both a power-law distribution and a power spectral density of the 1/f^β form, including 1/f noise.
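For concreteness, here is a minimal sketch of the linear GARCH(1,1) recursion discussed above, simulated and checked against the spectral claim. The parameter values and the use of absolute returns as a volatility proxy are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Simulate a linear GARCH(1,1) return series:
#   r_t = sigma_t * eps_t,  sigma_t^2 = omega + alpha*r_{t-1}^2 + beta*sigma_{t-1}^2
rng = np.random.default_rng(0)
n, omega, alpha, beta = 2**16, 1e-5, 0.09, 0.90   # alpha + beta < 1
r = np.empty(n)
sigma2 = omega / (1.0 - alpha - beta)             # unconditional variance
for t in range(n):
    r[t] = np.sqrt(sigma2) * rng.standard_normal()
    sigma2 = omega + alpha * r[t]**2 + beta * sigma2

# Crude periodogram of absolute returns (a common volatility proxy) and a
# log-log slope fit; per the abstract, the linear model should not yield
# a 1/f-type spectrum.
x = np.abs(r) - np.abs(r).mean()
psd = np.abs(np.fft.rfft(x))**2 / n
freq = np.fft.rfftfreq(n)
slope, _ = np.polyfit(np.log(freq[1:]), np.log(psd[1:]), 1)
print(f"estimated spectral exponent ~ {-slope:.2f}")
```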
Immunohistochemical evaluation of myofibroblast density in odontogenic cysts and tumors.
Kouhsoltani, Maryam; Halimi, Monireh; Jabbari, Golchin
2016-01-01
Background. The aim of this study was to investigate myofibroblast (MF) density in a broad spectrum of odontogenic cysts and tumors and the relation between the density of MFs and the clinical behavior of these lesions. Methods. A total of 105 cases of odontogenic lesions, including unicystic ameloblastoma (UAM), solid ameloblastoma (SA), odontogenic keratocyst (OKC), dentigerous cyst (DC), radicular cyst (RC) (15 for each category), and odontogenic myxoma (OM), adenomatoid odontogenic tumor (AOT), calcifying odontogenic cyst (COC) (10 for each category), were immunohistochemically stained with anti-α-smooth muscle actin antibody. The mean percentage of positive cells in 10 high-power fields was considered as MF density for each case. Results. A statistically significant difference was observed in the mean scores between the study groups (P < 0.001). The intensity of MFs was significantly higher in odontogenic tumors compared to odontogenic cysts (P < 0.001). There was no statistically significant difference between odontogenic tumors, except between UAM and OM (P = 0.041). The difference between OKC and odontogenic tumors was not statistically significant (P > 0.05). The number of MFs was significantly higher in OKC and lower in COC compared to other odontogenic cysts (P = 0.007 and P = 0.045, respectively). Conclusion. The results of the present study suggest a role for MFs in the aggressive behavior of odontogenic lesions. MFs may represent an important target of therapy, especially for aggressive odontogenic lesions. Our findings support the classification of OKC in the category of odontogenic tumors.
Feitosa, Fernanda A; de Araújo, Rodrigo M; Tay, Franklin R; Niu, Lina; Pucci, César R
2017-12-12
The present study evaluated the effect of different high-power laser surface treatments on the bond strength between resin cement and lithium disilicate ceramic. Lithium disilicate ceramic specimens with a truncated-cone shape were prepared and divided into 5 groups: HF (hydrofluoric acid-etching), Er:YAG laser + HF, Graphite + Er:YAG laser + HF, Nd:YAG laser + HF, and Graphite + Nd:YAG laser + HF. The treated ceramic surfaces were characterized with scanning electron microscopy and surface roughness measurement. Hourglass-shaped ceramic-resin bond specimens were prepared, thermomechanically cycled, and stressed to failure under tension. The results showed statistically significant differences for both the "laser" and "graphite" factors (p < 0.05). Multiple-comparison tests on the "laser" factor ranked Er:YAG > Nd:YAG (p < 0.05), and on the "graphite" factor ranked graphite coating < without coating (p < 0.05). The Dunnett test showed that Er:YAG + HF had significantly higher tensile strength (p = 0.00). Higher surface roughness was achieved after Er:YAG laser treatment. Thus, Er:YAG laser treatment produces higher bond strength to resin cement than the other surface treatment protocols. Surface coating with graphite does not improve bonding of the laser-treated lithium disilicate ceramic to resin cement.
Shifflett, Benjamin; Huang, Rong; Edland, Steven D
2017-01-01
Genotypic association studies are prone to inflated type I error rates if multiple hypothesis testing is performed, e.g., sequentially testing for recessive, multiplicative, and dominant risk. Alternatives to multiple hypothesis testing include the model-independent genotypic χ² test; the efficiency-robust MAX statistic, which corrects for multiple comparisons but with some loss of power; or a single Armitage test for multiplicative trend, which has optimal power when the multiplicative model holds but loses some power when dominant or recessive models underlie the genetic association. We used Monte Carlo simulations to describe the relative performance of these three approaches under a range of scenarios. All three approaches maintained their nominal type I error rates. The genotypic χ² and MAX statistics were more powerful when testing a strictly recessive genetic effect or when testing a dominant effect when the allele frequency was high. The Armitage test for multiplicative trend was most powerful for the broad range of scenarios where heterozygote risk is intermediate between recessive and dominant risk. Moreover, all tests had limited power to detect recessive genetic risk unless the sample size was large, and conversely all tests were relatively well powered to detect dominant risk. Taken together, these results suggest the general utility of the multiplicative trend test when the underlying genetic model is unknown.
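The Armitage test for multiplicative trend mentioned above has a standard score-test form with genotype scores (0, 1, 2). A minimal sketch, with illustrative counts and no small-sample or continuity corrections:

```python
import numpy as np
from scipy.stats import norm

def armitage_trend_test(case_counts, control_counts, scores=(0, 1, 2)):
    """Cochran-Armitage trend test across genotypes (aa, Aa, AA).
    Returns (Z statistic, two-sided p-value)."""
    r = np.asarray(case_counts, float)          # cases per genotype
    n = r + np.asarray(control_counts, float)   # totals per genotype
    s = np.asarray(scores, float)
    N, R = n.sum(), r.sum()
    p = R / N
    U = np.sum(s * r) - p * np.sum(s * n)       # score statistic
    var_U = p * (1 - p) * (np.sum(n * s**2) - np.sum(n * s)**2 / N)
    z = U / np.sqrt(var_U)
    return z, 2 * norm.sf(abs(z))

# Hypothetical genotype counts for cases and controls
print(armitage_trend_test(case_counts=(30, 60, 40), control_counts=(60, 70, 30)))
```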
REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, M. F.; Melatos, A.; Delaigle, A.
2013-04-01
We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is more sensitive by ~6% than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly), and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives a ≳30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.
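The underlying ingredient here is the higher criticism statistic of Donoho and Jin: the maximum standardized exceedance of the empirical p-value CDF over the uniform CDF. A basic sketch of that first-pass statistic (not the authors' modified form), with illustrative simulated p-values:

```python
import numpy as np

def higher_criticism(pvalues, alpha0=0.5):
    """Donoho-Jin higher criticism over the smallest alpha0 fraction
    of sorted p-values. A basic sketch of the statistic."""
    p = np.sort(np.asarray(pvalues, float))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))
    return np.max(hc[:k])

# Sparse weak signals: most p-values uniform, a few shifted toward zero
rng = np.random.default_rng(1)
p_null = rng.uniform(size=990)
p_sig = rng.uniform(high=0.01, size=10)
print(higher_criticism(np.concatenate([p_null, p_sig])))
```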
Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.
Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco
2016-12-07
Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bound by two curves which are estimated using parametric statistical inference techniques. However, the observations that are used for carrying out the statistical analysis are obtained by using the binning method, and in each bin, the outliers are eliminated by using a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets.
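A minimal sketch of the binning-plus-censorship idea follows. The MAD-based outlier rule, bin parameters, and synthetic data are stand-in assumptions; the paper's robust censorship process is more elaborate.

```python
import numpy as np

def binned_power_curve(wind_speed, power, bin_width=0.5, k=2.5):
    """Group observations into wind-speed bins; within each bin, discard
    points farther than k robust (MAD-based) deviations from the bin
    median power before averaging. An illustrative stand-in for the
    paper's censorship process."""
    v = np.asarray(wind_speed, float)
    p = np.asarray(power, float)
    edges = np.arange(v.min(), v.max() + bin_width, bin_width)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = p[(v >= lo) & (v < hi)]
        if len(sel) < 5:
            continue
        med = np.median(sel)
        mad = 1.4826 * np.median(np.abs(sel - med))  # robust scale estimate
        keep = sel[np.abs(sel - med) <= k * mad] if mad > 0 else sel
        centers.append((lo + hi) / 2)
        means.append(keep.mean())
    return np.array(centers), np.array(means)

# Illustrative use with synthetic data around a logistic-shaped curve
rng = np.random.default_rng(6)
v = rng.uniform(3, 15, 2000)
p = 2000 / (1 + np.exp(-(v - 9))) + rng.normal(0, 80, v.size)
centers, curve = binned_power_curve(v, p)
```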
Escalante, Yolanda; Saavedra, Jose M.; Tella, Victor; Mansilla, Mirella; García-Hermoso, Antonio; Dominguez, Ana M.
2012-01-01
The aims of this study were (i) to compare women's water polo game-related statistics by match outcome (winning and losing teams) and phase (preliminary, classificatory, and semi-final/bronze medal/gold medal), and (ii) to identify characteristics that discriminate performances for each phase. The game-related statistics of the 124 women's matches played in five International Championships (World and European Championships) were analyzed. Differences between winning and losing teams in each phase were determined using the chi-squared test. A discriminant analysis was then performed according to context in each of the three phases. It was found that the game-related statistics differentiate the winning from the losing teams in each phase of an international championship. The differentiating variables were both offensive (centre goals, power-play goals, counterattack goals, assists, offensive fouls, steals, blocked shots, and won sprints) and defensive (goalkeeper-blocked shots, goalkeeper-blocked inferiority shots, and goalkeeper-blocked 5-m shots). The discriminant analysis showed the game-related statistics to discriminate performance in all phases: preliminary, classificatory, and final (92%, 90%, and 83%, respectively). Two variables were discriminatory by match outcome (winning or losing teams) in all three phases: goals and goalkeeper-blocked shots. Key points: (i) It was in the preliminary phase that more than one variable was involved in this differentiation, including both offensive and defensive aspects of the game. (ii) The game-related statistics were found to have a high discriminatory power in predicting the result of matches, with shots and goalkeeper-blocked shots being discriminatory variables in all three phases. (iii) Knowledge of the characteristics of women's water polo game-related statistics of the winning teams and their power to predict match outcomes will allow coaches to take these characteristics into account when planning training and match preparation. PMID:24149356
Equitability, mutual information, and the maximal information coefficient.
Kinney, Justin B; Atwal, Gurinder S
2014-03-04
How should one quantify the strength of association between two random variables without bias for relationships of a specific form? Despite its conceptual simplicity, this notion of statistical "equitability" has yet to receive a definitive mathematical formalization. Here we argue that equitability is properly formalized by a self-consistency condition closely related to the Data Processing Inequality. Mutual information, a fundamental quantity in information theory, is shown to satisfy this equitability criterion. These findings are at odds with the recent work of Reshef et al. [Reshef DN, et al. (2011) Science 334(6062):1518-1524], which proposed an alternative definition of equitability and introduced a new statistic, the "maximal information coefficient" (MIC), said to satisfy equitability in contradistinction to mutual information. These conclusions, however, were supported only with limited simulation evidence, not with mathematical arguments. Upon revisiting these claims, we prove that the mathematical definition of equitability proposed by Reshef et al. cannot be satisfied by any (nontrivial) dependence measure. We also identify artifacts in the reported simulation evidence. When these artifacts are removed, estimates of mutual information are found to be more equitable than estimates of MIC. Mutual information is also observed to have consistently higher statistical power than MIC. We conclude that estimating mutual information provides a natural (and often practical) way to equitably quantify statistical associations in large datasets.
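Estimating mutual information, as advocated here, can be sketched with a simple plug-in histogram estimator. The binning choice is an illustrative assumption; serious comparisons of the kind in the paper would use bias-corrected or adaptive estimators.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Naive plug-in estimate of I(X;Y) in bits from a 2D histogram.
    A sketch only; plug-in estimates carry a positive finite-sample bias."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

# A noiseless nonlinear relationship carries high mutual information
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 10_000)
print(mutual_information(x, x**2))                       # strong dependence
print(mutual_information(x, rng.uniform(size=10_000)))   # near zero (bias aside)
```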
NASA Astrophysics Data System (ADS)
Berg, Jacob; Patton, Edward G.; Sullivan, Peter S.
2017-11-01
The effect of mesh resolution and domain size on shear-driven atmospheric boundary layers in a stably stratified environment is investigated with the NCAR pseudo-spectral LES model (J. Atmos. Sci. v68, p2395, 2011 and J. Atmos. Sci. v73, p1815, 2016). The model applies FFT in the two horizontal directions and finite differencing in the vertical direction. With vanishing heat flux at the surface and a capping inversion entraining potential temperature into the boundary layer, the situation is often called the conditionally neutral atmospheric boundary layer (ABL). Due to its relevance in high-wind applications such as wind power meteorology, we emphasize second-order statistics important for wind turbines, including spectral information. The simulations range in mesh size from 64³ to 1024³ grid points. Due to the non-stationarity of the problem, different simulations are compared at equal eddy-turnover times. Whereas grid convergence is mostly achieved in the middle portion of the ABL, close to the surface, where the presence of the ground limits the growth of the energy-containing eddies, second-order statistics are not converged on the studied meshes. Higher-order structure functions also reveal non-Gaussian statistics highly dependent on the resolution.
Identifying Wave-Particle Interactions in the Solar Wind using Statistical Correlations
NASA Astrophysics Data System (ADS)
Broiles, T. W.; Jian, L. K.; Gary, S. P.; Lepri, S. T.; Stevens, M. L.
2017-12-01
Heavy ions are a trace component of the solar wind, which can resonate with plasma waves, causing heating and acceleration relative to the bulk plasma. While wave-particle interactions are generally accepted as the cause of heavy ion heating and acceleration, observations to constrain the physics are lacking. In this work, we statistically link specific wave modes to heavy ion heating and acceleration. We have computed the Fast Fourier Transform (FFT) of transverse and compressional magnetic waves between 0 and 5.5 Hz using 9 days of ACE and Wind magnetometer data. The FFTs are averaged over plasma measurement cycles to compute statistical correlations between magnetic wave power at each discrete frequency and ion kinetic properties measured by ACE/SWICS and Wind/SWE. The results show that lower-frequency transverse oscillations (< 0.2 Hz) and higher-frequency compressional oscillations (> 0.4 Hz) are positively correlated with enhancements in the heavy ion thermal and drift speeds. Moreover, the correlation results for He²⁺ and O⁶⁺ were similar on most days. The correlations were often weak, but most days had some frequencies that correlated with statistical significance. This work suggests that the solar wind heavy ions are possibly being heated and accelerated by both transverse and compressional waves at different frequencies.
A global estimate of the Earth's magnetic crustal thickness
NASA Astrophysics Data System (ADS)
Vervelidou, Foteini; Thébault, Erwan
2014-05-01
The Earth's lithosphere is considered to be magnetic only down to the Curie isotherm. Therefore the Curie isotherm can, in principle, be estimated by analysis of magnetic data. Here, we propose such an analysis in the spectral domain by means of a newly introduced regional spatial power spectrum. This spectrum is based on the Revised Spherical Cap Harmonic Analysis (R-SCHA) formalism (Thébault et al., 2006). We briefly discuss its properties and its relationship with the Spherical Harmonic spatial power spectrum. This relationship allows us to adapt any theoretical expression of the lithospheric field power spectrum expressed in Spherical Harmonic degrees to the regional formulation. We compared previously published statistical expressions (Jackson, 1994; Voorhies et al., 2002) to the recent lithospheric field models derived from the CHAMP and airborne measurements, and we finally developed a new statistical form for the power spectrum of the Earth's magnetic lithosphere that we think provides more consistent results. This expression depends on the mean magnetization, the mean crustal thickness, and a power-law exponent that describes the amount of spatial correlation of the sources. In this study, we make combined use of the R-SCHA surface power spectrum and this statistical form. We conduct a series of regional spectral analyses for the entire Earth. For each region, we estimate the R-SCHA surface power spectrum of the NGDC-720 Spherical Harmonic model (Maus, 2010). We then fit each of these observational spectra to the statistical expression of the power spectrum of the Earth's lithosphere. By doing so, we estimate the large wavelengths of the magnetic crustal thickness on a global scale, which are not accessible directly from the magnetic measurements due to the masking core field. We then discuss these results and compare them to the results we obtained by conducting a similar spectral analysis, but this time in Cartesian coordinates, by means of a published statistical expression (Maus et al., 1997). We also compare our results to crustal thickness global maps derived by means of additional geophysical data (Purucker et al., 2002).
Rare Variant Association Test with Multiple Phenotypes
Lee, Selyeong; Won, Sungho; Kim, Young Jin; Kim, Yongkang; Kim, Bong-Jo; Park, Taesung
2016-01-01
Although genome-wide association studies (GWAS) have now discovered thousands of genetic variants associated with common traits, such variants cannot explain the large degree of "missing heritability," likely due to rare variants. The advent of next generation sequencing technology has allowed rare variant detection and association with common traits, often by investigating specific genomic regions for rare variant effects on a trait. Although multiple correlated phenotypes are often concurrently observed in GWAS, most studies analyze only single phenotypes, which may lessen statistical power. To increase power, multivariate analyses, which consider correlations between multiple phenotypes, can be used. However, few existing multi-variant analyses can identify rare variants for assessing multiple phenotypes. Here, we propose Multivariate Association Analysis using Score Statistics (MAAUSS), to identify rare variants associated with multiple phenotypes, based on the widely used Sequence Kernel Association Test (SKAT) for a single phenotype. We applied MAAUSS to Whole Exome Sequencing (WES) data from a Korean population of 1,058 subjects, to discover genes associated with multiple traits of liver function. We then assessed validation of those genes by a replication study, using an independent dataset of 3,445 individuals. Notably, we detected the gene ZNF620 among five significant genes. We then performed a simulation study to compare MAAUSS's performance with existing methods. Overall, MAAUSS successfully conserved type I error rates and in many cases had a higher power than the existing methods. This study illustrates a feasible and straightforward approach for identifying rare variants correlated with multiple phenotypes, with likely relevance to missing heritability. PMID:28039885
Analysis of economic determinants of fertility in Iran: a multilevel approach.
Moeeni, Maryam; Pourreza, Abolghasem; Torabi, Fatemeh; Heydari, Hassan; Mahmoudi, Mahmood
2014-08-01
During the last three decades, the Total Fertility Rate (TFR) in Iran has fallen considerably, from 6.5 per woman in 1983 to 1.89 in 2010. This paper analyzes the extent to which economic determinants at the micro and macro levels are associated with the number of children in Iranian households. Household data from the 2010 Household Expenditure and Income Survey (HEIS) is linked to provincial data from the 2010 Iran Multiple-Indicator Demographic and Health Survey (IrMIDHS), the National Census of Population and Housing conducted in 1986, 1996, 2006 and 2011, and the 1985-2010 Iran statistical year books. Fertility is measured as the number of children in each household. A random intercept multilevel Poisson regression function is specified based on a collective model of intra-household bargaining power to investigate potential determinants of the number of children in Iranian households. Ceteris paribus (other things being equal), the probability of having more children drops significantly as either real per capita educational expenditure or real total expenditure of each household increases. Both the low- and the high-income households show higher probabilities of having more children compared to the middle-income households. Living in provinces with either a higher average value added of manufacturing establishments or a lower average rate of house rent is associated with a higher probability of having a larger number of children. Higher levels of gender gap indices, reflecting the wife's limited power over household decision-making, positively affect the probability of having more children. Economic determinants at the micro and macro levels, the distribution of intra-household bargaining power between spouses, and demographic covariates determined the fertility behavior of Iranian households.
Analysis of economic determinants of fertility in Iran: a multilevel approach
Moeeni, Maryam; Pourreza, Abolghasem; Torabi, Fatemeh; Heydari, Hassan; Mahmoudi, Mahmood
2014-01-01
Background: During the last three decades, the Total Fertility Rate (TFR) in Iran has fallen considerably; from 6.5 per woman in 1983 to 1.89 in 2010. This paper analyzes the extent to which economic determinants at the micro and macro levels are associated with the number of children in Iranian households. Methods: Household data from the 2010 Household Expenditure and Income Survey (HEIS) is linked to provincial data from the 2010 Iran Multiple-Indicator Demographic and Health Survey (IrMIDHS), the National Census of Population and Housing conducted in 1986, 1996, 2006 and 2011, and the 1985–2010 Iran statistical year books. Fertility is measured as the number of children in each household. A random intercept multilevel Poisson regression function is specified based on a collective model of intra-household bargaining power to investigate potential determinants of the number of children in Iranian households. Results: Ceteris paribus (other things being equal), probability of having more children drops significantly as either real per capita educational expenditure or real total expenditure of each household increase. Both the low- and the high-income households show probabilities of having more children compared to the middle-income households. Living in provinces with either higher average amount of value added of manufacturing establishments or lower average rate of house rent is associated to higher probability of having larger number of children. Higher levels of gender gap indices, resulting in household’s wife’s limited power over household decision-making, positively affect the probability of having more children. Conclusion: Economic determinants at the micro and macro levels, distribution of intra-household bargaining power between spouses and demographic covariates determined fertility behavior of Iranian households. PMID:25197678
Comparison of Physical and Physiological Profiles in Elite and Amateur Young Wrestlers.
Demirkan, Erkan; Koz, Mitat; Kutlu, Mehmet; Favre, Mike
2015-07-01
The aim of this study was to examine the physical and physiological determinants of wrestling success between elite and amateur male wrestlers. The wrestlers (N = 126) were first assigned to 3 groups based on their competitive level (top elite, elite, and amateur) and then to 6 groups according to their body mass (light, middle, and heavy weight) and their competitive level (elite and amateur). Top elite and elite wrestlers had significantly (p ≤ 0.05) more training experience and higher maximal oxygen uptake compared with the amateur group. When separated into weight classes, light- and middle-weight elite (MWE) wrestlers had significantly (p ≤ 0.05) more training experience (7-20%) compared with the light- and middle-weight amateur (MWA) wrestlers. No significant differences were detected between elite and amateur groups (light-, middle-, and heavy-weight wrestlers) for age, body mass, height, body mass index, and body fat (p > 0.05), with the exception of height for heavy wrestlers. Leg average and peak power values (in watts and watts per kilogram) in MWE were higher than in MWA (6.5 and 13%, p ≤ 0.05). The relative leg average power value (in watts per kilogram) in heavy-weight elite (HWE) wrestlers was higher than in heavy-weight amateur (HWA) wrestlers (9.6%, p ≤ 0.05). Elite wrestlers in the MWE and HWE groups possessed a statistically higher V̇O2max (12.5 and 11.4%, respectively) than amateur middle- and heavy-weight wrestlers (p ≤ 0.05). The results of this study suggest that training experience, aerobic endurance, and anaerobic power and capacity give wrestlers a clear advantage for taking part in the elite group.
Pejtersen, Jan Hyld; Burr, Hermann; Hannerz, Harald; Fishta, Alba; Hurwitz Eller, Nanna
2015-01-01
The present review deals with the relationship between occupational psychosocial factors and the incidence of ischemic heart disease (IHD), with special regard to the statistical power of the findings. This review, based on 4 inclusion criteria, is an update of a 2009 review; the first 3 criteria were also used in the original review: (1) STUDY: a prospective or case-control study if exposure was not self-reported (prognostic studies excluded); (2) OUTCOME: definite IHD determined externally; (3) EXPOSURE: psychosocial factors at work (excluding shift work, trauma, violence or accidents, and social capital); and (4) STATISTICAL POWER: acceptable to detect a 20% increased risk of IHD. Eleven new papers met inclusion criteria 1-3; a total of 44 papers were evaluated against inclusion criterion 4. Of 169 statistical analyses, only 10 analyses in 2 papers had acceptable statistical power. The results of the 2 papers pointed in the same direction, namely that only the control dimension of job strain explained the excess risk of myocardial infarction associated with job strain. The large number of underpowered studies and the focus on psychosocial models, such as the job strain model, make it difficult to determine to what extent psychosocial factors at work are risk factors for IHD. There is a need to consider statistical power when planning studies.
2017-01-01
The annual report presents data tables describing the electricity industry in each State. Data include: summary statistics; the 10 largest plants by generating capacity; the top five entities ranked by sector; electric power industry generating capacity by primary energy source; electric power industry generation by primary energy source; utility delivered fuel prices for coal, petroleum, and natural gas; electric power industry emissions estimates; retail sales, revenue, and average retail price by sector; retail electricity sales statistics; supply and disposition of electricity; net metering counts and capacity by technology and customer type; and advanced metering counts by customer type.
Increasing the lensing figure of merit through higher order convergence moments
NASA Astrophysics Data System (ADS)
Vicinanza, Martina; Cardone, Vincenzo F.; Maoli, Roberto; Scaramella, Roberto; Er, Xinzhong
2018-01-01
The unprecedented quality, the increased data set, and the wide area of ongoing and near-future weak lensing surveys allow one to move beyond the standard two-point statistics, thus making it worthwhile to investigate higher-order probes. As an interesting step in this direction, we explore the use of higher-order moments (HOM) of the convergence field as a way to increase the lensing figure of merit (FoM). To this end, we rely on simulated convergence maps to first show that HOM can be measured and calibrated, so that it is indeed possible to predict them for a given cosmological model provided suitable nuisance parameters are introduced and then marginalized over. We then forecast the accuracy on cosmological parameters from the use of HOM alone and in combination with standard shear power spectra tomography. It turns out that HOM allow one to break some common degeneracies, thus significantly boosting the overall FoM. We also qualitatively discuss possible systematics and how they can be dealt with.
Groen, Iris I. A.; Ghebreab, Sennay; Lamme, Victor A. F.; Scholte, H. Steven
2012-01-01
The visual world is complex and continuously changing. Yet, our brain transforms patterns of light falling on our retina into a coherent percept within a few hundred milliseconds. Possibly, low-level neural responses already carry substantial information to facilitate rapid characterization of the visual input. Here, we computationally estimated low-level contrast responses to computer-generated naturalistic images, and tested whether spatial pooling of these responses could predict image similarity at the neural and behavioral level. Using EEG, we show that statistics derived from pooled responses explain a large amount of variance between single-image evoked potentials (ERPs) in individual subjects. Dissimilarity analysis on multi-electrode ERPs demonstrated that large differences between images in pooled response statistics are predictive of more dissimilar patterns of evoked activity, whereas images with little difference in statistics give rise to highly similar evoked activity patterns. In a separate behavioral experiment, images with large differences in statistics were judged as different categories, whereas images with little differences were confused. These findings suggest that statistics derived from low-level contrast responses can be extracted in early visual processing and can be relevant for rapid judgment of visual similarity. We compared our results with two other well-known contrast statistics: Fourier power spectra and higher-order properties of contrast distributions (skewness and kurtosis). Interestingly, whereas these statistics allow for accurate image categorization, they do not predict ERP response patterns or behavioral categorization confusions. These converging computational, neural and behavioral results suggest that statistics of pooled contrast responses contain information that corresponds with perceived visual similarity in a rapid, low-level categorization task. PMID:23093921
Can power-law scaling and neuronal avalanches arise from stochastic dynamics?
Touboul, Jonathan; Destexhe, Alain
2010-02-11
The presence of self-organized criticality in biology is often evidenced by a power-law scaling of event size distributions, which can be measured by linear regression on logarithmic axes. We show here that such a procedure does not necessarily mean that the system exhibits self-organized criticality. We first provide an analysis of multisite local field potential (LFP) recordings of brain activity and show that event size distributions defined as negative LFP peaks can be close to power-law distributions. However, this result is not robust to change in detection threshold, or when tested using more rigorous statistical analyses such as the Kolmogorov-Smirnov test. Similar power-law scaling is observed for surrogate signals, suggesting that power-law scaling may be a generic property of thresholded stochastic processes. We next investigate this problem analytically, and show that, indeed, stochastic processes can produce spurious power-law scaling without the presence of underlying self-organized criticality. However, this power-law is only apparent in logarithmic representations, and does not survive more rigorous analysis such as the Kolmogorov-Smirnov test. The same analysis was also performed on an artificial network known to display self-organized criticality. In this case, both the graphical representations and the rigorous statistical analysis reveal with no ambiguity that the avalanche size is distributed as a power-law. We conclude that logarithmic representations can lead to spurious power-law scaling induced by the stochastic nature of the phenomenon. This apparent power-law scaling does not constitute a proof of self-organized criticality, which should be demonstrated by more stringent statistical tests.
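The "more rigorous statistical analyses" invoked here replace log-log regression with a maximum-likelihood fit plus a Kolmogorov-Smirnov check. A compact sketch, using simulated Pareto data as an illustrative stand-in for avalanche sizes:

```python
import numpy as np

def powerlaw_mle_ks(x, xmin):
    """Continuous power-law fit above xmin by maximum likelihood,
    alpha_hat = 1 + n / sum(log(x/xmin)), plus the Kolmogorov-Smirnov
    distance between the empirical and fitted CDFs. A sketch of the
    style of test recommended over linear regression on log axes."""
    x = np.sort(x[x >= xmin])
    n = len(x)
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    cdf_fit = 1.0 - (x / xmin) ** (1.0 - alpha)
    cdf_emp = np.arange(1, n + 1) / n
    return alpha, np.max(np.abs(cdf_emp - cdf_fit))

# A true Pareto sample (alpha = 2.5) passes with a small KS distance;
# a thresholded stochastic process can look linear on log-log axes yet fail.
rng = np.random.default_rng(3)
pareto = (1.0 - rng.uniform(size=5000)) ** (-1.0 / 1.5)
print(powerlaw_mle_ks(pareto, xmin=1.0))
```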
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.
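A simplified sketch of the permutation-testing idea described above: smooth case status over location (here a Gaussian kernel stands in for the bivariate LOESS smoother used with the GAM), take the range of the smoothed risk surface as the test statistic, and recompute it under random relabelings. All names, parameters, and data are illustrative assumptions.

```python
import numpy as np

def spatial_permutation_test(xy, case, bandwidth=0.2, n_perm=999, seed=0):
    """Permutation test for spatial variation in risk. Returns the
    observed statistic (range of the smoothed risk surface) and its
    permutation p-value. A simplified stand-in for the GAM approach."""
    rng = np.random.default_rng(seed)
    d2 = np.sum((xy[:, None, :] - xy[None, :, :])**2, axis=-1)
    w = np.exp(-d2 / (2 * bandwidth**2))
    w /= w.sum(axis=1, keepdims=True)        # kernel smoother weights

    def stat(labels):
        risk = w @ labels                    # smoothed local case proportion
        return risk.max() - risk.min()

    observed = stat(case)
    perms = np.array([stat(rng.permutation(case)) for _ in range(n_perm)])
    pval = (1 + np.sum(perms >= observed)) / (n_perm + 1)
    return observed, pval

# Illustrative data: a central cluster of elevated risk
rng = np.random.default_rng(4)
xy = rng.uniform(-1, 1, size=(300, 2))
risk = 0.2 + 0.4 * (np.hypot(xy[:, 0], xy[:, 1]) < 0.4)
case = rng.binomial(1, risk).astype(float)
print(spatial_permutation_test(xy, case))
```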
Sound texture perception via statistics of the auditory periphery: Evidence from sound synthesis
McDermott, Josh H.; Simoncelli, Eero P.
2014-01-01
Rainstorms, insect swarms, and galloping horses produce “sound textures” – the collective result of many similar acoustic events. Sound textures are distinguished by temporal homogeneity, suggesting they could be recognized with time-averaged statistics. To test this hypothesis, we processed real-world textures with an auditory model containing filters tuned for sound frequencies and their modulations, and measured statistics of the resulting decomposition. We then assessed the realism and recognizability of novel sounds synthesized to have matching statistics. Statistics of individual frequency channels, capturing spectral power and sparsity, generally failed to produce compelling synthetic textures. However, combining them with correlations between channels produced identifiable and natural-sounding textures. Synthesis quality declined if statistics were computed from biologically implausible auditory models. The results suggest that sound texture perception is mediated by relatively simple statistics of early auditory representations, presumably computed by downstream neural populations. The synthesis methodology offers a powerful tool for their further investigation. PMID:21903084
Frontal cortex absolute beta power measurement in Panic Disorder with Agoraphobia patients.
de Carvalho, Marcele Regine; Velasques, Bruna Brandão; Freire, Rafael C; Cagy, Maurício; Marques, Juliana Bittencourt; Teixeira, Silmar; Thomaz, Rafael; Rangé, Bernard P; Piedade, Roberto; Akiskal, Hagop Souren; Nardi, Antonio Egidio; Ribeiro, Pedro
2015-09-15
Panic disorder patients are hypervigilant to danger cues and highly sensitive to unpredictable aversive events, which leads to anticipatory anxiety, a key component in the maintenance of the disorder. The prefrontal cortex seems to be involved in these processes, and beta-band activity may be related to the involvement of top-down processing, whose function is thought to be disrupted in pathological anxiety. The objective of this study was to measure frontal absolute beta power (ABP) with qEEG in panic disorder with agoraphobia (PDA) patients compared to healthy controls. qEEG data were acquired while participants (24 PDA patients and 21 controls) watched a computer simulation (CS) consisting of moments classified as "high anxiety" (HAM) and "low anxiety" (LAM). qEEG data were also acquired during two rest conditions, before and after the computer simulation display. The statistical analysis was performed by means of a repeated-measures analysis of variance (two-way ANOVA), with ABP as the dependent variable of interest. The main hypothesis was that PDA patients would show higher ABP relative to controls, and that ABP in HAM would differ from that in LAM. The main finding was an interaction between moment and group for the electrodes F7, F8, Fp1 and Fp2: we observed a higher ABP in PDA patients compared to controls while watching the CS. The higher beta power in the frontal cortex for the PDA group may reflect a state of high excitability, together with anticipatory anxiety and maintenance of a hypervigilant cognitive state. Our results suggest a possible deficiency in top-down processing, reflected by a higher ABP in the PDA group while watching the CS, and they highlight the recruitment of prefrontal regions during exposure to anxiogenic stimuli. Limitations include the small sample, the wide age range of participants, and the use of psychotropic medications by most of the PDA patients. Copyright © 2015 Elsevier B.V. All rights reserved.
The potential for increased power from combining P-values testing the same hypothesis.
Ganju, Jitendra; Julie Ma, Guoguang
2017-02-01
The conventional approach to hypothesis testing for formal inference is to prespecify a single test statistic thought to be optimal. However, we usually have more than one test statistic in mind for testing the null hypothesis of no treatment effect but we do not know which one is the most powerful. Rather than relying on a single p-value, combining p-values from prespecified multiple test statistics can be used for inference. Combining functions include Fisher's combination test and the minimum p-value. Using randomization-based tests, the increase in power can be remarkable when compared with a single test and Simes's method. The versatility of the method is that it also applies when the number of covariates exceeds the number of observations. The increase in power is large enough to prefer combined p-values over a single p-value. The limitation is that the method does not provide an unbiased estimator of the treatment effect and does not apply to situations when the model includes treatment by covariate interaction.
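A sketch of the combining idea using randomization-based p-values. The two component statistics chosen here (differences in means and in medians) are illustrative stand-ins for whatever tests one would prespecify; both Fisher's product and the minimum p-value are then referred to their own permutation distributions.

```python
import numpy as np

def combined_pvalue_test(x, y, n_perm=2000, seed=0):
    """Randomization-based combination of two prespecified statistics.
    Returns permutation p-values for Fisher's combination and min-p."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)

    def two_stats(a, b):
        return abs(a.mean() - b.mean()), abs(np.median(a) - np.median(b))

    # Permutation distribution of each raw statistic
    draws = np.array([two_stats(*np.split(rng.permutation(pooled), [n]))
                      for _ in range(n_perm)])
    obs = np.array(two_stats(x, y))

    # p-value of every draw (and the observed data) for each statistic
    all_rows = np.vstack([draws, obs])
    pvals = np.array([(1 + np.sum(draws >= row, axis=0)) / (n_perm + 1)
                      for row in all_rows])
    fisher = -2 * np.log(pvals).sum(axis=1)   # larger = more significant
    minp = pvals.min(axis=1)                  # smaller = more significant
    p_fisher = np.mean(fisher[:-1] >= fisher[-1])
    p_minp = np.mean(minp[:-1] <= minp[-1])
    return p_fisher, p_minp

rng = np.random.default_rng(5)
print(combined_pvalue_test(rng.normal(0.5, 1, 50), rng.normal(0, 1, 50)))
```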
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
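Prospective power analyses of the kind recommended here can be run directly with standard tools. The sketch below assumes a two-sample t-test and illustrative design values (Cohen's d = 0.5, 80% power, α = 0.05):

```python
from statsmodels.stats.power import TTestIndPower

# A priori analysis: given the minimum biologically significant effect
# size, solve for the sample size per group. Values are illustrative.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"samples per group: {n_per_group:.1f}")   # ~64 for d = 0.5

# The same tool inverts the relationship for a completed study: the
# effect size detectable with, say, n = 30 per group at 80% power.
d_detectable = analysis.solve_power(nobs1=30, power=0.8, alpha=0.05)
print(f"detectable effect size: {d_detectable:.2f}")
```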
A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring.
Takahashi, Kunihiko; Kulldorff, Martin; Tango, Toshiro; Yih, Katherine
2008-04-11
Early detection of disease outbreaks enables public health officials to implement disease control and prevention measures at the earliest possible time. A time periodic geographical disease surveillance system based on a cylindrical space-time scan statistic has been used extensively for disease surveillance along with the SaTScan software. In the purely spatial setting, many different methods have been proposed to detect spatial disease clusters. In particular, some spatial scan statistics are aimed at detecting irregularly shaped clusters which may not be detected by the circular spatial scan statistic. Based on the flexible purely spatial scan statistic, we propose a flexibly shaped space-time scan statistic for early detection of disease outbreaks. The performance of the proposed space-time scan statistic is compared with that of the cylindrical scan statistic using benchmark data. In order to compare their performances, we have developed a space-time power distribution by extending the purely spatial bivariate power distribution. Daily syndromic surveillance data in Massachusetts, USA, are used to illustrate the proposed test statistic. The flexible space-time scan statistic is well suited for detecting and monitoring disease outbreaks in irregularly shaped areas.
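For orientation, here is a stripped-down version of the cylindrical space-time scan that the flexible statistic generalizes: scan circles around candidate centers crossed with recent time windows, and keep the cylinder maximizing Kulldorff's Poisson likelihood ratio. The function name, the coarse radius grid, and the omission of the Monte Carlo significance step are all simplifying assumptions.

```python
import numpy as np

def cylindrical_scan(xy, t, cases, pop, max_radius, max_days):
    """Minimal cylindrical space-time scan sketch. For each circle
    center (taken at data locations), radius, and time window ending at
    the most recent day, compute Kulldorff's Poisson likelihood ratio
    comparing observed and expected cases inside the cylinder."""
    C, N = cases.sum(), pop.sum()
    expected_rate = C / N
    best = (0.0, None)
    for i in range(len(xy)):                      # candidate centers
        dist = np.hypot(*(xy - xy[i]).T)
        for r in np.linspace(max_radius / 4, max_radius, 4):
            for w in range(1, max_days + 1):      # windows ending "now"
                inside = (dist <= r) & (t >= t.max() - w + 1)
                c = cases[inside].sum()
                e = expected_rate * pop[inside].sum()
                if c > e > 0 and c < C:           # elevated risk inside
                    llr = (c * np.log(c / e)
                           + (C - c) * np.log((C - c) / (C - e)))
                    if llr > best[0]:
                        best = (llr, (i, r, w))
    return best  # significance would come from Monte Carlo replication
```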
Ranft, Ulrich; Miskovic, Peter; Pesch, Beate; Jakubis, Pavel; Fabianova, Elenora; Keegan, Tom; Hergemöller, Andre; Jakubis, Marian; Nieuwenhuijsen, Mark J
2003-06-01
To assess the arsenic exposure of a population living in the vicinity of a coal-burning power plant with high arsenic emission in the Prievidza District, Slovakia, 548 spot urine samples were speciated for inorganic As (Asinorg), monomethylarsonic acid (MMA), dimethylarsinic acid (DMA), and their sum (Assum). The urine samples were collected from the population of a case-control study on nonmelanoma skin cancer (NMSC). A total of 411 samples with complete As speciations and sufficient urine quality and without fish consumption were used for statistical analysis. Although current environmental As exposure and urinary As concentrations were low (median As in soil within 5 km distance to the power plant, 41 µg/g; median urinary Assum, 5.8 µg/L), there was a significant but weak association between As in soil and urinary Assum (r = 0.21, p < 0.01). We performed a multivariate regression analysis to calculate adjusted regression coefficients for environmental As exposure and other determinants of urinary As. Persons living in the vicinity of the plant had 27% higher Assum values (p < 0.01), based on elevated concentrations of the methylated species. A 32% increase of MMA occurred among subjects who consumed homegrown food (p < 0.001). NMSC cases had significantly higher levels of Assum, DMA, and Asinorg. The methylation index Asinorg/(MMA + DMA) was about 20% lower among cases (p < 0.05) and in men (p < 0.05) compared with controls and females, respectively.
The renormalization group method in statistical hydrodynamics
NASA Astrophysics Data System (ADS)
Eyink, Gregory L.
1994-09-01
This paper gives a first-principles formulation of a renormalization group (RG) method appropriate to the study of turbulence in incompressible fluids governed by the Navier-Stokes equations. The present method is a momentum-shell RG of Kadanoff-Wilson type based upon the Martin-Siggia-Rose (MSR) field-theory formulation of stochastic dynamics. A simple set of diagrammatic rules is developed that is exact within perturbation theory (unlike the well-known Ma-Mazenko prescriptions). It is also shown that the claim of Yakhot and Orszag (1986), that higher-order terms are irrelevant in the ɛ-expansion RG for randomly forced Navier-Stokes (RFNS) equations with power-law force spectrum F̂(k) = D0 k^(-d+(4-ɛ)), is false. In fact, as a consequence of Galilei covariance, there are an infinite number of higher-order nonlinear terms marginal by power counting in the RG analysis of the power-law RFNS, even when ɛ ≪ 4. The difficulty does not occur in the Forster-Nelson-Stephen (FNS) RG analysis of thermal fluctuations in an equilibrium NS fluid, which justifies a linear regression law for d ≳ 2. On the other hand, the problem also occurs at the nontrivial fixed point in the FNS Model A, or its Burgers analog, when d < 2. The marginal terms can still be present at the strong-coupling fixed point in true NS turbulence. If so, infinitely many fixed points may exist in turbulence and be associated with a somewhat surprising phenomenon: nonuniversality of the inertial-range scaling laws depending upon the dissipation-range dynamics.
Inferring Demographic History Using Two-Locus Statistics.
Ragsdale, Aaron P; Gutenkunst, Ryan N
2017-06-01
Population demographic history may be learned from contemporary genetic variation data. Methods based on aggregating the statistics of many single loci into an allele frequency spectrum (AFS) have proven powerful, but such methods ignore potentially informative patterns of linkage disequilibrium (LD) between neighboring loci. To leverage such patterns, we developed a composite-likelihood framework for inferring demographic history from aggregated statistics of pairs of loci. Using this framework, we show that two-locus statistics are more sensitive to demographic history than single-locus statistics such as the AFS. In particular, two-locus statistics escape the notorious confounding of depth and duration of a bottleneck, and they provide a means to estimate effective population size based on the recombination rather than mutation rate. We applied our approach to a Zambian population of Drosophila melanogaster. Notably, using both single- and two-locus statistics, we inferred a substantially lower ancestral effective population size than previous works and did not infer a bottleneck history. Together, our results demonstrate the broad potential for two-locus statistics to enable powerful population genetic inference. Copyright © 2017 by the Genetics Society of America.
The influence of control group reproduction on the statistical ...
Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is fecundity of breeding pairs of medaka. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g., mean fecundity, variance, and days with no egg production) will have on the statistical power of the test. A software tool, the MEOGRT Reproduction Power Analysis Tool, was developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g., population mean and variance) and performing the power analysis under user-specified scenarios. The manuscript illustrates how the reproductive performance of the control medaka that are used in a MEOGRT influences statistical power, and therefore the successful implementation of the protocol. Example scenarios, based upon medaka reproduction data collected at MED, are discussed that bolster the recommendation that facilities planning to implement the MEOGRT should have a culture of medaka with high fecundity.
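The flavor of such a power analysis can be conveyed with a minimal Monte Carlo sketch. This is not the MEOGRT Reproduction Power Analysis Tool itself; the fecundity model, parameter values, and test below are illustrative assumptions:

```python
# Minimal Monte Carlo sketch of a reproduction power analysis: simulate
# control and treated breeding-pair fecundity and estimate the power of
# a t-test on pair means for a chosen number of replicates.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def sim_pair(mean_eggs, p_zero_day, days=21):
    """Mean daily egg count for one pair: Poisson days plus zero days."""
    eggs = rng.poisson(mean_eggs, size=days)
    eggs[rng.random(days) < p_zero_day] = 0   # days with no egg production
    return eggs.mean()

def power(n_pairs, ctrl_mean=25, trt_mean=20, p_zero=0.1, n_sim=2000):
    hits = 0
    for _ in range(n_sim):
        ctrl = [sim_pair(ctrl_mean, p_zero) for _ in range(n_pairs)]
        trt = [sim_pair(trt_mean, p_zero) for _ in range(n_pairs)]
        if stats.ttest_ind(ctrl, trt).pvalue < 0.05:
            hits += 1
    return hits / n_sim

for n in (6, 12, 24):
    print(f"{n} pairs per group: power = {power(n):.2f}")
```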
Effect of Nd: YAG laser irradiation on surface properties and bond strength of zirconia ceramics.
Liu, Li; Liu, Suogang; Song, Xiaomeng; Zhu, Qingping; Zhang, Wei
2015-02-01
This study investigated the effect of neodymium-doped yttrium aluminum garnet (Nd: YAG) laser irradiation on the surface properties and bond strength of zirconia ceramics. Zirconia ceramic specimens were divided into 11 groups according to surface treatment: one control group (no treatment), one air abrasion group, and nine laser groups (Nd: YAG irradiation) defined by combinations of output power (1, 2, or 3 W) and irradiation time (30, 60, or 90 s). Following the surface treatments, the morphological characteristics of the ceramic pieces were observed and the surface roughness was measured. All specimens were then bonded with resin cement, stored in water for 24 h, and additionally aged by thermocycling, after which the shear bond strength was measured. Dunnett's t test and one-way ANOVA were performed as the statistical analyses for the surface roughness and the shear bond strength, respectively, with α = .05. A rougher ceramic surface could be obtained by laser irradiation with higher output power (2 and 3 W); however, cracks and defects were also found on the material surface. The shear bond strength of the laser groups was not appreciably increased, and it was significantly lower than that of the air abrasion group. No significant differences in shear bond strength were found among laser groups treated with different output powers or irradiation times. Nd: YAG laser irradiation cannot improve the surface properties of zirconia ceramics and cannot increase the bond strength of the ceramics. Increasing irradiation power and extending irradiation time cannot induce higher bond strength and may cause material defects.
Libiger, Ondrej; Schork, Nicholas J.
2015-01-01
It is now feasible to examine the composition and diversity of microbial communities (i.e., “microbiomes”) that populate different human organs and orifices using DNA sequencing and related technologies. To explore the potential links between changes in microbial communities and various diseases in the human body, it is essential to test associations involving different species within and across microbiomes, environmental settings and disease states. Although a number of statistical techniques exist for carrying out relevant analyses, it is unclear which of these techniques exhibit the greatest statistical power to detect associations given the complexity of most microbiome datasets. We compared the statistical power of principal component regression, partial least squares regression, regularized regression, distance-based regression, Hill's diversity measures, and a modified test implemented in the popular and widely used microbiome analysis methodology “Metastats” across a wide range of simulated scenarios involving changes in feature abundance between two sets of metagenomic samples. For this purpose, simulation studies were used to change the abundance of microbial species in a real dataset from a published study examining human hands. Each technique was applied to the same data, and its ability to detect the simulated change in abundance was assessed. We hypothesized that a small subset of methods would outperform the rest in terms of the statistical power. Indeed, we found that the Metastats technique modified to accommodate multivariate analysis and partial least squares regression yielded high power under the models and data sets we studied. The statistical power of diversity measure-based tests, distance-based regression and regularized regression was significantly lower. Our results provide insight into powerful analysis strategies that utilize information on species counts from large microbiome data sets exhibiting skewed frequency distributions obtained on a small to moderate number of samples. PMID:26734061
Statistical modeling to support power system planning
NASA Astrophysics Data System (ADS)
Staid, Andrea
This dissertation focuses on data-analytic approaches that improve our understanding of power system applications to promote better decision-making. It tackles issues of risk analysis, uncertainty management, resource estimation, and the impacts of climate change. Tools of data mining and statistical modeling are used to bring new insight to a variety of complex problems facing today's power system. The overarching goal of this research is to improve the understanding of the power system risk environment for improved operation, investment, and planning decisions. The first chapter introduces some challenges faced in planning for a sustainable power system. Chapter 2 analyzes the driving factors behind the disparity in wind energy investments among states with a goal of determining the impact that state-level policies have on incentivizing wind energy. Findings show that policy differences do not explain the disparities; physical and geographical factors are more important. Chapter 3 extends conventional wind forecasting to a risk-based focus of predicting maximum wind speeds, which are dangerous for offshore operations. Statistical models are presented that issue probabilistic predictions for the highest wind speed expected in a three-hour interval. These models achieve a high degree of accuracy and their use can improve safety and reliability in practice. Chapter 4 examines the challenges of wind power estimation for onshore wind farms. Several methods for wind power resource assessment are compared, and the weaknesses of the Jensen model are demonstrated. For two onshore farms, statistical models outperform other methods, even when very little information is known about the wind farm. Lastly, chapter 5 focuses on the power system more broadly in the context of the risks expected from tropical cyclones in a changing climate. Risks to U.S. power system infrastructure are simulated under different scenarios of tropical cyclone behavior that may result from climate change. The scenario-based approach allows me to address the deep uncertainty present by quantifying the range of impacts, identifying the most critical parameters, and assessing the sensitivity of local areas to a changing risk. Overall, this body of work quantifies the uncertainties present in several operational and planning decisions for power system applications.
NASA Astrophysics Data System (ADS)
Simatos, N.; Perivolaropoulos, L.
2001-01-01
We use the publicly available code CMBFAST, as modified by Pogosian and Vachaspati, to simulate the effects of wiggly cosmic strings on the cosmic microwave background (CMB). Using the modified CMBFAST code, which takes into account vector modes and models wiggly cosmic strings by the one-scale model, we go beyond the angular power spectrum to construct CMB temperature maps with a resolution of a few degrees. The statistics of these maps are then studied using conventional and recently proposed statistical tests optimized for the detection of hidden temperature discontinuities induced by the Gott-Kaiser-Stebbins effect. We show, however, that these realistic maps cannot be distinguished in a statistically significant way from purely Gaussian maps with an identical power spectrum.
NASA Astrophysics Data System (ADS)
Nunhokee, C. D.; Bernardi, G.; Kohn, S. A.; Aguirre, J. E.; Thyagarajan, N.; Dillon, J. S.; Foster, G.; Grobler, T. L.; Martinot, J. Z. E.; Parsons, A. R.
2017-10-01
A critical challenge in the observation of the redshifted 21 cm line is its separation from bright Galactic and extragalactic foregrounds. In particular, the instrumental leakage of polarized foregrounds, which undergo significant Faraday rotation as they propagate through the interstellar medium, may harmfully contaminate the 21 cm power spectrum. We develop a formalism to describe the leakage due to instrumental widefield effects in visibility-based power spectra measured with redundant arrays, extending the delay-spectrum approach presented in Parsons et al. We construct polarized sky models and propagate them through the instrument model to simulate realistic full-sky observations with the Precision Array to Probe the Epoch of Reionization. We find that the leakage due to a population of polarized point sources is expected to be higher than diffuse Galactic polarization at any k mode for a 30 m reference baseline. For the same reference baseline, a foreground-free window at k > 0.3 h Mpc-1 can be defined in terms of leakage from diffuse Galactic polarization even under the most pessimistic assumptions. If measurements of polarized foreground power spectra or a model of polarized foregrounds are given, our method is able to predict the polarization leakage in actual 21 cm observations, potentially enabling its statistical subtraction from the measured 21 cm power spectrum.
ERIC Educational Resources Information Center
Center for Education Statistics (ED/OERI), Washington, DC.
The Financial Statistics machine-readable data file (MRDF) is a subfile of the larger Higher Education General Information Survey (HEGIS). It contains basic financial statistics for over 3,000 institutions of higher education in the United States and its territories. The data are arranged sequentially by institution, with institutional…
Hammond, David S; Wallman, Josh; Wildsoet, Christine F
2014-01-01
Purpose: Young eyes compensate for the defocus imposed by spectacle lenses by changing their rate of elongation and their choroidal thickness, bringing their refractive status back to the pre-lens condition. We asked whether the initial rate of change either in the ocular components or in refraction is a function of the power of the lenses worn, a result that would be consistent with the existence of a proportional controller mechanism. Methods: Two separate studies were conducted; both tracked changes in refractive errors and ocular dimensions. Study A: to study the effects of lens power and sign, young chicks were tracked for 4 days after they were fitted with positive (+5, +10 or +15 D) or negative (−5, −10, −15 D) lenses over one eye. In another experiment, biometric changes to plano, +1, +2 and +3 D lenses were tracked over a 24 h treatment period. Study B: normal emmetropisation was tracked from hatching to 6 days of age; a defocusing lens, either +6 D or −7 D, was then fitted over one eye and additional biometric data were collected after 48 h. Results: In Study A, animals treated with positive lenses (+5, +10 or +15 D) showed statistically similar initial choroidal responses, with a mean thickening of 24 μm h−1 over the first 5 h. Likewise, with the low-power positive lenses, a statistically similar magnitude of choroidal thickening was observed across groups (+1 D: 46.0 ± 7.8 μm h−1; +2 D: 53.5 ± 9.9 μm h−1; +3 D: 53.3 ± 24.1 μm h−1) in the first hour of lens wear compared to that of a plano control group. These similar rates of change in choroidal thickness indicate that the signalling response is binary in nature and not influenced by the magnitude of the myopic defocus. Treatments with −5, −10 and −15 D lenses induced statistically similar amounts of choroidal thinning, averaging −70 ± 15 μm after 5 h and −96 ± 45 μm after 24 h. Similar rates of inner axial length change were also seen with these lens treatments until compensation was reached, once again indicating that the signalling response is not influenced by the magnitude of hyperopic defocus. In Study B, after 48 h of +6 D lens treatment, the average refractive error and choroidal changes were larger in magnitude than expected if perfect compensation had taken place, with a +2.4 D overshoot in refractive compensation. Conclusion: Taken together, our results with both weak and higher-power positive lenses suggest that eye growth is guided more by the sign than by the magnitude of the defocus, and our results for higher-power negative lenses support a similar conclusion. These behaviour patterns and the overshoot seen in Study B are more consistent with the behaviour of a bang-bang controller than a proportional controller. PMID:23662956
Low power and type II errors in recent ophthalmology research.
Khan, Zainab; Milko, Jordan; Iqbal, Munir; Masri, Moness; Almeida, David R P
2016-10-01
To investigate the power of unpaired t tests in prospective, randomized controlled trials when these tests failed to detect a statistically significant difference, and to determine the frequency of type II errors. Systematic review and meta-analysis. We examined all prospective, randomized controlled trials published between 2010 and 2012 in 4 major ophthalmology journals (Archives of Ophthalmology, British Journal of Ophthalmology, Ophthalmology, and American Journal of Ophthalmology). Studies that used unpaired t tests were included. Power was calculated using the number of subjects in each group, standard deviations, and α = 0.05. The difference between control and experimental means was set to be (1) 20% and (2) 50% of the absolute value of the control's initial conditions. Power and Precision version 4.0 software was used to carry out calculations. Finally, the proportion of articles with type II errors was calculated. β = 0.3 was set as the largest acceptable value for the probability of type II errors. In total, 280 articles were screened. Final analysis included 50 prospective, randomized controlled trials using unpaired t tests. The median power of tests to detect a 50% difference between means was 0.9 and was the same for all 4 journals regardless of the statistical significance of the test. The median power of tests to detect a 20% difference between means ranged from 0.26 to 0.9 for the 4 journals. The median power of these tests to detect a 50% and 20% difference between means was 0.9 and 0.5 for tests that did not achieve statistical significance. A total of 14% and 57% of articles with negative unpaired t tests contained results with β > 0.3 when power was calculated for differences between means of 50% and 20%, respectively. A large portion of studies demonstrate high probabilities of type II errors when detecting small differences between means. The power to detect small differences between means varies across journals. It is, therefore, worthwhile for authors to mention the minimum clinically important difference for individual studies. Journals can consider publishing statistical guidelines for authors to use. Day-to-day clinical decisions rely heavily on the evidence base formed by the plethora of studies available to clinicians. Prospective, randomized controlled clinical trials are highly regarded as a robust study design and are used to make important clinical decisions that directly affect patient care. The quality of study designs and statistical methods in major clinical journals is improving over time, and researchers and journals are being more attentive to the statistical methodologies incorporated by studies. The results of well-designed ophthalmic studies with robust methodologies, therefore, have the ability to modify the ways in which diseases are managed. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
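The power calculation described above can be reproduced with standard software. The sketch below uses an assumed control mean, pooled standard deviation, and group size rather than values from any surveyed trial, and computes the power of an unpaired t-test to detect differences equal to 20% and 50% of the control mean:

```python
# Post hoc power of an unpaired t-test for differences equal to 20% and
# 50% of the control mean, as in the review above; the numbers here are
# illustrative, not taken from any of the surveyed trials.
from statsmodels.stats.power import TTestIndPower

ctrl_mean, sd = 10.0, 4.0     # assumed control mean and pooled SD
n1 = n2 = 30                  # subjects per arm

for frac in (0.20, 0.50):
    d = (frac * ctrl_mean) / sd          # standardized effect size
    pw = TTestIndPower().power(effect_size=d, nobs1=n1,
                               ratio=n2 / n1, alpha=0.05)
    print(f"{int(frac * 100)}% difference: power = {pw:.2f}")
```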
Extraction of phase information in daily stock prices
NASA Astrophysics Data System (ADS)
Fujiwara, Yoshi; Maekawa, Satoshi
2000-06-01
It is known that, on an intermediate time-scale such as days, stock market fluctuations possess several statistical properties that are common to different markets. Namely, logarithmic returns of an asset price have (i) a truncated Pareto-Lévy distribution, (ii) vanishing linear correlation, and (iii) volatility clustering with power-law autocorrelation. Fact (ii) is a consequence of the nonexistence of arbitragers with simple strategies, but this does not mean statistical independence of market fluctuations. Little attention has been paid to the temporal structure of higher-order statistics, although it contains important information on market dynamics. We applied a signal separation technique, called Independent Component Analysis (ICA), to actual data of daily stock prices on the Tokyo and New York Stock Exchanges (TSE/NYSE). ICA applies a linear transformation to lag vectors from the time series to find independent components by a nonlinear algorithm. We obtained a similar impulse response for these datasets. If the process were a martingale, it can be shown that the impulse response should be a delta function under a few conditions that can be checked numerically, as was verified by surrogate data. This result would provide information on market dynamics, including speculative bubbles and arbitrating processes.
Quantum work in the Bohmian framework
NASA Astrophysics Data System (ADS)
Sampaio, R.; Suomela, S.; Ala-Nissila, T.; Anders, J.; Philbin, T. G.
2018-01-01
At nonzero temperature classical systems exhibit statistical fluctuations of thermodynamic quantities arising from the variation of the system's initial conditions and its interaction with the environment. The fluctuating work, for example, is characterized by the ensemble of system trajectories in phase space and, by including the probabilities for various trajectories to occur, a work distribution can be constructed. However, without phase-space trajectories, the task of constructing a work probability distribution in the quantum regime has proven elusive. Here we use quantum trajectories in phase space and define fluctuating work as power integrated along the trajectories, in complete analogy to classical statistical physics. The resulting work probability distribution is valid for any quantum evolution, including cases with coherences in the energy basis. We demonstrate the quantum work probability distribution and its properties with an exactly solvable example of a driven quantum harmonic oscillator. An important feature of the work distribution is its dependence on the initial statistical mixture of pure states, which is reflected in higher moments of the work. The proposed approach introduces a fundamentally different perspective on quantum thermodynamics, allowing full thermodynamic characterization of the dynamics of quantum systems, including the measurement process.
Patterson, Megan S; Goodson, Patricia
2017-05-01
Compulsive exercise, a form of unhealthy exercise often associated with prioritizing exercise and feeling guilty when exercise is missed, is a common precursor to and symptom of eating disorders. College-aged women are at high risk of exercising compulsively compared with other groups. Social network analysis (SNA) is a theoretical perspective and methodology allowing researchers to observe the effects of relational dynamics on the behaviors of people. SNA was used to assess the relationship between compulsive exercise and body dissatisfaction, physical activity, and network variables. Descriptive statistics were conducted using SPSS, and quadratic assignment procedure (QAP) analyses were conducted using UCINET. QAP regression analysis revealed a statistically significant model (R² = .375, P < .0001) predicting compulsive exercise behavior. Physical activity, body dissatisfaction, and network variables were statistically significant predictor variables in the QAP regression model. In our sample, women who are connected to "important" or "powerful" people in their network are likely to have higher compulsive exercise scores. This result provides healthcare practitioners key target points for intervention within similar groups of women. For scholars researching eating disorders and associated behaviors, this study supports looking into group dynamics and network structure in conjunction with body dissatisfaction and exercise frequency.
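For readers unfamiliar with QAP, the following minimal sketch shows the core permutation logic for two network matrices; it illustrates the general procedure rather than UCINET's exact routines, and the example matrices are synthetic:

```python
# Minimal quadratic assignment procedure (QAP) correlation sketch: the
# observed association between two network matrices is compared against
# associations after random simultaneous row/column permutations.
import numpy as np

def qap_corr(A, B, n_perm=2000, seed=0):
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    mask = ~np.eye(n, dtype=bool)             # ignore the diagonal
    obs = np.corrcoef(A[mask], B[mask])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        Bp = B[np.ix_(p, p)]                  # relabel the nodes of B
        if abs(np.corrcoef(A[mask], Bp[mask])[0, 1]) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
n = 30
A = (rng.random((n, n)) < 0.2).astype(float)  # e.g. a tie/relation matrix
B = A * 0.8 + rng.random((n, n)) * 0.4        # a related relation
obs, p = qap_corr(A, B)
print(f"QAP correlation = {obs:.2f}, p = {p:.4f}")
```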
Kempton, Thomas; Sirotic, Anita C; Coutts, Aaron J
2017-04-01
To examine differences in physical and technical performance profiles using a large sample of match observations drawn from successful and less-successful professional rugby league teams. Match activity profiles were collected using global positioning satellite (GPS) technology from 29 players from a successful rugby league team during 24 games and 25 players from a less-successful team during 18 games throughout 2 separate competition seasons. Technical performance data were obtained from a commercial statistics provider. A progressive magnitude-based statistical approach was used to compare differences in physical and technical performance variables between the reference teams. There were no clear differences in playing time, absolute and relative total distances, or low-speed running distances between successful and less-successful teams. The successful team possibly to very likely had lower higher-speed running demands and likely had fewer physical collisions than the less-successful team, although they likely to most likely demonstrated more accelerations and decelerations and likely had higher average metabolic power. The successful team very likely gained more territory in attack, very likely had more possessions, and likely committed fewer errors. In contrast, the less-successful team was likely required to attempt more tackles, most likely missed more tackles, and very likely had a lower effective tackle percentage. In the current study, successful match performance was not contingent on higher match running outputs or more physical collisions; rather, proficiency in technical performance components better differentiated successful and less-successful teams.
Comparative efficacy of two battery-powered toothbrushes on dental plaque removal.
Ruhlman, C Douglas; Bartizek, Robert D; Biesbrock, Aaron R
2002-01-01
A number of clinical studies have consistently demonstrated that power toothbrushes deliver superior plaque removal compared to manual toothbrushes. Recently, a new power toothbrush (Crest SpinBrush) has been marketed with a design that fundamentally differs from other marketed power toothbrushes. Other power toothbrushes feature a small, round head designed to oscillate for enhanced cleaning between the teeth and below the gumline. The new power toothbrush incorporates a similar round oscillating head in conjunction with fixed bristles, which allows the user to brush with optimal manual brushing technique. The objective of this randomized, examiner-blind, parallel design study was to compare the plaque removal efficacy of a positive control power toothbrush (Colgate Actibrush) to an experimental toothbrush (Crest SpinBrush) following a single use among 59 subjects. Baseline plaque scores were 1.64 and 1.40 for the experimental toothbrush and control toothbrush treatment groups, respectively. With regard to all surfaces examined, the experimental toothbrush delivered an adjusted (via analysis of covariance) mean difference between baseline and post-brushing plaque scores of 0.47, while the control toothbrush delivered an adjusted mean difference of 0.33. On average, the difference between toothbrushes was statistically significant (p = 0.013). Because the covariate slope for the experimental group was statistically significantly greater (p = 0.001) than the slope for the control group, a separate slope model was used. Further analysis demonstrated that the experimental group had statistically significantly greater plaque removal than the control group for baseline plaque scores above 1.43. With respect to buccal surfaces, using a separate slope analysis of covariance, the experimental toothbrush delivered an adjusted mean difference between baseline and post-brushing plaque scores of 0.61, while the control toothbrush delivered an adjusted mean difference of 0.39. This difference between toothbrushes was also statistically significant (p = 0.002). On average, the results on lingual surfaces demonstrated similar directional scores favoring the experimental toothbrush; however, these results did not achieve statistical significance. In conclusion, the experimental Crest SpinBrush, with its novel fixed and oscillating bristle design, was found to be more effective than the positive control Colgate Actibrush, which is designed with a small round oscillating cluster of bristles.
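The separate-slope analysis of covariance used above can be expressed compactly with the statsmodels formula API; the simulated data and column names in this sketch are hypothetical stand-ins for the study's plaque scores:

```python
# Separate-slope ANCOVA sketch: the covariate-by-group interaction tests
# whether the slopes differ, and predictions at a chosen baseline give
# the adjusted group difference. Data are simulated, not the study's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 30
baseline = rng.uniform(1.0, 2.2, size=2 * n)
brush = np.repeat(["experimental", "control"], n)
# Simulate a group difference that grows with baseline plaque,
# i.e. unequal covariate slopes (the situation reported above).
slope = np.where(brush == "experimental", 0.45, 0.25)
reduction = slope * baseline + rng.normal(0, 0.1, size=2 * n)
df = pd.DataFrame({"reduction": reduction, "baseline": baseline,
                   "brush": brush})

# 'baseline * brush' expands to main effects plus the interaction;
# a significant interaction indicates unequal covariate slopes.
fit = smf.ols("reduction ~ baseline * brush", data=df).fit()
print(fit.params)

# With separate slopes, the adjusted group difference depends on the
# baseline value at which it is evaluated (e.g. a baseline score of 1.5).
grid = pd.DataFrame({"baseline": [1.5, 1.5],
                     "brush": ["experimental", "control"]})
print(fit.predict(grid))
```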
Torres Silva dos Santos, Alexandre; Moisés Santos e Silva, Cláudio
2013-01-01
Wind speed analyses are currently being employed in several fields, especially in wind power generation. In this study, we used wind speed data from records of Universal Fuess anemographs at an altitude of 10 m from 47 weather stations of the National Institute of Meteorology (Instituto Nacional de Meteorologia-INMET) from January 1986 to December 2011. The objective of the study was to investigate climatological aspects and wind speed trends. To this end, the following methods were used: filling of missing data, descriptive statistical calculations, boxplots, cluster analysis, and trend analysis using the Mann-Kendall statistical method. The seasonal variability of the average wind speeds of each group presented higher values for winter and spring and lower values in the summer and fall. The groups G1, G2, and G5 showed higher annual averages in the interannual variability of wind speeds. These observed peaks were attributed to the El Niño and La Niña events, which change the behavior of global wind circulation and influence wind speeds over the region. Trend analysis showed more significant negative values for the G3, G4, and G5 groups for all seasons of the year and in the annual average for the period under study.
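A minimal implementation of the Mann-Kendall test (ignoring the tie correction) makes the trend analysis above concrete; the wind series in this sketch is synthetic:

```python
# Minimal Mann-Kendall trend test of the kind applied to the seasonal
# and annual wind speed series above; the example series is synthetic.
import numpy as np
from scipy import stats

def mann_kendall(x):
    """Return Kendall's S, the normal-approximation Z, and a two-sided p."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0   # variance assuming no ties
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * stats.norm.sf(abs(z))
    return s, z, p

rng = np.random.default_rng(2)
# 26 years of monthly values with a slight imposed decline.
wind = 4.0 - 0.02 * np.arange(26 * 12) + rng.normal(0, 0.5, 26 * 12)
s, z, p = mann_kendall(wind)
print(f"S = {s:.0f}, Z = {z:.2f}, p = {p:.3g}")  # negative Z: declining trend
```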
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. Neither alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10⁻⁴ or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The goals are to: (A) combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications; (B) quantify the impact of these methods on software reliability; (C) demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a given confidence level; and (D) quantify and justify the reliability estimate for systems developed using various methods.
Koch, Michael; Denzler, Joachim; Redies, Christoph
2010-01-01
Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power-law with increasing spatial frequency (1/f² characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Further on, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matters. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f² characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic perception remains to be investigated. PMID:20808863
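The radially averaged power spectrum at the center of this analysis is easy to compute. The sketch below estimates it and its log-log slope for a synthetic image; a white-noise input is used, so the slope comes out near 0 rather than the -2 expected for natural scenes and artworks:

```python
# Radially averaged (1D) Fourier power spectrum and its log-log slope,
# the quantity behind the 1/f^2 characterization above; the input image
# here is synthetic noise rather than an artwork.
import numpy as np

def radial_power_spectrum(img):
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    y, x = np.indices(power.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    # Mean power in concentric rings of integer radius.
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.bincount(r.ravel())
    return sums / counts

img = np.random.default_rng(3).standard_normal((256, 256))
spec = radial_power_spectrum(img)
freqs = np.arange(1, 120)                  # skip DC, stay below Nyquist
slope = np.polyfit(np.log(freqs), np.log(spec[freqs]), 1)[0]
print(f"log-log slope: {slope:.2f}")       # near 0 for white noise; about
                                           # -2 for natural scenes/artworks
```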
Gulf War Logistics: Theory Into Practice
1995-04-01
Sources are documentary in nature, emphasizing statistics such as tonnage of supplies moved and number of troops sustained in the field. Cited materials include the Gulf War Air Power Survey Statistical Compendium (Vol. 3 of the Gulf War Air Power Survey, Washington: GPO, 1993), an Air Command and Staff College coursebook (Maxwell AFB, 1995), and "Theater Logistics in the Gulf War: August 1990-December 1991".
ERIC Educational Resources Information Center
Hoover, H. D.; Plake, Barbara
The relative power of the Mann-Whitney statistic, the t-statistic, the median test, a test based on exceedances (A,B), and two special cases of (A,B), the Tukey quick test and the revised Tukey quick test, was investigated via a Monte Carlo experiment. These procedures were compared across four population probability models: uniform, beta, normal,…
NASA Astrophysics Data System (ADS)
Xu, Liangfei; Reimer, Uwe; Li, Jianqiu; Huang, Haiyan; Hu, Zunyan; Jiang, Hongliang; Janßen, Holger; Ouyang, Minggao; Lehnert, Werner
2018-02-01
City buses using polymer electrolyte membrane (PEM) fuel cells are considered to be the most likely fuel cell vehicles to be commercialized in China. The technical specifications of the fuel cell systems (FCSs) these buses are equipped with will differ based on the powertrain configurations and vehicle control strategies, but can generally be classified into the power-follow and soft-run modes. Each mode imposes different levels of electrochemical stress on the fuel cells. Evaluating the aging behavior of fuel cell stacks under the conditions encountered in fuel cell buses requires new durability test protocols based on statistical results obtained during actual driving tests. In this study, we propose a systematic design method for fuel cell durability test protocols that correspond to the power-follow mode based on three parameters for different fuel cell load ranges. The powertrain configurations and control strategy are described herein, followed by a presentation of the statistical data for the duty cycles of FCSs in one city bus in the demonstration project. Assessment protocols are presented based on the statistical results using mathematical optimization methods, and are compared to existing protocols with respect to common factors, such as time at open circuit voltage and root-mean-square power.
Higher criticism approach to detect rare variants using whole genome sequencing data
2014-01-01
Because of low statistical power of single-variant tests for whole genome sequencing (WGS) data, the association test for variant groups is a key approach for genetic mapping. To address the features of sparse and weak genetic effects to be detected, the higher criticism (HC) approach has been proposed and theoretically has proven optimal for detecting sparse and weak genetic effects. Here we develop a strategy to apply the HC approach to WGS data that contains rare variants as the majority. By using Genetic Analysis Workshop 18 "dose" genetic data with simulated phenotypes, we assess the performance of HC under a variety of strategies for grouping variants and collapsing rare variants. The HC approach is compared with the minimal p-value method and the sequence kernel association test. The results show that the HC approach is preferred for detecting weak genetic effects. PMID:25519367
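A minimal implementation of the HC statistic itself (in the standard Donoho-Jin form, without the paper's grouping and collapsing strategies) is short; the p-values below are simulated:

```python
# Higher criticism (HC) statistic over a set of per-variant p-values,
# in the standard Donoho-Jin form; grouping and collapsing choices from
# the paper are not reproduced here.
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """HC* = max of the HC objective over the smallest alpha0*n p-values."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    k = max(1, int(alpha0 * n))
    return hc[:k].max()

rng = np.random.default_rng(4)
null_p = rng.uniform(size=1000)              # no signal
mixed_p = null_p.copy()
mixed_p[:20] = rng.uniform(0, 1e-3, 20)      # sparse, weak-ish signals
print(f"HC under null:   {higher_criticism(null_p):.2f}")
print(f"HC with signals: {higher_criticism(mixed_p):.2f}")
```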
Statistical analyses support power law distributions found in neuronal avalanches.
Klaus, Andreas; Yu, Shan; Plenz, Dietmar
2011-01-01
The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
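The model-fitting steps named above (maximum likelihood estimation of the exponent, a Kolmogorov-Smirnov distance for fit assessment) can be sketched in a few lines for the continuous case; the avalanche sizes here are simulated from a known power law with exponent -1.5:

```python
# Continuous maximum-likelihood fit of a power-law exponent with a
# Kolmogorov-Smirnov distance (Clauset-style); sizes are synthetic.
import numpy as np

def fit_power_law(x, xmin):
    """MLE exponent and KS distance for a continuous power law above xmin."""
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = len(x)
    alpha = 1 + n / np.log(x / xmin).sum()          # Hill-type MLE
    xs = np.sort(x)
    emp = np.arange(1, n + 1) / n                   # empirical CDF
    model = 1 - (xs / xmin) ** (1 - alpha)          # fitted CDF
    return alpha, np.abs(emp - model).max()

rng = np.random.default_rng(5)
# Inverse-transform sampling from p(x) ~ x^(-1.5), x >= 1.
sizes = (1 - rng.uniform(size=5000)) ** (-1 / 0.5)
alpha, ks = fit_power_law(sizes, xmin=1.0)
print(f"alpha = {alpha:.2f}, KS distance = {ks:.3f}")
```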
Which soft contact lens power is better for piggyback fitting in keratoconus?
Romero-Jiménez, Miguel; Santodomingo-Rubido, Jacinto; Flores-Rodríguez, Patricia; González-Méijome, Jose Manuel
2013-02-01
To evaluate the impact of different soft contact lens powers on the anterior corneal curvature and regularity in subjects with keratoconus. Nineteen subjects (30 eyes) with keratoconus were included in the study. Six corneal topographies were taken with the Pentacam Eye System over the naked eye and successively with soft lens (Senofilcon A) powers of -3.00, -1.50, 0.00, +1.50 and +3.00 D. Corneal measurements of mean central keratometry (MCK), maximum tangential curvature (TK), maximum front elevation (MFE) and eccentricity (Ecc) at 6 and 8 mm diameters, as well as anterior corneal surface higher-order aberrations (i.e., total RMS, spherical- and coma-like and secondary astigmatism), were evaluated. Negative- and plano-powered soft lenses flattened MCK (p<0.05 in all cases), whereas positive-powered lenses did not induce any significant changes (p>0.05 in all cases) in comparison to the naked eye. The TK power decreased with negative lenses (p<0.05 in both cases) and increased with +3.00 D lenses (p=0.03) in comparison to the naked eye. No statistically significant differences were found in MFE with any soft lens power in comparison to the naked eye (p>0.05 in all cases). Corneal eccentricity increased at 8 mm diameter for all lens powers (p<0.05 in all cases). No statistically significant differences were found in HOA RMS or spherical-like aberration (both p>0.05). Statistically significant differences were found in coma-like aberration and secondary astigmatism (both p<0.05). Negative-powered soft contact lenses provide a flatter anterior surface in comparison to positive-powered lenses in subjects with keratoconus and thus might be more suitable for piggyback contact lens fitting. Copyright © 2012 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.
Statistics Report on TEQSA Registered Higher Education Providers
ERIC Educational Resources Information Center
Australian Government Tertiary Education Quality and Standards Agency, 2015
2015-01-01
This statistics report provides a comprehensive snapshot of national statistics on all parts of the sector for the year 2013, by bringing together data collected directly by TEQSA with data sourced from the main higher education statistics collections managed by the Australian Government Department of Education and Training. The report provides…
An efficient study design to test parent-of-origin effects in family trios.
Yu, Xiaobo; Chen, Gao; Feng, Rui
2017-11-01
Increasing evidence has shown that genes may cause prenatal, neonatal, and pediatric diseases depending on their parental origins. Statistical models that incorporate parent-of-origin effects (POEs) can improve the power of detecting disease-associated genes and help explain the missing heritability of diseases. In many studies, children have been sequenced for genome-wide association testing, but it may become unaffordable to sequence their parents and evaluate POEs. Motivated by this reality, we proposed a budget-friendly study design of sequencing children and only genotyping their parents through single nucleotide polymorphism arrays. We developed a powerful likelihood-based method, which takes into account both sequence reads and linkage disequilibrium to infer the parental origins of children's alleles and estimate their POEs on the outcome. We evaluated the performance of our proposed method and compared it with an existing method using only genotypes, through extensive simulations. Our method showed higher power than the genotype-based method. When either the mean read depth or the pair-end length was reasonably large, our method achieved ideal power. When single parents' genotypes were unavailable or parental genotypes at the testing locus were not typed, both methods lost power compared with when complete data were available; but the power loss from our method was smaller than that from the genotype-based method. We also extended our method to accommodate mixed genotype, low-, and high-coverage sequence data from children and their parents. In the presence of sequence errors, low-coverage parental sequence data may lead to lower power than parental genotype data. © 2017 WILEY PERIODICALS, INC.
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Nishimichi, Takahiro; Li, Baojiu; Higuchi, Yuichi
2017-04-01
We investigate the information content of various cosmic shear statistics on the theory of gravity. Focusing on the Hu-Sawicki-type f(R) model, we perform a set of ray-tracing simulations and measure the convergence bispectrum, peak counts and Minkowski functionals. We first show that while the convergence power spectrum does have sensitivity to the current value of extra scalar degree of freedom |fR0|, it is largely compensated by a change in the present density amplitude parameter σ8 and the matter density parameter Ωm0. With accurate covariance matrices obtained from 1000 lensing simulations, we then examine the constraining power of the three additional statistics. We find that these probes are indeed helpful to break the parameter degeneracy, which cannot be resolved from the power spectrum alone. We show that especially the peak counts and Minkowski functionals have the potential to rigorously (marginally) detect the signature of modified gravity with the parameter |fR0| as small as 10-5 (10-6) if we can properly model them on small (˜1 arcmin) scale in a future survey with a sky coverage of 1500 deg2. We also show that the signal level is similar among the additional three statistics and all of them provide complementary information to the power spectrum. These findings indicate the importance of combining multiple probes beyond the standard power spectrum analysis to detect possible modifications to general relativity.
[The application of the prospective space-time statistic in early warning of infectious disease].
Yin, Fei; Li, Xiao-Song; Feng, Zi-Jian; Ma, Jia-Qi
2007-06-01
To investigate the application of the prospective space-time scan statistic in the early stage of detecting infectious disease outbreaks. The prospective space-time scan statistic was tested by mimicking daily prospective analyses of bacillary dysentery data of Chengdu city in 2005 (3212 cases in 102 towns and villages), and the results were compared with those of the purely temporal scan statistic. The prospective space-time scan statistic could give specific messages in both space and time. The results for June indicated that the prospective space-time scan statistic could timely detect the outbreaks that started from the local site, and the early warning message was powerful (P = 0.007). The early warning from the purely temporal scan statistic came two days later, and the signal was less powerful (P = 0.039). The prospective space-time scan statistic could make full use of the spatial and temporal information in infectious disease data and could timely and effectively detect outbreaks that start from local sites. The prospective space-time scan statistic could be an important tool for local and national CDC to set up early detection surveillance systems.
Venter, Anre; Maxwell, Scott E; Bolig, Erika
2002-06-01
Adding a pretest as a covariate to a randomized posttest-only design increases statistical power, as does the addition of intermediate time points to a randomized pretest-posttest design. Although typically 5 waves of data are required in this instance to produce meaningful gains in power, a 3-wave intensive design allows the evaluation of the straight-line growth model and may reduce the effect of missing data. The authors identify the statistically most powerful method of data analysis in the 3-wave intensive design. If straight-line growth is assumed, the pretest-posttest slope must assume fairly extreme values for the intermediate time point to increase power beyond the standard analysis of covariance on the posttest with the pretest as covariate, ignoring the intermediate time point.
Barry, Robert L.; Klassen, L. Martyn; Williams, Joy M.; Menon, Ravi S.
2008-01-01
A troublesome source of physiological noise in functional magnetic resonance imaging (fMRI) is due to the spatio-temporal modulation of the magnetic field in the brain caused by normal subject respiration. fMRI data acquired using echo-planar imaging is very sensitive to these respiratory-induced frequency offsets, which cause significant geometric distortions in images. Because these effects increase with main magnetic field, they can nullify the gains in statistical power expected by the use of higher magnetic fields. As a study of existing navigator correction techniques for echo-planar fMRI has shown that further improvements can be made in the suppression of respiratory-induced physiological noise, a new hybrid two-dimensional (2D) navigator is proposed. Using a priori knowledge of the slow spatial variations of these induced frequency offsets, 2D field maps are constructed for each shot using spatial frequencies between ±0.5 cm−1 in k-space. For multi-shot fMRI experiments, we estimate that the improvement of hybrid 2D navigator correction over the best performance of one-dimensional navigator echo correction translates into a 15% increase in the volume of activation, 6% and 10% increases in the maximum and average t-statistics, respectively, for regions with high t-statistics, and 71% and 56% increases in the maximum and average t-statistics, respectively, in regions with low t-statistics due to contamination by residual physiological noise. PMID:18024159
High Efficiency Microwave Power Amplifier: From the Lab to Industry
NASA Technical Reports Server (NTRS)
Sims, William Herbert, III; Bell, Joseph L. (Technical Monitor)
2001-01-01
Since the beginnings of space travel, various microwave power amplifier designs have been employed. These included Class-A, -B, and -C bias arrangements. However, a shared limitation of these topologies is the inherently high total consumption of input power associated with the generation of radio frequency (RF)/microwave power. The power amplifier has always been the largest drain on the limited available power of a spacecraft. Typically, the conversion efficiency of a microwave power amplifier is 10 to 20%, so a typical 20-watt microwave power amplifier requires input DC power of at least 100 watts. Such a large demand for input power suggests that a better method of RF/microwave power generation is required. The price paid for using a linear amplifier where high linearity is unnecessary includes higher initial and operating costs, lower DC-to-RF conversion efficiency, higher power consumption, higher power dissipation and the accompanying need for higher-capacity heat removal, and an amplifier that is more prone to parasitic oscillation. The first use of a higher-efficiency mode of power generation was described by Baxandall in 1959. This higher-efficiency mode, Class-D, is achieved through distinct switching techniques that reduce the switching, conduction, and gate-drive losses of a given transistor.
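The arithmetic behind these figures is worth making explicit; the 90% entry in the sketch below is an illustrative target for a switching-mode (Class-D) design, not a measured value:

```python
# Worked example of the input-power cost of low conversion efficiency,
# using the figures quoted above; the 90% case is an illustrative
# switching-mode target, not a measurement.
p_rf = 20.0                       # desired RF output power, watts
for eff in (0.10, 0.20, 0.90):    # Class-A/B/C range vs switching-mode
    p_dc = p_rf / eff             # required DC input power
    print(f"efficiency {eff:.0%}: {p_dc:.0f} W DC in, "
          f"{p_dc - p_rf:.0f} W dissipated as heat")
```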
Bone mineral density and correlation factor analysis in normal Taiwanese children.
Shu, San-Ging
2007-01-01
Our aim was to establish reference data and linear regression equations for lumbar bone mineral density (BMD) in normal Taiwanese children. Several factors influencing lumbar BMD were also investigated. Two hundred fifty-seven healthy children recruited from schools, 136 boys and 121 girls aged 4-18 years, were enrolled on a voluntary basis with written consent. Their height, weight, blood pressure, puberty stage, bone age and lumbar BMD (L2-4) by dual energy x-ray absorptiometry (DEXA) were measured. Data were analyzed using Pearson correlation and stepwise regression tests. All measurements increased with age. Prior to age 8, there was no gender difference. Parameters such as height, weight, and bone age (BA) in girls surpassed those in boys between ages 8-13, without statistical significance (p ≥ 0.05); this was reversed after age 14 for height (p < 0.05). BMD differences showed the same trend but were not statistically significant either. The influence of puberty stage and bone age on BMD was almost equal to or greater than that of height and weight, and all the other factors correlated with BMD to varying degrees. Multiple linear regression equations for boys and girls were formulated. The BMD reference data provided can be used to monitor childhood pathological conditions, although BMD in children with abnormal bone age or pubertal development may need adjustment to ensure accuracy.
Zhao, Huiying; Nyholt, Dale R; Yang, Yuanhao; Wang, Jihua; Yang, Yuedong
2017-06-14
Genome-wide association studies (GWAS) have successfully identified single variants associated with diseases. To increase the power of GWAS, gene-based and pathway-based tests are commonly employed to detect more risk factors. However, gene- and pathway-based association tests may be biased towards genes or pathways containing a large number of single-nucleotide polymorphisms (SNPs) with small P-values caused by high linkage disequilibrium (LD) correlations. To address this bias, numerous pathway-based methods have been developed. Here we propose a novel method, DGAT-path, which divides all SNPs assigned to genes in each pathway into LD blocks and sums the chi-square statistics of the LD blocks to assess the significance of the pathway by permutation tests. The method proved robust, with a type I error rate more than 1.6 times lower than that of other methods. Meanwhile, the method displays higher power and is not biased by pathway size. Applications to GWAS summary statistics for schizophrenia and breast cancer indicate that the detected top pathways contain more genes close to associated SNPs than other methods. As a result, the method identified 17 and 12 significant pathways containing 20 and 21 novel associated genes, respectively, for the two diseases. The method is available online at http://sparks-lab.org/server/DGAT-path .
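A minimal sketch of the block-sum-plus-permutation idea described above, with simulated placeholder statistics rather than the DGAT-path implementation (block counts and the null-generation scheme are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def pathway_stat(chisq_blocks):
    # Sum of per-LD-block chi-square statistics for one pathway.
    return np.sum(chisq_blocks)

# Simulated placeholder: 12 LD blocks, one carrying a true signal.
observed = rng.chisquare(df=1, size=12) + np.array([3.0] + [0.0] * 11)
obs_stat = pathway_stat(observed)

# Toy null: re-draw block statistics under H0 as central chi-squares.
# (DGAT-path itself builds the null by permutation of the data.)
null = np.array([pathway_stat(rng.chisquare(df=1, size=12)) for _ in range(10000)])
p_value = (1 + np.sum(null >= obs_stat)) / (1 + len(null))
print(f"pathway statistic = {obs_stat:.2f}, permutation-style p = {p_value:.4f}")
```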
Zhou, Yunyi; Tao, Chenyang; Lu, Wenlian; Feng, Jianfeng
2018-04-20
Functional connectivity is among the most important tools for studying the brain, and the correlation coefficient between time series of different brain areas is the most popular method for quantifying it. In practice, however, use of the correlation coefficient assumes the data to be temporally independent, whereas brain time series can exhibit significant temporal auto-correlation. We propose a widely applicable method for correcting temporal auto-correlation. We considered the two types of time series models most commonly used in neuroscience studies: (1) the auto-regressive-moving-average model, and (2) a nonlinear dynamical system model with noisy fluctuations, and derived their respective asymptotic distributions of the correlation coefficient, which we show share a unified expression. In numerical experiments, our method robustly controls the type I error while maintaining sufficient statistical power for detecting true correlations, where existing methods measuring association (linear and nonlinear) fail. Employing our method on a real dataset yields a more robust functional network and higher classification accuracy than conventional methods. In this work, we proposed a widely applicable approach for correcting the effect of temporal auto-correlation on functional connectivity; empirical results favor the use of our method in functional network analysis. Copyright © 2018. Published by Elsevier B.V.
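A minimal sketch of one generic correction for AR(1) auto-correlation, via a Bartlett-type effective sample size; this illustrates the problem the abstract addresses, not the authors' exact estimator.

```python
import numpy as np
from scipy import stats

def corrected_corr_test(x, y):
    """Correlation test with an effective-sample-size correction for AR(1)
    autocorrelation (Bartlett-type variance inflation; a generic correction,
    not the paper's estimator)."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    phi_x = np.corrcoef(x[:-1], x[1:])[0, 1]   # lag-1 autocorrelation of x
    phi_y = np.corrcoef(y[:-1], y[1:])[0, 1]   # lag-1 autocorrelation of y
    n_eff = n * (1 - phi_x * phi_y) / (1 + phi_x * phi_y)
    z = np.arctanh(r) * np.sqrt(max(n_eff, 4.0) - 3)  # Fisher z with n_eff
    p = 2 * stats.norm.sf(abs(z))
    return r, n_eff, p

rng = np.random.default_rng(1)
e = rng.standard_normal((2, 500))
x, y = np.zeros(500), np.zeros(500)
for t in range(1, 500):                        # two independent AR(1) series
    x[t] = 0.7 * x[t - 1] + e[0, t]
    y[t] = 0.7 * y[t - 1] + e[1, t]
print(corrected_corr_test(x, y))               # naive test would over-reject here
```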
Distinguishing models of reionization using future radio observations of 21-cm 1-point statistics
NASA Astrophysics Data System (ADS)
Watkinson, C. A.; Pritchard, J. R.
2014-10-01
We explore the impact of reionization topology on 21-cm statistics. Four reionization models are presented which emulate large ionized bubbles around overdense regions (21CMFAST/global-inside-out), small ionized bubbles in overdense regions (local-inside-out), large ionized bubbles around underdense regions (global-outside-in) and small ionized bubbles around underdense regions (local-outside-in). We show that first generation instruments might struggle to distinguish global models using the shape of the power spectrum alone. All instruments considered are capable of breaking this degeneracy with the variance, which is higher in outside-in models. Global models can also be distinguished at small scales from a boost in the power spectrum from a positive correlation between the density and neutral-fraction fields in outside-in models. Negative skewness is found to be unique to inside-out models and we find that pre-Square Kilometre Array (SKA) instruments could detect this feature in maps smoothed to reduce noise errors. The early, mid- and late phases of reionization imprint signatures in the brightness-temperature moments; we examine their model dependence and find pre-SKA instruments capable of exploiting these timing constraints in smoothed maps. The dimensional skewness is introduced and is shown to have stronger signatures of the early and mid-phase timing if the inside-out scenario is correct.
Generalized functional linear models for gene-based case-control association studies.
Fan, Ruzong; Wang, Yifan; Mills, James L; Carter, Tonia C; Lobach, Iryna; Wilson, Alexander F; Bailey-Wilson, Joan E; Weeks, Daniel E; Xiong, Momiao
2014-11-01
By using functional data analysis techniques, we developed generalized functional linear models for testing association between a dichotomous trait and multiple genetic variants in a genetic region while adjusting for covariates. Both fixed and mixed effect models are developed and compared. Extensive simulations show that Rao's efficient score tests of the fixed effect models are very conservative since they generate lower type I errors than nominal levels, and global tests of the mixed effect models generate accurate type I errors. Furthermore, we found that the Rao's efficient score test statistics of the fixed effect models have higher power than the sequence kernel association test (SKAT) and its optimal unified version (SKAT-O) in most cases when the causal variants are both rare and common. When the causal variants are all rare (i.e., minor allele frequencies less than 0.03), the Rao's efficient score test statistics and the global tests have similar or slightly lower power than SKAT and SKAT-O. In practice, it is not known whether rare variants or common variants in a gene region are disease related. All we can assume is that a combination of rare and common variants influences disease susceptibility. Thus, the improved performance of our models when the causal variants are both rare and common shows that the proposed models can be very useful in dissecting complex traits. We compare the performance of our methods with SKAT and SKAT-O on real neural tube defects and Hirschsprung's disease datasets. The Rao's efficient score test statistics and the global tests are more sensitive than SKAT and SKAT-O in the real data analysis. Our methods can be used in either gene-disease genome-wide/exome-wide association studies or candidate gene analyses. © 2014 WILEY PERIODICALS, INC.
NASA Astrophysics Data System (ADS)
Franz, Trenton; Wang, Tiejun
2015-04-01
Approximately 40% of global food production comes from irrigated agriculture. With the increasing demand for food, even greater pressures will be placed on water resources within these systems. In this work we aimed to characterize the spatial and temporal patterns of soil moisture at the field scale (~500 m) using the newly developed cosmic-ray neutron rover near Waco, NE USA. We mapped soil moisture of 144 quarter-section fields (a mix of maize, soybean, and natural areas) each week during the 2014 growing season (May to September). The 12 by 12 km study domain also contained three stationary cosmic-ray neutron probes for independent validation of the rover surveys. Basic statistical analysis of the domain indicated a strong relationship between the mean and variance of soil moisture at several averaging scales; the relationships between the mean and higher-order moments were not significant. Scaling analysis indicated strong power-law behavior between the variance of soil moisture and averaging area, with the slope of the power-law function depending only minimally on mean soil moisture. In addition, we combined the data from the three stationary cosmic-ray neutron probes and the mobile surveys using linear regression to derive a daily soil moisture product at 1, 3, and 12 km spatial resolutions for the entire growing season. The statistical relationships derived from the rover dataset offer a novel set of observations that will be useful in: 1) calibrating and validating land surface models, 2) calibrating and validating crop models, 3) estimating soil moisture covariances for statistical downscaling of remote sensing products such as SMOS and SMAP, and 4) providing daily center-pivot-scale mean soil moisture data for optimal irrigation timing and volume.
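A minimal sketch of the variance-area scaling fit described above, on synthetic placeholder data rather than the Nebraska rover observations:

```python
import numpy as np

# Fit a power law between soil-moisture variance and averaging area by
# ordinary least squares in log-log space. Areas, the -0.4 exponent, and
# the noise level are illustrative assumptions.
areas = np.array([0.25, 1.0, 4.0, 9.0, 36.0, 144.0])   # averaging areas (km^2)
rng = np.random.default_rng(2)
variances = 0.02 * areas ** -0.4 * np.exp(0.05 * rng.standard_normal(6))

slope, intercept = np.polyfit(np.log(areas), np.log(variances), 1)
print(f"variance ~ area^{slope:.2f} (prefactor {np.exp(intercept):.4f})")
```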
The Higher Education System in Israel: Statistical Abstract and Analysis.
ERIC Educational Resources Information Center
Herskovic, Shlomo
This edition of a statistical abstract published every few years on the higher education system in Israel presents the most recent data available through 1990-91. The data were gathered through the cooperation of the Central Bureau of Statistics and institutions of higher education. Chapter 1 presents a summary of principal findings covering the…
NASA Astrophysics Data System (ADS)
Tien, Hai Minh; Le, Kien Anh; Le, Phung Thi Kim
2017-09-01
Biohydrogen is a sustainable energy resource due to its potentially higher efficiency of conversion to usable power, its high energy efficiency, and its non-polluting nature. In this work, experiments were carried out to demonstrate the feasibility of generating biohydrogen from cassava starch and to identify the influential factors and optimum conditions. Experimental design was used to investigate the effect of operating temperature (37-43 °C), pH (6-7), and inoculum ratio (6-10%) on the hydrogen yield, the COD reduction, and the ratio of the volume of hydrogen produced to the COD reduction. Statistical analysis of the experiments indicated that the main effects of temperature, pH and inoculum ratio were significant for the fermentation yield, while the interaction effects between them were not. The central composite design showed that the polynomial regression models were in good agreement with the experimental results. These results will be applied to enhance the treatment of cassava starch processing wastewater.
The efficacy of adult christian support groups in coping with the death of a significant loved one.
Goodman, Herbert; Stone, Mark H
2009-09-01
Psychologists sometimes minimize important resources such as religion and spiritual beliefs for coping with bereavement. The alienation of therapeutic psychology from religious values contrasts with professional and public interest in religious experience and commitment. A more supportive viewpoint has come about partly as a result of recognizing important values that clinicians have found absent in many of their clients. Until spiritual belief systems become integrated into the work of clinicians, clients may not be able to cope with loss in a fully integrated way. The key finding of this study was that individuals who participated in Christian and secular support groups showed no statistically significant difference in their mean endorsement of negative criteria on the BHS, and no statistically significant difference in their mean endorsement of positive criteria on the RCOPE; thus, a Christian-oriented approach was no less effective than a psychologically oriented one. In both groups, clients frequently identified a spiritual connection to a specific or generalized higher power, which they credited with facilitating the management of their coping.
Hughes, Alec; Huang, Yuexi; Schwartz, Michael L; Hynynen, Kullervo
2018-05-14
To analyze clinical data indicating a reduction in the induced energy-temperature efficiency relationship during transcranial focused ultrasound (FUS) Essential Tremor (ET) thalamotomy treatments at higher acoustic powers, establish its relationship with the spatial distribution of the focal temperature elevation, and explore its cause. A retrospective observational study of patients (n = 19) treated between July 2015 and August 2016 for ET by FUS thalamotomy was performed. These data were analyzed to compare the relationships between the applied power, the applied energy, the resultant peak temperature achieved in the brain, and the dispersion of the focal volume. Full ethics approval was received and all patients provided signed informed consent forms before the initiation of the study. Computer simulations, animal experiments, and clinical system tests were performed to determine the effects of skull heating, changes in brain properties, and transducer acoustic output, respectively. All animal procedures were approved by the Animal Care and Use Committee and conformed to the guidelines set out by the Canadian Council on Animal Care. MATLAB was used to perform statistical analysis. The reduction in the energy efficiency relationship during treatment correlates with the increase in size of the focal volume at higher sonication powers. A linear relationship exists showing that a decrease in treatment efficiency correlates positively with an increase in the focal size over the course of treatment (P < 0.01), supporting the hypothesis of transient skull and tissue heating causing acoustic aberrations leading to a decrease in efficiency. Changes in thermal conductivity, perfusion, and absorption rates in the brain, as well as in ultrasound transducer acoustic output levels, were found to have minimal effects on the observed reduction in efficiency. The reduction in energy-temperature efficiency during high-power FUS treatments correlated with observed increases in the size of the focal volume and is likely caused by transient changes in the tissue and skull during heating. © 2018 American Association of Physicists in Medicine.
Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir
2010-07-15
With the increasing popularity of using electroencephalography (EEG) to reveal the treatment effect in drug development clinical trials, the vast volume and complex nature of EEG data compose an intriguing, but challenging, topic. In this paper the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions. Copyright 2010 Elsevier B.V. All rights reserved.
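A minimal sketch of spatial smoothing with a low-degree spherical-harmonic basis, in the spirit of the SPHARM approach described above; the electrode coordinates, degree cutoff, and data are synthetic assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.special import sph_harm

# Project one time point of multichannel scalp data onto a low-degree
# spherical-harmonic basis by least squares, then evaluate the smoothed
# field at the channels. Real EEG work would use actual montage angles.
rng = np.random.default_rng(3)
n_chan, l_max = 64, 3
theta = rng.uniform(0, 2 * np.pi, n_chan)      # azimuth of each electrode
phi = rng.uniform(0, np.pi / 2, n_chan)        # polar angle (upper hemisphere)
data = rng.standard_normal(n_chan)             # placeholder channel values

# Real-valued basis from the real/imaginary parts of Y_l^m, degrees 0..l_max.
cols = []
for l in range(l_max + 1):
    for m in range(-l, l + 1):
        y = sph_harm(m, l, theta, phi)
        cols.append(y.real if m >= 0 else y.imag)
basis = np.column_stack(cols)

coef, *_ = np.linalg.lstsq(basis, data, rcond=None)  # least-squares projection
smoothed = basis @ coef                               # spatially smoothed values
print(f"{basis.shape[1]} basis functions, residual variance ratio "
      f"{np.var(data - smoothed) / np.var(data):.2f}")
```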
Analysis of Noise Mechanisms in Cell-Size Control.
Modi, Saurabh; Vargas-Garcia, Cesar Augusto; Ghusinga, Khem Raj; Singh, Abhyudai
2017-06-06
At the single-cell level, noise arises from multiple sources, such as inherent stochasticity of biomolecular processes, random partitioning of resources at division, and fluctuations in cellular growth rates. How these diverse noise mechanisms combine to drive variations in cell size within an isoclonal population is not well understood. Here, we investigate the contributions of different noise sources in well-known paradigms of cell-size control, such as adder (division occurs after adding a fixed size from birth), sizer (division occurs after reaching a size threshold), and timer (division occurs after a fixed time from birth). Analysis reveals that variation in cell size is most sensitive to errors in partitioning of volume among daughter cells, and not surprisingly, this process is well regulated among microbes. Moreover, depending on the dominant noise mechanism, different size-control strategies (or a combination of them) provide efficient buffering of size variations. We further explore mixer models of size control, where a timer phase precedes/follows an adder, as has been proposed in Caulobacter crescentus. Although mixing a timer and an adder can sometimes attenuate size variations, it invariably leads to higher-order moments growing unboundedly over time. This results in a power-law distribution for the cell size, with an exponent that depends inversely on the noise in the timer phase. Consistent with theory, we find evidence of power-law statistics in the tail of C. crescentus cell-size distribution, although there is a discrepancy between the observed power-law exponent and that predicted from the noise parameters. The discrepancy, however, is removed after data reveal that the size added by individual newborns in the adder phase itself exhibits power-law statistics. Taken together, this study provides key insights into the role of noise mechanisms in size homeostasis, and suggests an inextricable link between timer-based models of size control and heavy-tailed cell-size distributions. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
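A minimal toy simulation of the adder strategy with noisy partitioning, illustrating the sensitivity of cell-size variation to partitioning errors noted above; all parameter values are assumptions, not fitted to C. crescentus data.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_adder(n_gen=20000, delta=1.0, cv_add=0.10, cv_part=0.05):
    """Follow one lineage: divide after adding a noisy fixed increment,
    partition volume to a daughter with noisy fraction."""
    size, births = 1.0, []
    for _ in range(n_gen):
        added = delta * (1 + cv_add * rng.standard_normal())  # noisy added size
        frac = 0.5 + cv_part * rng.standard_normal()          # noisy partitioning
        size = (size + added) * np.clip(frac, 0.05, 0.95)
        births.append(size)
    return np.array(births[1000:])                            # drop transient

for cv_part in (0.02, 0.05, 0.10):
    b = simulate_adder(cv_part=cv_part)
    print(f"partitioning CV {cv_part:.2f} -> birth-size CV {b.std() / b.mean():.3f}")
```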
Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek
2012-01-01
As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have lower power than the corresponding single-variant simulation results, most probably due to our specification of multi-variant SNP correlation values. In conclusion, our LTTae,NGS addresses two key challenges with NGS disease studies; first, it allows for differential misclassification when computing the statistic; and second, it addresses the multiple-testing issue in that there is a multi-variant form of the statistic that has only one degree of freedom, and provides a single p value, no matter how many loci. Copyright © 2013 S. Karger AG, Basel.
Ramanathan, Arvind; Savol, Andrej J.; Agarwal, Pratul K.; Chennubhotla, Chakra S.
2012-01-01
Biomolecular simulations at millisecond and longer timescales can provide vital insights into functional mechanisms. Since post-simulation analyses of such large trajectory datasets can be a limiting factor in obtaining biological insights, there is an emerging need to identify key dynamical events and relate these events to biological function online, that is, as simulations are progressing. Recently, we introduced a novel computational technique, quasi-anharmonic analysis (QAA) (PLoS One 6(1): e15827), for partitioning the conformational landscape into a hierarchy of functionally relevant sub-states. The unique capabilities of QAA are enabled by exploiting anharmonicity in the form of fourth-order statistics for characterizing atomic fluctuations. In this paper, we extend QAA for analyzing long-timescale simulations online. In particular, we present HOST4MD - a higher-order statistical toolbox for molecular dynamics simulations, which (1) identifies key dynamical events as simulations are in progress, (2) explores potential sub-states and (3) identifies conformational transitions that enable the protein to access those sub-states. We demonstrate HOST4MD on microsecond-timescale simulations of the enzyme adenylate kinase in its apo state. HOST4MD identifies several conformational events in these simulations, revealing how the intrinsic coupling between the three subdomains (LID, CORE and NMP) changes during the simulations. Further, it also identifies an inherent asymmetry in the opening/closing of the two binding sites. We anticipate HOST4MD will provide a powerful and extensible framework for detecting biophysically relevant conformational coordinates from long-timescale simulations. PMID:22733562
A Bayesian pick-the-winner design in a randomized phase II clinical trial.
Chen, Dung-Tsa; Huang, Po-Yu; Lin, Hui-Yi; Chiappori, Alberto A; Gabrilovich, Dmitry I; Haura, Eric B; Antonia, Scott J; Gray, Jhanelle E
2017-10-24
Many phase II clinical trials evaluate unique experimental drugs/combinations through multi-arm design to expedite the screening process (early termination of ineffective drugs) and to identify the most effective drug (pick the winner) to warrant a phase III trial. Various statistical approaches have been developed for the pick-the-winner design but have been criticized for lack of objective comparison among the drug agents. We developed a Bayesian pick-the-winner design by integrating a Bayesian posterior probability with Simon two-stage design in a randomized two-arm clinical trial. The Bayesian posterior probability, as the rule to pick the winner, is defined as probability of the response rate in one arm higher than in the other arm. The posterior probability aims to determine the winner when both arms pass the second stage of the Simon two-stage design. When both arms are competitive (i.e., both passing the second stage), the Bayesian posterior probability performs better to correctly identify the winner compared with the Fisher exact test in the simulation study. In comparison to a standard two-arm randomized design, the Bayesian pick-the-winner design has a higher power to determine a clear winner. In application to two studies, the approach is able to perform statistical comparison of two treatment arms and provides a winner probability (Bayesian posterior probability) to statistically justify the winning arm. We developed an integrated design that utilizes Bayesian posterior probability, Simon two-stage design, and randomization into a unique setting. It gives objective comparisons between the arms to determine the winner.
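A minimal sketch of the winner-selection rule described above: with Beta priors, the posterior probability that one arm's response rate exceeds the other's can be estimated by Monte Carlo. The counts and priors below are illustrative, not data from the cited trials.

```python
import numpy as np

rng = np.random.default_rng(5)

def prob_a_beats_b(resp_a, n_a, resp_b, n_b, a0=1.0, b0=1.0, draws=100000):
    """Monte Carlo estimate of P(p_A > p_B | data) under Beta(a0, b0) priors."""
    pa = rng.beta(a0 + resp_a, b0 + n_a - resp_a, draws)  # posterior of arm A rate
    pb = rng.beta(a0 + resp_b, b0 + n_b - resp_b, draws)  # posterior of arm B rate
    return np.mean(pa > pb)

# Example: both arms pass the second stage of a Simon design; pick the winner.
print(f"P(arm A > arm B) = {prob_a_beats_b(14, 30, 10, 30):.3f}")
```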
Quantitative analysis of the text and graphic content in ophthalmic slide presentations.
Ing, Edsel; Celo, Erdit; Ing, Royce; Weisbrod, Lawrence; Ing, Mercedes
2017-04-01
To determine the characteristics of ophthalmic digital slide presentations. Retrospective quantitative analysis. Slide presentations from a 2015 Canadian primary eye care conference were analyzed for their duration, character and word count, font size, words per minute (wpm), lines per slide, words per slide, slides per minute (spm), text density product (wpm × spm), proportion of graphic content, and Flesch Reading Ease (FRE) score using Microsoft PowerPoint and Word. The median audience evaluation score for the lectures was used to dichotomize the higher scoring lectures (HSL) from the lower scoring lectures (LSL). A priori we hypothesized that there would be a difference in the wpm, spm, text density product, and FRE score between HSL and LSL. Wilcoxon rank-sum tests with Bonferroni correction were utilized. The 17 lectures had medians of 2.5 spm, 20.3 words per slide, 5.0 lines per slide, 28-point sans serif font, 36% graphic content, and text density product of 136.4 words × slides/minute². Although not statistically significant, the HSL had more wpm, fewer words per slide, more graphics per slide, greater text density, and higher FRE score than LSL. There was a statistically significant difference in the spm of the HSL (3.1 ± 1.0) versus the LSL (2.2 ± 1.0) at p = 0.0124. All presenters showed more than 1 slide per minute. The HSL showed more spm than the LSL. The descriptive statistics from this study may aid in the preparation of slides used for teaching and conferences. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
Hamed, Moath; Schraml, Frank; Wilson, Jeffrey; Galvin, James; Sabbagh, Marwan N
2018-01-01
To determine whether occipital and cingulate hypometabolism is being under-reported or missed on 18-fluorodeoxyglucose positron emission tomography (FDG-PET) CT scans in patients with Dementia with Lewy Bodies (DLB). Recent studies have reported higher sensitivity and specificity for occipital and cingulate hypometabolism on FDG-PET of DLB patients. This retrospective chart review looked at regions of interest (ROI's) in FDG-PET CT scan reports in 35 consecutive patients with a clinical diagnosis of probable, possible, or definite DLB as defined by the latest DLB Consortium Report. ROI's consisting of glucose hypometabolism in frontal, parietal, temporal, occipital, and cingulate areas were tabulated and charted separately by the authors from the reports. A blinded Nuclear medicine physician read the images independently and marked ROI's separately. A Cohen's Kappa coefficient statistic was calculated to determine agreement between the reports and the blinded reads. On the radiology reports, 25.71% and 17.14% of patients reported occipital and cingulate hypometabolism respectively. Independent reads demonstrated significant disagreement with the proportion of occipital and cingulate hypometabolism being reported on initial reads: 91.43% and 85.71% respectively. Cohen's Kappa statistic determinations demonstrated significant agreement only with parietal hypometabolism (p<0.05). Occipital and cingulate hypometabolism is under-reported and missed frequently on clinical interpretations of FDG-PET scans of patients with DLB, but the frequency of hypometabolism is even higher than previously reported. Further studies with more statistical power and receiver operating characteristic analyses are needed to delineate the sensitivity and specificity of these in vivo biomarkers.
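A minimal sketch of the agreement analysis described above; the 2×2 counts are illustrative placeholders, not the study's data.

```python
import numpy as np

def cohens_kappa(table):
    """Cohen's kappa for a square agreement table (raters in rows/columns)."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    po = np.trace(table) / n                              # observed agreement
    pe = (table.sum(axis=0) * table.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# rows: report notes hypometabolism yes/no; cols: blinded read yes/no.
# Placeholder counts for one region of interest.
occipital = [[8, 1],
             [24, 2]]
print(f"kappa = {cohens_kappa(occipital):.2f}")
```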
NASA Astrophysics Data System (ADS)
Law, Yan Nei; Lieng, Monica Keiko; Li, Jingmei; Khoo, David Aik-Aun
2014-03-01
Breast cancer is the most common cancer and second leading cause of cancer death among women in the US. The relative survival rate is lower among women with a more advanced stage at diagnosis. Early detection through screening is vital. Mammography is the most widely used and only proven screening method for reliably and effectively detecting abnormal breast tissues. In particular, mammographic density is one of the strongest breast cancer risk factors, after age and gender, and can be used to assess the future risk of disease before individuals become symptomatic. A reliable method for automatic density assessment would be beneficial and could assist radiologists in the evaluation of mammograms. To address this problem, we propose a density classification method which uses statistical features from different parts of the breast. Our method is composed of three parts: breast region identification, feature extraction and building ensemble classifiers for density assessment. It explores the potential of the features extracted from second and higher order statistical information for mammographic density classification. We further investigate the registration of bilateral pairs and time-series of mammograms. The experimental results on 322 mammograms demonstrate that (1) a classifier using features from dense regions has higher discriminative power than a classifier using only features from the whole breast region; (2) these high-order features can be effectively combined to boost the classification accuracy; (3) a classifier using these statistical features from dense regions achieves 75% accuracy, which is a significant improvement from 70% accuracy obtained by the existing approaches.
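A minimal sketch of second-order (co-occurrence) texture features of the kind referred to above, using scikit-image's GLCM utilities (spelled greycomatrix/greycoprops in releases before 0.19); the image patch is a random placeholder, not a mammogram, and the distances/angles are assumed choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(6)
patch = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)  # stand-in dense region

# Gray-level co-occurrence matrix over two distances and two directions.
glcm = graycomatrix(patch, distances=[1, 3], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).ravel())
```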
Performance of Reclassification Statistics in Comparing Risk Prediction Models
Paynter, Nina P.
2012-01-01
Concerns have been raised about the use of traditional measures of model fit in evaluating risk prediction models for clinical use, and reclassification tables have been suggested as an alternative means of assessing the clinical utility of a model. Several measures based on the table have been proposed, including the reclassification calibration (RC) statistic, the net reclassification improvement (NRI), and the integrated discrimination improvement (IDI), but the performance of these in practical settings has not been fully examined. We used simulations to estimate the type I error and power for these statistics in a number of scenarios, as well as the impact of the number and type of categories, when adding a new marker to an established or reference model. The type I error was found to be reasonable in most settings, and power was highest for the IDI, which was similar to the test of association. The relative power of the RC statistic, a test of calibration, and the NRI, a test of discrimination, varied depending on the model assumptions. These tools provide unique but complementary information. PMID:21294152
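A minimal sketch of the categorical net reclassification improvement discussed above, on simulated placeholder data with assumed risk cutoffs:

```python
import numpy as np

def categorical_nri(y, risk_old, risk_new, cutoffs=(0.05, 0.20)):
    """Categorical NRI = net upward movement of events + net downward
    movement of non-events across the risk categories."""
    cat_old = np.digitize(risk_old, cutoffs)
    cat_new = np.digitize(risk_new, cutoffs)
    up, down = cat_new > cat_old, cat_new < cat_old
    events, nonevents = y == 1, y == 0
    nri_events = up[events].mean() - down[events].mean()
    nri_nonevents = down[nonevents].mean() - up[nonevents].mean()
    return nri_events + nri_nonevents

rng = np.random.default_rng(7)
y = rng.binomial(1, 0.1, 2000)
risk_old = np.clip(0.10 + 0.05 * rng.standard_normal(2000), 0.001, 0.999)
risk_new = np.clip(risk_old + 0.03 * (y - 0.1)             # new marker adds signal
                   + 0.02 * rng.standard_normal(2000), 0.001, 0.999)
print(f"NRI = {categorical_nri(y, risk_old, risk_new):.3f}")
```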
An Adaptive Association Test for Multiple Phenotypes with GWAS Summary Statistics.
Kim, Junghi; Bai, Yun; Pan, Wei
2015-12-01
We study the problem of testing for single marker-multiple phenotype associations based on genome-wide association study (GWAS) summary statistics without access to individual-level genotype and phenotype data. For most published GWASs, because obtaining summary data is substantially easier than accessing individual-level phenotype and genotype data, while often multiple correlated traits have been collected, the problem studied here has become increasingly important. We propose a powerful adaptive test and compare its performance with some existing tests. We illustrate its applications to analyses of a meta-analyzed GWAS dataset with three blood lipid traits and another with sex-stratified anthropometric traits, and further demonstrate its potential power gain over some existing methods through realistic simulation studies. We start from the situation with only one set of (possibly meta-analyzed) genome-wide summary statistics, then extend the method to meta-analysis of multiple sets of genome-wide summary statistics, each from one GWAS. We expect the proposed test to be useful in practice as more powerful than or complementary to existing methods. © 2015 WILEY PERIODICALS, INC.
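A minimal sketch of an adaptive sum-of-powered-score test in the spirit of the approach described above, using only summary Z-statistics and an assumed trait-correlation matrix; this follows the general adaptive-test idea rather than the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(14)

def aspu(z, R, powers=(1, 2, 4, 8), n_sim=20000):
    """Adaptive test: combine |sum(z^gamma)| statistics over several powers,
    calibrating the minimum p-value against a simulated multivariate
    normal null with correlation R."""
    k = len(z)
    draws = rng.multivariate_normal(np.zeros(k), R, size=n_sim)
    p_obs, null_p = [], []
    for g in powers:
        t_obs = abs(np.sum(z ** g))
        t_null = np.abs(np.sum(draws ** g, axis=1))
        p_obs.append((1 + np.sum(t_null >= t_obs)) / (1 + n_sim))
        ranks = np.argsort(np.argsort(-t_null))       # 0 = most extreme draw
        null_p.append((ranks + 1) / n_sim)            # null p-values via ranks
    p_min_obs = min(p_obs)
    p_min_null = np.min(np.column_stack(null_p), axis=1)
    return (1 + np.sum(p_min_null <= p_min_obs)) / (1 + n_sim)

R = 0.4 + 0.6 * np.eye(3)               # assumed trait-correlation matrix
z = np.array([2.2, 1.8, 2.5])           # summary Z-scores for one SNP
print(f"adaptive test p-value: {aspu(z, R):.4f}")
```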
NASA Astrophysics Data System (ADS)
Torrado, Jesús; Hu, Bin; Achúcarro, Ana
2017-10-01
We update the search for features in the cosmic microwave background (CMB) power spectrum due to transient reductions in the speed of sound, using Planck 2015 CMB temperature and polarization data. We enlarge the parameter space to much higher oscillatory frequencies of the feature, and define a robust prior independent of the ansatz for the reduction, guaranteed to reproduce the assumptions of the theoretical model. This prior exhausts the regime in which features coming from a Gaussian reduction are easily distinguishable from the baseline cosmology. We find a fit to the −/+ structure at ℓ ≈ 20-40 in the Planck TT power spectrum, as well as features spanning higher ℓ's (ℓ ≈ 100-1500). None of those fits is statistically significant, either in terms of their improvement of the likelihood or in terms of the Bayes ratio. For the higher-ℓ ones, their oscillatory frequency (and their amplitude to a lesser extent) is tightly constrained, so they can be considered robust, falsifiable predictions for their correlated features in the CMB bispectrum. We compute said correlated features, and assess their signal-to-noise and correlation with the secondary bispectrum of the correlation between the gravitational lensing of the CMB and the integrated Sachs-Wolfe effect. We compare our findings to the shape-agnostic oscillatory template tested in Planck 2015, and we comment on some tantalizing coincidences with some of the traits described in Planck's 2015 bispectrum data.
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that restricts attention to cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses. Postoperative refractive errors were measured in the fifth week. More accurate calculation was defined by significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia; the distributions of the errors were examined to ensure that any higher accuracy was clinically as well as statistically significant. The MAEs and RMSEs were smaller for PCI (0.5106 diopter (D) and 0.6037 D) than for IUB (0.7000 D and 0.8062 D). The higher accuracy came principally from negative errors, i.e., myopia: for negative errors the MAEs were 0.7955 D (IUB) versus 0.5185 D (PCI), and the corresponding RMSEs were 0.8562 D versus 0.5853 D, with significant differences. 72.34% of PCI errors fell within the clinically accepted range of ±0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant both statistically and clinically, meaning that lens implantation based on PCI assignments could improve postoperative outcomes over those based on IUB assignments.
A Novel Analysis Of The Connection Between Indian Monsoon Rainfall And Solar Activity
NASA Astrophysics Data System (ADS)
Bhattacharyya, S.; Narasimha, R.
2005-12-01
The existence of possible correlations between the solar cycle period, as extracted from the yearly means of sunspot numbers, and any periodicities present in Indian monsoon rainfall has been addressed using wavelet analysis. The wavelet transform coefficient maps of the sunspot-number time series and those of the homogeneous Indian monsoon rainfall annual time series reveal striking similarities, especially around the 11-year period. A novel method to analyse and quantify this similarity by devising statistical schemes is suggested in this paper. The wavelet transform coefficient maxima at the 11-year period for the sunspot numbers and the monsoon rainfall have each been modelled as a point process in time, and a statistical scheme for identifying a trend or dependence between the two processes has been devised. A regression analysis of parameters in these processes reveals a nearly linear trend with small but systematic deviations from the regressed line. Suitable function models for these deviations have been obtained through an unconstrained error minimisation scheme. These models provide an excellent fit to the time series of the given wavelet transform coefficient maxima obtained from actual data. Statistical significance tests on these deviations suggest with 99% confidence that the deviations are sample fluctuations drawn from normal distributions. In fact, our earlier studies (see Bhattacharyya and Narasimha, 2005, Geophys. Res. Lett., Vol. 32, No. 5) revealed that average rainfall is higher during periods of greater solar activity in all cases, at confidence levels varying from 75% to 99%, being 95% or greater in 3 out of 7 of them. Analysis using standard wavelet techniques reveals higher power in the 8-16 y band during the higher solar activity period in 6 of the 7 rainfall time series, at confidence levels exceeding 99.99%. Furthermore, a comparison between the wavelet cross spectra of solar activity with rainfall and with noise (including noise simulating the rainfall spectrum and probability distribution) revealed that, over the two test periods of high and low solar activity respectively, the average cross power of the solar activity index with rainfall exceeds that with the noise at z-test confidence levels exceeding 99.99% over period bands covering the 11.6 y sunspot cycle (see Bhattacharyya and Narasimha, SORCE 2005, 14-16th September, Durango, Colorado, USA). These results provide strong evidence for connections between Indian rainfall and solar activity. The present study additionally reveals the presence of subharmonics of the solar cycle period in the monsoon rainfall time series, together with information on their phase relationships.
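A minimal sketch of the wavelet step underlying such analyses: a continuous wavelet transform of an annual series and extraction of power in the 8-16 y band. The series below is synthetic (an 11-year cycle plus noise), standing in for sunspot or rainfall data, and the Morlet wavelet and scale grid are assumed choices.

```python
import numpy as np
import pywt

rng = np.random.default_rng(13)
years = np.arange(1871, 2001)
series = (np.sin(2 * np.pi * (years - years[0]) / 11.0)
          + 0.5 * rng.standard_normal(len(years)))

scales = np.arange(2, 64)
coefs, freqs = pywt.cwt(series, scales, "morl", sampling_period=1.0)  # yearly data
periods = 1.0 / freqs
band = (periods > 8) & (periods < 16)               # the 8-16 y band discussed above
band_power = np.mean(np.abs(coefs[band]) ** 2, axis=0)
print("peak 8-16 y band power at year", years[np.argmax(band_power)])
```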
Kwak, Sang-Won; Moon, Young-Mi; Yoo, Yeon-Jee; Baek, Seung-Ho; Lee, WooCheol; Kim, Hyeon-Cheol
2014-11-01
The purpose of this study was to compare the cutting efficiency of a newly developed microprojection tip and a diamond-coated tip under two different engine powers. The apical 3 mm of each root was resected, and root-end preparation was performed with upward and downward pressure using one of the ultrasonic tips, KIS-1D (Obtura Spartan) or JT-5B (B&L Biotech Ltd.). The ultrasonic engine was set to power-1 or power-4. Forty teeth were randomly divided into four groups: K1 (KIS-1D / power-1), J1 (JT-5B / power-1), K4 (KIS-1D / power-4), and J4 (JT-5B / power-4). The total time required for root-end preparation was recorded. All teeth were resected, and the apical parts were evaluated for the number and length of cracks using a confocal scanning microscope. The size of the root-end cavity and the width of the remaining dentin were recorded. The data were statistically analyzed using two-way analysis of variance and a Mann-Whitney test. There was no significant difference in preparation time between the instrument groups, but the power-4 setting reduced preparation time for both instruments (p < 0.05). The K4 and J4 groups showed significantly more and longer cracks, irrespective of the instrument. There was no significant difference in the remaining dentin thickness or any of the other parameters after preparation. Ultrasonic tips with microprojections could substitute for conventional diamond-coated ultrasonic tips with the same clinical efficiency.
Monterde, David; Vela, Emili; Clèries, Montse; García Eroles, Luis; Pérez Sust, Pol
2018-02-09
To compare the performance, in terms of goodness of fit and explanatory power, of two morbidity groupers in primary care (PC): adjusted morbidity groups (AMG) and clinical risk groups (CRG). Cross-sectional study. PC in the Catalan Institute of Health (CIH), Catalonia, Spain. Population assigned to primary care centres of the CIH in 2014. Three indicators of interest are analyzed: urgent hospitalizations, number of visits, and pharmacy spending. A stratified analysis by centre is applied, adjusting generalized linear models on age, sex and morbidity grouper to explain each of the three variables of interest. The statistical measures used to analyze the performance of the different models are the Akaike index, the Bayes index, and the pseudo-variability explained by the change in deviance. The results show that in primary care the explanatory power of the AMGs is higher than that offered by the CRGs, especially for visits and pharmacy spending. The performance of AMGs in CIH primary care is higher than that shown by the CRGs. Copyright © 2018 The Authors. Published by Elsevier España, S.L.U. All rights reserved.
Presotto, L; Bettinardi, V; De Bernardi, E; Belli, M L; Cattaneo, G M; Broggi, S; Fiorino, C
2018-06-01
The analysis of PET images by textural features, also known as radiomics, shows promising results in tumor characterization. However, radiomic metrics (RMs) analysis is currently not standardized and the impact of the whole processing chain still needs deep investigation. We characterized the impact on RM values of: i) two discretization methods, ii) acquisition statistics, and iii) reconstruction algorithm. The influence of tumor volume and standardized-uptake-value (SUV) on RM was also investigated. The Chang-Gung-Image-Texture-Analysis (CGITA) software was used to calculate 39 RMs using phantom data. Thirty noise realizations were acquired to measure statistical effect size indicators for each RM. The parameter η² (fraction of variance explained by the nuisance factor) was used to assess the effect of categorical variables, considering η² < 20% and 20% < η² < 40% as representative of a "negligible" and a "small" dependence respectively. The Cohen's d was used as discriminatory power to quantify the separation of two distributions. We found the discretization method based on fixed-bin-number (FBN) to outperform the one based on fixed-bin-size in units of SUV (FBS), as the latter shows a higher SUV dependence, with 30 RMs showing η² > 20%. FBN was also less influenced by the acquisition and reconstruction setup: with FBN 37 RMs had η² < 40%, but only 20 with FBS. Most RMs showed a good discriminatory power among heterogeneous PET signals (for FBN: 29 out of 39 RMs with d > 3). For RMs analysis, FBN should be preferred. A group of 21 RMs was suggested for PET radiomics analysis. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
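A minimal sketch of the two discretization schemes compared above; the bin number, bin size, and SUV samples are common illustrative choices, not necessarily those used in the study.

```python
import numpy as np

def discretize_fbn(suv, n_bins=64):
    """Fixed bin number: rescale each lesion's own SUV range into n_bins."""
    lo, hi = suv.min(), suv.max()
    idx = np.floor(n_bins * (suv - lo) / (hi - lo + 1e-12)).astype(int)
    return np.clip(idx, 0, n_bins - 1) + 1

def discretize_fbs(suv, bin_size=0.25, suv_min=0.0):
    """Fixed bin size: absolute SUV-wide bins of width bin_size."""
    return np.floor((suv - suv_min) / bin_size).astype(int) + 1

rng = np.random.default_rng(8)
suv = rng.gamma(4.0, 1.5, size=1000)            # placeholder tumor SUV samples
print("FBN bins used:", np.unique(discretize_fbn(suv)).size)
print("FBS bins used:", np.unique(discretize_fbs(suv)).size)
```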
NASA Astrophysics Data System (ADS)
Glatter, Otto; Fuchs, Heribert; Jorde, Christian; Eigner, Wolf-Dieter
1987-03-01
The microprocessor of an 8-bit PC system is used as a central control unit for the acquisition and evaluation of data from quasi-elastic light scattering experiments. Data are sampled with a width of 8 bits under control of the CPU. This limits the minimum sample time to 20 μs. Shorter sample times would need a direct memory access channel. The 8-bit CPU can address a 64-kbyte RAM without additional paging. Up to 49 000 sample points can be measured without interruption. After storage, a correlation function or a power spectrum can be calculated from such a primary data set. Furthermore access is provided to the primary data for stability control, statistical tests, and for comparison of different evaluation methods for the same experiment. A detailed analysis of the signal (histogram) and of the effect of overflows is possible and shows that the number of pulses but not the number of overflows determines the error in the result. The correlation function can be computed with reasonable accuracy from data with a mean pulse rate greater than one, the power spectrum needs a three times higher pulse rate for convergence. The statistical accuracy of the results from 49 000 sample points is of the order of a few percent. Additional averages are necessary to improve their quality. The hardware extensions for the PC system are inexpensive. The main disadvantage of the present system is the high minimum sampling time of 20 μs and the fact that the correlogram or the power spectrum cannot be computed on-line as it can be done with hardware correlators or spectrum analyzers. These shortcomings and the storage size restrictions can be removed with a faster 16/32-bit CPU.
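A minimal sketch of the off-line processing described above: an autocorrelation function and power spectrum computed from a stored primary data set via the Wiener-Khinchin relation. The count trace is simulated, standing in for real detector data.

```python
import numpy as np

rng = np.random.default_rng(9)
n = 49000                                        # sample points, as in the abstract
counts = rng.poisson(3.0, n).astype(float)       # placeholder photon counts

x = counts - counts.mean()
spectrum = np.abs(np.fft.rfft(x)) ** 2 / n       # periodogram (power spectrum)

# Zero-pad to 2n so the circular correlation equals the linear one,
# then invert the squared spectrum to get the autocorrelation.
acf_full = np.fft.irfft(np.abs(np.fft.rfft(x, 2 * n)) ** 2)[:n]
acf = acf_full / acf_full[0]                     # normalize to lag 0
print("first correlation lags:", np.round(acf[:5], 3))
```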
Power Laws and Market Crashes ---Empirical Laws on Bursting Bubbles---
NASA Astrophysics Data System (ADS)
Kaizoji, T.
In this paper, we quantitatively investigate the statistical properties of a statistical ensemble of stock prices. We selected 1200 stocks traded on the Tokyo Stock Exchange, and formed a statistical ensemble of daily stock prices for each trading day in the 3-year period from January 4, 1999 to December 28, 2001, corresponding to the forming of the internet bubble in Japan and its bursting in the Japanese stock market. We found that the tail of the complementary cumulative distribution function of the ensemble of stock prices at high values of the price is well described by a power-law distribution, P(S > x) ~ x^{-α}, with an exponent that moves in the range 1.09 < α < 1.27. Furthermore, we found that as the power-law exponent α approached unity, the bubble collapsed. This suggests that Zipf's law for stock prices is a sign that a bubble is going to burst.
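A minimal sketch of estimating the tail exponent α via the Hill estimator on the largest order statistics; the price sample is synthetic (Pareto with α = 1.2), standing in for a daily cross-section of 1200 stock prices.

```python
import numpy as np

rng = np.random.default_rng(10)
prices = (rng.pareto(1.2, 1200) + 1.0) * 100.0   # 1200 placeholder "stocks"

def hill_alpha(sample, k=120):
    """Hill estimator of the tail exponent using the k largest observations,
    with the (k+1)-th largest as the threshold."""
    srt = np.sort(sample)
    threshold = srt[-(k + 1)]
    return k / np.sum(np.log(srt[-k:] / threshold))

print(f"Hill estimate of alpha: {hill_alpha(prices):.2f}")
```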
Statistical power comparisons at 3T and 7T with a GO / NOGO task.
Torrisi, Salvatore; Chen, Gang; Glen, Daniel; Bandettini, Peter A; Baker, Chris I; Reynolds, Richard; Yen-Ting Liu, Jeffrey; Leshin, Joseph; Balderston, Nicholas; Grillon, Christian; Ernst, Monique
2018-07-15
The field of cognitive neuroscience is weighing evidence about whether to move from standard field strength to ultra-high field (UHF). The present study contributes to the evidence by comparing a cognitive neuroscience paradigm at 3 Tesla (3T) and 7 Tesla (7T). The goal was to test and demonstrate the practical effects of field strength on a standard GO/NOGO task using accessible preprocessing and analysis tools. Two independent matched healthy samples (N = 31 each) were analyzed at 3T and 7T. Results show gains at 7T in statistical strength, the detection of smaller effects and group-level power. With an increased availability of UHF scanners, these gains may be exploited by cognitive neuroscientists and other neuroimaging researchers to develop more efficient or comprehensive experimental designs and, given the same sample size, achieve greater statistical power at 7T. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Naylor, M.; Main, I. G.; Greenhough, J.; Bell, A. F.; McCloskey, J.
2009-04-01
The Sumatran Boxing Day earthquake and subsequent large events provide an opportunity to re-evaluate the statistical evidence for characteristic earthquake events in frequency-magnitude distributions. Our aims are to (i) improve intuition regarding the properties of samples drawn from power laws, (ii) illustrate using random samples how appropriate Poisson confidence intervals can both aid the eye and provide an appropriate statistical evaluation of data drawn from power-law distributions, and (iii) apply these confidence intervals to test for evidence of characteristic earthquakes in subduction-zone frequency-magnitude distributions. We find no need for a characteristic model to describe frequency magnitude distributions in any of the investigated subduction zones, including Sumatra, due to an emergent skew in residuals of power law count data at high magnitudes combined with a sample bias for examining large earthquakes as candidate characteristic events.
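A minimal sketch of the exact (Garwood) Poisson confidence intervals advocated above, applied to counts drawn from a synthetic Gutenberg-Richter law (b = 1) rather than a real catalogue:

```python
import numpy as np
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    """Exact (Garwood) two-sided confidence interval for a Poisson count k."""
    lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * k + 2) / 2
    return lo, hi

mags = np.arange(4.0, 8.5, 0.5)
expected = 10 ** (5.0 - 1.0 * mags)              # GR law: log10 N = a - b M
counts = np.random.default_rng(11).poisson(expected)

for m, k in zip(mags, counts):
    lo, hi = poisson_ci(k)
    print(f"M {m:.1f}: n = {k:5d}, 95% CI [{lo:8.1f}, {hi:8.1f}]")
```

Plotted over a frequency-magnitude distribution, such intervals show how much apparent "excess" at high magnitudes is consistent with pure power-law sampling noise.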
A κ-generalized statistical mechanics approach to income analysis
NASA Astrophysics Data System (ADS)
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
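For reference, a minimal sketch of the κ-generalized building block as it is commonly given in the κ-statistics literature; readers should consult the paper for the authors' exact parameterization and notation.

```latex
% kappa-exponential; it reduces to the ordinary exponential as kappa -> 0
\exp_\kappa(x) = \left(\sqrt{1+\kappa^{2}x^{2}} + \kappa x\right)^{1/\kappa},
\qquad \lim_{\kappa\to 0}\exp_\kappa(x) = e^{x}.

% kappa-generalized survival function for income X (alpha, beta > 0):
% exponential-like at low incomes, Pareto power law in the upper tail
P(X > x) = \exp_\kappa\!\left(-\beta x^{\alpha}\right)
\sim (2\beta\kappa)^{-1/\kappa}\, x^{-\alpha/\kappa}
\quad (x \to \infty).
```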
The statistics of primordial density fluctuations
NASA Astrophysics Data System (ADS)
Barrow, John D.; Coles, Peter
1990-05-01
The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with n not equal to 1, and this could be good news for large-scale structure.
NASA Technical Reports Server (NTRS)
Murphy, Kyle R.; Mann, Ian R.; Rae, I. Jonathan; Sibeck, David G.; Watt, Clare E. J.
2016-01-01
Wave-particle interactions play a crucial role in energetic particle dynamics in the Earth's radiation belts. However, the relative importance of different wave modes in these dynamics is poorly understood. Typically, this is assessed during geomagnetic storms using statistically averaged empirical wave models as a function of geomagnetic activity in advanced radiation belt simulations. However, statistical averages poorly characterize extreme events such as geomagnetic storms: storm-time ultralow frequency wave power is typically larger than that derived over a solar cycle, and Kp is a poor proxy for storm-time wave power.
A new statistical methodology predicting chip failure probability considering electromigration
NASA Astrophysics Data System (ADS)
Sun, Ted
In this research thesis, we present a new approach to analyzing chip reliability subject to electromigration (EM); the fundamental causes of EM and the EM phenomena that occur in different materials are also presented. This new approach exploits the statistical nature of EM failure in order to assess overall EM risk. It incorporates within-die temperature variations from the chip's temperature map, extracted by an Electronic Design Automation (EDA) tool, to estimate the failure probability of a design. Both power estimation and thermal analysis are performed in the EDA flow. We first used the traditional EM approach to analyze the design, which involves 6 metal and 5 via layers, with a single temperature across the entire chip. Next, we used the same traditional approach but with a realistic temperature map. The traditional EM analysis approach, the same approach coupled with a temperature map, and the comparison between the results with and without a temperature map are all presented in this research. The comparison confirms that using a temperature map yields a less pessimistic estimate of the chip's EM risk. Finally, we employed the statistical methodology we developed, considering a temperature map together with different use-condition voltages and frequencies, to estimate the overall failure probability of the chip. The statistical model accounts for scaling through the traditional Black equation and four major use conditions. The statistical results are consistent with our expectations: they confirm that the chip-level failure probability is higher (i) at higher use-condition frequencies for all use-condition voltages, and (ii) when a single temperature, rather than a temperature map, is considered across the chip. The thesis opens with an overall review of current design types, common flows, and the verification and reliability-checking steps used in the IC design industry. Furthermore, the important concept of "scripting automation," which underpins the integration of the diverse EDA tools used in this research, is described in detail with several examples, and my complete code is included in the appendix for reference. This structure is intended to give readers a thorough understanding of the research, from the automation of EDA tools to statistical data generation, from the nature of EM to the construction of the statistical model, and through the comparisons between the traditional and statistical EM analysis approaches.
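To make the statistical step concrete, here is a minimal sketch of a weakest-link chip failure probability built on Black's equation with lognormal segment lifetimes, a common statistical EM model; the technology parameters and the comparison below are hypothetical and are not the thesis's exact formulation.

```python
import numpy as np
from scipy.stats import lognorm

K_B = 8.617e-5  # Boltzmann constant in eV/K

def mttf_black(j, t_kelvin, a_const=17.0, n_exp=2.0, ea=0.9):
    """Black's equation MTTF = A * J**(-n) * exp(Ea / (k*T)).
    a_const, n_exp and ea are hypothetical technology parameters,
    scaled so that 2e6 A/cm^2 at 85 C gives roughly 20 (arbitrary units)."""
    return a_const * j ** (-n_exp) * np.exp(ea / (K_B * t_kelvin))

def chip_failure_prob(current_densities, temp_map, t_use, sigma=0.5):
    """Weakest-link failure probability at time t_use, assuming lognormal
    segment lifetimes whose median comes from Black's equation."""
    p_seg = [lognorm.cdf(t_use, sigma, scale=mttf_black(j, t))
             for j, t in zip(current_densities, temp_map)]
    return 1.0 - np.prod([1.0 - p for p in p_seg])

j = [2e6, 2e6, 2e6]  # A/cm^2 for three wire segments
print(chip_failure_prob(j, [358.0, 358.0, 358.0], t_use=10))  # uniform 85 C map
print(chip_failure_prob(j, [348.0, 358.0, 378.0], t_use=10))  # map with a hot spot
```

The hot segment dominates the chip-level probability, which is exactly why per-tile temperature information changes the risk estimate relative to a single chip-wide temperature.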
Teaching Principles of Linkage and Gene Mapping with the Tomato.
ERIC Educational Resources Information Center
Hawk, James A.; And Others
1980-01-01
A three-point linkage system in tomatoes is used to explain concepts of gene mapping, linkage, and statistical analysis. The system is designed to teach the effective use of statistics and the power of genetic analysis applied to the statistical analysis of phenotypic ratios. (Author/SA)
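The core computation in such a three-point exercise is short enough to show; the sketch below uses hypothetical testcross counts (uppercase = dominant allele, lowercase = recessive) and an assumed locus order A-B-C, not the tomato system's actual markers.

```python
# Hypothetical three-point testcross gamete counts
counts = {
    "ABC": 390, "abc": 374,   # parental classes
    "Abc": 60,  "aBC": 66,    # single crossovers between A and B
    "ABc": 48,  "abC": 52,    # single crossovers between B and C
    "AbC": 5,   "aBc": 5,     # double crossovers (recombinant in both)
}
total = sum(counts.values())

# Recombination frequency = fraction of gametes recombinant for a pair
rf_ab = (counts["Abc"] + counts["aBC"] + counts["AbC"] + counts["aBc"]) / total
rf_bc = (counts["ABc"] + counts["abC"] + counts["AbC"] + counts["aBc"]) / total
print(f"A-B: {100*rf_ab:.1f} cM, B-C: {100*rf_bc:.1f} cM (map distances)")
```

The rarest classes identify the double crossovers, which fixes the gene order before the distances are computed.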
APPLICATION OF STATISTICAL ENERGY ANALYSIS TO VIBRATIONS OF MULTI-PANEL STRUCTURES.
cylindrical shell are compared with predictions obtained from statistical energy analysis. Generally good agreement is observed. The flow of mechanical...the coefficients of proportionality between power flow and average modal energy difference, which one must know in order to apply statistical energy analysis. No
Kim, W; Kim, H; Citrome, L; Akiskal, H S; Goffin, K C; Miller, S; Holtzman, J N; Hooshmand, F; Wang, P W; Hill, S J; Ketter, T A
2016-09-01
Assess strengths and limitations of mixed bipolar depression definitions made more inclusive than that of the Diagnostic and Statistical Manual of Mental Disorders Fifth Edition (DSM-5) by requiring fewer than three 'non-overlapping' mood elevation symptoms (NOMES). Among bipolar disorder (BD) out-patients assessed with the Systematic Treatment Enhancement Program for BD (STEP-BD) Affective Disorders Evaluation, we assessed prevalence, demographics, and clinical correlates of mixed vs. pure depression, using less inclusive (≥3 NOMES, DSM-5), more inclusive (≥2 NOMES), and most inclusive (≥1 NOMES) definitions. Among 153 depressed BD out-patients, compared to the less inclusive DSM-5 threshold, our more and most inclusive thresholds yielded approximately two- and five-fold higher mixed depression rates (7.2%, 15.0%, and 34.6% respectively), and important statistically significant clinical correlates for mixed compared to pure depression (e.g. more lifetime anxiety disorder comorbidity, more current irritability), which were not significant using the DSM-5 threshold. Further studies assessing strengths and limitations of more inclusive mixed depression definitions are warranted, including assessing the extent to which enhanced statistical power vs. other factors contributes to more vs. less inclusive mixed bipolar depression thresholds having more statistically significant clinical correlates, and whether 'overlapping' mood elevation symptoms should be counted. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Karulin, Alexey Y; Caspell, Richard; Dittrich, Marcus; Lehmann, Paul V
2015-03-02
Accurate assessment of positive ELISPOT responses for low frequencies of antigen-specific T-cells is controversial. In particular, it is still unknown whether ELISPOT counts within replicate wells follow a theoretical distribution function, and thus whether high-power parametric statistics can be used to discriminate between positive and negative wells. We studied experimental distributions of spot counts for up to 120 replicate wells of IFN-γ production by CD8+ T-cells responding to EBV LMP2A (426-434) peptide in human PBMC. The cells were tested in serial dilutions covering a wide range of average spot counts per condition, from just a few to hundreds of spots per well. Statistical analysis of the data using diagnostic Q-Q plots and the Shapiro-Wilk normality test showed that, across the entire dynamic range of the assay, spot counts within replicate wells followed a normal distribution. This result implies that Student's t-test and ANOVA are suited to identify positive responses. We also show experimentally that borderline responses can be reliably detected by using more replicate wells, plating higher numbers of PBMC, adding IL-7, or a combination of these. Furthermore, we have experimentally verified that the number of replicates needed for detection of weak responses can be calculated using parametric statistics.
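A minimal sketch of the workflow this abstract justifies: check normality of replicate-well counts, compare conditions with a t-test, and size the number of replicates with a normal-approximation formula. The well counts and effect sizes below are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical spot counts from 24 replicate wells of a test condition
# and a medium-only control (normality assumed, as the study supports).
test_wells = rng.normal(18, 5, 24).round()
ctrl_wells = rng.normal(12, 4, 24).round()

print(stats.shapiro(test_wells))   # Shapiro-Wilk normality check
print(stats.ttest_ind(test_wells, ctrl_wells, alternative="greater"))

# Replicates needed to detect a mean difference d with power 0.8 at
# one-sided alpha = 0.05 (two-sample normal approximation).
d, sd, alpha, power = 6.0, 5.0, 0.05, 0.8
z = stats.norm.ppf
n = 2 * ((z(1 - alpha) + z(power)) * sd / d) ** 2
print(f"~{int(np.ceil(n))} replicate wells per condition")
```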
Hunt, Sheri A; Bartizek, Robert D
2004-01-01
To evaluate the stain removal efficacy of two different toothbrush designs using a laboratory stained pellicle test with seven different dentifrices. The toothbrushes were a prototype powered toothbrush (Crest SpinBrush Pro Whitening) and an ADA reference manual toothbrush as a control. The dentifrices used in the study were: Crest Dual Action Whitening (Cool Mint), Crest Extra Whitening with Tartar Control (Clean Mint), Crest MultiCare Whitening (Fresh Mint), Colgate Total, Colgate Total Plus Whitening, Arm & Hammer Advance White with Tartar Control and Rembrandt Plus with Active Dental Peroxide. This was a randomized, parallel-group study that examined stain removal with a novel toothbrushing configuration adapted for powered and manual toothbrushes. Stain was scored before and after brushing for two consecutive 1-minute periods using digital image analysis, and the mean change in L* was statistically compared between toothbrushes with ANCOVA. Labial enamel specimens were obtained from bovine permanent incisors and were subjected to a laboratory staining process until their L* values were in the range of 35-45. Digital images for CIE L*a*b* analysis were captured using a high-resolution digital camera under standard polarized lighting conditions. Based on the L* values, the enamel specimens were divided into 14 groups of nine specimens each. Baseline L* values ranged from 40.62 to 41.38 for the 14 toothbrush/dentifrice combinations. The change in L* (post-brushing minus baseline), denoted deltaL*, was calculated for each specimen and the resulting data were subjected to a two-way ANCOVA, with toothbrush type and dentifrice type as the two terms in the model and baseline L* as the covariate. Pairwise tests were performed on the adjusted means in order to compare the stain removal efficacy of the two toothbrushes for each of the seven dentifrices evaluated. The powered toothbrush resulted in statistically significantly greater deltaL* values (all P ≤ 0.006) than the manual toothbrush for every dentifrice tested. The deltaL* values for dentifrices used with the powered toothbrush were 66.0% to 164.2% higher than for the same dentifrice used with the manual toothbrush.
NASA Technical Reports Server (NTRS)
Hall, Dorothy K.; Foster, James L.; DiGirolamo, Nicolo E.; Riggs, George A.
2010-01-01
Earlier onset of springtime weather including earlier snowmelt has been documented in the western United States over at least the last 50 years. Because the majority (>70%) of the water supply in the western U.S. comes from snowmelt, analysis of the declining spring snowpack (and shrinking glaciers) has important implications for streamflow management. The amount of water in a snowpack influences stream discharge which can also influence erosion and sediment transport by changing stream power, or the rate at which a stream can do work such as move sediment and erode the stream bed. The focus of this work is the Wind River Range (WRR) in west-central Wyoming. Ten years of Moderate-Resolution Imaging Spectroradiometer (MODIS) snow-cover, cloud-gap-filled (CGF) map products and 30 years of discharge and meteorological station data are studied. Streamflow data from six streams in the WRR drainage basins show lower annual discharge and earlier snowmelt in the decade of the 2000s than in the previous three decades, though no trend of either lower streamflow or earlier snowmelt was observed using MODIS snow-cover maps within the decade of the 2000s. Results show a statistically-significant trend at the 95% confidence level (or higher) of increasing weekly maximum air temperature (for three out of the five meteorological stations studied) in the decade of the 1970s, and also for the 40-year study period. MODIS-derived snow cover (percent of basin covered) measured on 30 April explains over 89% of the variance in discharge for maximum monthly streamflow in the decade of the 2000s using Spearman rank correlation analysis. We also investigated stream power for Bull Lake Creek Above Bull Lake from 1970 to 2009; a statistically-significant trend toward reduced stream power was found (significant at the 90% confidence level). Observed changes in streamflow and stream power may be related to increasing weekly maximum air temperature measured during the 40-year study period. The strong relationship between percent of basin covered and streamflow indicates that MODIS data is useful for predicting streamflow, leading to improved reservoir management.
The power and promise of RNA-seq in ecology and evolution.
Todd, Erica V; Black, Michael A; Gemmell, Neil J
2016-03-01
Reference is regularly made to the power of new genomic sequencing approaches. Using powerful technology, however, is not the same as having the necessary power to address a research question with statistical robustness. In the rush to adopt new and improved genomic research methods, limitations of technology and experimental design may be initially neglected. Here, we review these issues with regard to RNA sequencing (RNA-seq). RNA-seq adds large-scale transcriptomics to the toolkit of ecological and evolutionary biologists, enabling differential gene expression (DE) studies in nonmodel species without the need for prior genomic resources. High biological variance is typical of field-based gene expression studies and means that larger sample sizes are often needed to achieve the same degree of statistical power as clinical studies based on data from cell lines or inbred animal models. Sequencing costs have plummeted, yet RNA-seq studies still underutilize biological replication. Finite research budgets force a trade-off between sequencing effort and replication in RNA-seq experimental design. However, clear guidelines for negotiating this trade-off, while taking into account study-specific factors affecting power, are currently lacking. Study designs that prioritize sequencing depth over replication fail to capitalize on the power of RNA-seq technology for DE inference. Significant recent research effort has gone into developing statistical frameworks and software tools for power analysis and sample size calculation in the context of RNA-seq DE analysis. We synthesize progress in this area and derive an accessible rule-of-thumb guide for designing powerful RNA-seq experiments relevant in eco-evolutionary and clinical settings alike. © 2016 John Wiley & Sons Ltd.
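The replication-versus-depth trade-off this abstract discusses can be explored with a small simulation; the sketch below uses negative binomial counts with field-typical dispersion and a Welch t-test on log counts as a simple stand-in for a full DE framework such as edgeR or DESeq2. All parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def rnaseq_power(n_rep, fold=2.0, mu=100, disp=0.4, alpha=0.05, nsim=2000):
    """Simulated power to detect a `fold`-change in one gene with
    negative binomial counts (variance = mu + disp * mu**2)."""
    rng = np.random.default_rng(3)
    def nb(mean, size):                     # NB via (r, p) parameterization
        r = 1.0 / disp
        return rng.negative_binomial(r, r / (r + mean), size)
    hits = 0
    for _ in range(nsim):
        a = np.log1p(nb(mu, n_rep))
        b = np.log1p(nb(mu * fold, n_rep))
        if stats.ttest_ind(a, b, equal_var=False).pvalue < alpha:
            hits += 1
    return hits / nsim

for n in (3, 6, 12):
    print(n, "replicates:", rnaseq_power(n))
```

Running this shows power climbing steeply with replication at fixed depth, the central point of the rule-of-thumb guidance the paper derives.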
Flynn, Kevin; Swintek, Joe; Johnson, Rodney
2017-02-01
Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is fecundity of medaka breeding pairs. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g. mean fecundity, variance, and days with no egg production) would have on the statistical power of the test. The MEOGRT Reproduction Power Analysis Tool (MRPAT) is a software tool developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g. population mean and variance) and performing the power analysis under user-specified scenarios. Example scenarios are detailed that highlight the importance of the reproductive parameters on statistical power. When control fecundity is increased from 21 to 38 eggs per pair per day and the variance decreased from 49 to 20, the gain in power is equivalent to increasing replication by 2.5 times. On the other hand, if 10% of the breeding pairs, including controls, do not spawn, the power to detect a 40% decrease in fecundity drops to 0.54 from nearly 0.98 when all pairs have some level of egg production. Perhaps most importantly, MRPAT was used to inform the decision making process that led to the final recommendation of the MEOGRT to have 24 control breeding pairs and 12 breeding pairs in each exposure group. Published by Elsevier Inc.
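The kind of scenario MRPAT runs can be mimicked with a short simulation; the sketch below assumes gamma-distributed pair-level fecundity and a one-sided t-test, which are simplifying assumptions and not the MRPAT algorithm itself.

```python
import numpy as np
from scipy import stats

def fecundity_power(n_pairs, mu=21.0, var=49.0, effect=0.40,
                    p_zero=0.0, alpha=0.05, nsim=2000):
    """Simulated power to detect an `effect` (fractional) decrease in mean
    daily fecundity with `n_pairs` pairs per group; `p_zero` is the
    fraction of non-spawning pairs in both groups."""
    rng = np.random.default_rng(4)
    shape = mu**2 / var                       # gamma shape from mean/variance
    hits = 0
    for _ in range(nsim):
        ctrl = rng.gamma(shape, var / mu, n_pairs)
        trt = rng.gamma(shape, (1 - effect) * var / mu, n_pairs)
        zeros = rng.random((2, n_pairs)) < p_zero
        ctrl[zeros[0]] = 0.0
        trt[zeros[1]] = 0.0
        if stats.ttest_ind(ctrl, trt, alternative="greater").pvalue < alpha:
            hits += 1
    return hits / nsim

print(fecundity_power(12, p_zero=0.0))    # all pairs spawning
print(fecundity_power(12, p_zero=0.10))   # 10% non-spawners reduce power
```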
Wu, Zheyang; Zhao, Hongyu
2012-01-01
For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.
Needleman, Ian G; Hirsch, Nicholas P; Leemans, Michele; Moles, David R; Wilson, Michael; Ready, Derren R; Ismail, Salim; Ciric, Lena; Shaw, Michael J; Smith, Martin; Garner, Anne; Wilson, Sally
2011-03-01
To investigate the effect of a powered toothbrush on colonization of dental plaque by ventilator-associated pneumonia (VAP)-associated organisms and on dental plaque removal. Parallel-arm, single-centre, examiner- and analyst-masked randomized controlled trial. Forty-six adults were recruited within 48 h of admission. Test intervention: powered toothbrush; control intervention: sponge toothette; both used four times per day for 2 min. Both groups received 20 ml of 0.2% chlorhexidine mouthwash at each time point. The results showed a low prevalence of respiratory pathogens throughout with no statistically significant differences between groups. A highly statistically significantly greater reduction in dental plaque was produced by the powered toothbrush compared with the control treatment; mean plaque index at day 5, powered toothbrush 0.75 [95% confidence interval (CI) 0.53, 1.00], sponge toothette 1.35 (95% CI 0.95, 1.74), p=0.006. Total bacterial viable count was also highly statistically significantly lower in the test group at day 5; Log(10) mean total bacterial counts: powered toothbrush 5.12 (95% CI 4.60, 5.63), sponge toothette 6.61 (95% CI 5.93, 7.28), p=0.002. Powered toothbrushes are highly effective for plaque removal in intubated patients in a critical care unit and should be tested for their potential to reduce VAP incidence and health complications. © 2011 John Wiley & Sons A/S.
Hossain, Monowar; Mekhilef, Saad; Afifi, Firdaus; Halabi, Laith M; Olatomiwa, Lanre; Seyedmahmoudian, Mehdi; Horan, Ben; Stojcevski, Alex
2018-01-01
In this paper, the suitability and performance of ANFIS (adaptive neuro-fuzzy inference system), ANFIS-PSO (particle swarm optimization), ANFIS-GA (genetic algorithm) and ANFIS-DE (differential evolution) have been investigated for the prediction of monthly and weekly wind power density (WPD) at four locations in Malaysia: Mersing, Kuala Terengganu, Pulau Langkawi and Bayan Lepas. For this aim, standalone ANFIS, ANFIS-PSO, ANFIS-GA and ANFIS-DE prediction algorithms are developed in the MATLAB platform. The performance of the proposed hybrid ANFIS models is determined by computing different statistical parameters such as mean absolute bias error (MABE), mean absolute percentage error (MAPE), root mean square error (RMSE) and coefficient of determination (R2). The results obtained from ANFIS-PSO and ANFIS-GA achieve higher performance and accuracy than the other models, and they can be suggested for practical application to predict monthly and weekly mean wind power density. Besides, the capability of the proposed hybrid ANFIS models is examined to predict the wind data for locations where measured wind data are not available, and the results are compared with the measured wind data from nearby stations.
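The four statistical parameters used to rank the models are straightforward to compute; a minimal sketch follows, using the standard textbook definitions (the paper's exact conventions, e.g. sign handling in MABE, may differ slightly) and hypothetical wind power densities.

```python
import numpy as np

def forecast_stats(measured, predicted):
    """MABE, MAPE, RMSE and R2 for a set of forecasts."""
    m = np.asarray(measured, float)
    p = np.asarray(predicted, float)
    mabe = np.mean(np.abs(p - m))                 # mean absolute bias error
    mape = 100 * np.mean(np.abs((p - m) / m))     # percent; m must be nonzero
    rmse = np.sqrt(np.mean((p - m) ** 2))         # root mean square error
    r2 = 1 - np.sum((m - p) ** 2) / np.sum((m - m.mean()) ** 2)
    return {"MABE": mabe, "MAPE": mape, "RMSE": rmse, "R2": r2}

# Hypothetical monthly wind power densities (W/m^2): measured vs predicted
print(forecast_stats([120, 135, 150, 160], [118, 140, 147, 165]))
```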
NASA Astrophysics Data System (ADS)
Ali Saif, M.; Gade, Prashant M.
2009-03-01
The Pareto law, which states that the wealth distribution in societies has a power-law tail, has been the subject of intensive investigation in the statistical physics community. Several models have been employed to explain this behavior. However, most of the agent-based models assume conservation of both the number of agents and the total wealth, and both assumptions are unrealistic. In this paper, we study the limiting wealth distribution when one or both of these assumptions are not valid. Given the universality of the law, we have tried to study the wealth distribution from the asset exchange models' point of view. We consider models in which (a) new agents enter the market at a constant rate, (b) richer agents fragment with higher probability, introducing new agents into the system, and (c) both fragmentation and entry of new agents take place. While models (a) and (c) conserve neither total wealth nor the number of agents, model (b) conserves total wealth. All these models lead to a power-law tail in the wealth distribution, pointing to the possibility that more generalized asset exchange models could help us explain the emergence of a power-law tail in wealth distribution.
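For readers who want to see a power-law tail emerge from pairwise exchange, here is a sketch of a related kinetic wealth-exchange model with quenched random saving propensities (the Chatterjee-Chakrabarti-Manna type, which is known to develop a Pareto tail); it is deliberately not the entry/fragmentation dynamics studied in this paper, just a compact illustration of the model class.

```python
import numpy as np

def ccm_exchange(n=1000, sweeps=500, seed=5):
    """Kinetic exchange with agent-specific saving propensities lam[i]."""
    rng = np.random.default_rng(seed)
    w = np.ones(n)                 # everyone starts with unit wealth
    lam = rng.random(n)            # quenched saving propensity per agent
    for _ in range(sweeps * n):
        i, j = rng.integers(n, size=2)
        if i == j:
            continue
        pool = (1 - lam[i]) * w[i] + (1 - lam[j]) * w[j]
        eps = rng.random()         # random split of the pooled amount
        w[i] = lam[i] * w[i] + eps * pool
        w[j] = lam[j] * w[j] + (1 - eps) * pool
    return np.sort(w)[::-1]

w = ccm_exchange()
ranks = np.arange(1, len(w) + 1)
top = len(w) // 10                 # inspect the top decile (the tail)
slope = np.polyfit(np.log(ranks[:top]), np.log(w[:top]), 1)[0]
print("Zipf (rank-size) slope in the tail ~", round(slope, 2))
```

An approximately straight log wealth versus log rank relation in the tail is the rank-size signature of a Pareto power law.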
ERIC Educational Resources Information Center
Tabor, Josh
2010-01-01
On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
The evens and odds of CMB anomalies
NASA Astrophysics Data System (ADS)
Gruppuso, A.; Kitazawa, N.; Lattanzi, M.; Mandolesi, N.; Natoli, P.; Sagnotti, A.
2018-06-01
The lack of power of large-angle CMB anisotropies is known to increase in statistical significance at higher Galactic latitudes, where a string-inspired pre-inflationary scale Δ can also be detected. Considering the Planck 2015 data, and relying largely on a Bayesian approach, we show that the effect is mostly driven by the even-ℓ harmonic multipoles with ℓ ≲ 20, which appear sizably suppressed in a way that is robust with respect to Galactic masking, along with the corresponding detections of Δ. On the other hand, the first odd-ℓ multipoles are only suppressed at high Galactic latitudes. We investigate this behavior in different sky masks, constraining Δ through even and odd multipoles, and we elaborate on possible implications. We include low-ℓ polarization data which, despite being noise-limited, help in attaining confidence levels of about 3σ in the detection of Δ. We also show by direct forecasts that a future all-sky E-mode cosmic-variance-limited polarization survey may push the constraining power for Δ beyond 5σ.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai Lijun; Wei Haiyan; Wang Lingqing
2007-06-15
Coal burning may enhance human exposure to the natural radionuclides that occur around coal-fired power plants (CFPP). In this study, the spatial distribution and hazard assessment of radionuclides found in soils around a CFPP were investigated using statistics, geostatistics, and geographic information system (GIS) techniques. The concentrations of ²²⁶Ra, ²³²Th, and ⁴⁰K in soils range from 12.54 to 40.18, 38.02 to 72.55, and 498.02 to 1126.98 Bq kg⁻¹, respectively. Ordinary kriging was carried out to map the spatial patterns of radionuclides, and disjunctive kriging was used to quantify the probability of radium equivalent activity (Ra_eq) higher than the threshold. The maps show that the spatial variability of the natural radionuclide concentrations in soils was apparent. The results of this study could provide valuable information for risk assessment of environmental pollution and decision support.
NASA Astrophysics Data System (ADS)
Mukherjee, Suvodip; Souradeep, Tarun
2016-06-01
Recent measurements of the temperature field of the cosmic microwave background (CMB) provide tantalizing evidence for violation of statistical isotropy (SI), which constitutes a fundamental tenet of contemporary cosmology. The space-based CMB missions WMAP and Planck have observed a 7% departure from SI in the temperature field at large angular scales. However, due to higher cosmic variance at low multipoles, the significance of this measurement is not expected to improve from any future CMB temperature measurements. We demonstrate that weak lensing of the CMB due to scalar perturbations produces a corresponding SI violation in B modes of CMB polarization at smaller angular scales. The measurability of this phenomenon depends upon the scales (l range) over which power asymmetry is present. Power asymmetry, which is restricted only to l < 64 in the temperature field, cannot lead to any significant observable effect from this new window. However, this effect can put an independent bound on the spatial range of scales of hemispherical asymmetry present in the scalar sector.
Differentiating psychotic patients from nonpsychotic patients with the MMPI-2 and Rorschach.
Dao, Tam K; Prevatt, Frances; Horne, Heather Leveta
2008-01-01
The goal of this study was to examine the incremental validity and the clinical utility of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) and Rorschach (Rorschach, 1942) with regard to differential diagnosis in a sample of adult inpatients with a primary psychotic disorder or a primary mood disorder without psychotic features. Diagnostic efficiency statistics have suggested that the Rorschach Perceptual Thinking Index (PTI; Exner, 2000a, 2000b) was better than MMPI-2 scales in discriminating psychotic patients from nonpsychotic patients. We compared the 84% overall correct classification rate (OCC) for the PTI to an OCC of 70% for the MMPI-2 scales. Adding the MMPI-2 scales to the PTI resulted in a decrease in OCC of 1%, whereas adding the PTI to the MMPI-2 resulted in an increase in OCC of 14%. Sensitivity, specificity, positive predictive power, negative predictive power, and kappa were equal or higher with only the PTI in the model.
Biophysical properties of carboxymethyl derivatives of mannan and dextran.
Korcová, Jana; Machová, Eva; Filip, Jaroslav; Bystrický, Slavomír
2015-12-10
Mannan from Candida albicans, dextran from Leuconostoc spp. and their carboxymethyl (CM) derivatives were tested for antioxidant and thrombolytic activities. As antioxidant tests, protection of liposomes against OH radicals and a reducing power assay were used. Dextran and mannan protected liposomes in a dose-dependent manner. Carboxymethylation significantly increased the antioxidant properties of both CM derivatives up to a concentration of 10 mg/mL; higher concentrations did not change the protection of liposomes. The reducing power of CM-mannan (DS 0.92) was significantly lower (P < 0.05) than that of underivatized mannan. No reductive activity was found for dextran or CM-dextran. All CM derivatives demonstrated a statistically significant increase in thrombolytic activity compared with the underivatized polysaccharides. The highest thrombolytic activity was found using CM-mannan (DS 0.92); the clot lysis here amounted to 68.78 ± 6.52% compared with the 0.9% NaCl control (18.3 ± 6.3%). Three-dimensional surface profiles of mannan, dextran, and their CM derivatives were compared by atomic force microscopy. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Worker heat disorders at the Fukushima Daiichi nuclear power plant].
Tsuji, Masayoshi; Kakamu, Takeyasu; Hayakawa, Takehito; Kumagai, Tomohiro; Hidaka, Tomoo; Kanda, Hideyuki; Fukushima, Tetsuhito
2013-01-01
Ever since the Fukushima Daiichi nuclear power plant (NPP) accident, about 3,000 workers have been working each day to bring the situation under control. The frequent occurrence of heat disorders has been a concern for the workers wearing protective clothing with poor ventilation. We have been analyzing the heat disorder problem since the accident in order to come up with a solution to prevent future heat disorder incidents among Fukushima Daiichi NPP accident clean-up workers. From March 22 to September 16, 2011, the Fukushima Labor Bureau assessed 43 cases of nuclear power plant workers with heat disorders. Age of subject, month and time of occurrence, temperature, and humidity were examined for each case, as well as the severity of heat disorders. The grade of severity was divided into Grade I and Grade II or higher. Then, age, temperature, and humidity were analyzed using the Mann-Whitney U test, and age, temperature, humidity, and presence or absence of a cool-vest were analyzed using the χ² test and logistic regression analysis. SPSS version 17.0 statistical software was used with a level of significance of p < 0.05. Heat disorders occurred most frequently in subjects in their 40s (30.2%), followed by those in their 30s (25.6%), mostly in July (46.5%) between 7 am and 12 pm (69.8%). Heat disorders occurred most frequently in environments with temperatures above 25°C (76.7%) and humidity of 70-80% (39.5%). Heat disorders of Grade II or higher occurred in 10 cases, 5 of which were in June. According to the statistical analysis, there were no significant differences in severity across any of the factors. Heat disorders usually occur in workers aged 45-60; however, cases at the Fukushima Daiichi NPP occurred in clean-up workers at the relatively younger ages of 30-40, suggesting the need for heat disorder prevention measures for these younger workers. Heat disorder cases primarily occurred in the morning, necessitating preventive measures for the early hours of the day. In addition, because heat disorders of Grade II or higher occurred in June in 5 of 10 cases, we believe heat disorder precautions should be implemented from June onward. The lack of a significant difference in severity may be attributable to the small number of cases or to other factors. We think Fukushima Daiichi NPP accident clean-up workers need heat disorder prevention measures for their safety, based on the results of this study.
NASA Astrophysics Data System (ADS)
Xu, Ding; Li, Qun
2017-01-01
This paper addresses the power allocation problem for cognitive radio (CR) based on hybrid-automatic-repeat-request (HARQ) with chase combining (CC) in Nakagami-m slow fading channels. We assume that, instead of the perfect instantaneous channel state information (CSI), only the statistical CSI is available at the secondary user (SU) transmitter. The aim is to minimize the SU outage probability under the primary user (PU) interference outage constraint. Using the Lagrange multiplier method, an iterative and recursive algorithm is derived to obtain the optimal power allocation for each transmission round. Extensive numerical results are presented to illustrate the performance of the proposed algorithm.
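The outage probability being minimized here can be evaluated directly by Monte Carlo; the sketch below accumulates per-round SNRs as chase combining does over Nakagami-m fading (power gains are Gamma distributed), with noise power normalized to one. It is a direct evaluation sketch under those assumptions, not the paper's Lagrangian power-allocation algorithm.

```python
import numpy as np

def su_outage(powers, m=2.0, mean_gain=1.0, rate=1.0, nsim=200_000, seed=6):
    """Outage probability of a HARQ-CC link after len(powers) rounds:
    outage if the combined SNR stays below 2**rate - 1 (rate in bit/s/Hz)."""
    rng = np.random.default_rng(seed)
    k = len(powers)
    # Nakagami-m power gains are Gamma(m, mean_gain/m)
    g = rng.gamma(m, mean_gain / m, size=(nsim, k))
    snr_final = (np.asarray(powers) * g).sum(axis=1)   # CC accumulates SNR
    return np.mean(snr_final < 2**rate - 1)

print(su_outage([0.5, 0.5, 0.5]))   # equal power per round
print(su_outage([0.9, 0.4, 0.2]))   # front-loaded allocation, same total
```

Wrapping such an evaluator in a constrained optimizer (e.g. searching over power vectors subject to an interference budget) reproduces the shape of the problem the paper solves analytically.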
The Power Prior: Theory and Applications
Ibrahim, Joseph G.; Chen, Ming-Hui; Gwon, Yeongjin; Chen, Fang
2015-01-01
The power prior has been widely used in many applications covering a large number of disciplines. The power prior is intended to be an informative prior constructed from historical data. It has been used in clinical trials, genetics, health care, psychology, environmental health, engineering, economics, and business. It has also been applied for a wide variety of models and settings, both in the experimental design and analysis contexts. In this review article, we give an A to Z exposition of the power prior and its applications to date. We review its theoretical properties, variations in its formulation, statistical contexts for which it has been used, applications, and its advantages over other informative priors. We review models for which it has been used, including generalized linear models, survival models, and random effects models. Statistical areas where the power prior has been used include model selection, experimental design, hierarchical modeling, and conjugate priors. Frequentist properties of power priors in posterior inference are established and a simulation study is conducted to further examine the empirical performance of the posterior estimates with power priors. Real data analyses are given illustrating the power prior as well as the use of the power prior in the Bayesian design of clinical trials. PMID:26346180
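In the simplest conjugate case the power prior has a closed form: the historical likelihood enters raised to the discounting power a0. A minimal sketch for a binomial success probability with a Beta initial prior follows; the trial counts are hypothetical.

```python
from scipy import stats

def power_prior_posterior(y, n, y0, n0, a0, alpha0=1.0, beta0=1.0):
    """Posterior under the power prior: pi(theta | D0, a0) is proportional
    to L(theta | D0)**a0 * Beta(alpha0, beta0), so the historical data
    D0 = (y0, n0) enter with fractional weight a0 in [0, 1]."""
    a_post = alpha0 + y + a0 * y0
    b_post = beta0 + (n - y) + a0 * (n0 - y0)
    return stats.beta(a_post, b_post)

# Current trial: 18/40 responders; historical trial: 30/50 responders.
for a0 in (0.0, 0.5, 1.0):   # a0 = 0 ignores history; a0 = 1 pools fully
    post = power_prior_posterior(18, 40, 30, 50, a0)
    print(f"a0={a0}: posterior mean = {post.mean():.3f}")
```

Sweeping a0 between 0 and 1 makes the borrowing-of-strength behavior of the prior immediately visible.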
Power Analysis Software for Educational Researchers
ERIC Educational Resources Information Center
Peng, Chao-Ying Joanne; Long, Haiying; Abaci, Serdar
2012-01-01
Given the importance of statistical power analysis in quantitative research and the repeated emphasis on it by American Educational Research Association/American Psychological Association journals, the authors examined the reporting practice of power analysis by the quantitative studies published in 12 education/psychology journals between 2005…
The Use of Meta-Analytic Statistical Significance Testing
ERIC Educational Resources Information Center
Polanin, Joshua R.; Pigott, Terri D.
2015-01-01
Meta-analysis multiplicity, the concept of conducting multiple tests of statistical significance within one review, is an underdeveloped literature. We address this issue by considering how Type I errors can impact meta-analytic results, suggest how statistical power may be affected through the use of multiplicity corrections, and propose how…
Tsallis p⊥ distribution from statistical clusters
NASA Astrophysics Data System (ADS)
Bialas, A.
2015-07-01
It is shown that the transverse momentum distributions of particles emerging from the decay of statistical clusters, distributed according to a power law in their transverse energy, closely resemble those following from the Tsallis non-extensive statistical model. The experimental data are well reproduced with the cluster temperature T ≈ 160 MeV.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ciampelli, I.; Crocella, R.
1962-06-01
Cases of Herpes Zoster observed in patients with neoplasms who were undergoing radiation or chemotherapeutic treatment are reported. Although no statistical value is attributed to the case list, the higher incidence of symptomatic Zoster in systemic forms, and particularly in lymphogranuloma, is confirmed. On the basis of the concept of conditioned infectious disease, it is believed that infection by the Herpes virus is favored mainly by the decline in the body's defenses. The local effect of the ionizing radiations, or of the neoplasm itself, might have a certain role in localizing the virus in a given nervous segment. (auth)
Multivariate analysis techniques
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bendavid, Josh; Fisher, Wade C.; Junk, Thomas R.
2016-01-01
The end products of experimental data analysis are designed to be simple and easy to understand: hypothesis tests and measurements of parameters. But the experimental data themselves are voluminous and complex. Furthermore, in modern collider experiments, many petabytes of data must be processed in search of rare new processes which occur together with much more copious background processes that are of less interest to the task at hand. The systematic uncertainties on the background may be larger than the expected signal in many cases. The statistical power of an analysis and its sensitivity to systematic uncertainty can therefore usually both be improved by separating signal events from background events with higher efficiency and purity.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased precision of composites for estimating mean conditions. However, low sample sizes (<5 fish) did not achieve 80% power to detect near-threshold values (i.e., <1 mg Se/kg) under any scenario we evaluated. This analysis can assist the sampling design and interpretation of Se assessments from fish tissue by accounting for natural variation in stream fish populations.
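The parametric bootstrap described here is compact to reproduce; the sketch below draws gamma samples and applies a one-sided t-test against the management threshold. The mean-to-variance relation (var = 0.25 * mean**2) is an illustrative stand-in, not the fitted West Virginia relationship.

```python
import numpy as np
from scipy import stats

def se_power(n_fish, threshold, true_mean, alpha=0.05, nsim=5000, seed=7):
    """Bootstrap power to detect a true mean Se level above `threshold`
    from n_fish gamma-distributed whole-body concentrations."""
    rng = np.random.default_rng(seed)
    var = 0.25 * true_mean**2                 # illustrative mean-variance link
    shape, scale = true_mean**2 / var, var / true_mean
    t_crit = stats.t.ppf(1 - alpha, n_fish - 1)
    hits = 0
    for _ in range(nsim):
        x = rng.gamma(shape, scale, n_fish)
        t = (x.mean() - threshold) / (x.std(ddof=1) / np.sqrt(n_fish))
        if t > t_crit:                        # reject H0: mean <= threshold
            hits += 1
    return hits / nsim

print(se_power(8, threshold=4.0, true_mean=5.0))  # low threshold
print(se_power(8, threshold=8.0, true_mean=9.0))  # high threshold, lower power
```

Because the variance grows with the mean, the same 1-unit exceedance is harder to detect at the higher threshold, which is the paper's central point.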
Doshi, Dharmil; Limdi, Purvi; Parekh, Nilesh; Gohil, Neepa
2017-01-01
Accurate Intraocular Lens (IOL) power calculation in cataract surgery is very important for providing precise postoperative vision. Selection of the most appropriate formula is difficult in highly myopic and hypermetropic patients. To investigate the predictability of different IOL (Intra Ocular Lens) power calculation formulae in eyes with short and long Axial Length (AL) and to find the most accurate IOL power calculation formula in both groups. A prospective study was conducted on 80 consecutive patients who underwent phacoemulsification with monofocal IOL implantation after obtaining informed and written consent. Preoperative keratometry was done by IOL Master. Axial length and anterior chamber depth were measured using an A-scan machine ECHORULE 2 (BIOMEDIX). Patients were divided into two groups based on AL (40 in each group): Group A with AL < 22 mm and Group B with AL > 24.5 mm. The IOL power calculation in each group was done by the Haigis, Hoffer Q, Holladay-I, and SRK/T formulae using the software of ECHORULE 2. The actual postoperative Spherical Equivalent (SE), Estimation error (E) and Absolute Error (AE) were calculated at one and a half months and were used in the data analysis. The predictive accuracy of each formula in each group was analyzed by comparing the Absolute Error (AE). The Kruskal-Wallis test was used to compare differences in the AE of the formulae. A statistically significant difference was defined as p-value < 0.05. In Group A, the Hoffer Q, Holladay 1 and SRK/T formulae were equally accurate in predicting the postoperative refraction after cataract surgery (IOL power calculation) in eyes with AL less than 22.0 mm, and the accuracy of these three formulae was significantly higher than that of the Haigis formula. In Group B, the Hoffer Q, Holladay 1, SRK/T and Haigis formulae were equally accurate in predicting the postoperative refraction after cataract surgery in eyes with AL more than 24.5 mm. The Hoffer Q, Holladay 1 and SRK/T formulae showed significantly higher accuracy than the Haigis formula in predicting the postoperative refraction after cataract surgery (IOL power calculation) in eyes with AL less than 22.0 mm. In eyes with AL more than 24.5 mm, the Hoffer Q, Holladay 1, SRK/T and Haigis formulae were equally accurate.
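For orientation, the simplest member of this formula family, the classic SRK regression (P = A - 2.5*AL - 0.9*K), shows how axial length and keratometry drive IOL power; the study's four formulae (SRK/T, Hoffer Q, Holladay 1, Haigis) are theoretical refinements with additional parameters, and the A-constant and biometry values below are hypothetical.

```python
def srk_iol_power(a_constant, axial_length_mm, mean_k_diopters):
    """Classic SRK regression formula for emmetropia: P = A - 2.5*AL - 0.9*K."""
    return a_constant - 2.5 * axial_length_mm - 0.9 * mean_k_diopters

print(srk_iol_power(118.4, 21.5, 46.0))  # short eye: higher IOL power (D)
print(srk_iol_power(118.4, 26.0, 42.0))  # long eye: lower IOL power (D)
```

The strong weight on AL (2.5 D per mm) is precisely why formula choice matters most at the short and long extremes studied here.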
Universal Power Law of the Gravity Wave Manifestation in the AIM CIPS Polar Mesospheric Cloud Images
NASA Astrophysics Data System (ADS)
Rong, P. P.; Yue, J.; Russell, J. M., III; Siskind, D. E.; Randall, C. E.
2017-12-01
A large ensemble of gravity waves (GWs) resides in the PMCs, and we aim to extract the universal law that governs the wave display throughout the GW population. More specifically, we examined how wave display morphology and clarity level vary throughout the wave population manifested in the PMC albedo data. Higher clarity refers to more distinct exhibition of the features, which often corresponds to larger variances and a better organized nature. A gravity wave tracking algorithm is designed and applied to the PMC albedo data taken by the AIM Cloud Imaging and Particle Size (CIPS) instrument to obtain the gravity wave detections throughout the two northern summers of 2007 and 2010. Horizontal wavelengths in the range of 20-60 km are the focus of the study because they are the most commonly observed and readily captured in the CIPS orbital strips. A 1-dimensional continuous wavelet transform (CWT) is applied to PMC albedo along all radial directions within an elliptical region that has a radius of 400 km and an axial ratio of 0.65. The center of the elliptical region moves around the CIPS orbital strips so that waves at different locations and orientations can be captured. We find that the CWT albedo power statistically increases as the background gets brighter. We resample the wave detections to conform to a normal distribution by removing the dependence of the albedo power on the background cloud brightness, because we aim to examine the wave morphology beyond the cloud brightness impact. Sample cases are selected at the two tails and the peak of the normal distribution, and at three brightness levels, to represent the high, medium, and low albedo power categories. For these cases the albedo CWT power spectra follow an exponential decay toward smaller scales. The high albedo power category has the most rapid decay (exponent = -3.2) and corresponds to the most distinct wave display. Overall, higher albedo power and more rapid decay both contribute to a more distinct display. The wave display becomes increasingly blurry for the medium and low power categories, whose exponents are -2.9 and -2.5, respectively. The majority of waves are straight waves whose clarity levels can be collapsed irrespective of the brightness levels, but in brighter backgrounds the wave signatures seem to exhibit mildly turbulent-like behavior.
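The 1-D CWT step can be sketched on a single synthetic radial albedo profile; the example below uses the PyWavelets Mexican-hat wavelet, and the sampling interval, scale range, and wave amplitude are all illustrative choices rather than CIPS processing parameters.

```python
import numpy as np
import pywt

dx_km = 2.5                                     # assumed sampling interval
x = np.arange(0, 800, dx_km)
albedo = 1.0 + 0.05 * np.sin(2 * np.pi * x / 40.0)   # synthetic 40 km wave
albedo += np.random.default_rng(8).normal(0, 0.01, x.size)

scales = np.arange(2, 13)
coefs, freqs = pywt.cwt(albedo - albedo.mean(), scales, "mexh",
                        sampling_period=dx_km)  # freqs in cycles per km
power = coefs ** 2
dominant = 1.0 / freqs[np.argmax(power.max(axis=1))]
print(f"dominant wavelength ~ {dominant:.0f} km")
```

Repeating this along many radial directions and fitting the decay of the scale-averaged power toward small scales yields the exponents (around -2.5 to -3.2) that the study uses to classify display clarity.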
ERIC Educational Resources Information Center
Delgado, Antonio
2012-01-01
Higher education is a distribution center of knowledge and economic, social, and cultural power (Cervero & Wilson, 2001). A critical approach to understanding a higher education classroom begins with recognizing the instructor's position of power and authority (Tisdell, Hanley, & Taylor, 2000). The power instructors wield exists…
Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben
2017-09-15
Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure the statistical power of identifying these variants with small effects. However, it is often the case that a research group can only get approval for access to individual-level genotype data with a limited sample size (e.g. a few hundred or a few thousand). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available, and the sample sizes associated with these summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increasing the statistical power of identifying risk variants and improving the accuracy of risk prediction by integrating individual-level genotype data and summary statistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over methods which take either individual-level data or summary statistics data as input. We applied IGESS to perform an integrative analysis of Crohn's disease from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% (±0.4%) to 69.4% (±0.1%) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS . zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
Improved score statistics for meta-analysis in single-variant and gene-level association studies.
Yang, Jingjing; Chen, Sai; Abecasis, Gonçalo
2018-06-01
Meta-analysis is now an essential tool for genetic association studies, allowing them to combine large studies and greatly accelerating the pace of genetic discovery. Although the standard meta-analysis methods perform equivalently to the more cumbersome joint analysis under ideal settings, they result in substantial power loss under unbalanced settings with varying case-control ratios. Here, we investigate the power loss problem of the standard meta-analysis methods for unbalanced studies, and further propose novel meta-analysis methods that perform equivalently to the joint analysis under both balanced and unbalanced settings. We derive improved meta-score-statistics that can accurately approximate the joint-score-statistics with combined individual-level data, for both linear and logistic regression models, with and without covariates. In addition, we propose a novel approach to adjust for population stratification by correcting for known population structures through minor allele frequencies. In simulated gene-level association studies under unbalanced settings, our method recovered up to 85% of the power loss caused by the standard methods. We further showed the power gain of our methods in gene-level tests with 26 unbalanced studies of age-related macular degeneration. In addition, we took the meta-analysis of three unbalanced studies of type 2 diabetes as an example to discuss the challenges of meta-analyzing multi-ethnic samples. In summary, our improved meta-score-statistics with corrections for population stratification can be used to construct both single-variant and gene-level association studies, providing a useful framework for ensuring well-powered, convenient, cross-study analyses. © 2018 WILEY PERIODICALS, INC.
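For context, the standard meta-score statistic (the baseline the paper improves on) combines per-study score statistics U_k and their null variances V_k as T = sum(U_k) / sqrt(sum(V_k)); a minimal sketch with hypothetical study summaries follows.

```python
import numpy as np
from scipy import stats

def meta_score_test(u_stats, v_stats):
    """Standard score-based meta-analysis for one variant: U_k is each
    study's score and V_k its variance under H0; T is ~ N(0, 1) under H0."""
    u = np.sum(u_stats)
    v = np.sum(v_stats)
    z = u / np.sqrt(v)
    return z, 2 * stats.norm.sf(abs(z))   # two-sided p-value

# Hypothetical score summaries from three studies of one variant
z, p = meta_score_test(u_stats=[5.2, -1.1, 7.8], v_stats=[10.0, 4.5, 12.3])
print(f"z = {z:.2f}, p = {p:.3g}")
```

Only these per-study summaries, never the individual genotypes, cross study boundaries, which is what makes the score-based framework so convenient for consortia.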
General Framework for Meta-analysis of Rare Variants in Sequencing Association Studies
Lee, Seunggeun; Teslovich, Tanya M.; Boehnke, Michael; Lin, Xihong
2013-01-01
We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In the analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variant tests, such as burden tests and variance component tests. Because estimation of the regression coefficients of individual rare variants is often unstable or infeasible, the proposed method avoids this difficulty by instead calculating score statistics, which require only fitting the null model for each study and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are based on study-specific summary statistics, specifically score statistics for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods can incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis that directly pools individual-level genotype data. We conduct extensive simulations to evaluate the performance of our methods under varying levels of heterogeneity across studies, and we apply the proposed methods to a meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels. PMID:23768515
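A hedged sketch of the score-based burden meta-analysis the framework describes: studies share per-variant score vectors and covariance (LD) matrices, which are summed and collapsed with variant weights into a 1-df chi-square test. The weights and inputs are left to the caller, and homogeneous effects across studies are assumed.

```python
import numpy as np
from scipy.stats import chi2

def meta_burden_test(score_vectors, cov_matrices, weights):
    """Meta burden test from study-level summary statistics: per-variant
    score vectors U_k and covariance (LD) matrices Phi_k for one region."""
    U = np.sum(score_vectors, axis=0)      # pooled p-vector of scores
    Phi = np.sum(cov_matrices, axis=0)     # pooled p x p covariance
    w = np.asarray(weights, float)
    Q = (w @ U) ** 2 / (w @ Phi @ w)       # 1-df chi-square under H0
    return Q, chi2.sf(Q, df=1)

# Example with two hypothetical studies and equal variant weights:
U1, U2 = np.array([1.2, -0.4, 0.9]), np.array([0.8, 0.1, 1.1])
P1 = P2 = np.eye(3)
print(meta_burden_test([U1, U2], [P1, P2], weights=np.ones(3)))
```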
The effect of warm-ups with stretching on the isokinetic moments of collegiate men.
Park, Hyoung-Kil; Jung, Min-Kyung; Park, Eunkyung; Lee, Chang-Young; Jee, Yong-Seok; Eun, Denny; Cha, Jun-Youl; Yoo, Jaehyun
2018-02-01
Performing warm-ups increases muscle temperature and blood flow, which contributes to improved exercise performance and reduced risk of injuries to muscles and tendons. Stretching increases the range of motion of the joints and is effective for the maintenance and enhancement of exercise performance and flexibility, as well as for injury prevention. However, stretching as a warm-up activity may temporarily decrease muscle strength, muscle power, and exercise performance. This study aimed to clarify the effect of stretching during warm-ups on muscle strength, muscle power, and muscle endurance in a nonathletic population. The subjects consisted of 13 physically active male collegiate students with no medical conditions. A self-assessment questionnaire regarding how well the subjects felt about their physical abilities was administered to measure psychological readiness before and after the warm-up. Subjects performed no warm-up, a warm-up, or a warm-up with stretching prior to the assessment of the isokinetic moments of the knee joints. After the measurements, the respective variables were analyzed using nonparametric tests. First, no statistically significant intergroup differences were found in the flexor and extensor peak torques of the knee joints at 60°/sec, which were assessed to measure muscle strength. Second, no statistically significant intergroup differences were found in the flexor and extensor peak torques of the knee joints at 180°/sec, which were assessed to measure muscle power. Third, the total work of the knee joints at 240°/sec, intended to measure muscle endurance, was highest in the aerobic-stretch-warm-up (ASW) group, but no statistically significant differences were found among the groups. Finally, psychological readiness for physical activity was significantly higher with ASW. Simple stretching during warm-ups appears to have no effect on exercise-physiology variables in nonathletes who participate in routine recreational sport activities. However, it seems to have a meaningful effect on exercise performance by affording psychological stability, preparation, and confidence.
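The abstract reports nonparametric intergroup comparisons; as one plausible illustration (the paper's exact test is not specified here, and the data below are invented), a Kruskal-Wallis test across the three warm-up conditions might look like:

```python
from scipy.stats import kruskal

# Hypothetical extensor peak torques (Nm) at 60 deg/sec per condition.
no_warmup      = [112, 118, 109, 121, 115]
warmup         = [117, 120, 113, 125, 119]
warmup_stretch = [114, 116, 111, 122, 117]
print(kruskal(no_warmup, warmup, warmup_stretch))  # statistic and p-value
```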
NASA Astrophysics Data System (ADS)
Zhu, Hao
Sparsity plays an instrumental role in a plethora of scientific fields, including statistical inference for variable selection, parsimonious signal representations, and the solution of under-determined systems of linear equations, which has led to the ground-breaking results of compressive sampling (CS). This thesis leverages ideas from sparse signal reconstruction to develop sparsity-cognizant algorithms and analyze their performance. The vision is to devise tools exploiting the 'right' form of sparsity for the 'right' application domain: multiuser communication systems, array signal processing systems, and the emerging challenges in the smart power grid. Two important power system monitoring tasks are addressed first by capitalizing on hidden sparsity. To robustify power system state estimation, a sparse outlier model is leveraged to capture possible corruption in every datum, while the problem's nonconvexity due to nonlinear measurements is handled using the semidefinite relaxation technique. Different from existing iterative methods, the proposed algorithm approximates the global optimum well regardless of the initialization. In addition, for enhanced situational awareness, a novel sparse overcomplete representation is introduced to capture (possibly multiple) line outages, and real-time algorithms are developed for solving the combinatorially complex identification problem. The proposed algorithms exhibit near-optimal performance while incurring only linear complexity in the number of lines, which makes it possible to quickly bring contingencies to attention. This thesis also accounts for two basic issues in CS, namely fully-perturbed models and the finite-alphabet property. The sparse total least-squares (S-TLS) approach is proposed to furnish CS algorithms for fully-perturbed linear models, leading to statistically optimal and computationally efficient solvers. The S-TLS framework is well motivated for grid-based sensing applications and exhibits higher accuracy than existing sparse algorithms. On the other hand, exploiting the finite alphabet of unknown signals emerges naturally in communication systems, along with sparsity coming from the low activity of each user. Compared to approaches accounting for only one of the two, joint exploitation of both leads to statistically optimal detectors with improved error performance.
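The thesis builds on sparse reconstruction solvers of this general family. As a hedged illustration, not the S-TLS algorithm itself, the sketch below recovers a sparse vector from an under-determined linear system with iterative soft-thresholding (ISTA) on synthetic data:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for the basic sparse recovery problem
    min_x 0.5*||y - Ax||^2 + lam*||x||_1 (a generic CS solver, not S-TLS)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x

# Hypothetical under-determined system with a 5-sparse ground truth.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 200)) / np.sqrt(60)
x0 = np.zeros(200)
x0[rng.choice(200, 5, replace=False)] = rng.normal(size=5)
y = A @ x0 + 0.01 * rng.normal(size=60)
print(np.flatnonzero(np.abs(ista(A, y, lam=0.05)) > 0.05))  # recovered support
```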
On the Power Functions of Test Statistics in Order Restricted Inference.
1984-10-01
University of California-Davis, Davis, California 95616; The University of Iowa, Iowa City, Iowa 52242; F. T. Wright, Department of Mathematics. SUMMARY: We study the power functions of both the likelihood ratio and contrast statistics for detecting a totally ordered trend in a collection of samples from normal populations. Bartholomew (1959a,b; 1961) studied the likelihood ratio tests (LRTs) for H0 versus H1 - H0, assuming in one case that...
Statistical scaling of pore-scale Lagrangian velocities in natural porous media.
Siena, M; Guadagnini, A; Riva, M; Bijeljic, B; Pereira Nunes, J P; Blunt, M J
2014-08-01
We investigate the scaling behavior of sample statistics of pore-scale Lagrangian velocities in two different rock samples, Bentheimer sandstone and Estaillades limestone. The samples are imaged using X-ray computed tomography with micron-scale resolution. The scaling analysis relies on the study of the way qth-order sample structure functions (statistical moments of order q of absolute increments) of Lagrangian velocities depend on separation distances, or lags, traveled along the mean flow direction. In the sandstone block, sample structure functions of all orders exhibit power-law scaling within a clearly identifiable intermediate range of lags. Sample structure functions associated with the limestone block display two distinct power-law regimes, which we infer to be related to two overlapping spatially correlated structures. In both rocks and for all orders q, we observe linear relationships between logarithmic structure functions of successive orders at all lags (a phenomenon typically known as extended power-law scaling, or extended self-similarity). The scaling behavior of the Lagrangian velocities is compared with that exhibited by porosity and specific surface area, which constitute two key pore-scale geometric observables. The statistical scaling of the local velocity field reflects the behavior of these geometric observables, with power-law-scaling regimes occurring within the same range of lags for the sample structure functions of Lagrangian velocity, porosity, and specific surface area.
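A minimal sketch of the structure-function analysis on a generic 1-D velocity series; the toy Brownian-like signal stands in for the pore-scale Lagrangian velocities, which would require the micro-CT flow simulations:

```python
import numpy as np

def structure_functions(v, lags, orders):
    """qth-order sample structure functions along the series:
    S_q(l) = mean(|v(x + l) - v(x)|**q)."""
    return {q: np.array([np.mean(np.abs(v[l:] - v[:-l]) ** q) for l in lags])
            for q in orders}

def scaling_exponent(lags, Sq):
    """Power-law exponent: slope of log S_q versus log lag."""
    return np.polyfit(np.log(lags), np.log(Sq), 1)[0]

rng = np.random.default_rng(3)
v = np.cumsum(rng.normal(size=10_000))           # toy signal: S_q(l) ~ l^(q/2)
lags = np.unique(np.logspace(0, 3, 20).astype(int))
S = structure_functions(v, lags, orders=(1, 2, 3))
print({q: round(scaling_exponent(lags, S[q]), 2) for q in S})
# Extended self-similarity: log S_q vs log S_{q+1} should be linear at all lags.
```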
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
Nateghi, Roshanak; Guikema, Seth D; Quiring, Steven M
2011-12-01
This article compares statistical methods for modeling power outage durations during hurricanes and examines the predictive accuracy of these methods. Accurate predictions of power outage durations are valuable because utility companies can use the information to plan their restoration efforts more efficiently. This information can also help inform customers and public agencies of expected outage times, enabling better collective response planning and coordination of restoration efforts for other critical infrastructures that depend on electricity. In the long run, outage duration estimates for future storm scenarios may help utilities and public agencies better allocate risk management resources to balance the disruption from hurricanes with the cost of hardening power systems. We compare the out-of-sample predictive accuracy of five distinct statistical models for estimating power outage durations caused by Hurricane Ivan in 2004. The methods compared include regression models (accelerated failure time (AFT) and Cox proportional hazards (Cox PH) models) and data mining techniques (regression trees, Bayesian additive regression trees (BART), and multivariate adaptive regression splines (MARS)). We then validate our models against two other hurricanes. Our results indicate that BART yields the best prediction accuracy and that it is possible to predict outage durations with reasonable accuracy. © 2011 Society for Risk Analysis.
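A skeletal version of the out-of-sample comparison, using synthetic data and two stand-in learners from scikit-learn rather than the paper's full AFT/Cox/BART/MARS suite; the covariates and sample size are hypothetical:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

# Hypothetical design matrix X (wind speed, rainfall, land cover, ...) and
# observed outage durations y (hours) for one storm.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = np.exp(0.5 * X[:, 0] - 0.3 * X[:, 1] + rng.normal(scale=0.5, size=500))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for model in (LinearRegression(), DecisionTreeRegressor(max_depth=4)):
    model.fit(X_tr, np.log(y_tr))            # model log-durations
    mae = mean_absolute_error(y_te, np.exp(model.predict(X_te)))
    print(type(model).__name__, round(mae, 2))   # held-out accuracy
```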
Joint resonant CMB power spectrum and bispectrum estimation
NASA Astrophysics Data System (ADS)
Meerburg, P. Daniel; Münchmeyer, Moritz; Wandelt, Benjamin
2016-02-01
We develop the tools necessary to assess the statistical significance of resonant features in the CMB correlation functions, combining power spectrum and bispectrum measurements. This significance is typically addressed by running a large number of simulations to derive the probability density function (PDF) of the feature-amplitude in the Gaussian case. Although these simulations are tractable for the power spectrum, for the bispectrum they require significant computational resources. We show that, by assuming that the PDF is given by a multivariate Gaussian where the covariance is determined by the Fisher matrix of the sine and cosine terms, we can efficiently produce spectra that are statistically close to those derived from full simulations. By drawing a large number of spectra from this PDF, both for the power spectrum and the bispectrum, we can quickly determine the statistical significance of candidate signatures in the CMB, considering both single frequency and multifrequency estimators. We show that for resonance models, cosmology and foreground parameters have little influence on the estimated amplitude, which allows us to simplify the analysis considerably. A more precise likelihood treatment can then be applied to candidate signatures only. We also discuss a modal expansion approach for the power spectrum, aimed at quickly scanning through large families of oscillating models.
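A hedged sketch of the central shortcut: draw feature amplitudes from a zero-mean multivariate Gaussian whose covariance is the inverse Fisher matrix of the sine and cosine terms, then read off empirical significance, instead of running full simulations. The Fisher matrix itself is assumed to be given.

```python
import numpy as np

def sample_feature_amplitudes(fisher, n_draws, seed=0):
    """Gaussian-case null draws of feature amplitudes with
    covariance = inverse Fisher matrix of the sine/cosine terms."""
    cov = np.linalg.inv(fisher)
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.zeros(len(fisher)), cov, size=n_draws)

def empirical_pvalue(null_amps, observed):
    """Tail fraction of null amplitude magnitudes beyond the candidate."""
    null_mag = np.linalg.norm(null_amps, axis=1)
    return np.mean(null_mag >= np.linalg.norm(observed))

# Hypothetical 2x2 Fisher matrix for one (sine, cosine) amplitude pair:
F = np.array([[4.0, 0.5], [0.5, 3.0]])
draws = sample_feature_amplitudes(F, n_draws=100_000)
print(empirical_pvalue(draws, observed=np.array([1.2, -0.8])))
```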
Analysis of Loss-of-Offsite-Power Events 1997-2015
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Nancy Ellen; Schroeder, John Alton
2016-07-01
Loss of offsite power (LOOP) can have a major negative impact on a power plant's ability to achieve and maintain safe shutdown conditions. LOOP event frequencies and the times required for subsequent restoration of offsite power are important inputs to plant probabilistic risk assessments. This report presents a statistical and engineering analysis of LOOP frequencies and durations at U.S. commercial nuclear power plants. The data used in this study are based on operating experience during calendar years 1997 through 2015. LOOP events during critical operation that do not result in a reactor trip are not included. Frequencies and durations were determined for four event categories: plant-centered, switchyard-centered, grid-related, and weather-related. Emergency diesel generator reliability is also considered (failure to start, failure to load and run, and failure to run more than 1 hour). There is an adverse trend in LOOP durations. The previously reported adverse trend in LOOP frequency was not statistically significant for 2006-2015. Grid-related LOOPs happen predominantly in the summer. Switchyard-centered LOOPs happen predominantly in winter and spring. Plant-centered and weather-related LOOPs do not show statistically significant seasonality. The engineering analysis of LOOP data shows that human errors have been much less frequent since 1997 than in the 1986-1996 time period.
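LOOP frequencies of this kind are typically reported as events per reactor critical year with an exact Poisson interval; a hedged sketch with hypothetical counts (not the report's numbers):

```python
from scipy.stats import chi2

def poisson_rate_ci(n_events, exposure_years, conf=0.95):
    """Exact (chi-square) confidence interval for a Poisson occurrence rate,
    as commonly used for initiating-event frequencies in PRA."""
    a = 1 - conf
    lo = chi2.ppf(a / 2, 2 * n_events) / (2 * exposure_years) if n_events else 0.0
    hi = chi2.ppf(1 - a / 2, 2 * (n_events + 1)) / (2 * exposure_years)
    return n_events / exposure_years, (lo, hi)

# Hypothetical: 40 grid-related LOOP events over 1,900 reactor critical years.
print(poisson_rate_ci(40, 1900))
```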
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejecting the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value rather than a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials. Copyright © 2013 John Wiley & Sons, Ltd.
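A minimal sketch of the proposed procedure: pre-specify several statistics, build each one's permutation distribution, take the minimum p-value, and calibrate it against the permutation distribution of the minimum. The two candidate statistics below (mean difference and rank sum) are illustrative choices, not the paper's:

```python
import numpy as np
from scipy.stats import rankdata

def minp_permutation_test(x, y, stats, n_perm=2000, seed=0):
    """Permutation test based on the minimum p-value over several statistics.
    `stats` maps (x, y) to a statistic where larger = stronger evidence.
    Returns the observed min-p and its permutation-calibrated p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    n = len(x)
    obs = np.array([s(x, y) for s in stats])
    perm = np.empty((n_perm, len(stats)))
    for b in range(n_perm):
        idx = rng.permutation(len(pooled))
        perm[b] = [s(pooled[idx[:n]], pooled[idx[n:]]) for s in stats]
    p_obs = np.mean(perm >= obs, axis=0).min()          # observed min-p
    ranks = 1 - (rankdata(perm, axis=0) - 1) / n_perm   # per-stat upper-tail p
    p_null = ranks.min(axis=1)                          # null dist. of min-p
    return p_obs, np.mean(p_null <= p_obs)

rng0 = np.random.default_rng(42)
x = rng0.normal(0.0, 1.0, 40)    # control arm (hypothetical)
y = rng0.normal(0.5, 1.0, 40)    # treatment arm (hypothetical)
stats = [
    lambda a, b: b.mean() - a.mean(),                               # mean diff.
    lambda a, b: rankdata(np.concatenate([a, b]))[len(a):].sum(),   # rank sum
]
print(minp_permutation_test(x, y, stats))
```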
Solar activity and economic fundamentals: Evidence from 12 geographically disparate power grids
NASA Astrophysics Data System (ADS)
Forbes, Kevin F.; St. Cyr, O. C.
2008-10-01
This study uses local (ground-based) magnetometer data as a proxy for geomagnetically induced currents (GICs) to address whether there is a space weather/electricity market relationship in 12 geographically disparate power grids: Eirgrid, the power grid that serves the Republic of Ireland; Scottish and Southern Electricity, the power grid that served northern Scotland until April 2005; Scottish Power, the power grid that served southern Scotland until April 2005; the power grid that serves the Czech Republic; E.ON Netz, the transmission system operator in central Germany; the power grid in England and Wales; the power grid in New Zealand; the power grid that serves the vast proportion of the population in Australia; ISO New England, the power grid that serves New England; PJM, a power grid that over the sample period served all or parts of Delaware, Maryland, New Jersey, Ohio, Pennsylvania, Virginia, West Virginia, and the District of Columbia; NYISO, the power grid that serves New York State; and the power grid in the Netherlands. Using Pearson's chi-squared statistic, the study tests the hypothesis that GIC levels (proxied by the time variation of local magnetic field measurements, dH/dt) and electricity grid conditions are related. The metrics of power grid conditions include measures of electricity market imbalances, energy losses, congestion costs, and actions by system operators to restore grid stability. The results of the analysis indicate that real-time market conditions in these power grids are statistically related to the GIC proxy.
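The core test reduces to a contingency-table chi-squared between binned dH/dt and a grid-condition indicator; a sketch with invented counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 cross-tabulation: low vs high dH/dt (GIC proxy) against
# absence vs presence of a large real-time electricity-market imbalance.
table = np.array([[120, 45],    # low dH/dt:  no imbalance, imbalance
                  [ 60, 55]])   # high dH/dt: no imbalance, imbalance
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```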
Reed, Donovan S; Apsey, Douglas; Steigleman, Walter; Townley, James; Caldwell, Matthew
2017-11-01
In an attempt to maximize treatment outcomes, refractive surgery techniques are being directed toward customized ablations that correct not only lower-order aberrations but also higher-order aberrations specific to the individual eye. Measurement of the entirety of ocular aberrations is the most definitive means of establishing the true effect of refractive surgery on image quality and visual performance. We examined whether there is a statistically significant difference in induced higher-order corneal aberrations between the VISX Star S4 (Abbott Medical Optics, Santa Ana, California) and the WaveLight EX500 (Alcon, Fort Worth, Texas) lasers. A retrospective analysis was performed to investigate the difference in the postoperative root-mean-square (RMS) value of the higher-order corneal aberrations between the two currently available laser platforms; the RMS value summarizes the higher-order corneal aberrations. Data from 240 total eyes of active duty military or Department of Defense beneficiaries who completed photorefractive keratectomy (PRK) or laser in situ keratomileusis (LASIK) refractive surgery at the Wilford Hall Ambulatory Surgical Center Joint Warfighter Refractive Surgery Center were examined. Using SPSS statistics software (IBM Corp., Armonk, New York), the mean changes in RMS values between the two lasers and refractive surgery procedures were determined. A Student t test was performed to compare the RMS of the higher-order aberrations of the subjects' corneas between the two lasers, and a regression analysis was performed to adjust for preoperative spherical equivalent. The study and a waiver of informed consent were approved by the Clinical Research Division of the 59th Medical Wing Institutional Review Board (Protocol Number: 20150093H). The mean change in RMS value for PRK was 0.00122 (standard deviation 0.02583) with the VISX laser and 0.004323 (standard deviation 0.02916) with the WaveLight EX500 laser. The mean change in RMS value for LASIK was 0.00841 (standard deviation 0.03011) with the VISX laser and 0.0174 (standard deviation 0.02417) with the WaveLight EX500 laser. When comparing the two lasers for PRK and LASIK procedures, the p values were 0.431 and 0.295, respectively. These results suggest no statistically significant difference in induced higher-order aberrations between the two laser platforms for either LASIK or PRK. Overall, the VISX laser did have consistently lower induced higher-order aberrations postoperatively, but the difference did not reach statistical significance; the study was likely underpowered, given the relatively small sample size. Additional limitations include the retrospective design and limited generalizability, as the Department of Defense population may differ from the typical refractive surgery population in overall health and preoperative refractive error. Visual outcomes between the two laser platforms should be investigated further before determining superiority in terms of postoperative visual image and quality. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
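The reported PRK comparison can be reproduced from the summary statistics alone with a two-sample t test; the abstract does not give per-group sizes, so n = 60 per group is an assumption here:

```python
from scipy.stats import ttest_ind_from_stats

# Reported mean change in RMS for PRK (VISX vs WaveLight EX500).
# The group sizes are not stated in the abstract; n = 60 is hypothetical.
t, p = ttest_ind_from_stats(mean1=0.00122,  std1=0.02583, nobs1=60,
                            mean2=0.004323, std2=0.02916, nobs2=60)
print(t, p)
```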
Agarwal, Charu; Máthé, Katalin; Hofmann, Tamás; Csóka, Levente
2018-03-01
Ultrasonication was used to extract bioactive compounds from Cannabis sativa L., such as polyphenols, flavonoids, and cannabinoids. The influence of three independent factors (time, input power, and methanol concentration) on the extraction of total phenols (TPC), total flavonoids (TF), ferric reducing ability of plasma (FRAP), and overall yield was evaluated. A face-centered central composite design was used for statistical modelling of the response data, followed by regression and analysis of variance to determine the significance of the model and factors. Both the solvent composition and the time significantly affected the extraction, while the sonication power had no significant impact on the responses. The response predictions obtained at the optimum extraction conditions of 15 min, 130 W, and 80% methanol were 314.822 mg GAE/g DW of TPC, 28.173 mg QE/g DW of TF, 18.79 mM AAE/g DW of FRAP, and 10.86% yield. A good correlation was observed between the predicted and experimental values of the responses, which validated the mathematical model. Compared with the control extraction, the ultrasonic process yielded noticeably higher values for each of the responses, and ultrasound considerably improved the extraction of the cannabinoids present in Cannabis. Low-frequency ultrasound was employed to extract bioactive compounds from the inflorescence of Cannabis; the responses evaluated were total phenols, flavonoids, ferric reducing ability, and yield. The solvent composition and time significantly influenced the extraction process, and appreciably higher extraction of cannabinoids was achieved with sonication than in the control. © 2018 Institute of Food Technologists®.
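A hedged sketch of the response-surface step: fit a full quadratic model to a face-centered central composite design in coded units and inspect factor significance. The design runs and response values below are invented, arranged so that time and methanol dominate while power does not, mirroring the reported finding:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical face-centered CCD in coded units: time t, power P, methanol M.
runs = [(-1,-1,-1,210),( 1,-1,-1,248),(-1, 1,-1,212),( 1, 1,-1,251),
        (-1,-1, 1,286),( 1,-1, 1,305),(-1, 1, 1,284),( 1, 1, 1,308),
        (-1, 0, 0,255),( 1, 0, 0,292),( 0,-1, 0,272),( 0, 1, 0,275),
        ( 0, 0,-1,231),( 0, 0, 1,297),( 0, 0, 0,300),( 0, 0, 0,296),
        ( 0, 0, 0,301)]
df = pd.DataFrame(runs, columns=["t", "P", "M", "y"])  # y: e.g., TPC response

# Full quadratic response-surface model with two-way interactions.
fit = smf.ols("y ~ (t + P + M)**2 + I(t**2) + I(P**2) + I(M**2)", df).fit()
print(fit.params.round(2))
print(fit.pvalues.round(3))   # t and M should dominate by construction
```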
Sub-Shot Noise Power Source for Microelectronics
NASA Technical Reports Server (NTRS)
Strekalov, Dmitry V.; Yu, Nan; Mansour, Kamjou
2011-01-01
Low-current, high-impedance microelectronic devices can be affected by electric-current shot noise more than by Nyquist noise, even at room temperature. An approach to implementing a sub-shot-noise current source for powering such devices is based on direct conversion of amplitude-squeezed light to photocurrent. The phenomenon of optical squeezing allows optical measurements below the fundamental shot noise limit, which would be impossible in the domain of classical optics. This becomes possible by affecting the statistical properties of photons in an optical mode, which can be considered a form of information encoding. Once encoded, the information describing the statistics of the photons (or of any other elementary excitations) can also be transmitted. In fact, it is such information transduction from optics to an electronic circuit, via the photoelectric effect, that has allowed the observation of optical squeezing. It is very difficult, if not technically impossible, to directly measure the statistical distribution of optical photons except at extremely low light levels. The photoelectric current, on the other hand, can easily be analyzed using RF spectrum analyzers. If the photocurrent noise generated by a light source under test is observed to fall below the shot noise limit (e.g., that produced by a coherent light beam), one concludes that the source possesses amplitude squeezing. The main novelty of this technology is to turn this well-known information transduction approach around. Instead of studying the statistical properties of an optical mode by measuring the photoelectron statistics, an amplitude-squeezed light source and a high-efficiency linear photodiode are used to generate photocurrent with sub-Poissonian electron statistics. By powering microelectronic devices with this current source, their performance can be improved, especially their noise parameters. Therefore, a room-temperature sub-shot-noise current source can be built that will benefit a very broad range of low-power, low-noise electronic instruments and applications, both cryogenic and room-temperature. Taking advantage of recent demonstrations of squeezed light sources based on optical micro-disks, this sub-shot-noise current source can be made compatible with the size and power requirements of the electronic devices it will support.
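For scale, a worked example of the shot-noise floor that a sub-Poissonian photocurrent undercuts (the 1 µA photocurrent is an arbitrary example value):

```python
# Shot-noise current spectral density S_I = 2*q*I for a Poissonian source.
q = 1.602e-19            # electron charge, C
I = 1e-6                 # example photocurrent, A (hypothetical)
S_I = 2 * q * I          # A^2/Hz
print(S_I ** 0.5)        # ~5.7e-13 A/sqrt(Hz): the floor squeezed light beats
```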
Michiels, Bart; Heyvaert, Mieke; Onghena, Patrick
2018-04-01
The conditional power (CP) of the randomization test (RT) was investigated in a simulation study in which three different single-case effect size (ES) measures were used as the test statistics: the mean difference (MD), the percentage of nonoverlapping data (PND), and the nonoverlap of all pairs (NAP). Furthermore, we studied the effect of the experimental design on the RT's CP for three different single-case designs with rapid treatment alternation: the completely randomized design (CRD), the randomized block design (RBD), and the restricted randomized alternation design (RRAD). As a third goal, we evaluated the CP of the RT for three types of simulated data: data generated from a standard normal distribution, data generated from a uniform distribution, and data generated from a first-order autoregressive Gaussian process. The results showed that the MD and NAP perform very similarly in terms of CP, whereas the PND performs substantially worse. Furthermore, the RRAD yielded marginally higher power in the RT, followed by the CRD and then the RBD. Finally, the power of the RT was almost unaffected by the type of the simulated data. On the basis of the results of the simulation study, we recommend at least 20 measurement occasions for single-case designs with a randomized treatment order that are to be evaluated with an RT using a 5% significance level. Furthermore, we do not recommend use of the PND, because of its low power in the RT.
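A minimal randomization test for a completely randomized single-case design (CRD) with the mean-difference statistic, enumerating all equal-split assignments; the 20-occasion series below is simulated, in line with the abstract's recommendation:

```python
import numpy as np
from itertools import combinations

def randomization_test_crd(scores, labels):
    """Randomization test for a single-case CRD: p-value of the observed
    mean difference (B - A) over all equal-split assignments."""
    scores = np.asarray(scores, float)
    n = len(scores)
    idx_b = [i for i, l in enumerate(labels) if l == "B"]
    obs = scores[idx_b].mean() - np.delete(scores, idx_b).mean()
    count = total = 0
    for comb in combinations(range(n), len(idx_b)):
        mask = np.zeros(n, bool)
        mask[list(comb)] = True
        count += (scores[mask].mean() - scores[~mask].mean()) >= obs
        total += 1
    return count / total

# 20 measurement occasions with a randomized treatment order (simulated).
rng = np.random.default_rng(1)
labels = list("ABABBAABABBABAABABBA")
scores = rng.normal(size=20) + 0.8 * np.array([l == "B" for l in labels])
print(randomization_test_crd(scores, labels))
```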
Federal Statistics (FedStats) offers the full range of official statistical information available to the public from the Federal Government. It uses the Internet's powerful linking and searching capabilities to track economic and population trends, education, health care costs, a...
On the fractal characterization of Paretian Poisson processes
NASA Astrophysics Data System (ADS)
Eliazar, Iddo I.; Sokolov, Igor M.
2012-06-01
Paretian Poisson processes are Poisson processes which are defined on the positive half-line, have maximal points, and are quantified by power-law intensities. Paretian Poisson processes are elemental in statistical physics, and are the bedrock of a host of power-law statistics ranging from Pareto's law to anomalous diffusion. In this paper we establish evenness-based fractal characterizations of Paretian Poisson processes. Considering an array of socioeconomic evenness-based measures of statistical heterogeneity, we show that, amongst the realm of Poisson processes that are defined on the positive half-line and have maximal points, Paretian Poisson processes are the unique class of 'fractal processes' exhibiting scale-invariance. The results established in this paper are diametric to previous results asserting that the scale-invariance of Poisson processes, with respect to physical randomness-based measures of statistical heterogeneity, is characterized by exponential Poissonian intensities.
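As a hedged aside, a standard extreme-value computation consistent with the abstract's setup (not a result quoted from the paper): a Poisson process on the positive half-line with power-law intensity λ(x) = a x^{-1-α} has finitely many points above any level, and its maximal point M follows a Fréchet law with a Pareto-type tail:

$$
\Lambda(x)=\int_x^{\infty} a\,u^{-1-\alpha}\,du=\frac{a}{\alpha}\,x^{-\alpha},\qquad
\Pr(M\le x)=e^{-\Lambda(x)}=\exp\!\Big(-\frac{a}{\alpha}\,x^{-\alpha}\Big),\qquad
\Pr(M>x)\sim\frac{a}{\alpha}\,x^{-\alpha}\ \ (x\to\infty).
$$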
Wang, D Z; Wang, C; Shen, C F; Zhang, Y; Zhang, H; Song, G D; Xue, X D; Xu, Z L; Zhang, S; Jiang, G H
2017-05-10
We described the time trend of the acute myocardial infarction (AMI) incidence rate in Tianjin from 1999 to 2013 with the Cochran-Armitage trend (CAT) test and linear regression analysis, and compared the results. Based on the actual population, the CAT test had much stronger statistical power than linear regression analysis for both the overall incidence trend and the age-specific incidence trends (Cochran-Armitage trend P value
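The entry is truncated in the source, but the CAT test it applies is standard; a sketch with hypothetical yearly counts (scores default to equally spaced years):

```python
import numpy as np
from scipy.stats import norm

def cochran_armitage_trend(events, totals, scores=None):
    """Cochran-Armitage trend test for proportions across ordered groups
    (e.g., yearly AMI cases over persons at risk). Returns Z and 2-sided p."""
    r = np.asarray(events, float)
    n = np.asarray(totals, float)
    t = np.arange(len(r), dtype=float) if scores is None else np.asarray(scores, float)
    pbar = r.sum() / n.sum()
    T = t @ (r - n * pbar)
    varT = pbar * (1 - pbar) * (n @ t**2 - (n @ t) ** 2 / n.sum())
    z = T / np.sqrt(varT)
    return z, 2 * norm.sf(abs(z))

# Hypothetical yearly AMI counts over a constant population at risk.
events = [310, 325, 332, 351, 360, 372, 385, 391, 404, 410, 422, 431, 440, 452, 466]
totals = [1_000_000] * 15
print(cochran_armitage_trend(events, totals))
```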