DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation because of the statistical complexity of finding a radiated power probability density function.
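As a rough illustration of the spectral model described above, the sketch below simulates a two-state Markov-chain bit stream and forms the windowed spectrum as a superposition of individual pulse spectra. This is not the paper's implementation; the transition probabilities, bit rate, and rectangular NRZ pulse are hypothetical choices.

```python
# Hypothetical parameters throughout; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
p01, p10 = 0.3, 0.4            # transition probabilities 0->1 and 1->0
n_bits, spb = 256, 16          # bits and samples per bit

bits = np.zeros(n_bits, dtype=int)
for k in range(1, n_bits):     # two-state Markov chain
    p_one = p01 if bits[k - 1] == 0 else 1 - p10
    bits[k] = rng.random() < p_one

n_total = n_bits * spb
window = np.hanning(n_total)
spectrum = np.zeros(n_total, dtype=complex)
for k, b in enumerate(bits):   # superpose spectra of individual NRZ pulses;
    if b:                      # by linearity this equals the FFT of the
        x = np.zeros(n_total)  # whole windowed waveform
        x[k * spb:(k + 1) * spb] = 1.0
        spectrum += np.fft.fft(x * window)

power_spectrum = np.abs(spectrum) ** 2 / n_total
print(power_spectrum[:5])
```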
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
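As a minimal sketch of the kind of power computation the paper demonstrates with SAS programs, the following computes power and required group size for a two-sample t-test; the effect size and significance level here are hypothetical.

```python
# Two-sample t-test power and sample size; hypothetical inputs.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=20, alpha=0.05, ratio=1.0)
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"power with n=20/group: {power:.2f}; n/group for 80% power: {n_per_group:.1f}")
```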
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
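For readers unfamiliar with LOD scores, the toy sketch below maximizes a LOD score over a grid of recombination fractions for a simple backcross. This is a far simpler setting than the multi-parameter maximization studied in the paper, and the counts are hypothetical.

```python
# LOD(theta) = log10[ theta^k (1-theta)^(n-k) / 0.5^n ] for a backcross with
# n informative meioses and k observed recombinants; hypothetical counts.
import numpy as np

n, k = 100, 30
theta = np.linspace(0.001, 0.499, 500)
lod = k * np.log10(theta) + (n - k) * np.log10(1 - theta) - n * np.log10(0.5)
print(f"max LOD = {lod.max():.2f} at theta = {theta[lod.argmax()]:.3f}")
```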
Statistical Power of Psychological Research: What Have We Gained in 20 Years?
ERIC Educational Resources Information Center
Rossi, Joseph S.
1990-01-01
Calculated power for 6,155 statistical tests in 221 journal articles published in 1982 volumes of "Journal of Abnormal Psychology," "Journal of Consulting and Clinical Psychology," and "Journal of Personality and Social Psychology." Power to detect small, medium, and large effects was .17, .57, and .83, respectively. Concluded that power of…
Effective field theory of statistical anisotropies for primordial bispectrum and gravitational waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rostami, Tahereh; Karami, Asieh; Firouzjahi, Hassan, E-mail: t.rostami@ipm.ir, E-mail: karami@ipm.ir, E-mail: firouz@ipm.ir
2017-06-01
We present effective field theory (EFT) studies of primordial statistical anisotropies in models of anisotropic inflation. The general action in unitary gauge is presented to calculate the leading interactions between the gauge field fluctuations, the curvature perturbations and the tensor perturbations. The anisotropies in the scalar power spectrum and bispectrum are calculated and the dependence of these anisotropies on the EFT couplings is presented. In addition, we calculate the statistical anisotropy in the tensor power spectrum and the scalar-tensor cross correlation. Our EFT approach incorporates anisotropies generated in models with a non-trivial speed for the gauge field fluctuations and a sound speed for scalar perturbations, such as in DBI inflation.
Experimental Design and Power Calculation for RNA-seq Experiments.
Wu, Zhijin; Wu, Hao
2016-01-01
Power calculation is a critical component of RNA-seq experimental design. The flexibility of RNA-seq experiments and the wide dynamic range of transcription they measure make RNA-seq an attractive technology for whole transcriptome analysis. These features, in addition to the high dimensionality of RNA-seq data, bring complexity to experimental design, making an analytical power calculation no longer realistic. In this chapter we review the major factors that influence the statistical power of detecting differential expression, and give examples of power assessment using the R package PROPER.
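PROPER is an R package; the hedged Python sketch below conveys the same simulation-based idea for a single gene: draw negative binomial counts for two groups and take the rejection rate of a test on log counts as the power estimate. Mean count, fold change, and dispersion are hypothetical, and a t-test on log counts stands in for a full differential-expression pipeline.

```python
# Simulation-based power for one gene under a negative binomial count model.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
mu_ctrl, fold_change, dispersion = 100, 2.0, 0.2
r = 1 / dispersion                        # negative binomial size parameter
n_per_group, n_sim, alpha = 5, 2000, 0.05

def counts(mu, size):
    return rng.negative_binomial(r, r / (r + mu), size=size)

hits = 0
for _ in range(n_sim):
    ctrl = np.log2(counts(mu_ctrl, n_per_group) + 1.0)
    trt = np.log2(counts(mu_ctrl * fold_change, n_per_group) + 1.0)
    hits += ttest_ind(ctrl, trt).pvalue < alpha
print(f"estimated power at this depth/dispersion: {hits / n_sim:.2f}")
```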
The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2010-01-01
This study uses simulation techniques to examine the statistical power of the group-randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices
Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.
2014-01-01
Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for GLIMMPSE Lite, a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA). Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
ERIC Educational Resources Information Center
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben
2016-01-01
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Statistical power calculations for mixed pharmacokinetic study designs using a population approach.
Kloprogge, Frank; Simpson, Julie A; Day, Nicholas P J; White, Nicholas J; Tarning, Joel
2014-09-01
Simultaneous modelling of dense and sparse pharmacokinetic data is possible with a population approach. To determine the number of individuals required to detect the effect of a covariate, simulation-based power calculation methodologies can be employed. The Monte Carlo Mapped Power method (a simulation-based power calculation methodology using the likelihood ratio test) was extended in the current study to perform sample size calculations for mixed pharmacokinetic studies (i.e. both sparse and dense data collection). A workflow guiding an easy and straightforward pharmacokinetic study design, considering also the cost-effectiveness of alternative study designs, was used in this analysis. Initially, data were simulated for a hypothetical drug and then for the anti-malarial drug, dihydroartemisinin. Two datasets (sampling design A: dense; sampling design B: sparse) were simulated using a pharmacokinetic model that included a binary covariate effect and subsequently re-estimated using (1) the same model and (2) a model not including the covariate effect in NONMEM 7.2. Power calculations were performed for varying numbers of patients with sampling designs A and B. Study designs with statistical power >80% were selected and further evaluated for cost-effectiveness. The simulation studies of the hypothetical drug and the anti-malarial drug dihydroartemisinin demonstrated that the simulation-based power calculation methodology, based on the Monte Carlo Mapped Power method, can be utilised to evaluate and determine the sample size of mixed (part sparsely and part densely sampled) study designs. The developed method can contribute to the design of robust and efficient pharmacokinetic studies.
Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions
ERIC Educational Resources Information Center
Yormaz, Seha; Sünbül, Önder
2017-01-01
This study aims to determine the Type I error rates and power of the S1 and S2 indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how the way copying groups are created to calculate the kappa statistic affects its Type I error rates and power. In this study,…
ERIC Educational Resources Information Center
Texeira, Antonio; Rosa, Alvaro; Calapez, Teresa
2009-01-01
This article presents statistical power analysis (SPA) based on the normal distribution using Excel, adopting textbook and SPA approaches. The objective is to present the latter in a comparative way within a framework that is familiar to textbook level readers, as a first step to understand SPA with other distributions. The analysis focuses on the…
NASA Astrophysics Data System (ADS)
Kumar, Jagadish; Ananthakrishna, G.
2018-01-01
Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands, with the exponent values increasing with increasing strain rate. The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands, with the spread decreasing for types B and A. We further show that the acoustic emission signals associated with the Lüders-like band also exhibit a power-law distribution and multifractality.
Sim, Julius; Lewis, Martyn
2012-03-01
To investigate methods to determine the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations based on confidence intervals (CIs) for the sample standard deviation (SD). Based on CIs for the sample SD, methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
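A minimal sketch of the first adjustment described above, assuming a one-sided upper confidence limit for the SD based on the chi-square distribution of the pilot sample variance; the pilot size, observed SD, and target difference are hypothetical.

```python
# Inflate a pilot SD to its upper confidence limit, then size the main trial
# with the standard two-sample normal approximation. Hypothetical inputs.
import math
from scipy.stats import chi2, norm

n_pilot, sd_obs, gamma = 30, 10.0, 0.80   # pilot size, pilot SD, confidence
delta, alpha, power = 5.0, 0.05, 0.80     # target difference, alpha, power

sd_ucl = sd_obs * math.sqrt((n_pilot - 1) / chi2.ppf(1 - gamma, n_pilot - 1))
z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
n_per_arm = 2 * (z * sd_ucl / delta) ** 2
print(f"adjusted SD: {sd_ucl:.2f}; n per arm: {math.ceil(n_per_arm)}")
```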
Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the strength of the treatment-selection process increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
Influence of nonlinear effects on statistical properties of the radiation from SASE FEL
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-02-01
The paper presents an analysis of the statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free-electron laser operating in the nonlinear mode. The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, the distribution of the fluctuations of the instantaneous radiation power, the distribution of the energy in the electron bunch, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. It has been observed that the statistics of the instantaneous radiation power from a SASE FEL operating in the nonlinear regime change significantly with respect to the linear regime. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY.
Coupling strength assumption in statistical energy analysis
Lafont, T.; Totaro, N.
2017-01-01
This paper is a discussion of the hypothesis of weak coupling in statistical energy analysis (SEA). The examples of coupled oscillators and statistical ensembles of coupled plates excited by broadband random forces are discussed. In each case, a reference calculation is compared with the SEA calculation. First, it is shown that the main SEA relation, the coupling power proportionality, is always valid for two oscillators irrespective of the coupling strength. But the case of three subsystems, consisting of oscillators or ensembles of plates, indicates that the coupling power proportionality fails when the coupling is strong. Strong coupling leads to non-zero indirect coupling loss factors and, sometimes, even to a reversal of the energy flow direction from low to high vibrational temperature. PMID:28484335
Gibson, Eli; Fenster, Aaron; Ward, Aaron D
2013-10-01
Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
Low power and type II errors in recent ophthalmology research.
Khan, Zainab; Milko, Jordan; Iqbal, Munir; Masri, Moness; Almeida, David R P
2016-10-01
To investigate the power of unpaired t tests in prospective, randomized controlled trials when these tests failed to detect a statistically significant difference and to determine the frequency of type II errors. Systematic review and meta-analysis. We examined all prospective, randomized controlled trials published between 2010 and 2012 in 4 major ophthalmology journals (Archives of Ophthalmology, British Journal of Ophthalmology, Ophthalmology, and American Journal of Ophthalmology). Studies that used unpaired t tests were included. Power was calculated using the number of subjects in each group, standard deviations, and α = 0.05. The difference between control and experimental means was set to be (1) 20% and (2) 50% of the absolute value of the control's initial conditions. Power and Precision version 4.0 software was used to carry out calculations. Finally, the proportion of articles with type II errors was calculated. β = 0.3 was set as the largest acceptable value for the probability of a type II error. In total, 280 articles were screened. The final analysis included 50 prospective, randomized controlled trials using unpaired t tests. The median power of tests to detect a 50% difference between means was 0.9 and was the same for all 4 journals regardless of the statistical significance of the test. The median power of tests to detect a 20% difference between means ranged from 0.26 to 0.9 for the 4 journals. The median power of these tests to detect a 50% and 20% difference between means was 0.9 and 0.5, respectively, for tests that did not achieve statistical significance. A total of 14% and 57% of articles with negative unpaired t tests contained results with β > 0.3 when power was calculated for differences between means of 50% and 20%, respectively. A large portion of studies demonstrate high probabilities of type II errors when detecting small differences between means. The power to detect small differences between means varies across journals. It is, therefore, worthwhile for authors to state the minimum clinically important difference for individual studies, and journals can consider publishing statistical guidelines for authors. Day-to-day clinical decisions rely heavily on the evidence base formed by the plethora of studies available to clinicians. Prospective, randomized controlled clinical trials are highly regarded as a robust study design and are used to make important clinical decisions that directly affect patient care. The quality of study designs and statistical methods in major clinical journals is improving over time [1], and researchers and journals are being more attentive to the statistical methodologies incorporated by studies. The results of well-designed ophthalmic studies with robust methodologies, therefore, have the ability to modify the ways in which diseases are managed. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
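The sketch below reproduces the style of calculation reported above using the noncentral t distribution directly; the group sizes, SD, and control mean are hypothetical stand-ins for values extracted from a published trial.

```python
# Two-sided power of an unpaired t-test for differences equal to 20% and 50%
# of the control mean; hypothetical summary statistics.
import numpy as np
from scipy.stats import nct, t

n1 = n2 = 25
sd, control_mean, alpha = 8.0, 20.0, 0.05

for frac in (0.20, 0.50):
    delta = frac * control_mean
    ncp = delta / (sd * np.sqrt(1 / n1 + 1 / n2))   # noncentrality parameter
    df = n1 + n2 - 2
    t_crit = t.ppf(1 - alpha / 2, df)
    power = 1 - nct.cdf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)
    print(f"power to detect a {frac:.0%} difference: {power:.2f}")
```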
Using R-Project for Free Statistical Analysis in Extension Research
ERIC Educational Resources Information Center
Mangiafico, Salvatore S.
2013-01-01
One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
Climate Considerations Of The Electricity Supply Systems In Industries
NASA Astrophysics Data System (ADS)
Asset, Khabdullin; Zauresh, Khabdullina
2014-12-01
The study focuses on the analysis of climate considerations of electricity supply systems in a pellet industry. The developed analysis model consists of two modules: a statistical module for evaluating active power losses and a climate aspects evaluation module. The statistical module is presented as a universal mathematical model of electrical systems and components of industrial load. It forms a basis for detailed accounting of power losses across voltage levels. On the basis of the universal model, a set of programs is designed to perform the calculations and experimental research. It helps to obtain the statistical characteristics of the power losses and loads of the electricity supply systems and to define the nature of changes in these characteristics. Within the module, several methods and algorithms are developed for calculating parameters of equivalent circuits of low- and high-voltage ADC and SD with a massive smooth rotor with laminated poles. The climate aspects module includes an analysis of the experimental data of the power supply system in pellet production. It allows identification of GHG emission reduction parameters: operation hours, type of electrical motors, values of load factor, and deviation of the standard value of voltage.
Power calculator for instrumental variable analysis in pharmacoepidemiology
Walker, Venexia M; Davies, Neil M; Windmeijer, Frank; Burgess, Stephen; Martin, Richard M
2017-01-01
Background: Instrumental variable analysis, for example with physicians' prescribing preferences as an instrument for medications issued in primary care, is an increasingly popular method in the field of pharmacoepidemiology. Existing power calculators for studies using instrumental variable analysis, such as Mendelian randomization power calculators, do not allow for the structure of research questions in this field. This is because the analysis in pharmacoepidemiology will typically have stronger instruments and detect larger causal effects than in other fields. Consequently, there is a need for dedicated power calculators for pharmacoepidemiological research. Methods and Results: The formula for calculating the power of a study using instrumental variable analysis in the context of pharmacoepidemiology is derived before being validated by a simulation study. The formula is applicable for studies using a single binary instrument to analyse the causal effect of a binary exposure on a continuous outcome. An online calculator, as well as packages in both R and Stata, are provided for the implementation of the formula by others. Conclusions: The statistical power of instrumental variable analysis in pharmacoepidemiological studies to detect a clinically meaningful treatment effect is an important consideration. Research questions in this field have distinct structures that must be accounted for when calculating power. The formula presented differs from existing instrumental variable power formulae due to its parametrization, which is designed specifically for ease of use by pharmacoepidemiologists. PMID:28575313
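The authors' closed-form formula is not reproduced here. Instead, a hedged simulation sketch estimates the power of a single-binary-instrument analysis by testing the reduced-form instrument-outcome association, which is asymptotically equivalent to testing the causal effect; all effect sizes and the confounding structure are hypothetical.

```python
# Empirical IV power via the reduced-form (instrument vs. outcome) test.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n, n_sim, alpha = 2000, 500, 0.05
p_z, gamma, beta = 0.5, 0.2, 0.3     # instrument freq, Z->X strength, causal effect

hits = 0
for _ in range(n_sim):
    z = rng.binomial(1, p_z, n)                        # binary instrument
    u = rng.normal(size=n)                             # unmeasured confounder
    x = (rng.random(n) < 0.2 + gamma * z + 0.2 * u.clip(0, 1)).astype(float)
    y = beta * x + u + rng.normal(size=n)              # continuous outcome
    hits += pearsonr(z, y)[1] < alpha                  # reduced-form test
print(f"empirical power: {hits / n_sim:.2f}")
```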
Imprints of magnetic power and helicity spectra on radio polarimetry statistics
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Enßlin, T. A.
2011-06-01
The statistical properties of turbulent magnetic fields in radio-synchrotron sources should be imprinted on the statistics of polarimetric observables. In search of these imprints, i.e. characteristic modifications of the polarimetry statistics caused by magnetic field properties, we calculate correlation and cross-correlation functions from a set of observables that contain total intensity I, polarized intensity P, and Faraday depth φ. The correlation functions are evaluated for all combinations of observables up to fourth order in magnetic field B. We derive these analytically as far as possible and from first principles using only some basic assumptions, such as Gaussian statistics for the underlying magnetic field in the observed region and statistical homogeneity. We further assume some simplifications to reduce the complexity of the calculations, because for a start we were interested in a proof of concept. Using this statistical approach, we show that it is possible to gain information about the helical part of the magnetic power spectrum via the correlation functions ⟨P(k⊥) φ(k′⊥) φ(k″⊥)⟩_B and ⟨I(k⊥) φ(k′⊥) φ(k″⊥)⟩_B. Using this insight, we construct an easy-to-use test for helicity called LITMUS (Local Inference Test for Magnetic fields which Uncovers heliceS), which gives a spectrally integrated measure of helicity. For now, all calculations are given in a Faraday-free case, but set up so that Faraday rotational effects can be included later.
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes. © 2017 S. Karger AG, Basel.
Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S
2014-12-02
Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
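A minimal sketch of the style of calculation CALADIS supports: propagating uncertainty through a back-of-the-envelope estimate by Monte Carlo rather than multiplying point estimates. The example quantities and their distributions are hypothetical, not values from BioNumbers.

```python
# Monte Carlo uncertainty propagation for a toy cell-biology estimate:
# intracellular concentration from molecule count and cell volume.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
molecules = rng.lognormal(mean=np.log(1e4), sigma=0.5, size=n)  # count/cell
volume_fl = rng.normal(loc=2000, scale=300, size=n)             # femtolitres

conc_nM = molecules / (6.022e23 * volume_fl * 1e-15) * 1e9      # mol/L -> nM
lo, med, hi = np.percentile(conc_nM, [2.5, 50, 97.5])
print(f"concentration ~ {med:.2f} nM (95% interval {lo:.2f}-{hi:.2f})")
```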
Kirgiz, Ahmet; Atalay, Kurşat; Kaldirim, Havva; Cabuk, Kubra Serefoglu; Akdemir, Mehmet Orcun; Taskapili, Muhittin
2017-08-01
The purpose of this study was to compare the keratometry (K) values obtained by a Scheimpflug camera combined with placido-disk corneal topography (Sirius) and optical biometry (Lenstar) for intraocular lens (IOL) power calculation before cataract surgery, and to evaluate the accuracy of postoperative refraction. 50 eyes of 40 patients were scheduled to have phacoemulsification with implantation of a posterior chamber intraocular lens. The IOL power was calculated using the SRK/T formula with the Lenstar K and the K readings from Sirius. Simulated K (SimK) and K at the 3-, 5-, and 7-mm zones from Sirius were compared with the Lenstar K readings. The accuracy of these parameters was determined by calculating the mean absolute error (MAE). The mean Lenstar K value was 44.05 diopters (D) ±1.93 (SD), and the SimK and K at the 3-, 5-, and 7-mm zones were 43.85 ± 1.91, 43.88 ± 1.9, 43.84 ± 1.9, and 43.66 ± 1.85 D, respectively. There was no statistically significant difference between the K readings (P = 0.901). When Lenstar was used for the corneal power measurements, the MAE was 0.42 ± 0.33 D; when the SimK of Sirius was used, it was 0.37 ± 0.32 D, and the lowest MAE (0.36 ± 0.32 D) was achieved with the 5-mm K measurement, although the difference was not statistically significant (P = 0.892). Of all the K readings of Sirius and Lenstar, the Sirius 5-mm zone K readings were the best in predicting a more precise IOL power. Corneal power measurements with the Scheimpflug camera combined with placido-disk corneal topography can be safely used for IOL power calculation.
Power calculation for overall hypothesis testing with high-dimensional commensurate outcomes.
Chi, Yueh-Yun; Gribbin, Matthew J; Johnson, Jacqueline L; Muller, Keith E
2014-02-28
The complexity of system biology means that any metabolic, genetic, or proteomic pathway typically includes so many components (e.g., molecules) that statistical methods specialized for overall testing of high-dimensional and commensurate outcomes are required. While many overall tests have been proposed, very few have power and sample size methods. We develop accurate power and sample size methods and software to facilitate study planning for high-dimensional pathway analysis. With an account of any complex correlation structure between high-dimensional outcomes, the new methods allow power calculation even when the sample size is less than the number of variables. We derive the exact (finite-sample) and approximate non-null distributions of the 'univariate' approach to repeated measures test statistic, as well as power-equivalent scenarios useful to generalize our numerical evaluations. Extensive simulations of group comparisons support the accuracy of the approximations even when the ratio of number of variables to sample size is large. We derive a minimum set of constants and parameters sufficient and practical for power calculation. Using the new methods and specifying the minimum set to determine power for a study of metabolic consequences of vitamin B6 deficiency helps illustrate the practical value of the new results. Free software implementing the power and sample size methods applies to a wide range of designs, including one group pre-intervention and post-intervention comparisons, multiple parallel group comparisons with one-way or factorial designs, and the adjustment and evaluation of covariate effects. Copyright © 2013 John Wiley & Sons, Ltd.
Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G
2011-10-01
In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
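The paper's methodology handles litter effects in detail; a common simplification, sketched below, deflates the effective sample size by the design effect 1 + (m − 1)·ICC before a standard normal-approximation power calculation. The litter size, intralitter correlation, and effect size are hypothetical, and this is not the authors' exact method.

```python
# Design-effect-adjusted power for a two-group comparison of pup outcomes.
import math
from scipy.stats import norm

n_control, n_treated = 80, 120   # dams across two blocks (40:60 of 200)
litter_m, icc = 10, 0.3          # pups per litter, intralitter correlation
delta_sd = 0.35                  # treatment effect in SD units (pup weight)
alpha = 0.05

deff = 1 + (litter_m - 1) * icc
n_eff = (n_control * n_treated) / (n_control + n_treated) * litter_m / deff
z = delta_sd * math.sqrt(n_eff) - norm.ppf(1 - alpha / 2)
print(f"approximate power: {norm.cdf(z):.2f}")
```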
The statistics of primordial density fluctuations
NASA Astrophysics Data System (ADS)
Barrow, John D.; Coles, Peter
1990-05-01
The statistical properties of the density fluctuations produced by power-law inflation are investigated. It is found that, even if the fluctuations present in the scalar field driving the inflation are Gaussian, the resulting density perturbations need not be, due to stochastic variations in the Hubble parameter. All the moments of the density fluctuations are calculated, and it is argued that, for realistic parameter choices, the departures from Gaussian statistics are small and would have a negligible effect on the large-scale structure produced in the model. On the other hand, the model predicts a power spectrum with spectral index n ≠ 1, and this could be good news for large-scale structure.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively smaller sample. In this paper, sample size estimation to compare two parallel-design arms for continuous data by a bootstrap procedure is presented for various test types (inequality, non-inferiority, superiority, and equivalence). Meanwhile, sample size calculation by mathematical formulas (normal distribution assumption) for the identical data is also carried out. Consequently, the power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. After that, we compared the powers determined using the two methods based on data that violate the normal distribution assumption. To accommodate the feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that by the bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation at the outset, using the same statistical method for each bootstrap sample as will be used in the subsequent statistical analysis, provided historical data are available that are well representative of the population to which the proposed trial plans to extrapolate.
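A compact sketch of the bootstrap procedure described above: resample pilot data at a candidate group size, apply the same test planned for the final analysis (here a Wilcoxon/Mann-Whitney test), and take the rejection rate as the power estimate. The pilot data below are simulated placeholders.

```python
# Bootstrap power curve over candidate per-arm sample sizes.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(4)
pilot_a = rng.exponential(scale=1.0, size=25)    # skewed pilot data, arm A
pilot_b = rng.exponential(scale=1.6, size=25)    # skewed pilot data, arm B

def bootstrap_power(n_per_arm, n_boot=2000, alpha=0.05):
    hits = 0
    for _ in range(n_boot):
        a = rng.choice(pilot_a, size=n_per_arm, replace=True)
        b = rng.choice(pilot_b, size=n_per_arm, replace=True)
        hits += mannwhitneyu(a, b).pvalue < alpha
    return hits / n_boot

for n in (30, 60, 90):
    print(f"n={n}/arm -> bootstrap power {bootstrap_power(n):.2f}")
```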
DOE Office of Scientific and Technical Information (OSTI.GOV)
St. Martin, Clara M.; Lundquist, Julie K.; Clifton, Andrew
2016-11-01
Using detailed upwind and nacelle-based measurements from a General Electric (GE) 1.5sle model with a 77 m rotor diameter, we calculate power curves and annual energy production (AEP) and explore their sensitivity to different atmospheric parameters to provide guidelines for the use of stability and turbulence filters in segregating power curves. The wind measurements upwind of the turbine include anemometers mounted on a 135 m meteorological tower as well as profiles from a lidar. We calculate power curves for different regimes based on turbulence parameters such as turbulence intensity (TI) as well as atmospheric stability parameters such as the bulk Richardson number (R_B). We also calculate AEP with and without these atmospheric filters and highlight differences between the results of these calculations. The power curves for different TI regimes reveal that increased TI undermines power production at wind speeds near rated, but TI increases power production at lower wind speeds at this site, the US Department of Energy (DOE) National Wind Technology Center (NWTC). Similarly, power curves for different R_B regimes reveal that periods of stable conditions produce more power at wind speeds near rated and periods of unstable conditions produce more power at lower wind speeds. AEP results suggest that calculations without filtering for these atmospheric regimes may overestimate the AEP. Because of statistically significant differences between power curves and AEP calculated with these turbulence and stability filters for this turbine at this site, we suggest implementing an additional step in analyzing power performance data to incorporate effects of atmospheric stability and turbulence across the rotor disk.
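A schematic of the segregation procedure (not the authors' pipeline): bin records by turbulence intensity, form a power curve per bin, and weight each curve by a Rayleigh wind-speed distribution to compare AEP. The toy power model, TI cutoff, and mean wind speed are assumptions.

```python
# Segregate a toy power curve by TI and compare Rayleigh-weighted AEP.
import numpy as np
from scipy.stats import rayleigh

rng = np.random.default_rng(6)
ws = rng.uniform(3, 20, 5000)                   # 10-min mean wind speed, m/s
ti = rng.uniform(0.02, 0.3, 5000)               # turbulence intensity
power = 1500 * np.clip((ws - 3) / 9, 0, 1) ** 3 * (1 - 0.3 * ti)  # kW, toy

edges = np.arange(3, 21.0)                      # 1 m/s wind-speed bins
centers = edges[:-1] + 0.5
prob = rayleigh.pdf(centers, scale=7.5 * np.sqrt(2 / np.pi))
prob /= prob.sum()                              # approximate bin probabilities

for label, sel in (("low TI", ti < 0.1), ("high TI", ti >= 0.1)):
    idx = np.digitize(ws[sel], edges) - 1
    curve = np.array([power[sel][idx == b].mean() for b in range(len(centers))])
    aep_mwh = np.nansum(curve * prob) * 8760 / 1000
    print(f"{label}: AEP ~ {aep_mwh:.0f} MWh")
```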
A study of the feasibility of statistical analysis of airport performance simulation
NASA Technical Reports Server (NTRS)
Myers, R. H.
1982-01-01
The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte-Carlo techniques.
Electromagnetic wave scattering from rough terrain
NASA Astrophysics Data System (ADS)
Papa, R. J.; Lennon, J. F.; Taylor, R. L.
1980-09-01
This report presents two aspects of a program designed to calculate electromagnetic scattering from rough terrain: (1) the use of statistical estimation techniques to determine topographic parameters and (2) the results of a single-roughness-scale scattering calculation based on those parameters, including comparison with experimental data. In the statistical part of the present calculation, digitized topographic maps are used to generate data bases for the required scattering cells. The application of estimation theory to the data leads to the specification of statistical parameters for each cell. The estimated parameters are then used in a hypothesis test to decide on a probability density function (PDF) that represents the height distribution in the cell. Initially, the formulation uses a single observation of the multivariate data. A subsequent approach involves multiple observations of the heights on a bivariate basis, and further refinements are being considered. The electromagnetic scattering analysis, the second topic, calculates the amount of specular and diffuse multipath power reaching a monopulse receiver from a pulsed beacon positioned over a rough Earth. The program allows for spatial inhomogeneities and multiple specular reflection points. The analysis of shadowing by the rough surface has been extended to the case where the surface heights are distributed exponentially. The calculated loss of boresight pointing accuracy attributable to diffuse multipath is then compared with the experimental results. The extent of the specular region, the use of localized height variations, and the effect of the azimuthal variation in power pattern are all assessed.
A statistical model of the wave field in a bounded domain
NASA Astrophysics Data System (ADS)
Hellsten, T.
2017-02-01
Numerical simulations of plasma heating with radiofrequency waves often require repetitive calculations of wave fields as the plasma evolves. To enable effective simulations, benchmarked formulas of the power deposition have been developed. Here, a statistical model applicable to waves with short wavelengths is presented, which gives the expected amplitude of the wave field as a superposition of four wave fields with weight coefficients depending on the single-pass damping, a_s. The weight coefficient for the wave field coherent with that calculated in the absence of reflection agrees with the coefficient for strong single-pass damping of an earlier developed heuristic model, for which the weight coefficients were obtained empirically using a full wave code to calculate the wave field and power deposition. Antennas launching electromagnetic waves into bounded domains are often designed to produce localised wave fields and power depositions in the limit of strong single-pass damping. The reflection of the waves changes the coupling and partly destroys the localisation of the wave field, which explains the apparent paradox arising from the earlier developed heuristic formula that only a fraction a_s²(2 − a_s), and not a_s, of the power is absorbed with a profile corresponding to the power deposition for the first pass of the rays. A method to account for the change in the coupling spectrum caused by reflection for modelling the wave field with ray tracing in bounded media is proposed, which should be applicable to wave propagation in non-uniform media in more general geometries.
Statistical aspects of quantitative real-time PCR experiment design.
Kitchen, Robert R; Kubista, Mikael; Tichopad, Ales
2010-04-01
Experiments using quantitative real-time PCR to test hypotheses are limited by technical and biological variability; we seek to minimise sources of confounding variability through optimum use of biological and technical replicates. The quality of an experiment design is commonly assessed by calculating its prospective power. Such calculations rely on knowledge of the expected variances of the measurements of each group of samples and the magnitude of the treatment effect; the estimation of which is often uninformed and unreliable. Here we introduce a method that exploits a small pilot study to estimate the biological and technical variances in order to improve the design of a subsequent large experiment. We measure the variance contributions at several 'levels' of the experiment design and provide a means of using this information to predict both the total variance and the prospective power of the assay. A validation of the method is provided through a variance analysis of representative genes in several bovine tissue-types. We also discuss the effect of normalisation to a reference gene in terms of the measured variance components of the gene of interest. Finally, we describe a software implementation of these methods, powerNest, which gives the user the opportunity to input data from a pilot study and interactively modify the design of the assay. The software automatically calculates expected variances, statistical power, and optimal design of the larger experiment. powerNest enables the researcher to minimise the total confounding variance and maximise prospective power for a specified maximum cost for the large study. Copyright 2010 Elsevier Inc. All rights reserved.
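The variance bookkeeping behind this kind of nested design can be sketched compactly: with pilot estimates of the biological and technical variance components, the variance of a group mean is s²_bio/n_bio + s²_tech/(n_bio·n_tech), which feeds a normal-approximation power calculation. This is an assumed simplification, not powerNest's exact algorithm, and all numbers are hypothetical.

```python
# Power for a two-group qPCR comparison under a nested replicate design.
import math
from scipy.stats import norm

s2_bio, s2_tech = 0.40, 0.10      # pilot variance components (Cq^2 units)
delta, alpha = 0.8, 0.05          # target group difference in Cq, alpha

def power(n_bio, n_tech):
    var_mean = s2_bio / n_bio + s2_tech / (n_bio * n_tech)
    se_diff = math.sqrt(2 * var_mean)
    return norm.cdf(delta / se_diff - norm.ppf(1 - alpha / 2))

print(f"6 animals x 2 replicates: {power(6, 2):.2f}")
print(f"4 animals x 3 replicates: {power(4, 3):.2f}")
```

Comparing the two designs shows the familiar result that, when biological variance dominates, adding animals helps more than adding technical replicates.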
Conroy, M.J.; Samuel, M.D.; White, Joanne C.
1995-01-01
Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject H0 should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of H0 is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
Statistical power analysis of cardiovascular safety pharmacology studies in conscious rats.
Bhatt, Siddhartha; Li, Dingzhou; Flynn, Declan; Wisialowski, Todd; Hemkens, Michelle; Steidl-Nichols, Jill
2016-01-01
Cardiovascular (CV) toxicity and related attrition are a major challenge for novel therapeutic entities and identifying CV liability early is critical for effective derisking. CV safety pharmacology studies in rats are a valuable tool for early investigation of CV risk. Thorough understanding of data analysis techniques and statistical power of these studies is currently lacking and is imperative for enabling sound decision-making. Data from 24 crossover and 12 parallel design CV telemetry rat studies were used for statistical power calculations. Average values of telemetry parameters (heart rate, blood pressure, body temperature, and activity) were logged every 60 s (from 1 h predose to 24 h post-dose) and reduced to 15-min mean values. These data were subsequently binned into super intervals for statistical analysis. A repeated measures analysis of variance was used for statistical analysis of crossover studies and a repeated measures analysis of covariance was used for parallel studies. Statistical power analysis was performed to generate power curves and establish relationships between detectable CV (blood pressure and heart rate) changes and statistical power. Additionally, data from a crossover CV study with phentolamine at 4, 20, and 100 mg/kg are reported as a representative example of data analysis methods. Phentolamine produced a CV profile characteristic of alpha adrenergic receptor antagonism, evidenced by a dose-dependent decrease in blood pressure and reflex tachycardia. Detectable blood pressure changes at 80% statistical power for crossover studies (n=8) were 4-5 mmHg. For parallel studies (n=8), detectable changes at 80% power were 6-7 mmHg. Detectable heart rate changes for both study designs were 20-22 bpm. Based on our results, the conscious rat CV model is a sensitive tool to detect and mitigate CV risk in early safety studies. Furthermore, these results will enable informed selection of appropriate models and study design for early stage CV studies. Copyright © 2016 Elsevier Inc. All rights reserved.
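For a rough sense of the detectable-change numbers quoted above, a paired-test approximation can be inverted with statsmodels (the study itself used repeated-measures ANOVA, so this is only an analogy); sd_diff, the standard deviation of within-animal treatment-control differences in mmHg, is an assumed input.

from statsmodels.stats.power import TTestPower

def detectable_change(sd_diff, n, power=0.80, alpha=0.05):
    # solve for the standardized effect size, then rescale to raw units
    es = TTestPower().solve_power(effect_size=None, nobs=n,
                                  alpha=alpha, power=power)
    return es * sd_diff

print(detectable_change(sd_diff=4.0, n=8))   # ~4.6 mmHg when sd_diff is about 4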
[A Review on the Use of Effect Size in Nursing Research].
Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae
2015-10-01
The purpose of this study was to introduce the main concepts of statistical testing and effect size, and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size were explained. Some formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing G*Power 3, the most widely used software for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
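A short example of the two steps the review walks through, computing Cohen's d from group summaries and then solving for the per-group sample size at 80% power, much as G*Power 3 would report; the summary statistics are invented for illustration.

from statsmodels.stats.power import TTestIndPower

m1, m2, sd1, sd2 = 52.0, 48.0, 10.0, 9.0
sd_pooled = ((sd1**2 + sd2**2) / 2) ** 0.5
d = (m1 - m2) / sd_pooled                    # Cohen's d

n_per_group = TTestIndPower().solve_power(effect_size=d, alpha=0.05,
                                          power=0.80, ratio=1.0)
print(round(d, 2), round(n_per_group))       # d of about 0.42 needs ~90 per group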
Rolland, Y; Bézy-Wendling, J; Duvauferrier, R; Coatrieux, J L
1999-03-01
To demonstrate the usefulness of a model of parenchymal vascularization for evaluating texture analysis methods. Slices with thickness varying from 1 to 4 mm were reformatted from a 3D vascular model corresponding to either normal tissue perfusion or local hypervascularization. Parameters of statistical methods were measured on 16 regions of interest of 128 x 128 pixels, and mean values and standard deviations were calculated. For each parameter, the performance (discriminating power and stability) was evaluated. Among the 11 calculated statistical parameters, three (homogeneity, entropy, mean of gradients) were found to have good discriminating power to differentiate normal perfusion from hypervascularization, but only the gradient mean was found to have good stability with respect to slice thickness. Five parameters (run percentage, run length distribution, long run emphasis, contrast, and gray level distribution) gave intermediate results. Of the remaining three, kurtosis and correlation were found to have little discriminating power, and skewness none. This 3D vascular model, which allows the generation of various examples of vascular textures, is a powerful tool for assessing the performance of texture analysis methods. It improves our knowledge of the methods and should contribute to their a priori choice when designing clinical studies.
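Two of the statistical texture parameters named above, homogeneity and entropy, can be computed from a grey-level co-occurrence matrix; this sketch uses scikit-image (version 0.19 or later, where the functions are spelled graycomatrix/graycoprops) on a random stand-in region of interest.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

roi = np.random.randint(0, 64, (128, 128), dtype=np.uint8)  # stand-in ROI
glcm = graycomatrix(roi, distances=[1], angles=[0], levels=64,
                    symmetric=True, normed=True)
homogeneity = graycoprops(glcm, 'homogeneity')[0, 0]
p = glcm[:, :, 0, 0]
entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))             # GLCM entropy
print(homogeneity, entropy)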
Ensor, Joie; Burke, Danielle L; Snell, Kym I E; Hemming, Karla; Riley, Richard D
2018-05-18
Researchers and funders should consider the statistical power of planned Individual Participant Data (IPD) meta-analysis projects, as they are often time-consuming and costly. We propose simulation-based power calculations utilising a two-stage framework, and illustrate the approach for a planned IPD meta-analysis of randomised trials with continuous outcomes where the aim is to identify treatment-covariate interactions. The simulation approach has four steps: (i) specify an underlying (data generating) statistical model for trials in the IPD meta-analysis; (ii) use readily available information (e.g. from publications) and prior knowledge (e.g. number of studies promising IPD) to specify model parameter values (e.g. control group mean, intervention effect, treatment-covariate interaction); (iii) simulate an IPD meta-analysis dataset of a particular size from the model, and apply a two-stage IPD meta-analysis to obtain the summary estimate of interest (e.g. interaction effect) and its associated p-value; (iv) repeat the previous step (e.g. thousands of times), then estimate the power to detect a genuine effect by the proportion of summary estimates with a significant p-value. In a planned IPD meta-analysis of lifestyle interventions to reduce weight gain in pregnancy, 14 trials (1183 patients) promised their IPD to examine a treatment-BMI interaction (i.e. whether baseline BMI modifies intervention effect on weight gain). Using our simulation-based approach, a two-stage IPD meta-analysis has < 60% power to detect a reduction of 1 kg weight gain for a 10-unit increase in BMI. Additional IPD from ten other published trials (containing 1761 patients) would improve power to over 80%, but only if a fixed-effect meta-analysis was appropriate. Pre-specified adjustment for prognostic factors would increase power further. Incorrect dichotomisation of BMI would reduce power by over 20%, similar to immediately throwing away IPD from ten trials. Simulation-based power calculations could inform the planning and funding of IPD projects, and should be used routinely.
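The four-step recipe compresses into a short Monte Carlo loop; the sketch below simulates a two-stage IPD meta-analysis of a treatment-covariate interaction with fixed-effect pooling, using invented parameter values rather than those of the pregnancy example.

import numpy as np

rng = np.random.default_rng(1)

def one_sim(k=14, n=85, gamma=-0.1, sd=4.0):
    est, w = [], []
    for _ in range(k):                        # stage 1: per-trial regression
        treat = rng.integers(0, 2, n)
        bmi = rng.normal(25, 4, n)
        y = 10 - 1.0 * treat + gamma * treat * (bmi - 25) + rng.normal(0, sd, n)
        X = np.column_stack([np.ones(n), treat, bmi - 25, treat * (bmi - 25)])
        b, res = np.linalg.lstsq(X, y, rcond=None)[:2]
        sigma2 = res[0] / (n - 4)
        var_b = sigma2 * np.linalg.inv(X.T @ X)[3, 3]
        est.append(b[3]); w.append(1 / var_b)
    pooled = np.average(est, weights=w)       # stage 2: fixed-effect pooling
    se = (1 / np.sum(w)) ** 0.5
    return abs(pooled / se) > 1.96            # significant at the 5% level?

print(np.mean([one_sim() for _ in range(500)]))   # estimated power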
Statistics of Fractionalized Excitations through Threshold Spectroscopy.
Morampudi, Siddhardh C; Turner, Ari M; Pollmann, Frank; Wilczek, Frank
2017-06-02
We show that neutral anyonic excitations have a signature in spectroscopic measurements of materials: The low-energy onset of spectral functions near the threshold follows universal power laws with an exponent that depends only on the statistics of the anyons. This provides a route, using experimental techniques such as neutron scattering and tunneling spectroscopy, for detecting anyonic statistics in topologically ordered states such as gapped quantum spin liquids and hypothesized fractional Chern insulators. Our calculations also explain some recent theoretical results in spin systems.
Marino, Michael J
2018-05-01
There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
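The arithmetic behind "how often should we expect to be wrong?" is a one-line application of Bayes' rule: the expected false discovery rate follows from alpha, power, and the prior probability that a tested hypothesis is true (the values below are illustrative).

def false_discovery_rate(alpha, power, p_true):
    false_pos = alpha * (1 - p_true)
    true_pos = power * p_true
    return false_pos / (false_pos + true_pos)

print(false_discovery_rate(0.05, 0.20, 0.25))  # low power: ~43% of discoveries false
print(false_discovery_rate(0.05, 0.80, 0.25))  # 80% power: ~16%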
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
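A toy version of one simulation scenario makes the conclusion concrete: the three estimators are applied to the same imbalanced two-arm trial, and their mean estimates reveal the direction of bias. The effect size, pretest-posttest correlation, and imbalance used here are arbitrary choices, not the paper's grid.

import numpy as np

rng = np.random.default_rng(7)
n, rho, effect, imbalance = 100, 0.5, 0.3, 0.3

def one_trial():
    g = np.repeat([0, 1], n)
    pre = rng.normal(0, 1, 2 * n) + imbalance * g        # baseline imbalance
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, 2 * n) + effect * g
    anova = post[g == 1].mean() - post[g == 0].mean()    # post-score only
    csa = (post - pre)[g == 1].mean() - (post - pre)[g == 0].mean()
    beta = np.cov(pre, post)[0, 1] / pre.var(ddof=1)     # crude pooled slope
    ancova = anova - beta * (pre[g == 1].mean() - pre[g == 0].mean())
    return anova, csa, ancova

ests = np.array([one_trial() for _ in range(2000)])
print(ests.mean(axis=0))  # ANOVA and CSA biased in opposite directions; ANCOVA near 0.3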
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
NASA Technical Reports Server (NTRS)
Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.
2016-01-01
Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of spacecraft, launch vehicles, aircraft, and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full-wave simulation results are used to validate the foundational model.
Charan, J; Saxena, D
2014-01-01
Biased negative studies not only reflect poor research effort but also have an impact on 'patient care', as they prevent further research with similar objectives, leaving potential research areas unexplored. Hence, published 'negative studies' should be methodologically strong. All parameters that may help a reader judge the validity of the results and conclusions should be reported in published negative studies. There is a paucity of data on the reporting of statistical and methodological parameters in negative studies published in Indian medical journals. The present systematic review was designed to critically evaluate negative studies published in prominent Indian medical journals for the reporting of statistical and methodological parameters. Systematic review. All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included. Investigators involved in the study evaluated all negative studies for the reporting of various parameters. Primary endpoints were the reporting of "power" and "confidence interval." Power was reported in 11.8% of studies. Confidence intervals were reported in 15.7% of studies. Most parameters, such as sample size calculation (13.2%), type of sampling method (50.8%), name of statistical tests (49.1%), adjustment for multiple endpoints (1%), and post hoc power calculation (2.1%), were reported poorly. The frequency of reporting was higher in clinical trials than in other study designs, and in journals with an impact factor greater than 1 than in journals with an impact factor less than 1. Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately, and this may create problems in the critical appraisal of the findings reported in these journals by their readers.
Ma, Li-Xin; Liu, Jian-Ping
2012-01-01
To investigate whether the power of the effect size was based on an adequate sample size in randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM). The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms like "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. A limitation of an intervention course ≥ 3 months was set in order to identify the information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included: 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported in 9% and 12% of the RCTs, respectively, with a sample size > 150 in each trial. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported in 31% and 36%, respectively, of the RCTs with a sample size > 150. For HbA1c, only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample size for statistical analysis was distressingly low and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, and choose a scientific and evidence-based study design and outcome measurements. At the same time, the required sample size should be calculated to ensure a precise research conclusion.
A generalized concept of power helped to choose optimal endpoints in clinical trials.
Borm, George F; van der Wilt, Gert J; Kremer, Jan A M; Zielhuis, Gerhard A
2007-04-01
A clinical trial may have multiple objectives. Sometimes the results for several parameters may need to be significant or meet certain other criteria. In such cases, it is important to evaluate the probability that all these objectives will be met, rather than the probability that each will be met. The purpose of this article is to introduce a definition of power that is tailored to handle this situation and that is helpful for the design of such trials. We introduce a generalized concept of power. It can handle complex situations, for example, in which there is a logical combination of partial objectives. These may be formulated not only in terms of statistical tests and of confidence intervals, but also in nonstatistical terms, such as "selecting the optimal dose." The power of a trial was calculated for various objectives and combinations of objectives. The generalized concept of power may lead to power calculations that closely match the objectives of the trial and contribute to choosing more efficient endpoints and designs.
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors in MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor objects among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for different MR images, and was used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on several brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
Testing the non-unity of rate ratio under inverse sampling.
Tang, Man-Lai; Liao, Yi Jie; Ng, Hong Keung Tony; Chan, Ping Shing
2007-08-01
Inverse sampling is considered to be a more appropriate sampling scheme than the usual binomial sampling scheme when subjects arrive sequentially, when the underlying response of interest is acute, and when maximum likelihood estimators of some epidemiologic indices are undefined. In this article, we study various statistics for testing non-unity rate ratios in case-control studies under inverse sampling. These include the Wald, unconditional score, likelihood ratio, and conditional score statistics. Three methods (the asymptotic, conditional exact, and mid-P methods) are adopted for P-value calculation. We evaluate the performance of different combinations of test statistics and P-value calculation methods in terms of their empirical sizes and powers via Monte Carlo simulation. In general, the asymptotic score and conditional score tests are preferable because their actual type I error rates are well controlled around the pre-chosen nominal level and their powers are comparatively the largest. The exact version of the Wald test is recommended if one wants to control the actual type I error rate at or below the pre-chosen nominal level. If larger power is desired and fluctuation of sizes around the pre-chosen nominal level is allowed, then the mid-P version of the Wald test is a desirable alternative. We illustrate the methodologies with a real example from a heart disease study. (c) 2007 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
Got power? A systematic review of sample size adequacy in health professions education research.
Cook, David A; Hatala, Rose
2015-03-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies, the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among the 297 studies comparing alternate simulation approaches, the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
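The review's two questions, power at a given sample size and the smallest excludable effect, reduce to a few lines; here for a typical study of 25 participants split 13/12 across arms (a normal approximation is used for the confidence-interval half-width).

from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
for smd in (0.2, 0.8):
    p = solver.power(effect_size=smd, nobs1=13, alpha=0.05, ratio=12 / 13)
    print(f"SMD {smd}: power = {p:.2f}")

# 95% CI half-width for an SMD with n1 = 13, n2 = 12: the smallest effect a
# null result could exclude even if the observed difference were exactly zero
se = (1 / 13 + 1 / 12) ** 0.5
print("smallest excludable SMD:", round(1.96 * se, 2))   # roughly 0.78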
Pounds, Stan; Cao, Xueyuan; Cheng, Cheng; Yang, Jun; Campana, Dario; Evans, William E.; Pui, Ching-Hon; Relling, Mary V.
2010-01-01
Powerful methods for integrated analysis of multiple biological data sets are needed to maximize interpretation capacity and acquire meaningful knowledge. We recently developed Projection Onto the Most Interesting Statistical Evidence (PROMISE), a statistical procedure that incorporates prior knowledge about the biological relationships among endpoint variables into an integrated analysis of microarray gene expression data with multiple biological and clinical endpoints. Here, PROMISE is adapted to the integrated analysis of pharmacologic, clinical, and genome-wide genotype data in a way that incorporates knowledge about the biological relationships among pharmacologic and clinical response data. An efficient permutation-testing algorithm is introduced so that statistical calculations are computationally feasible in this higher-dimension setting. The new method is applied to a pediatric leukemia data set. The results clearly indicate that PROMISE is a powerful statistical tool for identifying genomic features that exhibit a biologically meaningful pattern of association with multiple endpoint variables. PMID:21516175
Statistical estimation of ultrasonic propagation path parameters for aberration correction.
Waag, Robert C; Astheimer, Jeffrey P
2005-05-01
Parameters in a linear filter model for ultrasonic propagation are found using statistical estimation. The model uses an inhomogeneous-medium Green's function that is decomposed into a homogeneous-transmission term and a path-dependent aberration term. Power and cross-power spectra of random-medium scattering are estimated over the frequency band of the transmit-receive system by using closely situated scattering volumes. The frequency-domain magnitude of the aberration is obtained from a normalization of the power spectrum. The corresponding phase is reconstructed from cross-power spectra of subaperture signals at adjacent receive positions by a recursion. The subapertures constrain the receive sensitivity pattern to eliminate measurement system phase contributions. The recursion uses a Laplacian-based algorithm to obtain phase from phase differences. Pulse-echo waveforms were acquired from a point reflector and a tissue-like scattering phantom through a tissue-mimicking aberration path from neighboring volumes having essentially the same aberration path. Propagation path aberration parameters calculated from the measurements of random scattering through the aberration phantom agree with corresponding parameters calculated for the same aberrator and array position by using echoes from the point reflector. The results indicate the approach describes, in addition to time shifts, waveform amplitude and shape changes produced by propagation through distributed aberration under realistic conditions.
The speed-curvature power law of movements: a reappraisal.
Zago, Myrka; Matic, Adam; Flash, Tamar; Gomez-Marin, Alex; Lacquaniti, Francesco
2018-01-01
Several types of curvilinear movements obey approximately the so-called 2/3 power law, according to which the angular speed varies proportionally to the 2/3 power of the curvature. The origin of the law is debated, but it is generally thought to depend on physiological mechanisms. However, a recent paper (Marken and Shaffer, Exp Brain Res 88:685-690, 2017) claims that this power law is simply a statistical artifact, being a mathematical consequence of the way speed and curvature are calculated. Here we reject this hypothesis by showing that the speed-curvature power law of biological movements is non-trivial. First, we confirm that the power exponent varies with the shape of human drawing movements and with environmental factors. Second, we report experimental data from Drosophila larvae demonstrating that the power law does not depend on how curvature is calculated. Third, we prove that the law can be violated by means of several mathematical and physical examples. Finally, we discuss biological constraints that may underlie the speed-curvature power laws discovered in empirical studies.
Ha, Ahnul; Wee, Won Ryang; Kim, Mee Kum
2018-05-15
To evaluate the agreement in axial length (AL), keratometry, and anterior chamber depth measurements between the AL-Scan and IOLMaster biometers, and to compare the efficacy of the AL-Scan for intraocular lens (IOL) power calculations and refractive outcomes with that of the IOLMaster. Medical records of 48 eyes from 48 patients who underwent uneventful phacoemulsification and IOL insertion were retrospectively reviewed. One of two types of monofocal aspheric IOLs was implanted (Tecnis ZCB00 [Tecnis, n = 34] or CT Asphina 509M [Asphina, n = 14]). Two different partial coherence interferometers measured and compared AL, keratometry (2.4 mm), anterior chamber depth, and IOL power calculations with the SRK/T, Hoffer Q, Holladay 2, and Haigis formulas. The difference between expected and actual final refractive error was compared as refractive mean error (ME), refractive mean absolute error (MAE), and median absolute error (MedAE). AL measured by the AL-Scan was shorter than that measured by the IOLMaster (p = 0.029). The IOL power for Tecnis did not differ between the four formulas; however, the Asphina power calculated using Hoffer Q for the AL-Scan was lower (0.28 diopters, p = 0.015) than that calculated by the IOLMaster. There were no statistically significant differences in MAE and MedAE between the four formulas for either IOL. With SRK/T, the ME in Tecnis-inserted eyes measured by the AL-Scan showed a tendency toward myopia (p = 0.032). Measurement by the AL-Scan provides reliable biometry data and power calculations compared to the IOLMaster; however, refractive outcomes of Tecnis-inserted eyes measured by the AL-Scan and calculated using SRK/T can show a slight myopic tendency. © 2018 The Korean Ophthalmological Society.
CMB seen through random Swiss Cheese
NASA Astrophysics Data System (ADS)
Lavinto, Mikko; Räsänen, Syksy
2015-10-01
We consider a Swiss Cheese model with a random arrangement of Lemaître-Tolman-Bondi holes in ΛCDM cheese. We study two kinds of holes with radius r_b = 50 h⁻¹ Mpc, with either an underdense or an overdense centre, called the open and closed case, respectively. We calculate the effect of the holes on the temperature, angular diameter distance and, for the first time in Swiss Cheese models, shear of the CMB. We quantify the systematic shift of the mean and the statistical scatter, and calculate the power spectra. In the open case, the temperature power spectrum is three orders of magnitude below the linear ISW spectrum. It is sensitive to the details of the hole; in the closed case the amplitude is two orders of magnitude smaller. In contrast, the power spectra of the distance and shear are more robust, and agree with perturbation theory and previous Swiss Cheese results. We do not find a statistically significant mean shift in the sky average of the angular diameter distance, and obtain the 95% limit |ΔD_A/D̄_A| ≲ 10⁻⁴. We consider the argument that areas of spherical surfaces are nearly unaffected by perturbations, which is often invoked in light propagation calculations. The closed case is consistent with this at 1σ, whereas in the open case the probability is only 1.4%.
The influence of control group reproduction on the statistical ...
Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is the fecundity of breeding pairs of medaka. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g. mean fecundity, variance, and days with no egg production) will have on the statistical power of the test. A software tool, the MEOGRT Reproduction Power Analysis Tool, was developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g. population mean and variance) and performing the power analysis under user-specified scenarios. The manuscript illustrates how the reproductive performance of the control medaka used in a MEOGRT influences statistical power, and therefore the successful implementation of the protocol. Example scenarios, based upon medaka reproduction data collected at MED, are discussed that bolster the recommendation that facilities planning to implement the MEOGRT should have a culture of medaka with hi
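A hedged sketch of the kind of computation such a power tool automates: simulate negative-binomial egg counts per breeding pair and estimate the power of a two-sample test for a given fecundity reduction. The control mean, dispersion, effect, and replication below are made-up inputs, not MEOGRT defaults.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def nb_draw(mean, disp, size):
    # negative binomial parameterized by mean and variance/mean ratio (disp > 1)
    n = mean / (disp - 1)
    return rng.negative_binomial(n, n / (n + mean), size)

def power(n_pairs=12, mu=20, disp=5, reduction=0.25, nsim=2000):
    hits = 0
    for _ in range(nsim):
        ctrl = nb_draw(mu, disp, n_pairs)
        trt = nb_draw(mu * (1 - reduction), disp, n_pairs)
        hits += stats.mannwhitneyu(ctrl, trt, alternative='greater').pvalue < 0.05
    return hits / nsim

print(power())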
Power-law regularities in human language
NASA Astrophysics Data System (ADS)
Mehri, Ali; Lashkari, Sahar Mohammadpour
2016-11-01
The complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language, performing our calculations on selected works of seven famous litterateurs. Zipf's law and Heaps' law, which imply well-known power-law behaviors, are established in human language, and show a qualitative inverse relation with each other. Furthermore, the informational content associated with the ordering of words is measured by using an entropic metric. We also calculate the fractal dimension of words in the text by using the box counting method. The fractal dimension of each word, a positive value less than or equal to one, exhibits its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. Power-law relations imply the existence of long-range correlations between the word types that convey a particular idea.
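Zipf's law, the first of the regularities above, can be checked on any plain-text corpus with a log-log fit of frequency against rank; 'book.txt' is an assumed input file.

import re
from collections import Counter
import numpy as np

text = open('book.txt', encoding='utf-8').read().lower()
counts = Counter(re.findall(r"[a-z']+", text))
freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
ranks = np.arange(1, len(freqs) + 1)

slope, intercept = np.polyfit(np.log(ranks), np.log(freqs), 1)
print("Zipf exponent:", -slope)    # close to 1 for most natural-language corpora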
Procedural considerations for CPV outdoor power ratings per IEC 62670
NASA Astrophysics Data System (ADS)
Muller, Matthew; Kurtz, Sarah; Rodriguez, Jose
2013-09-01
The IEC Working Group 7 (WG7) is in the process of developing a draft procedure for an outdoor concentrating photovoltaic (CPV) module power rating at Concentrator Standard Operating Conditions (CSOC). WG7 recently achieved some consensus that using component reference cells to monitor/limit spectral variation is the preferred path for the outdoor power rating. To build on this consensus, the community must quantify these spectral limits and select a procedure for calculating and reporting a power rating. This work focuses on statistically comparing several procedures the community is considering, in the context of monitoring/limiting spectral variation.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Medical research literature, until recently, exhibited substantial dominance of Fisher's significance test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or not significant with a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. The advancement in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in the reporting of the results of significance tests in the light of the power of the test. The significance test approach, when it incorporates power analysis, contains the essence of the hypothesis test approach. It may be safely argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis test procedure.
Cheng, Dunlei; Branscum, Adam J; Stamey, James D
2010-07-01
To quantify the impact of ignoring misclassification of a response variable and measurement error in a covariate on statistical power, and to develop software for sample size and power analysis that accounts for these flaws in epidemiologic data. A Monte Carlo simulation-based procedure is developed to illustrate the differences in design requirements and inferences between analytic methods that properly account for misclassification and measurement error to those that do not in regression models for cross-sectional and cohort data. We found that failure to account for these flaws in epidemiologic data can lead to a substantial reduction in statistical power, over 25% in some cases. The proposed method substantially reduced bias by up to a ten-fold margin compared to naive estimates obtained by ignoring misclassification and mismeasurement. We recommend as routine practice that researchers account for errors in measurement of both response and covariate data when determining sample size, performing power calculations, or analyzing data from epidemiological studies. 2010 Elsevier Inc. All rights reserved.
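A stripped-down Monte Carlo version of the procedure: the binary response is generated from a logistic model, misclassified with given sensitivity and specificity, and the power of the naive analysis is estimated by the rejection rate (all parameter values are illustrative).

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)

def power(n=500, beta=0.4, sens=0.9, spec=0.9, nsim=500):
    hits = 0
    for _ in range(nsim):
        x = rng.normal(size=n)
        p = 1 / (1 + np.exp(-(-0.5 + beta * x)))
        y = rng.random(n) < p                              # true response
        flip = np.where(y, rng.random(n) > sens, rng.random(n) > spec)
        y_obs = np.where(flip, ~y, y).astype(int)          # misclassified response
        fit = sm.Logit(y_obs, sm.add_constant(x)).fit(disp=0)
        hits += fit.pvalues[1] < 0.05
    return hits / nsim

print(power(sens=1.0, spec=1.0), power())   # ideal vs misclassified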
NASA Astrophysics Data System (ADS)
Abramov, E. Y.; Sopov, V. I.
2017-10-01
In this study, using the example of a traction network area with highly asymmetric power supply parameters, a procedure for the comparative assessment of power losses in a DC traction network under parallel and traditional separated operating modes of traction substation feeders is presented. Experimental measurements were carried out under both modes of operation. Statistical processing of the calculated results showed a decrease in power losses in the contact network and an increase in the feeders. The changes proved to be significant, which demonstrates the importance of the potential effects of converting traction network areas to parallel feeder operation. An analytical method for calculating the average power losses for different feed schemes of the traction network was developed. On its basis, the dependences of the relative losses were obtained by varying the difference in feeder voltages. The calculation results showed that a transition to a two-sided feed scheme is not justified for the considered traction network area. A larger reduction in the total power loss can be obtained with a smaller difference in the feeders' resistance and/or a more symmetrical sectioning scheme of the contact network.
Correcting power and p-value calculations for bias in diffusion tensor imaging.
Lauzon, Carolyn B; Landman, Bennett A
2013-07-01
Diffusion tensor imaging (DTI) provides quantitative parametric maps sensitive to tissue microarchitecture (e.g., fractional anisotropy, FA). These maps are estimated through computational processes and subject to random distortions including variance and bias. Traditional statistical procedures commonly used for study planning (including power analyses and p-value/alpha-rate thresholds) specifically model variability, but neglect potential impacts of bias. Herein, we quantitatively investigate the impacts of bias in DTI on hypothesis test properties (power and alpha-rate) using a two-sided hypothesis testing framework. We present a theoretical evaluation of bias on hypothesis test properties, evaluate the bias estimation technique SIMEX for DTI hypothesis testing using simulated data, and evaluate the impacts of bias on spatially varying power and alpha rates in an empirical study of 21 subjects. Bias is shown to inflate alpha rates, distort the power curve, and cause significant power loss even in empirical settings where the expected difference in bias between groups is zero. These adverse effects can be attenuated by properly accounting for bias in the calculation of power and p-values. Copyright © 2013 Elsevier Inc. All rights reserved.
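The two-sided testing argument has a closed form under a normal approximation: with bias b in the estimated quantity, the test statistic is centred at (delta + b)/se, which inflates the alpha rate when delta = 0 and shifts the power curve (the numbers below are illustrative, not DTI estimates).

from scipy.stats import norm

def rejection_rate(delta, b, se, alpha=0.05):
    z = norm.ppf(1 - alpha / 2)
    m = (delta + b) / se                      # centre of the test statistic
    return norm.cdf(-z - m) + norm.cdf(-z + m)

print(rejection_rate(delta=0.00, b=0.02, se=0.01))   # alpha inflated to ~0.52
print(rejection_rate(delta=0.03, b=0.00, se=0.01))   # nominal power ~0.85
print(rejection_rate(delta=0.03, b=-0.02, se=0.01))  # power lost to bias ~0.17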
Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I
2013-05-01
When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modelling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy, and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modelling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets with continuous phenotype data (height) and with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes: if the power benefit is small, so that the change in the distribution of the test statistic under the alternative is relatively small, the probability of obtaining a smaller test statistic is greater. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation could be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
Seven ways to increase power without increasing N.
Hansen, W B; Collins, L M
1994-01-01
Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straightforward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
STR data for 15 autosomal STR markers from Paraná (Southern Brazil).
Alves, Hemerson B; Leite, Fábio P N; Sotomaior, Vanessa S; Rueda, Fábio F; Silva, Rosane; Moura-Neto, Rodrigo S
2014-03-01
Allelic frequencies for 15 STR autosomal loci, using AmpFℓSTR® Identifiler™, forensic, and statistical parameters were calculated. All loci reached the Hardy-Weinberg equilibrium. The combined power of discrimination and mean power of exclusion were 0.999999999999999999 and 0.9999993, respectively. The MDS plot and NJ tree analysis, generated by FST matrix, corroborated the notion of the origins of the Paraná population as mainly European-derived. The combination of these 15 STR loci represents a powerful strategy for individual identification and parentage analyses for the Paraná population.
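For readers unfamiliar with the forensic summary statistics, the combined power of discrimination follows from per-locus genotype frequencies under Hardy-Weinberg equilibrium; the sketch below uses two toy loci with made-up allele frequencies (the paper combines 15 real loci).

from itertools import combinations

def match_probability(allele_freqs):
    genotypes = [p**2 for p in allele_freqs] + \
                [2 * pi * pj for pi, pj in combinations(allele_freqs, 2)]
    return sum(g**2 for g in genotypes)   # sum of squared genotype frequencies

loci = [[0.2, 0.3, 0.5], [0.1, 0.2, 0.3, 0.4]]   # hypothetical frequencies
mp_combined = 1.0
for freqs in loci:
    mp_combined *= match_probability(freqs)
print("combined power of discrimination =", 1 - mp_combined)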
Statistical issues on the analysis of change in follow-up studies in dental research.
Blance, Andrew; Tu, Yu-Kang; Baelum, Vibeke; Gilthorpe, Mark S
2007-12-01
To provide an overview of the problems in study design and associated analyses of follow-up studies in dental research, particularly addressing three issues: treatment-baseline interactions; statistical power; and nonrandomization. Our previous work has shown that many studies purport an interaction between change (from baseline) and baseline values, which is often based on inappropriate statistical analyses. A priori power calculations are essential for randomized controlled trials (RCTs), but in the pre-test/post-test RCT design it is not well known to dental researchers that the choice of statistical method affects power, and that power is affected by treatment-baseline interactions. A common (good) practice in the analysis of RCT data is to adjust for baseline outcome values using ANCOVA, thereby increasing statistical power. However, an important requirement for ANCOVA is that there be no interaction between the groups and baseline outcome (i.e. effective randomization); the patient-selection process should not cause differences in mean baseline values across groups. This assumption is often violated for nonrandomized (observational) studies, and the use of ANCOVA is thus problematic, potentially giving biased estimates, invoking Lord's paradox, and leading to difficulties in the interpretation of results. Baseline interaction issues can be overcome by the use of statistical methods not widely practiced in dental research: Oldham's method and multilevel modelling; the latter is preferred for its greater flexibility to deal with more than one follow-up occasion as well as additional covariates. To illustrate these three key issues, hypothetical examples are considered from the fields of periodontology, orthodontics, and oral implantology. Caution needs to be exercised when considering the design and analysis of follow-up studies. ANCOVA is generally inappropriate for nonrandomized studies, and causal inferences from observational data should be avoided.
NASA Astrophysics Data System (ADS)
Chang, Xiaoyen Y.; Sewell, Thomas D.; Raff, Lionel M.; Thompson, Donald L.
1992-11-01
The possibility of utilizing different types of power spectra obtained from classical trajectories as a diagnostic tool to identify the presence of nonstatistical dynamics is explored by using the unimolecular bond-fission reactions of 1,2-difluoroethane and the 2-chloroethyl radical as test cases. In previous studies, the reaction rates for these systems were calculated by using a variational transition-state theory and classical trajectory methods. A comparison of the results showed that 1,2-difluoroethane is a nonstatistical system, while the 2-chloroethyl radical behaves statistically. Power spectra for these two systems have been generated under various conditions. The characteristics of these spectra are as follows: (1) The spectra for the 2-chloroethyl radical are always broader and more coupled to other modes than is the case for 1,2-difluoroethane. This is true even at very low levels of excitation. (2) When an internal energy near or above the dissociation threshold is initially partitioned into a local C-H stretching mode, the power spectra for 1,2-difluoroethane broaden somewhat, but discrete and somewhat isolated bands are still clearly evident. In contrast, the analogous power spectra for the 2-chloroethyl radical exhibit a near complete absence of isolated bands. The general appearance of the spectrum suggests a very high level of mode-to-mode coupling, large intramolecular vibrational energy redistribution (IVR) rates, and global statistical behavior. (3) The appearance of the power spectrum for the 2-chloroethyl radical is unaltered regardless of whether the initial C-H excitation is in the CH2 or the CH2Cl group. This result also suggests statistical behavior. These results are interpreted to mean that power spectra may be used as a diagnostic tool to assess the statistical character of a system. The presence of a diffuse spectrum exhibiting a nearly complete loss of isolated structures indicates that the dissociation dynamics of the molecule will be well described by statistical theories. If, however, the power spectrum maintains its discrete, isolated character, as is the case for 1,2-difluoroethane, the opposite conclusion is suggested. Since power spectra are very easily computed, this diagnostic method may prove to be useful.
Are power calculations useful? A multicentre neuroimaging study
Suckling, John; Henty, Julian; Ecker, Christine; Deoni, Sean C; Lombardo, Michael V; Baron-Cohen, Simon; Jezzard, Peter; Barnes, Anna; Chakrabarti, Bhismadev; Ooi, Cinly; Lai, Meng-Chuan; Williams, Steven C; Murphy, Declan GM; Bullmore, Edward
2014-01-01
There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources. PMID:24644267
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-09-14
This package contains statistical routines for extracting features from multivariate time-series data, which can then be used in subsequent multivariate statistical analysis to identify patterns and anomalous behavior. It calculates local linear or quadratic regression-model fits over moving windows for each series and then summarizes the model coefficients across user-defined time intervals for each series. These methods are domain agnostic, but they have been successfully applied to a variety of domains, including commercial aviation and electric power grid data.
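A rough R sketch of the moving-window idea follows; this is not the package's actual interface (which is not named above), and the window and interval lengths are arbitrary choices.

window_features <- function(y, window = 50, interval = 500) {
  n      <- length(y)
  starts <- seq_len(n - window + 1)
  slopes <- vapply(starts, function(s) {
    idx <- s:(s + window - 1)
    coef(lm(y[idx] ~ idx))[2]                  # local linear slope
  }, numeric(1))
  grp <- factor(ceiling(starts / interval))    # user-defined time intervals
  data.frame(interval   = levels(grp),
             mean_slope = tapply(slopes, grp, mean),
             sd_slope   = tapply(slopes, grp, sd))
}

set.seed(1)
y <- sin(seq(0, 20, length.out = 2000)) + rnorm(2000, sd = 0.2)
window_features(y)

The per-interval summaries (here, the mean and spread of local slopes) are the kind of features that can feed a downstream anomaly-detection step.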
NASA Astrophysics Data System (ADS)
Aminov, R. Z.; Khrustalev, V. A.; Portyankin, A. V.
2015-02-01
The effectiveness of combining nuclear power plants equipped with water-cooled, water-moderated power-generating reactors (VVER) with other energy sources within unified power-generating complexes is analyzed. Such complexes make it possible to achieve the necessary load pickup capability and flexibility for mandatory selective primary and emergency load control, and to help pass the nighttime minimums of electric load curves, while retaining a high capacity utilization factor for the entire complex and higher steam-turbine efficiency. Versions combining nuclear power plants with hydrogen toppings and gas turbine units for generating electricity are considered. Because hydrogen is an unsafe energy carrier whose use introduces additional elements of risk, a procedure is proposed for evaluating these risks under different conditions of implementing the fuel-and-hydrogen cycle at nuclear power plants. A risk accounting technique based on statistical data is described, covering the characteristics of hydrogen and gas pipelines and the occurrence rate of tightness loss in process pipeline equipment. The expected frequencies of fires and explosions at nuclear power plants fitted with hydrogen toppings and gas turbine units are calculated. US statistical data were used to estimate the damage inflicted by fires and explosions occurring in nuclear power plant turbine buildings, and conservative scenarios for fires and explosions of hydrogen-air mixtures in such buildings are presented. Results are given for the ratio of the introduced annual risk to the attained net annual profit for commensurable versions; this ratio can be used to select projects with the most technically attainable and socially acceptable safety.
Webster, R J; Williams, A; Marchetti, F; Yauk, C L
2018-07-01
Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and mixed effect models sampling from two to four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal-age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal-age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group when the increase in mutations ranges from 40% down to 10%, respectively. Modeling family variability using mixed effect models reduced the required sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
Statistical characteristics of dynamics for population migration driven by the economic interests
NASA Astrophysics Data System (ADS)
Huo, Jie; Wang, Xu-Ming; Zhao, Ning; Hao, Rui
2016-06-01
Population migration typically occurs under some constraints, which can deeply affect the structure of a society and other related aspects. It is therefore critical to investigate the characteristics of population migration. Data from the China Statistical Yearbook indicate that the regional gross domestic product per capita relates to the population size via a linear or power-law relation. In addition, the distribution of population migration sizes, or of the relative migration strength introduced here, is dominated by a shifted power-law relation. To reveal the mechanism that creates these distributions, a dynamic model is proposed based on the population migration rule that migration is facilitated by higher financial gains and abated by fewer employment opportunities at the destination, with the migration cost treated as a function of the migration distance. The calculated results indicate that the distribution of the relative migration strength is governed by a shifted power-law relation, and that the distribution of migration distances is dominated by a truncated power-law relation. These results suggest that a power law may not always be suitable for fitting a distribution. Additionally, from the modeling framework one can infer that randomness and determinacy jointly create the scaling characteristics of the distributions. The calculation also demonstrates that the network formed by active nodes, representing the immigration and emigration regions, usually evolves from an ordered state with a non-uniform structure to a disordered state with a uniform structure, as evidenced by the increasing structural entropy.
Simulation of laser beam reflection at the sea surface
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2011-05-01
A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable both for calculating the images of a SWIR (short-wave infrared) imaging sensor and for determining the total detected power of reflected laser light for a bistatic configuration of laser source and receiver under different atmospheric conditions. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of laser light reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. A propagation model for water waves is applied for sea surface animation. To predict the view of a camera in the SWIR spectral band, the sea surface radiance must be calculated; this is done by considering the emitted sea surface radiance and the reflected sky radiance, calculated by MODTRAN. Additionally, the radiance of laser light specularly reflected at the wind-roughened sea surface is modeled in the SWIR band using an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). This BRDF model considers the slope statistics of the waves and accounts for the slope shadowing of waves that occurs especially at flat incidence angles of the laser beam and near-horizontal detection angles of the reflected irradiance at rough seas. Simulation results are presented showing the variation of the detected laser power as a function of the geometric configuration of laser, sensor, and wind characteristics.
Landes, Reid D.; Lensing, Shelly Y.; Kodell, Ralph L.; Hauer-Jensen, Martin
2014-01-01
The dose of a substance that causes death in P% of a population is called an LDP, where LD stands for lethal dose. In radiation research, a common LDP of interest is the radiation dose that kills 50% of the population by a specified time, i.e., lethal dose 50 or LD50. When comparing LD50 between two populations, relative potency is the parameter of interest. In radiation research, this is commonly known as the dose reduction factor (DRF). Unfortunately, statistical inference on dose reduction factor is seldom reported. We illustrate how to calculate confidence intervals for dose reduction factor, which may then be used for statistical inference. Further, most dose reduction factor experiments use hundreds, rather than tens of animals. Through better dosing strategies and the use of a recently available sample size formula, we also show how animal numbers may be reduced while maintaining high statistical power. The illustrations center on realistic examples comparing LD50 values between a radiation countermeasure group and a radiation-only control. We also provide easy-to-use spreadsheets for sample size calculations and confidence interval calculations, as well as SAS® and R code for the latter. PMID:24164553
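The workflow described here can be sketched in base R plus MASS (this is not the authors' spreadsheet or SAS/R code): a probit fit per group yields each LD50 and its standard error via MASS::dose.p, and a delta-method interval on the log of the ratio gives a DRF confidence interval. The doses and death counts are invented, and the two estimates are treated as independent.

library(MASS)

fit_ld50 <- function(dose, dead, n) {
  fit <- glm(cbind(dead, n - dead) ~ dose, family = binomial(link = "probit"))
  dose.p(fit, p = 0.5)                         # LD50 and its SE
}

dose  <- c(6, 7, 8, 9, 10)                     # hypothetical doses (Gy)
n     <- rep(20, 5)
ctrl  <- fit_ld50(dose, c(1, 5, 11, 16, 19), n)   # radiation-only control
treat <- fit_ld50(dose, c(0, 2, 6, 12, 17), n)    # countermeasure group

ld_c <- as.numeric(ctrl);  se_c <- as.numeric(attr(ctrl,  "SE"))
ld_t <- as.numeric(treat); se_t <- as.numeric(attr(treat, "SE"))
drf    <- ld_t / ld_c                          # dose reduction factor
se_log <- sqrt((se_t / ld_t)^2 + (se_c / ld_c)^2)
c(DRF = drf, lower = drf * exp(-1.96 * se_log), upper = drf * exp(1.96 * se_log))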
Exercise reduces depressive symptoms in adults with arthritis: Evidential value.
Kelley, George A; Kelley, Kristi S
2016-07-12
To determine whether evidential value exists that exercise reduces depression in adults with arthritis and other rheumatic conditions. Utilizing data derived from a prior meta-analysis of 29 randomized controlled trials comprising 2449 participants (1470 exercise, 979 control) with fibromyalgia, osteoarthritis, rheumatoid arthritis or systemic lupus erythematosus, a new method, P-curve, was utilized to assess for evidentiary worth as well as dismiss the possibility of discriminating reporting of statistically significant results regarding exercise and depression in adults with arthritis and other rheumatic conditions. Using the method of Stouffer, Z-scores were calculated to examine selective-reporting bias. An alpha (P) value < 0.05 was deemed statistically significant. In addition, average power of the tests included in P-curve, adjusted for publication bias, was calculated. Fifteen of 29 studies (51.7%) with exercise and depression results were statistically significant (P < 0.05) while none of the results were statistically significant with respect to exercise increasing depression in adults with arthritis and other rheumatic conditions. Right-skew to dismiss selective reporting was identified (Z = -5.28, P < 0.0001). In addition, the included studies did not lack evidential value (Z = 2.39, P = 0.99), nor did they lack evidential value and were P-hacked (Z = 5.28, P > 0.99). The relative frequencies of P-values were 66.7% at 0.01, 6.7% each at 0.02 and 0.03, 13.3% at 0.04 and 6.7% at 0.05. The average power of the tests included in P-curve, corrected for publication bias, was 69%. Diagnostic plot results revealed that the observed power estimate was a better fit than the alternatives. Evidential value results provide additional support that exercise reduces depression in adults with arthritis and other rheumatic conditions.
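For readers unfamiliar with the Stouffer step, a minimal R sketch with invented one-sided p-values is:

stouffer <- function(p) {
  z <- qnorm(p, lower.tail = FALSE)            # one p-value per study/test
  sum(z) / sqrt(length(z))                     # combined Z
}
p <- c(0.003, 0.012, 0.024, 0.041, 0.009)
z <- stouffer(p)
c(Z = z, P = pnorm(z, lower.tail = FALSE))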
Improved techniques for predicting spacecraft power
NASA Technical Reports Server (NTRS)
Chmielewski, A. B.
1987-01-01
Radioisotope Thermoelectric Generators (RTGs) are going to supply power for the NASA Galileo and Ulysses spacecraft now scheduled to be launched in 1989 and 1990. The duration of the Galileo mission is expected to be over 8 years. This brings the total RTG lifetime to 13 years. In 13 years, the RTG power drops more than 20 percent leaving a very small power margin over what is consumed by the spacecraft. Thus it is very important to accurately predict the RTG performance and be able to assess the magnitude of errors involved. The paper lists all the error sources involved in the RTG power predictions and describes a statistical method for calculating the tolerance.
Wind turbine sound pressure level calculations at dwellings.
Keith, Stephen E; Feder, Katya; Voicescu, Sonia A; Soukhovtsev, Victor; Denning, Allison; Tsang, Jason; Broner, Norm; Leroux, Tony; Richarz, Werner; van den Berg, Frits
2016-03-01
This paper provides calculations of outdoor sound pressure levels (SPLs) at dwellings for 10 wind turbine models, to support Health Canada's Community Noise and Health Study. Manufacturer-supplied and measured wind turbine sound power levels were used to calculate outdoor SPLs at 1238 dwellings using the ISO 9613-2 (1996) acoustics standard and a Swedish noise propagation method. Both methods yielded statistically equivalent results. The A- and C-weighted results were highly correlated over the 1238 dwellings (Pearson's linear correlation coefficient r > 0.8). Calculated wind turbine SPLs were compared to ambient SPLs from other sources, estimated using guidance documents from the United States and Alberta, Canada.
Wu, Zheyang; Zhao, Hongyu
2012-01-01
For more fruitful discoveries of genetic variants associated with diseases in genome-wide association studies, it is important to know whether joint analysis of multiple markers is more powerful than the commonly used single-marker analysis, especially in the presence of gene-gene interactions. This article provides a statistical framework to rigorously address this question through analytical power calculations for common model search strategies to detect binary trait loci: marginal search, exhaustive search, forward search, and two-stage screening search. Our approach incorporates linkage disequilibrium, random genotypes, and correlations among score test statistics of logistic regressions. We derive analytical results under two power definitions: the power of finding all the associated markers and the power of finding at least one associated marker. We also consider two types of error controls: the discovery number control and the Bonferroni type I error rate control. After demonstrating the accuracy of our analytical results by simulations, we apply them to consider a broad genetic model space to investigate the relative performances of different model search strategies. Our analytical study provides rapid computation as well as insights into the statistical mechanism of capturing genetic signals under different genetic models including gene-gene interactions. Even though we focus on genetic association analysis, our results on the power of model selection procedures are clearly very general and applicable to other studies.
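To give a flavor of such analytical power calculations, the power of a single-marker (marginal search) 1-df score test can be approximated through a noncentral chi-square; the sketch below assumes a noncentrality of roughly n·r², with r² the squared trait-marker correlation, and a Bonferroni-style genome-wide alpha. This is a simplification of the framework above, which additionally handles linkage disequilibrium and joint search strategies.

power_marginal <- function(n, r2, alpha = 5e-8) {
  ncp  <- n * r2                               # approximate noncentrality
  crit <- qchisq(alpha, df = 1, lower.tail = FALSE)
  pchisq(crit, df = 1, ncp = ncp, lower.tail = FALSE)
}
power_marginal(n = 10000, r2 = 0.002)          # modest signal, genome-wide alpha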
Doshi, Dharmil; Limdi, Purvi; Parekh, Nilesh; Gohil, Neepa
2017-01-01
Accurate Intraocular Lens (IOL) power calculation in cataract surgery is very important for providing precise postoperative vision. Selection of the most appropriate formula is difficult in highly myopic and hypermetropic patients. The aim was to investigate the predictability of different IOL power calculation formulae in eyes with short and long Axial Length (AL) and to find the most accurate IOL power calculation formula in each group. A prospective study was conducted on 80 consecutive patients who underwent phacoemulsification with monofocal IOL implantation after obtaining informed written consent. Preoperative keratometry was done with the IOL Master. Axial length and anterior chamber depth were measured using the A-scan machine ECHORULE 2 (BIOMEDIX). Patients were divided into two groups based on AL (40 in each group): Group A with AL<22 mm and Group B with AL>24.5 mm. The IOL power calculation in each group was done by the Haigis, Hoffer Q, Holladay 1, and SRK/T formulae using the software of ECHORULE 2. The actual postoperative Spherical Equivalent (SE), Estimation Error (E), and Absolute Error (AE) were calculated at one and a half months and were used in data analysis. The predictive accuracy of each formula in each group was analyzed by comparing the Absolute Error (AE). The Kruskal-Wallis test was used to compare differences in the AE of the formulae. A statistically significant difference was defined as p-value<0.05. In Group A, the Hoffer Q, Holladay 1 and SRK/T formulae were equally accurate in predicting the postoperative refraction after cataract surgery in eyes with AL less than 22.0 mm, and the accuracy of these three formulae was significantly higher than that of the Haigis formula. In Group B, the Hoffer Q, Holladay 1, SRK/T and Haigis formulae were equally accurate in predicting the postoperative refraction in eyes with AL more than 24.5 mm.
Ionospheric scintillation studies
NASA Technical Reports Server (NTRS)
Rino, C. L.; Freemouw, E. J.
1973-01-01
The diffracted field of a monochromatic plane wave was characterized by two complex correlation functions. For a Gaussian complex field, these quantities suffice to completely define the statistics of the field. Thus, one can in principle calculate the statistics of any measurable quantity in terms of the model parameters. The best data fits were achieved for intensity statistics derived under the Gaussian statistics hypothesis. The signal structure that achieved the best fit was nearly invariant with scintillation level and irregularity source (ionosphere or solar wind). It was characterized by the fact that more than 80% of the scattered signal power is in phase quadrature with the undeviated or coherent signal component. Thus, the Gaussian-statistics hypothesis is both convenient and accurate for channel modeling work.
Kaswin, Godefroy; Rousseau, Antoine; Mgarrech, Mohamed; Barreau, Emmanuel; Labetoulle, Marc
2014-04-01
To evaluate the agreement in axial length (AL), keratometry (K), and anterior chamber depth (ACD) measurements; intraocular lens (IOL) power calculations; and predictability between a new partial coherence interferometry (PCI) optical biometer (AL-Scan) and a reference (gold standard) PCI optical biometer (IOLMaster 500). Service d'Ophtalmologie, Hopital Bicêtre, APHP Université, Paris, France. Evaluation of a diagnostic device. One eye of each consecutive patient scheduled for cataract surgery was measured. Biometry was performed with the new biometer and the reference biometer. Comparisons were performed for AL, average K at 2.4 mm, ACD, IOL power calculations with the Haigis and SRK/T formulas, and the postoperative predictability of the devices. A P value less than 0.05 was considered statistically significant. The study enrolled 50 patients (mean age 72.6 ± 4.2 years [SEM]). There was a good correlation between biometers for AL, K, and ACD measurements (r=0.999, r=0.933, and r=0.701, respectively) and between IOL power calculations with the Haigis formula (r=0.972) and the SRK/T formula (r=0.981). The mean absolute error (MAE) in IOL power prediction was 0.42±0.08 diopter (D) with the new biometer and 0.44±0.08 D with the reference biometer. The MAE was 0.20 D with the Haigis formula and 0.19 D with the SRK/T formula (P=.36). The new PCI biometer provided valid measurements compared with the current gold standard, indicating that the new device can be used for IOL power calculations in routine cataract surgery. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Assessing exclusionary power of a paternity test involving a pair of alleged grandparents.
Scarpetta, Marco A; Staub, Rick W; Einum, David D
2007-02-01
The power of a genetic test battery to exclude a pair of individuals as grandparents is an important consideration for parentage testing laboratories. However, a reliable method to calculate such a statistic with short-tandem repeat (STR) genetic markers has not been presented. Two formulae describing the random grandparents not excluded (RGPNE) statistic at a single genetic locus were derived: RGPNE = a(4 - 6a + 4a² - a³) when the paternal obligate allele (POA) is defined, and RGPNE = 2[(a + b)(2 - a - b)][1 - (a + b)(2 - a - b)] + [(a + b)(2 - a - b)]² when the POA is ambiguous. The minimum number of genetic markers required to yield cumulative RGPNE values of not greater than 0.01 was calculated with weighted average allele frequencies of the CODIS STR loci. RGPNE data for actual grandparentage cases are also presented to empirically examine the exclusionary power of routine casework. A comparison of RGPNE and random man not excluded (RMNE) values demonstrates the increased difficulty involved in excluding two individuals as grandparents compared to excluding a single alleged parent. A minimum of 12 STR markers is necessary to achieve RGPNE values of not greater than 0.01 when the mother is tested; more than 25 markers are required without the mother. Cumulative RGPNE values for each of 22 nonexclusionary grandparentage cases were not more than 0.01 but were significantly weaker when calculated without data from the mother. Calculation of the RGPNE provides a simple means to help minimize the potential of false inclusions in grandparentage analyses. This study also underscores the importance of testing the mother when examining the parents of an unavailable alleged father (AF).
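Transcribed into R (the expressions simplify to 1 - (1 - a)^4 and 1 - (1 - q)^2 with q = (a + b)(2 - a - b), and the cumulative RGPNE over independent loci is the product of per-locus values):

rgpne_defined <- function(a) a * (4 - 6 * a + 4 * a^2 - a^3)     # = 1 - (1 - a)^4
rgpne_ambiguous <- function(a, b) {                              # = 1 - (1 - q)^2
  q <- (a + b) * (2 - a - b)
  2 * q * (1 - q) + q^2
}

rgpne_defined(0.1)                       # one locus, POA frequency 0.1
prod(rgpne_defined(rep(0.1, 12)))        # cumulative over 12 such loci

The allele frequency 0.1 is an invented illustration, not a CODIS weighted average.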
Alignment-free sequence comparison (II): theoretical power of comparison statistics.
Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S
2010-11-01
Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy to use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
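A naive R version of the basic D2 count (the number of matching k-tuple pairs between two sequences) is shown below; the authors' program linked above also provides D2* and D2S and the power calculations, none of which are reproduced here.

d2 <- function(s1, s2, k = 5) {
  tuples <- function(s) {
    ch <- strsplit(s, "")[[1]]
    sapply(seq_len(length(ch) - k + 1),
           function(i) paste(ch[i:(i + k - 1)], collapse = ""))
  }
  t1 <- table(tuples(s1)); t2 <- table(tuples(s2))
  shared <- intersect(names(t1), names(t2))
  sum(t1[shared] * t2[shared])           # matching k-tuple pairs
}

set.seed(2)
s1 <- paste(sample(c("A", "C", "G", "T"), 300, TRUE), collapse = "")
s2 <- paste(sample(c("A", "C", "G", "T"), 300, TRUE), collapse = "")
d2(s1, s2, k = 5)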
Akazawa, K; Nakamura, T; Moriguchi, S; Shimada, M; Nose, Y
1991-07-01
Small-sample properties of the maximum partial likelihood estimates for Cox's proportional hazards model depend on the sample size, the true values of the regression coefficients, the covariate structure, the censoring pattern, and possibly the baseline hazard function. It would therefore be difficult to construct a formula or table giving the exact power of a statistical test for the treatment effect in any specific clinical trial. The simulation program described in this paper, written in SAS/IML, uses Monte Carlo methods to provide estimates of the exact power for Cox's proportional hazards model. For illustrative purposes, the program was applied to real data obtained from a clinical trial performed in Japan. Since the program does not assume any specific function for the baseline hazard, it is, in principle, applicable to any censored survival data as long as they follow Cox's proportional hazards model.
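The same Monte Carlo idea carries over directly to R; this sketch is not the authors' SAS/IML program, and the exponential event times, uniform censoring, and parameter values are all assumptions.

library(survival)

cox_power <- function(n_per_arm = 50, hr = 0.6, base_rate = 0.1,
                      cens_max = 24, nsim = 1000, alpha = 0.05) {
  mean(replicate(nsim, {
    trt   <- rep(0:1, each = n_per_arm)
    t_ev  <- rexp(2 * n_per_arm, base_rate * ifelse(trt == 1, hr, 1))
    t_cen <- runif(2 * n_per_arm, 0, cens_max)
    fit   <- coxph(Surv(pmin(t_ev, t_cen), t_ev <= t_cen) ~ trt)
    summary(fit)$coefficients["trt", "Pr(>|z|)"] < alpha
  }))
}

set.seed(3)
cox_power()                              # empirical power for HR = 0.6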
Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.
Luh, Wei-Ming; Guo, Jiin-Huarng
2007-05-01
Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable to cases of unequal variances, non-normality, and unequal sample sizes. Given the specified alpha and power (1 - beta), the minimum sample size needed by the proposed formulas under various conditions is less than that given by the conventional formulas. Moreover, given a sample size calculated by the proposed formulas, simulation results show that Yuen's test achieves statistical power generally superior to that of the approximate t test. A numerical example is provided.
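Yuen's statistic itself is short to implement; the base-R sketch below uses 20% trimming and Welch-type degrees of freedom, with invented data (it computes the test, not the paper's sample size formulas).

yuen <- function(x, y, tr = 0.2) {
  winvar <- function(v, g) {             # winsorized variance
    s <- sort(v)
    if (g > 0) {
      s[1:g] <- s[g + 1]
      s[(length(s) - g + 1):length(s)] <- s[length(s) - g]
    }
    var(s)
  }
  g1 <- floor(tr * length(x)); g2 <- floor(tr * length(y))
  h1 <- length(x) - 2 * g1;    h2 <- length(y) - 2 * g2
  d1 <- (length(x) - 1) * winvar(x, g1) / (h1 * (h1 - 1))
  d2 <- (length(y) - 1) * winvar(y, g2) / (h2 * (h2 - 1))
  t  <- (mean(x, trim = tr) - mean(y, trim = tr)) / sqrt(d1 + d2)
  df <- (d1 + d2)^2 / (d1^2 / (h1 - 1) + d2^2 / (h2 - 1))
  2 * pt(-abs(t), df)                    # two-sided p-value
}

set.seed(4)
yuen(rnorm(30), rnorm(40, mean = 0.8, sd = 2))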
Blow molding electric drives of Mechanical Engineering
NASA Astrophysics Data System (ADS)
Bukhanov, S. S.; Ramazanov, M. A.; Tsirkunenko, A. T.
2018-03-01
The article analyzes the new possibilities offered by adjustable electric drives for the blowing mechanisms of plastics production. The use of new semiconductor converters makes it possible not only to compensate for supply-network instability by means of special dynamic voltage regulators, but also to improve (correct) the power factor. The economic efficiency of controlled electric drives for blowing mechanisms is calculated. On the basis of statistical analysis, the reliability parameters of the elements of the regulated electric drives under consideration are calculated. It is shown that the reliability of adjustable electric drives can be increased both by overrating the installed power of the electric drive and by using simpler schemes with pulse-vector control.
Corneal Anterior Power Calculation for an IOL in Post-PRK Patients.
De Bernardo, Maddalena; Iaccarino, Stefania; Cennamo, Michela; Caliendo, Luisa; Rosa, Nicola
2015-02-01
After corneal refractive surgery, the devices routinely used to measure corneal power overestimate it. Therefore, the objective of this study was to determine whether, in patients who underwent photorefractive keratectomy (PRK), the preoperative anterior corneal power can be predicted from the postoperative (PO) posterior corneal power. A comparison is made between a formula published by Saiki for laser in situ keratomileusis patients and a new one calculated specifically from PRK patients. The Saiki formula was tested in 98 eyes of 98 patients (47 women) who underwent PRK for myopia or myopic astigmatism. In addition, anterior and posterior mean keratometry (Km) values from a Scheimpflug camera were measured to obtain a specific regression formula. The mean (±SD) preoperative Km was 43.50 (±1.39) diopters (D) (range, 39.25 to 47.05 D). The mean (±SD) Km value calculated with the Saiki formula using the 6-month PO posterior Km was 42.94 (±1.19) D (range, 40.34 to 45.98 D), a statistically significant difference (p < 0.001). Six months after PRK in our patients, the posterior Km was related to the preoperative anterior Km by the following regression formula: y = -4.9707x + 12.457 (R² = 0.7656), where x is the PO posterior Km and y is the preoperative anterior Km, similar to the formula calculated by Saiki. Care should be taken in using the Saiki formula to calculate the preoperative Km in patients who underwent PRK.
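Applying the reported regression is a one-liner in R; the posterior Km value below is invented for illustration.

pre_anterior_km <- function(post_km) -4.9707 * post_km + 12.457
pre_anterior_km(-6.2)                    # about 43.3 D for a posterior Km of -6.2 D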
Efficient calculation of the energy of a molecule in an arbitrary electric field
NASA Astrophysics Data System (ADS)
Pulay, Peter; Janowski, Tomasz
In thermodynamic (e.g., Monte Carlo) simulations with electronic embedding, the energy of the active site or solute must be calculated for millions of configurations of the environment (solvent or protein matrix) to obtain reliable statistics. This precludes the use of accurate but expensive ab initio and density functional techniques. Except for the immediate neighbors, the effect of the environment is electrostatic. We show that the energy of a molecule in the irregular field of the environment can be determined very efficiently by expanding the electric potential in known functions, and precalculating the first and second order response of the molecule to the components of the potential. These generalized multipole moments and polarizabilities allow the calculation of the energy of the system without further ab initio calculations. Several expansion functions were explored: polynomials, distributed inverse powers, and sine functions. The latter provide the numerically most stable fit but require new types of integrals. Distributed inverse powers can be simulated using dummy atoms, and energies calculated this way provide a very good approximation to the actual energies in the field of the environment.
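Once the response quantities are tabulated, the energy evaluation reduces to a cheap quadratic form, which is the point of the method. In the R sketch below, m (generalized moments), P (generalized polarizabilities), and the expansion coefficients are invented placeholders, not values from the study.

energy_in_field <- function(E0, m, P, coefs) {
  E0 + sum(m * coefs) + 0.5 * drop(t(coefs) %*% P %*% coefs)
}

m <- c(-0.40, 0.12, 0.05)                # first-order response (mock)
P <- diag(c(-1.2, -0.8, -0.5))           # second-order response (mock)
energy_in_field(E0 = -76.4, m = m, P = P, coefs = c(0.01, -0.02, 0.005))

Each new environment configuration then costs only this arithmetic rather than a fresh ab initio calculation.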
Testing the Predictive Power of Coulomb Stress on Aftershock Sequences
NASA Astrophysics Data System (ADS)
Woessner, J.; Lombardi, A.; Werner, M. J.; Marzocchi, W.
2009-12-01
Empirical and statistical models of clustered seismicity are usually strongly stochastic and perceived to be uninformative in their forecasts, since only marginal distributions are used, such as the Omori-Utsu and Gutenberg-Richter laws. In contrast, so-called physics-based aftershock models, based on seismic rate changes calculated from Coulomb stress changes and rate-and-state friction, make more specific predictions: anisotropic stress shadows and multiplicative rate changes. We test the predictive power of models based on Coulomb stress changes against statistical models, including the popular Short Term Earthquake Probabilities and Epidemic-Type Aftershock Sequences models: we score and compare retrospective forecasts on the aftershock sequences of the 1992 Landers, USA, the 1997 Colfiorito, Italy, and the 2008 Selfoss, Iceland, earthquakes. To quantify predictability, we use likelihood-based metrics that test the consistency of the forecasts with the data, including modified and existing tests used in prospective forecast experiments within the Collaboratory for the Study of Earthquake Predictability (CSEP). Our results indicate that a statistical model performs best. Moreover, two Coulomb model classes seem unable to compete: models based on deterministic Coulomb stress changes calculated from a given fault-slip model, and those based on fixed receiver faults. One model of Coulomb stress changes does perform well and sometimes outperforms the statistical models, but its predictive information is diluted because of uncertainties included in the fault-slip model. Our results suggest that models based on Coulomb stress changes need to incorporate stochastic features that represent model and data uncertainty.
Prevalence of diseases and statistical power of the Japan Nurses' Health Study.
Fujita, Toshiharu; Hayashi, Kunihiko; Katanoda, Kota; Matsumura, Yasuhiro; Lee, Jung Su; Takagi, Hirofumi; Suzuki, Shosuke; Mizunuma, Hideki; Aso, Takeshi
2007-10-01
The Japan Nurses' Health Study (JNHS) is a long-term, large-scale cohort study investigating the effects of various lifestyle factors and healthcare habits on the health of Japanese women. Based on the limited statistical data then available on disease incidence among Japanese women, the initial sample size was tentatively set at 50,000 during the design phase. The actual number of women who agreed to participate in follow-up surveys was approximately 18,000. Taking into account the actual sample size and new information on disease frequency obtained during the baseline component, we established the prevalence of past diagnoses of target diseases, predicted their incidence, and calculated the statistical power for the JNHS follow-up surveys. For all diseases except ovarian cancer, the prevalence of a past diagnosis increased markedly with age, and incidence rates could be predicted based on the degree of increase in prevalence between two adjacent 5-yr age groups. The predicted incidence rate for uterine myoma, hypercholesterolemia, and hypertension was ≥3.0 (per 1,000 women per year), while the rate for thyroid disease, hepatitis, gallstone disease, and benign breast tumor was predicted to be ≥1.0. For these diseases, the statistical power to detect risk factors with a relative risk of 1.5 or more within ten years was 70% or higher.
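For a rough sense of the scale of such calculations, a two-proportion approximation in base R follows; the even exposure split of the roughly 18,000 participants, the 10-year cumulative risk conversion, and the alpha level are all assumptions for illustration.

p0 <- 1 - (1 - 0.003)^10                 # 10-year risk at 3.0 per 1,000 per year
p1 <- 1.5 * p0                           # approximate risk at relative risk 1.5
power.prop.test(n = 9000, p1 = p0, p2 = p1, sig.level = 0.05)$power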
General Framework for Meta-analysis of Rare Variants in Sequencing Association Studies
Lee, Seunggeun; Teslovich, Tanya M.; Boehnke, Michael; Lin, Xihong
2013-01-01
We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variant tests, such as burden tests and variance component tests. Because estimation of regression coefficients of individual rare variants is often unstable or not feasible, the proposed method avoids this difficulty by instead calculating score statistics, which only require fitting the null model for each study, and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are conducted from study-specific summary statistics, specifically a score statistic for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods are able to incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis that directly pools individual-level genotype data. We conduct extensive simulations to evaluate the performance of our methods by varying levels of heterogeneity across studies, and we apply the proposed methods to meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels. PMID:23768515
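A minimal sketch of the score-aggregation idea for a burden test under homogeneous effects is below; the per-study summaries are invented, and the authors' full framework also covers variance component tests, between-variant covariance, and heterogeneous effects.

meta_burden <- function(U, V) {          # per-study scores and their variances
  z <- sum(U) / sqrt(sum(V))
  c(Z = z, P = 2 * pnorm(-abs(z)))
}
meta_burden(U = c(4.1, -0.8, 2.6), V = c(5.2, 3.9, 4.4))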
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
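The bound on retrospective power is easy to verify by simulation; the R sketch below computes "post hoc" power at the observed effect size for two-sample t tests that failed to reject, under one invented scenario.

set.seed(5)
n <- 30; true_d <- 0.4
obs_power <- replicate(5000, {
  x <- rnorm(n, true_d); y <- rnorm(n)
  if (t.test(x, y)$p.value >= 0.05) {    # not rejected
    d_obs <- (mean(x) - mean(y)) / sqrt((var(x) + var(y)) / 2)
    power.t.test(n = n, delta = abs(d_obs), sd = 1)$power
  } else NA
})
max(obs_power, na.rm = TRUE)             # stays near or below 0.50

No matter the true power (here the true effect is d = 0.4), power recomputed from a nonsignificant observed effect cannot exceed roughly one half.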
Publication bias was not a good reason to discourage trials with low power.
Borm, George F; den Heijer, Martin; Zielhuis, Gerhard A
2009-01-01
The objective was to investigate whether it is justified to discourage trials with less than 80% power. Trials with low power are unlikely to produce conclusive results, but their findings can be used by pooling them in a meta-analysis. However, such an analysis may be biased, because trials with low power are likely to have a nonsignificant result and are less likely to be published than trials with a statistically significant outcome. We simulated several series of studies with varying degrees of publication bias and then calculated the "real" one-sided type I error and the bias of meta-analyses with a "nominal" error rate (significance level) of 2.5%. In single trials, in which heterogeneity was set at zero, low, and high, the error rates were 2.3%, 4.7%, and 16.5%, respectively. In multiple trials with 80%-90% power and a publication rate of 90% for nonsignificant results, the error rates could be as high as 5.1%. When the power was 50% and the publication rate of nonsignificant results was 60%, the error rates did not exceed 5.3%, whereas the bias was at most 15% of the difference used in the power calculation. The impact of publication bias does not warrant the exclusion of trials with 50% power.
The power and promise of RNA-seq in ecology and evolution.
Todd, Erica V; Black, Michael A; Gemmell, Neil J
2016-03-01
Reference is regularly made to the power of new genomic sequencing approaches. Using powerful technology, however, is not the same as having the necessary power to address a research question with statistical robustness. In the rush to adopt new and improved genomic research methods, limitations of technology and experimental design may be initially neglected. Here, we review these issues with regard to RNA sequencing (RNA-seq). RNA-seq adds large-scale transcriptomics to the toolkit of ecological and evolutionary biologists, enabling differential gene expression (DE) studies in nonmodel species without the need for prior genomic resources. High biological variance is typical of field-based gene expression studies and means that larger sample sizes are often needed to achieve the same degree of statistical power as clinical studies based on data from cell lines or inbred animal models. Sequencing costs have plummeted, yet RNA-seq studies still underutilize biological replication. Finite research budgets force a trade-off between sequencing effort and replication in RNA-seq experimental design. However, clear guidelines for negotiating this trade-off, while taking into account study-specific factors affecting power, are currently lacking. Study designs that prioritize sequencing depth over replication fail to capitalize on the power of RNA-seq technology for DE inference. Significant recent research effort has gone into developing statistical frameworks and software tools for power analysis and sample size calculation in the context of RNA-seq DE analysis. We synthesize progress in this area and derive an accessible rule-of-thumb guide for designing powerful RNA-seq experiments relevant in eco-evolutionary and clinical settings alike. © 2016 John Wiley & Sons Ltd.
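A crude per-gene illustration of the replication point can be simulated in R; the log-normal expression model, biological CV, fold change, and alpha below are all invented, and real RNA-seq power tools model counts more faithfully.

de_power <- function(n, fold = 2, bio_cv = 0.4, nsim = 4000, alpha = 0.01) {
  sdlog <- sqrt(log(1 + bio_cv^2))       # biological variability dominates
  mean(replicate(nsim, {
    a <- rlnorm(n, log(100),        sdlog)
    b <- rlnorm(n, log(100 * fold), sdlog)
    t.test(log(a), log(b))$p.value < alpha
  }))
}

set.seed(8)
sapply(c(3, 5, 8, 12), de_power)         # power climbs steeply with replicates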
Flynn, Kevin; Swintek, Joe; Johnson, Rodney
2017-02-01
Because of various Congressional mandates to protect the environment from endocrine disrupting chemicals (EDCs), the United States Environmental Protection Agency (USEPA) initiated the Endocrine Disruptor Screening Program. In the context of this framework, the Office of Research and Development within the USEPA developed the Medaka Extended One Generation Reproduction Test (MEOGRT) to characterize the endocrine action of a suspected EDC. One important endpoint of the MEOGRT is the fecundity of medaka breeding pairs. Power analyses were conducted to determine the number of replicates needed in proposed test designs and to determine the effects that varying reproductive parameters (e.g., mean fecundity, variance, and days with no egg production) would have on the statistical power of the test. The MEOGRT Reproduction Power Analysis Tool (MRPAT) is a software tool developed to expedite these power analyses by both calculating estimates of the needed reproductive parameters (e.g., population mean and variance) and performing the power analysis under user-specified scenarios. Example scenarios are detailed that highlight the importance of the reproductive parameters for statistical power. When control fecundity is increased from 21 to 38 eggs per pair per day and the variance decreased from 49 to 20, the gain in power is equivalent to increasing replication by 2.5 times. On the other hand, if 10% of the breeding pairs, including controls, do not spawn, the power to detect a 40% decrease in fecundity drops to 0.54 from nearly 0.98 when all pairs have some level of egg production. Perhaps most importantly, MRPAT was used to inform the decision-making process that led to the final recommendation of the MEOGRT to have 24 control breeding pairs and 12 breeding pairs in each exposure group. Published by Elsevier Inc.
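MRPAT itself is not reproduced here, but the flavor of its calculations can be sketched in R with negative-binomial egg counts, using the control mean of 21 and variance of 49 quoted above; the Wilcoxon test, the simulation size, and the 24-versus-12 replication (taken from the final recommendation) are otherwise assumptions.

fecundity_power <- function(n_ctrl = 24, n_trt = 12, mu = 21, v = 49,
                            drop = 0.4, p_zero = 0, nsim = 2000, alpha = 0.05) {
  size <- mu^2 / (v - mu)                # NB size from mean and variance (v > mu)
  mean(replicate(nsim, {
    ctrl <- rnbinom(n_ctrl, size = size, mu = mu)
    trt  <- rnbinom(n_trt,  size = size, mu = mu * (1 - drop))
    ctrl[runif(n_ctrl) < p_zero] <- 0    # non-spawning pairs
    trt[runif(n_trt) < p_zero]   <- 0
    wilcox.test(ctrl, trt, exact = FALSE)$p.value < alpha
  }))
}

set.seed(6)
fecundity_power()                        # all pairs spawn
fecundity_power(p_zero = 0.1)            # 10% of pairs never spawn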
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 arctic foxes (Alopex lagopus) originating from two types that differ in body size. The association between alleles of six selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Because of the small sample size and the resulting sparseness of the data table, we could not rely on the asymptotic distributions of the tests in hypothesis testing. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests, with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for three markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
NASA Astrophysics Data System (ADS)
Lea, D. M.; Legleiter, C. J.
2014-12-01
Stream power represents the rate of energy expenditure along a river and can be calculated using topographic data acquired via remote sensing. This study used remotely sensed data and field measurements to quantitatively relate temporal changes in the form of Soda Butte Creek, a gravel-bed river in northeastern Yellowstone National Park, to stream power gradients along an 8 km reach. Aerial photographs from 1994-2012 and cross-section surveys were used to assess lateral channel mobility and develop a morphologic sediment budget for quantifying net sediment flux for a series of budget cells. A drainage area-to-discharge relationship and digital elevation model (DEM) developed from LiDAR data were used to obtain the discharge and slope values, respectively, needed to calculate stream power. Local and lagged relationships between mean stream power gradient at median peak discharge and volumes of erosion, deposition, and net sediment flux were quantified via spatial cross-correlation analyses. Similarly, autocorrelations of locational probabilities and sediment fluxes were used to examine spatial patterns of channel mobility and sediment transfer. Energy expended above critical stream power was calculated for each time period to relate the magnitude and duration of peak flows to the total volume of sediment eroded or deposited during each time increment. Our results indicated a lack of strong correlation between stream power gradients and sediment flux, which we attributed to the geomorphic complexity of the Soda Butte Creek watershed and the inability of our relatively simple statistical approach to link sediment dynamics expressed at a sub-budget cell scale to larger-scale driving forces such as stream power gradients. Future studies should compare the moderate spatial resolution techniques used in this study to very-high resolution data acquired from new fluvial remote sensing technologies to better understand the amount of error associated with stream power, sediment transport, and channel change calculated from historical datasets.
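For reference, the stream power computation at the core of this analysis is simple once discharge and slope are extracted: total stream power per unit channel length is omega = rho * g * Q * S. The discharge and slope values in the R one-liner below are invented, not taken from Soda Butte Creek.

stream_power <- function(Q, S, rho = 1000, g = 9.81) rho * g * Q * S   # W/m
stream_power(Q = 12, S = 0.015)          # ~1,766 W/m for a steep gravel-bed reach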
Liu, Lu; Wei, Jianrong; Zhang, Huishu; Xin, Jianhong; Huang, Jiping
2013-01-01
Because classical music has greatly affected our life and culture in its long history, it has attracted extensive attention from researchers seeking to understand the laws behind it. Based on statistical physics, here we use a different method to investigate classical music, namely, by analyzing cumulative distribution functions (CDFs) and autocorrelation functions of pitch fluctuations in compositions. We analyze 1,876 compositions by five representative classical composers spanning 164 years, from Bach, to Mozart, to Beethoven, to Mendelsohn, and to Chopin. We report that the biggest pitch fluctuations of a composer gradually increase as time evolves from Bach's time to Mendelsohn's/Chopin's time. In particular, for the compositions of a composer, the positive and negative tails of a CDF of pitch fluctuations are distributed not only as power laws (with the scale-free property), but also symmetrically (namely, the probability of a treble following a bass and that of a bass following a treble are essentially the same for each composer). The power-law exponent decreases as time elapses. Further, we calculate the autocorrelation function of the pitch fluctuations. The autocorrelation function shows a power-law distribution for each composer, and the power-law exponents vary with the composers, indicating their different levels of long-range correlation of notes. This work not only suggests a way to understand and develop music from the viewpoint of statistical physics, but also enriches traditional statistical physics by analyzing music.
Statistical testing and power analysis for brain-wide association study.
Gong, Weikang; Wan, Lin; Lu, Wenlian; Ma, Liang; Cheng, Fan; Cheng, Wei; Grünewald, Stefan; Feng, Jianfeng
2018-04-05
The identification of connexel-wise associations, which involves examining functional connectivities between pairwise voxels across the whole brain, is both statistically and computationally challenging. Although such a connexel-wise methodology has recently been adopted by brain-wide association studies (BWAS) to identify connectivity changes in several mental disorders, such as schizophrenia, autism and depression, multiple-comparison correction and power analysis methods designed specifically for connexel-wise analysis are still lacking. We therefore report the development of a rigorous statistical framework for connexel-wise significance testing based on Gaussian random field theory. It includes controlling the family-wise error rate (FWER) of multiple hypothesis tests using topological inference methods, and calculating power and sample size for a connexel-wise study. Our theoretical framework can control the false-positive rate accurately, as validated empirically using two resting-state fMRI datasets. Compared with Bonferroni correction and false discovery rate (FDR), it can reduce the false-positive rate and increase statistical power by appropriately utilizing the spatial information of fMRI data. Importantly, our method bypasses the need for non-parametric permutation to correct for multiple comparisons and can therefore efficiently tackle large datasets with high-resolution fMRI images. The utility of our method is shown in a case-control study: our approach can identify altered functional connectivities in a major depression disorder dataset, whereas existing methods fail. A software package is available at https://github.com/weikanggong/BWAS. Copyright © 2018 Elsevier B.V. All rights reserved.
Statistical analysis of measured free-space laser signal intensity over a 2.33 km optical path.
Tunick, Arnold
2007-10-17
Experimental research is conducted to determine the characteristic behavior of high-frequency laser signal intensity data collected over a 2.33 km optical path. Results focus mainly on calculated power spectra and frequency distributions. In addition, a model is developed to calculate the optical turbulence intensity (Cn²) as a function of receiving and transmitting aperture diameter, log-amplitude variance, and path length. Initial comparisons of calculated to measured Cn² data are favorable. It is anticipated that this kind of signal data analysis will benefit laser communication systems development and testing at the U.S. Army Research Laboratory (ARL) and elsewhere.
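As an indication of the kind of relation involved, under weak scintillation the plane-wave Rytov theory gives a log-amplitude variance of sigma_chi² = 0.307 Cn² k^(7/6) L^(11/6), and inverting it is a one-liner in R. This simplified sketch omits the aperture-diameter terms of the paper's model, and the wavelength and variance values are assumptions.

cn2_from_sigma <- function(sigma_chi2, L, lambda = 1.55e-6) {
  k <- 2 * pi / lambda                   # optical wavenumber
  sigma_chi2 / (0.307 * k^(7/6) * L^(11/6))
}
cn2_from_sigma(sigma_chi2 = 0.05, L = 2330)   # Cn2 in m^(-2/3)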
Statistical models of power-combining circuits for O-type traveling-wave tube amplifiers
NASA Astrophysics Data System (ADS)
Kats, A. M.; Klinaev, Iu. V.; Gleizer, V. V.
1982-11-01
The design outlined here allows for imbalances in the power of the devices being combined and for differences in phase. It is shown that the combination coefficient follows a beta distribution of the first kind when a small number of devices are combined, and that it is asymptotically normal with respect to both the number of devices and the phase variance of the tubes' output signals. Relations are derived that make it possible to calculate the efficiency of a power-combining circuit and the reproducibility of the design parameters when standard devices are used.
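A Monte Carlo sketch of the qualitative claim (my construction, not the authors' model): combining N unit-amplitude signals with Gaussian phase jitter gives a combination coefficient that is skewed on [0, 1] for small N and increasingly Gaussian as N grows.

import numpy as np

def combination_coefficient(n_devices, phase_sigma, n_trials, rng):
    """Fraction of the ideal coherent power actually obtained."""
    phases = rng.normal(0.0, phase_sigma, size=(n_trials, n_devices))
    field = np.exp(1j * phases).sum(axis=1)
    return np.abs(field) ** 2 / n_devices**2

rng = np.random.default_rng(1)
for n in (2, 4, 16, 64):
    c = combination_coefficient(n, phase_sigma=0.5, n_trials=20_000, rng=rng)
    print("N = %2d: mean %.3f, std %.3f" % (n, c.mean(), c.std()))
# For small N the histogram of c is strongly skewed (beta-like); for large N
# it tightens toward a Gaussian, matching the asymptotic result above.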
ERIC Educational Resources Information Center
Vuolo, Mike; Uggen, Christopher; Lageson, Sarah
2016-01-01
Given their capacity to identify causal relationships, experimental audit studies have grown increasingly popular in the social sciences. Typically, investigators send fictitious auditors who differ by a key factor (e.g., race) to particular experimental units (e.g., employers) and then compare treatment and control groups on a dichotomous outcome…
Kleinert, H; Zatloukal, V
2013-11-01
The statistics of rare events, the so-called black-swan events, is governed by non-Gaussian distributions with heavy power-like tails. We calculate the Green functions of the associated Fokker-Planck equations and solve the related stochastic differential equations. We also discuss the subject in the framework of path integration.
Vallejo, Guillermo; Ato, Manuel; Fernández García, Paula; Livacic Rojas, Pablo E; Tuero Herrero, Ellián
2016-08-01
S. Usami (2014) describes a method to realistically determine sample size in longitudinal research using a multilevel model. The present research extends the aforementioned work to situations where it is likely that the assumption of homogeneity of the errors across groups is not met and the error term does not follow a scaled identity covariance structure. For this purpose, we followed a procedure based on transforming the variance components of the linear growth model and the parameter related to the treatment effect into specific and easily understandable indices. At the same time, we provide the appropriate statistical machinery for researchers to use when data loss is unavoidable, and changes in the expected value of the observed responses are not linear. The empirical powers based on unknown variance components were virtually the same as the theoretical powers derived from the use of statistically processed indexes. The main conclusion of the study is the accuracy of the proposed method to calculate sample size in the described situations with the stipulated power criteria.
Statistics of some atmospheric turbulence records relevant to aircraft response calculations
NASA Technical Reports Server (NTRS)
Mark, W. D.; Fischer, R. W.
1981-01-01
Methods for characterizing atmospheric turbulence are described. The methods illustrated include maximum likelihood estimation of the integral scale and intensity of records obeying the von Karman transverse power spectral form, constrained least-squares estimation of the parameters of a parametric representation of autocorrelation functions, estimation of the power spectral density of the instantaneous variance of a record with temporally fluctuating variance, and estimation of the probability density functions of various turbulence components. Descriptions of the computer programs used in the computations are given, and a full listing of these programs is included.
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
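The gap between intended and actual power is easy to reproduce in a small simulation (a sketch under assumed values; the true effect, original sample size, and planning rule below are illustrative choices, not the authors' exact design):

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_d, n_orig = 0.3, 50          # true effect; original per-group n
n_sims, alpha = 2000, 0.05
hits = 0
for _ in range(n_sims):
    # Original study: estimated d carries sampling error (no publication
    # bias here; 2/n is the approximate variance of d for small effects).
    d_hat = max(rng.normal(true_d, np.sqrt(2 / n_orig)), 0.05)
    # Plan the replication for 80% power at the *estimated* d ...
    n_rep = int(np.ceil(2 * ((stats.norm.ppf(1 - alpha / 2)
                              + stats.norm.ppf(0.80)) / d_hat) ** 2))
    # ... but the replication is generated from the *true* d.
    g1 = rng.normal(0, 1, n_rep)
    g2 = rng.normal(true_d, 1, n_rep)
    hits += stats.ttest_ind(g1, g2).pvalue < alpha
print("intended power: 0.80, actual power: %.2f" % (hits / n_sims))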
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dunn, Floyd E.; Hu, Lin-wen; Wilson, Erik
The STAT code was written to automate many of the steady-state thermal hydraulic safety calculations for the MIT research reactor, both for conversion of the reactor from high enrichment uranium fuel to low enrichment uranium fuel and for future fuel re-loads after the conversion. A Monte-Carlo statistical propagation approach is used to treat uncertainties in important parameters in the analysis. These safety calculations are ultimately intended to protect against high fuel plate temperatures due to critical heat flux or departure from nucleate boiling or onset of flow instability; but additional margin is obtained by basing the limiting safety settings on avoiding onset of nucleate boiling. STAT7 can simultaneously analyze all of the axial nodes of all of the fuel plates and all of the coolant channels for one stripe of a fuel element. The stripes run the length of the fuel, from the bottom to the top. Power splits are calculated for each axial node of each plate to determine how much of the power goes out each face of the plate. By running STAT7 multiple times, full core analysis has been performed by analyzing the margin to ONB for each axial node of each stripe of each plate of each element in the core.
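STAT7 itself is not reproduced here, but the Monte Carlo propagation idea can be sketched generically (all inputs, coefficients, and the ONB limit below are hypothetical placeholders, not MIT reactor values):

import numpy as np

rng = np.random.default_rng(3)
n = 100_000
power_kw  = rng.normal(6000, 120, n)    # channel power, hypothetical
flow_kgps = rng.normal(150, 4.5, n)     # coolant flow, hypothetical
t_inlet_c = rng.normal(42, 1.0, n)      # inlet temperature, hypothetical

# Hypothetical monotone response: wall temperature rises with power and
# inlet temperature and falls with flow (placeholder correlation only).
t_wall = t_inlet_c + 0.011 * power_kw - 0.08 * (flow_kgps - 150)
margin = 118.0 - t_wall                 # margin to an assumed ONB limit, C

# Report a statistically bounding (percentile-style) margin.
print("mean margin %.1f C, 5th percentile %.1f C"
      % (margin.mean(), np.percentile(margin, 5)))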
Optimized design and analysis of preclinical intervention studies in vivo
Laajala, Teemu D.; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero
2016-01-01
Recent reports have called into question the reproducibility, validity and translatability of the preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability, and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby leading to saving of both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable the researchers to conduct adequately-powered and fully-blinded preclinical intervention studies, with the aim to accelerate the discovery of new therapeutic interventions. PMID:27480578
Higher order statistical moment application for solar PV potential analysis
NASA Astrophysics Data System (ADS)
Basri, Mohd Juhari Mat; Abdullah, Samizee; Azrulhisham, Engku Ahmad; Harun, Khairulezuan
2016-10-01
Solar photovoltaic energy is an alternative to fossil fuels, which are depleting and contribute to global warming. However, this renewable resource is too variable and intermittent to be relied on alone, so knowledge of the energy potential of a site is very important before building a solar photovoltaic power generation system there. Here, a higher-order statistical moment model is applied to data collected from a 5 MW grid-connected photovoltaic system. Because the skewness and kurtosis of the AC power and solar irradiance distributions of the solar farm change dynamically, the Pearson system, in which the probability distribution is selected by matching theoretical moments with the empirical moments of a distribution, is well suited for this purpose. Taking advantage of the Pearson system implementation in MATLAB, software was developed to support data processing, distribution fitting, and potential analysis for future projections of AC power and solar irradiance availability.
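A sketch of the moment-matching step (the kappa criterion below is the classical Pearson selection rule as I recall it; the stand-in data and all parameters are mine, not the 5 MW farm measurements):

import numpy as np
from scipy import stats

def pearson_kappa(x):
    """Classical Pearson criterion kappa from sample skewness/kurtosis."""
    b1 = stats.skew(x) ** 2                 # beta_1: squared skewness
    b2 = stats.kurtosis(x, fisher=False)    # beta_2: raw kurtosis
    return b1 * (b2 + 3) ** 2 / (4 * (4 * b2 - 3 * b1) * (2 * b2 - 3 * b1 - 6))

rng = np.random.default_rng(4)
ac_power = rng.lognormal(mean=0.0, sigma=0.5, size=5000)   # stand-in data
print("Pearson criterion kappa = %.3f" % pearson_kappa(ac_power))
# kappa < 0 suggests Type I (beta-like); 0 < kappa < 1 Type IV; kappa > 1
# Type VI, with the usual boundary special cases (e.g. Type III).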
Camps, Vicente J; Piñero, David P; Caravaca-Arens, Esteban; de Fez, Dolores; Pérez-Cambrodí, Rafael J; Artola, Alberto
2014-09-01
The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate a variable keratometric index (nkadj) that minimizes the error of the keratometric approach to corneal power. The nkexact value was determined by setting the difference (ΔPc) between keratometric corneal power (Pk) and Gaussian corneal power equal to 0. The nkexact was defined as the value associated with an equivalent difference in the magnitude of ΔPc for extreme values of posterior corneal radius (r2c) for each anterior corneal radius value (r1c). This nkadj was considered for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of True Net Power with Gaussian corneal power, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339, depending on the eye model analyzed. All the nkadj values adjusted perfectly to 8 linear algorithms. Differences between Pkadj and Gaussian corneal power did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power, as well as Pk(1.3375) and Pkadj, were statistically different (P < 0.01), whereas no differences were found between Gaussian corneal power and Pkadj (P > 0.01). The use of a single value of nk for the calculation of total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie
2013-08-01
The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a medium effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the maximum SD from 10 samples were used. Greater sample size is needed to achieve a higher proportion of studies having actual power of 80%. This study only addressed sample size calculation for continuous outcome variables. We recommend using the 60% UCL of SD, maximum SD, 80th-percentile SD, and 75th-percentile SD to calculate sample size when 1 or 2 samples, 3 samples, 4-5 samples, and more than 5 samples of data are available, respectively. Using the sample SD or average SD to calculate sample size should be avoided.
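A sketch of the recommendation (illustrative numbers; the population SD of 44 echoes the simulation above, the rest are assumptions): plan the sample size from an upper confidence limit (UCL) of the pilot-sample SD rather than from the raw sample SD.

import numpy as np
from scipy import stats

def ucl_sd(s, n, level):
    """One-sided upper confidence limit for sigma from a sample SD."""
    return s * np.sqrt((n - 1) / stats.chi2.ppf(1 - level, n - 1))

def n_per_group(sd, delta=22.0, alpha=0.05, power=0.80):
    """Normal-approximation two-sample sample size per group."""
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z * sd / delta) ** 2))

rng = np.random.default_rng(5)
s = rng.normal(0, 44.0, 15).std(ddof=1)     # one pilot sample, n = 15
s60 = ucl_sd(s, 15, 0.60)
print("sample SD %.1f -> n = %d per group" % (s, n_per_group(s)))
print("60%% UCL  %.1f -> n = %d per group" % (s60, n_per_group(s60)))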
Wind power error estimation in resource assessments.
Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel
2015-01-01
Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
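A first-order propagation sketch (the power curve below is a made-up placeholder, not one of the 28 Lagrange-fitted curves):

import numpy as np

def power_kw(v):
    """Simplified turbine curve: cubic between cut-in and rated, then flat."""
    v = np.asarray(v, dtype=float)
    p = np.where(v < 3.0, 0.0,
                 np.where(v < 12.0, 2000 * ((v - 3) / 9) ** 3, 2000.0))
    return np.where(v > 25.0, 0.0, p)       # cut-out

def power_error(v, rel_speed_err=0.10, dv=1e-3):
    """Propagate a wind-speed error via the numerical slope dP/dv."""
    slope = (power_kw(v + dv) - power_kw(v - dv)) / (2 * dv)
    return np.abs(slope) * rel_speed_err * v

v = np.array([5.0, 8.0, 11.0, 15.0])
print("P(v) kW:", power_kw(v))
print("dP   kW:", np.round(power_error(v), 1))
# Near the flat rated region the propagated error shrinks, which is how a
# 10% speed error can average out to roughly a 5% power error.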
Wellek, Stefan
2017-02-28
In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of and the covariance between the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows: With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite the equivalence in terms of accuracy, the sample size estimates obtained by means of the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
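The nonconditional formula itself is not reproduced here, but a simulation of the kind used in the paper's evaluation is easy to sketch (the ordinal scale and category probabilities are illustrative assumptions):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
levels = np.arange(5)                        # 5-point ordinal scale -> ties
p_ctrl = np.array([0.30, 0.30, 0.20, 0.15, 0.05])
p_trt  = np.array([0.15, 0.25, 0.25, 0.20, 0.15])   # shifted upward

def empirical_power(n_per_group, n_sims=4000, alpha=0.05):
    hits = 0
    for _ in range(n_sims):
        x = rng.choice(levels, size=n_per_group, p=p_ctrl)
        y = rng.choice(levels, size=n_per_group, p=p_trt)
        # mannwhitneyu uses mid-ranks and a tie-corrected approximation.
        hits += stats.mannwhitneyu(x, y, alternative="two-sided").pvalue < alpha
    return hits / n_sims

for n in (40, 60, 80):
    print("n per group %d -> power %.2f" % (n, empirical_power(n)))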
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W., Jr.
2003-01-01
A simple power law model consisting of a single spectral index, σ1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index σ2 > σ1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter σ1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum and detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein, and the conditions under which they are attained in practice are investigated.
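For the simple (unbroken) power law the ML estimator has a closed form (a standard result, e.g. the form popularized by Clauset, Shalizi, and Newman; this sketch is mine, not the paper's code):

import numpy as np

def ml_power_law_index(energies, e_min):
    """Closed-form MLE for p(E) ~ E^-sigma above e_min, plus its
    asymptotic (CRB-level) standard error."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    n = e.size
    sigma_hat = 1.0 + n / np.log(e / e_min).sum()
    return sigma_hat, (sigma_hat - 1.0) / np.sqrt(n)

# Check on synthetic data with known index 2.7 (inverse-CDF sampling).
rng = np.random.default_rng(7)
u = rng.uniform(size=50_000)
energies = 1.0 * (1 - u) ** (-1 / (2.7 - 1))   # E_min = 1
print("sigma_hat = %.3f +/- %.3f" % ml_power_law_index(energies, 1.0))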
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3-, and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and the Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry readings at 2, 3, and 4.5 mm (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings at the 2-, 3-, and 4.5-mm zones for either the Holladay 2 formula (P = 0.14) or the SRKT formula (P = 0.47). The lowest mean absolute refraction error for the Holladay 2 equivalent keratometry reading was at the 4.5-mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for the SRKT equivalent keratometry reading was at the 4.5-mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of the IOL Master and the Pentacam equivalent keratometry reading, the best agreement was with Holladay 2 and the 4.5-mm equivalent keratometry reading, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. However, the keratometry measurements of the IOL Master and the Pentacam equivalent keratometry reading at 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
ERIC Educational Resources Information Center
Gray, Heewon Lee; Burgermaster, Marissa; Tipton, Elizabeth; Contento, Isobel R.; Koch, Pamela A.; Di Noia, Jennifer
2016-01-01
Objective: Sample size and statistical power calculation should consider clustering effects when schools are the unit of randomization in intervention studies. The objective of the current study was to investigate how student outcomes are clustered within schools in an obesity prevention trial. Method: Baseline data from the Food, Health &…
ERIC Educational Resources Information Center
Aragón, Sonia; Lapresa, Daniel; Arana, Javier; Anguera, M. Teresa; Garzón, Belén
2017-01-01
Polar coordinate analysis is a powerful data reduction technique based on the Zsum statistic, which is calculated from adjusted residuals obtained by lag sequential analysis. Its use has been greatly simplified since the addition of a module in the free software program HOISAN for performing the necessary computations and producing…
Multiple testing and power calculations in genetic association studies.
So, Hon-Cheong; Sham, Pak C
2011-01-01
Modern genetic association studies typically involve multiple single-nucleotide polymorphisms (SNPs) and/or multiple genes. With the development of high-throughput genotyping technologies and the reduction in genotyping cost, investigators can now assay up to a million SNPs for direct or indirect association with disease phenotypes. In addition, some studies involve multiple disease or related phenotypes and use multiple methods of statistical analysis. The combination of multiple genetic loci, multiple phenotypes, and multiple methods of evaluating associations between genotype and phenotype means that modern genetic studies often involve the testing of an enormous number of hypotheses. When multiple hypothesis tests are performed in a study, there is a risk of inflation of the type I error rate (i.e., the chance of falsely claiming an association when there is none). Several methods for multiple-testing correction are in popular use, and they all have strengths and weaknesses. Because no single method is universally adopted or always appropriate, it is important to understand the principles, strengths, and weaknesses of the methods so that they can be applied appropriately in practice. In this article, we review the three principal methods for multiple-testing correction and provide guidance for calculating statistical power.
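A sketch of the power side of that guidance for a 1-df chi-square association test (the noncentrality values and the number of tests are illustrative):

import numpy as np
from scipy import stats

def power_chi2_1df(ncp, alpha):
    """Power of a 1-df chi-square test at significance level alpha,
    given the noncentrality parameter (NCP) of the alternative."""
    crit = stats.chi2.ppf(1 - alpha, df=1)
    return stats.ncx2.sf(crit, df=1, nc=ncp)

n_tests = 1_000_000
alpha_gw = 0.05 / n_tests                 # Bonferroni, ~5e-8
for ncp in (10, 20, 30, 40):
    print("NCP %2d: nominal power %.2f, genome-wide power %.2f"
          % (ncp, power_chi2_1df(ncp, 0.05), power_chi2_1df(ncp, alpha_gw)))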
Distribution of the Crystalline Lens Power In Vivo as a Function of Age.
Jongenelen, Sien; Rozema, Jos J; Tassignon, Marie-José
2015-11-01
To observe the age-related changes in crystalline lens power in vivo in a noncataractous European population. Data were obtained through Project Gullstrand, a multicenter population study with data from healthy phakic subjects between 20 and 85 years old. One randomly selected eye per subject was used. Lens power was calculated using the modified Bennett-Rabbetts method, using biometry data from an autorefractometer, Oculus Pentacam, and Haag-Streit Lenstar. The study included 1069 Caucasian subjects (490 men, 579 women) with a mean age of 44.2 ± 14.2 years and mean lens power of 24.96 ± 2.18 diopters (D). The average lens power showed a statistically significant decrease as a function of age, with a steeper rate of decrease after the age of 55. The highest crystalline lens power was found in emmetropic eyes and eyes with a short axial length. The correlation of lens power with different refractive components was statistically significant for axial length (r = -0.523, P < 0.01) and anterior chamber depth (r = -0.161, P < 0.01), but not for spherical equivalent and corneal power (P > 0.05). This in vivo study showed a monotonic decrease in crystalline lens power with age, with a steeper decline after 55 years. While this finding fundamentally concurs with previous in vivo studies, it is at odds with studies performed on donor eyes that reported lens power increases after the age of 55.
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than under tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ~50 mg SOC/cm^2 or 5 Mg SOC/ha, which is ~10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ~2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO2 concentrations by sequestering C in soils.
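One standard textbook form of the MDD for comparing two treatment means (a sketch; the alpha, power, and variance values are placeholders rather than the paper's):

import numpy as np
from scipy import stats

def mdd_two_means(mse, n_per_group, alpha=0.05, power=0.80, df_error=None):
    """Smallest detectable difference between two means given the ANOVA
    mean square error, group size, significance level, and power."""
    if df_error is None:
        df_error = 2 * (n_per_group - 1)
    t_alpha = stats.t.ppf(1 - alpha / 2, df_error)
    t_beta = stats.t.ppf(power, df_error)
    return (t_alpha + t_beta) * np.sqrt(2 * mse / n_per_group)

# E.g. a residual SD of ~5.25 Mg/ha (about 15% of a 35 Mg/ha SOC stock):
print("MDD = %.1f Mg SOC/ha" % mdd_two_means(mse=5.25**2, n_per_group=8))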
Radiation shielding quality assurance
NASA Astrophysics Data System (ADS)
Um, Dallsun
For radiation shielding quality assurance, the validity and reliability of the neutron transport code MCNP, now one of the most widely used radiation shielding analysis codes, were checked against a large set of benchmark experiments. As a practical example, the following work was performed in this thesis. An integral neutron transport experiment to measure the effect of neutron streaming in iron and void was performed with the Dog-Legged Void Assembly at Knolls Atomic Power Laboratory in 1991. Neutron flux was measured at six different locations with methane detectors and a BF-3 detector. The main purpose of the measurements was to provide a benchmark against which various neutron transport calculation tools could be compared. Those data were used in the verification of the Monte Carlo Neutron & Photon Transport Code, MCNP, with a model built for the assembly. Experimental and calculated results were compared in two ways: as the total integrated value of neutron fluxes over the neutron energy range from 10 keV to 2 MeV, and as the neutron spectrum across that energy range. The two sets of results agree within the statistical error of ±20%. MCNP results were also compared with those of TORT, a three-dimensional discrete ordinates code developed by Oak Ridge National Laboratory. MCNP results are superior to the TORT results at all detector locations except one. This shows that MCNP is a very powerful tool for the analysis of neutron transport through iron and air, and further that it can serve as a powerful tool for radiation shielding analysis in general. As one application of the analysis of variance (ANOVA) to neutron and gamma transport problems, uncertainties in the calculated values of critical K were evaluated using ANOVA on the statistical data.
The Role of Margin in Link Design and Optimization
NASA Technical Reports Server (NTRS)
Cheung, K.
2015-01-01
Link analysis is a system engineering process in the design, development, and operation of communication systems and networks. Link models are mathematical abstractions representing the useful signal power and the undesirable noise and attenuation effects (including weather effects if the signal path traverses the atmosphere); they are integrated into the link budget calculation, which estimates the signal power and noise power at the receiver. A link margin is then applied to counteract fluctuations of the signal and noise power and so ensure reliable data delivery from transmitter to receiver. (The link margin is dictated by link margin policy or requirements.) A simple link budgeting approach that treats link parameters as deterministic values typically adopts a rule-of-thumb policy of a 3 dB link margin. This policy works for most S- and X-band links because of their insensitivity to weather effects, but for higher-frequency links such as Ka-band, Ku-band, and optical communication links, it is unclear whether a 3 dB link margin would guarantee link closure. Statistical link analysis that adopts a 2-sigma or 3-sigma link margin incorporates the link uncertainties in the sigma calculation. (The Deep Space Network (DSN) link margin policies are 2-sigma for downlink and 3-sigma for uplink.) Link reliability can therefore be quantified statistically even for higher-frequency links. However, in the current statistical link analysis approach, link reliability is only expressed as the likelihood of exceeding the signal-to-noise ratio (SNR) threshold that corresponds to a given bit-error-rate (BER) or frame-error-rate (FER) requirement. The method does not provide the true BER or FER estimate of the link with margin, or the required SNR that would meet the BER or FER requirement in the statistical sense. In this paper, we perform an in-depth analysis of the relationship between the BER/FER requirement, the operating SNR, and the coding performance curve in the case when the channel coherence time of the link fluctuation is comparable to or larger than the time duration of a codeword. We compute the "true" SNR design point that meets the BER/FER requirement by taking into account the fluctuation of signal power and noise power at the receiver and the shape of the coding performance curve. This analysis yields a number of valuable insights into the design choices of coding scheme and link margin for reliable data delivery in communication systems, both space and ground. We illustrate the analysis using a number of standard NASA error-correcting codes.
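A toy version of the statistical margin argument (Gaussian fluctuations and all numbers are assumptions):

from scipy import stats

def closure_probability(mean_snr_db, threshold_db, sigma_db):
    """Probability the delivered SNR stays above the decoder threshold,
    assuming Gaussian link fluctuations in dB."""
    return stats.norm.cdf((mean_snr_db - threshold_db) / sigma_db)

sigma_db = 1.2                      # RSS of link uncertainties, assumed
for k in (0, 1, 2, 3):
    mean_snr = 5.0 + k * sigma_db   # design point k-sigma above threshold
    print("%d-sigma margin -> P(closure) = %.4f"
          % (k, closure_probability(mean_snr, 5.0, sigma_db)))
# A 2-sigma (3-sigma) margin corresponds to roughly 97.7% (99.9%) closure,
# which is the sense in which such policies quantify link reliability.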
Piñero, David P; Caballero, María T; Nicolás-Albujer, Juan M; de Fez, Dolores; Camps, Vicent J
2018-06-01
To evaluate a new method of calculation of total corneal astigmatism based on Gaussian optics and the power design of a spherocylindrical lens (C) in the healthy eye and to compare it with keratometric (K) and power vector (PV) methods. A total of 92 healthy eyes of 92 patients (age, 17-65 years) were enrolled. Corneal astigmatism was calculated in all cases using K, PV, and our new approach C that considers the contribution of corneal thickness. An evaluation of the interchangeability of our new approach with the other 2 methods was performed using Bland-Altman analysis. Statistically significant differences between methods were found in the magnitude of astigmatism (P < 0.001), with the highest values provided by K. These differences in the magnitude of astigmatism were clinically relevant when K and C were compared [limits of agreement (LoA), -0.40 to 0.62 D), but not for the comparison between PV and C (LoA, -0.03 to 0.01 D). Differences in the axis of astigmatism between methods did not reach statistical significance (P = 0.408). However, they were clinically relevant when comparing K and C (LoA, -5.48 to 15.68 degrees) but not for the comparison between PV and C (LoA, -1.68 to 1.42 degrees). The use of our new approach for the calculation of total corneal astigmatism provides astigmatic results comparable to the PV method, which suggests that the effect of pachymetry on total corneal astigmatism is minimal in healthy eyes.
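The Gaussian (thick-lens) total corneal power underlying such calculations has a standard paraxial form (sketch; the refractive indices are common Gullstrand-style values and are my assumptions, not necessarily the paper's):

def gaussian_corneal_power(r1_mm, r2_mm, d_mm,
                           n_air=1.0, n_cornea=1.376, n_aqueous=1.336):
    """Thick-lens power of the cornea from both surfaces, in diopters."""
    p1 = (n_cornea - n_air) / (r1_mm / 1000.0)      # anterior surface
    p2 = (n_aqueous - n_cornea) / (r2_mm / 1000.0)  # posterior surface
    d = (d_mm / 1000.0) / n_cornea                  # reduced thickness, m
    return p1 + p2 - d * p1 * p2

# Typical cornea: r1 = 7.8 mm, r2 = 6.5 mm, central thickness 0.55 mm.
print("P_Gauss = %.2f D" % gaussian_corneal_power(7.8, 6.5, 0.55))
# The thickness term (d * p1 * p2) is what the keratometric and power
# vector approaches neglect; the abstract finds its effect to be small
# in healthy eyes.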
Comparison of AL-Scan and IOLMaster 500 Partial Coherence Interferometry Optical Biometers.
Hoffer, Kenneth J; Savini, Giacomo
2016-10-01
To investigate agreement between the ocular biometry measurements provided by a newer optical biometer, the AL-Scan (Nidek Co, Ltd., Gamagori, Japan), and those provided by the IOLMaster 500 (Carl Zeiss Meditec, Jena, Germany), which are both based on partial coherence interferometry. Axial length, corneal power, and anterior chamber depth (corneal epithelium to lens) were measured in 86 eyes of 86 patients scheduled for cataract surgery using both biometers. All values were analyzed using a paired t test, the Pearson product moment correlation coefficient (r), and Bland-Altman plots. The mean axial length values of both instruments were exactly the same (23.46 ± 0.99 mm for both) and showed excellent agreement and correlation. On the contrary, the AL-Scan measured a steeper mean corneal power by 0.08 diopters (D) at the 2.4-mm zone but by only 0.03 D at the 3.3-mm zone, only the former being statistically significant. The AL-Scan measured a deeper anterior chamber depth by 0.13 mm, which was statistically significant (P < .001). Agreement between the two units was good. However, the small but statistically significant differences in corneal power (at the IOLMaster-comparable 2.4-mm zone) and in the anterior chamber depth measurement make lens constant optimization necessary when calculating the intraocular lens power by means of theoretical formulas. [J Refract Surg. 2016;32(10):694-698.]. Copyright 2016, SLACK Incorporated.
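The Bland-Altman computation used for such agreement studies is compact (synthetic paired readings stand in for the 86 eyes; the offset and noise are illustrative):

import numpy as np

rng = np.random.default_rng(8)
al_scan   = rng.normal(43.5, 1.5, 86)                  # corneal power, D
iolmaster = al_scan - 0.08 + rng.normal(0, 0.15, 86)   # small fixed offset

diff = al_scan - iolmaster
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                          # 95% limits of agreement
print("bias = %+.3f D, 95%% LoA = [%+.3f, %+.3f] D"
      % (bias, bias - loa, bias + loa))
# A statistically significant bias with narrow LoA is exactly the case where
# re-optimizing lens constants, rather than rejecting the device, is the
# appropriate response.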
Global Active Stretching (SGA®) Practice for Judo Practitioners’ Physical Performance Enhancement
ALMEIDA, HELENO; DE SOUZA, RAPHAEL F.; AIDAR, FELIPE J.; DA SILVA, ALISSON G.; REGI, RICARDO P.; BASTOS, AFRÂNIO A.
2018-01-01
In order to analyze the effect of Global Active Stretching (SGA®) practice on physical performance in judo competitors, 12 male athletes from the Judo Federation of Sergipe (Federação Sergipana de Judô) were divided into two groups: an Experimental Group (EG) and a Control Group (CG). For 10 weeks, the EG practiced SGA® self-postures and the CG practiced assorted calisthenic exercises. All athletes were submitted to a battery of tests (before and after): handgrip strength, flexibility, upper limbs' muscle power, isometric pull-up force, lower limbs' muscle power (squat jump, SJ, and countermovement jump, CMJ), and the Tokui Waza test. Because of the small sample, the data were treated as non-parametric and the Wilcoxon test was applied using the software R version 3.3.2 (R Development Core Team, Austria). Effect sizes were calculated, and values of p ≤ 0.05 were considered statistically significant. The EG showed statistically significant before-after differences in flexibility, upper limbs' muscle power, and lower limbs' muscle power (CMJ), with gains of 3.00 ± 1.09 cm, 0.42 ± 0.51 m, and 2.49 ± 0.63 cm, respectively. The CG showed a statistically significant difference only in the lower limbs' CMJ test, with a gain of 0.55 ± 2.28 cm. The regular 10-week practice of SGA® self-postures increased the judo practitioners' posterior chain flexibility and vertical jumping (CMJ) performance. PMID:29795746
NASA Technical Reports Server (NTRS)
Kubota, Takuji; Iguchi, Toshio; Kojima, Masahiro; Liao, Liang; Masaki, Takeshi; Hanado, Hiroshi; Meneghini, Robert; Oki, Riko
2016-01-01
A statistical method to reduce the sidelobe clutter of the Ku-band precipitation radar (KuPR) of the Dual-Frequency Precipitation Radar (DPR) on board the Global Precipitation Measurement (GPM) Core Observatory is described and evaluated using DPR observations. The KuPR sidelobe clutter was much more severe than that of the Precipitation Radar on board the Tropical Rainfall Measuring Mission (TRMM), and it has caused the misidentification of precipitation. The statistical method to reduce sidelobe clutter was constructed by subtracting the estimated sidelobe power, based upon a multiple regression model with explanatory variables of the normalized radar cross section (NRCS) of surface, from the received power of the echo. The saturation of the NRCS at near-nadir angles, resulting from strong surface scattering, was considered in the calculation of the regression coefficients. The method was implemented in the KuPR algorithm and applied to KuPR-observed data. It was found that the received power from sidelobe clutter over the ocean was largely reduced by using the developed method, although some of the received power from the sidelobe clutter still remained. From the statistical results of the evaluations, it was shown that the number of KuPR precipitation events in the clutter region, after the method was applied, was comparable to that in the clutter-free region. This confirms the reasonable performance of the method in removing sidelobe clutter. For further improving the effectiveness of the method, it is necessary to improve the consideration of the NRCS saturation, which will be explored in future work.
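A schematic version of the regression-based clutter subtraction (the regressor choice and the saturation handling are simplified guesses at the published method, not the KuPR algorithm itself):

import numpy as np

def fit_sidelobe_model(nrcs_db, sidelobe_db, sat_db):
    """Regress observed sidelobe power on saturation-clipped surface NRCS."""
    x = np.minimum(nrcs_db, sat_db)          # crude NRCS saturation handling
    A = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(A, sidelobe_db, rcond=None)
    return coef                              # intercept, slope

def remove_sidelobe(received_db, nrcs_db, coef, sat_db):
    """Subtract the estimated clutter power in linear units, floored at ~0."""
    est = coef[0] + coef[1] * np.minimum(nrcs_db, sat_db)
    lin = np.maximum(10**(received_db / 10) - 10**(est / 10), 1e-12)
    return 10 * np.log10(lin)

rng = np.random.default_rng(9)
nrcs = rng.uniform(0, 20, 500)
clutter = -30 + 0.8 * np.minimum(nrcs, 15) + rng.normal(0, 1, 500)
coef = fit_sidelobe_model(nrcs, clutter, sat_db=15.0)
print("fitted intercept/slope:", np.round(coef, 2))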
HARMONIC SPACE ANALYSIS OF PULSAR TIMING ARRAY REDSHIFT MAPS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roebber, Elinore; Holder, Gilbert, E-mail: roebbere@physics.mcgill.ca
2017-01-20
In this paper, we propose a new framework for treating the angular information in the pulsar timing array (PTA) response to a gravitational wave (GW) background based on standard cosmic microwave background techniques. We calculate the angular power spectrum of the all-sky gravitational redshift pattern induced at the Earth for both a single bright source of gravitational radiation and a statistically isotropic, unpolarized Gaussian random GW background. The angular power spectrum is the harmonic transform of the Hellings and Downs curve. We use the power spectrum to examine the expected variance in the Hellings and Downs curve in both cases. Finally, we discuss the extent to which PTAs are sensitive to the angular power spectrum and find that the power spectrum sensitivity is dominated by the quadrupole anisotropy of the gravitational redshift map.
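The Hellings and Downs curve has a standard closed form, and its Legendre (harmonic) decomposition can be computed numerically (a sketch; normalization conventions vary between papers):

import numpy as np
from numpy.polynomial import legendre as L

def hellings_downs(theta):
    """Standard closed form of the HD curve (pulsar term omitted)."""
    x = np.clip((1.0 - np.cos(theta)) / 2.0, 1e-12, None)  # avoid log(0)
    return 1.5 * x * np.log(x) - x / 4.0 + 0.5

# Legendre projection: c_l = (2l+1)/2 * integral of Gamma * P_l d(cos theta)
mu, w = L.leggauss(200)                 # Gauss-Legendre nodes in cos(theta)
gamma = hellings_downs(np.arccos(mu))
for ell in range(2, 7):
    P_l = L.Legendre.basis(ell)(mu)
    c_l = (2 * ell + 1) / 2.0 * np.sum(w * gamma * P_l)
    print("l = %d: c_l = %.4e" % (ell, c_l))
# The monopole and dipole projections vanish and the spectrum falls steeply
# with l, consistent with the quadrupole-dominated sensitivity noted above.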
THE MURCHISON WIDEFIELD ARRAY 21 cm POWER SPECTRUM ANALYSIS METHODOLOGY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobs, Daniel C.; Beardsley, A. P.; Bowman, Judd D.
2016-07-10
We present the 21 cm power spectrum analysis approach of the Murchison Widefield Array Epoch of Reionization project. In this paper, we compare the outputs of multiple pipelines for the purpose of validating statistical limits on cosmological hydrogen at redshifts between 6 and 12. Multiple independent data calibration and reduction pipelines are used to make power spectrum limits on a fiducial night of data. Comparing the outputs of imaging and power spectrum stages highlights differences in calibration, foreground subtraction, and power spectrum calculation. The power spectra found using these different methods span a space defined by the various tradeoffs between speed, accuracy, and systematic control. Lessons learned from comparing the pipelines range from the algorithmic to the prosaically mundane; all demonstrate the many pitfalls of neglecting reproducibility. We briefly discuss the way these different methods attempt to handle the question of evaluating a significant detection in the presence of foregrounds.
POWER ANALYSIS FOR COMPLEX MEDIATIONAL DESIGNS USING MONTE CARLO METHODS
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2013-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex mediational models. The approach is based on the well known technique of generating a large number of samples in a Monte Carlo study, and estimating power as the percentage of cases in which an estimate of interest is significantly different from zero. Examples of power calculation for commonly used mediational models are provided. Power analyses for the single mediator, multiple mediators, three-path mediation, mediation with latent variables, moderated mediation, and mediation in longitudinal designs are described. Annotated sample syntax for Mplus is appended and tabled values of required sample sizes are shown for some models. PMID:23935262
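A minimal version of the described Monte Carlo approach for the single-mediator model, using joint significance of the a and b paths as the detection criterion (path coefficients and N are illustrative; the paper's examples use Mplus rather than this Python sketch):

import numpy as np
from scipy import stats

def ols_pvalue(X, y, col):
    """Two-sided p-value for one coefficient in an OLS fit."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    cov = (resid @ resid / (n - k)) * np.linalg.inv(X.T @ X)
    t = beta[col] / np.sqrt(cov[col, col])
    return 2 * stats.t.sf(abs(t), n - k)

def mediation_power(a=0.3, b=0.3, n=100, n_sims=2000, alpha=0.05, seed=10):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)          # a-path
        y = b * m + rng.normal(size=n)          # b-path
        ones = np.ones(n)
        p_a = ols_pvalue(np.column_stack([ones, x]), m, col=1)
        p_b = ols_pvalue(np.column_stack([ones, m, x]), y, col=1)
        hits += (p_a < alpha) and (p_b < alpha)  # joint significance
    return hits / n_sims

print("power (a = b = 0.3, n = 100): %.2f" % mediation_power())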
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting the total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes, and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to, and a useful complement of, existing methods. Comparison indicated that the relative efficiency we define is greater than the relative efficiency in the literature under some conditions, so that the measure in the literature may underestimate the relative efficiency in those cases.
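A sketch using the familiar cluster-level effective sample size (a standard approximation in this literature, not the paper's noncentrality derivation):

import numpy as np
from scipy import stats

def effective_n(cluster_sizes, icc):
    """Effective sample size per group, summing m/(1+(m-1)*ICC) over clusters."""
    m = np.asarray(cluster_sizes, dtype=float)
    return np.sum(m / (1.0 + (m - 1.0) * icc))

def power_two_groups(n_eff_per_group, delta, sigma=1.0, alpha=0.05):
    """Normal-approximation power for a two-group mean comparison."""
    se = sigma * np.sqrt(2.0 / n_eff_per_group)
    return stats.norm.cdf(delta / se - stats.norm.ppf(1 - alpha / 2))

icc, delta = 0.05, 0.3
equal  = [20] * 10                                 # 10 clusters of 20
uneven = [5, 5, 10, 10, 15, 25, 30, 30, 35, 35]    # same total of 200
for name, sizes in (("equal", equal), ("unequal", uneven)):
    ne = effective_n(sizes, icc)
    print("%s clusters: n_eff = %.1f, power = %.2f"
          % (name, ne, power_two_groups(ne, delta)))
# With the same total N, the unequal-size design has a smaller effective
# sample size and lower power, which is the loss the paper quantifies.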
Nonlinear wave chaos: statistics of second harmonic fields.
Zhou, Min; Ott, Edward; Antonsen, Thomas M; Anlage, Steven M
2017-10-01
Concepts from the field of wave chaos have been shown to successfully predict the statistical properties of linear electromagnetic fields in electrically large enclosures. The Random Coupling Model (RCM) describes these properties by incorporating both universal features described by Random Matrix Theory and the system-specific features of particular system realizations. In an effort to extend this approach to the nonlinear domain, we add an active nonlinear frequency-doubling circuit to an otherwise linear wave chaotic system, and we measure the statistical properties of the resulting second harmonic fields. We develop an RCM-based model of this system as two linear chaotic cavities coupled by means of a nonlinear transfer function. The harmonic field strengths are predicted to be the product of two statistical quantities and the nonlinearity characteristics. Statistical results from measurement-based calculation, RCM-based simulation, and direct experimental measurements are compared and show good agreement over many decades of power.
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.
Measurement of Crystalline Lens Volume During Accommodation in a Lens Stretcher.
Marussich, Lauren; Manns, Fabrice; Nankivil, Derek; Maceo Heilman, Bianca; Yao, Yue; Arrieta-Quintero, Esdras; Ho, Arthur; Augusteyn, Robert; Parel, Jean-Marie
2015-07-01
To determine if the lens volume changes during accommodation. The study used data acquired from 36 cynomolgus monkey lenses that were stretched in a stepwise fashion to simulate disaccommodation. At each step, stretching force and dioptric power were measured and a cross-sectional image of the lens was acquired using an optical coherence tomography system. Images were corrected for refractive distortions, and lens volume was calculated assuming rotational symmetry. The average change in lens volume was calculated, and the relations between volume change and power change and between volume change and stretching force were quantified. Linear regressions of the volume-power and volume-force plots were calculated. The mean (± SD) volume in the unstretched (accommodated) state was 97 ± 8 mm3. On average, there was a small but statistically significant (P = 0.002) increase in measured lens volume with stretching. The mean change in lens volume was +0.8 ± 1.3 mm3. The mean volume-power and volume-load slopes were -0.018 ± 0.058 mm3/D and +0.16 ± 0.40 mm3/g. Lens volume remains effectively constant during accommodation, with changes that are less than 1% on average. This result supports the hypothesis that the change in lens shape with accommodation is accompanied by a redistribution of tissue within the capsular bag without significant compression of the lens contents or fluid exchange through the capsule.
ERIC Educational Resources Information Center
Brandon, Paul R.; Harrison, George M.; Lawton, Brian E.
2013-01-01
When evaluators plan site-randomized experiments, they must conduct the appropriate statistical power analyses. These analyses are most likely to be valid when they are based on data from the jurisdictions in which the studies are to be conducted. In this method note, we provide software code, in the form of a SAS macro, for producing statistical…
The Power of Doing: A Learning Exercise That Brings the Central Limit Theorem to Life
ERIC Educational Resources Information Center
Price, Barbara A.; Zhang, Xiaolong
2007-01-01
This article demonstrates an active learning technique for teaching the Central Limit Theorem (CLT) in an introductory undergraduate business statistics class. Groups of students carry out one of two experiments in the lab, tossing a die in sets of 5 rolls or tossing a die in sets of 10 rolls. They are asked to calculate the sample average of each…
Craig's XY distribution and the statistics of Lagrangian power in two-dimensional turbulence
NASA Astrophysics Data System (ADS)
Bandi, Mahesh M.; Connaughton, Colm
2008-03-01
We examine the probability distribution function (PDF) of the energy injection rate (power) in numerical simulations of stationary two-dimensional (2D) turbulence in the Lagrangian frame. The simulation is designed to mimic an electromagnetically driven fluid layer, a well-documented system for generating 2D turbulence in the laboratory. In our simulations, the forcing and velocity fields are close to Gaussian. On the other hand, the measured PDF of injected power is very sharply peaked at zero, suggestive of a singularity there, with tails which are exponential but asymmetric. Large positive fluctuations are more probable than large negative fluctuations. It is this asymmetry of the tails which leads to a net positive mean value for the energy input despite the most probable value being zero. The main features of the power distribution are well described by Craig’s XY distribution for the PDF of the product of two correlated normal variables. We show that the power distribution should exhibit a logarithmic singularity at zero and decay exponentially for large absolute values of the power. We calculate the asymptotic behavior and express the asymmetry of the tails in terms of the correlation coefficient of the force and velocity. We compare the measured PDFs with the theoretical calculations and briefly discuss how the power PDF might change with other forcing mechanisms.
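The core construction is easy to reproduce by Monte Carlo (parameters arbitrary): the product of two correlated standard normals is sharply peaked at zero with asymmetric tails and a mean set by the correlation coefficient.

import numpy as np

rng = np.random.default_rng(11)
rho, n = 0.3, 1_000_000
f = rng.normal(size=n)                                  # "force"
v = rho * f + np.sqrt(1 - rho**2) * rng.normal(size=n)  # "velocity"
p = f * v                                               # injected power

print("mean = %.3f (should approach rho = %.1f)" % (p.mean(), rho))
print("P(p > 2) = %.4f vs P(p < -2) = %.4f"
      % ((p > 2).mean(), (p < -2).mean()))
# The positive tail is heavier than the negative one, and the histogram of
# p is sharply peaked at zero, consistent with the logarithmic singularity
# and exponential tails of Craig's XY distribution described above.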
skelesim: an extensible, general framework for population genetic simulation in R.
Parobek, Christian M; Archer, Frederick I; DePrenger-Levin, Michelle E; Hoban, Sean M; Liggins, Libby; Strand, Allan E
2017-01-01
Simulations are a key tool in molecular ecology for inference and forecasting, as well as for evaluating new methods. Due to growing computational power and a diversity of software with different capabilities, simulations are becoming increasingly powerful and useful. However, the widespread use of simulations by geneticists and ecologists is hindered by difficulties in understanding these softwares' complex capabilities, composing code and input files, a daunting bioinformatics barrier and a steep conceptual learning curve. skelesim (an R package) guides users in choosing appropriate simulations, setting parameters, calculating genetic summary statistics and organizing data output, in a reproducible pipeline within the R environment. skelesim is designed to be an extensible framework that can 'wrap' around any simulation software (inside or outside the R environment) and be extended to calculate and graph any genetic summary statistics. Currently, skelesim implements coalescent and forward-time models available in the fastsimcoal2 and rmetasim simulation engines to produce null distributions for multiple population genetic statistics and marker types, under a variety of demographic conditions. skelesim is intended to make simulations easier while still allowing full model complexity to ensure that simulations play a fundamental role in molecular ecology investigations. skelesim can also serve as a teaching tool: demonstrating the outcomes of stochastic population genetic processes; teaching general concepts of simulations; and providing an introduction to the R environment with a user-friendly graphical user interface (using shiny). © 2016 John Wiley & Sons Ltd.
Calculations of proton-binding thermodynamics in proteins.
Beroza, P; Case, D A
1998-01-01
Computational models of proton binding can range from the chemically complex and statistically simple (as in the quantum calculations) to the chemically simple and statistically complex. Much progress has been made in the multiple-site titration problem. Calculations have improved with the inclusion of more flexibility in regard to both the geometry of the proton binding and the larger scale protein motions associated with titration. This article concentrated on the principles of current calculations, but did not attempt to survey their quantitative performance. This is (1) because such comparisons are given in the cited papers and (2) because continued developments in understanding conformational flexibility and interaction energies will be needed to develop robust methods with strong predictive power. Nevertheless, the advances achieved over the past few years should not be underestimated: serious calculations of protonation behavior and its coupling to conformational change can now be confidently pursued against a backdrop of increasing understanding of the strengths and limitations of such models. It is hoped that such theoretical advances will also spur renewed experimental interest in measuring both overall titration curves and individual pKa values or pKa shifts. Exploration of the shapes of individual titration curves (as measured by Hill coefficients and other parameters) would also be useful in assessing the accuracy of computations and in drawing connections to functional behavior.
Application of random match probability calculations to mixed STR profiles.
Bille, Todd; Bright, Jo-Anne; Buckleton, John
2013-03-01
Mixed DNA profiles are being encountered more frequently as laboratories analyze increasing amounts of touch evidence. If it is determined that an individual could be a possible contributor to the mixture, it is necessary to perform a statistical analysis to allow an assignment of weight to the evidence. Currently, the combined probability of inclusion (CPI) and the likelihood ratio (LR) are the most commonly used methods to perform the statistical analysis. A third method, random match probability (RMP), is available. This article compares the advantages and disadvantages of the CPI and LR methods to the RMP method. We demonstrate that although the LR method is still considered the most powerful of the binary methods, the RMP and LR methods make similar use of the observed data, such as peak height, assumed number of contributors, and known contributors, whereas the CPI calculation tends to discard information and is less informative. © 2013 American Academy of Forensic Sciences.
Bispectral analysis of equatorial spread F density irregularities
NASA Technical Reports Server (NTRS)
Labelle, J.; Lund, E. J.
1992-01-01
Bispectral analysis has been applied to density irregularities at frequencies 5-30 Hz observed with a sounding rocket launched from Peru in March 1983. Unlike the power spectrum, the bispectrum contains statistical information about the phase relations between the Fourier components which make up the waveform. In the case of spread F data from 475 km, the 5-30 Hz portion of the spectrum displays overall enhanced bicoherence relative to that of the background instrumental noise and to that expected due to statistical considerations, implying that the observed f^(-2.5) power-law spectrum has a significant non-Gaussian component. This is consistent with previous qualitative analyses. The bicoherence has also been calculated for simulated equatorial spread F density irregularities in approximately the same wavelength regime, and the resulting bispectrum has some features in common with that of the rocket data. The implications of this analysis for equatorial spread F are discussed, and some future investigations are suggested.
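To make the quantity concrete, here is a minimal sketch (ours, not the authors' code) of a direct FFT-segment estimate of the squared bicoherence; the segment length, Hann window, and non-overlapping segmentation are illustrative choices:

import numpy as np

def bicoherence2(x, nperseg=128):
    # Non-overlapping, Hann-windowed segments
    segs = [x[i:i + nperseg] for i in range(0, len(x) - nperseg + 1, nperseg)]
    F = np.array([np.fft.rfft(s * np.hanning(nperseg)) for s in segs])
    m = F.shape[1] // 2  # keep f1 + f2 inside the computed band
    num = np.zeros((m, m), dtype=complex)
    d1 = np.zeros((m, m))
    d2 = np.zeros((m, m))
    for X in F:
        for i in range(m):
            for j in range(m):
                num[i, j] += X[i] * X[j] * np.conj(X[i + j])
                d1[i, j] += np.abs(X[i] * X[j]) ** 2
                d2[i, j] += np.abs(X[i + j]) ** 2
    # Squared bicoherence: near 1 where Fourier triads are phase-coupled
    return np.abs(num) ** 2 / (d1 * d2 + 1e-30)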
Interim analyses in 2 x 2 crossover trials.
Cook, R J
1995-09-01
A method is presented for performing interim analyses in long term 2 x 2 crossover trials with serial patient entry. The analyses are based on a linear statistic that combines data from individuals observed for one treatment period with data from individuals observed for both periods. The coefficients in this linear combination can be chosen quite arbitrarily, but we focus on variance-based weights to maximize power for tests regarding direct treatment effects. The type I error rate of this procedure is controlled by utilizing the joint distribution of the linear statistics over analysis stages. Methods for performing power and sample size calculations are indicated. A two-stage sequential design involving simultaneous patient entry and a single between-period interim analysis is considered in detail. The power and average number of measurements required for this design are compared to those of the usual crossover trial. The results indicate that, while there is minimal loss in power relative to the usual crossover design in the absence of differential carry-over effects, the proposed design can have substantially greater power when differential carry-over effects are present. The two-stage crossover design can also lead to more economical studies in terms of the expected number of measurements required, due to the potential for early stopping. Attention is directed toward normally distributed responses.
NASA Astrophysics Data System (ADS)
Lea, Devin M.; Legleiter, Carl J.
2016-01-01
Stream power represents the rate of energy expenditure along a river and can be calculated using topographic data acquired via remote sensing or field surveys. This study sought to quantitatively relate temporal changes in the form of Soda Butte Creek, a gravel-bed river in northeastern Yellowstone National Park, to stream power gradients along an 8-km reach. Aerial photographs from 1994 to 2012 and ground-based surveys were used to develop a locational probability map and morphologic sediment budget to assess lateral channel mobility and changes in net sediment flux. A drainage area-to-discharge relationship and DEM developed from LiDAR data were used to obtain the discharge and slope values needed to calculate stream power. Local and lagged relationships between mean stream power gradient at median peak discharge and volumes of erosion, deposition, and net sediment flux were quantified via spatial cross-correlation analyses. Similarly, autocorrelations of locational probabilities and sediment fluxes were used to examine spatial patterns of sediment sources and sinks. Energy expended above critical stream power was calculated for each time period to relate the magnitude and duration of peak flows to the total volumetric change in each time increment. Collectively, we refer to these methods as the stream power gradient (SPG) framework. The results of this study were compromised by methodological limitations of the SPG framework and revealed some complications likely to arise when applying this framework to small, wandering, gravel-bed rivers. Correlations between stream power gradients and sediment flux were generally weak, highlighting the inability of relatively simple statistical approaches to link sub-budget cell-scale sediment dynamics to larger-scale driving forces such as stream power gradients. Improving the moderate spatial resolution techniques used in this study and acquiring very-high resolution data from recently developed methods in fluvial remote sensing could help improve understanding of the spatial organization of stream power, sediment transport, and channel change in dynamic natural rivers.
Muzyka-Woźniak, Maria; Oleszko, Adam
2018-04-26
To compare measurements of axial length (AL), corneal curvature (K), anterior chamber depth (ACD) and white-to-white (WTW) distance on a new device combining a Scheimpflug camera and partial coherence interferometry (Pentacam AXL) with a reference optical biometer (IOL Master 500), and to evaluate differences between IOL power calculations based on the two biometers. Ninety-seven eyes of 97 consecutive cataract or refractive lens exchange patients were examined preoperatively on IOL Master 500 and Pentacam AXL units. Comparisons between the two devices were performed for AL, K, ACD and WTW. Intraocular lens (IOL) power targeting emmetropia was calculated with the SRK/T and Haigis formulas on both devices and compared. There were statistically significant differences between the two devices for all measured parameters (P < 0.05), except ACD (P = 0.36). Corneal curvature measured with the Pentacam AXL was significantly flatter than with the IOL Master. The mean difference in AL was clinically insignificant (0.01 mm; 95% LoA 0.16 mm). The Pentacam AXL yielded higher IOL power in 75% of eyes for the Haigis formula and in 62% of eyes for the SRK/T formula, with a mean difference within ± 0.5 D for 72% and 86% of eyes, respectively. There were statistically significant differences between the AL, K and WTW measurements obtained with the compared biometers. Flatter corneal curvature measurements on the Pentacam AXL necessitate formula optimisation for the Pentacam AXL.
Sound radiation from randomly vibrating beams of finite circular cross section
NASA Technical Reports Server (NTRS)
Sutterlin, M. W.; Pierce, A. D.
1976-01-01
The radiation of sound from vibrating cylindrical beams is analyzed based on the frequency of the beam vibrations and the physical characteristics of the beam and its surroundings. A statistical analysis of random beam vibrations allows this result to be independent of the boundary conditions at the ends of the beam. The acoustic power radiated by the beam can be determined from a knowledge of the frequency band vibration data without a knowledge of the individual modal vibration amplitudes. A practical example of the usefulness of this technique is provided by the application of the theoretical calculations to the prediction of the octave band acoustic power output of the picking sticks of an automatic textile loom. Calculations are made of the expected octave band sound pressure levels based on measured acceleration data. These theoretical levels are subsequently compared with actual sound pressure level measurements of loom noise.
A note on sample size calculation for mean comparisons based on noncentral t-statistics.
Chow, Shein-Chung; Shao, Jun; Wang, Hansheng
2002-11-01
One-sample and two-sample t-tests are commonly used in analyzing data from clinical trials in comparing mean responses from two drug products. During the planning stage of a clinical study, a crucial step is the sample size calculation, i.e., the determination of the number of subjects (patients) needed to achieve a desired power (e.g., 80%) for detecting a clinically meaningful difference in the mean drug responses. Based on noncentral t-distributions, we derive some sample size calculation formulas for testing equality, testing therapeutic noninferiority/superiority, and testing therapeutic equivalence, under the popular one-sample design, two-sample parallel design, and two-sample crossover design. Useful tables are constructed and some examples are given for illustration.
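As a concrete illustration of the approach in this abstract, a minimal sketch (ours, not the authors' tables) of a per-group sample-size search for testing equality in the two-sample parallel design, built on the noncentral t-distribution:

from scipy import stats

def power_two_sample(n, delta, sigma, alpha=0.05):
    # Power of a two-sided two-sample t-test with n subjects per group
    df = 2 * n - 2
    nc = delta / (sigma * (2.0 / n) ** 0.5)   # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    # P(|T| > tcrit) under the noncentral t
    return stats.nct.sf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

def sample_size(delta, sigma, alpha=0.05, target=0.80):
    n = 2
    while power_two_sample(n, delta, sigma, alpha) < target:
        n += 1
    return n

print(sample_size(delta=0.5, sigma=1.0))  # ≈64 per group for a half-SD difference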
NASA Astrophysics Data System (ADS)
Kumar, Rohit; Puri, Rajeev K.
2018-03-01
Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of the energy-based clusterization algorithm, i.e., the simulated annealing clusterization algorithm (SACA), to describe the experimental data of charge distribution and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of colliding partners, and it shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.
Researchers’ Intuitions About Power in Psychological Research
Bakker, Marjan; Hartgerink, Chris H. J.; Wicherts, Jelte M.; van der Maas, Han L. J.
2016-01-01
Many psychology studies are statistically underpowered. In part, this may be because many researchers rely on intuition, rules of thumb, and prior practice (along with practical considerations) to determine the number of subjects to test. In Study 1, we surveyed 291 published research psychologists and found large discrepancies between their reports of their preferred amount of power and the actual power of their studies (calculated from their reported typical cell size, typical effect size, and acceptable alpha). Furthermore, in Study 2, 89% of the 214 respondents overestimated the power of specific research designs with a small expected effect size, and 95% underestimated the sample size needed to obtain .80 power for detecting a small effect. Neither researchers’ experience nor their knowledge predicted the bias in their self-reported power intuitions. Because many respondents reported that they based their sample sizes on rules of thumb or common practice in the field, we recommend that researchers conduct and report formal power analyses for their studies. PMID:27354203
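The survey's power arithmetic is easy to reproduce. A sketch using statsmodels, where the "typical" values d = 0.2 and 24 subjects per cell are assumptions for illustration:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Actual power of a "typical" two-group design: 24 per cell, small effect d = 0.2
print(analysis.power(effect_size=0.2, nobs1=24, alpha=0.05))          # ≈0.10
# Per-group n needed for 80% power at d = 0.2
print(analysis.solve_power(effect_size=0.2, power=0.80, alpha=0.05))  # ≈394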
Kim, Yun Hak; Jeong, Dae Cheon; Pak, Kyoungjune; Goh, Tae Sik; Lee, Chi-Seung; Han, Myoung-Eun; Kim, Ji-Young; Liangwen, Liu; Kim, Chi Dae; Jang, Jeon Yeob; Cha, Wonjae; Oh, Sae-Ock
2017-09-29
Accurate prediction of prognosis is critical for therapeutic decisions regarding cancer patients. Many previously developed prognostic scoring systems have limitations in reflecting recent progress in the field of cancer biology such as microarray, next-generation sequencing, and signaling pathways. To develop a new prognostic scoring system for cancer patients, we used mRNA expression and clinical data in various independent breast cancer cohorts (n=1214) from the Molecular Taxonomy of Breast Cancer International Consortium (METABRIC) and Gene Expression Omnibus (GEO). A new prognostic score that reflects gene network inherent in genomic big data was calculated using Network-Regularized high-dimensional Cox-regression (Net-score). We compared its discriminatory power with those of two previously used statistical methods: stepwise variable selection via univariate Cox regression (Uni-score) and Cox regression via Elastic net (Enet-score). The Net scoring system showed better discriminatory power in prediction of disease-specific survival (DSS) than other statistical methods (p=0 in METABRIC training cohort, p=0.000331, 4.58e-06 in two METABRIC validation cohorts) when accuracy was examined by log-rank test. Notably, comparison of C-index and AUC values in receiver operating characteristic analysis at 5 years showed fewer differences between training and validation cohorts with the Net scoring system than other statistical methods, suggesting minimal overfitting. The Net-based scoring system also successfully predicted prognosis in various independent GEO cohorts with high discriminatory power. In conclusion, the Net-based scoring system showed better discriminative power than previous statistical methods in prognostic prediction for breast cancer patients. This new system will mark a new era in prognosis prediction for cancer patients.
NASA Astrophysics Data System (ADS)
Lea, Devin M.
Stream power represents the rate of energy expenditure along a river and can be calculated using topographic data acquired via remote sensing or field surveys. This study used remote sensing and GIS tools along with field data to quantitatively relate temporal changes in the form of Soda Butte Creek, a gravel-bed river in northeastern Yellowstone National Park, to stream power gradients along an 8 km reach. Aerial photographs from 1994-2012 and cross-section surveys were used to develop a locational probability map and morphologic sediment budget to assess lateral channel mobility and changes in net sediment flux. A drainage area-to-discharge relationship and digital elevation model (DEM) developed from light detection and ranging (LiDAR) data were used to obtain the discharge and slope values needed to calculate stream power. Local and lagged relationships between mean stream power gradient at median peak discharge and volumes of erosion, deposition, and net sediment flux were quantified via spatial cross-correlation analyses. Similarly, autocorrelations of locational probabilities and sediment fluxes were used to examine spatial patterns of sediment sources and sinks. Energy expended above critical stream power was calculated for each time period to relate the magnitude and duration of peak flows to the total volumetric change in each time increment. Results indicated a lack of strong correlation between stream power gradients and sediment response, highlighting the geomorphic complexity of Soda Butte Creek and the inability of relatively simple statistical approaches to link sub-budget cell-scale sediment dynamics to larger-scale driving forces such as stream power gradients. Improving the moderate spatial resolution techniques used in this study and acquiring very-high resolution data from recently developed methods in fluvial remote sensing could help improve understanding of the spatial organization of stream power, sediment transport, and channel change in dynamic natural rivers.
Luo, Li; Zhu, Yun; Xiong, Momiao
2012-06-01
The genome-wide association studies (GWAS) designed for next-generation sequencing data involve testing association of genomic variants, including common, low frequency, and rare variants. The current strategies for association studies are well developed for identifying association of common variants with the common diseases, but may be ill-suited when large amounts of allelic heterogeneity are present in sequence data. Recently, group tests that analyze their collective frequency differences between cases and controls shift the current variant-by-variant analysis paradigm for GWAS of common variants to the collective test of multiple variants in the association analysis of rare variants. However, group tests ignore differences in genetic effects among SNPs at different genomic locations. As an alternative to group tests, we developed a novel genome-information content-based statistic for testing association of the entire allele frequency spectrum of genomic variation with the diseases. To evaluate the performance of the proposed statistic, we use large-scale simulations based on whole-genome low-coverage pilot data in the 1000 Genomes Project to calculate the type 1 error rates and power of seven alternative statistics: a genome-information content-based statistic, the generalized T^2, the collapsing method, the combined multivariate and collapsing (CMC) method, the individual χ^2 test, the weighted-sum statistic, and the variable threshold statistic. Finally, we apply the seven statistics to a published resequencing dataset from the ANGPTL3, ANGPTL4, ANGPTL5, and ANGPTL6 genes in the Dallas Heart Study. We report that the genome-information content-based statistic has significantly improved type 1 error rates and higher power than the other six statistics in both simulated and empirical datasets.
Jung, Hae Kyoung; Park, Ah Young; Ko, Kyung Hee; Koh, Jieun
2018-03-12
This study was performed to compare the diagnostic performance of power Doppler ultrasound (US) and a new microvascular Doppler US technique (AngioPLUS; SuperSonic Imagine, Aix-en-Provence, France) for differentiating benign and malignant breast masses. Power Doppler US and AngioPLUS findings were available in 124 breast masses with confirmed pathologic results (benign, 80 [64.5%]; malignant, 44 [35.5%]). The diagnostic performance of each tool was calculated to distinguish benign from malignant masses using a receiver operating characteristic curve analysis and compared. The area under the curve showed that AngioPLUS was superior to power Doppler US in differentiating benign from malignant breast masses, but the difference was not statistically significant. © 2018 by the American Institute of Ultrasound in Medicine.
Pataky, Todd C; Robinson, Mark A; Vanrenterghem, Jos
2018-01-03
Statistical power assessment is an important component of hypothesis-driven research but until relatively recently (mid-1990s) no methods were available for assessing power in experiments involving continuum data and in particular those involving one-dimensional (1D) time series. The purpose of this study was to describe how continuum-level power analyses can be used to plan hypothesis-driven biomechanics experiments involving 1D data. In particular, we demonstrate how theory- and pilot-driven 1D effect modeling can be used for sample-size calculations for both single- and multi-subject experiments. For theory-driven power analysis we use the minimum jerk hypothesis and single-subject experiments involving straight-line, planar reaching. For pilot-driven power analysis we use a previously published knee kinematics dataset. Results show that powers on the order of 0.8 can be achieved with relatively small sample sizes, five and ten for within-subject minimum jerk analysis and between-subject knee kinematics, respectively. However, the appropriate sample size depends on a priori justifications of biomechanical meaning and effect size. The main advantage of the proposed technique is that it encourages a priori justification regarding the clinical and/or scientific meaning of particular 1D effects, thereby robustly structuring subsequent experimental inquiry. In short, it shifts focus from a search for significance to a search for non-rejectable hypotheses. Copyright © 2017 Elsevier Ltd. All rights reserved.
Gontscharuk, Veronika; Landwehr, Sandra; Finner, Helmut
2015-01-01
The higher criticism (HC) statistic, which can be seen as a normalized version of the famous Kolmogorov-Smirnov statistic, has a long history, dating back to the mid-1970s. Originally, HC statistics were used in connection with goodness of fit (GOF) tests, but they recently gained some attention in the context of testing the global null hypothesis in high-dimensional data. The continuing interest in HC seems to be inspired by a series of nice asymptotic properties related to this statistic. For example, unlike Kolmogorov-Smirnov tests, GOF tests based on the HC statistic are known to be asymptotically sensitive in the moderate tails; hence the HC statistic is favorably applied for detecting the presence of signals in sparse mixture models. However, some questions around the asymptotic behavior of the HC statistic are still open. We focus on two of them, namely, why a specific intermediate range is crucial for GOF tests based on the HC statistic and why the convergence of the HC distribution to the limiting one is extremely slow. Moreover, the inconsistency between the asymptotic and finite-sample behavior of the HC statistic prompts us to provide a new HC test that has better finite-sample properties than the original HC test while showing the same asymptotics. This test is motivated by the asymptotic behavior of the so-called local levels related to the original HC test. By means of numerical calculations and simulations we show that the new HC test is typically more powerful than the original HC test in normal mixture models. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
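For orientation, a common textbook form of the HC statistic (maximized over the smallest fraction alpha0 of the ordered p-values) can be sketched as follows; the paper's refined test based on local levels is not implemented here:

import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    p = np.sort(np.asarray(pvals, dtype=float))
    n = p.size
    p = np.clip(p, 1.0 / n, 1.0 - 1.0 / n)  # guard the denominator at extreme p-values
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1.0 - p))
    k = max(1, int(alpha0 * n))             # restrict to the smallest alpha0 fraction
    return hc[:k].max()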
Measurement of Crystalline Lens Volume During Accommodation in a Lens Stretcher
Marussich, Lauren; Manns, Fabrice; Nankivil, Derek; Maceo Heilman, Bianca; Yao, Yue; Arrieta-Quintero, Esdras; Ho, Arthur; Augusteyn, Robert; Parel, Jean-Marie
2015-01-01
Purpose To determine if the lens volume changes during accommodation. Methods The study used data acquired on 36 cynomolgus monkey lenses that were stretched in a stepwise fashion to simulate disaccommodation. At each step, stretching force and dioptric power were measured and a cross-sectional image of the lens was acquired using an optical coherence tomography system. Images were corrected for refractive distortions and lens volume was calculated assuming rotational symmetry. The average change in lens volume was calculated and the relation between volume change and power change, and between volume change and stretching force, were quantified. Linear regressions of volume-power and volume-force plots were calculated. Results The mean (±SD) volume in the unstretched (accommodated) state was 97 ± 8 mm3. On average, there was a small but statistically significant (P = 0.002) increase in measured lens volume with stretching. The mean change in lens volume was +0.8 ± 1.3 mm3. The mean volume-power and volume-load slopes were −0.018 ± 0.058 mm3/D and +0.16 ± 0.40 mm3/g. Conclusions Lens volume remains effectively constant during accommodation, with changes that are less than 1% on average. This result supports a hypothesis that the change in lens shape with accommodation is accompanied by a redistribution of tissue within the capsular bag without significant compression of the lens contents or fluid exchange through the capsule. PMID:26161985
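The volume computation itself is a solid-of-revolution integral. A toy sketch under an assumed ellipsoidal lens contour (the study's contours came from distortion-corrected OCT images; the dimensions below are illustrative):

import numpy as np

thickness, max_radius = 4.6, 3.2  # mm, illustrative only
z = np.linspace(0.0, thickness, 400)
half = thickness / 2
r = max_radius * np.sqrt(np.clip(1 - ((z - half) / half) ** 2, 0.0, 1.0))
volume = np.pi * np.trapz(r ** 2, z)  # V = pi * integral of r(z)^2 dz (rotational symmetry)
print(round(volume, 1))  # ≈98 mm^3, near the ~97 mm^3 reported above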
NASA Astrophysics Data System (ADS)
Hartini, Entin; Andiwijayakusuma, Dinan
2014-09-01
This research developed a code for uncertainty analysis based on a statistical approach for assessing uncertainty in input parameters. In the burn-up calculation of fuel, uncertainty analysis was performed for the input parameters fuel density, coolant density, and fuel temperature. The calculation is performed during irradiation using the Monte Carlo N-Particle Transport code (MCNPX). The uncertainty method is based on the probability density function. The code is implemented as a Python script that couples with MCNPX for criticality and burn-up calculations. The simulation models the geometry of a PWR core in MCNPX at a power of 54 MW with UO2 pellet fuel. The calculation uses the ENDF/B-VI continuous-energy cross-section library. MCNPX requires nuclear data in ACE format, so interfaces were developed to obtain ACE-format nuclear data from ENDF through NJOY processing for temperature changes over a certain range.
Statistical tests for power-law cross-correlated processes
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1 ≤ ρDCCA(T,n) ≤ 1. Here we derive -1 ≤ ρDCCA(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine—and for nonoverlapping windows we derive—that the standard deviation of ρDCCA(T,n) tends to 1/T with increasing T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
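A compact sketch of the detrended cross-correlation coefficient for a single window size n, using non-overlapping windows and linear detrending for simplicity (the paper also treats overlapping windows):

import numpy as np

def rho_dcca(x, y, n, order=1):
    # Integrated (profile) series
    X = np.cumsum(x - np.mean(x))
    Y = np.cumsum(y - np.mean(y))
    f2xy = f2x = f2y = 0.0
    windows = 0
    for s in range(0, len(X) - n, n):
        t = np.arange(n + 1)
        sx, sy = X[s:s + n + 1], Y[s:s + n + 1]
        # Remove a polynomial trend of the given order from each window
        dx = sx - np.polyval(np.polyfit(t, sx, order), t)
        dy = sy - np.polyval(np.polyfit(t, sy, order), t)
        f2xy += np.mean(dx * dy)
        f2x += np.mean(dx * dx)
        f2y += np.mean(dy * dy)
        windows += 1
    # rho_DCCA = F2_DCCA / (F_DFA(x) * F_DFA(y))
    return (f2xy / windows) / np.sqrt((f2x / windows) * (f2y / windows))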
Use of power analysis to develop detectable significance criteria for sea urchin toxicity tests
Carr, R.S.; Biedenbach, J.M.
1999-01-01
When sufficient data are available, the statistical power of a test can be determined using power analysis procedures. The term “detectable significance” has been coined to refer to this criterion based on power analysis and past performance of a test. This power analysis procedure has been performed with sea urchin (Arbacia punctulata) fertilization and embryological development data from sediment porewater toxicity tests. Data from 3100 and 2295 tests for the fertilization and embryological development tests, respectively, were used to calculate the criteria and regression equations describing the power curves. Using Dunnett's test, a minimum significant difference (MSD) (β = 0.05) of 15.5% and 19% for the fertilization test, and 16.4% and 20.6% for the embryological development test, for α ≤ 0.05 and α ≤ 0.01, respectively, were determined. The use of this second criterion reduces type I (false positive) errors and helps to establish a critical level of difference based on the past performance of the test.
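The MSD criterion follows from the usual treatment-versus-control contrast. A simplified sketch in which Dunnett's multiplier is approximated by an ordinary one-sided t quantile, so it slightly understates the true Dunnett MSD; all inputs are illustrative:

import numpy as np
from scipy import stats

def msd(mse, n_per_group, df_error, alpha=0.05):
    # Minimum significant difference for one treatment-vs-control comparison,
    # in the units of the response (here, percent fertilization or development)
    tcrit = stats.t.ppf(1 - alpha, df_error)  # plain t in place of Dunnett's multiplier
    return tcrit * np.sqrt(2.0 * mse / n_per_group)

print(msd(mse=150.0, n_per_group=5, df_error=20))  # illustrative inputs only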
Evaluating and Reporting Statistical Power in Counseling Research
ERIC Educational Resources Information Center
Balkin, Richard S.; Sheperis, Carl J.
2011-01-01
Despite recommendations from the "Publication Manual of the American Psychological Association" (6th ed.) to include information on statistical power when publishing quantitative results, authors seldom include analysis or discussion of statistical power. The rationale for discussing statistical power is addressed, approaches to using "G*Power" to…
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
In vivo Comet assay--statistical analysis and power calculations of mice testicular cells.
Hansen, Merete Kjær; Sharma, Anoop Kumar; Dybdahl, Marianne; Boberg, Julie; Kulahci, Murat
2014-11-01
The in vivo Comet assay is a sensitive method for evaluating DNA damage. A recurrent concern is how to analyze the data appropriately and efficiently. A popular approach is to summarize the raw data into a summary statistic prior to the statistical analysis. However, consensus on which summary statistic to use has yet to be reached. Another important consideration concerns the assessment of proper sample sizes in the design of Comet assay studies. This study aims to identify a statistic suitably summarizing the % tail DNA of mice testicular samples in Comet assay studies. A second aim is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636-97-5, CAS no. 85-28-9, CAS no. 13674-87-8, CAS no. 43100-38-5 and CAS no. 60965-26-6. Testicular cells were examined using the alkaline version of the Comet assay and the DNA damage was quantified as % tail DNA using a fully automatic scoring system. From the raw data 23 summary statistics were examined. A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells. Copyright © 2014 Elsevier B.V. All rights reserved.
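The power curves described above derive from the variance components of a linear mixed-effects model. A normal-approximation sketch with placeholder variance components (the fitted values are in the paper):

import numpy as np
from scipy import stats

def comet_power(delta, var_animal, var_gel, n_animals, n_gels, alpha=0.05):
    # Standard error of a dose-vs-control difference in group means when the
    # response (median log % tail DNA) has random animal and gel components
    se = np.sqrt(2 * (var_animal / n_animals + var_gel / (n_animals * n_gels)))
    z = stats.norm.ppf(1 - alpha / 2)
    d = delta / se
    return stats.norm.sf(z - d) + stats.norm.cdf(-z - d)

# Power to detect a 2-fold change with 6 animals and 2 gels per animal
print(comet_power(delta=np.log(2), var_animal=0.3, var_gel=0.1,
                  n_animals=6, n_gels=2))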
Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan
2016-03-09
Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in relation to MI size alone and levels of biochemical markers in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated to calculate the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If the average CMR scan day between treatment and control arms differed by 1 day, sample size needs to be increased by 54% (77 vs 50) to avoid scan day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
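The simulation logic is straightforward to reproduce in miniature. A hedged sketch with placeholder MSI moments (the trial-derived distributions are in the paper), searching for 90% power against a 25% treatment effect:

import numpy as np
from scipy import stats

def simulated_power(n_per_arm, mean=50.0, sd=20.0, effect=0.25,
                    alpha=0.05, sims=10000, seed=1):
    rng = np.random.default_rng(seed)
    ctrl = rng.normal(mean, sd, (sims, n_per_arm))
    trt = rng.normal(mean * (1 + effect), sd, (sims, n_per_arm))  # treatment raises MSI
    pvals = stats.ttest_ind(trt, ctrl, axis=1).pvalue
    return np.mean(pvals < alpha)

n = 10
while simulated_power(n) < 0.90:  # smallest n per arm reaching 90% power
    n += 1
print(n)  # ≈54 under these placeholder moments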
A general solution strategy of modified power method for higher mode solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr
2016-01-15
A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) the eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) a stabilization technique for statistical fluctuations using multi-cycle accumulations. The numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate the fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high order polynomial equations required by Booth's original method with the simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behaviors in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation and the higher eigenmodes up to 4th order are reported for the first time in this paper. Highlights: •The modified power method is applied to continuous energy Monte Carlo simulation. •A transfer matrix is introduced to generalize the modified power method. •All-mode-based population control is applied to obtain the higher eigenmodes. •Statistical fluctuations can be greatly reduced using accumulated tally results. •Fission source convergence is accelerated with higher mode solutions.
A New Goodness of Fit Test for Normality with Mean and Variance Unknown.
1981-12-01
be realized, since fewer random deviates may have to be generated in order to get consistent critical values at the desired α levels. [The remainder of this abstract is a garbled power table; the recoverable content compares the straightforward and reflection calculation methods for the Kolmogorov-Smirnov statistic when the actual population is Cauchy, giving empirical powers at α levels .20, .15, .10, .05, and .01 for sample sizes n = 10 and n = 25.]
Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheung, WanYin; Zhang, Jie; Florita, Anthony
2015-12-08
Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature attributed only approximately 0.12% to the NRMSE of the power output as opposed to 7.44% from the forecasted solar irradiance.
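For reference, the headline metric takes only a few lines; normalizing by plant capacity is our assumption, as the report may normalize differently:

import numpy as np

def nrmse(forecast_kw, observed_kw, capacity_kw=51.0):
    f = np.asarray(forecast_kw, dtype=float)
    o = np.asarray(observed_kw, dtype=float)
    # Root mean squared error, normalized by plant capacity (assumed convention)
    return np.sqrt(np.mean((f - o) ** 2)) / capacity_kw

print(nrmse([30.0, 42.0, 18.0], [28.5, 45.0, 15.0]))  # toy numbers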
Durand, Casey P
2013-01-01
Statistical interactions are a common component of data analysis across a broad range of scientific disciplines. However, the statistical power to detect interactions is often undesirably low. One solution is to elevate the Type 1 error rate so that important interactions are not missed in a low power situation. To date, no study has quantified the effects of this practice on power in a linear regression model. A Monte Carlo simulation study was performed. A continuous dependent variable was specified, along with three types of interactions: continuous variable by continuous variable; continuous by dichotomous; and dichotomous by dichotomous. For each of the three scenarios, the interaction effect sizes, sample sizes, and Type 1 error rate were varied, resulting in a total of 240 unique simulations. In general, power to detect the interaction effect was either so low or so high at α = 0.05 that raising the Type 1 error rate only served to increase the probability of including a spurious interaction in the model. A small number of scenarios were identified in which an elevated Type 1 error rate may be justified. Routinely elevating Type 1 error rate when testing interaction effects is not an advisable practice. Researchers are best served by positing interaction effects a priori and accounting for them when conducting sample size calculations.
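A minimal version of the continuous-by-continuous scenario can be sketched as follows; the effect size, sample size, and simulation count are illustrative, not the study's 240 settings:

import numpy as np
import statsmodels.api as sm

def interaction_power(n=200, beta_int=0.1, alpha=0.05, sims=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        x1, x2 = rng.normal(size=n), rng.normal(size=n)
        y = 0.3 * x1 + 0.3 * x2 + beta_int * x1 * x2 + rng.normal(size=n)
        X = sm.add_constant(np.column_stack([x1, x2, x1 * x2]))
        if sm.OLS(y, X).fit().pvalues[3] < alpha:  # p-value of the interaction term
            hits += 1
    return hits / sims

# Raising alpha from .05 to .10 buys only modest power for a weak interaction
print(interaction_power(alpha=0.05), interaction_power(alpha=0.10))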
Reuschel, Anna; Bogatsch, Holger; Barth, Thomas; Wiedemann, Renate
2010-11-01
To compare the intraoperative and postoperative outcomes of conventional longitudinal phacoemulsification and torsional phacoemulsification. Department of Ophthalmology, University of Leipzig, Germany. Randomized single-center clinical trial. Eyes with senile cataract were randomized to have phacoemulsification using the Infiniti Vision System and the torsional mode (OZil) or conventional longitudinal mode. Primary outcomes were corrected distance visual acuity (CDVA) and central endothelial cell density (ECD), calculated according to the Conference on Harmonisation-E9 Guidelines in which missing values were substituted by the median in each group (primary analysis) and the loss was then calculated using actual data (secondary analysis). Secondary outcomes were ultrasound (US) time, cumulative dissipated energy (CDE), and percentage total equivalent power in position 3. Postoperative follow-up was at 3 months. The mean preoperative CDVA was 0.41 logMAR in the torsional group and 0.38 logMAR in the longitudinal group, improving to 0.07 logMAR postoperatively in both groups. The mean ECD loss was 7.2% ± 4.6% in the torsional group (72 patients) and 7.1% ± 4.4% in the longitudinal group (76 patients), with no statistically significant differences in the primary analysis (P = .342) or secondary analysis (P = .906). The mean US time, CDE, and percentage total equivalent power in position 3 were statistically significantly lower in the torsional group (98 patients) than in the longitudinal group (94 patients) (P<.001). The torsional mode was as safe as the longitudinal mode in phacoemulsification for age-related cataract. Copyright © 2010 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
A statistical survey of ultralow-frequency wave power and polarization in the Hermean magnetosphere.
James, Matthew K; Bunce, Emma J; Yeoman, Timothy K; Imber, Suzanne M; Korth, Haje
2016-09-01
We present a statistical survey of ultralow-frequency wave activity within the Hermean magnetosphere using the entire MErcury Surface, Space ENvironment, GEochemistry, and Ranging magnetometer data set. This study is focused upon wave activity with frequencies <0.5 Hz, typically below local ion gyrofrequencies, in order to determine if field line resonances similar to those observed in the terrestrial magnetosphere may be present. Wave activity is mapped to the magnetic equatorial plane of the magnetosphere and to magnetic latitude and local times on Mercury using the KT14 magnetic field model. Wave power mapped to the planetary surface indicates the average location of the polar cap boundary. Compressional wave power is dominant throughout most of the magnetosphere, while azimuthal wave power close to the dayside magnetopause provides evidence that interactions between the magnetosheath and the magnetopause such as the Kelvin-Helmholtz instability may be driving wave activity. Further evidence of this is found in the average wave polarization: left-handed polarized waves dominate the dawnside magnetosphere, while right-handed polarized waves dominate the duskside. A possible field line resonance event is also presented, where a time-of-flight calculation is used to provide an estimated local plasma mass density of ∼240 amu cm^-3.
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.
Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P
2017-08-23
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered-some very seriously so-but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al.
Predicting Constraints on Ultra-Light Axion Parameters due to LSST Observations
NASA Astrophysics Data System (ADS)
Given, Gabriel; Grin, Daniel
2018-01-01
Ultra-light axions (ULAs) are a type of dark matter or dark energy candidate (depending on the mass) that are predicted to have a mass between $10^{-33}$ and $10^{-18}$ eV. The Large Synoptic Survey Telescope (LSST) is expected to provide a large number of weak lensing observations, which will lower the statistical uncertainty on the convergence power spectrum. I began work with Daniel Grin to predict how accurately the data from the LSST will be able to constrain ULA properties. I wrote Python code that takes a matter power spectrum calculated by axionCAMB and converts it to a convergence power spectrum. My code then takes derivatives of the convergence power spectrum with respect to several cosmological parameters; these derivatives will be used in Fisher Matrix analysis to determine the sensitivity of LSST observations to axion parameters.
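The Fisher-matrix step can be sketched generically; cl() and cov_inv below are hypothetical stand-ins for the axionCAMB-based convergence spectrum and its covariance:

import numpy as np

def fisher_matrix(cl, theta0, cov_inv, eps=1e-3):
    # F_ij = d(C_l)/d(theta_i) . Cov^-1 . d(C_l)/d(theta_j),
    # with derivatives by central finite differences
    theta0 = np.asarray(theta0, dtype=float)
    derivs = []
    for i in range(theta0.size):
        h = eps * max(abs(theta0[i]), 1.0)
        up, dn = theta0.copy(), theta0.copy()
        up[i] += h
        dn[i] -= h
        derivs.append((cl(up) - cl(dn)) / (2 * h))
    D = np.vstack(derivs)        # shape: (n_params, n_ell)
    return D @ cov_inv @ D.T

# Marginalized 1-sigma forecast errors: np.sqrt(np.diag(np.linalg.inv(F)))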
The Lyman-α power spectrum—CMB lensing convergence cross-correlation
Chiang, Chi-Ting; Slosar, Anže
2018-01-11
We investigate the three-point correlation between the Lyman-α forest and the CMB weak lensing convergence (⟨δFδFκ⟩), expressed as the cross-correlation between the CMB weak lensing field and local variations in the forest power spectrum. In addition to the standard gravitational bispectrum term, we note the existence of a non-standard systematic term coming from mis-estimation of the mean flux over the finite length of Lyman-α skewers. We numerically calculate the angular cross-power spectrum and discuss its features. We integrate it into the zero-lag correlation function and compare our predictions with recent results by Doux et al. We find that our predictions are statistically consistent with the measurement, and including the systematic term improves the agreement with the measurement. We comment on the implication of the response of the Lyman-α forest power spectrum to the long-wavelength density perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finley, Cathy
2014-04-30
This report contains the results from research aimed at improving short-range (0-6 hour) hub-height wind forecasts in the NOAA weather forecast models through additional data assimilation and model physics improvements for use in wind energy forecasting. Additional meteorological observing platforms including wind profilers, sodars, and surface stations were deployed for this study by NOAA and DOE, and additional meteorological data at or near wind turbine hub height were provided by South Dakota State University and WindLogics/NextEra Energy Resources over a large geographical area in the U.S. Northern Plains for assimilation into NOAA research weather forecast models. The resulting improvements in wind energy forecasts based on the research weather forecast models (with the additional data assimilation and model physics improvements) were examined in many different ways and compared with wind energy forecasts based on the current operational weather forecast models to quantify the forecast improvements important to power grid system operators and wind plant owners/operators participating in energy markets. Two operational weather forecast models (OP_RUC, OP_RAP) and two research weather forecast models (ESRL_RAP, HRRR) were used as the base wind forecasts for generating several different wind power forecasts for the NextEra Energy wind plants in the study area. Power forecasts were generated from the wind forecasts in a variety of ways, from very simple to quite sophisticated, as they might be used by a wide range of both general users and commercial wind energy forecast vendors. The error characteristics of each of these types of forecasts were examined and quantified using bulk error statistics for both the local wind plant and the system aggregate forecasts. The wind power forecast accuracy was also evaluated separately for high-impact wind energy ramp events. The overall bulk error statistics calculated over the first six hours of the forecasts at both the individual wind plant and at the system-wide aggregate level over the one year study period showed that the research weather model-based power forecasts (all types) had lower overall error rates than the current operational weather model-based power forecasts, both at the individual wind plant level and at the system aggregate level. The bulk error statistics of the various model-based power forecasts were also calculated by season and model runtime/forecast hour as power system operations are more sensitive to wind energy forecast errors during certain times of year and certain times of day. The results showed that there were significant differences in seasonal forecast errors between the various model-based power forecasts. The results from the analysis of the various wind power forecast errors by model runtime and forecast hour showed that the forecast errors were largest during the times of day that have increased significance to power system operators (the overnight hours and the morning/evening boundary layer transition periods), but the research weather model-based power forecasts showed improvement over the operational weather model-based power forecasts at these times.
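Bulk error statistics of the kind described (bias, MAE, RMSE by season and forecast hour) might be computed along these lines; the column names and data here are hypothetical stand-ins, not the study's files.

# Sketch: bulk wind-power forecast error statistics by model, season, and
# forecast hour with pandas (synthetic data; columns are hypothetical).
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 1000
df = pd.DataFrame({
    "model": rng.choice(["OP_RAP", "HRRR"], n),
    "valid_time": pd.Timestamp("2012-01-01")
                  + pd.to_timedelta(rng.integers(0, 8760, n), unit="h"),
    "forecast_hour": rng.integers(1, 7, n),
    "observed_mw": rng.uniform(0, 100, n),
})
df["forecast_mw"] = df["observed_mw"] + rng.normal(0, 10, n)
df["error"] = df["forecast_mw"] - df["observed_mw"]
df["season"] = df["valid_time"].dt.month % 12 // 3   # 0=DJF, 1=MAM, 2=JJA, 3=SON

stats = (df.groupby(["model", "season", "forecast_hour"])["error"]
           .agg(bias="mean",
                mae=lambda e: e.abs().mean(),
                rmse=lambda e: np.sqrt((e ** 2).mean())))
print(stats.head())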
Zipkin, Elise F.; Kinlan, Brian P.; Sussman, Allison; Rypkema, Diana; Wimer, Mark; O'Connell, Allan F.
2015-01-01
Estimating patterns of habitat use is challenging for marine avian species because seabirds tend to aggregate in large groups and it can be difficult to locate both individuals and groups in vast marine environments. We developed an approach to estimate the statistical power of discrete survey events to identify species-specific hotspots and coldspots of long-term seabird abundance in marine environments. We illustrate our approach using historical seabird data from survey transects in the U.S. Atlantic Ocean Outer Continental Shelf (OCS), an area that has been divided into “lease blocks” for proposed offshore wind energy development. For our power analysis, we examined whether discrete lease blocks within the region could be defined as hotspots (3 × mean abundance in the OCS) or coldspots (1/3 ×) for individual species within a given season. For each of 74 species/season combinations, we determined which of eight candidate statistical distributions (ranging in their degree of skewedness) best fit the count data. We then used the selected distribution and estimates of regional prevalence to calculate and map statistical power to detect hotspots and coldspots, and estimate the p-value from Monte Carlo significance tests that specific lease blocks are in fact hotspots or coldspots relative to regional average abundance. The power to detect species-specific hotspots was higher than that of coldspots for most species because species-specific prevalence was relatively low (mean: 0.111; SD: 0.110). The number of surveys required for adequate power (> 0.6) was large for most species (tens to hundreds) using this hotspot definition. Regulators may need to accept higher proportional effect sizes, combine species into groups, and/or broaden the spatial scale by combining lease blocks in order to determine optimal placement of wind farms. Our power analysis approach provides a general framework for both retrospective analyses and future avian survey design and is applicable to a broad range of research and conservation problems.
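A minimal sketch of the Monte Carlo power calculation for hotspot detection, assuming negative binomial survey counts (one plausible skewed candidate among the eight distributions the authors considered); all parameter values are illustrative.

# Sketch: Monte Carlo power to flag a "hotspot" lease block (3x regional
# mean abundance) from n survey counts, assuming negative binomial counts.
import numpy as np

def hotspot_power(n_surveys, mu=0.5, k=0.3, effect=3.0, alpha=0.05,
                  nsim=500, nperm=199, seed=1):
    rng = np.random.default_rng(seed)
    p_null, p_hot = k / (k + mu), k / (k + effect * mu)
    hits = 0
    for _ in range(nsim):
        obs_mean = rng.negative_binomial(k, p_hot, n_surveys).mean()
        # One-sided Monte Carlo significance test against the regional mean.
        null_means = rng.negative_binomial(k, p_null, (nperm, n_surveys)).mean(axis=1)
        p_value = (1 + np.sum(null_means >= obs_mean)) / (nperm + 1)
        hits += p_value <= alpha
    return hits / nsim

for n in (10, 50, 100):
    print(n, hotspot_power(n))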
ERIC Educational Resources Information Center
Sinharay, Sandip
2017-01-01
Karabatsos compared the power of 36 person-fit statistics using receiver operating characteristics curves and found the "H[superscript T]" statistic to be the most powerful in identifying aberrant examinees. He found three statistics, "C", "MCI", and "U3", to be the next most powerful. These four statistics,…
ecode - Electron Transport Algorithm Testing v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Franke, Brian C.; Olson, Aaron J.; Bruss, Donald Eugene
2016-10-05
ecode is a Monte Carlo code used for testing algorithms related to electron transport. The code can read basic physics parameters, such as energy-dependent stopping powers and screening parameters. The code permits simple planar geometries of slabs or cubes. Parallelization consists of domain replication, with work distributed at the start of the calculation and statistical results gathered at the end of the calculation. Some basic routines (such as input parsing, random number generation, and statistics processing) are shared with the Integrated Tiger Series codes. A variety of algorithms for uncertainty propagation are incorporated based on the stochastic collocation and stochastic Galerkin methods. These permit uncertainty only in the total and angular scattering cross sections. The code contains algorithms for simulating stochastic mixtures of two materials. The physics is approximate, ranging from mono-energetic and isotropic scattering to screened Rutherford angular scattering and Rutherford energy-loss scattering (simple electron transport models). No production of secondary particles is implemented, and no photon physics is implemented.
The topology of large Open Connectome networks for the human brain.
Gastner, Michael T; Ódor, Géza
2016-06-07
The structural human connectome (i.e. the network of fiber connections in the brain) can be analyzed at ever finer spatial resolution thanks to advances in neuroimaging. Here we analyze several large data sets for the human brain network made available by the Open Connectome Project. We apply statistical model selection to characterize the degree distributions of graphs containing up to nodes and edges. A three-parameter generalized Weibull (also known as a stretched exponential) distribution is a good fit to most of the observed degree distributions. For almost all networks, simple power laws cannot fit the data, but in some cases there is statistical support for power laws with an exponential cutoff. We also calculate the topological (graph) dimension D and the small-world coefficient σ of these networks. While σ suggests a small-world topology, we found that D < 4, showing that long-distance connections provide only a small correction to the topology of the embedding three-dimensional space.
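The model-selection step (stretched-exponential versus power-law fits to a degree distribution) might look like the following sketch; the degree data are synthetic, and the comparison uses AIC rather than the authors' exact procedure.

# Sketch: compare a stretched-exponential (Weibull) fit with a power-law fit
# for a degree sequence via AIC (synthetic degrees stand in for a connectome).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
degrees = rng.weibull(0.6, 5000) * 50 + 1          # synthetic degree data

weib = stats.weibull_min.fit(degrees, floc=0)      # (shape, loc, scale)
pl = stats.pareto.fit(degrees, floc=0)             # power-law-like alternative
aic = lambda logl, k: 2 * k - 2 * logl
aic_weib = aic(np.sum(stats.weibull_min.logpdf(degrees, *weib)), 3)
aic_pl = aic(np.sum(stats.pareto.logpdf(degrees, *pl)), 3)
print("Weibull AIC:", aic_weib, " power-law AIC:", aic_pl)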
Modeling of Yb3+/Er3+-codoped microring resonators
NASA Astrophysics Data System (ADS)
Vallés, Juan A.; Gălătuş, Ramona
2015-03-01
The performance of a highly Yb3+/Er3+-codoped phosphate glass add-drop microring resonator is numerically analyzed. The model assumes resonant behaviour of both pump and signal powers, and the dependence of the pump intensity build-up inside the microring resonator and of the signal transfer functions at the device's through and drop ports are evaluated. Detailed equations for the evolution of the rare-earth ion level population densities and the propagation of the optical powers inside the microring resonator are included in the model. Moreover, due to the high dopant concentrations considered, the microscopic statistical formalism based on the statistical average of the excitation probability of the Er3+ ion at a microscopic level has been used to describe energy-transfer inter-atomic mechanisms. Realistic parameters and working conditions are used for the calculations. Requirements to achieve amplification and laser oscillation within these devices are obtained as a function of rare earth ion concentration and coupling losses.
Ringham, Brandy M; Kreidler, Sarah M; Muller, Keith E; Glueck, Deborah H
2016-07-30
Multilevel and longitudinal studies are frequently subject to missing data. For example, biomarker studies for oral cancer may involve multiple assays for each participant. Assays may fail, resulting in missing data values that can be assumed to be missing completely at random. Catellier and Muller proposed a data analytic technique to account for data missing at random in multilevel and longitudinal studies. They suggested modifying the degrees of freedom for both the Hotelling-Lawley trace F statistic and its null case reference distribution. We propose parallel adjustments to approximate power for this multivariate test in studies with missing data. The power approximations use a modified non-central F statistic, which is a function of (i) the expected number of complete cases, (ii) the expected number of non-missing pairs of responses, or (iii) the trimmed sample size, which is the planned sample size reduced by the anticipated proportion of missing data. The accuracy of the method is assessed by comparing the theoretical results to the Monte Carlo simulated power for the Catellier and Muller multivariate test. Over all experimental conditions, the closest approximation to the empirical power of the Catellier and Muller multivariate test is obtained by adjusting power calculations with the expected number of complete cases. The utility of the method is demonstrated with a multivariate power analysis for a hypothetical oral cancer biomarkers study. We describe how to implement the method using standard, commercially available software products and give example code. Copyright © 2015 John Wiley & Sons, Ltd.
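The general ingredient behind such approximations, power from a noncentral F distribution with a sample size "trimmed" by the anticipated missing-data fraction, can be sketched as follows; this is an illustrative simplification, not the full Catellier-Muller adjustment.

# Sketch: generic power from a noncentral F distribution, with the planned
# sample size reduced by an anticipated missing-data fraction.
from scipy import stats

def f_test_power(n_planned, miss_frac, ndf, effect_var_ratio, alpha=0.05):
    n = int(n_planned * (1 - miss_frac))     # trimmed sample size
    ddf = n - ndf - 1                        # denominator degrees of freedom
    nc = n * effect_var_ratio                # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, ndf, ddf)
    return 1 - stats.ncf.cdf(f_crit, ndf, ddf, nc)

print(f_test_power(60, 0.0, 2, 0.15))   # no missing data
print(f_test_power(60, 0.2, 2, 0.15))   # 20% anticipated missing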
Measurement of CO2, CO, SO2, and NO emissions from coal-based thermal power plants in India
NASA Astrophysics Data System (ADS)
Chakraborty, N.; Mukherjee, I.; Santra, A. K.; Chowdhury, S.; Chakraborty, S.; Bhattacharya, S.; Mitra, A. P.; Sharma, C.
Measurements of CO2 (direct GHG) and CO, SO2, and NO (indirect GHGs) were conducted on-line at some of the coal-based thermal power plants in India. The objective of the study was three-fold: to quantify the measured emissions in terms of emission coefficient per kg of coal and per kWh of electricity, to calculate the total possible emission from Indian thermal power plants, and subsequently to compare them with some previous studies. An IMR 2800P flue gas analyzer was used on-line to measure the emission rates of CO2, CO, SO2, and NO at 11 generating units of different ratings. Certain quality assurance (QA) and quality control (QC) techniques were also adopted to gather the data so as to avoid any ambiguity in subsequent data interpretation. To aid data interpretation, the requisite statistical parameters (standard deviation and arithmetic mean) for the measured emissions have also been calculated. The emission coefficients determined for CO2, CO, SO2, and NO have been compared with their corresponding values as obtained in the studies conducted by other groups. The total emissions of CO2, CO, SO2, and NO calculated on the basis of the emission coefficients for the year 2003-2004 have been found to be 465.667, 1.583, 4.058, and 1.129 Tg, respectively.
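The scaling from an emission coefficient to a national total is simple arithmetic; a sketch with illustrative numbers (not the paper's measurements):

# Sketch: scaling an emission coefficient to an annual total; the numbers
# here are illustrative, not the paper's measurements.
coal_burnt_tg = 300.0                      # hypothetical annual coal use, Tg
coef_co2_kg_per_kg = 1.55                  # hypothetical kg CO2 per kg coal
total_co2_tg = coal_burnt_tg * coef_co2_kg_per_kg   # Tg coal x kg/kg = Tg CO2
print(f"annual CO2: {total_co2_tg:.1f} Tg")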
NASA Astrophysics Data System (ADS)
Saint-Drenan, Yves-Marie; Wald, Lucien; Ranchin, Thierry; Dubus, Laurent; Troccoli, Alberto
2018-05-01
Classical approaches to the calculation of the photovoltaic (PV) power generated in a region from meteorological data require the knowledge of the detailed characteristics of the plants, which are most often not publicly available. An approach is proposed with the objective to obtain the best possible assessment of power generated in any region without having to collect detailed information on PV plants. The proposed approach is based on a model of PV plant coupled with a statistical distribution of the prominent characteristics of the configuration of the plant and is tested over Europe. The generated PV power is first calculated for each of the plant configurations frequently found in a given region and then aggregated taking into account the probability of occurrence of each configuration. A statistical distribution has been constructed from detailed information obtained for several thousands of PV plants representing approximately 2% of the total number of PV plants in Germany and was then adapted to other European countries by taking into account changes in the optimal PV tilt angle as a function of the latitude and meteorological conditions. The model has been run with bias-adjusted ERA-interim data as meteorological inputs. The results have been compared to estimates of the total PV power generated in two countries, France and Germany, as provided by the corresponding transmission system operators. Relative RMSEs of 4.2% and 3.8% and relative biases of -2.4% and 0.1% were found with three-hourly data for France and Germany. A validation against estimates of the country-wide PV-power generation provided by the ENTSO-E for 16 European countries has also been conducted. This evaluation is made difficult by the uncertainty on the installed capacity corresponding to the ENTSO-E data, but it nevertheless demonstrates that the model output and TSO data are highly correlated in most countries. Given the simplicity of the proposed approach, these results are very encouraging. The approach is particularly suited to climatic timescales, both historical and future climates, as demonstrated here.
Iqbal, Khursheed; Tran, Diana A; Li, Arthur X; Warden, Charles; Bai, Angela Y; Singh, Purnima; Madaj, Zach B; Winn, Mary E; Wu, Xiwei; Pfeifer, Gerd P; Szabó, Piroska E
2016-07-12
In a recent paper, we described our efforts in search for evidence supporting epigenetic transgenerational inheritance caused by endocrine disrupter chemicals. One aspect of our study was to compare genome-wide DNA methylation changes in the vinclozolin-exposed fetal male germ cells (n = 3) to control samples (n = 3), their counterparts in the next, unexposed, generation (n = 3 + 3) and also in adult spermatozoa (n = 2 + 2) in both generations. We reported finding zero common hits in the intersection of these four comparisons. In our interpretation, this result did not support the notion that DNA methylation provides a mechanism for a vinclozolin-induced transgenerational male infertility phenotype. In response to criticism by Guerrero-Bosagna regarding our statistical power in the above study, here we provide power calculations to clarify the statistical power of our study and to show the validity of our conclusions. We also explain here how our data is misinterpreted in the commentary by Guerrero-Bosagna by leaving out important data points from consideration.Please see related Correspondence article: xxx (13059_2016_982) and related Research article: http://genomebiology.biomedcentral.com/articles/10.1186/s13059-015-0619-z.
Lee, Tai-Sung; Hu, Yuan; Sherborne, Brad; Guo, Zhuyan; York, Darrin M
2017-07-11
We report the implementation of the thermodynamic integration method on the pmemd module of the AMBER 16 package on GPUs (pmemdGTI). The pmemdGTI code typically delivers over 2 orders of magnitude of speed-up relative to a single CPU core for the calculation of ligand-protein binding affinities with no statistically significant numerical differences and thus provides a powerful new tool for drug discovery applications.
Circuit analysis method for thin-film solar cell modules
NASA Technical Reports Server (NTRS)
Burger, D. R.
1985-01-01
The design of a thin-film solar cell module is dependent on the probability of occurrence of pinhole shunt defects. Using known or assumed defect density data, dichotomous population statistics can be used to calculate the number of defects expected in a module. Probability theory is then used to assign the defective cells to individual strings in a selected series-parallel circuit design. Iterative numerical calculation is used to calculate I-V curves using cell test values or assumed defective cell values as inputs. Good and shunted cell I-V curves are added to determine the module output power and I-V curve. Different levels of shunt resistance can be selected to model different defect levels.
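The defect-count step can be sketched with standard binomial statistics; the cell count and defect probability below are hypothetical.

# Sketch: expected pinhole-defect count in a module from a defect density,
# using binomial statistics over cells (parameters are hypothetical).
from scipy import stats

n_cells = 300                  # cells per module
p_defect = 0.02                # probability a given cell has a shunt defect
dist = stats.binom(n_cells, p_defect)
print("expected defective cells:", dist.mean())
print("P(more than 10 defective):", dist.sf(10))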
The structural properties of PbF2 by molecular dynamics
NASA Astrophysics Data System (ADS)
Chergui, Y.; Nehaoua, N.; Telghemti, B.; Guemid, S.; Deraddji, N. E.; Belkhir, H.; Mekki, D. E.
2010-08-01
This work presents the use of molecular dynamics (MD) and the Dl_Poly code to study the structure of fluoride glass after melting and quenching. We simulated the liquid-to-solid processing phase, applying rapid quenching at different rates to see the effect of the quenching rate on devitrification. This simulation technique has become a powerful tool for investigating the microscopic behaviour of matter as well as for calculating macroscopic observable quantities. As basic results, we calculated interatomic distances, angles and statistics, which help us to determine the geometric form and the structure of PbF2. These results agree with experimental results reported in the literature.
Correlation Functions and Glass Structure
NASA Astrophysics Data System (ADS)
Chergui, Y.; Nehaoua, N.; Telghemti, B.; Guemid, S.; Deraddji, N. E.; Belkhir, H.; Mekki, D. E.
2011-04-01
This work presents the use of molecular dynamics (MD) and the Dl Poly code to study the structure of fluoride glass after melting and quenching. We simulated the liquid-to-solid processing phase, applying rapid quenching at different rates to see the effect of the quenching rate on devitrification. This simulation technique has become a powerful tool for investigating the microscopic behaviour of matter as well as for calculating macroscopic observable quantities. As basic results, we calculated interatomic distances, angles and statistics, which help us to determine the geometric form and the structure of PbF2. These results agree with experimental results reported in the literature.
NASA Technical Reports Server (NTRS)
Manning, Robert M.
1986-01-01
A rain attenuation prediction model is described for use in calculating satellite communication link availability for any specific location in the world that is characterized by an extended record of rainfall. Such a formalism is necessary for the accurate assessment of such availability predictions in the case of the small user-terminal concept of the Advanced Communication Technology Satellite (ACTS) Project. The model employs the theory of extreme value statistics to generate the necessary statistical rainrate parameters from rain data in the form compiled by the National Weather Service. These location dependent rain statistics are then applied to a rain attenuation model to obtain a yearly prediction of the occurrence of attenuation on any satellite link at that location. The predictions of this model are compared to those of the Crane Two-Component Rain Model and some empirical data and found to be very good. The model is then used to calculate rain attenuation statistics at 59 locations in the United States (including Alaska and Hawaii) for the 20 GHz downlinks and 30 GHz uplinks of the proposed ACTS system. The flexibility of this modeling formalism is such that it allows a complete and unified treatment of the temporal aspects of rain attenuation that leads to the design of an optimum stochastic power control algorithm, the purpose of which is to efficiently counter such rain fades on a satellite link.
Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun
2008-01-01
Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistical packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to run non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Both exact calculation and permutation- and gamma-based approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy over existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
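For contrast with the bounds the authors derive, the slow permutation baseline for rank product p-values looks roughly like the sketch below (synthetic data; 200 permutations); the paper's contribution is exactly to avoid this cost.

# Sketch: rank product statistic with a permutation p-value (the slow
# baseline that exact bounds and fast approximations improve upon).
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(1000, 4))          # 1000 molecules, 4 replicates
data[0] -= 2.0                             # one truly down-regulated molecule

# Rank molecules within each replicate (1 = most down-regulated).
ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1
rp = np.exp(np.log(ranks).mean(axis=1))    # geometric mean rank per molecule

# Null distribution: permute ranks within each replicate independently.
perm = np.array([np.exp(np.log(rng.permuted(ranks, axis=0)).mean(axis=1))
                 for _ in range(200)])
p0 = (perm <= rp[0]).mean()                # permutation p-value for molecule 0
print("rank product:", rp[0], "p ~", p0)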
Primordial Black Holes from First Principles (Overview)
NASA Astrophysics Data System (ADS)
Lam, Casey; Bloomfield, Jolyon; Moss, Zander; Russell, Megan; Face, Stephen; Guth, Alan
2017-01-01
Given a power spectrum from inflation, our goal is to calculate, from first principles, the number density and mass spectrum of primordial black holes that form in the early universe. Previously, these have been calculated using the Press-Schechter formalism and some demonstrably dubious rules of thumb regarding predictions of black hole collapse. Instead, we use Monte Carlo integration methods to sample field configurations from a power spectrum combined with numerical relativity simulations to obtain a more accurate picture of primordial black hole formation. We demonstrate how this can be applied for both Gaussian perturbations and the more interesting (for primordial black holes) theory of hybrid inflation. One of the tools that we employ is a variant of the BBKS formalism for computing the statistics of density peaks in the early universe. We discuss the issue of overcounting due to subpeaks that can arise from this approach (the "cloud-in-cloud" problem). MIT UROP Office, Paul E. Gray (1954) Endowed Fund.
López-Sanromán, F J; de la Riva Andrés, S; Holmbak-Petersen, R; Pérez-Nogués, M; Forés Jackson, P; Santos González, M
2014-10-01
The locomotor pattern alterations produced after the administration of a sublingual detomidine gel were measured by an accelerometric method in horses. Using a randomized two-way crossover design, all animals (n = 6) randomly received either detomidine gel or a placebo administered sublingually. A triaxial accelerometric device was used for gait assessment 15 minutes before (baseline) and every 10 minutes after each treatment for a period of 180 minutes. Eight different parameters were calculated, including speed, stride frequency, stride length, regularity, dorsoventral, propulsion, mediolateral, and total power. Force of acceleration and the three components of power were also calculated. Statistically significant differences were observed between groups in all the parameters but stride length. The majority of significant changes started between 30 and 70 minutes after drug administration and lasted for 160 minutes. This route of administration is useful in horses in which prolonged sedation is required and stability is a major concern. Copyright © 2014 Elsevier Ltd. All rights reserved.
Environmental impact assessment of coal power plants in operation
NASA Astrophysics Data System (ADS)
Bartan, Ayfer; Kucukali, Serhat; Ar, Irfan
2017-11-01
Coal power plants constitute an important component of the energy mix in many countries. However, coal power plants can pose several environmental risks, such as climate change and biodiversity loss. In this study, a tool has been proposed to calculate the environmental impact of a coal-fired thermal power plant in operation by using multi-criteria scoring and a fuzzy logic method. We take into account the following environmental parameters in our tool: CO, SO2, NOx, particulate matter, fly ash, bottom ash, the cooling water intake impact on aquatic biota, and thermal pollution. In the proposed tool, the boundaries of the fuzzy logic membership functions were established taking into account the threshold values of the environmental parameters defined in the environmental legislation. Scoring of these environmental parameters was done with the statistical analysis of the environmental monitoring data of the power plant and by using the documented evidence obtained during the site visits. The proposed method estimates each environmental impact factor level separately and then aggregates them by calculating the Environmental Impact Score (EIS). The proposed method uses environmental monitoring data and documented evidence instead of simulation models. The proposed method has been applied to 4 coal-fired power plants in operation in Turkey. The Environmental Impact Score was obtained for each power plant and their environmental performances were compared. It is expected that these environmental impact assessments will contribute to the decision-making process for environmental investments in those plants. The main advantage of the proposed method is its flexibility and ease of use.
Calculating solar photovoltaic potential on residential rooftops in Kailua Kona, Hawaii
NASA Astrophysics Data System (ADS)
Carl, Caroline
As carbon based fossil fuels become increasingly scarce, renewable energy sources are coming to the forefront of policy discussions around the globe. As a result, the State of Hawaii has implemented aggressive goals to achieve energy independence by 2030. Renewable electricity generation using solar photovoltaic technologies plays an important role in these efforts. This study utilizes geographic information systems (GIS) and Light Detection and Ranging (LiDAR) data with statistical analysis to identify how much solar photovoltaic potential exists for residential rooftops in the town of Kailua Kona on Hawaii Island. This study helps to quantify the magnitude of possible solar photovoltaic (PV) potential for Solar World SW260 monocrystalline panels on residential rooftops within the study area. Three main areas were addressed in the execution of this research: (1) modeling solar radiation, (2) estimating available rooftop area, and (3) calculating PV potential from incoming solar radiation. High resolution LiDAR data and Esri's solar modeling tools were utilized to calculate incoming solar radiation on a sample set of digitized rooftops. Photovoltaic potential for the sample set was then calculated with the equations developed by Suri et al. (2005). Sample set rooftops were analyzed using a statistical model to identify the correlation between rooftop area and lot size. Least squares multiple linear regression analysis was performed to identify the influence of slope, elevation, rooftop area, and lot size on the modeled PV potential values. The equations built from these statistical analyses of the sample set were applied to the entire study region to calculate total rooftop area and PV potential. The total study area statistical analysis findings estimate that the photovoltaic electric energy generation potential for rooftops is approximately 190,000,000 kWh annually. This is approximately 17 percent of the total electricity the utility provided to the entire island in 2012. Based on these findings, full rooftop PV installations on the 4,460 study area homes could provide enough energy to power over 31,000 homes annually. The methods developed here suggest a means to calculate rooftop area and PV potential in a region with limited available data. The use of LiDAR point data offers a major opportunity for future research in both automating rooftop inventories and calculating incoming solar radiation and PV potential for homeowners.
Canary, Jana D; Blizzard, Leigh; Barry, Ronald P; Hosmer, David W; Quinn, Stephen J
2016-05-01
Generalized linear models (GLM) with a canonical logit link function are the primary modeling technique used to relate a binary outcome to predictor variables. However, noncanonical links can offer more flexibility, producing convenient analytical quantities (e.g., probit GLMs in toxicology) and desired measures of effect (e.g., relative risk from log GLMs). Many summary goodness-of-fit (GOF) statistics exist for logistic GLM. Their properties make the development of GOF statistics relatively straightforward, but it can be more difficult under noncanonical links. Although GOF tests for logistic GLM with continuous covariates (GLMCC) have been applied to GLMCCs with log links, we know of no GOF tests in the literature specifically developed for GLMCCs that can be applied regardless of the link function chosen. We generalize the Tsiatis GOF statistic originally developed for logistic GLMCCs (TG) so that it can be applied under any link function. Further, we show that the algebraically related Hosmer-Lemeshow (HL) and Pigeon-Heyse (J2) statistics can be applied directly. In a simulation study, TG, HL, and J2 were used to evaluate the fit of probit, log-log, complementary log-log, and log models, all calculated with a common grouping method. The TG statistic consistently maintained Type I error rates, while those of HL and J2 were often lower than expected if terms with little influence were included. Generally, the statistics had similar power to detect an incorrect model. An exception occurred when a log GLMCC was incorrectly fit to data generated from a logistic GLMCC. In this case, TG had more power than HL or J2. © 2015 John Wiley & Sons Ltd/London School of Economics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werth, D.; Chen, K. F.
2013-08-22
The ability of water managers to maintain adequate supplies in coming decades depends, in part, on future weather conditions, as climate change has the potential to alter river flows from their current values, possibly rendering them unable to meet demand. Reliable climate projections are therefore critical to predicting the future water supply for the United States. These projections cannot be provided solely by global climate models (GCMs), however, as their resolution is too coarse to resolve the small-scale climate changes that can affect hydrology, and hence water supply, at regional to local scales. A process is needed to 'downscale' the GCM results to the smaller scales and feed this into a surface hydrology model to help determine the ability of rivers to provide adequate flow to meet future needs. We apply a statistical downscaling to GCM projections of precipitation and temperature through the use of a scaling method. This technique involves the correction of the cumulative distribution functions (CDFs) of the GCM-derived temperature and precipitation results for the 20th century, and the application of the same correction to 21st century GCM projections. This is done for three meteorological stations located within the Coosa River basin in northern Georgia, and is used to calculate future river flow statistics for the upper Coosa River. Results are compared to the historical Coosa River flow upstream from Georgia Power Company's Hammond coal-fired power plant and to flows calculated with the original, unscaled GCM results to determine the impact of potential changes in meteorology on future flows.
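The CDF-correction ("quantile mapping") idea can be sketched in a few lines; the gamma-distributed series below are synthetic stand-ins for station observations and GCM output.

# Sketch: CDF-matching (quantile mapping) bias correction of GCM output
# against observations, then applied to future projections.
import numpy as np

def quantile_map(obs_hist, gcm_hist, gcm_future, n_q=100):
    q = np.linspace(0.01, 0.99, n_q)
    obs_q = np.quantile(obs_hist, q)
    gcm_q = np.quantile(gcm_hist, q)
    # Map each future value through the historical GCM CDF onto the obs CDF.
    return np.interp(gcm_future, gcm_q, obs_q)

rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 3.0, 5000)            # hypothetical observed series
gcm20 = rng.gamma(2.0, 2.4, 5000)          # biased 20th-century GCM run
gcm21 = rng.gamma(2.2, 2.4, 5000)          # 21st-century projection
corrected = quantile_map(obs, gcm20, gcm21)
print(corrected.mean(), gcm21.mean())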
Dong, Jing; Zhang, Yaqin; Zhang, Haining; Jia, Zhijie; Zhang, Suhua; Wang, Xiaogang
2018-01-01
To compare the axial length (AL), anterior chamber depth (ACD) and intraocular lens power (IOLP) of IOLMaster and Ultrasound in normal, long and short eyes. Seventy-four normal eyes (≥ 22 mm and ≤ 25 mm), 74 long eyes (> 25 mm) and 78 short eyes (< 22 mm) underwent AL and ACD measurements with both devices in the order of IOLMaster followed by Ultrasound. The IOLP were calculated using a free online LADAS IOL formula calculator. The difference in AL and IOLP between IOLMaster and Ultrasound was statistically significant when all three groups were combined. The difference in ACD between IOLMaster and Ultrasound was statistically significant in the normal group (P<0.001) and short eye group (P<0.001) but not the long eye group (P = 0.465). For the IOLP difference between IOLMaster and Ultrasound in the normal group, the percentages of absolute IOLP differences <0.5 D, ≥0.5 and <0.75 D, ≥0.75 and <1.0 D, and ≥1.0 D were 90.5%, 8.1%, 1.4% and 0%, respectively. For the long eye group, they were 90.5%, 5.4%, 4.1% and 0%, respectively. For the short eye group, they were 61.5%, 23.1%, 10.3%, and 5.1%, respectively. IOLMaster and Ultrasound have statistically significant differences in AL measurements and IOLP (using the LADAS formula) for normal, long and short eyes. The two instruments agree regarding ACD measurements for the long eye group, but differ for the normal and short eye groups. Moreover, the high percentage of IOLP differences greater than 0.5 D in absolute value in the short eye group is noteworthy.
TableViewer for Herschel Data Processing
NASA Astrophysics Data System (ADS)
Zhang, L.; Schulz, B.
2006-07-01
The TableViewer utility is a GUI tool written in Java to support interactive data processing and analysis for the Herschel Space Observatory (Pilbratt et al. 2001). The idea was inherited from a prototype written in IDL (Schulz et al. 2005). It allows one to graphically view and analyze tabular data organized in columns with equal numbers of rows. It can be run either as a standalone application, where data access is restricted to FITS (FITS 1999) files only, or it can be run from the Quick Look Analysis (QLA) or Interactive Analysis (IA) command line, from where objects are also accessible. The graphic display is very versatile, allowing plots in either linear or log scales. Zooming, panning, and changing data columns are performed rapidly using a group of navigation buttons. Selecting and de-selecting fields of data points controls the input to simple analysis tasks like building a statistics table or generating power spectra. The binary data stored in a TableDataset, a Product, or in FITS files can also be displayed as tabular data, where values in individual cells can be modified. TableViewer provides several processing utilities which, besides calculation of statistics (either for all channels or for selected channels) and calculation of power spectra, allow one to convert/repair datasets by changing the unit name of data columns and by modifying data values in columns with a simple calculator tool. Interactively selected data can be separated out, and modified data sets can be saved to FITS files. The tool will be very helpful especially in the early phases of Herschel data analysis, when quick access to the contents of data products is important. TableDataset and Product are Java classes defined in herschel.ia.dataset.
McLawhorn, Alexander S; Levack, Ashley E; Fields, Kara G; Sheha, Evan D; DelPizzo, Kathryn R; Sink, Ernest L
2016-03-01
Periacetabular osteotomy (PAO) reorients the acetabular cartilage through a complex series of pelvic osteotomies, which risks significant blood loss often necessitating blood transfusion. Therefore, it is important to identify effective strategies to manage blood loss and decrease morbidity after PAO. The purpose of this study was to determine the association of epsilon-aminocaproic acid (EACA), an antifibrinolytic agent, with blood loss from PAO. Ninety-three patients out of 110 consecutive patients that underwent unilateral PAO for acetabular dysplasia met inclusion criteria. Fifty patients received EACA intraoperatively. Demographics, autologous blood predonation, anesthetic type, intraoperative estimated blood loss (EBL), cell-saver utilization, and transfusions were recorded. Total blood loss was calculated. Two-sample t-test and chi-square or Fisher's exact test were used as appropriate. The associations between EACA administration and calculated EBL, cell-saver utilization, intraoperative EBL, and maximum difference in postoperative hemoglobin were assessed via multiple regression, adjusting for confounders. Post hoc power analysis demonstrated sufficient power to detect a 250-mL difference in calculated EBL between groups. Alpha level was 0.05 for all tests. No demographic differences existed between groups. Mean blood loss and allogeneic transfusion rates were not statistically significant between groups (P = .093 and .170, respectively). There were no differences in cell-saver utilization, intraoperative EBL, and/or postoperative hemoglobin. There was a higher rate of autologous blood utilization in the group not receiving EACA because of a clinical practice change. EACA administration was not associated with a statistically significant reduction in blood loss or allogeneic transfusion in patients undergoing PAO. Copyright © 2016 Elsevier Inc. All rights reserved.
Statistical Power in Meta-Analysis
ERIC Educational Resources Information Center
Liu, Jin
2015-01-01
Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
NASA Technical Reports Server (NTRS)
Beverly, R. E., III
1982-01-01
A statistical model was developed for relating the temporal transmission parameters of a laser beam from a solar power satellite to observable meteorological data to determine the influence of weather on power reception at the earth-based receiver. Sites within 100 miles of existing high voltage transmission lines were examined and the model was developed for clear-sky and clouded conditions. The cases of total transmission through clouds at certain wavelengths, no transmission, and partial transmission were calculated for the cloud portion of the model. The study covered cirriform, stratiform, cumuliform, and mixed type clouds and the possibility of boring holes through the clouds with the beam. Utilization of weapons-quality beams for hole boring was found to yield power availability increases of 9-33%, although no beneficial effects could be predicted in regions of persistent cloud cover. An efficiency of 80% was determined as possible if several receptor sites were available within 200-300 miles of each other, thereby allowing changes of reception point in cases of unacceptable meteorological conditions.
2016-12-01
Determining the Statistical Power of the KS and AD Tests via Monte Carlo Simulation. Statistical power is the probability of correctly rejecting the null hypothesis when the alternative hypothesis is true. ... real-world data to test the accuracy of the simulation. Statistical comparison of these metrics can be necessary when making such a determination.
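Monte Carlo estimation of the power of such a test (here the one-sample KS test against a shifted normal alternative) can be sketched as follows; the shift, sample sizes, and simulation counts are illustrative.

# Sketch: statistical power of the one-sample KS test against a shifted
# alternative, estimated by Monte Carlo simulation.
import numpy as np
from scipy import stats

def ks_power(n, shift, alpha=0.05, nsim=2000, seed=5):
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(nsim):
        x = rng.normal(shift, 1.0, n)           # data from the alternative
        p = stats.kstest(x, "norm").pvalue      # H0: standard normal
        rejections += p < alpha
    return rejections / nsim

for n in (20, 50, 100):
    print(n, ks_power(n, 0.3))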
Inferring Characteristics of Sensorimotor Behavior by Quantifying Dynamics of Animal Locomotion
NASA Astrophysics Data System (ADS)
Leung, KaWai
Locomotion is one of the most well-studied topics in animal behavioral studies. Much fundamental and clinical research makes use of the locomotion of an animal model to explore various aspects of sensorimotor behavior. In the past, most of these studies focused on population averages of a specific trait due to limitations in data collection and processing power. With recent advances in computer vision and statistical modeling techniques, it is now possible to track and analyze large amounts of behavioral data. In this thesis, I present two projects that aim to infer the characteristics of sensorimotor behavior by quantifying the dynamics of locomotion of the nematode Caenorhabditis elegans and the fruit fly Drosophila melanogaster, shedding light on the statistical dependence between sensing and behavior. In the first project, I investigate the possibility of inferring noxious sensory information from the behavior of Caenorhabditis elegans. I develop a statistical model to infer the heat stimulus level perceived by individual animals from their stereotyped escape responses after stimulation by an IR laser. The model allows quantification of analgesic-like effects of chemical agents or genetic mutations in the worm. At the same time, the method is able to differentiate perturbations of locomotion behavior that go beyond affecting the sensory system. With this model I propose experimental designs that allow statistically significant identification of analgesic-like effects. In the second project, I investigate the relationship of energy budget and stability of locomotion in determining the walking speed distribution of Drosophila melanogaster during aging. The locomotion stability at different age groups is estimated from video recordings using Floquet theory. I calculate the power consumption at different locomotion speeds using a biomechanics model. In conclusion, the power consumption, not stability, predicts the locomotion speed distribution at different ages.
1995-09-01
... path and aircraft attitude and other flight or aircraft parameters
• Calculations in the frequency domain (Fast Fourier Transform)
• Data analysis: signal filtering, image processing of video and radar data, parameter identification, statistical analysis, power spectral density, Fast Fourier Transform
... airspeeds both fast and slow, altitude, load factor both above and below 1 g, centers of gravity (fore and aft), and with system/subsystem failures. Whether
A circular dichroism and structural study of the inclusion complex artemisinin-β-cyclodextrin
NASA Astrophysics Data System (ADS)
Marconi, Giancarlo; Monti, Sandra; Manoli, Francesco; Degli Esposti, Alessandra; Mayer, Bernd
2004-01-01
The inclusion complex between the powerful antimalarial agent Artemisinin and β-cyclodextrin has been studied by means of Circular Dichroism and elucidated by Density Functional Theory calculations on the isolated molecule combined with a statistical Monte Carlo search of the most stable geometry of the complex. The results evidence a host-guest structure in full agreement with the almost unaffected functionality of the drug, which is found to experience a significantly hydrophilic environment when complexed.
NASA Technical Reports Server (NTRS)
Hakkinen, Raimo J; Richardson, A S , Jr
1957-01-01
Sinusoidally oscillating downwash and lift produced on a simple rigid airfoil were measured and compared with calculated values. Statistically stationary random downwash and the corresponding lift on a simple rigid airfoil were also measured and the transfer functions between their power spectra determined. The random experimental values are compared with theoretically approximated values. Limitations of the experimental technique and the need for more extensive experimental data are discussed.
Two characteristic temperatures for a Bose-Einstein condensate of a finite number of particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Idziaszek, Z.; Institut fuer Theoretische Physik, Universitaet Hannover, D-30167 Hannover,; Rzazewski, K.
2003-09-01
We consider two characteristic temperatures for a Bose-Einstein condensate, which are related to certain properties of the condensate statistics. We calculate them for an ideal gas confined in power-law traps and show that they approach the critical temperature in the limit of a large number of particles. The considered characteristic temperatures can be useful in studies of Bose-Einstein condensates of a finite number of atoms, indicating the point of a phase transition.
Counting your chickens before they're hatched: power analysis.
Jupiter, Daniel C
2014-01-01
How does an investigator know that he has enough subjects in his study design to have the predicted outcomes appear statistically significant? In this Investigators' Corner I discuss why such planning is necessary, give an intuitive introduction to the calculations needed to determine required sample sizes, and hint at some of the more technical difficulties inherent in this aspect of study planning. Copyright © 2014 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
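The calculation the column hints at, a two-sample t-test sample size for a target power, is a one-liner with statsmodels; the effect size, power, and alpha below are illustrative.

# Sketch: sample size per group for a two-sample t-test at 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"~{n_per_group:.0f} subjects per group for d = 0.5")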
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
Gene-environment studies: any advantage over environmental studies?
Bermejo, Justo Lorenzo; Hemminki, Kari
2007-07-01
Gene-environment studies have been motivated by the likely existence of prevalent low-risk genes that interact with common environmental exposures. The present study assessed the statistical advantage of the simultaneous consideration of genes and environment to investigate the effect of environmental risk factors on disease. In particular, we contemplated the possibility that several genes modulate the environmental effect. Environmental exposures, genotypes and phenotypes were simulated according to a wide range of parameter settings. Different models of gene-gene-environment interaction were considered. For each parameter combination, we estimated the probability of detecting the main environmental effect, the power to identify the gene-environment interaction and the frequency of environmentally affected individuals at which environmental and gene-environment studies show the same statistical power. The proportion of cases in the population attributable to the modeled risk factors was also calculated. Our data indicate that environmental exposures with weak effects may account for a significant proportion of the population prevalence of the disease. A general result was that, if the environmental effect was restricted to rare genotypes, the power to detect the gene-environment interaction was higher than the power to identify the main environmental effect. In other words, when few individuals contribute to the overall environmental effect, individual contributions are large and result in easily identifiable gene-environment interactions. Moreover, when multiple genes interacted with the environment, the statistical benefit of gene-environment studies was limited to those studies that included major contributors to the gene-environment interaction. The advantage of gene-environment over plain environmental studies also depends on the inheritance mode of the involved genes, on the study design and, to some extent, on the disease prevalence.
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
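A simplified version of such a sample-size calculation, for a paired comparison of mean segmentation accuracy under a normal approximation (the paper's formula additionally models reference-standard quality), might read as follows; the accuracy difference and SD are illustrative.

# Sketch: subjects needed to detect a difference in mean segmentation
# accuracy between two algorithms in a paired design (normal approximation).
from scipy import stats

def n_paired(delta, sd_diff, alpha=0.05, power=0.8):
    za = stats.norm.ppf(1 - alpha / 2)
    zb = stats.norm.ppf(power)
    return ((za + zb) * sd_diff / delta) ** 2

print(n_paired(delta=0.01, sd_diff=0.025))   # 1% accuracy gap, 2.5% SD of differences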
Menzel, Claudia; Hayn-Leichsenring, Gregor U; Langner, Oliver; Wiese, Holger; Redies, Christoph
2015-01-01
We investigated whether low-level processed image properties that are shared by natural scenes and artworks (but not veridical face photographs) affect the perception of facial attractiveness and age. Specifically, we considered the slope of the radially averaged Fourier power spectrum in a log-log plot. This slope is a measure of the distribution of spatial frequency power in an image. Images of natural scenes and artworks possess (compared to face images) a relatively shallow slope (i.e., increased high spatial frequency power). Since aesthetic perception might be based on the efficient processing of images with natural scene statistics, we assumed that the perception of facial attractiveness might also be affected by these properties. We calculated the Fourier slope and other beauty-associated measurements in face images and correlated them with ratings of attractiveness and age of the depicted persons (Study 1). We found that Fourier slope (in contrast to the other tested image properties) did not predict attractiveness ratings when we controlled for age. In Study 2A, we overlaid face images with random-phase patterns with different statistics. Patterns with a slope similar to those in natural scenes and artworks resulted in lower attractiveness and higher age ratings. In Studies 2B and 2C, we directly manipulated the Fourier slope of face images and found that images with shallower slopes were rated as more attractive. Additionally, attractiveness of unaltered faces was affected by the Fourier slope of a random-phase background (Study 3). Faces in front of backgrounds with statistics similar to natural scenes and faces were rated as more attractive. We conclude that facial attractiveness ratings are affected by specific image properties. An explanation might be the efficient coding hypothesis.
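Computing the slope of the radially averaged Fourier power spectrum in a log-log plot can be sketched as follows; the input here is white noise rather than a face image, so the expected slope is near zero.

# Sketch: slope of the radially averaged Fourier power spectrum of an image
# in a log-log plot (white-noise input for illustration).
import numpy as np

rng = np.random.default_rng(6)
img = rng.standard_normal((256, 256))

power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
y, x = np.indices(power.shape)
c = np.array(power.shape) // 2
r = np.hypot(y - c[0], x - c[1]).astype(int)

sums = np.bincount(r.ravel(), weights=power.ravel())
counts = np.bincount(r.ravel())
radial = sums / np.maximum(counts, 1)            # radially averaged spectrum
freqs = np.arange(1, min(c))                     # skip DC, stay inside Nyquist
slope = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)[0]
print("log-log slope:", slope)                   # ~0 for white noise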
Topologically protected charge transfer along the edge of a chiral p-wave superconductor
NASA Astrophysics Data System (ADS)
Gnezdilov, N. V.; van Heck, B.; Diez, M.; Hutasoit, Jimmy A.; Beenakker, C. W. J.
2015-09-01
The Majorana fermions propagating along the edge of a topological superconductor with px + ipy pairing deliver a shot noise power of (1/2)e²/h per eV of voltage bias. We calculate the full counting statistics of the transferred charge and find that it becomes trinomial in the low-temperature limit, distinct from the binomial statistics of charge-e transfer in a single-mode nanowire or charge-2e transfer through a normal-superconductor interface. All even-order correlators of current fluctuations have a universal quantized value, insensitive to disorder and decoherence. These electrical signatures are experimentally accessible, because they persist for temperatures and voltages large compared to the Thouless energy.
Piñero, David P; Camps, Vicente J; Mateo, Verónica; Ruiz-Fortes, Pedro
2012-08-01
To validate clinically in a normal healthy population an algorithm to correct the error in the keratometric estimation of corneal power based on the use of a variable keratometric index of refraction (n(k)). Medimar International Hospital (Oftalmar) and University of Alicante, Alicante, Spain. Case series. Corneal power was measured with a Scheimpflug photography-based system (Pentacam software version 1.14r01) in healthy eyes with no previous ocular surgery. In all cases, keratometric corneal power was also estimated using an adjusted value of n(k) that is dependent on the anterior corneal radius (r(1c)) as follows: n(kadj) = -0.0064286 r(1c) + 1.37688. Agreement between the Gaussian (P(c)(Gauss)) and adjusted keratometric (P(kadj)) corneal power values was evaluated. The study evaluated 92 eyes (92 patients; age range 15 to 64 years). The mean difference between P(c)(Gauss) and P(kadj) was -0.02 diopter (D) ± 0.22 (SD) (P=.43). A very strong, statistically significant correlation was found between both corneal powers (r = .994, P<.01). The range of agreement between P(c)(Gauss) and P(kadj) was 0.44 D, with limits of agreement of -0.46 and +0.42 D. In addition, a very strong, statistically significant correlation of the difference between P(c)(Gauss) and P(kadj) and the posterior corneal radius was found (r = 0.96, P<.01). The imprecision in the calculation of corneal power using keratometric estimation can be minimized in clinical practice by using a variable keratometric index that depends on the radius of the anterior corneal surface. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2012 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
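Reading the reported regression together with the standard keratometric relation P = (n_k - 1)/r gives a small worked sketch of the adjusted power calculation; combining the two in this way is our interpretation, not code from the study.

```python
# Sketch: adjusted keratometric corneal power using the variable index
# reported in the abstract, n_kadj = -0.0064286*r1c + 1.37688 (r1c =
# anterior corneal radius in mm). Combining it with the standard
# keratometric relation P = (n_k - 1)/r is our reading of the method,
# not code from the paper.
def adjusted_keratometric_power(r1c_mm):
    n_kadj = -0.0064286 * r1c_mm + 1.37688     # adjusted keratometric index
    return (n_kadj - 1.0) / (r1c_mm / 1000.0)  # radius in metres -> dioptres

# Example: a typical anterior corneal radius of 7.8 mm
print(round(adjusted_keratometric_power(7.8), 2))  # ≈ 41.89 D
```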
Overweight and pregnancy complications.
Abrams, B; Parker, J
1988-01-01
The association between increased prepregnancy weight for height and seven pregnancy complications was studied in a multi-racial sample of more than 4100 recent deliveries. Body mass indices were calculated and used to classify women as average weight (90-119 percent of ideal or BMI 19.21-25.60), moderately overweight (120-135 percent of ideal or BMI 25.61-28.90), and very overweight (greater than 135 percent of ideal or BMI greater than 28.91) prior to pregnancy. Compared to women of average weight for height, very overweight women had a higher risk of diabetes, hypertension, pregnancy-induced hypertension and primary cesarean section delivery. Moderately overweight women were also at higher risk than average for diabetes, pregnancy-induced hypertension and primary cesarean deliveries, but the relative risks were of a smaller magnitude than for very overweight women. With women of average prepregnancy body mass as reference, moderately elevated but nonsignificant relative risks were found for perinatal mortality in the very overweight group and for urinary tract infections in both overweight groups, and a decreased risk for anemia was found in the very overweight group. However, post-hoc power analyses indicated that the number of overweight women in the sample did not allow adequate statistical power to detect these small differences in risk. To overcome limitations associated with low statistical power, the results of three recent studies of these outcomes in very overweight pregnant women were combined and summarized using Mantel-Haenszel techniques. This second, larger analysis suggested that very overweight women are at significantly higher risk for all seven outcomes studied. Summary results for moderately overweight women could not be calculated, since only two of the studies had evaluated moderately overweight women separately. These latter results support other findings that both moderate overweight and very overweight are risk factors during pregnancy, with the highest risk occurring in the heaviest group. Although these results indicate that moderate overweight is a risk factor during pregnancy, additional studies are needed to confirm the impact of being 20-35 percent above ideal weight prior to pregnancy. The results of this analysis also imply that since the baseline incidence of many perinatal complications is low, studies relating overweight and pregnancy complications should include large enough samples of overweight women so that there is adequate statistical power to reliably detect differences in complication rates.
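For reference, the study's prepregnancy weight categories can be expressed as a simple BMI classifier; the cut-offs are taken directly from the text, while the underweight branch below is a placeholder, since the abstract does not describe BMI values under 19.21.

```python
# Sketch: classifying prepregnancy body mass index (BMI = kg/m^2) into
# the categories used in the abstract. The cut-offs are taken directly
# from the text; handling of BMI < 19.21 (underweight) is not described
# there and is our placeholder.
def bmi_category(weight_kg, height_m):
    bmi = weight_kg / height_m ** 2
    if bmi < 19.21:
        return bmi, "below study range"      # not classified in the abstract
    if bmi <= 25.60:
        return bmi, "average weight"         # 90-119% of ideal
    if bmi <= 28.90:
        return bmi, "moderately overweight"  # 120-135% of ideal
    return bmi, "very overweight"            # >135% of ideal

print(bmi_category(78.0, 1.65))  # (28.65..., 'moderately overweight')
```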
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W.
2002-01-01
A simple power law model consisting of a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy E_k to a steeper spectral index α2 > α1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter α1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramér-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramér-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
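For the simple (unbroken) power law case, the ML estimator has a well-known closed form; the sketch below uses that textbook estimator on ideal data and omits the detector response functions that the paper folds in.

```python
# Sketch: the closed-form ML estimator for the index of a simple
# (unbroken) power law f(E) ~ E^-alpha above a threshold E_min. This is
# the textbook estimator for ideal data; the paper additionally folds in
# detector response functions, which are omitted here.
import numpy as np

def ml_power_law_index(E, E_min):
    E = np.asarray(E, dtype=float)
    E = E[E >= E_min]
    return 1.0 + len(E) / np.log(E / E_min).sum()

# Check consistency (property P1) on synthetic data with true alpha = 2.7:
rng = np.random.default_rng(1)
u = rng.random(100_000)
E = 1.0 * (1 - u) ** (-1 / (2.7 - 1))   # inverse-CDF sampling, E_min = 1
print(ml_power_law_index(E, 1.0))       # ≈ 2.7
```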
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tratnyek, Paul G.; Bylaska, Eric J.; Weber, Eric J.
2017-01-01
Quantitative structure–activity relationships (QSARs) have long been used in the environmental sciences. More recently, molecular modeling and chemoinformatic methods have become widespread. These methods have the potential to expand and accelerate advances in environmental chemistry because they complement observational and experimental data with "in silico" results and analysis. The opportunities and challenges that arise at the intersection between statistical and theoretical in silico methods are most apparent in the context of properties that determine the environmental fate and effects of chemical contaminants (degradation rate constants, partition coefficients, toxicities, etc.). The main example of this is the calibration of QSARs using descriptor variable data calculated from molecular modeling, which can make QSARs more useful for predicting property data that are unavailable, but also can make them more powerful tools for diagnosis of fate determining pathways and mechanisms. Emerging opportunities for "in silico environmental chemical science" are to move beyond the calculation of specific chemical properties using statistical models and toward more fully in silico models, prediction of transformation pathways and products, incorporation of environmental factors into model predictions, integration of databases and predictive models into more comprehensive and efficient tools for exposure assessment, and extending the applicability of all the above from chemicals to biologicals and materials.
Quantum heat engine with coupled superconducting resonators
NASA Astrophysics Data System (ADS)
Hardal, Ali Ü. C.; Aslan, Nur; Wilson, C. M.; Müstecaplıoǧlu, Özgür E.
2017-12-01
We propose a quantum heat engine composed of two superconducting transmission line resonators interacting with each other via an optomechanical-like coupling. One resonator is periodically excited by a thermal pump. The incoherently driven resonator induces coherent oscillations in the other one due to the coupling. A limit cycle, indicating finite power output, emerges in the thermodynamical phase space. The system implements an all-electrical analog of a photonic piston. Instead of mechanical motion, the power output is obtained as a coherent electrical charging in our case. We explore the differences between the quantum and classical descriptions of our system by solving the quantum master equation and classical Langevin equations. Specifically, we calculate the mean number of excitations, second-order coherence, as well as the entropy, temperature, power, and mean energy to reveal the signatures of quantum behavior in the statistical and thermodynamic properties of the system. We find evidence of a quantum enhancement in the power output of the engine at low temperatures.
Optimization of the Heat Exchangers of a Thermoelectric Generation System
NASA Astrophysics Data System (ADS)
Martínez, A.; Vián, J. G.; Astrain, D.; Rodríguez, A.; Berrio, I.
2010-09-01
The thermal resistances of the heat exchangers have a strong influence on the electric power produced by a thermoelectric generator. In this work, the heat exchangers of a thermoelectric generator have been optimized in order to maximize the electric power generated. This thermoelectric generator harnesses heat from the exhaust gas of a domestic gas boiler. Statistical design of experiments was used to assess the influence of five factors on both the electric power generated and the pressure drop in the chimney: height of the generator, number of modules per meter of generator height, length of the fins of the hot-side heat exchanger (HSHE), length of the gap between fins of the HSHE, and base thickness of the HSHE. The electric power has been calculated using a computational model, whereas Fluent computational fluid dynamics (CFD) has been used to obtain the thermal resistances of the heat exchangers and the pressure drop. Finally, the thermoelectric generator has been optimized, taking into account the restrictions on the pressure drop.
Upper limits on the 21 cm power spectrum at z = 5.9 from quasar absorption line spectroscopy
NASA Astrophysics Data System (ADS)
Pober, Jonathan C.; Greig, Bradley; Mesinger, Andrei
2016-11-01
We present upper limits on the 21 cm power spectrum at z = 5.9 calculated from the model-independent limit on the neutral fraction of the intergalactic medium of x_HI < 0.06 + 0.05 (1σ) derived from dark pixel statistics of quasar absorption spectra. Using 21CMMC, a Markov chain Monte Carlo Epoch of Reionization analysis code, we explore the probability distribution of 21 cm power spectra consistent with this constraint on the neutral fraction. We present 99 per cent confidence upper limits of Δ²(k) < 10-20 mK² over a range of k from 0.5 to 2.0 h Mpc⁻¹, with the exact limit dependent on the sampled k mode. This limit can be used as a null test for 21 cm experiments: a detection of power at z = 5.9 in excess of this value is highly suggestive of residual foreground contamination or other systematic errors affecting the analysis.
Large-scale fluctuations in the cosmic ionizing background: the impact of beamed source emission
NASA Astrophysics Data System (ADS)
Suarez, Teresita; Pontzen, Andrew
2017-12-01
When modelling the ionization of gas in the intergalactic medium after reionization, it is standard practice to assume a uniform radiation background. This assumption is not always appropriate; models with radiative transfer show that large-scale ionization rate fluctuations can have an observable impact on statistics of the Lyman α forest. We extend such calculations to include beaming of sources, which has previously been neglected but which is expected to be important if quasars dominate the ionizing photon budget. Beaming has two effects: first, the physical number density of ionizing sources is enhanced relative to that directly observed; and second, the radiative transfer itself is altered. We calculate both effects in a hard-edged beaming model where each source has a random orientation, using an equilibrium Boltzmann hierarchy in terms of spherical harmonics. By studying the statistical properties of the resulting ionization rate and H I density fields at redshift z ∼ 2.3, we find that the two effects partially cancel each other; combined, they constitute a maximum 5 per cent correction to the power spectrum P_HI(k) at k = 0.04 h Mpc⁻¹. On very large scales (k < 0.01 h Mpc⁻¹) the source density renormalization dominates; it can reduce, by an order of magnitude, the contribution of ionizing shot noise to the intergalactic H I power spectrum. The effects of beaming should be considered when interpreting future observational data sets.
Low statistical power in biomedical science: a review of three human research domains.
Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R
2017-02-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M
2015-01-01
Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort and the power of a nested case-control and a case-cohort design. We compare formulas for power for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error for estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formula for the power of a nested case-control design and the power of a case-cohort design is directly connected to the power of a cohort study using the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients using a sampling design instead of analyzing the full cohort.
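Schoenfeld's formula, which the abstract takes as its starting point, has a compact two-group form that is easy to sketch; the nested case-control and case-cohort extensions discussed in the paper are not reproduced here.

```python
# Sketch: Schoenfeld's formula for the number of events required by a
# Cox proportional hazards model, in its standard two-group form
# D = (z_{1-a/2} + z_{1-b})^2 / (p(1-p) * ln(HR)^2). The abstract builds
# on this formula; the sampling-design extensions are not reproduced.
import math
from scipy.stats import norm

def schoenfeld_events(hazard_ratio, p_exposed=0.5, alpha=0.05, power=0.80):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z ** 2 / (p_exposed * (1 - p_exposed) * math.log(hazard_ratio) ** 2)

# e.g. to detect HR = 1.5 with balanced groups:
print(round(schoenfeld_events(1.5)))  # ≈ 191 events
```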
[Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].
Golder, W
1999-09-01
To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basal and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basal, and 0.5% advanced statistics. The corresponding figures in 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors make increasing use of statistical experts' opinion and programs. During the last decade, the use of statistical methods in German radiological journals has fundamentally improved, both quantitatively and qualitatively. Presently, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.
Francez, Pablo Abdon da Costa; Ribeiro-Rodrigues, Elzemar Martins; dos Santos, Sidney Emanuel Batista
2012-01-01
Allelic frequencies of 48 informative insertion-deletion (INDEL) loci were obtained from a sample set of 130 unrelated individuals living in Macapá, a city located in the northern Amazon region, in Brazil. The values of heterozygosity (H), polymorphic information content (PIC), power of discrimination (PD), power of exclusion (PE), matching probability (MP) and typical paternity index (TPI) were calculated and showed the forensic efficiency of these genetic markers. Based on the allele frequencies obtained for the population of Macapá, we estimated an interethnic admixture for the three parental groups (European, Native American and African) of, respectively, 50%, 21% and 29%. Comparing these allele frequencies with those of other Brazilian populations and the parental populations, statistically significant distances were found. The interpopulation genetic distance (F(ST) coefficients) to the present database ranged from F(ST)=0.0431 (p<0.00001) between Macapá and Belém to F(ST)=0.266 (p<0.00001) between Macapá and the Native American group. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
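The forensic efficiency parameters named in the abstract follow from a locus' allele frequencies under Hardy-Weinberg equilibrium; the sketch below uses the standard textbook definitions, which may differ from the paper's exact estimators in details such as sample-size corrections.

```python
# Sketch: the forensic summary statistics named in the abstract,
# computed from a locus' allele frequencies under Hardy-Weinberg
# equilibrium. These are the standard textbook definitions; the paper's
# exact estimators (e.g. sample-size corrections) may differ.
import numpy as np

def forensic_stats(p):
    p = np.asarray(p, dtype=float)            # allele frequencies, sum to 1
    s2, s4 = (p ** 2).sum(), (p ** 4).sum()
    H = 1 - s2                                # expected heterozygosity
    PIC = 1 - s2 - (s2 ** 2 - s4)             # polymorphic information content
    # genotype frequencies: homozygotes p_i^2, heterozygotes 2 p_i p_j
    geno = np.outer(p, p)
    g = np.concatenate([np.diag(geno), 2 * geno[np.triu_indices(len(p), 1)]])
    MP = (g ** 2).sum()                       # matching probability
    PD = 1 - MP                               # power of discrimination
    return H, PIC, MP, PD

print(forensic_stats([0.5, 0.5]))  # biallelic INDEL with equal frequencies
```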
NASA Astrophysics Data System (ADS)
Shaikh, Alauddin; Mallick, Nazrul Islam
2012-11-01
Introduction: The aim of this study was to find out the effects of plyometric training and weight training among university male students. Procedure: 60 male students, aged 19-25 years, from different colleges of the Burdwan University were randomly selected as subjects and assigned to three groups: the first served as the Weight Training Group (WTG), the second as the Plyometric Training Group (PTG), and the third as the Control Group (CG). Eight weeks of weight training and six weeks of plyometric training were given in the experiment accordingly. The control group was not given any training apart from its routine. The selected subjects were measured on their motor ability components: speed, endurance, explosive power and agility. ANCOVA was calculated for the statistical treatment. Findings: The plyometric training and weight training groups significantly increased speed, endurance, explosive power and agility. Conclusion: The plyometric training significantly improved speed, explosive power, muscular endurance and agility. The weight training programme significantly improved agility, muscular endurance and explosive power. Plyometric training is superior to weight training in improving explosive power, agility and muscular endurance.
Onshore Wind Farms: Value Creation for Stakeholders in Lithuania
NASA Astrophysics Data System (ADS)
Burinskienė, Marija; Rudzkis, Paulius; Kanopka, Adomas
With the costs of fossil fuel consistently rising worldwide over the last decade, the development of green technologies has become a major goal in many countries. Therefore the evaluation of wind power projects becomes a very important task. To estimate the value of technologies based on renewable resources also means taking into consideration the social, economic, environmental, and scientific value of such projects. This article deals with the economic evaluation of electricity generation costs of onshore wind farms in Lithuania and the key factors that influence wind power projects, and offers a better understanding of the socio-economic context behind wind power projects. To achieve these goals, this article makes use of empirical data from Lithuania's wind power farms as well as data about the investment environment of the country. Based on empirical data from wind power parks, the research investigates the average wind farm generation efficiency in Lithuania. Employing statistical methods, the return on investment of wind farms in Lithuania is calculated. The value created for every party involved and the total value of the wind farm are estimated according to Stakeholder theory.
Toward "Constructing" the Concept of Statistical Power: An Optical Analogy.
ERIC Educational Resources Information Center
Rogers, Bruce G.
This paper presents a visual analogy that may be used by instructors to teach the concept of statistical power in statistical courses. Statistical power is mathematically defined as the probability of rejecting a null hypothesis when that null is false, or, equivalently, the probability of detecting a relationship when it exists. The analogy…
Use of meteorological information in the risk analysis of a mixed wind farm and solar power plant portfolio
NASA Astrophysics Data System (ADS)
Mengelkamp, H.-T.; Bendel, D.
2010-09-01
The renewable energy industry has rapidly developed during the last two decades, and so have the needs for high-quality, comprehensive meteorological services. It is, however, only recently that international financial institutions bundle wind farms and solar power plants and offer shares in these aggregate portfolios. The monetary value of a mixed wind farm and solar power plant portfolio is determined by legal and technical aspects, the expected annual energy production of each wind farm and solar power plant, and the associated uncertainty of the energy yield estimation or the investment risk. Building an aggregate portfolio will reduce the overall uncertainty through diversification, in contrast to the single wind farm/solar power plant energy yield uncertainty. This is similar to equity funds based on a variety of companies or products. Meteorological aspects contribute to the diversification in various ways. There is the uncertainty in the estimation of the expected long-term mean energy production of the wind and solar power plants. Different components of uncertainty have to be considered depending on whether the power plant is already in operation or in the planning phase. The uncertainty related to a wind farm in the planning phase comprises the methodology of the wind potential estimation and the uncertainty of the site-specific wind turbine power curve as well as the uncertainty of the wind farm effect calculation. The uncertainty related to a solar power plant in the pre-operational phase comprises the uncertainty of the radiation data base and that of the performance curve. The long-term mean annual energy yield of operational wind farms and solar power plants is estimated on the basis of the actual energy production and its relation to a climatologically stable long-term reference period. These components of uncertainty are of a technical nature and based on subjective estimations rather than on a statistically sound data analysis. And then there is the temporal and spatial variability of the wind speed and radiation. Their influence on the overall risk is determined by the regional distribution of the power plants. These uncertainty components are calculated on the basis of wind speed observations and simulations and satellite-derived radiation data. The respective volatility (temporal variability) is calculated from the site-specific time series and influences the portfolio through regional correlation. For an exemplary portfolio comprising fourteen wind farms and eight solar power plants, the annual mean energy production to be expected is calculated, and the different components of uncertainty are estimated for each single wind farm and solar power plant and for the portfolio as a whole. The reduction in uncertainty (or risk) through bundling the wind farms and the solar power plants (the portfolio effect) is calculated by Markowitz' Modern Portfolio Theory. This theory is applied separately to the wind farm and the solar power plant bundles and to the combination of both. The combination of wind and photovoltaic assets clearly shows potential for risk reduction. Even assets with a comparably low expected return can lead to a significant risk reduction, depending on their individual characteristics.
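The portfolio effect invoked at the end of the abstract reduces to the Markowitz variance formula; a toy example with invented volatilities and correlations illustrates how bundling imperfectly correlated wind and solar assets lowers the combined relative uncertainty.

```python
# Sketch: the "portfolio effect" via Markowitz portfolio theory, as the
# abstract applies it to bundled wind and solar plants. Portfolio
# variance is w' S w for covariance matrix S; weights, volatilities and
# correlations below are illustrative, not the study's data.
import numpy as np

vol = np.array([0.12, 0.12, 0.10])     # per-plant yield uncertainty (rel.)
corr = np.array([[1.0, 0.6, 0.2],      # wind-wind correlation high,
                 [0.6, 1.0, 0.2],      # wind-solar correlation low
                 [0.2, 0.2, 1.0]])
cov = corr * np.outer(vol, vol)        # covariance matrix S
w = np.array([1 / 3] * 3)              # equal-weighted portfolio

portfolio_sigma = np.sqrt(w @ cov @ w)
print(portfolio_sigma)                 # ≈ 0.085: below every single-plant sigma
```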
NASA Astrophysics Data System (ADS)
Yan, Yulong; Yang, Chao; Peng, Lin; Li, Rumei; Bai, Huiling
2016-10-01
Facing the large electricity demand, thermal power generation remains the main source of electricity supply in China, accounting for 78.19% of total electricity production in 2013. Three types of thermal power plants, including a coal-fired power plant, a coal gangue-fired power plant and a biomass-fired power plant, were chosen to survey the source profile, chemical reactivity and emission factor of VOCs during thermal power generation. The most abundant compounds generated during coal- and coal gangue-fired power generation were 1-Butene, Styrene, n-Hexane and Ethylene, while those for biomass-fired power generation were Propene, 1-Butene, Ethyne and Ethylene. The B/T ratios during thermal power generation in this study were 0.8-2.6, which can be considered characteristic of coal and biomass burning. The field-tested VOCs emission factors from the coal-, coal gangue- and biomass-fired power plants were determined to be 0.88, 0.38 and 3.49 g/GJ, or, equivalently, 0.023, 0.005 and 0.057 g/kg, with total VOCs emissions of 44.07, 0.08 and 0.45 Gg in 2013, respectively. Previous emission inventories, which calculated VOCs emissions using earlier emission factors, may therefore have overestimated the amount of VOCs emitted from thermal power generation in China.
NASA Astrophysics Data System (ADS)
Bevis, Neil; Hindmarsh, Mark; Kunz, Martin; Urrestilla, Jon
2007-03-01
We present the first field-theoretic calculations of the contribution made by cosmic strings to the temperature power spectrum of the cosmic microwave background (CMB). Unlike previous work, in which strings were modeled as idealized one-dimensional objects, we evolve the simplest example of an underlying field theory containing local U(1) strings, the Abelian Higgs model. Limitations imposed by finite computational volumes are overcome using the scaling property of string networks and a further extrapolation related to the lessening of the string width in comoving coordinates. The strings and their decay products, which are automatically included in the field theory approach, source metric perturbations via their energy-momentum tensor, the unequal-time correlation functions of which are used as input into the CMB calculation phase. These calculations involve the use of a modified version of CMBEASY, with results provided over the full range of relevant scales. We find that the string tension μ required to normalize to the WMAP 3-year data at multipole ℓ=10 is Gμ=[2.04±0.06(stat.)±0.12(sys.)]×10⁻⁶, where we have quoted statistical and systematic errors separately, and G is Newton's constant. This is a factor of 2-3 higher than values in current circulation.
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-09-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage.
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
The Power of Neuroimaging Biomarkers for Screening Frontotemporal Dementia
McMillan, Corey T.; Avants, Brian B.; Cook, Philip; Ungar, Lyle; Trojanowski, John Q.; Grossman, Murray
2014-01-01
Frontotemporal dementia (FTD) is a clinically and pathologically heterogeneous neurodegenerative disease that can result from either frontotemporal lobar degeneration (FTLD) or Alzheimer's disease (AD) pathology. It is critical to establish statistically powerful biomarkers that can achieve substantial cost-savings and increase the feasibility of clinical trials. We assessed three broad categories of neuroimaging methods to screen underlying FTLD and AD pathology in a clinical FTD series: global measures (e.g., ventricular volume), anatomical volumes of interest (VOIs) (e.g., hippocampus) using a standard atlas, and data-driven VOIs using Eigenanatomy. We evaluated clinical FTD patients (N=93) with cerebrospinal fluid, gray matter (GM) MRI, and diffusion tensor imaging (DTI) to assess whether they had underlying FTLD or AD pathology. Linear regression was performed to identify the optimal VOIs for each method in a training dataset and then we evaluated classification sensitivity and specificity in an independent test cohort. Power was evaluated by calculating minimum sample sizes (mSS) required in the test classification analyses for each model. The data-driven VOI analysis using a multimodal combination of GM MRI and DTI achieved the greatest classification accuracy (89% sensitive; 89% specific) and required a lower minimum sample size (N=26) relative to anatomical VOI and global measures. We conclude that a data-driven VOI approach employing Eigenanatomy provides more accurate classification, benefits from increased statistical power in unseen datasets, and therefore provides a robust method for screening underlying pathology in FTD patients for entry into clinical trials. PMID:24687814
On damage detection in wind turbine gearboxes using outlier analysis
NASA Astrophysics Data System (ADS)
Antoniadou, Ifigeneia; Manson, Graeme; Dervilis, Nikolaos; Staszewski, Wieslaw J.; Worden, Keith
2012-04-01
The proportion of worldwide installed wind power in power systems increases over the years as a result of the steadily growing interest in renewable energy sources. Still, the advantages offered by the use of wind power are overshadowed by the high operational and maintenance costs, resulting in the low competitiveness of wind power in the energy market. In order to reduce the costs of corrective maintenance, the application of condition monitoring to gearboxes becomes highly important, since gearboxes are among the wind turbine components with the most frequent failure observations. While condition monitoring of gearboxes in general is common practice, with various methods having been developed over the last few decades, wind turbine gearbox condition monitoring faces a major challenge: the detection of faults under the time-varying load conditions prevailing in wind turbine systems. Classical time and frequency domain methods fail to detect faults under variable load conditions, due to the temporary effect that these faults have on vibration signals. This paper uses the statistical discipline of outlier analysis for the damage detection of gearbox tooth faults. A simplified two-degree-of-freedom gearbox model considering nonlinear backlash, time-periodic mesh stiffness and static transmission error, simulates the vibration signals to be analysed. Local stiffness reduction is used for the simulation of tooth faults and statistical processes determine the existence of intermittencies. The lowest level of fault detection, the threshold value, is considered and the Mahalanobis squared-distance is calculated for the novelty detection problem.
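The novelty-detection step described here rests on the Mahalanobis squared-distance; a minimal sketch with synthetic features follows, using a chi-squared quantile as a stand-in for the paper's own threshold procedure.

```python
# Sketch: novelty detection with the Mahalanobis squared-distance, the
# statistic the paper uses on gearbox vibration features. The training
# set defines the normal condition; the chi-squared quantile threshold
# is one common choice, standing in for the paper's threshold
# procedure. Data here are synthetic.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 3))            # healthy-condition features
mu = X_train.mean(axis=0)
S_inv = np.linalg.inv(np.cov(X_train, rowvar=False))

def mahalanobis_sq(x):
    d = x - mu
    return d @ S_inv @ d

threshold = chi2.ppf(0.999, df=3)              # novelty threshold
x_new = np.array([4.0, 4.0, 4.0])              # e.g. a faulty-tooth feature
print(mahalanobis_sq(x_new) > threshold)       # True -> flagged as novel
```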
Influence of load by high power on the optical coupler
NASA Astrophysics Data System (ADS)
Bednarek, Lukas; Poboril, Radek; Vanderka, Ales; Hajek, Lukas; Nedoma, Jan; Vasinek, Vladimir
2016-12-01
Nowadays, aging of optical components is a very current topic. Therefore, some investigations are focused on this area, in which the aging of optical components is accelerated by thermal, high-power and gamma loading. This paper presents findings on the influence of loading by a laser with high optical power on the transmission parameters of an optical coupler. The investigated coupler has one input and eight outputs (1x8). Loading by a laser with high optical power is realized using a fiber laser with a cascade configuration of EDFA amplifiers. The output power of the amplifier is approximately 250 mW. The duration of the load ranges from 104 hours to 139 hours. After each load, the input power and the output powers of all branches are measured. The following parameters of the optical coupler are then calculated: the insertion losses of the individual branches, the split ratio, the total loss, the homogeneity of the losses and the cross-talk between different branches. All measurements are performed at wavelengths of 1310 nm and 1550 nm. Individual optical powers are measured 20 times in order to reduce the statistical error of the measurement. After the measurements, the coupler is connected to the amplifier for the next cycle of the load. The paper contains an evaluation of the results for the coupler before and after four cycles of the load.
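The coupler parameters listed in the abstract follow from the measured input and branch output powers via standard dB definitions; the sketch below uses made-up powers, and omits cross-talk, which requires per-branch injection.

```python
# Sketch: the coupler parameters listed in the abstract, computed from
# one measured input power and the eight branch output powers using the
# standard dB definitions. Example powers are made up; the paper's
# cross-talk measurement needs per-branch injection and is omitted.
import numpy as np

def coupler_params(p_in_mw, p_out_mw):
    p_out_mw = np.asarray(p_out_mw, dtype=float)
    il = 10 * np.log10(p_in_mw / p_out_mw)                # insertion loss per branch [dB]
    total_loss = 10 * np.log10(p_in_mw / p_out_mw.sum())  # total loss [dB]
    split_ratio = p_out_mw / p_out_mw.sum() * 100         # split ratio [%]
    homogeneity = il.max() - il.min()                     # spread of losses [dB]
    return il, total_loss, split_ratio, homogeneity

il, tl, sr, hom = coupler_params(10.0, [1.1, 1.0, 1.2, 0.9, 1.0, 1.1, 1.0, 0.95])
print(np.round(il, 2), round(tl, 2), round(hom, 2))
```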
Spatial evolution of laser filaments in turbulent air
NASA Astrophysics Data System (ADS)
Zeng, Tao; Zhu, Shiping; Zhou, Shengling; He, Yan
2018-04-01
In this study, the spatial evolution properties of laser filament clusters in turbulent air were evaluated using numerical simulations. Various statistical parameters were calculated, such as the percolation probability, filling factor, and average cluster size. The results indicate that turbulence-induced multi-filamentation can be described as a new phase transition universality class. In addition, during this process, the relationship between the average cluster size and filling factor could be fit by a power function. Our results are valuable for applications involving filamentation that can be influenced by the geometrical features of multiple filaments.
A statistical formulation of one-dimensional electron fluid turbulence
NASA Technical Reports Server (NTRS)
Fyfe, D.; Montgomery, D.
1977-01-01
A one-dimensional electron fluid model is investigated using the mathematical methods of modern fluid turbulence theory. Non-dissipative equilibrium canonical distributions are determined in a phase space whose co-ordinates are the real and imaginary parts of the Fourier coefficients for the field variables. Spectral densities are calculated, yielding a wavenumber electric field energy spectrum proportional to k to the negative second power for large wavenumbers. The equations of motion are numerically integrated and the resulting spectra are found to compare well with the theoretical predictions.
Relationship of the actual thick intraocular lens optic to the thin lens equivalent.
Holladay, J T; Maverick, K J
1998-09-01
To theoretically derive and empirically validate the relationship between the actual thick intraocular lens and the thin lens equivalent. Included in the study were 12 consecutive adult patients ranging in age from 54 to 84 years (mean +/- SD, 73.5 +/- 9.4 years) with best-corrected visual acuity better than 20/40 in each eye. Each patient had bilateral intraocular lens implants of the same style, placed in the same location (bag or sulcus) by the same surgeon. Preoperatively, axial length, keratometry, refraction, and vertex distance were measured. Postoperatively, keratometry, refraction, vertex distance, and the distance from the vertex of the cornea to the anterior vertex of the intraocular lens (AV(PC1)) were measured. Alternatively, the distance (AV(PC1)) was then back-calculated from the vergence formula used for intraocular lens power calculations. The average (+/-SD) of the absolute difference in the two methods was 0.23 +/- 0.18 mm, which would translate to approximately 0.46 diopters. There was no statistical difference between the measured and calculated values; the Pearson product-moment correlation coefficient from linear regression was 0.85 (r2 = .72, F = 56). The average intereye difference was -0.030 mm (SD, 0.141 mm; SEM, 0.043 mm) using the measurement method and +0.124 mm (SD, 0.412 mm; SEM, 0.124 mm) using the calculation method. The relationship between the actual thick intraocular lens and the thin lens equivalent has been determined theoretically and demonstrated empirically. This validation provides the manufacturer and surgeon additional confidence and utility for lens constants used in intraocular lens power calculations.
Estimating usable resources from historical industry data
Cargill, S.M.; Root, D.H.; Bailey, E.H.
1981-01-01
Historical production statistics are used to predict the quantity of remaining usable resources. The commodities considered are mercury, copper and its byproducts gold and silver, and petroleum; the production and discovery data are for the United States. The results of the study indicate that the cumulative return per unit of effort, herein measured as grade of metal ores and discovery rate of recoverable petroleum, is proportional to a negative power of total effort expended, herein measured as total ore mined and total exploratory wells or footage drilled. This power relationship can be extended to some limiting point (a lower ore grade or a maximum number of exploratory wells or footage), and the apparent quantity of available remaining resource at that limit can be calculated. For mercury ore of grades at and above 0.1 percent, the remaining usable resource in the United States is calculated to be 54 million kg (1,567,000 flasks). For copper ore of grades at and above 0.2 percent, the remaining usable copper resource is calculated to be 270 million metric tons (298 million short tons); remaining resources of its by-products gold and silver are calculated to be 3,656 metric tons (118 million troy ounces) and 64,676 metric tons (2,079 million troy ounces), respectively. The undiscovered recoverable crude oil resource in the conterminous United States, at 3 billion feet of additional exploratory drilling, is calculated to be nearly 37.6 billion barrels; the undiscovered recoverable petroleum resource in the Permian basin of western Texas and southeastern New Mexico, at 300 million feet of additional exploratory drilling or 50,000 additional exploratory wells, is calculated to be about 6.2 billion BOE (barrels of oil equivalent).
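The core extrapolation, return per unit of effort falling as a negative power of cumulative effort, amounts to a straight-line fit in log-log space; the sketch below uses invented data purely to illustrate the procedure.

```python
# Sketch: fitting the abstract's core relationship (return per unit of
# effort proportional to a negative power of cumulative effort) as a
# straight line in log-log space, then extrapolating to a limiting
# grade. The data points are invented purely for illustration.
import numpy as np

effort = np.array([1e8, 5e8, 1e9, 5e9, 1e10])  # cumulative ore mined (t)
grade = np.array([2.0, 1.2, 0.9, 0.5, 0.4])    # grade of ore mined (% metal)

b, log_a = np.polyfit(np.log(effort), np.log(grade), 1)
print(f"grade ~ {np.exp(log_a):.3g} * effort^{b:.3f}")

# Total effort at which the grade falls to an assumed 0.2% limit:
effort_at_limit = (0.2 / np.exp(log_a)) ** (1 / b)
print(f"{effort_at_limit:.3g} t of total ore mined at the limiting grade")
```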
NASA Astrophysics Data System (ADS)
Wu, Di; Torres, Elizabeth B.; Jose, Jorge V.
2015-03-01
ASD is a spectrum of neurodevelopmental disorders. The high heterogeneity of the symptoms associated with the disorder impedes efficient diagnosis based on human observations. Recent advances with high-resolution MEMS wearable sensors enable accurate movement measurements that may escape the naked eye, and call for objective metrics to extract physiologically relevant information from the rapidly accumulating data. In this talk we'll discuss the statistical analysis of movement data continuously collected with high-resolution sensors at 240 Hz. We calculated statistical properties of speed fluctuations within the millisecond time range that closely correlate with the subjects' cognitive abilities. We computed the periodicity and synchronicity of the speed fluctuations from their power spectrum and ensemble-averaged two-point cross-correlation function. We built a two-parameter phase space from the temporal statistical analyses of the nearest-neighbor fluctuations that provided a quantitative biomarker separating ASD from normal adult subjects and further classified ASD severity. We also found age-related developmental statistical signatures and potential ASD parental links in our movement dynamics studies. Our results may have direct clinical applications.
van Gelder, P.H.A.J.M.; Nijs, M.
2011-01-01
Decisions about pharmacotherapy are being taken by medical doctors and authorities based on comparative studies on the use of medications. In studies on fertility treatments in particular, the methodological quality is of utmost importance in the application of evidence-based medicine and systematic reviews. Nevertheless, flaws and omissions appear quite regularly in these types of studies. The current study aims to present an overview of some of the typical statistical flaws, illustrated by a number of example studies which have been published in peer-reviewed journals. Based on an investigation of eleven randomly selected studies on fertility treatments with cryopreservation, it appeared that the methodological quality of these studies often did not fulfil the required statistical criteria. The following statistical flaws were identified: flaws in study design, patient selection, and units of analysis or in the definition of the primary endpoints. Other errors could be found in p-value and power calculations or in critical p-value definitions. Proper interpretation of the results and/or use of these study results in a meta-analysis should therefore be conducted with care. PMID:24753877
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute "delivered dose" from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, c-code libraries, and command line-based GPU applications to perform a MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectorylog file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a 'delivered dose' calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
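The gamma index at the heart of this QA workflow can be illustrated in one dimension (clinical tools evaluate full 3D dose grids with optimized searches); the following sketch applies the common 3%/3 mm criteria to a synthetic profile.

```python
# Sketch: the gamma index test the QA tool reports, reduced to 1D for
# clarity (clinical tools evaluate full 3D dose grids with optimized
# searches). For each reference point, gamma is the minimum combined
# dose-difference / distance-to-agreement metric over evaluated points;
# gamma <= 1 counts as a pass. Criteria below are the common 3%/3 mm.
import numpy as np

def gamma_1d(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
    """dose_*: 1D dose arrays on positions x (mm). dd is a fraction of
    the global max reference dose; dta is in mm."""
    dd_abs = dd * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        g2 = ((x - xi) / dta) ** 2 + ((dose_eval - di) / dd_abs) ** 2
        gammas[i] = np.sqrt(g2.min())
    return gammas

x = np.linspace(0, 100, 201)                       # 0.5 mm grid
ref = np.exp(-((x - 50) / 20) ** 2)                # reference profile
ev = np.exp(-((x - 51) / 20) ** 2) * 1.01          # 1 mm shift, +1% dose
g = gamma_1d(ref, ev, x)
print(f"pass rate: {100 * (g <= 1).mean():.1f}%")  # expect ~100% here
```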
Statistical power for detecting trends with applications to seabird monitoring
Hatch, Shyla A.
2003-01-01
Power analysis is helpful in defining goals for ecological monitoring and evaluating the performance of ongoing efforts. I examined detection standards proposed for population monitoring of seabirds using two programs (MONITOR and TRENDS) specially designed for power analysis of trend data. Neither program models within- and among-years components of variance explicitly and independently, thus an error term that incorporates both components is an essential input. Residual variation in seabird counts consisted of day-to-day variation within years and unexplained variation among years in approximately equal parts. The appropriate measure of error for power analysis is the standard error of estimation (S.E.est) from a regression of annual means against year. Replicate counts within years are helpful in minimizing S.E.est but should not be treated as independent samples for estimating power to detect trends. Other issues include a choice of assumptions about variance structure and selection of an exponential or linear model of population change. Seabird count data are characterized by strong correlations between S.D. and mean, thus a constant CV model is appropriate for power calculations. Time series were fit about equally well with exponential or linear models, but log transformation ensures equal variances over time, a basic assumption of regression analysis. Using sample data from seabird monitoring in Alaska, I computed the number of years required (with annual censusing) to detect trends of -1.4% per year (50% decline in 50 years) and -2.7% per year (50% decline in 25 years). At α = 0.05 and a desired power of 0.9, estimated study intervals ranged from 11 to 69 years depending on species, trend, software, and study design. Power to detect a negative trend of 6.7% per year (50% decline in 10 years) is suggested as an alternative standard for seabird monitoring that achieves a reasonable match between statistical and biological significance.
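As a rough illustration of the kind of calculation involved (a sketch, not the MONITOR or TRENDS implementations), the following Python snippet estimates power to detect a log-linear trend by Monte Carlo under a constant-CV error model; the starting abundance, CV, trend, and study length are hypothetical.

```python
import numpy as np
from scipy import stats

def trend_power(years=25, trend=-0.027, cv=0.2, alpha=0.05, n_sims=2000, seed=1):
    """Monte Carlo power to detect an exponential trend by regressing
    log(count) on year, assuming a constant-CV (lognormal) error model."""
    rng = np.random.default_rng(seed)
    t = np.arange(years)
    sigma = np.sqrt(np.log(1 + cv**2))   # lognormal sigma matching the CV
    hits = 0
    for _ in range(n_sims):
        counts = 1000 * np.exp(trend * t) * rng.lognormal(0.0, sigma, years)
        res = stats.linregress(t, np.log(counts))
        # one-tailed test for a negative slope
        if res.slope < 0 and res.pvalue / 2 < alpha:
            hits += 1
    return hits / n_sims

print(trend_power())  # power to detect -2.7%/yr over 25 years of annual censusing
```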
The Statistical Power of Planned Comparisons.
ERIC Educational Resources Information Center
Benton, Roberta L.
Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Moros, E
Purpose: To investigate the relationship between quantitative image features (i.e. radiomics) and statistical fluctuations (i.e. electronic noise) in clinical Computed Tomography (CT) using the standardized American College of Radiology (ACR) CT accreditation phantom and patient images. Methods: Three levels of uncorrelated Gaussian noise were added to CT images of the phantom and of patients (20) acquired in static mode and respiratory tracking mode. We calculated the noise-power spectrum (NPS) of the original CT images of the phantom, and of the phantom images with added Gaussian noise with means of 50, 80, and 120 HU. Concurrently, on patient images (original and noise-added images), image features were calculated: 14 shape, 19 intensity (1st order statistics from intensity volume histograms), 18 GLCM features (2nd order statistics from grey level co-occurrence matrices) and 11 RLM features (2nd order statistics from run-length matrices). These features provide the underlying structural information of the images. The GLCM (size 128x128) was calculated with a step size of 1 voxel in 13 directions and averaged. RLM feature calculation was performed in 13 directions with grey levels binned into 128 levels. Results: Adding the electronic noise to the images modified the quality of the NPS, shifting the noise from mostly correlated to mostly uncorrelated voxels. The dramatic increase in noise texture did not affect image structure/contours significantly for patient images. However, it did affect the image features and textures significantly, as demonstrated by GLCM differences. Conclusion: Image features are sensitive to acquisition factors (simulated by adding uncorrelated Gaussian noise). We speculate that image features will be more difficult to detect in the presence of electronic noise (an uncorrelated noise contributor) or, for that matter, any other highly correlated image noise. This work focuses on the effect of electronic, uncorrelated noise; future work shall examine the influence of changes in quantum noise on the features. J. Oliver was supported by NSF FGLSAMP BD award HRD #1139850 and the McKnight Doctoral Fellowship.
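For readers wanting to reproduce the flavor of this experiment, a minimal 2D sketch follows, assuming scikit-image's graycomatrix/graycoprops. It adds uncorrelated Gaussian noise of increasing standard deviation to a hypothetical image and tracks a single GLCM feature; the study itself used 3D volumes, 13 directions, and many more features.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
# Hypothetical CT slice in HU; a real study would load DICOM data.
ct = rng.normal(0, 30, size=(128, 128))

def glcm_contrast(img, noise_sd=0.0, levels=128):
    """Add uncorrelated Gaussian noise, quantize to `levels` grey levels,
    and return the mean GLCM contrast over four in-plane directions."""
    noisy = img + rng.normal(0.0, noise_sd, img.shape)
    q = np.digitize(noisy, np.linspace(noisy.min(), noisy.max(), levels - 1))
    glcm = graycomatrix(q.astype(np.uint8), distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=levels, symmetric=True, normed=True)
    return graycoprops(glcm, 'contrast').mean()

for sd in (0, 50, 80, 120):   # noise levels mirroring the abstract
    print(sd, glcm_contrast(ct, noise_sd=sd))
```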
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
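The standard normal-approximation formula for comparing two means follows directly from these components; a minimal sketch, assuming a two-sided test with equal group sizes and variances:

```python
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided comparison of two means:
    n = 2 * (sd * (z_{1-alpha/2} + z_{power}) / delta)^2."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (sd * z / delta) ** 2)

# Detect a 10-unit difference with SD 20 at alpha = 0.05 and 80% power:
print(n_per_group(delta=10, sd=20))  # -> 63 per group
```

As the abstract notes, halving the detectable difference delta quadruples the required n, and raising power or lowering alpha increases it further.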
Austin, Peter C
2018-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest.
HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.
Song, Chi; Tseng, George C
2014-01-01
Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (rth ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and specify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculation and simulation show better performance of rOP compared to classical Fisher's method, Stouffer's method, minimum p-value method and maximum p-value method under the focused hypothesis setting. Theoretically, rOP is found connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate rOP as a more generalizable, robust and sensitive statistical framework to detect disease-related markers.
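Under the null hypothesis the study p-values are i.i.d. Uniform(0,1), so the r-th ordered p-value follows a Beta(r, n − r + 1) distribution. A minimal sketch of the resulting test follows; it is illustrative only and omits the paper's data-driven estimation of r and the one-sided concordance correction.

```python
import numpy as np
from scipy.stats import beta

def rop_test(pvals, r):
    """rOP meta-analysis: the test statistic is the r-th smallest p-value
    across n studies; under H0 it is Beta(r, n - r + 1) distributed."""
    p = np.sort(np.asarray(pvals))
    n = len(p)
    stat = p[r - 1]                        # r-th ordered p-value
    meta_p = beta.cdf(stat, r, n - r + 1)  # left tail: small rOP = evidence of DE
    return stat, meta_p

# e.g. require DE "in the majority" of 5 studies: r = 3
print(rop_test([0.001, 0.004, 0.02, 0.30, 0.64], r=3))
```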
Minimizing the IOL power error induced by keratometric power.
Camps, Vicente J; Piñero, David P; de Fez, Dolores; Mateo, Verónica
2013-07-01
To evaluate theoretically in normal eyes the influence on IOL power (PIOL) calculation of the use of a keratometric index (nk), and to analyze and preliminarily validate the use of an adjusted keratometric index (nkadj) in the IOL power calculation (PIOLadj). A model of variable keratometric index (nkadj) for corneal power calculation (Pc) was used for IOL power calculation (named PIOLadj). Theoretical differences (ΔPIOL) between the new proposed formula (PIOLadj) and the IOL power obtained through Gaussian optics were determined using Gullstrand and Le Grand eye models. The proposed new formula for IOL power calculation (PIOLadj) was prevalidated clinically in 81 eyes of 81 candidates for corneal refractive surgery and compared with the Haigis, HofferQ, Holladay, and SRK/T formulas. A theoretical PIOL underestimation greater than 0.5 diopters was present in most cases when nk = 1.3375 was used. If nkadj was used for Pc calculation, a maximal calculated error in ΔPIOL of ±0.5 diopters at the corneal vertex was observed in most cases, independently of the eye model, r1c, and the desired postoperative refraction. The use of nkadj in IOL power calculation (PIOLadj) could be valid with effective lens position optimization not dependent on the corneal power. The use of a single value of nk for Pc calculation can lead to significant errors in PIOL calculation that may explain some IOL power overestimations with conventional formulas. These inaccuracies can be minimized by using the new PIOLadj based on the algorithm of nkadj.
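The sensitivity described here follows from the standard single-index conversion Pc = (nk − 1)/r1c, with r1c the anterior corneal radius. A minimal sketch with an illustrative radius; the lower index value below is only a stand-in, not the paper's nkadj algorithm:

```python
def corneal_power(nk, r1c_mm):
    """Keratometric corneal power (diopters) from the anterior corneal
    radius via the single-index convention P_c = (n_k - 1) / r_1c."""
    return (nk - 1.0) / (r1c_mm / 1000.0)

r1c = 7.8  # mm, illustrative anterior corneal radius
for nk in (1.3375, 1.3300):  # classical index vs a lower, adjusted-style index
    print(nk, round(corneal_power(nk, r1c), 2), "D")
# A change of ~0.0075 in n_k shifts P_c by roughly 1 D here, which
# propagates directly into the IOL power estimate.
```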
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four…
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
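A minimal sketch of the weighted-sum-of-maximum-likelihoods idea, assuming per-order maximum lod scores and posterior probabilities of the marker orders are already in hand (the values below are illustrative, not the paper's data):

```python
import numpy as np

def weighted_lod(max_lods, order_posteriors):
    """Combine per-marker-order maximum lod scores into a single weighted
    lod: log10 of the posterior-weighted sum of maximized likelihood
    ratios, one term per plausible marker order."""
    max_lods = np.asarray(max_lods, dtype=float)
    w = np.asarray(order_posteriors, dtype=float)
    w = w / w.sum()                       # normalize posterior weights
    return np.log10(np.sum(w * 10.0 ** max_lods))

# Three plausible marker orders with posteriors 0.6, 0.3, 0.1:
print(weighted_lod([3.2, 1.1, 2.5], [0.6, 0.3, 0.1]))
```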
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Electric Power Annual presents a summary of electric utility statistics at national, regional and State levels. The objective of the publication is to provide industry decisionmakers, government policymakers, analysts and the general public with historical data that may be used in understanding US electricity markets. The Electric Power Annual is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels; Energy Information Administration (EIA); US Department of Energy. "The US Electric Power Industry at a Glance" section presents a profile of the electric power industry ownership and performance, and a review of key statistics for the year. Subsequent sections present data on generating capability, including proposed capability additions; net generation; fossil-fuel statistics; retail sales; revenue; financial statistics; environmental statistics; electric power transactions; demand-side management; and nonutility power producers. In addition, the appendices provide supplemental data on major disturbances and unusual occurrences in US electricity power systems. Each section contains related text and tables and refers the reader to the appropriate publication that contains more detailed data on the subject matter. Monetary values in this publication are expressed in nominal terms.
Environment-based pin-power reconstruction method for homogeneous core calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-07-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assembly calculations relying on a fundamental mode approach are used to generate cross-section libraries for PWR core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method in every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
NASA Astrophysics Data System (ADS)
Pardo-Iguzquiza, Eulogio; Rodríguez-Tovar, Francisco J.
2011-12-01
One important handicap when working with stratigraphic sequences is the discontinuous character of the sedimentary record, especially relevant in cyclostratigraphic analysis. Uneven palaeoclimatic/palaeoceanographic time series are common, and their cyclostratigraphic analysis is comparatively difficult because most spectral methodologies are appropriate only when working with even sampling. As a means to solve this problem, a program for calculating the smoothed Lomb-Scargle periodogram and cross-periodogram, which additionally evaluates the statistical confidence of the estimated power spectrum through a Monte Carlo procedure (the permutation test), has been developed. The spectral analysis of a short uneven time series calls for assessment of the statistical significance of the spectral peaks, since a periodogram can always be calculated but the main challenge resides in identifying true spectral features. To demonstrate the effectiveness of this program, two case studies are presented: one deals with synthetic data, the other with palaeoceanographic/palaeoclimatic proxies. From a simulated time series of 500 data points, two uneven time series (with 100 and 25 points) were generated by selecting data at random. Comparative analysis between the power spectra from the simulated series and from the two uneven time series demonstrates the usefulness of the smoothed Lomb-Scargle periodogram for uneven sequences, making it possible to distinguish between statistically significant and spurious spectral peaks. Fragmentary time series of Cd/Ca ratios and δ18O from core AII107-131 of SPECMAP were analysed as a real case study. The efficiency of the direct and cross Lomb-Scargle periodogram in recognizing Milankovitch and sub-Milankovitch signals related to palaeoclimatic/palaeoceanographic changes is demonstrated. As implemented, the Lomb-Scargle periodogram may be applied to any palaeoclimatic/palaeoceanographic proxies, including those usually recovered from contourites, and it holds special interest in the context of centennial- to millennial-scale climatic changes affecting contouritic currents.
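A minimal sketch of the core idea, using SciPy's lombscargle for the (unsmoothed) periodogram of an unevenly sampled series and a permutation test for peak significance; the signal, sampling pattern, and threshold level are illustrative, not the published program:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)
# Uneven sampling: keep 100 of 500 points of a noisy 0.02 cycles/unit signal.
t_full = np.arange(500.0)
t = np.sort(rng.choice(t_full, size=100, replace=False))
y = np.sin(2 * np.pi * 0.02 * t) + rng.normal(0, 1, t.size)
y = y - y.mean()

freqs = 2 * np.pi * np.linspace(0.001, 0.1, 200)  # angular frequencies
pgram = lombscargle(t, y, freqs)

# Permutation test: shuffle y over the fixed sampling times to build the
# null distribution of the maximum peak, then read off a 95% threshold.
null_max = np.array([lombscargle(t, rng.permutation(y), freqs).max()
                     for _ in range(500)])
threshold = np.quantile(null_max, 0.95)
print("significant frequencies:", freqs[pgram > threshold] / (2 * np.pi))
```

Permuting the values while keeping the observation times fixed preserves the sampling pattern, which is exactly what makes the test appropriate for uneven series.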
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-01-01
Context: Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. Objective: To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Data Sources: Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. Study Selection: The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Data Extraction: Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Results: Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, –0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. Conclusions: Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up. PMID:23016017
Wang, Muding; Qiu, Wusi; Qiu, Fang; Mo, Yinan; Fan, Wenhui
2015-03-16
The Injury Severity Score (ISS) and the New Injury Severity Score (NISS) are widely used for anatomic severity assessments after trauma. We present here the Tangent Injury Severity Score (TISS), which transforms the Abbreviated Injury Scale (AIS) into a predictor of mortality. The TISS is defined as the sum, over a patient's three most severe AIS injuries regardless of body region, of 18.67 × tan(AIS/6)^3.04. TISS values were calculated for every patient in two large independent data sets: 3,908 and 4,171 patients treated during a 6-year period at level-3 first-class comprehensive hospitals: the Affiliated Hospital of Hangzhou Normal University and Fengtian Hospital Affiliated to Shenyang Medical College, China. The power of TISS to predict mortality was compared with previously calculated NISS values for the same patients in each data set. The TISS is more predictive of survival than the NISS (Hangzhou: receiver operating characteristic (ROC): NISS = 0.929, TISS = 0.949; p = 0.002; Shenyang: ROC: NISS = 0.924, TISS = 0.942; p = 0.008). Moreover, TISS provides a better fit throughout its entire range of prediction (Hosmer-Lemeshow statistic for Hangzhou: NISS = 29.71, p < 0.001; TISS = 19.59, p = 0.003; Hosmer-Lemeshow statistic for Shenyang: NISS = 33.49, p < 0.001; TISS = 21.19, p = 0.002). The TISS shows more accurate prediction of prognosis and a linear relation to mortality. The TISS might be a better injury scoring tool with simple computation.
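Taking the abstract's formula at face value (with the AIS/6 argument read in radians, an assumption on our part), a minimal sketch of the TISS computation:

```python
import math

def tiss(ais_scores):
    """Tangent Injury Severity Score: sum of 18.67 * tan(AIS/6)**3.04
    over the three most severe AIS injuries, regardless of body region.
    The AIS/6 argument is treated as radians here (an assumption)."""
    worst3 = sorted(ais_scores, reverse=True)[:3]
    return sum(18.67 * math.tan(a / 6.0) ** 3.04 for a in worst3)

# A patient with four coded injuries; only AIS 5, 4 and 3 contribute.
print(round(tiss([5, 4, 3, 2]), 1))
```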
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Lihua; Cui, Jingkun; Tang, Fengjiao
Purpose: Studies of the association between ataxia telangiectasia-mutated (ATM) gene polymorphisms and acute radiation injuries are often small in sample size, and the results are inconsistent. We conducted the first meta-analysis to provide a systematic review of published findings. Methods and Materials: Publications were identified by searching PubMed up to April 25, 2014. Primary meta-analysis was performed for all acute radiation injuries, and subgroup meta-analyses were based on clinical endpoint. The influence of sample size and radiation injury incidence on genetic effects was estimated in sensitivity analyses. Power calculations were also conducted. Results: The meta-analysis was conducted on the ATM polymorphism rs1801516, including 5 studies with 1588 participants. For all studies, the cut-off for differentiating cases from controls was grade 2 acute radiation injuries. The primary meta-analysis showed a significant association with overall acute radiation injuries (allelic model: odds ratio = 1.33, 95% confidence interval: 1.04-1.71). Subgroup analyses detected an association between the rs1801516 polymorphism and a significant increase in urinary and lower gastrointestinal injuries and an increase in skin injury that was not statistically significant. There was no between-study heterogeneity in any meta-analyses. In the sensitivity analyses, small studies did not show larger effects than large studies. In addition, studies with high incidence of acute radiation injuries showed larger effects than studies with low incidence. Power calculations revealed that the statistical power of the primary meta-analysis was borderline, whereas there was adequate power for the subgroup analysis of studies with high incidence of acute radiation injuries. Conclusions: Our meta-analysis showed a consistency of the results from the overall and subgroup analyses. We also showed that the genetic effect of the rs1801516 polymorphism on acute radiation injuries was dependent on the incidence of the injury. These results support the evidence of an association between the rs1801516 polymorphism and acute radiation injuries, encouraging further research of this topic.
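A generic inverse-variance fixed-effect pooling sketch of the kind used in such meta-analyses; the per-study odds ratios and confidence intervals below are hypothetical, not the study data, and this is not necessarily the authors' exact model:

```python
import numpy as np
from scipy.stats import norm

def fixed_effect_or(odds_ratios, ci_lowers, ci_uppers):
    """Inverse-variance fixed-effect pooling of study odds ratios.
    Each study's SE(log OR) is recovered from its 95% CI width."""
    log_or = np.log(odds_ratios)
    se = (np.log(ci_uppers) - np.log(ci_lowers)) / (2 * 1.96)
    w = 1.0 / se**2                       # inverse-variance weights
    pooled = np.sum(w * log_or) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    z = pooled / pooled_se
    p = 2 * norm.sf(abs(z))
    ci = (np.exp(pooled - 1.96 * pooled_se), np.exp(pooled + 1.96 * pooled_se))
    return np.exp(pooled), ci, p

# Hypothetical per-study ORs with 95% CIs:
print(fixed_effect_or(np.array([1.4, 1.2, 1.5]),
                      np.array([0.9, 0.8, 1.0]),
                      np.array([2.2, 1.8, 2.3])))
```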
Statistical analyses to support guidelines for marine avian sampling. Final report
Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris
2012-01-01
Interest in development of offshore renewable energy facilities has led to a need for high-quality, statistically robust information on marine wildlife distributions. A practical approach is described to estimate the amount of sampling effort required to have sufficient statistical power to identify species-specific "hotspots" and "coldspots" of marine bird abundance and occurrence in an offshore environment divided into discrete spatial units (e.g., lease blocks), where "hotspots" and "coldspots" are defined relative to a reference (e.g., regional) mean abundance and/or occurrence probability for each species of interest. For example, a location with average abundance or occurrence that is three times larger than the mean (3x effect size) could be defined as a "hotspot," and a location that is three times smaller than the mean (1/3x effect size) as a "coldspot." The choice of the effect size used to define hotspots and coldspots will generally depend on a combination of ecological and regulatory considerations. A method is also developed for testing the statistical significance of possible hotspots and coldspots. Both methods are illustrated with historical seabird survey data from the USGS Avian Compendium Database. Our approach consists of five main components:
1. A review of the primary scientific literature on statistical modeling of animal group size and avian count data to develop a candidate set of statistical distributions that have been used or may be useful to model seabird counts.
2. Statistical power curves for one-sample, one-tailed Monte Carlo significance tests of differences of observed small-sample means from a specified reference distribution (see the sketch after this list). These curves show the power to detect "hotspots" or "coldspots" of occurrence and abundance at a range of effect sizes, given assumptions which we discuss.
3. A model selection procedure, based on maximum likelihood fits of models in the candidate set, to determine an appropriate statistical distribution to describe counts of a given species in a particular region and season.
4. An application of this technique to a large database of historical at-sea seabird survey data, identifying appropriate statistical distributions for modeling a variety of species and allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and to map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of the Interior, Bureau of Ocean Energy Management (BOEM).
5. Because our definition of hotspots and coldspots does not explicitly include variability over time, an examination of the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series; these analyses are used to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability.
We conclude by presenting a schematic "decision tree" showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
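Component 2 can be sketched as a one-sample, one-tailed Monte Carlo power estimate against a negative binomial reference distribution, a common choice for overdispersed seabird counts; the reference mean, dispersion, and effect size below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

def hotspot_power(ref_mean=2.0, k=0.5, effect=3.0, n=20,
                  alpha=0.05, n_sims=1000, n_null=2000):
    """Power of a one-sample, one-tailed Monte Carlo test that a lease
    block's mean count exceeds a negative binomial reference mean.
    k is the NB dispersion parameter; `effect` scales the hotspot mean."""
    p_ref = k / (k + ref_mean)            # NB parameterized via its mean
    null_means = rng.negative_binomial(k, p_ref, (n_null, n)).mean(axis=1)
    crit = np.quantile(null_means, 1 - alpha)   # Monte Carlo critical value
    p_hot = k / (k + effect * ref_mean)
    hot_means = rng.negative_binomial(k, p_hot, (n_sims, n)).mean(axis=1)
    return np.mean(hot_means > crit)

print(hotspot_power())  # power to detect a 3x hotspot from n=20 counts
```

Sweeping `n` produces the power curves described in component 2.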
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
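Many of these routines are one-liners in standard software; for example, the sample size for a two-sample t test via statsmodels, with a medium standardized effect size assumed purely for illustration:

```python
from statsmodels.stats.power import TTestIndPower

# Cohen's d = (mean1 - mean2) / pooled SD; 0.5 is a "medium" effect.
analysis = TTestIndPower()
n1 = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                          ratio=1.0, alternative='two-sided')
print(round(n1))  # subjects needed in group 1 (~64; total ~128)
```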
A New Algorithm to Optimize Maximal Information Coefficient
Luo, Feng; Yuan, Zheming
2016-01-01
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that the ChiMIC algorithm maintains the same MIC values for noiseless functional relationships, but gives much smaller MIC values for independent variables. For noisy functional relationships, the ChiMIC algorithm reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical powers of MIC calculated by ChiMIC are higher than those calculated by ApproxMaxMI. Moreover, the computational costs of ChiMIC are much lower than those of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
The TT, TB, EB and BB correlations in anisotropic inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Xingang; Emami, Razieh; Firouzjahi, Hassan
2014-08-01
The ongoing and future experiments will measure the B-mode from different sky coverage and frequency bands, with the potential to reveal non-trivial features in the polarization map. In this work we study the TT, TB, EB and BB correlations associated with the B-mode polarization of the CMB map in models of charged anisotropic inflation. The model contains a chaotic-type large field complex inflaton which is charged under the U(1) gauge field. We calculate the statistical anisotropies generated in the power spectra of the curvature perturbation, the tensor perturbation and their cross-correlation. It is shown that the asymmetry in the tensor power spectrum is a very sensitive probe of the gauge coupling. While the level of statistical anisotropy in the temperature power spectrum can be small and satisfy the observational bounds, the interactions from the gauge coupling can induce large directional dependence in tensor modes. This will leave interesting anisotropic fingerprints in various correlations involving the B-mode polarization, such as the TB cross-correlation, which may be detected in upcoming Planck polarization data. In addition, the TT correlation receives an anisotropic contribution from the tensor sector which naturally decays after l ≳ 100. We expect the mechanism of using the tensor sector to induce asymmetry at low l to be generic, and it can also be applied to address other low-l CMB anomalies.
Selimkhanov, Jangir; Thompson, W. Clayton; Guo, Juen; Hall, Kevin D.; Musante, Cynthia J.
2017-01-01
The design of well-powered in vivo preclinical studies is a key element in building knowledge of disease physiology for the purpose of identifying and effectively testing potential anti-obesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic endpoints such as food intake and body composition. This, combined with limitations inherent in the measurement of certain endpoints, presents challenges to study design that can have significant consequences for an anti-obesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of key metabolic endpoints. To demonstrate how conclusions can change as a function of study size, we show that a simulated pre-clinical study properly powered for one endpoint may lead to false conclusions based on secondary endpoints. We then propose guidelines for endpoint selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design. PMID:28392555
An asymptotic analysis of the logrank test.
Strawderman, R L
1997-01-01
Asymptotic expansions for the null distribution of the logrank statistic and its distribution under local proportional hazards alternatives are developed in the case of iid observations. The results, which are derived from the work of Gu (1992) and Taniguchi (1992), are easy to interpret, and provide some theoretical justification for many behavioral characteristics of the logrank test that have been previously observed in simulation studies. We focus primarily upon (i) the inadequacy of the usual normal approximation under treatment group imbalance; and, (ii) the effects of treatment group imbalance on power and sample size calculations. A simple transformation of the logrank statistic is also derived based on results in Konishi (1991) and is found to substantially improve the standard normal approximation to its distribution under the null hypothesis of no survival difference when there is treatment group imbalance.
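The dependence of sample size on treatment group imbalance can be made concrete with Schoenfeld's classic approximation for the number of events required by the logrank test; this is a standard result used for illustration here, not the paper's asymptotic expansion:

```python
import math
from scipy.stats import norm

def required_events(hr, alloc_frac, alpha=0.05, power=0.80):
    """Schoenfeld's approximation for the number of events needed by the
    logrank test: d = (z_{1-a/2} + z_pow)^2 / (p(1-p) (log HR)^2),
    where p is the fraction of subjects allocated to one group."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(z**2 / (alloc_frac * (1 - alloc_frac) * math.log(hr)**2))

for p in (0.5, 0.7, 0.9):   # growing treatment group imbalance
    print(p, required_events(hr=0.7, alloc_frac=p))
# 1:1 allocation needs ~247 events; 9:1 allocation needs ~686 for the
# same hazard ratio, showing how imbalance erodes power.
```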
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.
Szucs, Denes; Ioannidis, John P A
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sung, Yixing; Adams, Brian M.; Witkowski, Walter R.
2011-04-01
The CASL Level 2 Milestone VUQ.Y1.03, 'Enable statistical sensitivity and UQ demonstrations for VERA,' was successfully completed in March 2011. The VUQ focus area led this effort, in close partnership with AMA, and with support from VRI. DAKOTA was coupled to VIPRE-W thermal-hydraulics simulations representing reactors of interest to address crud-related challenge problems in order to understand the sensitivity and uncertainty in simulation outputs with respect to uncertain operating and model form parameters. This report summarizes work coupling the software tools, characterizing uncertainties, selecting sensitivity and uncertainty quantification algorithms, and analyzing the results of iterative studies. These demonstration studies focused on sensitivity and uncertainty of mass evaporation rate calculated by VIPRE-W, a key predictor for crud-induced power shift (CIPS).
Continuation Power Flow with Variable-Step Variable-Order Nonlinear Predictor
NASA Astrophysics Data System (ADS)
Kojima, Takayuki; Mori, Hiroyuki
This paper proposes a new continuation power flow calculation method for drawing a P-V curve in power systems. The continuation power flow successively evaluates power flow solutions as a specified parameter of the power flow problem, such as the system loading, is varied. In recent years, power system operators have become quite concerned with voltage instability due to the appearance of deregulated and competitive power markets. The continuation power flow calculation plays an important role in understanding load characteristics in the sense of static voltage instability. In this paper, a new continuation power flow with a variable-step variable-order (VSVO) nonlinear predictor is proposed. The proposed method evaluates optimal predicted points conforming to the shape of P-V curves. The proposed method is successfully applied to the IEEE 118-bus and IEEE 300-bus systems.
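A minimal sketch of a polynomial, variable-order predictor step of the kind used in continuation power flow; the corrector (an actual power flow solve) and the step/order control logic are omitted, and the nose-curve points are hypothetical, not the authors' scheme:

```python
import numpy as np

def predict_next(lams, vs, order, step):
    """Variable-order nonlinear predictor for continuation power flow:
    fit a polynomial of the given order through the most recent solution
    points (lambda, V) and extrapolate to lambda + step."""
    k = order + 1                        # number of points the fit needs
    coef = np.polyfit(lams[-k:], vs[-k:], order)
    lam_next = lams[-1] + step
    return lam_next, np.polyval(coef, lam_next)

# Points on a schematic nose curve V(lambda) (hypothetical values):
lams = [0.0, 0.2, 0.4, 0.6]
vs = [1.00, 0.98, 0.94, 0.87]
print(predict_next(lams, vs, order=2, step=0.1))
```

In a full implementation the predicted point seeds the corrector, and the step and order are adapted from the corrector's convergence behavior.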
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
NASA Astrophysics Data System (ADS)
Geil, Paul M.; Mutch, Simon J.; Poole, Gregory B.; Angel, Paul W.; Duffy, Alan R.; Mesinger, Andrei; Wyithe, J. Stuart B.
2016-10-01
We use the Dark-ages, Reionization And Galaxy formation Observables from Numerical Simulations (DRAGONS) framework to investigate the effect of galaxy formation physics on the morphology and statistics of ionized hydrogen (H II) regions during the Epoch of Reionization (EoR). DRAGONS self-consistently couples a semi-analytic galaxy formation model with the inhomogeneous ionizing UV background, and can therefore be used to study the dependence of morphology and statistics of reionization on feedback phenomena of the ionizing source galaxy population. Changes in galaxy formation physics modify the sizes of H II regions and the amplitude and shape of 21-cm power spectra. Of the galaxy physics investigated, we find that supernova feedback plays the most important role in reionization, with H II regions up to ≈20 per cent smaller and a fractional difference in the amplitude of power spectra of up to ≈17 per cent at fixed ionized fraction in the absence of this feedback. We compare our galaxy formation-based reionization models with past calculations that assume constant stellar-to-halo mass ratios and find that with the correct choice of minimum halo mass, such models can mimic the predicted reionization morphology. Reionization morphology at fixed neutral fraction is therefore not uniquely determined by the details of galaxy formation, but is sensitive to the mass of the haloes hosting the bulk of the ionizing sources. Simple EoR parametrizations are therefore accurate predictors of reionization statistics. However, a complete understanding of reionization using future 21-cm observations will require interpretation with realistic galaxy formation models, in combination with other observations.
Genetic dissection of main and epistatic effects of QTL based on augmented triple test cross design
Zhang, Zheng; Dai, Zhijun; Chen, Yuan; Yuan, Xiong; Yuan, Zheming; Tang, Wenbang; Li, Lanzhi; Hu, Zhongli
2017-01-01
The use of heterosis has considerably increased the productivity of many crops; however, the biological mechanism underpinning the technique remains elusive. The North Carolina design III (NCIII) and the triple test cross (TTC) are powerful and popular genetic mating designs that can be used to decipher the genetic basis of heterosis. However, when using the NCIII design with the present quantitative trait locus (QTL) mapping method, if epistasis exists, the estimated additive or dominant effects are confounded with epistatic effects. Here, we propose a two-step approach to dissect all genetic effects of QTL and digenic interactions on a whole genome without sacrificing statistical power, based on an augmented TTC (aTTC) design. Because the aTTC design has more transformation combinations than the NCIII and TTC designs, it greatly enriches QTL mapping for studying heterosis. When the basic population comprises recombinant inbred lines (RIL), we can use the same materials as in the NCIII design for aTTC-design QTL mapping with transformation combinations Z1, Z2, and Z4 to obtain the genetic effects of QTL and digenic interactions. Compared with the RIL-based TTC design, the RIL-based aTTC design saves the time, money, and labor of crossing the basic population with F1. Several Monte Carlo simulation studies were carried out to confirm the proposed approach; the present genetic parameters could be identified with high statistical power, precision, and calculation speed, even at small sample size or low heritability. Additionally, two elite rice hybrid datasets for nine agronomic traits were analyzed as real data. We dissected the genetic effects and calculated the dominance degree of each QTL and digenic interaction. Real mapping results suggested that the dominance degrees in Z2, which mainly characterize heterosis, showed overdominance and dominance for QTL and digenic interactions. Dominance and overdominance were the major genetic foundations of heterosis in rice. PMID:29240818
New powerful statistics for alignment-free sequence comparison under a pattern transfer model.
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S; Sun, Fengzhu
2011-09-07
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2* and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2* and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. Copyright © 2011 Elsevier Ltd. All rights reserved.
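A minimal sketch of the word-count statistic D2 and of the local-pair summation idea; non-overlapping windows are used here for brevity, whereas the paper's statistics scan all local pairs and also cover the centered and standardized variants:

```python
from collections import Counter

def d2(seq_x, seq_y, k=5):
    """Alignment-free D2 statistic: sum over all k-mers w of the product
    of occurrence counts X_w * Y_w in the two sequences."""
    cx = Counter(seq_x[i:i+k] for i in range(len(seq_x) - k + 1))
    cy = Counter(seq_y[i:i+k] for i in range(len(seq_y) - k + 1))
    return sum(cx[w] * cy[w] for w in cx.keys() & cy.keys())

def d2_local_sum(seq_x, seq_y, k=5, window=50):
    """Local variant sketch: sum D2 over pairs of local windows of fixed
    length (here non-overlapping, for tractability)."""
    wx = [seq_x[i:i+window] for i in range(0, len(seq_x) - window + 1, window)]
    wy = [seq_y[j:j+window] for j in range(0, len(seq_y) - window + 1, window)]
    return sum(d2(a, b, k) for a in wx for b in wy)

print(d2("ACGTACGTGACG", "TTACGTACGCAA", k=3))
```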
Turbulence in planetary occultations. IV - Power spectra of phase and intensity fluctuations
NASA Technical Reports Server (NTRS)
Haugstad, B. S.
1979-01-01
Power spectra of phase and intensity scintillations during occultation by turbulent planetary atmospheres are significantly affected by the inhomogeneous background upon which the turbulence is superimposed. Such coupling is particularly pronounced in the intensity, where there is also a marked difference in spectral shape between a central and grazing occultation. While the former has its structural features smoothed by coupling to the inhomogeneous background, such features are enhanced in the latter. Indeed, the latter power spectrum peaks around the characteristic frequency that is determined by the size of the free-space Fresnel zone and the ray velocity in the atmosphere; at higher frequencies strong fringes develop in the power spectrum. A confrontation between the theoretical scintillation spectra computed here and those calculated from the Mariner 5 Venus mission by Woo et al. (1974) is inconclusive, mainly because of insufficient statistical resolution. Phase and/or intensity power spectra computed from occultation data may be used to deduce characteristics of the turbulence and to distinguish turbulence from other perturbations in the refractive index. Such determinations are facilitated if observations are made at two or more frequencies (radio occultation) or in two or more colors (stellar occultation).
A robust power spectrum split cancellation-based spectrum sensing method for cognitive radio systems
NASA Astrophysics Data System (ADS)
Qi, Pei-Han; Li, Zan; Si, Jiang-Bo; Gao, Rui
2014-12-01
Spectrum sensing is an essential component to realize the cognitive radio, and the requirement for real-time spectrum sensing in the case of lacking prior information, fading channel, and noise uncertainty, indeed poses a major challenge to the classical spectrum sensing algorithms. Based on the stochastic properties of scalar transformation of power spectral density (PSD), a novel spectrum sensing algorithm, referred to as the power spectral density split cancellation method (PSC), is proposed in this paper. The PSC makes use of a scalar value as a test statistic, which is the ratio of each subband power to the full band power. Besides, by exploiting the asymptotic normality and independence of the Fourier transform, the distribution of the ratio and the mathematical expressions for the probabilities of false alarm and detection in different channel models are derived. Further, the exact closed-form expression of the decision threshold is calculated in accordance with the Neyman-Pearson criterion. Analytical and simulation results show that the PSC is invulnerable to noise uncertainty, and can achieve excellent detection performance without prior knowledge in additive white Gaussian noise and flat slow fading channels. In addition, the PSC benefits from a low computational cost, which can be completed in microseconds.
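A minimal sketch of the split-ratio test statistic, assuming Welch PSD estimation: because each statistic is a ratio of subband power to full-band power, the unknown noise level cancels, which is the source of the robustness to noise uncertainty. The band layout and signal parameters below are illustrative, not the paper's derivation:

```python
import numpy as np
from scipy.signal import welch

def psc_statistics(x, fs, n_subbands=8):
    """PSD split cancellation sketch: estimate the PSD, split it into
    equal subbands, and return each subband's power as a ratio of the
    full-band power. Ratios well above 1/n_subbands suggest a primary
    signal occupying part of the band."""
    f, psd = welch(x, fs=fs, nperseg=256)
    bands = np.array_split(psd, n_subbands)
    total = psd.sum()
    return np.array([b.sum() / total for b in bands])

rng = np.random.default_rng(3)
fs = 1e3
t = np.arange(4096) / fs
noise = rng.normal(0, 1, t.size)
signal = 0.5 * np.sin(2 * np.pi * 120 * t)     # narrowband primary user
print(psc_statistics(noise, fs).round(3))            # roughly flat ratios
print(psc_statistics(noise + signal, fs).round(3))   # one elevated ratio
```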
Cerebral oscillatory activity during simulated driving using MEG.
Sakihara, Kotoe; Hirata, Masayuki; Ebe, Kazutoshi; Kimura, Kenji; Yi Ryu, Seong; Kono, Yoshiyuki; Muto, Nozomi; Yoshioka, Masako; Yoshimine, Toshiki; Yorifuji, Shiro
2014-01-01
We aimed to examine cerebral oscillatory differences associated with psychological processes during simulated car driving. We recorded neuromagnetic signals in 14 healthy volunteers using magnetoencephalography (MEG) during simulated driving. MEG data were analyzed using synthetic aperture magnetometry to detect the spatial distribution of cerebral oscillations. Group effects between subjects were analyzed statistically using a non-parametric permutation test. Oscillatory differences were calculated by comparison between "passive viewing" and "active driving." "Passive viewing" was the baseline, and oscillatory differences during "active driving" showed an increase or decrease in comparison with a baseline. Power increase in the theta band was detected in the superior frontal gyrus (SFG) during active driving. Power decreases in the alpha, beta, and low gamma bands were detected in the right inferior parietal lobe (IPL), left postcentral gyrus (PoCG), middle temporal gyrus (MTG), and posterior cingulate gyrus (PCiG) during active driving. Power increase in the theta band in the SFG may play a role in attention. Power decrease in the right IPL may reflect selectively divided attention and visuospatial processing, whereas that in the left PoCG reflects sensorimotor activation related to driving manipulation. Power decreases in the MTG and PCiG may be associated with object recognition.
Comparison of beam position calculation methods for application in digital acquisition systems
NASA Astrophysics Data System (ADS)
Reiter, A.; Singh, R.
2018-05-01
Different approaches to the data analysis of beam position monitors in hadron accelerators are compared, adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and to robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method, with algorithms like signal integral or power, can be interpreted as a least-squares analysis with a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and in experimental evaluations, the positions derived from a straight-line fit, or equivalently from the standard deviation, are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
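The difference-over-sum position estimate mentioned above can be sketched in a few lines; the helper name, the sensitivity constant k, and the 'integral' versus 'power' weighting options are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def dos_position(u_a, u_b, k=1.0, mode="integral"):
    """Difference-over-sum position from two pickup electrode signals.
    mode='integral' sums the samples, mode='power' sums their squares,
    mirroring two of the algorithms compared; k is a geometry-dependent
    sensitivity constant (assumed here)."""
    if mode == "integral":
        a, b = np.sum(u_a), np.sum(u_b)
    else:  # 'power'
        a, b = np.sum(u_a ** 2), np.sum(u_b ** 2)
    return k * (a - b) / (a + b)

# A beam closer to electrode A gives a larger signal there and a
# positive position estimate.
t = np.linspace(0, 1, 200)
pulse = np.exp(-((t - 0.5) / 0.05) ** 2)
print(dos_position(1.2 * pulse, 0.8 * pulse, mode="power"))
```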
STILTS -- Starlink Tables Infrastructure Library Tool Set
NASA Astrophysics Data System (ADS)
Taylor, Mark
STILTS is a set of command-line tools for processing tabular data. It has been designed for, but is not restricted to, use on astronomical data such as source catalogues. It contains both generic (format-independent) table processing tools and tools for processing VOTable documents. Facilities offered include crossmatching, format conversion, format validation, column calculation and rearrangement, row selection, sorting, plotting, statistical calculations and metadata display. Calculations on cell data can be performed using a powerful and extensible expression language. The package is written in pure Java and based on STIL, the Starlink Tables Infrastructure Library. This gives it high portability, support for many data formats (including FITS, VOTable, text-based formats and SQL databases), extensibility and scalability. Where possible the tools are written to accept streamed data so the size of tables which can be processed is not limited by available memory. As well as the tutorial and reference information in this document, detailed on-line help is available from the tools themselves. STILTS is available under the GNU General Public Licence.
Statistical Analysis of Large Scale Structure by the Discrete Wavelet Transform
NASA Astrophysics Data System (ADS)
Pando, Jesus
1997-10-01
The discrete wavelet transform (DWT) is developed as a general statistical tool for the study of large scale structures (LSS) in astrophysics. The DWT is used in all aspects of structure identification, including cluster analysis, spectrum and two-point correlation studies, scale-scale correlation analysis, and measurement of deviations from Gaussian behavior. The techniques developed are demonstrated on 'academic' signals, on simulated models of the Lyman-α (Ly-α) forests, and on observational data of the Ly-α forests. This technique can detect clustering in the Ly-α clouds where traditional techniques such as the two-point correlation function have failed. The position and strength of these clusters in both real and simulated data is determined, and it is shown that clusters exist on scales as large as at least 20 h^-1 Mpc at significance levels of 2-4 σ. Furthermore, it is found that the strength distribution of the clusters can be used to distinguish between real data and simulated samples even where other traditional methods have failed to detect differences. Second, a method for measuring the power spectrum of a density field using the DWT is developed. All common features determined by the usual Fourier power spectrum can be calculated by the DWT. These features, such as the index of a power law or typical scales, can be detected even when the samples are geometrically complex, the samples are incomplete, or the mean density on larger scales is not known (the infrared uncertainty). Using this method the spectra of Ly-α forests in both simulated and real samples are calculated. Third, a method for measuring hierarchical clustering is introduced. Because hierarchical evolution is characterized by a set of rules for how larger dark matter halos are formed by the merging of smaller halos, scale-scale correlations of the density field should be one of the most sensitive quantities in determining the merging history. We show that these correlations can be completely determined by the correlations between discrete wavelet coefficients on adjacent scales and at nearly the same spatial position, C_{j,j+1}^{2·2}. Scale-scale correlations on two samples of the QSO Ly-α forest absorption spectra are computed. Lastly, higher order statistics are developed to detect deviations from Gaussian behavior. These higher order statistics are necessary to fully characterize the Ly-α forests because the usual 2nd order statistics, such as the two-point correlation function or power spectrum, give inconclusive results. It is shown how this technique takes advantage of the locality of the DWT to circumvent the central limit theorem. A non-Gaussian spectrum is defined, and this spectrum reveals not only the magnitude but also the scales of non-Gaussianity. When applied to simulated and observational samples of the Ly-α clouds, it is found that different popular models of structure formation have different spectra while two independent observational data sets have the same spectra. Moreover, the non-Gaussian spectra of real data sets are significantly different from the spectra of various possible random samples. (Abstract shortened by UMI.)
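A minimal sketch of a DWT power spectrum of the kind described above, using PyWavelets; the wavelet choice ('db4') and the per-scale mean-square-coefficient definition are our assumptions rather than the dissertation's exact estimator.

```python
import numpy as np
import pywt

def dwt_power_spectrum(density, wavelet="db4"):
    """Mean-square wavelet coefficient per scale: a DWT analogue of
    the Fourier power spectrum (finest scale first in the output)."""
    coeffs = pywt.wavedec(density, wavelet)  # [cA_J, cD_J, ..., cD_1]
    detail = coeffs[1:]                      # detail coefficients per scale
    return [np.mean(c ** 2) for c in reversed(detail)]

# Toy 1D field with correlated (random-walk) fluctuations.
rng = np.random.default_rng(1)
field = np.cumsum(rng.normal(size=2048))
print(dwt_power_spectrum(field))
```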
Reliability and performance experience with flat-plate photovoltaic modules
NASA Technical Reports Server (NTRS)
Ross, R. G., Jr.
1982-01-01
Statistical models developed to define the most likely sources of photovoltaic (PV) array failures and the optimum method of allowing for the defects in order to achieve a 20 yr lifetime with acceptable performance degradation are summarized. Significant parameters were the cost of energy, annual power output, initial cost, replacement cost, rate of module replacement, the discount rate, and the plant lifetime. Acceptable degradation allocations were calculated to be 0.0001 cell failures/yr, 0.005 module failures/yr, 0.05 power loss/yr, a 0.01 rate of power loss/yr, and a 25 yr module wear-out length. Circuit redundancy techniques were determined to offset cell failures using fault tolerant designs such as series/parallel and bypass diode arrangements. Screening processes have been devised to eliminate cells that will crack in operation, and multiple electrical contacts at each cell compensate for the cells which escape the screening test and then crack when installed. The 20 yr array lifetime is expected to be achieved in the near-term.
Can quantum coherent solar cells break detailed balance?
NASA Astrophysics Data System (ADS)
Kirk, Alexander P.
2015-07-01
Carefully engineered coherent quantum states have been proposed as a design attribute that is hypothesized to enable solar photovoltaic cells to break the detailed balance (or radiative) limit of power conversion efficiency by possibly causing radiative recombination to be suppressed. However, in full compliance with the principles of statistical mechanics and the laws of thermodynamics, specially prepared coherent quantum states do not allow a solar photovoltaic cell—a quantum threshold energy conversion device—to exceed the detailed balance limit of power conversion efficiency. At the condition given by steady-state open circuit operation with zero nonradiative recombination, the photon absorption rate (or carrier photogeneration rate) must balance the photon emission rate (or carrier radiative recombination rate) thus ensuring that detailed balance prevails. Quantum state transitions, entropy-generating hot carrier relaxation, and photon absorption and emission rate balancing are employed holistically and self-consistently along with calculations of current density, voltage, and power conversion efficiency to explain why detailed balance may not be violated in solar photovoltaic cells.
Calculation of crystalline lens power in chickens with a customized version of Bennett's equation.
Iribarren, Rafael; Rozema, Jos J; Schaeffel, Frank; Morgan, Ian G
2014-03-01
This paper customizes Bennett's equation for calculating lens power in chicken eyes from refraction, keratometry and biometry. Previously published data on refraction, corneal power, anterior chamber depth, lens thickness, lens radii of curvature, axial length and eye power in chickens aged 10-90 days were used to estimate Gullstrand's lens power and Bennett's lens power for chicken eyes, and to calculate the lens equivalent refractive index. Bennett's A and B constants for the front and back surface powers of the lens were calculated for data measured from day 10 to 90 at 10 day intervals, and mean customized constants were calculated. The mean customized constants for Bennett's equation for chicks were A=0.574±0.023 and B=0.379±0.021. As found previously, lens power decreases with age in chicks, while corneal power decreases and axial length increases. The lens equivalent refractive index decreases with age from 10 to 90 days after hatching. Bennett's equation can be used to calculate lens power in chicken eyes for studies on animal myopia, using standard biometry. Copyright © 2014 Elsevier B.V. All rights reserved.
DASS: efficient discovery and p-value calculation of substructures in unordered data.
Hollunder, Jens; Friedel, Maik; Beyer, Andreas; Workman, Christopher T; Wilhelm, Thomas
2007-01-01
Pattern identification in biological sequence data is one of the main objectives of bioinformatics research. However, few methods are available for detecting patterns (substructures) in unordered datasets. Data mining algorithms mainly developed outside the realm of bioinformatics have been adapted for that purpose, but typically do not determine the statistical significance of the identified patterns. Moreover, these algorithms do not exploit the often modular structure of biological data. We present the algorithm DASS (Discovery of All Significant Substructures) that first identifies all substructures in unordered data (DASS(Sub)) in a manner that is especially efficient for modular data. In addition, DASS calculates the statistical significance of the identified substructures, for sets with at most one element of each type (DASS(P(set))), or for sets with multiple occurrence of elements (DASS(P(mset))). The power and versatility of DASS is demonstrated by four examples: combinations of protein domains in multi-domain proteins, combinations of proteins in protein complexes (protein subcomplexes), combinations of transcription factor target sites in promoter regions and evolutionarily conserved protein interaction subnetworks. The program code and additional data are available at http://www.fli-leibniz.de/tsb/DASS
Replication Unreliability in Psychology: Elusive Phenomena or “Elusive” Statistical Power?
Tressoldi, Patrizio E.
2012-01-01
The focus of this paper is to analyze whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Under Null Hypothesis Statistical Testing (NHST), still the most widely used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as examples the results of meta-analyses related to four different controversial phenomena, subliminal semantic priming, incubation effect for problem solving, unconscious thought theory, and non-local perception, it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size (ES) of the typical study is low or very low. The low power in most studies undermines the use of NHST to study phenomena with moderate or low ESs. We conclude by providing some suggestions on how to increase statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or to refute the reality of a phenomenon with a small ES. PMID:22783215
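For readers who want to reproduce this kind of a priori check, a hedged example with statsmodels follows; the effect size and error rates are illustrative and are not taken from the meta-analyses discussed.

```python
from statsmodels.stats.power import TTestIndPower

# Per-group sample size needed to detect a small-to-moderate effect
# (Cohen's d = 0.3) with 80% power at alpha = .05, two-sided: the
# kind of a priori calculation recommended before judging a failed
# replication.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.3, alpha=0.05,
                                   power=0.80, alternative="two-sided")
print(round(n_per_group))  # roughly 175 per group

# Conversely, the achieved power of a typical small study (n = 30):
print(analysis.solve_power(effect_size=0.3, alpha=0.05, nobs1=30))
```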
Physical fitness profile of professional Italian firefighters: differences among age groups.
Perroni, Fabrizio; Cignitti, Lamberto; Cortis, Cristina; Capranica, Laura
2014-05-01
Firefighters perform many tasks which require a high level of fitness, and their personal safety may be compromised by the physiological aging process. The aim of the study was to evaluate strength (bench press), power (countermovement jump), sprint (20 m), and endurance (with and without Self Contained Breathing Apparatus - S.C.B.A.) of 161 Italian firefighter recruits in relation to age groups (<25 yr; 26-30 yr; 31-35 yr; 36-40 yr; 41-42 yr). Descriptive statistics and an ANOVA were calculated to provide the physical fitness profile for each parameter and to assess differences (p < 0.05) among age groups. Anthropometric values showed an age effect for height and BMI, while performance values showed statistical differences for the strength, power, and sprint tests and for the endurance test with S.C.B.A. Wearing the S.C.B.A., 14% of all recruits failed to complete the endurance test. We propose that firefighters should participate in an assessment of work capacity and in specific fitness programs aimed at maintaining an optimal fitness level at all ages. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.
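A one-way ANOVA of the kind reported here can be run with scipy; the jump-height numbers below are invented for illustration and do not come from the study.

```python
from scipy import stats

# One-way ANOVA across age groups for a single fitness parameter:
# countermovement-jump height (cm) for three of the five age groups
# (illustrative values only).
g_under25 = [38.1, 40.2, 36.5, 39.0, 41.3]
g_26_30 = [36.4, 37.8, 35.1, 38.2, 36.9]
g_41_42 = [31.2, 33.5, 30.8, 32.9, 31.7]

f_stat, p_value = stats.f_oneway(g_under25, g_26_30, g_41_42)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> age effect
```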
Enhanced Component Performance Study: Motor-Driven Pumps 1998–2014
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schroeder, John Alton
2016-02-01
This report presents an enhanced performance evaluation of motor-driven pumps at U.S. commercial nuclear power plants. The data used in this study are based on the operating experience failure reports from fiscal year 1998 through 2014 for the component reliability as reported in the Institute of Nuclear Power Operations (INPO) Consolidated Events Database (ICES). The motor-driven pump failure modes considered for standby systems are failure to start, failure to run less than or equal to one hour, and failure to run more than one hour; for normally running systems, the failure modes considered are failure to start and failure to run. An eight-hour unreliability estimate is also calculated and trended. The component reliability estimates and the reliability data are trended for the most recent 10-year period, while yearly estimates for reliability are provided for the entire active period. Statistically significant increasing trends were identified in pump run hours per reactor year. Statistically significant decreasing trends were identified for the industry-wide frequency of start demands for standby systems and for run hours per reactor year for runs of less than or equal to one hour.
Usami, Satoshi
2017-03-01
Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, so as to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased, due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
NASA Technical Reports Server (NTRS)
Billingham, J.; Tarter, J.
1992-01-01
This paper estimates the maximum range at which radar signals from the Earth could be detected by a search system similar to the NASA Search for Extraterrestrial Intelligence Microwave Observing Project (SETI MOP) assumed to be operating out in the galaxy. Figures are calculated for the Targeted Search, and for the Sky Survey parts of the MOP, both operating, as currently planned, in the second half of the decade of the 1990s. Only the most powerful terrestrial transmitters are considered, namely, the planetary radar at Arecibo in Puerto Rico, and the ballistic missile early warning systems (BMEWS). In each case the probabilities of detection over the life of the MOP are also calculated. The calculation assumes that we are only in the eavesdropping mode. Transmissions intended to be detected by SETI systems are likely to be much stronger and would of course be found with higher probability to a greater range. Also, it is assumed that the transmitting civilization is at the same level of technological evolution as ours on Earth. This is very improbable. If we were to detect another technological civilization, it would, on statistical grounds, be much older than we are and might well have much more powerful transmitters. Both factors would make detection by the NASA MOP a much more likely outcome.
Statistical averaging of marine magnetic anomalies and the aging of oceanic crust.
Blakely, R.J.
1983-01-01
Visual comparison of Mesozoic and Cenozoic magnetic anomalies in the North Pacific suggests that older anomalies contain less short-wavelength information than younger anomalies in this area. To test this observation, magnetic profiles from the North Pacific are examined from crust of three ages: 0-2.1, 29.3-33.1, and 64.9-70.3 Ma. For each time period, at least nine profiles were analyzed by 1) calculating the power density spectrum of each profile, 2) averaging the spectra together, and 3) computing a 'recording filter' for each time period by assuming a hypothetical seafloor model. The model assumes that the top of the source is acoustic basement, that the source thickness is 0.5 km, and that the time scale of geomagnetic reversals is that of Ness et al. (1980). The calculated power density spectra of the three recording filters are complex in shape but show an increase of attenuation of short-wavelength information as the crust ages. These results are interpreted using a multilayer model for marine magnetic anomalies in which the upper layer, corresponding to pillow basalt of seismic layer 2A, acts as a source of noise to the magnetic anomalies. As the ocean crust ages, this noisy contribution by the pillow basalts becomes less significant to the anomalies. Consequently, magnetic sources below layer 2A must be faithful recorders of geomagnetic reversals. -Author
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Statistics. 1065.602 Section 1065.602... PROCEDURES Calculations and Data Requirements § 1065.602 Statistics. (a) Overview. This section contains equations and example calculations for statistics that are specified in this part. In this section we use...
Calculation of streamflow statistics for Ontario and the Great Lakes states
Piggott, Andrew R.; Neff, Brian P.
2005-01-01
Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey’s SWSTAT and IOWDM, ANNIE, and LIBANNE software and Linux shell and PERL programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.
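Flow-duration statistics of the kind tabulated here reduce to exceedance percentiles of the daily-mean record; a small sketch follows (the Qp naming convention and the percentile list are assumptions for illustration).

```python
import numpy as np

def flow_duration(daily_q, exceedances=(5, 10, 25, 50, 75, 90, 95)):
    """Flow-duration statistics: the discharge equalled or exceeded
    p percent of the time (Qp), from length-of-record daily means."""
    q = np.asarray(daily_q, dtype=float)
    return {f"Q{p}": np.percentile(q, 100 - p) for p in exceedances}

# Example with synthetic lognormal daily flows (m^3/s) over 30 years.
rng = np.random.default_rng(2)
flows = rng.lognormal(mean=2.0, sigma=0.8, size=365 * 30)
print(flow_duration(flows))
```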
SurfKin: an ab initio kinetic code for modeling surface reactions.
Le, Thong Nguyen-Minh; Liu, Bin; Huynh, Lam K
2014-10-05
In this article, we describe a C/C++ program called SurfKin (Surface Kinetics) to construct microkinetic mechanisms for modeling gas-surface reactions. Thermodynamic properties of reaction species are estimated based on density functional theory calculations and statistical mechanics. Rate constants for elementary steps (including adsorption, desorption, and chemical reactions on surfaces) are calculated using the classical collision theory and transition state theory. Methane decomposition and water-gas shift reaction on Ni(111) surface were chosen as test cases to validate the code implementations. The good agreement with literature data suggests this is a powerful tool to facilitate the analysis of complex reactions on surfaces, and thus it helps to effectively construct detailed microkinetic mechanisms for such surface reactions. SurfKin also opens a possibility for designing nanoscale model catalysts. Copyright © 2014 Wiley Periodicals, Inc.
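Transition-state-theory rate constants of the kind mentioned above are conventionally computed from the Eyring equation; the sketch below shows that standard form and is not necessarily SurfKin's exact implementation (the barrier and temperature are illustrative).

```python
import numpy as np

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def tst_rate(delta_g_act, temperature):
    """Conventional transition-state-theory rate constant (1/s) from
    the Gibbs free energy of activation (J/mol): the Eyring equation
    k = (kB*T/h) * exp(-dG^/(R*T))."""
    return (KB * temperature / H) * np.exp(-delta_g_act / (R * temperature))

# e.g. a 120 kJ/mol barrier at 800 K, typical of surface reactions:
print(f"{tst_rate(120e3, 800.0):.3e} s^-1")
```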
Sharma, Kanishka; Chandra, Sushil; Dubey, Ashok Kumar
2018-01-01
Background: Rajyoga meditation is taught by the Prajapita Brahmakumaris World Spiritual University (Brahmakumaris) and has been followed by more than one million followers across the globe. However, few studies have been conducted on the physiological aspects of Rajyoga meditation using electroencephalography (EEG), and band power and cortical asymmetry have not been studied in Rajyoga meditators. Aims: This study aims to investigate the effect of regular meditation practice on EEG brain dynamics in the low-frequency bands of long-term Rajyoga meditators. Settings and Design: Subjects were matched for age in both groups. Lower-frequency EEG bands were analyzed at rest and during meditation. Materials and Methods: Twenty-one male long-term meditators (LTMs) and the same number of controls were selected to participate in the study as per the inclusion criteria. Semi-high-density EEG was recorded before and during meditation in the LTM group and at rest in the control group. The main outcomes of the study were the spectral power of the alpha and theta bands and the cortical (hemispheric) asymmetry calculated using band power. Statistical Analysis: One-way ANOVA was performed to find significant differences between the EEG spectral properties of the groups. Pearson's Chi-square test was used to find differences among demographic data. Results: The results reveal higher band power in the alpha and theta spectra in meditators. Cortical asymmetry calculated through EEG power was also found to be higher in frontal as well as parietal channels. However, no correlation was seen between the experience of meditation (years, hours of practice) and the EEG indices. Conclusion: Overall, the findings indicate a contribution of the lower frequencies (alpha and theta) to maintaining the meditative experience. This suggests a positive impact of meditation on the frontal and parietal areas of the brain, which are involved in the regulation of selective and sustained attention, and provides evidence for their involvement in emotional and cognitive processing. PMID:29343928
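Band power and the hemispheric asymmetry derived from it can be sketched as follows; the Welch estimator, the alpha band limits, and the ln(right) - ln(left) convention are our assumptions, since the abstract does not specify them.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Absolute band power from Welch's PSD estimate (rectangle rule)."""
    f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    lo, hi = band
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * (f[1] - f[0])

def alpha_asymmetry(right_ch, left_ch, fs, band=(8.0, 13.0)):
    """Hemispheric asymmetry as ln(right) - ln(left) band power,
    one common convention (assumed here)."""
    return (np.log(band_power(right_ch, fs, band))
            - np.log(band_power(left_ch, fs, band)))

# Two synthetic channels, 5 s at 500 Hz, with a stronger 10 Hz
# rhythm on the "right" channel.
fs = 500
t = np.arange(0, 5, 1 / fs)
rng = np.random.default_rng(7)
left = np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
right = 2 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
print(alpha_asymmetry(right, left, fs))  # > 0: more alpha on the right
```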
A new u-statistic with superior design sensitivity in matched observational studies.
Rosenbaum, Paul R
2011-09-01
In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.
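As a baseline for the comparison quoted above, the ordinary Wilcoxon signed-rank test on 250 pair differences drawn from Normal(1/2, 1) looks like this in scipy; the sensitivity-analysis machinery itself (the bias parameter and the proposed u-statistics) is not implemented here.

```python
import numpy as np
from scipy.stats import wilcoxon

# 250 matched-pair differences, Normal with mean 1/2 and variance 1,
# the scenario quoted in the abstract.
rng = np.random.default_rng(3)
diffs = rng.normal(loc=0.5, scale=1.0, size=250)

stat, p = wilcoxon(diffs)  # Wilcoxon signed-rank test of zero effect
print(f"W = {stat:.0f}, p = {p:.2e}")
```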
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
Compton-Scattering Cross Section on the Proton at High Momentum Transfer
NASA Astrophysics Data System (ADS)
Danagoulian, A.; Mamyan, V. H.; Roedelbronn, M.; Aniol, K. A.; Annand, J. R. M.; Bertin, P. Y.; Bimbot, L.; Bosted, P.; Calarco, J. R.; Camsonne, A.; Chang, C. C.; Chang, T.-H.; Chen, J.-P.; Choi, Seonho; Chudakov, E.; Degtyarenko, P.; de Jager, C. W.; Deur, A.; Dutta, D.; Egiyan, K.; Gao, H.; Garibaldi, F.; Gayou, O.; Gilman, R.; Glamazdin, A.; Glashausser, C.; Gomez, J.; Hamilton, D. J.; Hansen, J.-O.; Hayes, D.; Higinbotham, D. W.; Hinton, W.; Horn, T.; Howell, C.; Hunyady, T.; Hyde, C. E.; Jiang, X.; Jones, M. K.; Khandaker, M.; Ketikyan, A.; Kubarovsky, V.; Kramer, K.; Kumbartzki, G.; Laveissière, G.; Lerose, J.; Lindgren, R. A.; Margaziotis, D. J.; Markowitz, P.; McCormick, K.; Meekins, D. G.; Meziani, Z.-E.; Michaels, R.; Moussiegt, P.; Nanda, S.; Nathan, A. M.; Nikolenko, D. M.; Nelyubin, V.; Norum, B. E.; Paschke, K.; Pentchev, L.; Perdrisat, C. F.; Piasetzky, E.; Pomatsalyuk, R.; Punjabi, V. A.; Rachek, I.; Radyushkin, A.; Reitz, B.; Roche, R.; Ron, G.; Sabatié, F.; Saha, A.; Savvinov, N.; Shahinyan, A.; Shestakov, Y.; Širca, S.; Slifer, K.; Solvignon, P.; Stoler, P.; Tajima, S.; Sulkosky, V.; Todor, L.; Vlahovic, B.; Weinstein, L. B.; Wang, K.; Wojtsekhowski, B.; Voskanyan, H.; Xiang, H.; Zheng, X.; Zhu, L.
2007-04-01
Cross-section values for Compton scattering on the proton were measured at 25 kinematic settings over the range s = 5-11 GeV² and -t = 2-7 GeV², with a statistical accuracy of a few percent. The scaling power for the s dependence of the cross section at fixed center-of-mass angle was found to be 8.0±0.2, strongly inconsistent with the prediction of perturbative QCD. The observed cross-section values are in fair agreement with the calculations using the handbag mechanism, in which the external photons couple to a single quark.
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the future situation of the coconut industry in the Philippines by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. The autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data, and an appropriate Box-Jenkins autoregressive moving average model was fitted. The validity of the model was tested using standard statistical techniques. The forecasting power of the autoregressive moving average (ARMA) model was used to forecast coconut production for the eight leading years.
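A minimal Box-Jenkins workflow of the kind described can be sketched with statsmodels; the series values and the (1,1,1) order below are placeholders, not the paper's fitted model.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Annual coconut production 1990-2012 (placeholder values in million
# tonnes; the real series comes from the paper's data source).
rng = np.random.default_rng(4)
production = np.linspace(11.0, 15.4, 23) + rng.normal(0, 0.3, 23)

model = ARIMA(production, order=(1, 1, 1)).fit()  # Box-Jenkins ARIMA(p,d,q)
print(model.forecast(steps=8))                    # the eight leading years
```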
KARHUNEN-LOÈVE Basis Functions of Kolmogorov Turbulence in the Sphere
NASA Astrophysics Data System (ADS)
Mathar, Richard J.
In support of modeling atmospheric turbulence, the statistically independent Karhunen-Loève modes of refractive indices with an isotropic Kolmogorov spectrum of the covariance are calculated inside a sphere of fixed radius, rendered as a series of 3D Zernike functions. Many of the symmetry arguments of the well-known associated 2D problem for the circular input pupil remain valid. The technique of efficient diagonalization of the eigenvalue problem in wavenumber space is founded on the Fourier representation of the 3D Zernike basis, and is extensible to the von Kármán power spectrum.
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
NASA Astrophysics Data System (ADS)
Verduzco, Laura E.
The use of hydrogen as an energy carrier has the potential to decrease the amount of pollutants emitted to the atmosphere, significantly reduce our dependence on imported oil, and resolve geopolitical issues related to energy consumption. The current status of hydrogen technology makes it prohibitive and financially risky for most investors to commit the money required for large-scale hydrogen production. Therefore, alternative strategies such as small and medium-scale hydrogen applications should be implemented during the early stages of the transition to the hydrogen economy in order to test potential markets and technology readiness. While many analysis tools have been built to estimate the requirements of the transition to a hydrogen economy, few have focused on small and medium-scale hydrogen production and none has paired financial with socioeconomic costs at the residential level. The computer-based tool (H2POWER) presented in this study calculates the capacity, cost, and socioeconomic impact of the systems needed to meet the energy demands of a home or a community using home and neighborhood refueling units, which are systems that can provide electricity and heat to meet the energy demands of either (1) a home and automobile or (2) a cluster of homes and a number of automobiles. The financial costs of the production, processing, and delivery sub-systems that make up the refueling units are calculated using cost data for existing technology and normalizing them to calculate capital and net present cost. The monetary value of the externalities (socioeconomic analysis) caused by each system is calculated by H2POWER through a statistical analysis of the costs associated with various externalities. Additionally, H2POWER calculates the financial impact of different penalties and incentives (such as net metering, low interest loans, fuel taxes, and emission penalties) on the cost of the system from the point of view of a developer and a homeowner. In order to assess the benefits and costs of hydrogen-based alternatives, H2POWER compares the financial and socioeconomic costs of home and neighborhood refueling units to a baseline of "conventional" sources of residential electricity, space heat, water heat, and vehicle fuel. The model can also calculate the "gap" between the financial cost of the technology and the environmental cost of the externalities that are generated using conventional energy sources. H2POWER is a flexible, user-friendly tool that allows the user to specify different production pathways, supplemental power sources (renewable and non-renewable), component characteristics, electricity mixes, and other analysis parameters in order to customize the results to specific projects. The model also has built-in default values for each of the input fields based on national averages, standard technology specifications, and input from experts.
NASA Astrophysics Data System (ADS)
Petrova, I. R.; Bochkarev, V. V.; Latipov, R. R.
2009-09-01
We present results of the spectral analysis of data series of Doppler frequency shifts of signals reflected from the ionosphere, using experimental data received at Kazan University, Russia. Spectra of variations with periods from 1 min to 60 days have been calculated and analyzed for different scales of periods. The power spectral density for spring and winter differs by a factor of 3-4. Local maxima of variation amplitude are detected, which are statistically significant. The periods of these amplitude increases range from 6 to 12 min for winter and from 24 to 48 min for autumn. Properties of spectra for variations with periods of 1-72 h have been analyzed. The maximum of variation intensity for all seasons and frequencies corresponds to the period of 24 h. Spectra of variations with periods from 3 to 60 days have been calculated. The periods of the power spectral density maxima have been detected with the MUSIC method for high spectral resolution; the detected periods correspond to planetary wave periods. Analysis of spectra for days with different levels of geomagnetic activity shows that the intensity of variations is higher for days with a high level of geomagnetic activity.
The effect of Gonioscopy on keratometry and corneal surface topography.
George, Mathew K; Kuriakose, Thomas; DeBroff, Brian M; Emerson, John W
2006-06-17
Biometric procedures such as keratometry performed shortly after contact procedures like gonioscopy and applanation tonometry could affect the validity of the measurement. This study was conducted to understand the short-term effect of gonioscopy on corneal curvature measurements and surface topography based Simulated Keratometry and whether this would alter the power of an intraocular lens implant calculated using post-gonioscopy measurements. We further compared the effect of the 2-mirror (Goldmann) and the 4-mirror (Sussman) Gonioscopes. A prospective clinic-based self-controlled comparative study. 198 eyes of 99 patients, above 50 years of age, were studied. Exclusion criteria included documented dry eye, history of ocular surgery or trauma, diabetes mellitus and connective tissue disorders. Auto-Keratometry and corneal topography measurements were obtained at baseline and at three follow-up times - within the first 5 minutes, between the 10th-15th minute and between the 20th-25th minute after intervention. One eye was randomized for intervention with the 2-mirror gonioscope and the other underwent the 4-mirror after baseline measurements. t-tests were used to examine differences between interventions and between the measurement methods. The sample size was calculated using an estimate of clinically significant lens implant power changes based on the SRK-II formula. Clinically and statistically significant steepening was observed in the first 5 minutes and in the 10-15 minute interval using topography-based Sim K. These changes were not present with the Auto-Keratometer measurements. Although changes from baseline were noted between 20 and 25 minutes topographically, these were not clinically or statistically significant. There was no significant difference between the two types of gonioscopes. There was greater variability in the changes from baseline using the topography-based Sim K readings. Reversible steepening of the central corneal surface is produced by the act of gonioscopy as measured by Sim K, whereas no significant differences were present with Auto-K measurements. The type of Gonioscope used does not appear to influence these results. If topographically derived Sim K is used to calculate the power of the intraocular lens implant, we recommend waiting a minimum of 20 minutes before measuring the corneal curvature after gonioscopy with either Goldmann or Sussman contact lenses.
Extensions of the MCNP5 and TRIPOLI4 Monte Carlo Codes for Transient Reactor Analysis
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Sjenitzer, Bart L.
2014-06-01
To simulate reactor transients for safety analysis with the Monte Carlo method, the generation and decay of delayed neutron precursors is implemented in the MCNP5 and TRIPOLI4 general purpose Monte Carlo codes. Important new variance reduction techniques, like forced decay of precursors in each time interval and the branchless collision method, are included to obtain reasonable statistics for the power production per time interval. For the simulation of practical reactor transients, the feedback effect from the thermal-hydraulics must also be included. This requires coupling of the Monte Carlo code with a thermal-hydraulics (TH) code providing the temperature distribution in the reactor, which affects the neutron transport via the cross section data. The TH code also provides the coolant density distribution in the reactor, directly influencing the neutron transport. Different techniques for this coupling are discussed. As a demonstration, a 3x3 mini fuel assembly with a moving control rod is considered for MCNP5, and a mini core consisting of 3x3 PWR fuel assemblies with control rods and burnable poisons for TRIPOLI4. Results are shown for reactor transients due to control rod movement or withdrawal. The TRIPOLI4 transient calculation is started at low power and includes thermal-hydraulic feedback. The power rises by about 10 decades and finally stabilises at a much higher level than the initial one. The examples demonstrate that the modified Monte Carlo codes are capable of performing correct transient calculations, taking into account all geometrical and cross section detail.
Reid, Lee B; Pagnozzi, Alex M; Fiori, Simona; Boyd, Roslyn N; Dowson, Nicholas; Rose, Stephen E
2017-05-01
Researchers in the field of child neurology are increasingly looking to supplement clinical trials of motor rehabilitation with neuroimaging in order to better understand the relationship between behavioural training, brain changes, and clinical improvements. Randomised controlled trials are typically accompanied by sample size calculations to detect clinical improvements but, despite the large cost of neuroimaging, not by equivalent calculations for concurrently acquired neuroimaging measures of change in response to intervention. To aid in this regard, a power analysis was conducted for two measures of brain change that may be indexed in a trial of rehabilitative therapy for cerebral palsy: cortical thickness of the impaired primary sensorimotor cortex, and fractional anisotropy of the impaired, delineated corticospinal tract. Power for measuring fractional anisotropy was assessed for both region-of-interest-seeded and fMRI-seeded diffusion tractography. Taking into account practical limitations, as well as data loss due to behavioural and image-processing issues, the estimated required participant numbers were 101, 128, and 59 for cortical thickness, region-of-interest-based tractography, and fMRI-seeded tractography, respectively. These numbers are not adjusted for study attrition. Although such participant numbers may be out of reach of many trials, several options are available to improve statistical power, including careful preparation of participants for scanning using mock simulators, careful consideration of image-processing options, and enrolment of as homogeneous a cohort as possible. This work suggests that small and moderate-sized studies should give genuine consideration to harmonising scanning protocols between groups to allow the pooling of data. Copyright © 2017 ISDN. All rights reserved.
Relative risk estimates from spatial and space-time scan statistics: Are they biased?
Prates, Marcos O.; Kulldorff, Martin; Assunção, Renato M.
2014-01-01
The purely spatial and space-time scan statistics have been successfully used by many scientists to detect and evaluate geographical disease clusters. Although the scan statistic has high power in correctly identifying a cluster, no study has considered the estimates of the relative risk within the detected cluster. In this paper we evaluate whether there is any bias in these estimated relative risks. Intuitively, one may expect the estimated relative risks to have an upward bias, since the scan statistic cherry-picks high-rate areas to include in the cluster. We show that this intuition is correct for clusters with low statistical power, but with medium to high power the bias becomes negligible. The same behaviour is not observed for the prospective space-time scan statistic, where there is an increasingly conservative downward bias of the relative risk as the power to detect the cluster increases. PMID:24639031
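The cluster relative-risk estimator whose bias is studied here is commonly written as an inside/outside ratio of observed to expected counts; a sketch under that assumption (the function name and example numbers are ours):

```python
def cluster_relative_risk(c, e, C, E):
    """Relative risk of a detected cluster: observed/expected inside
    the cluster versus observed/expected outside it. c, e: observed
    and expected cases inside the cluster; C, E: totals for the whole
    study region."""
    inside = c / e
    outside = (C - c) / (E - e)
    return inside / outside

# Example: 40 cases observed where 25 were expected, out of 500 total
# observed and 500 total expected cases.
print(round(cluster_relative_risk(40, 25, 500, 500), 2))  # ~1.65
```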
Discriminatory power of water polo game-related statistics at the 2008 Olympic Games.
Escalante, Yolanda; Saavedra, Jose M; Mansilla, Mirella; Tella, Victor
2011-02-01
The aims of this study were (1) to compare water polo game-related statistics by context (winning and losing teams) and sex (men and women), and (2) to identify characteristics discriminating the performances for each sex. The game-related statistics of the 64 matches (44 men's and 20 women's) played in the final phase of the Olympic Games held in Beijing in 2008 were analysed. Unpaired t-tests compared winners and losers and men and women, and confidence intervals and effect sizes of the differences were calculated. The results were subjected to a discriminant analysis to identify the differentiating game-related statistics of the winning and losing teams. The results showed the differences between winning and losing men's teams to be in both defence and offence, whereas in women's teams they were only in offence. In men's games, passing (assists), aggressive play (exclusions), centre position effectiveness (centre shots), and goalkeeper defence (goalkeeper-blocked 5-m shots) predominated, whereas in women's games the play was more dynamic (possessions). The variable that most discriminated performance in men was goalkeeper-blocked shots, and in women shooting effectiveness (shots). These results should help coaches when planning training and competition.
On the statistical properties and tail risk of violent conflicts
NASA Astrophysics Data System (ADS)
Cirillo, Pasquale; Taleb, Nassim Nicholas
2016-06-01
We examine statistical pictures of violent conflicts over the last 2000 years, providing techniques for dealing with the unreliability of historical data. We make use of a novel approach to deal with fat-tailed random variables with a remote but nonetheless finite upper bound, by defining a corresponding unbounded dual distribution (given that potential war casualties are bounded by the world population). This approach can also be applied to other fields of science where power laws play a role in modeling, like geology, hydrology, statistical physics, and finance. We apply methods from extreme value theory to the dual distribution and derive its tail properties. The dual method allows us to calculate the real tail mean of war casualties, which proves to be considerably larger than the corresponding sample mean for large thresholds, implying severe underestimation of the tail risks of conflicts from naive observation. We analyze the robustness of our results to errors in historical reports. We study inter-arrival times between tail events and find that no particular trend can be asserted. All the statistical pictures obtained are at variance with the prevailing claims about a 'long peace', namely that violence has been declining over time.
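The dual-distribution idea can be sketched as a log transformation that maps the bounded variable onto an unbounded one; the functional form below is our reading of the paper's construction and should be treated as an assumption.

```python
import numpy as np

def dual_transform(x, L, H):
    """Map casualties bounded in [L, H] to an unbounded dual variable
    via a smooth, monotone log transformation of the kind used in the
    paper (exact form assumed here):
        y = L - H * ln((H - x) / (H - L)).
    Tail exponents are then estimated on y with standard EVT tools
    and results are mapped back to the bounded variable."""
    x = np.asarray(x, dtype=float)
    return L - H * np.log((H - x) / (H - L))

# L: a reporting threshold; H: world population as the physical bound.
print(dual_transform([1e4, 1e6, 1e8], L=1e4, H=7.2e9))
```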
Statistical properties of radiation from VUV and X-ray free electron laser
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-03-01
The paper presents a comprehensive analysis of the statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free electron laser operating in linear and nonlinear mode. The investigation has been performed in a one-dimensional approximation assuming the electron pulse length to be much larger than a coherence length of the radiation. The following statistical properties of the SASE FEL radiation have been studied in detail: time and spectral field correlations, distribution of the fluctuations of the instantaneous radiation power, distribution of the energy in the electron bunch, distribution of the radiation energy after the monochromator installed at the FEL amplifier exit and radiation spectrum. The linear high gain limit is studied analytically. It is shown that the radiation from a SASE FEL operating in the linear regime possesses all the features corresponding to completely chaotic polarized radiation. A detailed study of statistical properties of the radiation from a SASE FEL operating in linear and nonlinear regime has been performed by means of time-dependent simulation codes. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility being under construction at DESY.
Spezia, Riccardo; Martínez-Nuñez, Emilio; Vazquez, Saulo; Hase, William L
2017-04-28
In this Introduction, we show the basic problems of non-statistical and non-equilibrium phenomena related to the papers collected in this themed issue. Over the past few years, significant advances in both computing power and development of theories have allowed the study of larger systems, increasing the time length of simulations and improving the quality of potential energy surfaces. In particular, the possibility of using quantum chemistry to calculate energies and forces 'on the fly' has paved the way to directly study chemical reactions. This has provided a valuable tool to explore molecular mechanisms at given temperatures and energies and to see whether these reactive trajectories follow statistical laws and/or minimum energy pathways. This themed issue collects different aspects of the problem and gives an overview of recent works and developments in different contexts, from the gas phase to the condensed phase to excited states.This article is part of the themed issue 'Theoretical and computational studies of non-equilibrium and non-statistical dynamics in the gas phase, in the condensed phase and at interfaces'. © 2017 The Author(s).
SCIDAC Center for simulation of wave particle interactions CompX participation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harvey, R.W.
Harnessing the energy that is released in fusion reactions would provide a safe and abundant source of power to meet the growing energy needs of the world population. The next step toward the development of fusion as a practical energy source is the construction of ITER, a device capable of producing and controlling the high performance plasma required for self-sustaining fusion reactions, or "burning" plasma. The input power required to drive the ITER plasma into the burning regime will be supplied primarily by a combination of external power from radio frequency waves in the ion cyclotron range of frequencies and energetic ions from neutral beam injection sources, in addition to internally generated Ohmic heating from the induced plasma current that also serves to create the magnetic equilibrium for the discharge. The ITER project is a large multi-billion dollar international project in which the US participates. Because the success of the ITER project depends critically on the ability to create and maintain burning plasma conditions, it is absolutely necessary to have physics-based models that can accurately simulate the RF processes that affect the dynamical evolution of the ITER discharge. The Center for Simulation of Wave-Plasma Interactions (CSWPI), also known as RF-SciDAC, is a multi-institutional collaboration that has conducted ongoing research aimed at developing: (1) coupled core-to-edge simulations that will lead to an increased understanding of parasitic losses of the applied RF power in the boundary plasma between the RF antenna and the core plasma; (2) models for core interactions of RF waves with energetic electrons and ions (including fusion alpha particles and fast neutral beam ions) that include a more accurate representation of the particle dynamics in the combined equilibrium and wave fields; and (3) improved algorithms that will take advantage of massively parallel computing platforms at the petascale level and beyond to achieve the needed physics, resolution, and/or statistics to address these issues. CompX provides computer codes and analysis for the calculation of the electron and ion distributions in velocity space and plasma radius, which are necessary for reliable calculations of power deposition and toroidal current drive due to combined radiofrequency and neutral beam heating at high injected powers. It has also contributed to ray tracing modeling of injected radiofrequency power, and to coupling between full-wave radiofrequency wave models and the distribution function calculations. In the course of this research, the Fokker-Planck distribution function calculation was made substantially more realistic by the inclusion of finite-width drift-orbit (FOW) effects. FOW effects were also implemented in a calculation of the phase-space diffusion resulting from radiofrequency full-wave models. The average level of funding for CompX was approximately three man-months per year.
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2010 CFR
2010-07-01
... introductory statistics texts. ... student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
Difference of refraction values between standard autorefractometry and Plusoptix.
Bogdănici, Camelia Margareta; Săndulache, Codrina Maria; Vasiliu, Rodica; Obadă, Otilia
2016-01-01
Aim: Comparison between the objective refraction measurement results determined with the Topcon KR-8900 standard autorefractometer and the Plusoptix A09 photo-refractometer in children. Material and methods: A prospective transversal study was performed in the Department of Ophthalmology of "Sf. Spiridon" Hospital in Iași on 90 eyes of 45 pediatric patients, with a mean age of 8.82 ± 3.52 years, examined with noncycloplegic measurements provided by the Plusoptix A09 and cycloplegic and noncycloplegic measurements provided by the Topcon KR-8900 standard autorefractometer. The clinical parameters compared were the following: spherical equivalent (SE), spherical and cylindrical values, and cylinder axis. Astigmatism was recorded and evaluated with the cylindrical value on minus after transposition. The statistical calculation was performed with paired t-tests and Pearson's correlation analysis. All the data were analyzed with the SPSS statistical package 19 (SPSS for Windows, Chicago, IL). Results: Plusoptix A09 noncycloplegic values were relatively equal between the eyes, with slightly lower values compared to noncycloplegic autorefractometry. Mean (± SD) measurements provided by the Plusoptix A09 were the following: spherical power 1.11 ± 1.52, cylindrical power 0.80 ± 0.80, and spherical equivalent 0.71 ± 1.39. The noncycloplegic autorefractometer mean (± SD) measurements were spherical power 1.12 ± 1.63, cylindrical power 0.79 ± 0.77, and spherical equivalent 0.71 ± 1.58. The cycloplegic autorefractometer mean (± SD) measurements were spherical power 2.08 ± 1.95, cylindrical power 0.82 ± 0.85, and spherical equivalent 1.68 ± 1.87. 32% of the eyes were hyperopic, 2.67% were myopic, 65.33% had astigmatism, and 30% of the eyes had amblyopia. Conclusions: Noncycloplegic objective refraction values were similar to those determined by autorefractometry. Plusoptix had an important role in ophthalmological screening, but did not detect higher refractive errors, justifying cycloplegic autorefractometry.
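The spherical equivalents reported above follow the standard SE = sphere + cylinder/2 rule with the cylinder in minus form; a one-line check reproduces the reported Plusoptix means.

```python
def spherical_equivalent(sphere, cylinder):
    """SE = sphere + cylinder/2, with the cylinder on minus after
    transposition (the convention stated in the paper)."""
    return sphere + cylinder / 2.0

# 1.11 D sphere with a -0.80 D cylinder gives SE = 0.71 D, matching
# the reported Plusoptix means.
print(spherical_equivalent(1.11, -0.80))  # 0.71
```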
Percolation technique for galaxy clustering
NASA Technical Reports Server (NTRS)
Klypin, Anatoly; Shandarin, Sergei F.
1993-01-01
We study percolation in mass and galaxy distributions obtained in 3D simulations of the CDM, C + HDM, and the power law (n = -1) models in the Omega = 1 universe. Percolation statistics is used here as a quantitative measure of the degree to which a mass or galaxy distribution is of a filamentary or cellular type. The very fast code used calculates the statistics of clusters along with the direct detection of percolation. We found that the two parameters mu(infinity), characterizing the size of the largest cluster, and mu-squared, characterizing the weighted mean size of all clusters excluding the largest one, are extremely useful for evaluating the percolation threshold. An advantage of using these parameters is their low sensitivity to boundary effects. We show that both the CDM and the C + HDM models are extremely filamentary both in mass and galaxy distribution. The percolation thresholds for the mass distributions are determined.
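A minimal sketch of the two cluster statistics described above, computed for a random stand-in density field; the threshold and grid size are illustrative assumptions, not the simulation data of the study:

```python
# mu_infinity: fractional size of the largest cluster;
# mu_2: weighted mean size of all clusters excluding the largest.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
density = rng.random((64, 64, 64))        # stand-in for a simulated mass field
mask = density > 0.7                      # occupied cells above a trial threshold

labels, n_clusters = ndimage.label(mask)  # face-connected cluster labelling
sizes = np.bincount(labels.ravel())[1:]   # cluster sizes (label 0 = empty)

mu_inf = sizes.max() / mask.sum()                      # largest-cluster fraction
rest = np.delete(sizes, sizes.argmax())                # all other clusters
mu_2 = (rest**2).sum() / rest.sum() if rest.size else 0.0
print(mu_inf, mu_2)
```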
Fatal falls in the US construction industry, 1990 to 1999.
Derr, J; Forst, L; Chen, H Y; Conroy, L
2001-10-01
The Occupational Safety and Health Administration's (OSHA's) Integrated Management Information System (IMIS) database allows for the detailed analysis of risk factors surrounding fatal occupational events. This study used IMIS data to (1) perform a risk factor analysis of fatal construction falls, and (2) assess the impact of the February 1995 29 CFR Part 1926 Subpart M OSHA fall protection regulations for construction by calculating trends in fatal fall rates. In addition, IMIS data on fatal construction falls were compared with data from other occupational fatality surveillance systems. For falls in construction, the study identified several demographic factors that may indicate increased risk. A statistically significant downward trend in fatal falls was evident in all construction and within several construction categories during the decade. Although the study failed to show a statistically significant intervention effect from the new OSHA regulations, it may have lacked the power to do so.
[Health for All-Italia: an indicator system on health].
Burgio, Alessandra; Crialesi, Roberta; Loghi, Marzia
2003-01-01
The Health for All - Italia information system collects health data from several sources. It is intended to be a cornerstone for the achievement of an overview about health in Italy. Health is analyzed at different levels, ranging from health services, health needs, lifestyles, demographic, social, economic and environmental contexts. The database associated software allows to pin down statistical data into graphs and tables, and to carry out simple statistical analysis. It is therefore possible to view the indicators' time series, make simple projections and compare the various indicators over the years for each territorial unit. This is possible by means of tables, graphs (histograms, line graphs, frequencies, linear regression with calculation of correlation coefficients, etc) and maps. These charts can be exported to other programs (i.e. Word, Excel, Power Point), or they can be directly printed in color or black and white.
Are EUR and GBP different words for the same currency?
NASA Astrophysics Data System (ADS)
Ivanova, K.; Ausloos, M.
2002-05-01
The British Pound (GBP) is not part of the Euro (EUR) monetary system. In order to find arguments on whether the GBP should join the EUR, correlations are calculated between GBP exchange rates with respect to various currencies: USD, JPY, CHF, DKK, the currencies forming the EUR, and a reconstructed EUR, for the time interval from 1993 until June 30, 2000. The distribution of fluctuations of the exchange rates is Gaussian in the central part of the distribution, but has fat tails for the large-size fluctuations. Within the Detrended Fluctuation Analysis (DFA) statistical method, the power-law behavior describing the root-mean-square deviation from a linear trend of the exchange-rate fluctuations is obtained as a function of time for the time interval of interest. The time evolution of the exponent of the exchange-rate fluctuations is also given. Statistical considerations imply that the GBP is already behaving as a true EUR.
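The DFA procedure referenced above can be sketched compactly: integrate the mean-subtracted series, detrend it in windows of size n, and read the scaling exponent from the slope of log F(n) versus log n. A minimal sketch on a synthetic stand-in for exchange-rate returns:

```python
# Detrended Fluctuation Analysis on a synthetic return series.
import numpy as np

def dfa(x, scales):
    y = np.cumsum(x - np.mean(x))             # integrated profile
    F = []
    for n in scales:
        n_seg = len(y) // n
        segs = y[:n_seg * n].reshape(n_seg, n)
        t = np.arange(n)
        # subtract a linear least-squares trend from each segment
        resid = [seg - np.polyval(np.polyfit(t, seg, 1), t) for seg in segs]
        F.append(np.sqrt(np.mean(np.square(resid))))
    return np.array(F)

rng = np.random.default_rng(1)
returns = rng.normal(size=2000)               # stand-in for daily returns
scales = np.array([8, 16, 32, 64, 128, 256])
F = dfa(returns, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(alpha)   # ~0.5 for uncorrelated fluctuations
```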
Schenk, Emily R; Nau, Frederic; Fernandez-Lima, Francisco
2015-06-01
The ability to correlate experimental ion mobility data with candidate structures from theoretical modeling provides a powerful analytical and structural tool for the characterization of biomolecules. In the present paper, a theoretical workflow is described to generate and assign candidate structures for experimental trapped ion mobility and H/D exchange (HDX-TIMS-MS) data following molecular dynamics simulations and statistical filtering. The applicability of the theoretical predictor is illustrated for a peptide and protein example with multiple conformations and kinetic intermediates. The described methodology yields a low computational cost and a simple workflow by incorporating statistical filtering and molecular dynamics simulations. The workflow can be adapted to different IMS scenarios and CCS calculators for a more accurate description of the IMS experimental conditions. For the case of the HDX-TIMS-MS experiments, molecular dynamics in the "TIMS box" accounts for a better sampling of the molecular intermediates and local energy minima.
Vorticity and divergence in the solar photosphere
NASA Technical Reports Server (NTRS)
Wang, YI; Noyes, Robert W.; Tarbell, Theodore D.; Title, Alan M.
1995-01-01
We have studied an outstanding sequence of continuum images of the solar granulation from Pic du Midi Observatory. We have calculated the horizontal vector flow field using a correlation tracking algorithm, and from this determined three scalar fields: the vertical component of the curl; the horizontal divergence; and the horizontal flow speed. The divergence field has substantially longer coherence time and more power than does the curl field. Statistically, curl is better correlated with regions of negative divergence - that is, the vertical vorticity is higher in downflow regions, suggesting excess vorticity in intergranular lanes. The average value of the divergence is largest (i.e., outflow is largest) where the horizontal speed is large; we associate these regions with exploding granules. A numerical simulation of general convection also shows similar statistical differences between curl and divergence. Some individual small bright points in the granulation pattern show large local vorticities.
Heterodyne efficiency for a coherent laser radar with diffuse or aerosol targets
NASA Technical Reports Server (NTRS)
Frehlich, R. G.
1993-01-01
The performance of a Coherent Laser Radar is determined by the statistics of the coherent Doppler signal. The heterodyne efficiency is an excellent indication of performance because it is an absolute measure of beam alignment and is independent of the transmitter power, the target backscatter coefficient, the atmospheric attenuation, and the detector quantum efficiency and gain. The theoretical calculation of heterodyne efficiency for an optimal monostatic lidar with a circular aperture and Gaussian transmit laser is presented including beam misalignment in the far-field and near-field regimes. The statistical behavior of estimates of the heterodyne efficiency using a calibration hard target is considered. For space-based applications, a biased estimate of heterodyne efficiency is proposed that removes the variability due to the random surface return but retains the sensitivity to misalignment. Physical insight is provided by simulation of the fields on the detector surface. The required detector calibration is also discussed.
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
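A minimal sketch of the bootstrap-based Monte Carlo power estimation described above (not the bmem package itself), for a simple mediation model X -> M -> Y with assumed population path coefficients:

```python
# Monte Carlo power for the indirect effect a*b via percentile bootstrap.
import numpy as np

rng = np.random.default_rng(2)
a, b, c, n = 0.3, 0.3, 0.1, 100          # assumed population paths, sample size

def ab_hat(x, m, y):
    a_hat = np.polyfit(x, m, 1)[0]                       # slope of M on X
    X = np.column_stack([m, x, np.ones_like(x)])         # Y on M and X
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0][0]
    return a_hat * b_hat

hits = 0
reps = 200
for _ in range(reps):                     # Monte Carlo replications
    x = rng.normal(size=n)
    m = a * x + rng.normal(size=n)
    y = b * m + c * x + rng.normal(size=n)
    boot = []
    for _ in range(200):                  # percentile bootstrap of a*b
        idx = rng.integers(0, n, n)
        boot.append(ab_hat(x[idx], m[idx], y[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    hits += (lo > 0) or (hi < 0)          # CI excludes zero -> detected
print("estimated power:", hits / reps)
```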
Statistics of baryon correlation functions in lattice QCD
NASA Astrophysics Data System (ADS)
Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration
2017-12-01
A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.
NASA Astrophysics Data System (ADS)
Wang, Yaping; Lin, Shunjiang; Yang, Zhibin
2017-05-01
In traditional three-phase power flow calculations for the low voltage distribution network, the load model is described as constant power. Since this model cannot reflect the characteristics of actual loads, the result of the traditional calculation always differs from the actual situation. In this paper, a load model in which dynamic loads, represented by air conditioners, are connected in parallel with static loads, represented by lighting, is used to describe the characteristics of residential loads, and a three-phase power flow calculation model is proposed. The power flow calculation model includes the power balance equations of the three phases (A, B, C), the current balance equations of phase 0, and the torque balance equations of the induction motors in the air conditioners. An alternating iterative algorithm, which solves the induction motor torque balance equations together with the nodal balance equations, is then proposed to solve the three-phase power flow model. The method is applied to an actual low voltage distribution network with residential loads, and calculations for three different operating states of the air conditioners demonstrate the effectiveness of the proposed model and algorithm.
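The torque-balance step for a single induction motor can be sketched with the steady-state equivalent circuit; in the full algorithm this solve alternates with the three-phase nodal equations. All motor parameters below are illustrative assumptions:

```python
# Solve T_e(s) = T_load for the operating slip on the stable branch.
import numpy as np

V, ws = 220.0, 2 * np.pi * 50        # phase voltage (V), synchronous speed (rad/s)
R1, X1, R2, X2 = 0.5, 1.2, 0.6, 1.3  # stator/rotor resistance and reactance (ohm)
T_load = 8.0                          # mechanical load torque (N*m)

def torque(s):
    Z2 = (R1 + R2 / s) ** 2 + (X1 + X2) ** 2   # squared series impedance
    return 3 * V**2 * (R2 / s) / (ws * Z2)     # electromagnetic torque

# bisection on the stable branch (small slip), where torque rises with s
lo, hi = 1e-4, 0.2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if torque(mid) < T_load else (lo, mid)
s_op = 0.5 * (lo + hi)
print("operating slip:", s_op, "torque:", torque(s_op))
```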
Quantum fluctuation theorems and power measurements
NASA Astrophysics Data System (ADS)
Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter
2015-07-01
Work in the paradigm of the quantum fluctuation theorems of Crooks and Jarzynski is determined by projective measurements of energy at the beginning and end of the force protocol. In analogy to classical systems, we consider an alternative definition of work given by the integral of the supplied power determined by integrating up the results of repeated measurements of the instantaneous power during the force protocol. We observe that such a definition of work, in spite of taking account of the process dependence, has different possible values and statistics from the work determined by the conventional two energy measurement approach (TEMA). In the limit of many projective measurements of power, the system’s dynamics is frozen in the power measurement basis due to the quantum Zeno effect leading to statistics only trivially dependent on the force protocol. In general the Jarzynski relation is not satisfied except for the case when the instantaneous power operator commutes with the total Hamiltonian at all times. We also consider properties of the joint statistics of power-based definition of work and TEMA work in protocols where both values are determined. This allows us to quantify their correlations. Relaxing the projective measurement condition, weak continuous measurements of power are considered within the stochastic master equation formalism. Even in this scenario the power-based work statistics is in general not able to reproduce qualitative features of the TEMA work statistics.
Neuroimaging Identifies Patients Most Likely to Respond to a Restorative Stroke Therapy.
Cassidy, Jessica M; Tran, George; Quinlan, Erin B; Cramer, Steven C
2018-02-01
Patient heterogeneity reduces statistical power in clinical trials of restorative therapies. Valid predictors of treatment responsiveness are needed, and several have been studied with a focus on corticospinal tract (CST) injury. We studied performance of 4 such measures for predicting behavioral gains in response to motor training therapy. Patients with subacute-chronic hemiparetic stroke (n=47) received standardized arm motor therapy, and change in arm Fugl-Meyer score was calculated from baseline to 1 month post-therapy. Injury measures calculated from baseline magnetic resonance imaging included (1) percent CST overlap with stroke, (2) CST-related atrophy (cerebral peduncle area), (3) CST integrity (fractional anisotropy) in the cerebral peduncle, and (4) CST integrity in the posterior limb of internal capsule. Percent CST overlap with stroke, CST-related atrophy, and CST integrity did not correlate with one another, indicating that these 3 measures captured independent features of CST injury. Percent injury to CST significantly predicted treatment-related behavioral gains (r = -0.41; P = 0.004). The other CST injury measures did not, neither did total infarct volume nor baseline behavioral deficits. When directly comparing patients with mild versus severe injury using the percent CST injury measure, the odds ratio was 15.0 (95% confidence interval, 1.54-147; P < 0.005) for deriving clinically important treatment-related gains. Percent CST injury is useful for predicting motor gains in response to therapy in the setting of subacute-chronic stroke. This measure can be used as an entry criterion or a stratifying variable in restorative stroke trials to increase statistical power, reduce sample size, and reduce the cost of such trials. © 2018 American Heart Association, Inc.
Volpov, Beth L; Rosen, David A S; Trites, Andrew W; Arnould, John P Y
2015-08-01
We tested the ability of overall dynamic body acceleration (ODBA) to predict the rate of oxygen consumption ([Formula: see text]) in freely diving Steller sea lions (Eumetopias jubatus) while resting at the surface and diving. The trained sea lions executed three dive types (single dives; bouts of multiple long dives, with 4-6 dives per bout; or bouts of multiple short dives, with 10-12 dives per bout) to depths of 40 m, resulting in a range of activity and oxygen consumption levels. Average metabolic rate (AMR) over the dive cycle or dive bout was calculated from [Formula: see text]. We found that ODBA could statistically predict AMR when data from all dive types were combined, but that dive type was a significant model factor. However, there were no significant linear relationships between AMR and ODBA when data for each dive type were analyzed separately. The potential relationships between AMR and ODBA were not improved by including dive duration, food consumed, proportion of dive cycle spent submerged, or number of dives per bout. It is not clear whether the lack of predictive power within dive type was due to low statistical power, or whether it reflected a true absence of a relationship between ODBA and AMR. The average percent error for predicting AMR from ODBA was 7-11%, and the standard error of the estimated AMR was 5-32%. Overall, the extensive range of dive behaviors and physiological conditions we tested indicated that ODBA was not suitable for estimating AMR in the field due to considerable error and the inconclusive effects of dive type.
Crans, Gerald G; Shuster, Jonathan J
2008-08-15
The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level, alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha* = alpha + epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using alpha* instead of alpha) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd
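The size calculation described above can be sketched directly: for equal group sizes n, the attained size of the two-sided FET at nominal alpha is the maximum, over the common null success probability, of the rejection rate. A minimal sketch (a small n is used so it runs quickly; the paper goes to n = 125):

```python
# Attained size of two-sided Fisher's exact test for two groups of n each.
import numpy as np
from scipy.stats import binom, fisher_exact

def fet_size(n, alpha=0.05):
    reject = np.zeros((n + 1, n + 1))      # rejection region over all outcomes
    for x in range(n + 1):
        for y in range(n + 1):
            _, p = fisher_exact([[x, n - x], [y, n - y]])
            reject[x, y] = p <= alpha
    size = 0.0
    for p0 in np.linspace(0.01, 0.99, 99):  # grid over the common null p
        px = binom.pmf(np.arange(n + 1), n, p0)
        size = max(size, px @ reject @ px)  # P(reject) under p0
    return size

print(fet_size(20))   # well below 0.05, illustrating FET's conservativeness
```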
Wagner, Márcia Helena; Barletta, Fernando Branco; Reis, Magda de Souza; Mello, Luciano Loureiro; Ferreira, Ronise; Fernandes, Antônio Luiz Rocha
2006-01-01
The purpose of this study was to assess dentin removal during root canal preparation by different operators using a NSK reciprocating handpiece. Eighty-four human single-rooted mandibular premolars were hand instrumented using Triple-Flex stainless-steel files (Kerr) up to #30, weighed in an analytical balance and randomly assigned to 4 groups (n=21). All specimens were mechanically prepared at the working length with #35 to #45 Triple-Flex files (Kerr) coupled to a NSK (TEP-E10R, Nakanishi Inc.) reciprocating handpiece powered by an electric motor (Endo Plus; VK Driller). Groups 1 to 4 were prepared by a professor of Endodontics, an endodontist, a third-year dental student and a general dentist, respectively. Teeth were reweighed after root canal preparation. The difference between weights was calculated and the means of dentin removal in each group were analyzed statistically by ANOVA and Tukey's test at the 5% significance level. The greatest amount of dentin removal was found in group 4, followed by groups 2, 3 and 1. Group 4 differed statistically from the other groups regarding dentin removal means [p<0.001 (group 1); p=0.005 (group 2); and p=0.001 (group 3)]. No statistically significant difference was found between groups 1 and 2 (p=0.608), 1 and 3 (p=0.914) and 2 and 3 (p=0.938). In conclusion, although the group prepared by a general dentist differed statistically from the other groups in terms of amount of dentin removal, this difference was clinically irrelevant. The NSK reciprocating handpiece powered by an electric motor proved to be an effective auxiliary tool in root canal preparation, regardless of the operator's skills.
An analysis of science versus pseudoscience
NASA Astrophysics Data System (ADS)
Hooten, James T.
2011-12-01
This quantitative study identified distinctive features in archival datasets commissioned by the National Science Foundation (NSF) for Science and Engineering Indicators reports. The dependent variables included education level and scores for science fact knowledge, science process knowledge, and pseudoscience beliefs. The dependent variables were aggregated into nine NSF-defined geographic regions and examined for the years 2004 and 2006. The variables were also examined over all years available in the dataset. Descriptive statistics were determined and tests for normality and homogeneity of variances were performed using the Statistical Package for the Social Sciences. Analysis of Variance was used to test for statistically significant differences between the nine geographic regions for each of the four dependent variables. A significance level of 0.05 was used. Tukey post-hoc analysis was used to compute practical significance of differences between regions. Post-hoc power analysis using G*Power was used to calculate the probability of Type II errors. Tests for correlations across all years of the dependent variables were also performed. Pearson's r was used to indicate the strength of the relationship between the dependent variables. Small to medium differences in science literacy and education level were observed between many of the nine U.S. geographic regions. The most significant differences occurred when the West South Central region was compared to the New England and the Pacific regions. Belief in pseudoscience appeared to be distributed evenly across all U.S. geographic regions. Education level was a strong indicator of science literacy regardless of a respondent's region of residence. Recommendations for further study include more in-depth investigation to uncover the nature of the relationship between education level and belief in pseudoscience.
Functional Regression Models for Epistasis Analysis of Multiple Quantitative Traits.
Zhang, Futao; Xie, Dan; Liang, Meimei; Xiong, Momiao
2016-04-01
To date, most genetic analyses of phenotypes have focused on analyzing single traits or analyzing each phenotype independently. However, joint epistasis analysis of multiple complementary traits will increase statistical power and improve our understanding of the complicated genetic structure of the complex diseases. Despite their importance in uncovering the genetic structure of complex traits, the statistical methods for identifying epistasis in multiple phenotypes remain fundamentally unexplored. To fill this gap, we formulate a test for interaction between two genes in multiple quantitative trait analysis as a multiple functional regression (MFRG) in which the genotype functions (genetic variant profiles) are defined as a function of the genomic position of the genetic variants. We use large-scale simulations to calculate Type I error rates for testing interaction between two genes with multiple phenotypes and to compare the power with multivariate pairwise interaction analysis and single trait interaction analysis by a single variate functional regression model. To further evaluate performance, the MFRG for epistasis analysis is applied to five phenotypes of exome sequence data from the NHLBI's Exome Sequencing Project (ESP) to detect pleiotropic epistasis. A total of 267 pairs of genes that formed a genetic interaction network showed significant evidence of epistasis influencing five traits. The results demonstrate that the joint interaction analysis of multiple phenotypes has a much higher power to detect interaction than the interaction analysis of a single trait and may open a new direction to fully uncovering the genetic structure of multiple phenotypes.
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, as well as normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance committing type 1 errors (false positives) against type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (random clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
Perraton, Luke G.; Bower, Kelly J.; Adair, Brooke; Pua, Yong-Hao; Williams, Gavin P.; McGaw, Rebekah
2015-01-01
Introduction Hand-held dynamometry (HHD) has never previously been used to examine isometric muscle power. Rate of force development (RFD) is often used for muscle power assessment, however no consensus currently exists on the most appropriate method of calculation. The aim of this study was to examine the reliability of different algorithms for RFD calculation and to examine the intra-rater, inter-rater, and inter-device reliability of HHD as well as the concurrent validity of HHD for the assessment of isometric lower limb muscle strength and power. Methods 30 healthy young adults (age: 23±5yrs, male: 15) were assessed on two sessions. Isometric muscle strength and power were measured using peak force and RFD respectively using two HHDs (Lafayette Model-01165 and Hoggan microFET2) and a criterion-reference KinCom dynamometer. Statistical analysis of reliability and validity comprised intraclass correlation coefficients (ICC), Pearson correlations, concordance correlations, standard error of measurement, and minimal detectable change. Results Comparison of RFD methods revealed that a peak 200ms moving window algorithm provided optimal reliability results. Intra-rater, inter-rater, and inter-device reliability analysis of peak force and RFD revealed mostly good to excellent reliability (coefficients ≥ 0.70) for all muscle groups. Concurrent validity analysis showed moderate to excellent relationships between HHD and fixed dynamometry for the hip and knee (ICCs ≥ 0.70) for both peak force and RFD, with mostly poor to good results shown for the ankle muscles (ICCs = 0.31–0.79). Conclusions Hand-held dynamometry has good to excellent reliability and validity for most measures of isometric lower limb strength and power in a healthy population, particularly for proximal muscle groups. To aid implementation we have created freely available software to extract these variables from data stored on the Lafayette device. Future research should examine the reliability and validity of these variables in clinical populations. PMID:26509265
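The best-performing RFD algorithm reported above, a peak 200 ms moving window, reduces to a few lines: take the steepest average slope of the force-time curve over any 200 ms window. A minimal sketch on a synthetic force trace (sampling rate and signal are illustrative assumptions):

```python
# Peak rate of force development over a 200 ms moving window.
import numpy as np

fs = 1000                                     # samples per second (assumed)
t = np.arange(0, 2, 1 / fs)
force = 300 / (1 + np.exp(-12 * (t - 0.5)))   # synthetic force onset (N)

w = int(0.2 * fs)                             # 200 ms window in samples
rfd = (force[w:] - force[:-w]) / 0.2          # average slope per window (N/s)
print("peak RFD:", rfd.max(), "N/s")
```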
Applying Probabilistic Decision Models to Clinical Trial Design
Smith, Wade P; Phillips, Mark H
2018-01-01
Clinical trial design most often focuses on a single or several related outcomes with corresponding calculations of statistical power. We consider a clinical trial to be a decision problem, often with competing outcomes. Using a current controversy in the treatment of HPV-positive head and neck cancer, we apply several different probabilistic methods to help define the range of outcomes given different possible trial designs. Our model incorporates the uncertainties in the disease process and treatment response and the inhomogeneities in the patient population. Instead of expected utility, we have used a Markov model to calculate quality adjusted life expectancy as a maximization objective. Monte Carlo simulations over realistic ranges of parameters are used to explore different trial scenarios given the possible ranges of parameters. This modeling approach can be used to better inform the initial trial design so that it will more likely achieve clinical relevance. PMID:29888075
Characteristics and verification of a car-borne survey system for dose rates in air: KURAMA-II.
Tsuda, S; Yoshida, T; Tsutsumi, M; Saito, K
2015-01-01
The car-borne survey system KURAMA-II, developed by the Kyoto University Research Reactor Institute, has been used for air dose rate mapping after the Fukushima Dai-ichi Nuclear Power Plant accident. KURAMA-II consists of a CsI(Tl) scintillation detector, a GPS device, and a control device for data processing. The dose rates monitored by KURAMA-II are based on the G(E) function (spectrum-dose conversion operator), which can precisely calculate dose rates from measured pulse-height distribution even if the energy spectrum changes significantly. The characteristics of KURAMA-II have been investigated with particular consideration to the reliability of the calculated G(E) function, dose rate dependence, statistical fluctuation, angular dependence, and energy dependence. The results indicate that 100 units of KURAMA-II systems have acceptable quality for mass monitoring of dose rates in the environment. Copyright © 2014 Elsevier Ltd. All rights reserved.
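A minimal sketch of the G(E)-function dose evaluation described above: the measured pulse-height distribution is weighted bin-by-bin by the spectrum-dose conversion operator and summed. Both the spectrum and the G(E) values below are illustrative placeholders, not the calibrated KURAMA-II operator:

```python
# Dose rate from a pulse-height distribution via a G(E) function.
import numpy as np

energies = np.linspace(50, 3000, 60)                 # keV bin centres (assumed)
counts = np.random.default_rng(3).poisson(100, energies.size)  # counts/s per bin
gE = 1e-6 * (energies / 662) ** 1.2                  # assumed conversion operator (uSv/h per count/s)

dose_rate = np.sum(counts * gE)                      # uSv/h
print(dose_rate)
```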
Density-functional theory for fluid-solid and solid-solid phase transitions.
Bharadwaj, Atul S; Singh, Yashwant
2017-03-01
We develop a theory to describe solid-solid phase transitions. The density functional formalism of classical statistical mechanics is used to find an exact expression for the difference in the grand thermodynamic potentials of the two coexisting phases. The expression involves both the symmetry conserving and the symmetry broken parts of the direct pair correlation function. The theory is used to calculate the phase diagram of systems of soft spheres interacting via inverse-power potentials u(r) = ε(σ/r)^n, where the parameter n measures the softness of the potential. We find that for 1/n<0.154 systems freeze into the face-centered-cubic (fcc) structure while for 1/n≥0.154 the body-centered-cubic (bcc) structure is preferred. The bcc structure transforms into the fcc structure upon increasing the density. The calculated phase diagram is in good agreement with the one found from molecular simulations.
Bs and Ds decay constants in three-flavor lattice QCD.
Wingate, Matthew; Davies, Christine T H; Gray, Alan; Lepage, G Peter; Shigemitsu, Junko
2004-04-23
Capitalizing on recent advances in lattice QCD, we present a calculation of the leptonic decay constants f_Bs and f_Ds that includes effects of one strange sea quark and two light sea quarks via an improved staggered action. By shedding the quenched approximation and the associated lattice scale uncertainty, lattice QCD greatly increases its predictive power. Nonrelativistic QCD is used to simulate heavy quarks with masses between 1.5m_c and m_b. We arrive at the following results: f_Bs = 260 ± 7 ± 26 ± 8 ± 5 MeV and f_Ds = 290 ± 20 ± 29 ± 29 ± 6 MeV. The first quoted error is the statistical uncertainty, and the rest estimate the sizes of higher-order terms neglected in this calculation. All of these uncertainties are systematically improvable by including another order in the weak coupling expansion, the nonrelativistic expansion, or the Symanzik improvement program.
Genetic data for 15 STR loci in a Kadazan-Dusun population from East Malaysia.
Kee, B P; Lian, L H; Lee, P C; Lai, T X; Chua, K H
2011-04-26
Allele frequencies of 15 short tandem repeat (STR) loci, namely D5S818, D7S820, D13S317, D16S539, TH01, TPOX, Penta D, Penta E, D3S1358, D8S1179, D18S51, D21S11, CSF1PO, vWA, and FGA, were determined for 154 individuals from the Kadazan-Dusun tribe, an indigenous population of East Malaysia. All loci were amplified by polymerase chain reaction, using the Powerplex 16 system. Alleles were typed using a gene analyzer and the Genemapper ID software. Various statistical parameters were calculated and the combined power of discrimination for the 15 loci in the population was calculated as 0.999999999999999. These loci are thus, informative and can be used effectively in forensic and genetic studies of this indigenous population.
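A minimal sketch of the combined power of discrimination calculation: per-locus PD is one minus the sum of squared genotype frequencies (taken here from allele frequencies under Hardy-Weinberg equilibrium), and loci combine multiplicatively. The allele frequencies below are illustrative, not the Kadazan-Dusun data:

```python
# Combined power of discrimination across independent STR loci.
from itertools import combinations
import numpy as np

def locus_pd(allele_freqs):
    p = np.asarray(allele_freqs)
    # genotype frequencies under HWE: p_i^2 homozygotes, 2*p_i*p_j heterozygotes
    geno = list(p**2) + [2 * p[i] * p[j] for i, j in combinations(range(len(p)), 2)]
    return 1 - np.sum(np.square(geno))

loci = [[0.4, 0.35, 0.25], [0.5, 0.3, 0.2], [0.6, 0.4]]   # hypothetical loci
combined_pd = 1 - np.prod([1 - locus_pd(f) for f in loci])
print(combined_pd)
```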
A method for the microlensed flux variance of QSOs
NASA Astrophysics Data System (ADS)
Goodman, Jeremy; Sun, Ai-Lei
2014-06-01
A fast and practical method is described for calculating the microlensed flux variance of an arbitrary source by uncorrelated stars. The required inputs are the mean convergence and shear due to the smoothed potential of the lensing galaxy, the stellar mass function, and the absolute square of the Fourier transform of the surface brightness in the source plane. The mathematical approach follows previous authors but has been generalized, streamlined, and implemented in publicly available code. Examples of its application are given for Dexter and Agol's inhomogeneous-disc models as well as the usual Gaussian sources. Since the quantity calculated is a second moment of the magnification, it is only logarithmically sensitive to the sizes of very compact sources. However, for the inferred sizes of actual quasi-stellar objects (QSOs), it has some discriminatory power and may lend itself to simple statistical tests. At the very least, it should be useful for testing the convergence of microlensing simulations.
Achievable flatness in a large microwave power transmitting antenna
NASA Technical Reports Server (NTRS)
Ried, R. C.
1980-01-01
A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting loss of sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties, particularly in the coefficient of thermal expansion from part to part, are more significant than the actual values.
NASA Astrophysics Data System (ADS)
Rosato, J.; Capes, H.; Catoire, F.; Kadomtsev, M. B.; Levashova, M. G.; Lisitsa, V. S.; Marandet, Y.; Rosmej, F. B.; Stamm, R.
2011-08-01
In lithium-wall-conditioned tokamaks, the line radiation due to the intrinsic impurities (Li/Li+/Li++) plays a significant role in the power balance. Calculations of the radiation losses are usually performed using a stationary collisional-radiative model, assuming constant values for the plasma parameters (Ne, Te,…). Such an approach is not suitable for turbulent plasmas where the various parameters are time-dependent. This is critical especially for the edge region, where the fluctuation rates can reach several tens of percent [e.g. J.A. Boedo, J. Nucl. Mater. 390-391 (2009) 29-37]. In this work, the role of turbulence on the radiated power is investigated with a statistical formalism. A special emphasis is devoted to the role of temperature fluctuations, successively for low-frequency fluctuations and in the general case where the characteristic turbulence frequencies can be comparable to the collisional and radiative rates.
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies are of minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve a very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set a stage for a new paradigm of statistical tests.
How Many Studies Do You Need? A Primer on Statistical Power for Meta-Analysis
ERIC Educational Resources Information Center
Valentine, Jeffrey C.; Pigott, Therese D.; Rothstein, Hannah R.
2010-01-01
In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide…
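A minimal sketch of a prospective fixed-effect power calculation of the kind outlined above, assuming k studies of equal size and a hypothesized summary standardized mean difference; all parameter values are illustrative:

```python
# Fixed-effect meta-analysis power for a two-sided z test at alpha = 0.05.
import numpy as np
from scipy.stats import norm

delta = 0.2                          # assumed true standardized mean difference
n1 = n2 = 40                         # per-arm sample size in each study
k = 10                               # number of studies
v = np.full(k, 1/n1 + 1/n2 + delta**2 / (2 * (n1 + n2)))  # variance of d per study
se = np.sqrt(1 / np.sum(1 / v))      # SE of the fixed-effect summary effect
lam = delta / se                     # noncentrality of the z test
power = 1 - norm.cdf(1.96 - lam) + norm.cdf(-1.96 - lam)
print(power)                         # ~0.8 under these assumptions
```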
Selimkhanov, J; Thompson, W C; Guo, J; Hall, K D; Musante, C J
2017-08-01
The design of well-powered in vivo preclinical studies is a key element in building the knowledge of disease physiology for the purpose of identifying and effectively testing potential antiobesity drug targets. However, as a result of the complexity of the obese phenotype, there is limited understanding of the variability within and between study animals of macroscopic end points such as food intake and body composition. This, combined with limitations inherent in the measurement of certain end points, presents challenges to study design that can have significant consequences for an antiobesity program. Here, we analyze a large, longitudinal study of mouse food intake and body composition during diet perturbation to quantify the variability and interaction of the key metabolic end points. To demonstrate how conclusions can change as a function of study size, we show that a simulated preclinical study properly powered for one end point may lead to false conclusions based on secondary end points. We then propose guidelines for end point selection and study size estimation under different conditions to facilitate proper power calculation for a more successful in vivo study design.
Monitoring Statistics Which Have Increased Power over a Reduced Time Range.
ERIC Educational Resources Information Center
Tang, S. M.; MacNeill, I. B.
1992-01-01
The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011 aiming to monitor over the years the abilities to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the contributed results thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common type computer nodes. However, using true supercomputers the speedup of parallel calculations is increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict if the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark test are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than its computational performance, proposals are presented for extending the benchmark to a suite of benchmark problems for evaluating fission source convergence for a system with a high dominance ratio, for coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities and to study the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition will be discussed.
Cerebral oscillatory activity during simulated driving using MEG
Sakihara, Kotoe; Hirata, Masayuki; Ebe, Kazutoshi; Kimura, Kenji; Yi Ryu, Seong; Kono, Yoshiyuki; Muto, Nozomi; Yoshioka, Masako; Yoshimine, Toshiki; Yorifuji, Shiro
2014-01-01
We aimed to examine cerebral oscillatory differences associated with psychological processes during simulated car driving. We recorded neuromagnetic signals in 14 healthy volunteers using magnetoencephalography (MEG) during simulated driving. MEG data were analyzed using synthetic aperture magnetometry to detect the spatial distribution of cerebral oscillations. Group effects between subjects were analyzed statistically using a non-parametric permutation test. Oscillatory differences were calculated by comparison between “passive viewing” and “active driving.” “Passive viewing” was the baseline, and oscillatory differences during “active driving” showed an increase or decrease in comparison with a baseline. Power increase in the theta band was detected in the superior frontal gyrus (SFG) during active driving. Power decreases in the alpha, beta, and low gamma bands were detected in the right inferior parietal lobe (IPL), left postcentral gyrus (PoCG), middle temporal gyrus (MTG), and posterior cingulate gyrus (PCiG) during active driving. Power increase in the theta band in the SFG may play a role in attention. Power decrease in the right IPL may reflect selectively divided attention and visuospatial processing, whereas that in the left PoCG reflects sensorimotor activation related to driving manipulation. Power decreases in the MTG and PCiG may be associated with object recognition. PMID:25566017
A Ratiometric Method for Johnson Noise Thermometry Using a Quantized Voltage Noise Source
NASA Astrophysics Data System (ADS)
Nam, S. W.; Benz, S. P.; Martinis, J. M.; Dresselhaus, P.; Tew, W. L.; White, D. R.
2003-09-01
Johnson Noise Thermometry (JNT) involves the measurement of the statistical variance of a fluctuating voltage across a resistor in thermal equilibrium. Modern digital techniques make it now possible to perform many functions required for JNT in highly efficient and predictable ways. We describe the operational characteristics of a prototype JNT system which uses digital signal processing for filtering, real-time spectral cross-correlation for noise power measurement, and a digitally synthesized Quantized Voltage Noise Source (QVNS) as an AC voltage reference. The QVNS emulates noise with a constant spectral density that is stable, programmable, and calculable in terms of known parameters using digital synthesis techniques. Changes in analog gain are accounted for by alternating the inputs between the Johnson noise sensor and the QVNS. The Johnson noise power at a known temperature is first balanced with a synthesized noise power from the QVNS. The process is then repeated by balancing the noise power from the same resistor at an unknown temperature. When the two noise power ratios are combined, a thermodynamic temperature is derived using the ratio of the two QVNS spectral densities. We present preliminary results where the ratio between the gallium triple point and the water triple point is used to demonstrate the accuracy of the measurement system with a standard uncertainty of 0.04 %.
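The ratiometric step reduces to simple arithmetic: since Johnson noise power is proportional to temperature, balancing the sensor against the QVNS at both temperatures gives the unknown temperature from the ratio of the two synthesized spectral densities. A minimal sketch with illustrative values:

```python
# Ratiometric Johnson noise thermometry; all numbers are illustrative.
T_ref = 273.16                 # water triple point (K)
S_ref = 1.000e-17              # QVNS density balancing the sensor at T_ref (V^2/Hz)
S_x   = 1.109e-17              # QVNS density balancing it at the unknown T (V^2/Hz)

T_x = T_ref * S_x / S_ref      # Johnson noise power scales linearly with T
print(T_x)                     # ~302.9 K, near the gallium triple point
```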
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, shows the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to partial-duration flood series and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: Recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics predict much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher. More conservative dam designs and land-use restrictions may be required.
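A minimal sketch of the power-law fit to a partial-duration flood series: assign each peak a recurrence interval from its rank, fit a line in log-log space, and extrapolate to the 100-year flood. The synthetic discharges below stand in for gauge data:

```python
# Power-law recurrence estimate from a partial-duration flood series.
import numpy as np

years = 50                                  # assumed record length
rng = np.random.default_rng(4)
peaks = np.sort(rng.pareto(3, 200) * 1e3 + 1e3)[::-1]   # synthetic peaks, descending
rank = np.arange(1, peaks.size + 1)
T = years / rank                            # recurrence interval per peak (yr)

b, log_a = np.polyfit(np.log10(T), np.log10(peaks), 1)  # log Q = b*log T + log a
q100 = 10**log_a * 100**b                   # extrapolated 100-year flood
print(q100)
```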
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation by immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that restricts attention to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgeries at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses. Postoperative refractive errors were measured in the fifth week. More accurate calculation was based on significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The MAEs and RMSEs were smaller for PCI (0.5106 diopters (D), 0.6037 D) than for IUB (0.7000 D, 0.8062 D). The higher accuracy was principally contributed by negative errors, i.e., myopia. For negative errors, the MAEs and RMSEs were 0.7955 D and 0.8562 D for IUB versus 0.5185 D and 0.5853 D for PCI. Their differences were significant. 72.34% of PCI errors fell within the clinically accepted range of ± 0.50 D, whereas 50% of IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
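The accuracy metrics used above are straightforward to compute; a minimal sketch with illustrative postoperative refractive errors (target emmetropia, 0 D):

```python
# MAE and RMSE of postoperative refractive errors away from emmetropia.
import numpy as np

errors = np.array([-0.50, 0.25, -0.75, 0.00, 0.50, -0.25])  # diopters (illustrative)
mae = np.mean(np.abs(errors))
rmse = np.sqrt(np.mean(errors**2))
print(mae, rmse)
```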
Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia
2016-01-01
Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.
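In the spirit of PopSc's metadata-based design, a class-(i) statistic can be computed directly from allele frequencies without the raw sequences. A minimal sketch of Nei's unbiased expected heterozygosity (the function name and inputs are illustrative, not PopSc's API):

```python
# Unbiased expected heterozygosity (gene diversity) from allele frequencies.
import numpy as np

def expected_heterozygosity(allele_freqs, n_individuals):
    """Nei's unbiased gene diversity from allele frequencies alone."""
    p = np.asarray(allele_freqs)
    n = 2 * n_individuals                      # number of gene copies
    return n / (n - 1) * (1 - np.sum(p**2))

print(expected_heterozygosity([0.5, 0.3, 0.2], n_individuals=154))
```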
The effect of macro-bending on power confinement factor in single mode fibers
NASA Astrophysics Data System (ADS)
Waluyo, T. B.; Bayuwati, D.; Mulyanto, I.
2018-03-01
One of the methods to determine the macro-bending effect in a single mode fiber is by calculating its power loss coefficient. We describe an alternative method using the equation for the fractional power in the fiber core. Knowing the fiber parameters such as its core radius, refractive indexes, and operating wavelength, we can calculate the V-number and the fractional power in the core. Because the fiber refractive indexes and the propagation constant are affected by bending, we can calculate the fractional power in the core as a function of the bending radius. We calculate the fractional power in the core of an SMF28 and an SM600 fiber and, to verify our calculation, we measure their transmission loss using an optical spectrum analyzer. Our calculations and experimental results showed that for the SMF28 fiber there is about 4% power loss due to bending at 633 nm, about 8% at 1310 nm, about 20% at 1550 nm, and about 60% at 1064 nm. For the SM600 fiber there is about 6% power loss due to bending at 633 nm and about 11% at 850 nm; this fiber is not suitable for operating wavelengths beyond 1000 nm.
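A minimal sketch of the fractional-power reasoning above: compute the V-number, estimate the mode-field radius with Marcuse's empirical formula, and take the fraction of a Gaussian mode's power inside the core. The fiber parameters are nominal SMF28-like values assumed for illustration:

```python
# V-number and fraction of power confined in the core (Gaussian approximation).
import numpy as np

a = 4.1e-6                       # core radius (m), assumed
n1, n2 = 1.4504, 1.4447          # core/cladding indices, assumed
lam = 1550e-9                    # operating wavelength (m)

NA = np.sqrt(n1**2 - n2**2)
V = 2 * np.pi * a * NA / lam
w = a * (0.65 + 1.619 * V**-1.5 + 2.879 * V**-6)   # Marcuse mode-field radius
frac_core = 1 - np.exp(-2 * a**2 / w**2)           # Gaussian power inside r < a
print(V, frac_core)              # V ~ 2.1 (single mode), ~75% of power in core
```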
Power flows and Mechanical Intensities in structural finite element analysis
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.
1989-01-01
The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations for beam, plate, and solid element types are derived. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view the analysis results easily and efficiently, applying shape optimization methods to the problem with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
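For time-harmonic response, the drive-point input power underlying the post-processing step above is half the real part of force times conjugate velocity. A minimal sketch with illustrative phasors:

```python
# Time-averaged input power at a drive point from complex phasors.
import numpy as np

F = 10.0 * np.exp(1j * 0.0)            # drive-point force phasor (N), illustrative
v = 0.02 * np.exp(-1j * 0.6)           # drive-point velocity phasor (m/s)

P_in = 0.5 * np.real(F * np.conj(v))   # time-averaged input power (W)
print(P_in)
```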
DOE Office of Scientific and Technical Information (OSTI.GOV)
Landry, Guillaume, E-mail: g.landry@lmu.de; Nijhuis, Reinoud; Thieke, Christian
2015-03-15
Purpose: Intensity modulated proton therapy (IMPT) of head and neck (H and N) cancer patients may be improved by plan adaptation. The decision to adapt the treatment plan based on a dose recalculation on the current anatomy requires a diagnostic quality computed tomography (CT) scan of the patient. As gantry-mounted cone beam CT (CBCT) scanners are currently being offered by vendors, they may offer daily or weekly updates of patient anatomy. CBCT image quality may not be sufficient for accurate proton dose calculation and it is likely necessary to perform CBCT CT number correction. In this work, the authors investigated deformable image registration (DIR) of the planning CT (pCT) to the CBCT to generate a virtual CT (vCT) to be used for proton dose recalculation. Methods: Datasets of six H and N cancer patients undergoing photon intensity modulated radiation therapy were used in this study to validate the vCT approach. Each dataset contained a CBCT acquired within 3 days of a replanning CT (rpCT), in addition to a pCT. The pCT and rpCT were delineated by a physician. A Morphons algorithm was employed in this work to perform DIR of the pCT to CBCT following a rigid registration of the two images. The contours from the pCT were deformed using the vector field resulting from DIR to yield a contoured vCT. The DIR accuracy was evaluated with a scale invariant feature transform (SIFT) algorithm comparing automatically identified matching features between vCT and CBCT. The rpCT was used as reference for evaluation of the vCT. The vCT and rpCT CT numbers were converted to stopping power ratio and the water equivalent thickness (WET) was calculated. IMPT dose distributions from treatment plans optimized on the pCT were recalculated with a Monte Carlo algorithm on the rpCT and vCT for comparison in terms of gamma index, dose volume histogram (DVH) statistics as well as proton range. The DIR generated contours on the vCT were compared to physician-drawn contours on the rpCT. Results: The DIR accuracy was better than 1.4 mm according to the SIFT evaluation. The mean WET differences between vCT (pCT) and rpCT were below 1 mm (2.6 mm). The fraction of voxels passing the 3%/3 mm gamma criterion was above 95% for the vCT vs rpCT. When using the rpCT contour set to derive DVH statistics from dose distributions calculated on the rpCT and vCT, the differences, expressed in terms of 30 fractions of 2 Gy, were within [−4, 2 Gy] for parotid glands (D_mean), spinal cord (D_2%), brainstem (D_2%), and CTV (D_95%). When using DIR generated contours for the vCT, those differences ranged within [−8, 11 Gy]. Conclusions: In this work, the authors generated CBCT based stopping power distributions using DIR of the pCT to a CBCT scan. DIR accuracy was below 1.4 mm as evaluated by the SIFT algorithm. Dose distributions calculated on the vCT agreed well with those calculated on the rpCT when using gamma index evaluation as well as DVH statistics based on the same contours. The use of DIR generated contours introduced variability in DVH statistics.
Lee, F K-H; Chan, C C-L; Law, C-K
2009-02-01
Contrast enhanced computed tomography (CECT) has been used for delineation of the treatment target in radiotherapy. The altered Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. Statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size, and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.
NASA Astrophysics Data System (ADS)
Li, Hai; Kumavor, Patrick D.; Alqasemi, Umar; Zhu, Quing
2014-03-01
Human ovarian tissue features extracted from photoacoustic spectral data, beam envelopes, and co-registered ultrasound and photoacoustic images are used to characterize cancerous vs. normal processes using a support vector machine (SVM) classifier. The centers of suspicious tumor areas are estimated from the Gaussian fitting of the mean Radon transforms of the photoacoustic image along 0 and 90 degrees. Normalized power spectra are calculated using the Fourier transform of the photoacoustic beamformed data across these suspicious areas, where the spectral slope and 0-MHz intercept are extracted. Image statistics, envelope histogram fitting, and the maximum output of 6 composite filters of cancerous or normal patterns, along with other previously used features, are calculated to compose a total of 17 features. These features are extracted from 169 datasets of 19 ex vivo ovaries. Half of the cancerous and normal datasets are randomly chosen to train an SVM classifier with a polynomial kernel, and the remainder is used for testing. With 50 times data resampling, the SVM classifier, for the training group, gives 100% sensitivity and 100% specificity. For the testing group, it gives 89.68 ± 6.37% sensitivity and 93.16 ± 3.70% specificity. These results are superior to those obtained earlier by our group using features extracted from photoacoustic raw data or image statistics only.
Herbert, Vanessa; Kyle, Simon D; Pratt, Daniel
2018-06-01
Individuals with insomnia report difficulties pertaining to their cognitive functioning. Cognitive behavioural therapy for insomnia (CBT-I) is associated with robust, long-term improvements in sleep parameters; however, less is known about the impact of CBT-I on the daytime correlates of the disorder. A systematic review and narrative synthesis was conducted in order to summarise and evaluate the evidence regarding the impact of CBT-I on cognitive functioning. Reference databases were searched and studies were included if they assessed cognitive performance as an outcome of CBT-I, using either self-report questionnaires or cognitive tests. Eighteen studies met inclusion criteria, comprising 923 individuals with insomnia symptoms. The standardised mean difference was calculated at post-intervention and follow-up. We found preliminary evidence for small to moderate effects of CBT-I on subjective measures of cognitive functioning. Few of the effects were statistically significant, likely due to small sample sizes and limited statistical power. There is a lack of evidence with regard to the impact of CBT-I on objective cognitive performance, primarily due to the small number of studies that administered an objective measure (n = 4). We conclude that adequately powered randomised controlled trials, utilising both subjective and objective measures of cognitive functioning, are required.
Jets and Metastability in Quantum Mechanics and Quantum Field Theory
NASA Astrophysics Data System (ADS)
Farhi, David
I give a high-level overview of the state of particle physics in the introduction, accessible without any background in the field. I discuss improvements of theoretical and statistical methods used for collider physics. These include telescoping jets, a statistical method which was claimed to allow jet searches to increase their sensitivity by considering several interpretations of each event. We find that multiple interpretations do indeed extend the power of searches, for both simple counting experiments and powerful multivariate fitting experiments, at least for h → bb̄ at the LHC. Then I propose a method for automation of background calculations using SCET by appropriating the technology of Monte Carlo generators such as MadGraph. In the third chapter I change gears and discuss the future of the universe. It has long been known that our pocket of the standard model is unstable; there is a lower-energy configuration in a remote part of the configuration space, to which our universe will, eventually, decay. While the timescales involved are on the order of 10^400 years (depending on how exactly one counts) and thus of no immediate worry, I discuss the shortcomings of the standard methods and propose a more physically motivated derivation for the decay rate. I then make various observations about the structure of decays in quantum field theory.
Angular power spectrum in publicly released ALICE events
NASA Astrophysics Data System (ADS)
Llanes-Estrada, Felipe J.; Muñoz Martinez, Jose L.
2018-02-01
We study the particles emitted in the fireball following a Relativistic Heavy Ion Collision with the traditional angular analysis employed in cosmology and Earth sciences, producing Mollweide plots of the number and p_T distribution of a few actual, publicly released ALICE-collaboration events and calculating their angular power spectrum. We also examine the angular spectrum of a simple two-particle correlation. While this may not be the optimal way of analyzing heavy ion data, our intention is to provide a one-to-one comparison to analysis in cosmology. With the limited statistics at hand, we do not find evidence for acoustic peaks but a decrease of C_l that is reminiscent of viscous attenuation, but subject to a strong effect from the rapidity acceptance which probably dominates (so we also subtract the m = 0 component). As an exercise, we still extract a characteristic Silk damping length (proportional to the square root of the viscosity over entropy density ratio) to illustrate the method. The absence of acoustic-like peaks is also compatible with a crossover from the QGP to the hadron gas (because a surface tension at domain boundaries would effect a restoring force that could have driven acoustic oscillations). Presently we do not understand a depression of the l = 6 multipole strength; perhaps ALICE could reexamine it with full statistics.
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Punjabi, Alkesh; Ali, Halima
2011-02-15
Any canonical transformation of Hamiltonian equations is symplectic, and any area-preserving transformation in 2D is a symplectomorphism. Based on these, a discrete symplectic map and its continuous symplectic analog are derived for forward magnetic field line trajectories in natural canonical coordinates. The unperturbed axisymmetric Hamiltonian for magnetic field lines is constructed from the experimental data in the DIII-D [J. L. Luxon and L. E. Davis, Fusion Technol. 8, 441 (1985)]. The equilibrium Hamiltonian is a highly accurate, analytic, and realistic representation of the magnetic geometry of the DIII-D. These symplectic mathematical maps are used to calculate the magnetic footprint on the inboard collector plate in the DIII-D. Internal statistical topological noise and field errors are irreducible and ubiquitous in magnetic confinement schemes for fusion. It is important to know the stochasticity and magnetic footprint from noise and error fields. The estimates of the spectrum and mode amplitudes of the spatial topological noise and magnetic errors in the DIII-D are used as magnetic perturbation. The discrete and continuous symplectic maps are used to calculate the magnetic footprint on the inboard collector plate of the DIII-D by inverting the natural coordinates to physical coordinates. The combination of a highly accurate equilibrium generating function, natural canonical coordinates, symplecticity, and small step size together gives a very accurate calculation of the magnetic footprint. Radial variation of magnetic perturbation and the response of plasma to perturbation are not included. The inboard footprint from noise and errors is dominated by the m=3, n=1 mode. The footprint is in the form of a toroidally winding helical strip. The width of the stochastic layer scales as the 1/2 power of the perturbation amplitude. The area of the footprint scales as the first power of the amplitude. The physical parameters such as toroidal angle, length, and poloidal angle covered before striking, and the safety factor all have fractal structure. The average field diffusion near the X-point for lines that strike and that do not strike differs by about three to four orders of magnitude. The magnetic footprint gives the maximal bounds on size and heat flux density on the collector plate.
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of a statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of two P-value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests.
Wicks, J
2000-06-01
The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent, and thus produce a more powerful test of linkage, for ASPs, than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs.
Sequential associative memory with nonuniformity of the layer sizes.
Teramae, Jun-Nosuke; Fukai, Tomoki
2007-01-01
Sequence retrieval has a fundamental importance in information processing by the brain, and has extensively been studied in neural network models. Most previous sequential associative memory models embed sequences of memory patterns of nearly equal sizes. It was recently shown that local cortical networks display many diverse yet repeatable precise temporal sequences of neuronal activities, termed "neuronal avalanches." Interestingly, these avalanches displayed size and lifetime distributions that obey power laws. Inspired by these experimental findings, here we consider an associative memory model of binary neurons that stores sequences of memory patterns with highly variable sizes. Our analysis includes the case where the statistics of these size variations obey the above-mentioned power laws. We study the retrieval dynamics of such memory systems by analytically deriving the equations that govern the time evolution of macroscopic order parameters. We calculate the critical sequence length beyond which the network cannot retrieve memory sequences correctly. As an application of the analysis, we show how the present variability in sequential memory patterns degrades the power-law lifetime distribution of retrieved neural activities.
Density dependence of the nuclear energy-density functional
NASA Astrophysics Data System (ADS)
Papakonstantinou, Panagiota; Park, Tae-Sun; Lim, Yeunhwan; Hyun, Chang Ho
2018-01-01
Background: The explicit density dependence in the coupling coefficients entering the nonrelativistic nuclear energy-density functional (EDF) is understood to encode effects of three-nucleon forces and dynamical correlations. The necessity for the density-dependent coupling coefficients to assume the form of a preferably small fractional power of the density ρ is empirical and the power is often chosen arbitrarily. Consequently, precision-oriented parametrizations risk overfitting in the regime of saturation and extrapolations in dilute or dense matter may lose predictive power. Purpose: Beginning with the observation that the Fermi momentum k_F, i.e., the cube root of the density, is a key variable in the description of Fermi systems, we first wish to examine if a power hierarchy in a k_F expansion can be inferred from the properties of homogeneous matter in a domain of densities, which is relevant for nuclear structure and neutron stars. For subsequent applications we want to determine a functional that is of good quality but not overtrained. Method: For the EDF, we fit systematically polynomial and other functions of ρ^{1/3} to existing microscopic, variational calculations of the energy of symmetric and pure neutron matter (pseudodata) and analyze the behavior of the fits. We select a form and a set of parameters, which we found robust, and examine the parameters' naturalness and the quality of resulting extrapolations. Results: A statistical analysis confirms that low-order terms such as ρ^{1/3} and ρ^{2/3} are the most relevant ones in the nuclear EDF beyond lowest order. It also hints at a different power hierarchy for symmetric vs. pure neutron matter, supporting the need for more than one density-dependent term in nonrelativistic EDFs. The functional we propose easily accommodates known or adopted properties of nuclear matter near saturation. More importantly, upon extrapolation to dilute or asymmetric matter, it reproduces a range of existing microscopic results, to which it has not been fitted. It also predicts a neutron-star mass-radius relation consistent with observations. The coefficients display naturalness. Conclusions: Having been already determined for homogeneous matter, a functional of the present form can be mapped onto extended Skyrme-type functionals in a straightforward manner, as we outline here, for applications to finite nuclei. At the same time, the statistical analysis can be extended to higher orders and for different microscopic (ab initio) calculations with sufficient pseudodata points and for polarized matter.
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial…
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has a frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has a frequency above 0.2 and the number of variants is above five.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Green Power Equivalency Calculator
Use this calculator to translate your green power use from kilowatt-hours to more understandable terms, such as the equivalent number of average American homes it could power or miles an electric car could drive.
Fu, Wenjiang J.; Stromberg, Arnold J.; Viele, Kert; Carroll, Raymond J.; Wu, Guoyao
2009-01-01
Over the past two decades, there have been revolutionary developments in life science technologies characterized by high throughput, high efficiency, and rapid computation. Nutritionists now have the advanced methodologies for the analysis of DNA, RNA, protein, low-molecular-weight metabolites, as well as access to bioinformatics databases. Statistics, which can be defined as the process of making scientific inferences from data that contain variability, has historically played an integral role in advancing nutritional sciences. Currently, in the era of systems biology, statistics has become an increasingly important tool to quantitatively analyze information about biological macromolecules. This article describes general terms used in statistical analysis of large, complex experimental data. These terms include experimental design, power analysis, sample size calculation, and experimental errors (type I and II errors) for nutritional studies at population, tissue, cellular, and molecular levels. In addition, we highlighted various sources of experimental variations in studies involving microarray gene expression, real-time polymerase chain reaction, proteomics, and other bioinformatics technologies. Moreover, we provided guidelines for nutritionists and other biomedical scientists to plan and conduct studies and to analyze the complex data. Appropriate statistical analyses are expected to make an important contribution to solving major nutrition-associated problems in humans and animals (including obesity, diabetes, cardiovascular disease, cancer, ageing, and intrauterine fetal retardation).
Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.
2014-01-01
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research.
Simulating flaring events in complex active regions driven by observed magnetograms
NASA Astrophysics Data System (ADS)
Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M. K.
2011-05-01
Context. We interpret solar flares as events originating in active regions that have reached the self-organized critical state, by using a refined cellular automaton model with initial conditions derived from observations. Aims: We investigate whether the system, with its imposed physical elements, reaches a self-organized critical state and whether well-known statistical properties of flares, such as scaling laws observed in the distribution functions of characteristic parameters, are reproduced after this state has been reached. Methods: To investigate whether the distribution functions of total energy, peak energy and event duration follow the expected scaling laws, we first applied a nonlinear force-free extrapolation that reconstructs the three-dimensional magnetic fields from two-dimensional vector magnetograms. We then locate magnetic discontinuities exceeding a threshold in the Laplacian of the magnetic field. These discontinuities are relaxed in local diffusion events, implemented in the form of cellular automaton evolution rules. Subsequent loading and relaxation steps lead the system to self-organized criticality, after which the statistical properties of the simulated events are examined. Physical requirements, such as the divergence-free condition for the magnetic field vector, are approximately imposed on all elements of the model. Results: Our results show that self-organized criticality is indeed reached when applying specific loading and relaxation rules. Power-law indices obtained from the distribution functions of the modeled flaring events are in good agreement with observations. Single power laws (peak and total flare energy) are obtained, as are power laws with exponential cutoff and double power laws (flare duration). The results are also compared with observational X-ray data from the GOES satellite for our active-region sample. Conclusions: We conclude that well-known statistical properties of flares are reproduced after the system has reached self-organized criticality. A significant enhancement of our refined cellular automaton model is that it commences the simulation from observed vector magnetograms, thus facilitating energy calculation in physical units. The model described in this study remains consistent with fundamental physical requirements, and imposes physically meaningful driving and redistribution rules.
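As a concrete illustration of the loading/relaxation cycle, the following is a minimal scalar cellular automaton in the Lu-Hamilton style, a simplified stand-in for the vector-field, magnetogram-driven model described above (the grid size, threshold, driving increments, periodic boundaries, and the 4/5-1/5 redistribution split are all illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    N, Zc = 32, 1.0                        # grid size, instability threshold
    B = np.zeros((N, N))                   # scalar stand-in for the magnetic field
    sizes = []                             # avalanche ("flare") sizes

    def curvature(F):                      # discrete analogue of the Laplacian test
        nn = (np.roll(F, 1, 0) + np.roll(F, -1, 0) +
              np.roll(F, 1, 1) + np.roll(F, -1, 1))
        return F - nn / 4.0

    for _ in range(50000):
        i, j = rng.integers(N, size=2)
        B[i, j] += rng.uniform(-0.2, 0.8)  # slow, mostly positive loading
        size = 0
        while True:                        # relaxation (local diffusion events)
            dB = curvature(B)
            excess = np.where(np.abs(dB) > Zc, dB, 0.0)
            if not excess.any():
                break
            B -= 0.8 * excess              # unstable site sheds 4/5 of its excess
            for ax, sh in ((0, 1), (0, -1), (1, 1), (1, -1)):
                B += 0.2 * np.roll(excess, sh, axis=ax)  # each neighbour gains 1/5
            size += int(np.count_nonzero(excess))
        if size:
            sizes.append(size)

Once the grid reaches the critical state, a log-log histogram of sizes is approximately a power law, which is the statistical signature the study tests against observed flare distributions.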
On projectile fragmentation at high-velocity perforation of a thin bumper
NASA Astrophysics Data System (ADS)
Myagkov, N. N.; Stepanov, V. V.
2014-09-01
By means of 3D numerical simulations, we study the statistical properties of the fragment cloud formed during high-velocity impact of a spherical projectile on a mesh bumper. We present a quantitative description of the projectile fragmentation, and study the nature of the transition from damage to fragmentation of the projectile as the impact velocity varies. A distinctive feature of the present work is that the calculations are carried out by the smoothed particle hydrodynamics (SPH) method applied to the equations of mechanics of deformable solids (MDS). We describe the materials behavior by the Mie-Grüneisen equation of state and the Johnson-Cook model for the yield strength. The maximum principal stress spall model is used as the fracture model. It is shown that the simulation results of fragmentation based on the MDS equations by the SPH method are qualitatively consistent with the results obtained earlier on the basis of the molecular dynamics and discrete element models. It is found that the power-law distribution exponent does not depend on the energy imparted to the projectile during the high-velocity impact. At the same time, our calculations show that the critical impact velocity, the power-law exponent, and other critical exponents depend on the fracture criterion.
NASA Astrophysics Data System (ADS)
Yamamura, Hideho; Sato, Ryohei; Iwata, Yoshiharu
Global efforts toward energy conservation, the growth of data centers, and the increasing use of IT equipment are leading to a demand for reduced power consumption of equipment, and power efficiency improvement of power supply units is becoming a necessity. MOSFETs are widely used for their low ON-resistances. Power efficiency is designed using time-domain circuit simulators, except for transformer copper loss, which has a frequency dependency that is calculated separately using methods based on skin and proximity effects. As semiconductor technology reduces the ON-resistance of MOSFETs, a frequency dependency due to the skin effect or proximity effect is anticipated. In this study, the ON-resistances of MOSFETs are measured and the frequency dependency is confirmed. Power loss against a rectangular current pulse is calculated. The calculation method for transformer copper loss is expanded to MOSFETs. A frequency function for the resistance model is newly developed and parametric calculation is enabled. Acceleration of the calculation is enabled by eliminating summation terms. Using this method, it is shown that the frequency-dependent component of the measured MOSFETs increases the dissipation from 11% to 32% at a switching frequency of 100 kHz. In summary, this paper points out the importance of the frequency dependency of MOSFETs' ON-resistance, provides means of calculating its pulse losses, and improves the loss calculation accuracy of SMPSs.
Calculated power distribution of a thermionic, beryllium oxide reflected, fast-spectrum reactor
NASA Technical Reports Server (NTRS)
Mayo, W.; Lantz, E.
1973-01-01
A procedure is developed and used to calculate the detailed power distribution in the fuel elements next to a beryllium oxide reflector of a fast-spectrum, thermionic reactor. The results of the calculations show that, although the average power density in these outer fuel elements is not far from the core average, the power density at the very edge of the fuel closest to the beryllium oxide is about 1.8 times the core average.
Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz
2015-03-01
FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10 mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power.
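The kind of a priori calculation such a tool automates can be sketched with the standard normal-approximation sample-size formula for a two-group comparison of means; the SD value below is an illustrative assumption, not a number from the study:

    from scipy.stats import norm

    def n_per_group(delta, sd, power=0.8, alpha=0.05):
        # Normal-approximation sample size per group for a two-sided,
        # two-sample comparison of means
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return 2 * (sd * (z_a + z_b) / delta) ** 2

    # Detecting a 10% group difference when the between-subject SD is
    # 15% of the mean (illustrative value):
    print(n_per_group(delta=0.10, sd=0.15))   # ~35.3 -> 36 subjects per group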
Escalante, Yolanda; Saavedra, Jose M; Tella, Victor; Mansilla, Mirella; García-Hermoso, Antonio; Domínguez, Ana M
2013-04-01
The aims of this study were (a) to compare water polo game-related statistics by context (winning and losing teams) and phase (preliminary, classification, and semifinal/bronze medal/gold medal), and (b) to identify characteristics that discriminate performances for each phase. The game-related statistics of the 230 men's matches played in World Championships (2007, 2009, and 2011) and European Championships (2008 and 2010) were analyzed. Differences between contexts (winning or losing teams) in each phase (preliminary, classification, and semifinal/bronze medal/gold medal) were determined using the chi-squared statistic, also calculating the effect sizes of the differences. A discriminant analysis was then performed after the sample-splitting method according to context (winning and losing teams) in each of the 3 phases. It was found that the game-related statistics differentiate the winning from the losing teams in each phase of an international championship. The differentiating variables are both offensive and defensive, including action shots, sprints, goalkeeper-blocked shots, and goalkeeper-blocked action shots. However, the number of discriminatory variables decreases as the phase becomes more demanding and the teams become more equally matched. The discriminant analysis showed the game-related statistics to discriminate performance in all phases (preliminary, classification, and semifinal/bronze medal/gold medal) with high percentages (91, 90, and 73%, respectively). Again, the model selected both defensive and offensive variables.
Magnification Bias in Gravitational Arc Statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caminha, G. B.; Estrada, J.; Makler, M.
2013-08-29
The statistics of gravitational arcs in galaxy clusters is a powerful probe of cluster structure and may provide complementary cosmological constraints. Despite recent progress, discrepancies still remain among modelling and observations of arc abundance, especially regarding the redshift distribution of strong lensing clusters. Besides, fast "semi-analytic" methods still have to incorporate the success obtained with simulations. In this paper we discuss the contribution of the magnification in gravitational arc statistics. Although lensing conserves surface brightness, the magnification increases the signal-to-noise ratio of the arcs, enhancing their detectability. We present an approach to include this and other observational effects in semi-analytic calculations for arc statistics. The cross section for arc formation (σ) is computed through a semi-analytic method based on the ratio of the eigenvalues of the magnification tensor. Using this approach we obtained the scaling of σ with respect to the magnification, and other parameters, allowing for a fast computation of the cross section. We apply this method to evaluate the expected number of arcs per cluster using an elliptical Navarro-Frenk-White matter distribution. Our results show that the magnification has a strong effect on the arc abundance, enhancing the fraction of arcs, moving the peak of the arc fraction to higher redshifts, and softening its decrease at high redshifts. We argue that the effect of magnification should be included in arc statistics modelling and that it could help to reconcile arc statistics predictions with the observational data.
Vexler, Albert; Tanajian, Hovig; Hutson, Alan D
In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
Non-resonant multipactor--A statistical model
NASA Astrophysics Data System (ADS)
Rasch, J.; Johansson, J. F.
2012-12-01
High power microwave systems operating in vacuum or near vacuum run the risk of multipactor breakdown. In order to avoid multipactor, it is necessary to make theoretical predictions of critical parameter combinations. These treatments are generally based on the assumption of electrons moving in resonance with the electric field while traversing the gap between critical surfaces. Through comparison with experiments, it has been found that only for small system dimensions will the resonant approach give correct predictions. Apparently, the resonance is destroyed due to the statistical spread in electron emission velocity, and for a more valid description it is necessary to resort to rather complicated statistical treatments of the electron population, and extensive simulations. However, in the limit where resonance is completely destroyed it is possible to use a much simpler treatment, here called non-resonant theory. In this paper, we develop the formalism for this theory, use it to calculate universal curves for the existence of multipactor, and compare with previous results. Two important effects that lead to an increase in the multipactor threshold in comparison with the resonant prediction are identified. These are the statistical spread of impact speed, which leads to a lower average electron impact speed, and the impact of electrons in phase regions where the secondary electrons are immediately reabsorbed, leading to an effective removal of electrons from the discharge.
NASA Astrophysics Data System (ADS)
Paramonov, P. V.; Vorontsov, A. M.; Kunitsyn, V. E.
2015-10-01
Numerical modeling of optical wave propagation in atmospheric turbulence is traditionally performed using the so-called "split"-operator method, in which the influence of the propagation medium's refractive index inhomogeneities is accounted for only within a system of infinitely narrow layers (phase screens) where phase is distorted. Commonly, under certain assumptions, such phase screens are considered mutually statistically uncorrelated. However, in several important applications including laser target tracking, remote sensing, and atmospheric imaging, accurate optical field propagation modeling imposes upper limits on the interscreen spacing. The latter situation can be observed, for instance, in the presence of large-scale turbulent inhomogeneities or in deep turbulence conditions, where interscreen distances become comparable with the turbulence outer scale and, hence, the corresponding phase screens cannot be statistically uncorrelated. In this paper, we discuss correlated phase screens. The statistical characteristics of the screens are calculated based on a representation of turbulent fluctuations of the three-dimensional (3D) refractive index random field as a set of sequentially correlated 3D layers displaced in the wave propagation direction. The statistical characteristics of refractive index fluctuations are described in terms of the von Karman power spectrum density. In the representation of these 3D layers by corresponding phase screens, the geometrical optics approximation is used.
Ho, Lindsey A; Lange, Ethan M
2010-12-01
Genome-wide association (GWA) studies are a powerful approach for identifying novel genetic risk factors associated with human disease. A GWA study typically requires the inclusion of thousands of samples to have sufficient statistical power to detect single nucleotide polymorphisms that are associated with only modest increases in risk of disease given the heavy burden of a multiple test correction that is necessary to maintain valid statistical tests. Low statistical power and the high financial cost of performing a GWA study remains prohibitive for many scientific investigators anxious to perform such a study using their own samples. A number of remedies have been suggested to increase statistical power and decrease cost, including the utilization of free publicly available genotype data and multi-stage genotyping designs. Herein, we compare the statistical power and relative costs of alternative association study designs that use cases and screened controls to study designs that are based only on, or additionally include, free public control genotype data. We describe a novel replication-based two-stage study design, which uses free public control genotype data in the first stage and follow-up genotype data on case-matched controls in the second stage that preserves many of the advantages inherent when using only an epidemiologically matched set of controls. Specifically, we show that our proposed two-stage design can substantially increase statistical power and decrease cost of performing a GWA study while controlling the type-I error rate that can be inflated when using public controls due to differences in ancestry and batch genotype effects.
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that the inherent origin of the 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically, as well. The specific interest of our analysis is related with the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces spectral properties of the real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating this statistics.
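A minimal simulation sketch of a multiplicative interevent-time iteration of this general type follows; the parameter values, the reflecting bounds, and the exact drift term are illustrative assumptions, not the paper's exact specification:

    import numpy as np

    rng = np.random.default_rng(1)
    mu, gamma, sigma = 0.5, 0.001, 0.1       # multiplicativity, drift, noise
    tau_min, tau_max = 1e-3, 1.0             # reflecting bounds on interevent time
    tau, taus = 0.1, []
    for _ in range(100000):
        tau += gamma * tau**(2*mu - 1) + sigma * tau**mu * rng.normal()
        tau = min(max(tau, tau_min), tau_max)
        taus.append(tau)

    events = np.cumsum(taus)                 # event (trade) times
    # counting statistics: number of events per unit time window; for
    # suitable parameters its power spectrum shows 1/f^beta scaling
    counts, _ = np.histogram(events, bins=np.arange(0.0, events[-1], 1.0))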
Powerlaw: a Python package for analysis of heavy-tailed distributions.
Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar
2014-01-01
Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible.
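A minimal session with the package looks like the following; powerlaw.Fit, its power_law.alpha and power_law.xmin attributes, and distribution_compare are the package's documented entry points, while the synthetic Pareto sample is our own illustration:

    import numpy as np
    import powerlaw

    # synthetic heavy-tailed sample: Pareto with exponent alpha = 2.5
    data = (1.0 - np.random.rand(10000)) ** (-1.0 / 1.5)

    fit = powerlaw.Fit(data)                 # xmin chosen by KS-distance minimisation
    print(fit.power_law.alpha, fit.power_law.xmin)

    # likelihood-ratio comparison against a lognormal alternative;
    # R > 0 favours the power law, p is the significance of the sign of R
    R, p = fit.distribution_compare('power_law', 'lognormal')
    print(R, p)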
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (> or = 35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, liquid oral and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible with regular (self)-testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
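Under the linear theory described above, the OPR statistic can be sketched in a few lines; the matrices X (observation sensitivities), w (observation weights), and z (prediction sensitivities) are hypothetical placeholders, and this illustrates the underlying formula rather than reproducing the OPR-PPR code itself:

    import numpy as np

    def pred_sd(X, w, z):
        # Linear-theory prediction standard deviation (up to the common
        # error variance): sqrt(z' (X' W X)^-1 z)
        C = np.linalg.inv(X.T @ (w[:, None] * X))
        return float(np.sqrt(z @ C @ z))

    def opr_percent(X, w, z, omit):
        # Percent increase in prediction SD when the observations indexed
        # by `omit` are removed from the calibration set
        keep = np.setdiff1d(np.arange(X.shape[0]), omit)
        return 100.0 * (pred_sd(X[keep], w[keep], z) / pred_sd(X, w, z) - 1.0)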
Geronikolou, Styliani; Zimeras, Stelios; Davos, Constantinos H; Michalopoulos, Ioannis; Tsitomeneas, Stephanos
2014-01-01
The impact of electromagnetic fields on health is of increasing scientific interest. The aim of this study was to examine how the Drosophila melanogaster animal model is affected when exposed to portable or mobile phone fields. Two experiments were designed and performed under the same laboratory conditions. Insect cultures were exposed to the near field of a 2G mobile phone (GSM 2G networks operate alongside and complement 3G wideband networks; in both generations of mobile phones, voice transmission is served by the 2G technology) and of a 1880 MHz cordless phone, both digitally modulated by human voice. Statistical comparison of the egg laying of the second-generation exposed and non-exposed cultures showed limited statistical significance for the cordless-phone-exposed culture and statistical significance for the 900 MHz-exposed insects. We calculated, simulated, and illustrated in three-dimensional figures the near fields of radiation inside the experimental vials and their difference. Comparison of the power of the two fields showed that the difference between them becomes null when the experimental cylinder radius and the height of the antenna increase. Our results suggest a possible radiofrequency sensitivity difference in insects, which may be due to the distance from the antenna or to unexplored intimate factors. Comparing the near fields of the two frequency bands, we see similar but not identical geometry in length and height from the antenna, and that lower frequencies tend to drive increased radiofrequency effects.
Statistical power as a function of Cronbach alpha of instrument questionnaire items.
Heo, Moonseong; Kim, Namhee; Faith, Myles S
2015-10-14
In countless clinical trials, measurements of outcomes rely on instrument questionnaire items, which, however, often suffer from measurement error problems that in turn affect the statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume a fixed true-score variance assumption as opposed to the usual fixed total-variance assumption. That assumption is critical and practically relevant to show that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research designs or settings, in order to increase statistical power, development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation structures reflecting more real-world situations would be a valuable future study.
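The qualitative result can be sketched under the fixed true-score variance assumption, where the observed standardized effect is attenuated by a factor of sqrt(C(α)); this is a normal-approximation illustration of the monotone relationship, not the paper's closed-form power functions:

    from scipy.stats import norm

    def power_two_sample(d_true, c_alpha, n_per_group, alpha=0.05):
        # Observed standardized effect attenuates by sqrt(C(alpha)) under
        # the fixed true-score variance assumption
        d_obs = d_true * c_alpha ** 0.5
        z_crit = norm.ppf(1 - alpha / 2)
        return norm.cdf(d_obs * (n_per_group / 2) ** 0.5 - z_crit)

    for ca in (0.5, 0.7, 0.9):   # power rises monotonically with C(alpha)
        print(ca, round(power_two_sample(0.5, ca, 64), 3))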
IOL calculation using paraxial matrix optics.
Haigis, Wolfgang
2009-07-01
Matrix methods have a long tradition in paraxial physiological optics. They are especially suited to describing and handling optical systems in a simple and intuitive manner. While these methods are increasingly applied to calculate the refractive power(s) of toric intraocular lenses (IOLs), they are hardly used in routine IOL power calculations for cataract and refractive surgery, where analytical formulae are commonly utilized. Since these algorithms are also based on paraxial optics, matrix optics can offer rewarding approaches to standard IOL calculation tasks, as will be shown here. Some basic concepts of matrix optics are introduced, the system matrix for the eye is defined, and its application to typical IOL calculation problems is illustrated. Explicit expressions are derived to determine: predicted refraction for a given IOL power; necessary IOL power for a given target refraction; refractive power for a phakic IOL (PIOL); and predicted refraction for a thick lens system. Numerical examples with typical clinical values are given for each of these expressions. It is shown that matrix optics can be applied in a straightforward and intuitive way to most problems of modern routine IOL calculation, in thick or thin lens approximation, for aphakic or phakic eyes.
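To make the matrix approach concrete, the sketch below works one of the listed tasks, predicted refraction for a given IOL power, with 2x2 ray-transfer matrices for a thin-lens pseudophakic eye. All numerical values (corneal power, IOL power, ELP, axial length, indices, vertex distance) are illustrative assumptions, not data from the paper.

```python
# Hedged sketch: paraxial ray-transfer (ABCD) matrices for a thin-lens eye model,
# used to find the spectacle refraction predicted for a given IOL power.
import numpy as np

def refraction(P):            # thin element of power P (diopters)
    return np.array([[1.0, 0.0], [-P, 1.0]])

def translation(t, n):        # reduced translation: distance t (m) in index n
    return np.array([[1.0, t / n], [0.0, 1.0]])

K, P_iol = 43.5, 21.0         # corneal and IOL powers (D), assumed
elp, axl = 0.0045, 0.0235     # effective lens position and axial length (m), assumed
n_aq = n_vit = 1.336
vertex = 0.012                # spectacle vertex distance (m), assumed

# Light path: cornea -> aqueous -> IOL -> vitreous -> retina (rightmost acts first)
eye = (translation(axl - elp, n_vit) @ refraction(P_iol)
       @ translation(elp, n_aq) @ refraction(K))

# A distant object focuses on the retina when the A element of the full matrix
# (spectacle lens + vertex gap + eye) vanishes; A is linear in the spectacle power.
M0 = eye @ translation(vertex, 1.0)
P_spec = M0[0, 0] / M0[0, 1]  # since A_total = M0[0,0] - P_spec * M0[0,1]
print(f"Predicted spectacle refraction: {P_spec:+.2f} D")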
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing a high-dimensional vector H0 : θ = 0 against sparse alternatives where the null hypothesis is violated by only a few components. Existing tests based on quadratic forms, such as the Wald statistic, often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives, such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or the bootstrap to derive the null distribution, and often suffer from size distortions due to slow convergence. Based on a screening technique, we introduce a "power enhancement component", which is zero under the null hypothesis with high probability but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing factor pricing models and validating cross-sectional independence in panel data models. PMID:26778846
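A hedged sketch of the screening idea follows: a baseline quadratic (Wald-type) statistic is augmented by a component that is approximately zero under the null but diverges under sparse alternatives. The threshold and scaling below follow the general recipe described in the abstract; the paper's exact constants and studentization differ.

```python
# Hedged sketch of a power enhancement component: screen standardized estimates
# against a conservative threshold and add the surviving (squared) signals to a
# baseline quadratic statistic. Constants are illustrative, not the paper's.
import numpy as np

def enhanced_stat(theta_hat, se, n_dim, n_obs):
    z = theta_hat / se
    wald = np.sum(z ** 2)                                   # baseline quadratic form
    delta = np.sqrt(np.log(n_dim) * np.log(np.log(n_obs))) # conservative threshold
    screened = np.abs(z) > delta                            # ~never fires under the null
    j0 = np.sqrt(n_dim) * np.sum(z[screened] ** 2)          # enhancement component
    return wald + j0, int(screened.sum())

rng = np.random.default_rng(3)
theta = rng.normal(size=600) * 0.05                         # 600-dim null estimates, se=0.05
theta[:3] += 0.4                                            # sparse alternative: 3 signals
stat, k = enhanced_stat(theta, 0.05, n_dim=600, n_obs=1000)
print(f"enhanced statistic = {stat:.1f}, screened components = {k}")
```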
Design of Alpha-Voltaic Power Source Using Americium-241 (241Am) and Diamond
US Army Research Laboratory, ARL-TR-8189
2017-10-19
Energy deposition in diamond and gallium nitride (GaN) was calculated and compared. Alpha-voltaic energy converters were designed in diamond and GaN based on the energy deposition calculations. Two example device designs are calculated and compared, including a diamond device containing 2 charge collection regions (Schottky and p…).
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize filter performance in the metric of covariance realism. The chi-squared statistic is the value calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the chi-squared statistic, calculated from EKF solutions, are assessed.
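For concreteness, a minimal sketch of the covariance-realism check follows, assuming the statistic is the normalized error squared e^T P^{-1} e, which is chi-squared with k degrees of freedom when the filter covariance is realistic; the data and covariance here are synthetic, not from the Aqua/Aura EKF.

```python
# Hedged sketch: normalized error squared (NEES) as a covariance-realism metric.
# With a realistic covariance P, e^T P^{-1} e ~ chi-squared with k dof.
import numpy as np
from scipy.stats import chi2

def nees(errors, covariances):
    """errors: (T, k) prediction errors; covariances: (T, k, k) filter covariances."""
    return np.array([e @ np.linalg.solve(P, e) for e, P in zip(errors, covariances)])

rng = np.random.default_rng(0)
k, T = 3, 500
P = np.eye(k) * 4.0                               # assumed steady-state covariance
errs = rng.multivariate_normal(np.zeros(k), P, size=T)
stats = nees(errs, np.broadcast_to(P, (T, k, k)))

lo, hi = chi2.ppf([0.025, 0.975], df=k)
print(f"mean NEES {stats.mean():.2f} (target {k}); "
      f"{np.mean((stats > lo) & (stats < hi)):.1%} within 95% bounds")
```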
Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell
2014-05-01
The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling the unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
Wagner, Tyler; Irwin, Brian J.; Bence, James R.; Hayes, Daniel B.
2016-01-01
Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
Statistical Analysis of Large-Scale Structure of Universe
NASA Astrophysics Data System (ADS)
Tugay, A. V.
While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected with confidence only in the most recent work. For example, extragalactic filaments have been traced in recent years through velocity fields and the SDSS galaxy distribution. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA observations in the radio band. Until detailed observations become available for most of the volume of the Universe, integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, power spectrum, statistical moments, and peak statistics are commonly used for this purpose. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation, and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases using Fourier harmonics of hydrodynamical parameters. As a result we obtain a power-law relation for the matter power spectrum.
Green Power Equivalency Calculator - Calculations and References
Green power products are eligible to be certified by an independent third party against national standards. As a matter of best practice and consumer protection, EPA strongly encourages organizations to purchase these types of certified green power products.
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions will result in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size, and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show that covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss of power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since the required number of clusters to achieve a desired power level is then smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial. Otherwise, more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually measured, and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
NASA Astrophysics Data System (ADS)
Castaño Moraga, C. A.; Suárez Santana, E.; Sabbagh Rodríguez, I.; Nebot Medina, R.; Suárez García, S.; Rodríguez Alvarado, J.; Piernavieja Izquierdo, G.; Ruiz Alzola, J.
2010-09-01
The authorization of wind farms and the allocation of power to private investors promoting wind energy projects require planning strategies. This issue is even more important under land restrictions, as is the case in the Canary Islands, where numerous specially protected areas exist for environmental reasons and land is a scarce resource. Aware of this limitation, the Regional Government of the Canary Islands designed the requirements of a public tender to grant licenses to install new wind farms, seeking to maximize the energy produced per unit of occupied land. In this paper, we detail the methodology developed by the Canary Islands Institute of Technology (ITC, S.A.) to support the work of the technical staff of the Regional Ministry of Industry, responsible for the evaluation of a competitive tender process for awarding power licenses to private investors. The maximization of wind energy production per unit of area requires an exhaustive characterization of the wind profile. To that end, wind speed was statistically characterized by means of a Weibull probability density function, which depends mainly on two parameters: the shape parameter K, which determines the slope of the curve, and the average wind speed v, which acts as a scale parameter. These two parameters were evaluated at three different heights (40, 60, 80 m) over the whole Canarian archipelago, as was the main wind direction. These parameters are available from the public data source Wind Energy Map of the Canary Islands [1]. The proposed methodology is based on the calculation of an initially defined Energy Efficiency Basic Index (EEBI), a performance criterion that weighs the annual energy production of a wind farm per unit of area. The calculation of this index considers wind conditions, wind turbine characteristics, and the geometry of the wind turbine layout in the wind farm (position within the row and column of machines), and involves four steps: estimation of the energy produced by every wind turbine as if it were isolated from all the other machines of the wind farm, using its power curve and the statistical characterization of the wind profile at the site; estimation of energy losses due to interference caused by other wind turbines in the same row and misalignment with respect to the main wind direction; estimation of energy losses due to interference induced by wind turbines located upstream; and calculation of the EEBI as the ratio between the annual energy production and the area occupied by the wind farm, as a function of the wind speed profile and wind turbine characteristics. The computations involved above are modeled under a System Theory characterization.
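A minimal sketch of the first EEBI step, estimating the energy produced by an isolated turbine from the Weibull wind statistics and a power curve, is given below. The shape parameter, mean wind speed, and the generic power curve are illustrative assumptions, not values from the Wind Energy Map of the Canary Islands.

```python
# Hedged sketch: annual energy production of a single turbine from a Weibull
# wind-speed distribution (shape k, scale c derived from the mean wind speed).
import numpy as np
from math import gamma

def weibull_pdf(v, k, c):
    return (k / c) * (v / c) ** (k - 1) * np.exp(-((v / c) ** k))

def aep_mwh(k_shape, v_mean, curve_v, curve_kw, hours=8760.0):
    c = v_mean / gamma(1.0 + 1.0 / k_shape)       # scale parameter from mean speed
    v = np.linspace(0.0, 30.0, 3001)
    p_kw = np.interp(v, curve_v, curve_kw)        # turbine power at each speed
    f = p_kw * weibull_pdf(v, k_shape, c)
    energy_kwh = hours * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(v))  # trapezoid rule
    return energy_kwh / 1000.0

# Assumed generic 850 kW power curve (cut-in 3 m/s, rated 12 m/s; cut-out ignored)
speeds = [0.0, 3.0, 5.0, 8.0, 12.0, 14.0, 30.0]
powers = [0.0, 0.0, 100.0, 450.0, 850.0, 850.0, 850.0]
print(f"AEP ~ {aep_mwh(2.1, 7.5, speeds, powers):.0f} MWh")
```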
Eickhoff, Simon B; Nichols, Thomas E; Laird, Angela R; Hoffstaedter, Felix; Amunts, Katrin; Fox, Peter T; Bzdok, Danilo; Eickhoff, Claudia R
2016-08-15
Given the increasing number of neuroimaging publications, the automated knowledge extraction on brain-behavior associations by quantitative meta-analyses has become a highly important and rapidly growing field of research. Among several methods to perform coordinate-based neuroimaging meta-analyses, Activation Likelihood Estimation (ALE) has been widely adopted. In this paper, we addressed two pressing questions related to ALE meta-analysis: i) Which thresholding method is most appropriate to perform statistical inference? ii) Which sample size, i.e., number of experiments, is needed to perform robust meta-analyses? We provided quantitative answers to these questions by simulating more than 120,000 meta-analysis datasets using empirical parameters (i.e., number of subjects, number of reported foci, distribution of activation foci) derived from the BrainMap database. This allowed us to characterize the behavior of ALE analyses, to derive first power estimates for neuroimaging meta-analyses, and to thus formulate recommendations for future ALE studies. We could show as a first consequence that cluster-level family-wise error (FWE) correction represents the most appropriate method for statistical inference, while voxel-level FWE correction is valid but more conservative. In contrast, uncorrected inference and false-discovery rate correction should be avoided. As a second consequence, researchers should aim to include at least 20 experiments into an ALE meta-analysis to achieve sufficient power for moderate effects. We would like to note, though, that these calculations and recommendations are specific to ALE and may not be extrapolated to other approaches for (neuroimaging) meta-analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Dental plaque removal with a novel battery-powered toothbrush.
Biesbrock, Aaron R; Walters, Patricia; Bartizek, Robert D; Ruhlman, Douglas; Donly, Kevin J
2002-04-01
To compare the plaque removal efficacy of a positive control power toothbrush (Oral-B Ultra Plaque Remover) with that of an experimental power toothbrush (Crest SpinBrush) following a single use. This study was a randomized, controlled, examiner-blind, two-period crossover design that examined plaque removal with the two toothbrushes following a single use in 38 subjects who completed the study. Plaque was scored before and after brushing using the Turesky modification of the Quigley-Hein Index. Baseline plaque scores were 1.89 and 1.91 for the experimental and control toothbrush treatment groups, respectively. With respect to all surfaces examined, the experimental toothbrush delivered an adjusted (via analysis of covariance) mean difference between baseline and post-brushing plaque scores of 0.46, while the control toothbrush delivered an adjusted mean difference of 0.45. These results were not statistically significantly different (P = 0.645). A 95% one-sided upper confidence limit on the Ultra Plaque Remover minus SpinBrush difference in amount of plaque removed was calculated as 9.4% of the Ultra Plaque Remover adjusted mean. A common criterion for what is known as an "at least as good as" test is that the 95% one-sided confidence limit on the product difference be below 10% of the control product mean. By this criterion, the SpinBrush is at least as good as the Oral-B Ultra Plaque Remover. With respect to buccal and lingual surfaces, the experimental toothbrush delivered very similar results relative to the control toothbrush; these results were also not statistically significant (P > 0.564).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Rong; Li, Yongdong; Liu, Chunliang
2016-07-15
The output power fluctuations caused by the weights of macro-particles used in particle-in-cell (PIC) simulations of a backward wave oscillator and a travelling wave tube are statistically analyzed. It is found that the velocities of electrons passing through a specific slow-wave structure form a specific electron velocity distribution. The electron velocity distribution obtained in a PIC simulation with a relatively small macro-particle weight is taken as an initial distribution. By analyzing this initial distribution with a statistical method, estimates of the output power fluctuations caused by different macro-particle weights are obtained. The statistical method is verified by comparing the estimates with the simulation results. The fluctuations become stronger with increasing macro-particle weight, which can also be determined in reverse from the estimates of the output power fluctuations. With the macro-particle weights optimized by the statistical method, the output power fluctuations in PIC simulations are relatively small and acceptable.
Power Consumption and Calculation Requirement Analysis of AES for WSN IoT.
Hung, Chung-Wen; Hsu, Wen-Ting
2018-05-23
Because of the ubiquity of Internet of Things (IoT) devices, the power consumption and security of IoT systems have become very important issues. The Advanced Encryption Standard (AES) is a block cipher algorithm commonly used in IoT devices. In this paper, the power consumption and cryptographic calculation requirements for different payload lengths and AES encryption types are analyzed. These types include software-based AES-ECB, hardware-based AES-ECB (Electronic Codebook mode), and hardware-based AES-CCM (Counter with CBC-MAC mode). The calculation requirements and power consumption for these AES encryption types are measured on the Texas Instruments LAUNCHXL-CC1310 platform. The experimental results show that hardware-based AES performs better than software-based AES in terms of power consumption and calculation cycle requirements. In addition, in terms of AES mode selection, the AES-CCM-MIC64 mode may be the better choice if the IoT device must consider security, encryption calculation requirements, and low power consumption at the same time. However, if the IoT device is pursuing lower power and the payload length is generally less than 16 bytes, then AES-ECB could be considered.
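For reference, the AES-CCM-with-8-byte-MIC configuration (MIC64) can be reproduced in software with the Python cryptography package, as sketched below. This mirrors only the cipher configuration discussed above, not the CC1310 hardware engine or its measured power behavior.

```python
# Hedged sketch: AES-CCM with an 8-byte MIC ("MIC64") using the Python
# "cryptography" package; payload and nonce sizes are illustrative.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)
aesccm = AESCCM(key, tag_length=8)            # 8-byte MIC, i.e. "MIC64"
nonce = os.urandom(13)                        # CCM nonce, 7-13 bytes
payload = b"temp=23.5C"                       # short IoT payload (<16 bytes)

ciphertext = aesccm.encrypt(nonce, payload, None)
assert aesccm.decrypt(nonce, ciphertext, None) == payload
print(f"payload {len(payload)} B -> ciphertext+MIC {len(ciphertext)} B")
```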
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
Properties of different selection signature statistics and a new strategy for combining them.
Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H
2015-11-01
Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. Then, we apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
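A hedged sketch of the combination idea follows: each statistic is converted to a log-scaled p-value and down-weighted by the sum of its absolute correlations with the other statistics, so that redundant signals do not dominate. This follows the general form reported for DCMS; consult the paper for the exact definition, and note that the data below are toy values.

```python
# Hedged sketch of a DCMS-style combination of selection signature statistics.
import numpy as np

def dcms(pvals, corr):
    """pvals: (n_loci, n_stats) per-statistic p-values; corr: (n_stats, n_stats)."""
    score = np.log((1.0 - pvals) / pvals)        # large when the p-value is small
    weights = 1.0 / np.abs(corr).sum(axis=0)     # down-weight redundant statistics
    return score @ weights

rng = np.random.default_rng(1)
p = rng.uniform(1e-6, 1.0, size=(1000, 4))       # toy p-values for 4 statistics
r = np.corrcoef(rng.normal(size=(4, 200)))       # toy inter-statistic correlations
print("top candidate locus:", int(np.argmax(dcms(p, r))))
```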
Deep-Hole Neutron States with the Polarized (p,pn) Reaction.
NASA Astrophysics Data System (ADS)
Pella, Peter J.
The (p,pn) reaction with a polarized proton beam of 148.9 MeV was used to investigate neutron deep-hole states at the Indiana University Cyclotron Facility. A coplanar geometry was used, with the proton detector at 36° and the neutron detector at −36.7° with a flight path of 17.8 meters. Separation energies, triple differential cross sections, and analyzing powers were measured for CD₂, ⁹Be, BeO, ²⁸Si, ⁵⁸Ni, and ⁹⁰Zr targets. An overall energy resolution of better than 1 MeV was achieved for the heavier targets, where kinematic corrections are small; the energy resolution varied between 1 MeV and 3 MeV for the lighter targets. The analysis of the data was performed within the framework of the Distorted Wave Impulse Approximation (DWIA). The cross-section shapes are consistent with DWIA calculations, and the extracted spectroscopic factors are reasonable for targets through Si. The DWIA interpretation begins to fail for larger separation energies and heavier targets. The analyzing powers showed an out-of-phase characteristic for the different j-values of the oxygen p-states, but they did not agree with the DWIA predictions. Statistical uncertainties did not allow a detailed investigation of the analyzing power data for the other targets. This experiment determined neutron deep-hole states up to approximately 70 MeV in separation energy for a representative set of targets with neutron number N between 1 and 50. The experiment determined spectroscopic factors for "valence" (loosely bound) neutrons, where the DWIA calculations are expected to be valid, and established the regions where the DWIA approach begins to fail. The experiment also failed to demonstrate the usefulness of analyzing powers for distinguishing between j = l + 1/2 and j = l − 1/2 states, but it did establish the failure of DWIA calculations in this area. It should now be possible to study the reaction mechanism more closely by making longer runs on selected targets; in addition, it should be possible to study deep-hole states in heavier-Z targets, where comparable (p,2p) studies have run into difficulties because of Coulomb effects.
Iordache, Sevastiţa; Filip, Maria-Monalisa; Georgescu, Claudia-Valentina; Angelescu, Cristina; Ciurea, Tudorel; Săftoiu, Adrian
2012-06-01
Besides representing angiogenesis markers, microvascular density (MVD) and vascular endothelial growth factor (VEGF) are two important tools for the assessment of prognosis in patients with gastric cancer. The aim of our study was to assess the Doppler parameters (resistivity and pulsatility indexes) and the vascularity index (VI) calculated by contrast-enhanced power Doppler endoscopic ultrasound (CEPD-EUS) in correlation with the expression of intra-tumoral MVD and VEGF in patients with gastric cancer. The study included 20 consecutive patients with advanced gastric carcinoma but without distant metastasis at initial assessment. All the patients were assessed by contrast-enhanced power Doppler endoscopic ultrasound (EUS) combined with pulsed Doppler examinations in the late venous phase. The vascularity index (VI) was calculated before and after injection of a second-generation microbubble contrast-specific agent (SonoVue, 2.4 mL), used as a Doppler signal enhancer. Moreover, pulsed Doppler parameters (resistivity and pulsatility indexes) were further calculated. The correlation between power Doppler parameters and pathological/molecular parameters (MVD assessed through immunohistochemistry with CD31 and CD34, as well as VEGF assessed through real-time PCR) was assessed. Kaplan-Meier survival analysis was used for the assessment of prognosis. Statistically significant correlations were found between post-contrast VI and CD34 (p=0.0226), VEGF (p=0.0231), VEGF-A (p=0.0464) and VEGF-B (p=0.0022), while pre-contrast VI was correlated only with CD34 expression. The pulsatility index and resistivity index were not correlated with MVD or VEGF expression. Survival analysis demonstrated that VEGF-A is an accurate parameter for survival rate (p=0.045), as compared to VEGF (p=0.085) and VEGF-B (p=0.230). We did not find any correlation between the survival rate and ultrasound parameters (RI, PI, pre-contrast VI or post-contrast VI). Assessment of tumor vascularity using contrast-enhanced EUS, including analysis of spectral Doppler parameters, is possible and feasible in gastric cancer patients. A correlation between measured EUS vascularity and pathological parameters of angiogenesis (MVD and VEGF expression) was found.
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi-square statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies, in order to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi-square statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi-square statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi-square statistic.
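To make the contrast concrete, the sketch below compares a standard chi-square on a 2x2 allele-count table with an entropy contrast built from a nonlinear function of allele frequencies. The Jensen-Shannon-style form used here is illustrative of the idea, not the paper's exact statistic, and the counts are toy data.

```python
# Hedged sketch: chi-square on allele counts vs. an entropy-based contrast.
import numpy as np
from scipy.stats import chi2_contingency

def shannon(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

cases = np.array([180, 320])                     # allele counts (A, a) in cases, toy
ctrls = np.array([240, 260])                     # allele counts (A, a) in controls

chi2_stat, _, _, _ = chi2_contingency(np.vstack([cases, ctrls]))
p_case, p_ctrl = cases / cases.sum(), ctrls / ctrls.sum()
p_pool = (cases + ctrls) / (cases + ctrls).sum()
# Entropy contrast: pooled entropy minus mean within-group entropy, a nonlinear
# function of allele frequencies (Jensen-Shannon style).
entropy_stat = shannon(p_pool) - 0.5 * (shannon(p_case) + shannon(p_ctrl))
print(f"chi-square = {chi2_stat:.2f}, entropy contrast = {entropy_stat:.4f}")
```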
The Enigmatic Cornea and Intraocular Lens Calculations: The LXXIII Edward Jackson Memorial Lecture.
Koch, Douglas D
2016-11-01
To review the progress and challenges in obtaining accurate corneal power measurements for intraocular lens (IOL) calculations. Personal perspective, review of literature, case presentations, and personal data. Through literature review findings, case presentations, and data from the author's center, the types of corneal measurement errors that can occur in IOL calculation are categorized and described, along with discussion of future options to improve accuracy. Advances in IOL calculation technology and formulas have greatly increased the accuracy of IOL calculations. Recent reports suggest that over 90% of normal eyes implanted with IOLs may achieve accuracy to within 0.5 diopter (D) of the refractive target. Though errors in estimation of corneal power can cause IOL calculation errors in eyes with normal corneas, greater difficulties in measuring corneal power are encountered in eyes with diseased, scarred, and postsurgical corneas. For these corneas, problematic issues are quantifying anterior corneal power and measuring posterior corneal power and astigmatism. Results in these eyes are improving, but 2 examples illustrate current limitations: (1) spherical accuracy within 0.5 D is achieved in only 70% of eyes with post-refractive surgery corneas, and (2) astigmatism accuracy within 0.5 D is achieved in only 80% of eyes implanted with toric IOLs. Corneal power measurements are a major source of error in IOL calculations. New corneal imaging technology and IOL calculation formulas have improved outcomes and hold the promise of ongoing progress. Copyright © 2016 Elsevier Inc. All rights reserved.
Kobayashi, Y; Narazaki, K; Akagi, R; Nakagaki, K; Kawamori, N; Ohta, K
2013-09-01
For achieving accurate and safe measurements of the force and power exerted on a load during resistance exercise, the Smith machine has been used instead of free weights. However, because some Smith machines possess counterweights, the equation for the calculation of force and power in this system should be different from the one used for free weights. The purpose of this investigation was to calculate force and power using an equation derived from a dynamic equation for a Smith machine with counterweights and to determine the differences in force and power calculated using 2 different equations. One equation was established ignoring the effect of the counterweights (Method 1). The other equation was derived from a dynamic equation for a barbell and counterweight system (Method 2). 9 female collegiate judo athletes performed bench throws using a Smith machine with a counterweight at 6 different loading conditions. Barbell displacement was recorded using a linear position transducer. The force and power were subsequently calculated by Methods 1 and 2. The results showed that the mean and peak power and force in Method 1 were significantly lower relative to those of Method 2 under all loading conditions. These results indicate that the mean and peak power and force during bench throwing using a Smith machine with counterweights would be underestimated when the calculations used to determine these parameters do not account for the effect of counterweights. © Georg Thieme Verlag KG Stuttgart · New York.
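A hedged sketch of the two calculations follows, assuming Method 1 treats the system as a free weight of the net mass (bar minus counterweight) and Method 2 applies the dynamic equation of the barbell-counterweight system (counterweight moving opposite to the bar over a pulley); masses, acceleration, and velocity are illustrative.

```python
# Hedged sketch: force on the load with and without the counterweight term.
# Bar of mass m_bar accelerating upward at a; counterweight m_cw descends at a,
# so its cable tension is m_cw*(g - a).
def forces(m_bar, m_cw, accel, g=9.81):
    f1 = (m_bar - m_cw) * (g + accel)              # Method 1: net mass, no counterweight dynamics
    f2 = m_bar * (g + accel) - m_cw * (g - accel)  # Method 2: barbell-counterweight system
    return f1, f2

# Toy bench-throw data: 25 kg bar, 10 kg counterweight, 4 m/s^2 upward acceleration
f1, f2 = forces(25.0, 10.0, 4.0)
v = 1.2                                            # bar velocity (m/s)
print(f"Method 1: F = {f1:.0f} N, P = {f1 * v:.0f} W")
print(f"Method 2: F = {f2:.0f} N, P = {f2 * v:.0f} W  (Method 1 underestimates)")
```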
Kim, Mingue; Eom, Youngsub; Lee, Hwa; Suh, Young-Woo; Song, Jong Suk; Kim, Hyo Myung
2018-02-01
To evaluate the accuracy of IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio. Nine hundred twenty-eight eyes from 928 reference subjects and 158 eyes from 158 cataract patients who underwent phacoemulsification surgery were enrolled. Adjusted corneal power of cataract patients was calculated using the fictitious refractive index that was obtained from the geometric mean posterior/anterior corneal curvature radii ratio of reference subjects and adjusted anterior and predicted posterior corneal curvature radii from conventional keratometry (K) using the posterior/anterior corneal curvature radii ratio. The median absolute error (MedAE) based on the adjusted corneal power was compared with that based on conventional K in the Haigis and SRK/T formulae. The geometric mean posterior/anterior corneal curvature radii ratio was 0.808, and the fictitious refractive index of the cornea for a single Scheimpflug camera was 1.3275. The mean difference between adjusted corneal power and conventional K was 0.05 diopter (D). The MedAE based on adjusted corneal power (0.31 D in the Haigis formula and 0.32 D in the SRK/T formula) was significantly smaller than that based on conventional K (0.41 D and 0.40 D, respectively; P < 0.001 and P < 0.001, respectively). The percentage of eyes with refractive prediction error within ± 0.50 D calculated using adjusted corneal power (74.7%) was significantly greater than that obtained using conventional K (62.7%) in the Haigis formula (P = 0.029). IOL power calculation using adjusted corneal power according to the posterior/anterior corneal curvature radii ratio provided more accurate refractive outcomes than calculation using conventional K.
Canovas, Carmen; van der Mooren, Marrie; Rosén, Robert; Piers, Patricia A; Wang, Li; Koch, Douglas D; Artal, Pablo
2015-05-01
To determine the impact of the equivalent refractive index (ERI) on intraocular lens (IOL) power prediction for eyes with previous myopic laser in situ keratomileusis (LASIK) using custom ray tracing. AMO B.V., Groningen, the Netherlands, and the Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA. Retrospective data analysis. The ERI was calculated individually from the post-LASIK total corneal power. Two methods to account for the posterior corneal surface were tested; that is, calculation from pre-LASIK data or from post-LASIK data only. Four IOL power predictions were generated using a computer-based ray-tracing technique, including individual ERI results from both calculation methods, a mean ERI over the whole population, and the ERI for normal patients. For each patient, IOL power results calculated from the four predictions as well as those obtained with the Haigis-L were compared with the optimum IOL power calculated after cataract surgery. The study evaluated 25 patients. The mean and range of ERI values determined using post-LASIK data were similar to those determined from pre-LASIK data. Introducing individual or an average ERI in the ray-tracing IOL power calculation procedure resulted in mean IOL power errors that were not significantly different from zero. The ray-tracing procedure that includes an average ERI gave a greater percentage of eyes with an IOL power prediction error within ±0.5 diopter than the Haigis-L (84% versus 52%). For IOL power determination in post-LASIK patients, custom ray tracing including a modified ERI was an accurate procedure that exceeded the current standards for normal eyes. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
PHOG analysis of self-similarity in aesthetic images
NASA Astrophysics Data System (ADS)
Amirshahi, Seyed Ali; Koch, Michael; Denzler, Joachim; Redies, Christoph
2012-03-01
In recent years, there have been efforts to define the statistical properties of aesthetic photographs and artworks using computer vision techniques. However, it is still an open question how to distinguish aesthetic from non-aesthetic images with a high recognition rate, possibly because aesthetic perception is also influenced by a large number of cultural variables. Nevertheless, the search for statistical properties of aesthetic images has not been futile. For example, we have shown that the radially averaged power spectrum of monochrome artworks of Western and Eastern provenance falls off according to a power law with increasing spatial frequency (1/f² characteristics). This finding implies that this particular subset of artworks possesses a Fourier power spectrum that is self-similar across different scales of spatial resolution. Other types of aesthetic images, such as cartoons, comics and mangas, also display this type of self-similarity, as do photographs of complex natural scenes. Since the human visual system is adapted to encode images of natural scenes in a particularly efficient way, we have argued that artists imitate these statistics in their artworks. In support of this notion, we presented results showing that artists portray human faces with the self-similar Fourier statistics of complex natural scenes, although real-world photographs of faces are not self-similar. In view of these previous findings, we investigated other statistical measures of self-similarity to characterize aesthetic and non-aesthetic images. In the present work, we propose a novel measure of self-similarity that is based on the Pyramid Histogram of Oriented Gradients (PHOG). For every image, we first calculate the PHOG up to pyramid level 3. The similarity between the histogram of each section at a particular level and that of its parent section at the previous level (or the histogram at the ground level) is then calculated. The proposed approach is tested on datasets of aesthetic and non-aesthetic categories of monochrome images. The aesthetic image datasets comprise a large variety of artworks of Western provenance. Other man-made aesthetically pleasing images, such as comics, cartoons and mangas, were also studied. For comparison, a database of natural scene photographs is used, as well as datasets of photographs of plants, simple objects and faces that are in general of low aesthetic value. As expected, natural scenes exhibit the highest degree of PHOG self-similarity. Images of artworks also show high self-similarity values, followed by cartoons, comics and mangas. On average, the other (non-aesthetic) image categories are less self-similar in the PHOG analysis. A measure of scale-invariant self-similarity (PHOG) thus allows a good separation of the different aesthetic and non-aesthetic image categories. Our results provide further support for the notion that, like complex natural scenes, images of artworks display a higher degree of self-similarity across different scales of resolution than other image categories. Whether the high degree of self-similarity is the basis for the perception of beauty in both complex natural scenery and artworks remains to be investigated.
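A compact sketch of the proposed measure is given below: HOG-like orientation histograms are computed for the sections of a spatial pyramid, and each section is compared with its parent by histogram intersection. Bin count, pyramid depth, and normalization are simplifying assumptions, not the exact PHOG settings of the paper.

```python
# Hedged sketch: PHOG-style self-similarity via parent-child histogram intersection.
import numpy as np

def orientation_hist(gray, bins=16):
    gy, gx = np.gradient(gray.astype(float))
    mag, ang = np.hypot(gx, gy), np.arctan2(gy, gx) % np.pi
    h, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return h / (h.sum() + 1e-12)

def level_hists(gray, lvl, bins=16):
    n = 2 ** lvl                                   # n x n sections at this level
    rows = np.array_split(np.arange(gray.shape[0]), n)
    cols = np.array_split(np.arange(gray.shape[1]), n)
    return {(i, j): orientation_hist(gray[np.ix_(r, c)], bins)
            for i, r in enumerate(rows) for j, c in enumerate(cols)}

def phog_self_similarity(gray, levels=3):
    hists = [level_hists(gray, l) for l in range(levels + 1)]
    sims = [np.minimum(h, hists[l - 1][(i // 2, j // 2)]).sum()  # histogram intersection
            for l in range(1, levels + 1) for (i, j), h in hists[l].items()]
    return float(np.mean(sims))

rng = np.random.default_rng(4)
print(f"self-similarity (test image): {phog_self_similarity(rng.random((256, 256))):.3f}")
```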
The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles
ERIC Educational Resources Information Center
Schimmack, Ulrich
2012-01-01
Cohen (1962) pointed out the importance of statistical power for psychology as a science, but statistical power of studies has not increased, while the number of studies in a single article has increased. It has been overlooked that multiple studies with modest power have a high probability of producing nonsignificant results because power…
Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models
ERIC Educational Resources Information Center
Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles
2012-01-01
This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
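As a hedged illustration, the sketch below computes normal-approximation power for a one-sided Wald test of prevalence under Warner's model, in which the question direction is randomized with design probability p; the article's exact curves may use refinements, and the parameter values are illustrative.

```python
# Hedged sketch: Wald-test power for a small prevalence pi under Warner's
# randomized response model with design probability p (p != 0.5).
from scipy.stats import norm

def warner_power(pi, pi0, p, n, alpha=0.05):
    """One-sided Wald test of H0: prevalence = pi0 against pi > pi0."""
    lam = lambda x: p * x + (1.0 - p) * (1.0 - x)   # P(answer = "yes")
    se = lambda x: (lam(x) * (1.0 - lam(x)) / (n * (2.0 * p - 1.0) ** 2)) ** 0.5
    z = (pi - pi0) / se(pi)                         # drift under the alternative
    return 1.0 - norm.cdf(norm.ppf(1.0 - alpha) - z)

for n in (500, 1000, 2000):
    print(f"n={n:4d}: power = {warner_power(0.10, 0.05, p=0.7, n=n):.3f}")
```

Even for this moderate prevalence difference, the randomization noise keeps power low unless the sample is large, which is the practical concern the article addresses.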
Notes on numerical reliability of several statistical analysis programs
Landwehr, J.M.; Tasker, Gary D.
1999-01-01
This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
Typical calculation and analysis of carbon emissions in thermal power plants
NASA Astrophysics Data System (ADS)
Gai, Zhi-jie; Zhao, Jian-gang; Zhang, Gang
2018-03-01
On December 19, 2017, the National Development and Reform Commission issued the national carbon emissions trading market construction plan (power generation industry), which officially launched the construction of the carbon emissions trading market. The plan promotes carbon market construction in phases, taking the power industry, with its large carbon footprint, as a breakthrough point, so it is extremely urgent for power generation plants to master their carbon emissions. Taking a coal-fired power plant as an example, this paper introduces the calculation process for carbon emissions and derives the fuel activity level, fuel emission factor, and carbon emissions data of the plant. Power plants can determine their carbon emissions following this paper, build knowledge of their carbon inventories, and become familiar with the calculation method underlying the power industry's carbon emissions data, which can help power plants position themselves accurately in the upcoming carbon emissions trading market.
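A minimal sketch of the standard fuel-combustion calculation follows: carbon emissions are the fuel activity level (fuel burned times net calorific value) multiplied by an emission factor built from carbon content and oxidation rate, converted to CO2 by the molar ratio 44/12. The coal parameters are illustrative defaults, not the example plant's data.

```python
# Hedged sketch: CO2 from coal combustion via activity level x emission factor.
def co2_emissions_t(fuel_t, ncv_gj_per_t, carbon_tc_per_gj, oxidation=0.98):
    activity_gj = fuel_t * ncv_gj_per_t          # fuel activity level (GJ)
    carbon_t = activity_gj * carbon_tc_per_gj    # carbon contained in the fuel (t C)
    return carbon_t * oxidation * 44.0 / 12.0    # oxidized carbon -> CO2 (t)

# 1 Mt of bituminous coal, NCV 20.9 GJ/t, carbon content 0.0261 tC/GJ (assumed)
print(f"{co2_emissions_t(1_000_000, 20.9, 0.0261):,.0f} t CO2")
```

With these assumed parameters the plant emits roughly 2 t of CO2 per tonne of coal burned, which is the order of magnitude typically reported for coal-fired generation.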
Transitioning of power flow in beam models with bends
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.
1990-01-01
The propagation of power flow through a dynamically loaded beam model with 90 degree bends is investigated using NASTRAN and McPOW. The transitioning of power flow types (axial, torsional, and flexural) is observed throughout the structure. To get accurate calculations of the torsional response of beams using NASTRAN, torsional inertia effects had to be added to the mass matrix calculation section of the program. Also, mass effects were included in the calculation of BAR forces to improve the continuity of power flow between elements. The importance of including all types of power flow in an analysis, rather than only flexural power, is indicated by the example. Trying to interpret power flow results that only consider flexural components in even a moderately complex problem will result in incorrect conclusions concerning the total power flow field.
2013-01-01
Background: Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods: Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results: The statistical significance of the RV increased as the magnitude of the denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions: The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
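A hedged sketch of the bootstrap procedure follows: patients are resampled with replacement (keeping each patient's comparator and reference scores paired), the two ANOVA F-statistics are recomputed across clinical groups, and a percentile interval is taken for the F-ratio. The group structure and data below are toy values, not the CKD dataset.

```python
# Hedged sketch: percentile bootstrap CI for relative validity (RV), the ratio
# of ANOVA F-statistics of a comparator and a reference PRO measure.
import numpy as np
from scipy.stats import f_oneway

def rv_ci(groups, n_boot=1000, seed=0):
    """groups: list of (n_i, 2) arrays; columns = (comparator, reference) scores
    for patients in one clinically-defined group."""
    rng = np.random.default_rng(seed)
    rvs = []
    for _ in range(n_boot):
        boot = [g[rng.integers(0, len(g), len(g))] for g in groups]  # paired resample
        f_comp = f_oneway(*[g[:, 0] for g in boot]).statistic
        f_ref = f_oneway(*[g[:, 1] for g in boot]).statistic
        rvs.append(f_comp / f_ref)
    return np.percentile(rvs, [2.5, 97.5])

rng = np.random.default_rng(1)
toy = [np.column_stack([rng.normal(m, 1, 80), rng.normal(1.5 * m, 1, 80)])
       for m in (0.0, 0.5, 1.0)]                 # three toy severity groups
print("95% bootstrap CI for RV:", rv_ci(toy))
```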
Development of My Footprint Calculator
NASA Astrophysics Data System (ADS)
Mummidisetti, Karthik
The environmental footprint is a very powerful tool that helps an individual understand how their everyday activities impact their environmental surroundings. Data show that global climate change, which is a growing concern for nations all over the world, is already affecting humankind, plants and animals through rising ocean levels, droughts and desertification, and changing weather patterns. In addition to a wide range of policy measures implemented by national and state governments, it is necessary for individuals to understand the impact that their lifestyle may have on their personal environmental footprint, and thus on global climate change. "My Footprint Calculator" (myfootprintcalculator.com) has been designed to be one of the simplest, yet most comprehensive, web tools to help individuals calculate and understand their personal environmental impact. "My Footprint Calculator" is a website that queries users about their everyday habits and activities and calculates their personal impact on the environment. This website was re-designed to help users determine their environmental impact in various aspects of their lives, ranging from transportation and recycling habits to water and energy usage, with the addition of new features that allow users to share their experiences and best practices with other users interested in reducing their personal environmental footprint. The collected data are stored in a database, and a future goal of this work is to analyze the collected data from all users (anonymously) to develop relevant trends and statistics.
NASA Astrophysics Data System (ADS)
Kartashev, A. L.; Vaulin, S. D.; Kartasheva, M. A.; Martynov, A. A.; Safonov, E. V.
2016-06-01
This article presents information about the main distinguishing features of microturbine power plants. A justification for the use of a Francis turbine in microturbine power plants with a rated power of 100 kW is given. Initial analytical engineering calculations of the turbine (without computational fluid dynamics), using appropriate calculation methods, are considered. A parametric study of the nozzle blade and the whole turbine stage using ANSYS CFX is described. The calculations determined the optimal geometry by the criterion of maximizing efficiency at a given total pressure ratio. The calculation results are presented in graphical form, as are the velocity and pressure fields in the interblade channels of the nozzle unit and the impeller.
Detailed Uncertainty Analysis of the ZEM-3 Measurement System
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The measurements of Seebeck coefficient and electrical resistivity are critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood to report ZT values which are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multimeter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multimeter uncertainty, and, most importantly, the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows quantification of the phenomenon and provides an estimate of the uncertainty of the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of approximately 9-14% at high temperature and 9% near room temperature.
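For the power factor PF = S²/ρ, first-order propagation combines the Seebeck and resistivity uncertainties as sketched below; the input uncertainty levels are illustrative, not the ZEM-3 figures.

```python
# Hedged sketch: first-order uncertainty propagation into the power factor.
# PF = S^2 / rho  ->  dPF/PF = sqrt((2 dS/S)^2 + (drho/rho)^2)
from math import sqrt

def pf_rel_uncertainty(rel_dS, rel_drho):
    return sqrt((2.0 * rel_dS) ** 2 + rel_drho ** 2)

print(f"room T : {pf_rel_uncertainty(0.04, 0.02):.1%}")   # assumed 4% / 2% inputs
print(f"high T : {pf_rel_uncertainty(0.06, 0.04):.1%}")   # assumed 6% / 4% inputs
```

Note that the Seebeck term enters with a factor of 2 because the power factor depends on S squared, which is why the Seebeck uncertainty (and hence the cold-finger effect) dominates the power factor error budget.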
Scenario based optimization of a container vessel with respect to its projected operating conditions
NASA Astrophysics Data System (ADS)
Wagner, Jonas; Binkowski, Eva; Bronsart, Robert
2014-06-01
In this paper the scenario-based optimization of the bulbous bow of the KRISO Container Ship (KCS) is presented. The optimization of the parametrically modeled vessel is based on a statistically developed operational profile generated from noon-to-noon reports of a comparable 3600 TEU container vessel and on development functions representing the growth of the global economy during the vessel's service time. In order to account for uncertainties, statistical fluctuations are added. An analysis of these data leads to a number of most probable upcoming operating conditions (OCs) in which the vessel will operate in the future. According to their respective likelihoods, an objective function for the evaluation of the optimal design variant of the vessel is derived and implemented within the parametric optimization workbench FRIENDSHIP Framework. This evaluation is then done with respect to the vessel's calculated effective power, based on the use of a potential flow code. The evaluation shows that the use of scenarios within the optimization process has a strong influence on the hull form.
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverses of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor that reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
NASA Astrophysics Data System (ADS)
Pagano, E. V.; Acosta, L.; Auditore, L.; Cap, T.; Cardella, G.; Colonna, M.; De Filippo, E.; Geraci, E.; Gnoffo, B.; Lanzalone, G.; Maiolino, C.; Martorana, N.; Pagano, A.; Papa, M.; Piasecki, E.; Pirrone, S.; Politi, G.; Porto, F.; Quattrocchi, L.; Rizzo, F.; Russotto, P.; Trifiro’, A.; Trimarchi, M.; Siwek-Wilczynska, K.
2018-05-01
In nuclear reactions at Fermi energies, two-particle and multi-particle intensity interferometry (correlation) methods are powerful tools for pinning down the characteristic time scales of emission processes. In this paper we summarize an improved application of the fragment-fragment correlation function to the specific physics case of the binary massive splitting of a heavy projectile-like fragment (PLF) into two intermediate mass fragments (IMFs). Results are shown for the reverse-kinematics reaction 124Sn + 64Ni at 35 AMeV, investigated using the forward part of the CHIMERA multi-detector. The analysis was performed as a function of the charge asymmetry of the observed IMF pairs. We show a coexistence of dynamical and statistical components as a function of the charge asymmetry. Transport CoMD simulations are compared with the data in order to pin down the timescale of fragment production and the relevant ingredients of the in-medium effective interaction used in the transport calculations.
Karzmark, Peter; Deutsch, Gayle K
2018-01-01
This investigation was designed to determine the predictive accuracy of a comprehensive neuropsychological test battery and a brief neuropsychological test battery with regard to the capacity to perform instrumental activities of daily living (IADLs). Accuracy statistics, including measures of sensitivity, specificity, positive and negative predictive power, and the positive likelihood ratio, were calculated for both types of batteries. The sample was drawn from a general neurological group of adults (n = 117) that included a number of older participants (age > 55; n = 38). Standardized neuropsychological assessments were administered to all participants and comprised the Halstead-Reitan Battery and portions of the Wechsler Adult Intelligence Scale-III. The comprehensive test battery yielded a moderate increase over base rate in predictive accuracy that generalized to older individuals. There was only limited support for using a brief battery: although sensitivity was high, specificity was low. We found that a comprehensive neuropsychological test battery provided good classification accuracy for predicting IADL capacity.
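For reference, the accuracy statistics named above follow directly from a 2x2 classification table (battery prediction versus observed IADL capacity), as sketched below with illustrative counts.

```python
# Hedged sketch: accuracy statistics from a 2x2 table; counts are toy values.
def accuracy_stats(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "PPV": tp / (tp + fp),          # positive predictive power
        "NPV": tn / (tn + fn),          # negative predictive power
        "LR+": sens / (1.0 - spec),     # positive likelihood ratio
    }

for name, value in accuracy_stats(tp=42, fp=11, fn=9, tn=55).items():
    print(f"{name:12s} {value:.2f}")
```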
de Jong, Maarten; Chen, Wei; Notestine, Randy; Persson, Kristin; Ceder, Gerbrand; Jain, Anubhav; Asta, Mark; Gamst, Anthony
2016-10-03
Materials scientists increasingly employ machine or statistical learning (SL) techniques to accelerate materials discovery and design. Such pursuits benefit from pooling training data across, and thus being able to generalize predictions over, k-nary compounds of diverse chemistries and structures. This work presents a SL framework that addresses challenges in materials science applications, where datasets are diverse but of modest size, and extreme values are often of interest. Our advances include the application of power or Hölder means to construct descriptors that generalize over chemistry and crystal structure, and the incorporation of multivariate local regression within a gradient boosting framework. The approach is demonstrated by developing SL models to predict bulk and shear moduli (K and G, respectively) for polycrystalline inorganic compounds, using 1,940 compounds from a growing database of calculated elastic moduli for metals, semiconductors and insulators. The usefulness of the models is illustrated by screening for superhard materials.
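A minimal sketch of the Hölder (power) mean descriptors follows: per-element property values of a compound are pooled into one scalar per exponent p, weighted by stoichiometry, yielding features that generalize across chemistries. The property values and stoichiometry are toy examples.

```python
# Hedged sketch: Hoelder (power) means as chemistry-agnostic descriptors.
import numpy as np

def holder_mean(x, p, w=None):
    x = np.asarray(x, float)
    w = np.ones(len(x)) if w is None else np.asarray(w, float)
    w = w / w.sum()
    if p == 0:                                   # geometric mean (limit p -> 0)
        return float(np.exp(np.sum(w * np.log(x))))
    return float(np.sum(w * x ** p) ** (1.0 / p))

melt_points = [933.0, 2348.0]                    # elemental melting points (K), toy
weights = [2, 3]                                 # stoichiometry of an A2B3 compound
for p in (-4, -1, 0, 1, 2, 4):
    print(f"p={p:+d}: {holder_mean(melt_points, p, weights):8.1f}")
```

Sweeping the exponent p interpolates between min-like and max-like summaries of the same elemental property, which is what lets a single family of descriptors capture very different structure-property relationships.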
No shortcut solution to the problem of Y-STR match probability calculation.
Caliebe, Amke; Jochens, Arne; Willuweit, Sascha; Roewer, Lutz; Krawczak, Michael
2015-03-01
Match probability calculation is deemed much more intricate for lineage genetic markers, including Y-chromosomal short tandem repeats (Y-STRs), than for autosomal markers. This is because, owing to the lack of recombination, strong interdependence between markers is likely, which implies that haplotype frequency estimates cannot simply be obtained through the multiplication of allele frequency estimates. As yet, however, the practical relevance of this problem has not been studied in much detail using real data. In fact, such scrutiny appears well warranted because the high mutation rates of Y-STRs and the possibility of backward mutation should have worked against the statistical association of Y-STRs. We examined haplotype data of 21 markers included in the PowerPlex® Y23 set (PPY23, Promega Corporation, Madison, WI) originating from six different populations (four European and two Asian). Assessing the conditional entropies of the markers, given different subsets of markers from the same panel, we demonstrate that the PowerPlex® Y23 set cannot be decomposed into smaller marker subsets that would be (conditionally) independent. Nevertheless, in all six populations, >94% of the joint entropy of the 21 markers is explained by the seven most rapidly mutating markers. Although this result might render a reduction in marker number a sensible option for practical casework, the partial haplotypes would still be almost as diverse as the full haplotypes. Therefore, match probability calculation remains difficult and calls for the improvement of currently available methods of haplotype frequency estimation.
Global Tropospheric Noise Maps for InSAR Observations
NASA Astrophysics Data System (ADS)
Yun, S. H.; Hensley, S.; Agram, P. S.; Chaubell, M.; Fielding, E. J.; Pan, L.
2014-12-01
The differential phase delay variation of radio waves through the troposphere is the largest error source in Interferometric Synthetic Aperture Radar (InSAR) measurements, and water vapor variability in the troposphere is known to be the dominant factor. We use the precipitable water vapor (PWV) products from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) sensors mounted on the Terra and Aqua satellites to produce tropospheric noise maps for InSAR. We estimate the slope and y-intercept of the power spectral density curve of MODIS PWV and calculate the structure function to estimate the expected tropospheric noise level as a function of distance. The results serve two purposes: 1) to provide guidance on the expected covariance matrix for geophysical modeling, and 2) to provide a quantitative basis for the science Level-1 requirements of the planned NASA-ISRO L-band SAR mission (NISAR). We populate lookup tables of such power spectrum parameters derived from each 1-by-1 degree tile of global coverage. The MODIS data were retrieved from the OSCAR (Online Services for Correcting Atmosphere in Radar) server. Users will be able to use the lookup tables to calculate the expected tropospheric noise level for any date of MODIS data at any distance scale. Such calculations can be used to construct the covariance matrix for geophysical modeling, or to build statistics to support InSAR missions' requirements. For example, about 74% of the world had an InSAR tropospheric noise level (along a radar line-of-sight for an incidence angle of 40 degrees) of 2 cm or less at the 50 km distance scale during the period 2010/01/01 - 2010/01/09.
Simulation of laser beam reflection at the sea surface modeling and validation
NASA Astrophysics Data System (ADS)
Schwenger, Frédéric; Repasi, Endre
2013-06-01
A 3D simulation of the reflection of a Gaussian-shaped laser beam on the dynamic sea surface is presented. The simulation is suitable for the pre-calculation of images for cameras operating in different spectral wavebands (visible, short-wave infrared) for a bistatic configuration of laser source and receiver under different atmospheric conditions. In the visible waveband, the calculated detected total power of reflected laser light from a 660 nm laser source is compared with data collected in a field trial. Our computer simulation comprises the 3D simulation of a maritime scene (open sea/clear sky) and the simulation of the laser beam reflected at the sea surface. The basic sea surface geometry is modeled by a composition of smooth wind-driven gravity waves. To predict the view of a camera, the sea surface radiance must be calculated for the specific waveband. Additionally, the radiances of laser light specularly reflected at the wind-roughened sea surface are modeled using an analytical statistical sea surface BRDF (bidirectional reflectance distribution function). Validation of simulation results is a prerequisite before applying the computer simulation to maritime laser applications. For validation purposes, data (images and meteorological data) were selected from field measurements, using a 660 nm cw-laser diode to produce laser beam reflections at the water surface and recording images with a TV camera. The validation is done by numerical comparison of the measured total laser power extracted from recorded images with the corresponding simulation results. The results of the comparison are presented for different incident (zenith/azimuth) angles of the laser beam.
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.
Desai, Trunil S; Srivastava, Shireesh
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates that can be used in strain analysis and design. Processing and analysis of labeling data for the calculation of fluxes and associated statistics is an essential part of MFA. However, various software packages currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models, a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model, were found to be in good agreement with those calculated by previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software package that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
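The Monte-Carlo uncertainty estimation mentioned above can be sketched generically: perturb the measured labeling data with noise, refit the fluxes, and take the standard deviation across replicates. The toy linear "fit" below merely stands in for FluxPyt's EMU-based least-squares solver; the matrix, noise level, and measurements are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_fluxes(measurements):
    # Stand-in for FluxPyt's EMU-based least-squares fit: a toy linear model
    # A @ fluxes ~= measurements (A and the data below are invented).
    A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
    flux, *_ = np.linalg.lstsq(A, measurements, rcond=None)
    return flux

base = np.array([1.2, 0.9, 1.0])     # "measured" labeling data (illustrative)
sigma = 0.02                         # assumed measurement noise
fits = np.array([fit_fluxes(base + rng.normal(0.0, sigma, base.shape))
                 for _ in range(1000)])
print("flux standard deviations:", fits.std(axis=0))
```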
A New Approach to Monte Carlo Simulations in Statistical Physics
NASA Astrophysics Data System (ADS)
Landau, David P.
2002-08-01
Monte Carlo simulations [1] have become a powerful tool for the study of diverse problems in statistical/condensed matter physics. Standard methods sample the probability distribution for the states of the system, most often in the canonical ensemble, and over the past several decades enormous improvements have been made in performance. Nonetheless, difficulties arise near phase transitions, due to critical slowing down near 2nd order transitions and to metastability near 1st order transitions, and these complications limit the applicability of the method. We shall describe a new Monte Carlo approach [2] that uses a random walk in energy space to determine the density of states directly. Once the density of states is known, all thermodynamic properties can be calculated. This approach can be extended to multi-dimensional parameter spaces and should be effective for systems with complex energy landscapes, e.g., spin glasses, protein folding models, etc. Generalizations should produce a broadly applicable optimization tool. 1. A Guide to Monte Carlo Simulations in Statistical Physics, D. P. Landau and K. Binder (Cambridge U. Press, Cambridge, 2000). 2. Fugao Wang and D. P. Landau, Phys. Rev. Lett. 86, 2050 (2001); Phys. Rev. E 64, 056101 (2001).
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer-aided techniques are necessary, particularly texture analysis techniques that classify various lung tissue types. Second- and higher-order statistics, which relate the spatial variation of the intensity values, are good discriminatory features for various textures. The intensity values in lung CT scans range over [-1024, 1024]. Calculation of second-order statistics on this full range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray-level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm, as sketched below. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second- and higher-order statistics for more accurate quantification of diffuse lung disease.
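One plausible dynamic-programming formulation of the nonlinear binning step, sketched in Python: partition the gray-level histogram into k contiguous bins so that the count-weighted within-bin variance is minimized. The cost function and the toy histogram are assumptions; the paper's exact DP objective may differ.

```python
import numpy as np

def optimal_gray_bins(counts, levels, k):
    """Partition a gray-level histogram into k contiguous bins that minimize
    the total count-weighted within-bin variance (O(k*n^2) dynamic program)."""
    w = np.asarray(counts, dtype=float)
    x = np.asarray(levels, dtype=float)
    n = len(x)
    # Prefix sums let each within-bin sum-of-squared-error query run in O(1)
    cw = np.concatenate(([0.0], np.cumsum(w)))
    cwx = np.concatenate(([0.0], np.cumsum(w * x)))
    cwx2 = np.concatenate(([0.0], np.cumsum(w * x * x)))

    def sse(i, j):  # weighted SSE of levels[i:j] around their weighted mean
        W = cw[j] - cw[i]
        if W == 0.0:
            return 0.0
        S = cwx[j] - cwx[i]
        return (cwx2[j] - cwx2[i]) - S * S / W

    cost = np.full((k + 1, n + 1), np.inf)
    cost[0, 0] = 0.0
    cut = np.zeros((k + 1, n + 1), dtype=int)
    for b in range(1, k + 1):
        for j in range(b, n + 1):
            for i in range(b - 1, j):
                c = cost[b - 1, i] + sse(i, j)
                if c < cost[b, j]:
                    cost[b, j], cut[b, j] = c, i

    edges, j = [n], n          # recover the k bin boundaries by backtracking
    for b in range(k, 0, -1):
        j = int(cut[b, j])
        edges.append(j)
    return edges[::-1]

# Toy CT-like histogram on 256 coarse levels spanning [-1024, 1024]
levels = np.linspace(-1024, 1024, 256)
counts = 1e4 * np.exp(-0.5 * ((levels + 800) / 120.0) ** 2) + 10.0
print(optimal_gray_bins(counts, levels, k=16))
```

The returned boundary indices crowd together where the histogram mass is concentrated, which is exactly the behavior that improves the discriminatory power of the co-occurrence statistics.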
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
NASA Technical Reports Server (NTRS)
Helgason, K.; Cappelluti, N.; Hasinger, G.; Kashlinsky, A.; Ricotti, M.
2014-01-01
A spatial clustering signal has been established in Spitzer/IRAC measurements of the unresolved cosmic near-infrared background (CIB) out to large angular scales, approx. 1 deg. This CIB signal, while significantly exceeding the contribution from the remaining known galaxies, was further found to be coherent at a highly statistically significant level with the unresolved soft cosmic X-ray background (CXB). This measurement probes the unresolved CXB to very faint source levels using deep near-IR source subtraction. We study contributions from extragalactic populations at low to intermediate redshifts to the measured positive cross-power signal of the CIB fluctuations with the CXB. We model the X-ray emission from active galactic nuclei (AGNs), normal galaxies, and hot gas residing in virialized structures, calculating their CXB contribution including their spatial coherence with all infrared-emitting counterparts. We use a halo model framework to calculate the auto- and cross-power spectra of the unresolved fluctuations based on the latest constraints on the halo occupation distribution and the biasing of AGNs, galaxies, and diffuse emission. At small angular scales (approx. 1 arcmin), the 4.5 micron versus 0.5-2 keV coherence can be explained by shot noise from galaxies and AGNs. However, at large angular scales (approx. 10 arcmin), we find that the net contribution from the modeled populations is only able to account for approx. 3% of the measured CIB×CXB cross-power. The discrepancy suggests that the CIB×CXB signal originates from the same unknown source population producing the CIB clustering signal out to approx. 1 deg.
The effect of different calculation methods of flywheel parameters on the Wingate Anaerobic Test.
Coleman, S G; Hale, T
1998-08-01
Researchers compared different methods of calculating the kinetic parameters of friction-braked cycle ergometers and the subsequent effects on calculating power outputs in the Wingate Anaerobic Test (WAnT). Three methods of determining flywheel moment of inertia and frictional torque were investigated, requiring "run-down" tests and segmental geometry. The parameters were used to calculate corrected power outputs from 10 males in a 30-s WAnT against a load related to body mass (0.075 kg·kg⁻¹). The Wingate indices of maximum (5 s) power, work, and fatigue index were also compared. Significant differences were found between uncorrected and corrected power outputs and between correction methods (p < .05). The same finding was evident for all Wingate indices (p < .05). The results suggest that WAnT outputs must be corrected to give true power outputs and that choosing an appropriate correction calculation is important. Determining flywheel moment of inertia and frictional torque using unloaded run-down tests is recommended.
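A hedged sketch of the inertia-corrected power calculation described above: the corrected flywheel power adds the inertial term I·ω·dω/dt to the frictional term, written here as torque times angular velocity. The speed profile, moment of inertia, and frictional torque values are invented for illustration, not taken from the study.

```python
import numpy as np

def corrected_power(omega, t, inertia, friction_torque):
    """Flywheel power including the inertial term that uncorrected WAnT
    calculations ignore: P(t) = (T_friction + I * d(omega)/dt) * omega."""
    alpha = np.gradient(omega, t)                 # angular acceleration, rad/s^2
    return (friction_torque + inertia * alpha) * omega

t = np.linspace(0.0, 30.0, 301)                   # 30-s test sampled at 10 Hz
omega = 60.0 * np.exp(-((t - 5.0) / 18.0) ** 2) + 20.0  # invented speed profile, rad/s
I, T_f = 0.8, 10.0    # kg*m^2 and N*m, e.g. from an unloaded run-down test (assumed)
P = corrected_power(omega, t, I, T_f)
peak_5s = np.convolve(P, np.ones(50) / 50.0, mode="valid").max()
print("peak 5-s power: %.0f W" % peak_5s)
```

During the acceleration phase the inertial term is positive, so the uncorrected calculation underestimates true power, which is the effect the paper quantifies.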
NASA Astrophysics Data System (ADS)
Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.
2016-08-01
As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradual-approximation method is proposed as a fault diagnosis algorithm to determine and locate faults in PV panels. The PV array is divided into 16x16 blocks and numbered. On the basis of this modular processing of the PV array, the current values of each block are analyzed. The mean current value of each block is used to calculate the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered to reduce the probability of misjudgment. A fault diagnosis system is designed and implemented with LabVIEW, with functions including real-time data display, online checking, statistics, real-time prediction, and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, verifying the validity and practicality of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
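The block-level fault screening can be illustrated in a few lines of Python; the weight-factor definition and threshold below are simple stand-ins for the paper's global partitioned gradual-approximation method, which is not fully specified in the abstract.

```python
import numpy as np

def flag_fault_blocks(block_currents, k=0.8):
    """Flag blocks whose mean current falls below k times the array-wide mean.
    The weight definition and threshold k are stand-ins, not the paper's exact rule."""
    weight = block_currents / block_currents.mean()   # simple fault weight factor
    return np.argwhere(weight < k)

rng = np.random.default_rng(1)
blocks = rng.normal(8.0, 0.2, size=(16, 16))   # healthy block currents ~8 A (assumed)
blocks[3, 7] = 4.5                             # injected fault
print(flag_fault_blocks(blocks))               # -> [[3 7]]
```

A shading check would compare neighboring blocks over time before declaring a fault, since shading depresses currents in contiguous regions rather than isolated blocks.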
El-Sayed, Adly H; Aly, A A; EI-Sayed, N I; Mekawy, M M; EI-Gendy, A A
2007-03-01
A high-quality heating device made of a ferromagnetic alloy (thermal seed) was developed for hyperthermia treatment of cancer. The device generates sufficient heat at room temperature and stops heating at the Curie temperature Tc. The power dissipated from each seed was calculated from the area enclosed by the hysteresis loop. A new mathematical formula for the calculation of heating power was derived and showed good agreement with values calculated from the hysteresis loop and the calorimetric method.
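The loop-area calculation lends itself to a direct numerical sketch: the energy dissipated per cycle and unit volume equals the enclosed B-H loop area, the closed-path integral of H dB, so the heating power is frequency × volume × area. The elliptical loop below is synthetic, not measured data.

```python
import numpy as np
from scipy.integrate import trapezoid

def hysteresis_power(H, B, frequency, volume):
    """Heating power of one seed: f * V * (area of the B-H loop), where the
    loop area equals the closed-path integral of H dB (J/m^3 per cycle)."""
    return frequency * volume * abs(trapezoid(H, B))

# Synthetic elliptical loop traced once (illustrative, not measured data)
theta = np.linspace(0.0, 2.0 * np.pi, 2000)
H = 4000.0 * np.cos(theta)                        # A/m
B = 0.30 * np.sin(theta) + 0.25 * np.cos(theta)   # T
print("%.3f W" % hysteresis_power(H, B, frequency=1.0e5, volume=1.0e-9))
```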
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
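The variance-to-mean power relationship can be fitted by ordinary least squares in log-log space. In the sketch below, gamma-distributed interspike intervals (for which var = mean²/shape, i.e., exponent 2) serve as synthetic data; this is an illustration of the relationship, not the paper's maximum likelihood inference method.

```python
import numpy as np

def fit_power_law(means, variances):
    """Fit var = phi * mean**alpha by least squares in log-log space."""
    alpha, log_phi = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_phi), alpha

# Gamma-distributed ISIs (var = mean^2 / shape) as synthetic spike data
rng = np.random.default_rng(2)
means, variances = [], []
for rate in (5.0, 10.0, 20.0, 40.0):
    isi = rng.gamma(shape=2.0, scale=1.0 / (2.0 * rate), size=5000)
    means.append(isi.mean())
    variances.append(isi.var())
phi, alpha = fit_power_law(np.array(means), np.array(variances))
print("phi=%.3f  alpha=%.3f" % (phi, alpha))   # expect roughly 0.5 and 2
```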
New insights into faster computation of uncertainties
NASA Astrophysics Data System (ADS)
Bhattacharya, Atreyee
2012-11-01
Heavy computation power, lengthy simulations, and an exhaustive number of model runs—often these seem like the only statistical tools that scientists have at their disposal when computing uncertainties associated with predictions, particularly in cases of environmental processes such as groundwater movement. However, calculation of uncertainties need not be as lengthy, a new study shows. Comparing two approaches—the classical Bayesian “credible interval” and a less commonly used regression-based “confidence interval” method—Lu et al. show that for many practical purposes both methods provide similar estimates of uncertainties. The advantage of the regression method is that it demands 10-1000 model runs, whereas the classical Bayesian approach requires 10,000 to millions of model runs.
NASA Astrophysics Data System (ADS)
Jin, Yang; Ciwei, Gao; Jing, Zhang; Min, Sun; Jie, Yu
2017-05-01
The selection and evaluation of priority domains in Global Energy Internet standard development will help to break through the limits of national investment, so that priority can be given to standardizing the technical areas of highest urgency and feasibility. Therefore, in this paper, a Delphi survey process based on technology foresight is put forward, an evaluation index system for priority domains is established, and the index calculation method is determined. Statistical methods are then used to evaluate the alternative domains. Finally, the top four priority domains are determined as follows: Interconnected Network Planning and Simulation Analysis, Interconnected Network Safety Control and Protection, Intelligent Power Transmission and Transformation, and Internet of Things.
Resende, Ana; Amorim, António; da Silva, Cláudia Vieira; Ribeiro, Teresa; Porto, Maria João; Costa Santos, Jorge; Afonso Costa, Heloísa
2017-01-01
Twenty-two autosomal short tandem repeats included in the PowerPlex® Fusion System amplification kit (Promega Corporation) were genotyped in a population sample of 500 unrelated individuals from Cabo Verde living in Lisboa. Allelic frequency data and forensic and statistical parameters were calculated and evaluated in this work. The genetic relationship among the immigrant population from Cabo Verde living in Lisboa and other populations, such as Brazilian and Angolan immigrants living in Lisboa; Afro-Americans, Caucasians, Hispanics and Asians living in the USA; and the population from Lisboa, was assessed, and a multidimensional scaling plot was drawn to show these results.
Joint probability of statistical success of multiple phase III trials.
Zhang, Jianliang; Zhang, Jenny J
2013-01-01
In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k₀ (k₀ ≤ K) trials from a program of K total trials.
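The core point, that the joint probability of success is not the product of the individual predictive powers, can be illustrated with a correlated bivariate normal model for the two trials' test statistics. The predictive means and correlation below are assumed numbers for illustration, not formulas from the paper.

```python
import numpy as np
from scipy import stats

def joint_poss(z1, z2, rho, z_alpha=1.96):
    """P(both trials reach significance) when the two predictive Z statistics
    share a correlation rho induced by the common phase II data."""
    p1 = stats.norm.cdf(z_alpha - z1)            # P(trial 1 fails)
    p2 = stats.norm.cdf(z_alpha - z2)            # P(trial 2 fails)
    both_below = stats.multivariate_normal(
        mean=[z1, z2], cov=[[1.0, rho], [rho, 1.0]]).cdf([z_alpha, z_alpha])
    return 1.0 - p1 - p2 + both_below            # inclusion-exclusion

# With correlation, the joint PoSS exceeds the naive product of the two powers
print(joint_poss(2.0, 2.0, rho=0.5))   # correlated trials
print(joint_poss(2.0, 2.0, rho=0.0))   # reduces to the simple product
```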
Power of tests for comparing trend curves with application to national immunization survey (NIS).
Zhao, Zhen
2011-02-28
Three statistical tests were developed for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and the statistical power of the proposed tests was compared under different trend-curve data. For large sample sizes, with independent normal assumptions among strata and across consecutive time points, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and the standard errors at each of the study time points for the two strata. For small sample sizes, with independent normal assumptions, an F-test statistic was generated, which is a function of the sample sizes of the two strata and the estimated parameters across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If the two trend curves cross at low interaction, the power of the Z-test is higher than or equal to the power of both the Chi-square and F-tests; however, at high interaction, the powers of the Chi-square and F-tests are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data.
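A minimal sketch of a pooled Z-test of this kind, under the stated large-sample independence assumptions; the pooling rule and the coverage numbers are illustrative and may differ from the paper's exact statistic.

```python
import numpy as np
from scipy import stats

def trend_z_test(est1, se1, est2, se2):
    """Pooled Z across time points for two strata, assuming independence
    across strata and across consecutive time points (large-sample case)."""
    d = np.asarray(est1, float) - np.asarray(est2, float)
    var = np.asarray(se1, float) ** 2 + np.asarray(se2, float) ** 2
    z = d.sum() / np.sqrt(var.sum())   # one plausible pooling rule
    return z, 2.0 * stats.norm.sf(abs(z))

# Illustrative vaccination coverage estimates (%) over 8 survey years
est1 = [72, 74, 75, 77, 78, 80, 81, 83]
est2 = [70, 71, 73, 74, 76, 77, 79, 80]
se = [1.2] * 8
print("Z=%.2f, p=%.4f" % trend_z_test(est1, se, est2, se))
```

Summing the differences before standardizing favors parallel curves, consistent with the finding that the Z-test dominates when the two trends do not cross.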
Lahham, Adnan; Alkbash, Jehad Abu; ALMasri, Hussien
2017-04-20
Theoretical assessments of power density under far-field conditions were used to evaluate the levels of environmental electromagnetic fields from selected GSM900 macrocell base stations in the West Bank and Gaza Strip. Assessments were based on calculating the power densities using commercially available software (RF-Map from Telstra Research Laboratories, Australia). Calculations were carried out for single base stations with multi-antenna systems and also for multiple base stations with multi-antenna systems at 1.7 m above ground level. More than 100 power density levels were calculated at different locations around the investigated base stations, including areas accessible to the general public (schools, parks, residential areas, streets and areas around kindergartens). The maximum calculated electromagnetic emission level from a single site was 0.413 μW cm⁻², found at Hizma town near Jerusalem. The average maximum power density from all single sites was 0.16 μW cm⁻². The power density levels calculated at 100 locations distributed over the West Bank and Gaza were nearly normally distributed, with a peak value of ~0.01% of the International Commission on Non-Ionizing Radiation Protection's limit recommended for the general public. Comparison between the calculated and experimentally measured maximum power density from a base station showed that the calculations overestimate the actual measured power density by ~27%.
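The far-field assessment rests on the standard free-space relation S = EIRP/(4πd²). The sketch below uses an assumed transmitter power and antenna gain; production tools such as RF-Map additionally account for antenna patterns, downtilt, and terrain, so real exposure levels are typically lower than this boresight estimate.

```python
import math

def power_density(eirp_watts, distance_m):
    """Free-space far-field power density S = EIRP / (4 * pi * d^2), in W/m^2."""
    return eirp_watts / (4.0 * math.pi * distance_m ** 2)

# Illustrative GSM900 sector: 20 W carrier power into a 17 dBi antenna (assumed)
eirp = 20.0 * 10.0 ** (17.0 / 10.0)
s = power_density(eirp, distance_m=100.0)
print("%.3f microW/cm^2" % (s * 100.0))   # 1 W/m^2 = 100 microW/cm^2
```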
Caravaca-Arens, Esteban; de Fez, Dolores; Blanes-Mompó, Francisco J.
2017-01-01
Purpose. To analyze the errors associated with corneal power calculation using the keratometric approach in keratoconus eyes after accelerated corneal collagen crosslinking (CXL) surgery, and to obtain a model for the estimation of an adjusted corneal refractive index (nkadj) minimizing such errors. Methods. Potential differences (ΔPc) between keratometric (Pk) and Gaussian corneal power (PcGauss) were simulated. Three algorithms based on the use of nkadj for the estimation of an adjusted keratometric corneal power (Pkadj) were developed. The agreement between Pk(1.3375) (keratometric power using the keratometric index of 1.3375), PcGauss, and Pkadj was evaluated. The validity of the algorithm developed was investigated in 21 keratoconus eyes undergoing accelerated CXL. Results. Pk(1.3375) overestimated corneal power by between 0.3 and 3.2 D in theoretical simulations and between 0.8 and 2.9 D in the clinical study (ΔPc). Three linear equations were defined for nkadj to be used for different ranges of r1c. In the clinical study, differences between Pkadj and PcGauss did not exceed ±0.8 D. No statistically significant differences were found between Pkadj and PcGauss (p > 0.05), whereas the differences between Pk(1.3375) and Pkadj were statistically significant (p < 0.001). Conclusions. The use of the keratometric approach in keratoconus eyes after accelerated CXL can lead to significant clinical errors. These errors can be minimized with an adjusted keratometric approach. PMID:29201459
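The gap between the keratometric and Gaussian corneal powers can be reproduced with textbook formulas, as in this hedged sketch; the refractive indices, radii, and corneal thickness are standard illustrative values, not the study's data or its fitted nkadj equations.

```python
# Textbook corneal power formulas; all numerical values are illustrative.
n_air, n_cornea, n_aqueous = 1.000, 1.376, 1.336

def keratometric_power(r1c, nk=1.3375):
    """Single-surface keratometric approximation with fictitious index nk."""
    return (nk - n_air) / r1c

def gaussian_power(r1c, r2c, d=0.00055):
    """Thick-lens (Gaussian) power from both corneal surfaces; d in meters."""
    P1 = (n_cornea - n_air) / r1c
    P2 = (n_aqueous - n_cornea) / r2c
    return P1 + P2 - (d / n_cornea) * P1 * P2

r1c, r2c = 0.0078, 0.0065   # anterior/posterior radii in meters (assumed)
print("Pk(1.3375) = %.2f D, PcGauss = %.2f D" %
      (keratometric_power(r1c), gaussian_power(r1c, r2c)))
```

With these values the single-surface approximation overestimates power by roughly 1 D, the same order as the clinical discrepancies reported above.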
Stopping Power for Degenerate Electrons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singleton, Jr., Robert
2016-05-16
This is a first attempt at calculating the BPS stopping power with electron degeneracy corrections. Section I establishes notation and basic facts, Section II outlines the basics of the calculation, and Section III contains brief notes on how to proceed with the details of the calculation. The remaining work starts from Section III.
This paper provides the EPA Combined Heat and Power Partnership's recommended methodology for calculating the fuel and carbon dioxide emissions savings of combined heat and power (CHP) compared to separate heat and power (SHP), which serves as the basis for the EPA's CHP emissions calculator.
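The CHP-versus-SHP comparison reduces to a fuel-displacement calculation, as in this sketch; the grid and boiler efficiencies are placeholders, not the EPA calculator's defaults. Carbon dioxide savings follow by multiplying each fuel term by the appropriate emission factor.

```python
def chp_fuel_savings(power_mwh, heat_mwh, chp_fuel_mwh,
                     grid_eff=0.33, boiler_eff=0.80):
    """Fuel displaced by CHP relative to separate heat and power (SHP).
    The grid and boiler efficiencies are placeholders, not EPA defaults."""
    shp_fuel = power_mwh / grid_eff + heat_mwh / boiler_eff
    return shp_fuel - chp_fuel_mwh

# A CHP plant producing 1000 MWh(e) and 1200 MWh(th) from 3000 MWh of fuel
print("fuel saved: %.0f MWh" % chp_fuel_savings(1000.0, 1200.0, 3000.0))
```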
ERIC Educational Resources Information Center
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
SPA- STATISTICAL PACKAGE FOR TIME AND FREQUENCY DOMAIN ANALYSIS
NASA Technical Reports Server (NTRS)
Brownlow, J. D.
1994-01-01
The need for statistical analysis often arises when data is in the form of a time series. This type of data is usually a collection of numerical observations made at specified time intervals. Two kinds of analysis may be performed on the data. First, the time series may be treated as a set of independent observations using a time domain analysis to derive the usual statistical properties, including the mean, variance, and distribution form. Secondly, the order and time intervals of the observations may be used in a frequency domain analysis to examine the time series for periodicities. In almost all practical applications, the collected data is actually a mixture of the desired signal and a noise signal which is collected over a finite time period with a finite precision. Therefore, any statistical calculations and analyses are actually estimates. The Spectrum Analysis (SPA) program was developed to perform a wide range of statistical estimation functions. SPA can provide the data analyst with a rigorous tool for performing time and frequency domain studies. In a time domain statistical analysis the SPA program will compute the mean, variance, standard deviation, mean square, and root mean square. It also lists the data maximum, data minimum, and the number of observations included in the sample. In addition, a histogram of the time domain data is generated, a normal curve is fit to the histogram, and a goodness-of-fit test is performed. These time domain calculations may be performed on both raw and filtered data. For a frequency domain statistical analysis the SPA program computes the power spectrum, cross spectrum, coherence, phase angle, amplitude ratio, and transfer function. The estimates of the frequency domain parameters may be smoothed with the use of Hann-Tukey, Hamming, Bartlett, or moving average windows. Various digital filters are available to isolate data frequency components. Frequency components with periods longer than the data collection interval are removed by least-squares detrending. As many as ten channels of data may be analyzed at one time. Both tabular and plotted output may be generated by the SPA program. This program is written in FORTRAN IV and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 142K (octal) of 60 bit words. This core requirement can be reduced by segmentation of the program. The SPA program was developed in 1978.
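A modern equivalent of SPA's core time- and frequency-domain estimates can be sketched in a few lines of Python with NumPy/SciPy; the sampling rate and test signal are arbitrary stand-ins for a real measurement channel.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 100.0                                    # sampling rate, Hz (arbitrary)
t = np.arange(0.0, 60.0, 1.0 / fs)
x = np.sin(2 * np.pi * 5.0 * t) + rng.normal(0.0, 1.0, t.size)   # signal + noise

# Time-domain estimates, as in SPA's time-domain analysis
print("mean %.3f  var %.3f  rms %.3f" % (x.mean(), x.var(), np.sqrt((x**2).mean())))

# Frequency-domain estimate: Welch power spectrum with a Hamming window
f, pxx = signal.welch(x, fs=fs, window="hamming", nperseg=512)
print("spectral peak at %.2f Hz" % f[np.argmax(pxx)])
```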
Business Pattern of Distributed Energy in Electric Power System Reformation
NASA Astrophysics Data System (ADS)
Liang, YUE; Zhuochu, LIU; Jun, LI; Siwei, LI
2017-05-01
Under the trend of electric power system reform, the operation modes of micro power grids that include distributed power will become more diversified. Users' demand response and different electricity strategies both have a great influence on the operation of a distributed power grid. This paper researches the sensitive factors of micro power grid operation, and analyzes and calculates the cost and benefit of micro power grid operation for different grid types. It then builds a techno-economic calculation model that applies to different types of micro power grid under electric power system reform.
Pasaniuc, Bogdan; Zaitlen, Noah; Lettre, Guillaume; Chen, Gary K; Tandon, Arti; Kao, W H Linda; Ruczinski, Ingo; Fornage, Myriam; Siscovick, David S; Zhu, Xiaofeng; Larkin, Emma; Lange, Leslie A; Cupples, L Adrienne; Yang, Qiong; Akylbekova, Ermeg L; Musani, Solomon K; Divers, Jasmin; Mychaleckyj, Joe; Li, Mingyao; Papanicolaou, George J; Millikan, Robert C; Ambrosone, Christine B; John, Esther M; Bernstein, Leslie; Zheng, Wei; Hu, Jennifer J; Ziegler, Regina G; Nyante, Sarah J; Bandera, Elisa V; Ingles, Sue A; Press, Michael F; Chanock, Stephen J; Deming, Sandra L; Rodriguez-Gil, Jorge L; Palmer, Cameron D; Buxbaum, Sarah; Ekunwe, Lynette; Hirschhorn, Joel N; Henderson, Brian E; Myers, Simon; Haiman, Christopher A; Reich, David; Patterson, Nick; Wilson, James G; Price, Alkes L
2011-04-01
While genome-wide association studies (GWAS) have primarily examined populations of European ancestry, more recent studies often involve additional populations, including admixed populations such as African Americans and Latinos. In admixed populations, linkage disequilibrium (LD) exists both at a fine scale in ancestral populations and at a coarse scale (admixture-LD) due to chromosomal segments of distinct ancestry. Disease association statistics in admixed populations have previously considered SNP association (LD mapping) or admixture association (mapping by admixture-LD), but not both. Here, we introduce a new statistical framework for combining SNP and admixture association in case-control studies, as well as methods for local ancestry-aware imputation. We illustrate the gain in statistical power achieved by these methods by analyzing data of 6,209 unrelated African Americans from the CARe project genotyped on the Affymetrix 6.0 chip, in conjunction with both simulated and real phenotypes, as well as by analyzing the FGFR2 locus using breast cancer GWAS data from 5,761 African-American women. We show that, at typed SNPs, our method yields an 8% increase in statistical power for finding disease risk loci compared to the power achieved by standard methods in case-control studies. At imputed SNPs, we observe an 11% increase in statistical power for mapping disease loci when our local ancestry-aware imputation framework and the new scoring statistic are jointly employed. Finally, we show that our method increases statistical power in regions harboring the causal SNP in the case when the causal SNP is untyped and cannot be imputed. Our methods and our publicly available software are broadly applicable to GWAS in admixed populations.
Sprecher, Kate E.; Riedner, Brady A.; Smith, Richard F.; Tononi, Giulio; Davidson, Richard J.; Benca, Ruth M.
2016-01-01
Sleeping brain activity reflects brain anatomy and physiology. The aim of this study was to use high-density (256-channel) electroencephalography (EEG) during sleep to characterize topographic changes in sleep EEG power across normal aging, with high spatial resolution. Sleep was evaluated in 92 healthy adults aged 18-65 years using full polysomnography and high-density EEG. After artifact removal, spectral power density was calculated for standard frequency bands for all channels, averaged across the NREM periods of the first 3 sleep cycles. To quantify topographic changes with age, maps were generated of the Pearson coefficient of the correlation between power and age at each electrode. Significant correlations were determined by statistical non-parametric mapping. Absolute slow wave power declined significantly with increasing age across the entire scalp, whereas declines in theta and sigma power were significant only in frontal regions. Power in fast spindle frequencies declined significantly with increasing age frontally, whereas absolute power in slow spindle frequencies showed no significant change with age. When EEG power was normalized across the scalp, a left centro-parietal region showed significantly less age-related decline in power than the rest of the scalp. This partial preservation was particularly significant in the slow wave and sigma bands. The effect of age on sleep EEG varies substantially by region and frequency band. This non-uniformity should inform the design of future investigations of aging and sleep. This study provides normative data on the effect of age on sleep EEG topography and provides a basis from which to explore the mechanisms of normal aging as well as neurodegenerative disorders for which age is a risk factor. PMID:26901503
NASA Astrophysics Data System (ADS)
Boughezal, Radja; Isgrò, Andrea; Petriello, Frank
2018-04-01
We present a detailed derivation of the power corrections to the factorization theorem for the 0-jettiness event shape variable T. Our calculation is performed directly in QCD without using the formalism of effective field theory. We analytically calculate the next-to-leading logarithmic power corrections for small T at next-to-leading order in the strong coupling constant, extending previous computations which obtained only the leading-logarithmic power corrections. We address a discrepancy in the literature between results for the leading-logarithmic power corrections to a particular definition of 0-jettiness. We present a numerical study of the power corrections in the context of their application to the N-jettiness subtraction method for higher-order calculations, using gluon-fusion Higgs production as an example. The inclusion of the next-to-leading-logarithmic power corrections further improves the numerical efficiency of the approach beyond the improvement obtained from the leading-logarithmic power corrections.
Validation of a program for supercritical power plant calculations
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian
2011-12-01
This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated using the in-house code of the Institute of Power Engineering and Turbomachinery, which has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for the high- and low-pressure regenerative heat exchangers, and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the in-house code. Properties such as the thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle, and heat removed from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values; the method used for relative error calculation is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that the GateCycle software is suitable for calculations of supercritical power plants.
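The relative-error check underlying the validation is straightforward; a sketch, with assumed values in place of the actual cycle results:

```python
def relative_error(model_value, reference_value):
    """Relative error (%) of a GateCycle result against the in-house code."""
    return abs(model_value - reference_value) / abs(reference_value) * 100.0

# Illustrative net-efficiency comparison (values assumed, not from the paper)
print("%.3f %%" % relative_error(45.31, 45.28))   # well under the 0.1% band
```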
Hunting down the best model of inflation with Bayesian evidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, Jerome; Ringeval, Christophe; Trotta, Roberto
2011-03-15
We present the first calculation of the Bayesian evidence for different prototypical single-field inflationary scenarios, including representative classes of small field and large field models. This approach allows us to compare inflationary models in a well-defined statistical way and to determine the current 'best model of inflation'. The calculation is performed numerically by interfacing the inflationary code FieldInf with MultiNest. We find that small field models are currently preferred, while large field models having a self-interacting potential of power p>4 are strongly disfavored. The class of small field models as a whole has posterior odds of approximately 3:1 when compared with the large field class. The methodology and results presented in this article are an additional step toward the construction of a full numerical pipeline to constrain the physics of the early Universe with astrophysical observations. More accurate data (such as the Planck data) and the techniques introduced here should allow us to identify conclusively the best inflationary model.
A two-component rain model for the prediction of attenuation statistics
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
Physical characteristics of experienced and junior open-wheel car drivers.
Raschner, Christian; Platzer, Hans-Peter; Patterson, Carson
2013-01-01
Despite the popularity of open-wheel car racing, scientific literature about the physical characteristics of competitive race car drivers is scarce. The purpose of this study was to compare selected fitness parameters of experienced and junior open-wheel race car drivers. The experienced drivers consisted of five Formula One, two GP2 and two Formula 3 drivers, and the nine junior drivers drove in the Formula Master, Koenig, BMW and Renault series. The following fitness parameters were tested: multiple reactions, multiple anticipation, postural stability, isometric upper body strength, isometric leg extension strength, isometric grip strength, cyclic foot speed and jump height. The group differences were calculated using the Mann-Whitney U-test. Because of the multiple testing strategy used, the statistical significance was Bonferroni corrected and set at P < 0.004. Significant differences between the experienced and junior drivers were found only for the jump height parameter (P = 0.002). The experienced drivers tended to perform better in leg strength (P = 0.009), cyclic foot speed (P = 0.024) and grip strength (P = 0.058). None of the other variables differed between the groups. The results suggested that the experienced drivers were significantly more powerful than the junior drivers: they tended to be quicker and stronger (18% to 25%) but without statistical significance. The experienced drivers demonstrated excellent strength and power compared with other high-performance athletes.
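The group comparison described above corresponds to a standard Mann-Whitney U-test evaluated against the Bonferroni-corrected threshold; the sample values below are simulated stand-ins for the drivers' measurements, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
experienced = rng.normal(42, 4, 9)   # e.g. jump height in cm (simulated)
junior = rng.normal(35, 4, 9)

alpha_bonf = 0.004                   # Bonferroni-corrected threshold from the abstract
u, p = stats.mannwhitneyu(experienced, junior, alternative="two-sided")
print("U=%.1f  p=%.4f  significant=%s" % (u, p, p < alpha_bonf))
```

With n = 9 per group, only large effects clear such a corrected threshold, which is consistent with jump height being the lone significant difference reported.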
Simulation on a car interior aerodynamic noise control based on statistical energy analysis
NASA Astrophysics Data System (ADS)
Chen, Xin; Wang, Dengfeng; Ma, Zhengdong
2012-09-01
Accurately simulating interior aerodynamic noise is an important problem in car interior noise reduction. The unsteady aerodynamic pressure on body surfaces has been shown to be the key factor in car interior aerodynamic noise at high frequencies and high speeds. In this paper, a detailed statistical energy analysis (SEA) model is built, and the vibro-acoustic power inputs are loaded onto the model to obtain valid results for car interior noise analysis. The model is a solid foundation for further optimization of car interior noise control. After a comprehensive SEA analysis identifies the subsystems whose power contributions to car interior noise are most sensitive, the sound pressure level of car interior aerodynamic noise can be reduced by improving their sound and damping characteristics. Further vehicle testing results show that the interior acoustic performance can be improved by using the detailed SEA model, which comprises more than 80 subsystems, together with the unsteady aerodynamic pressure calculation on body surfaces and improvements to the materials' sound and damping properties. More than 2 dB of reduction can be obtained at center frequencies in the spectrum above 800 Hz. The proposed optimization method can serve as a reference for car interior aerodynamic noise control using a detailed SEA model integrated with unsteady computational fluid dynamics (CFD) and acoustic contribution sensitivity analysis.