Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
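A minimal Monte Carlo sketch of the comparison described above: power of a Cox analysis of a randomized trial versus an IPTW-weighted Cox analysis under confounded treatment assignment. The data-generating model, effect sizes, and package choices (numpy, pandas, statsmodels, lifelines) are illustrative assumptions, not the authors' simulation design.

```python
# Monte Carlo power sketch: RCT-style Cox analysis vs IPTW-weighted Cox analysis
# of confounded treatment assignment with a time-to-event outcome.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)

def simulate_pvalue(n=800, log_hr=np.log(0.75), confounded=True):
    x = rng.normal(size=n)                                  # baseline confounder
    lin = 1.0 * x if confounded else np.zeros(n)            # treatment-selection strength
    z = rng.binomial(1, 1 / (1 + np.exp(-lin)))             # treatment indicator
    rate = 0.10 * np.exp(log_hr * z + 0.5 * x)              # exponential event times
    t_event = rng.exponential(1 / rate)
    t_cens = rng.exponential(1 / 0.05, size=n)              # independent censoring
    df = pd.DataFrame({"T": np.minimum(t_event, t_cens),
                       "E": (t_event <= t_cens).astype(int),
                       "z": z})
    if confounded:
        # propensity score from a logistic model, then stabilized IPT weights
        ps = sm.Logit(z, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
        df["w"] = np.where(z == 1, z.mean() / ps, (1 - z.mean()) / (1 - ps))
        fit = CoxPHFitter().fit(df, "T", "E", weights_col="w", robust=True)
    else:
        fit = CoxPHFitter().fit(df, "T", "E")
    return fit.summary.loc["z", "p"]

def power(confounded, nsim=200):
    return np.mean([simulate_pvalue(confounded=confounded) < 0.05 for _ in range(nsim)])

print("RCT power :", power(False))
print("IPTW power:", power(True))
```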
Ward, P. J.
1990-01-01
Recent developments have related quantitative trait expression to metabolic flux. The present paper investigates some implications of this for statistical aspects of polygenic inheritance. Expressions are derived for the within-sibship genetic mean and genetic variance of metabolic flux given a pair of parental, diploid, n-locus genotypes. These are exact and hold for arbitrary numbers of gene loci, arbitrary allelic values at each locus, and for arbitrary recombination fractions between adjacent gene loci. The within-sibship genetic variance is seen to be simply a measure of parental heterozygosity plus a measure of the degree of linkage coupling within the parental genotypes. Approximations are given for the within-sibship phenotypic mean and variance of metabolic flux. These results are applied to the problem of attaining adequate statistical power in a test of association between allozymic variation and inter-individual variation in metabolic flux. Simulations indicate that statistical power can be greatly increased by augmenting the data with predictions and observations on progeny statistics in relation to parental allozyme genotypes. Adequate power may thus be attainable at small sample sizes, and when allozymic variation is scored at only a small fraction of the total set of loci whose catalytic products determine the flux. PMID:2379825
Seven ways to increase power without increasing N.
Hansen, W B; Collins, L M
1994-01-01
Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.
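A small sketch of the trade-off the chapter discusses: power is driven by sample size, alpha, and effect size together, so gains in effect size (or a relaxed alpha) can substitute for large increases in N. The numbers below are arbitrary and assume statsmodels is available.

```python
# Power depends on effect size and alpha, not only on N; numbers are arbitrary.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.3, 0.5):                       # standardized effect sizes
    for alpha in (0.05, 0.10):
        p = analysis.power(effect_size=d, nobs1=100, alpha=alpha)
        print(f"d={d:.1f}, alpha={alpha:.2f}, n=100/group -> power={p:.2f}")

# The leverage available from N alone: per-group n for 80% power at d=0.2 vs d=0.3
for d in (0.2, 0.3):
    n = analysis.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"d={d:.1f} -> about {n:.0f} per group for 80% power")
```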
Chan, Robin F.; Shabalin, Andrey A.; Xie, Lin Y.; Adkins, Daniel E.; Zhao, Min; Turecki, Gustavo; Clark, Shaunna L.; Aberg, Karolina A.
2017-01-01
Abstract Methylome-wide association studies are typically performed using microarray technologies that only assay a very small fraction of the CG methylome and entirely miss two forms of methylation that are common in brain and likely of particular relevance for neuroscience and psychiatric disorders. The alternative is to use whole genome bisulfite (WGB) sequencing but this approach is not yet practically feasible with sample sizes required for adequate statistical power. We argue for revisiting methylation enrichment methods that, provided optimal protocols are used, enable comprehensive, adequately powered and cost-effective genome-wide investigations of the brain methylome. To support our claim we use data showing that enrichment methods approximate the sensitivity obtained with WGB methods and with slightly better specificity. However, this performance is achieved at <5% of the reagent costs. Furthermore, because many more samples can be sequenced simultaneously, projects can be completed about 15 times faster. Currently the only viable option available for comprehensive brain methylome studies, enrichment methods may be critical for moving the field forward. PMID:28334972
Signal Detection Theory as a Tool for Successful Student Selection
ERIC Educational Resources Information Center
van Ooijen-van der Linden, Linda; van der Smagt, Maarten J.; Woertman, Liesbeth; te Pas, Susan F.
2017-01-01
Prediction accuracy of academic achievement for admission purposes requires adequate "sensitivity" and "specificity" of admission tools, yet the available information on the validity and predictive power of admission tools is largely based on studies using correlational and regression statistics. The goal of this study was to…
Is Neurofeedback an Efficacious Treatment for ADHD? A Randomised Controlled Clinical Trial
ERIC Educational Resources Information Center
Gevensleben, Holger; Holl, Birgit; Albrecht, Bjorn; Vogel, Claudia; Schlamp, Dieter; Kratz, Oliver; Studer, Petra; Rothenberger, Aribert; Moll, Gunther H.; Heinrich, Hartmut
2009-01-01
Background: For children with attention deficit/hyperactivity disorder (ADHD), a reduction of inattention, impulsivity and hyperactivity by neurofeedback (NF) has been reported in several studies. But so far, unspecific training effects have not been adequately controlled for and/or studies do not provide sufficient statistical power. To overcome…
Bryan, Janice L.; Wildhaber, Mark L.; Gladish, Dan; Holan, Scott; Ellerseick, Mark
2010-01-01
As with all large rivers in the United States, the Missouri River has been altered, with approximately 32.5 percent of the main stem length impounded and 32.5 percent channelized. These physical alterations to the environment have had effects on the fisheries, but studies examining the effects of alterations have been localized and for short periods of time. In response to the U.S. Fish and Wildlife Service biological opinion, the U.S. Army Corps of Engineers initiated monitoring of the fish community of the Missouri River in 2003. The goal of the Pallid Sturgeon Population Assessment Program is to provide information to detect changes in populations and habitat preferences with time for pallid sturgeon (Scaphirhynchus albus) and native target species in the Missouri River Basin. To determine statistical power of the Pallid Sturgeon Population Assessment Program, a power analysis was conducted using a normal linear mixed model with variance component estimates based on the first 3 years of data (2003 to 2005). In cases where 3 years of data were unavailable, estimates were obtained using those data. It was determined that at least 20 years of data, sampling 12 bends with 8 subsamples per bend, would be required to detect a 5 percent annual decline in most of the target fish populations. Power varied between Zones. Zone 1 (upstream from Lake Sakakawea) did not have any species/gear type combinations with adequate power, whereas Zone 3 (downstream from Gavins Point Dam) had 19 species/gear type combinations with adequate power. With a slight increase in the sampling effort to 12 subsamples per bend, the Pallid Sturgeon Population Assessment Program has adequate power to detect declines in shovelnose sturgeon (S. platorynchus) throughout the entire Missouri River because of large catch rates. The lowest level of non-occurrence (in other words, zero catches) at the bend level for pallid sturgeon was 0.58 using otter trawls in Zone 1. Consequently, the power of the pallid sturgeon models was not as high as other species at the current level of sampling, but an increase in the sampling effort to 16 subsamples for each of 24 bends for 20 years would generate adequate power for the pallid sturgeon in all Zones. Since gear types are selective in their species efficiency, the strength of the Pallid Sturgeon Population Assessment Program approach is using multiple gears that have statistical power to detect population trends at the same time in different fish species within the Missouri River. As often is the case with monitoring studies involving endangered species, the data used to conduct the analyses exhibit some departures from the parametric model assumptions; however, preliminary simulations indicate that the results of this study are appropriate.
2013-01-01
Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
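A sketch of the bootstrap procedure described above, on simulated data: the relative validity of a comparator measure is the ratio of its ANOVA F-statistic to that of the reference measure, and its uncertainty is summarized with a percentile bootstrap interval. The group structure, correlations, and sample sizes are illustrative, not the CKD data.

```python
# Percentile bootstrap for relative validity (RV), the ratio of ANOVA F-statistics.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_per_group, shift = 150, 0.5
groups = np.repeat([0, 1, 2], n_per_group)                  # clinically defined groups
latent = shift * groups + rng.normal(size=groups.size)
reference = latent + rng.normal(scale=0.5, size=groups.size)
comparator = 0.7 * latent + rng.normal(scale=0.8, size=groups.size)  # less discriminating

def rv(ref, comp, g):
    f_ref = f_oneway(*(ref[g == k] for k in np.unique(g))).statistic
    f_comp = f_oneway(*(comp[g == k] for k in np.unique(g))).statistic
    return f_comp / f_ref

idx = np.arange(groups.size)
boot = []
for _ in range(2000):                                       # resample patients with replacement
    b = rng.choice(idx, size=idx.size, replace=True)
    boot.append(rv(reference[b], comparator[b], groups[b]))

est = rv(reference, comparator, groups)
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"RV = {est:.2f}, 95% bootstrap CI ({lo:.2f}, {hi:.2f})")
```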
Kidney function endpoints in kidney transplant trials: a struggle for power.
Ibrahim, A; Garg, A X; Knoll, G A; Akbari, A; White, C A
2013-03-01
Kidney function endpoints are commonly used in randomized controlled trials (RCTs) in kidney transplantation (KTx). We conducted this study to estimate the proportion of ongoing RCTs with kidney function endpoints in KTx where the proposed sample size is large enough to detect meaningful differences in glomerular filtration rate (GFR) with adequate statistical power. RCTs were retrieved using the key word "kidney transplantation" from the National Institute of Health online clinical trial registry. Included trials had at least one measure of kidney function tracked for at least 1 month after transplant. We determined the proportion of two-arm parallel trials that had sufficient sample sizes to detect a minimum 5, 7.5 and 10 mL/min difference in GFR between arms. Fifty RCTs met inclusion criteria. Only 7% of the trials were above a sample size of 562, the number needed to detect a minimum 5 mL/min difference between the groups should one exist (assumptions: α = 0.05; power = 80%, 10% loss to follow-up, common standard deviation of 20 mL/min). The result increased modestly to 36% of trials when a minimum 10 mL/min difference was considered. Only a minority of ongoing trials have adequate statistical power to detect between-group differences in kidney function using conventional sample size estimating parameters. For this reason, some potentially effective interventions which ultimately could benefit patients may be abandoned from future assessment. © Copyright 2013 The American Society of Transplantation and the American Society of Transplant Surgeons.
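The abstract's sample-size figures can be approximated with the standard two-sample normal-approximation formula under its stated assumptions (two-sided alpha = 0.05, 80% power, common SD 20 mL/min, 10% loss to follow-up). Rounding conventions differ across software, so this sketch lands near, rather than exactly at, the 562 quoted.

```python
# Two-sample sample-size arithmetic for a minimum detectable GFR difference.
from math import ceil
from scipy.stats import norm

def total_n(delta, sd=20.0, alpha=0.05, power=0.80, loss=0.10):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_per_arm = 2 * (z * sd / delta) ** 2           # per-arm n before attrition
    n_per_arm = ceil(n_per_arm / (1 - loss))        # inflate for loss to follow-up
    return 2 * n_per_arm

for d in (5, 7.5, 10):
    print(f"minimum detectable difference {d} mL/min -> total n ~ {total_n(d)}")
```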
ERIC Educational Resources Information Center
Holtzapple, Carol K.
2011-01-01
Character education programs support the development of positive character traits in children and adults. Effective violence prevention programs improve pro-social competencies and reduce negative behaviors in students by enhancing protective factors (strong bonds with teachers; clear rules of conduct that are consistently enforced) and targeting…
Optimized design and analysis of preclinical intervention studies in vivo
Laajala, Teemu D.; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero
2016-01-01
Recent reports have called into question the reproducibility, validity and translatability of the preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability, and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby leading to saving of both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable the researchers to conduct adequately-powered and fully-blinded preclinical intervention studies, with the aim to accelerate the discovery of new therapeutic interventions. PMID:27480578
Optimized design and analysis of preclinical intervention studies in vivo.
Laajala, Teemu D; Jumppanen, Mikael; Huhtaniemi, Riikka; Fey, Vidal; Kaur, Amanpreet; Knuuttila, Matias; Aho, Eija; Oksala, Riikka; Westermarck, Jukka; Mäkelä, Sari; Poutanen, Matti; Aittokallio, Tero
2016-08-02
Recent reports have called into question the reproducibility, validity and translatability of the preclinical animal studies due to limitations in their experimental design and statistical analysis. To this end, we implemented a matching-based modelling approach for optimal intervention group allocation, randomization and power calculations, which takes full account of the complex animal characteristics at baseline prior to interventions. In prostate cancer xenograft studies, the method effectively normalized the confounding baseline variability, and resulted in animal allocations which were supported by RNA-seq profiling of the individual tumours. The matching information increased the statistical power to detect true treatment effects at smaller sample sizes in two castration-resistant prostate cancer models, thereby leading to saving of both animal lives and research costs. The novel modelling approach and its open-source and web-based software implementations enable the researchers to conduct adequately-powered and fully-blinded preclinical intervention studies, with the aim to accelerate the discovery of new therapeutic interventions.
42 CFR 417.568 - Adequate financial records, statistical data, and cost finding.
Code of Federal Regulations, 2010 CFR
2010-10-01
..., COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Cost Basis § 417.568 Adequate... definitions and accounting, statistics, and reporting practices that are widely accepted in the health care... 42 Public Health 3 2010-10-01 2010-10-01 false Adequate financial records, statistical data, and...
42 CFR 417.568 - Adequate financial records, statistical data, and cost finding.
Code of Federal Regulations, 2011 CFR
2011-10-01
..., COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Cost Basis § 417.568 Adequate... definitions and accounting, statistics, and reporting practices that are widely accepted in the health care... 42 Public Health 3 2011-10-01 2011-10-01 false Adequate financial records, statistical data, and...
Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G
2011-10-01
In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
Dingus, Cheryl A.; Teuschler, Linda K.; Rice, Glenn E.; Simmons, Jane Ellen; Narotsky, Michael G.
2011-01-01
In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA’s Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss. PMID:22073030
Power estimation using simulations for air pollution time-series studies
2012-01-01
Background Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Methods Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. Results In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. Conclusions These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided. PMID:22995599
Power estimation using simulations for air pollution time-series studies.
Winquist, Andrea; Klein, Mitchel; Tolbert, Paige; Sarnat, Stefanie Ebelt
2012-09-20
Estimation of power to assess associations of interest can be challenging for time-series studies of the acute health effects of air pollution because there are two dimensions of sample size (time-series length and daily outcome counts), and because these studies often use generalized linear models to control for complex patterns of covariation between pollutants and time trends, meteorology and possibly other pollutants. In general, statistical software packages for power estimation rely on simplifying assumptions that may not adequately capture this complexity. Here we examine the impact of various factors affecting power using simulations, with comparison of power estimates obtained from simulations with those obtained using statistical software. Power was estimated for various analyses within a time-series study of air pollution and emergency department visits using simulations for specified scenarios. Mean daily emergency department visit counts, model parameter value estimates and daily values for air pollution and meteorological variables from actual data (8/1/98 to 7/31/99 in Atlanta) were used to generate simulated daily outcome counts with specified temporal associations with air pollutants and randomly generated error based on a Poisson distribution. Power was estimated by conducting analyses of the association between simulated daily outcome counts and air pollution in 2000 data sets for each scenario. Power estimates from simulations and statistical software (G*Power and PASS) were compared. In the simulation results, increasing time-series length and average daily outcome counts both increased power to a similar extent. Our results also illustrate the low power that can result from using outcomes with low daily counts or short time series, and the reduction in power that can accompany use of multipollutant models. Power estimates obtained using standard statistical software were very similar to those from the simulations when properly implemented; implementation, however, was not straightforward. These analyses demonstrate the similar impact on power of increasing time-series length versus increasing daily outcome counts, which has not previously been reported. Implementation of power software for these studies is discussed and guidance is provided.
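A simplified sketch of the simulation approach described above: daily counts are generated from a Poisson log-linear model with a specified pollutant association, the model is refit on each simulated series, and power is the share of replicates with p < 0.05. The adjustment for time trends and meteorology used in the actual analyses is omitted here, and all inputs are synthetic.

```python
# Simulation-based power for a Poisson time-series association with a pollutant.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulated_power(n_days=365, mean_count=20.0, rr_per_10ug=1.02, nsim=500):
    pollutant = rng.gamma(shape=4.0, scale=5.0, size=n_days)     # synthetic daily PM, ug/m3
    beta = np.log(rr_per_10ug) / 10.0                            # log rate ratio per unit
    mu = mean_count * np.exp(beta * (pollutant - pollutant.mean()))
    X = sm.add_constant(pollutant)
    hits = 0
    for _ in range(nsim):
        y = rng.poisson(mu)                                      # simulated daily counts
        res = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        hits += res.pvalues[1] < 0.05
    return hits / nsim

for n_days in (365, 730, 1460):
    print(n_days, "days -> power", simulated_power(n_days=n_days))
```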
Harris, Alex; Reeder, Rachelle; Hyun, Jenny
2011-01-01
The authors surveyed 21 editors and reviewers from major psychology journals to identify and describe the statistical and design errors they encounter most often and to get their advice regarding prevention of these problems. Content analysis of the text responses revealed themes in 3 major areas: (a) problems with research design and reporting (e.g., lack of an a priori power analysis, lack of congruence between research questions and study design/analysis, failure to adequately describe statistical procedures); (b) inappropriate data analysis (e.g., improper use of analysis of variance, too many statistical tests without adjustments, inadequate strategy for addressing missing data); and (c) misinterpretation of results. If researchers attended to these common methodological and analytic issues, the scientific quality of manuscripts submitted to high-impact psychology journals might be significantly improved.
Statistical power analysis in wildlife research
Steidl, R.J.; Hayes, J.P.
1997-01-01
Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.
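A brief illustration of the recommendations above: compute prospective power (or required n) for the minimum biologically significant effect, and after a non-significant test report a confidence interval for the effect rather than a retrospective "observed power". Effect sizes and data below are arbitrary; scipy and statsmodels are assumed.

```python
# Prospective power for a minimum biologically significant effect, and a confidence
# interval (not observed power) for a non-significant result.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestIndPower

d_min = 0.4                                     # minimum biologically significant effect (Cohen's d)
n_needed = TTestIndPower().solve_power(effect_size=d_min, power=0.8, alpha=0.05)
print(f"about {n_needed:.0f} samples per group to detect d = {d_min} with 80% power")

rng = np.random.default_rng(3)
a, b = rng.normal(0.0, 1, 30), rng.normal(0.2, 1, 30)
t, p = stats.ttest_ind(a, b)
diff = b.mean() - a.mean()
se = np.sqrt(a.var(ddof=1) / 30 + b.var(ddof=1) / 30)
ci = diff + np.array([-1, 1]) * stats.t.ppf(0.975, 58) * se
print(f"p = {p:.2f}; difference = {diff:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```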
Interpreting carnivore scent-station surveys
Sargeant, G.A.; Johnson, D.H.; Berg, W.E.
1998-01-01
The scent-station survey method has been widely used to estimate trends in carnivore abundance. However, statistical properties of scent-station data are poorly understood, and the relation between scent-station indices and carnivore abundance has not been adequately evaluated. We assessed properties of scent-station indices by analyzing data collected in Minnesota during 1986-93. Visits to stations separated by <2 km were correlated for all species because individual carnivores sometimes visited several stations in succession. Thus, visits to stations had an intractable statistical distribution. Dichotomizing results for lines of 10 stations (0 or ≥1 visits) produced binomially distributed data that were robust to multiple visits by individuals. We abandoned 2-way comparisons among years in favor of tests for population trend, which are less susceptible to bias, and analyzed results separately for biogeographic sections of Minnesota because trends differed among sections. Before drawing inferences about carnivore population trends, we reevaluated published validation experiments. Results implicated low statistical power and confounding as possible explanations for equivocal or conflicting results of validation efforts. Long-term trends in visitation rates probably reflect real changes in populations, but poor spatial and temporal resolution, susceptibility to confounding, and low statistical power limit the usefulness of this survey method.
Bryan, Janice L.; Wildhaber, Mark L.; Gladish, Dan W.
2010-01-01
As with all large rivers in the United States, the Missouri River has been altered, with approximately one-third of the mainstem length impounded and one-third channelized. These physical alterations to the environment have affected the fish populations, but studies examining the effects of alterations have been localized and for short periods of time, thereby preventing generalization. In response to the U.S. Fish and Wildlife Service Biological Opinion, the U.S. Army Corps of Engineers (USACE) initiated monitoring of habitat improvements of the Missouri River in 2005. The goal of the Habitat Assessment Monitoring Program (HAMP) is to provide information on the response of target fish species to the USACE habitat creation on the Lower Missouri River. To determine the statistical power of the HAMP and in cooperation with USACE, a power analysis was conducted using a normal linear mixed model with variance component estimates based on the first complete year of data. At a level of 20/16 (20 bends with 16 subsamples in each bend), at least one species/month/gear model has the power to determine differences between treated and untreated bends. The trammel net in September had the most species models with adequate power at the 20/16 level and overall, the trammel net had the most species/month models with adequate power at the 20/16 level. However, using only one gear or gear/month combination would eliminate other species of interest, such as three chub species (Macrhybopsis meeki, Macrhybopsis aestivalis, and Macrhybopsis gelida), sand shiners (Notropis stramineus), pallid sturgeon (Scaphirhynchus albus), and juvenile sauger (Sander canadensis). Since gear types are selective in their species efficiency, the strength of the HAMP approach is using multiple gears that have statistical power to differentiate habitat treatment differences in different fish species within the Missouri River. As is often the case with sampling rare species like the pallid sturgeon, the data used to conduct the analyses exhibit some departures from the parametric model assumptions. However, preliminary simulations indicate that the results of this study are appropriate for application to the HAMP study design.
Bartsch, L.A.; Richardson, W.B.; Naimo, T.J.
1998-01-01
Estimation of benthic macroinvertebrate populations over large spatial scales is difficult due to the high variability in abundance and the cost of sample processing and taxonomic analysis. To determine a cost-effective, statistically powerful sample design, we conducted an exploratory study of the spatial variation of benthic macroinvertebrates in a 37 km reach of the Upper Mississippi River. We sampled benthos at 36 sites within each of two strata, contiguous backwater and channel border. Three standard ponar (525 cm(2)) grab samples were obtained at each site ('Original Design'). Analysis of variance and sampling cost of strata-wide estimates for abundance of Oligochaeta, Chironomidae, and total invertebrates showed that only one ponar sample per site ('Reduced Design') yielded essentially the same abundance estimates as the Original Design, while reducing the overall cost by 63%. A posteriori statistical power analysis (alpha = 0.05, beta = 0.20) on the Reduced Design estimated that at least 18 sites per stratum were needed to detect differences in mean abundance between contiguous backwater and channel border areas for Oligochaeta, Chironomidae, and total invertebrates. Statistical power was nearly identical for the three taxonomic groups. The abundances of several taxa of concern (e.g., Hexagenia mayflies and Musculium fingernail clams) were too spatially variable to estimate power with our method. Resampling simulations indicated that to achieve adequate sampling precision for Oligochaeta, at least 36 sample sites per stratum would be required, whereas a sampling precision of 0.2 would not be attained with any sample size for Hexagenia in channel border areas, or Chironomidae and Musculium in both strata given the variance structure of the original samples. Community-wide diversity indices (Brillouin and 1-Simpsons) increased as sample area per site increased. The backwater area had higher diversity than the channel border area. The number of sampling sites required to sample benthic macroinvertebrates during our sampling period depended on the study objective and ranged from 18 to more than 40 sites per stratum. No single sampling regime would efficiently and adequately sample all components of the macroinvertebrate community.
Use of statistical and neural net approaches in predicting toxicity of chemicals.
Basak, S C; Grunwald, G D; Gute, B D; Balasubramanian, K; Opitz, D
2000-01-01
Hierarchical quantitative structure-activity relationships (H-QSAR) have been developed as a new approach in constructing models for estimating physicochemical, biomedicinal, and toxicological properties of interest. This approach uses increasingly more complex molecular descriptors in a graduated approach to model building. In this study, statistical and neural network methods have been applied to the development of H-QSAR models for estimating the acute aquatic toxicity (LC50) of 69 benzene derivatives to Pimephales promelas (fathead minnow). Topostructural, topochemical, geometrical, and quantum chemical indices were used as the four levels of the hierarchical method. It is clear from both the statistical and neural network models that topostructural indices alone cannot adequately model this set of congeneric chemicals. Not surprisingly, topochemical indices greatly increase the predictive power of both statistical and neural network models. Quantum chemical indices also add significantly to the modeling of this set of acute aquatic toxicity data.
Young, Robin L; Weinberg, Janice; Vieira, Verónica; Ozonoff, Al; Webster, Thomas F
2010-07-19
A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic.
2010-01-01
Background A common, important problem in spatial epidemiology is measuring and identifying variation in disease risk across a study region. In application of statistical methods, the problem has two parts. First, spatial variation in risk must be detected across the study region and, second, areas of increased or decreased risk must be correctly identified. The location of such areas may give clues to environmental sources of exposure and disease etiology. One statistical method applicable in spatial epidemiologic settings is a generalized additive model (GAM) which can be applied with a bivariate LOESS smoother to account for geographic location as a possible predictor of disease status. A natural hypothesis when applying this method is whether residential location of subjects is associated with the outcome, i.e. is the smoothing term necessary? Permutation tests are a reasonable hypothesis testing method and provide adequate power under a simple alternative hypothesis. These tests have yet to be compared to other spatial statistics. Results This research uses simulated point data generated under three alternative hypotheses to evaluate the properties of the permutation methods and compare them to the popular spatial scan statistic in a case-control setting. Case 1 was a single circular cluster centered in a circular study region. The spatial scan statistic had the highest power though the GAM method estimates did not fall far behind. Case 2 was a single point source located at the center of a circular cluster and Case 3 was a line source at the center of the horizontal axis of a square study region. Each had linearly decreasing logodds with distance from the point. The GAM methods outperformed the scan statistic in Cases 2 and 3. Comparing sensitivity, measured as the proportion of the exposure source correctly identified as high or low risk, the GAM methods outperformed the scan statistic in all three Cases. Conclusions The GAM permutation testing methods provide a regression-based alternative to the spatial scan statistic. Across all hypotheses examined in this research, the GAM methods had competing or greater power estimates and sensitivities exceeding that of the spatial scan statistic. PMID:20642827
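A simplified stand-in for the permutation test described above, for illustration only: local disease risk is estimated with a Gaussian-kernel smoother rather than a bivariate LOESS GAM, the test statistic is the range of the smoothed risk surface, and permuting case/control labels supplies its null distribution. The spatial layout and cluster are synthetic; this is not the authors' exact GAM-based method or the spatial scan statistic.

```python
# Permutation test for spatial variation in disease risk, using a kernel smoother
# as a simplified stand-in for a bivariate LOESS GAM.
import numpy as np

rng = np.random.default_rng(0)
n = 600
xy = rng.uniform(0, 1, size=(n, 2))                                   # residential locations
risk = 0.25 + 0.35 * np.exp(-np.sum((xy - 0.5) ** 2, axis=1) / 0.02)  # central cluster of risk
case = rng.binomial(1, np.clip(risk, 0, 1))

d2 = np.sum((xy[:, None, :] - xy[None, :, :]) ** 2, axis=2)
w = np.exp(-d2 / (2 * 0.15 ** 2))                                     # Gaussian kernel weights

def smoothed_range(labels):
    p_hat = (w @ labels) / w.sum(axis=1)                              # local case proportion
    return p_hat.max() - p_hat.min()

observed = smoothed_range(case)
null = np.array([smoothed_range(rng.permutation(case)) for _ in range(199)])
p_value = (1 + np.sum(null >= observed)) / (1 + null.size)
print(f"observed statistic {observed:.3f}, permutation p = {p_value:.3f}")
```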
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index alpha-1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV. Two procedures for estimating alpha-1, the method of moments and maximum likelihood (ML), are developed and their statistical performance compared. It is concluded that the ML procedure attains the most desirable statistical properties and is hence the recommended statistical estimation procedure for estimating alpha-1. The ML procedure is then generalized for application to a set of real cosmic-ray data and thereby makes this approach applicable to existing cosmic-ray data sets. Several other important results, such as the relationship between collecting power and detector energy resolution, as well as inclusion of a non-Gaussian detector response function, are presented. These results have many practical benefits in the design phase of a cosmic-ray detector as they permit instrument developers to make important trade studies in design parameters as a function of one of the science objectives. This is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
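A generic sketch of the two estimators compared above for a single power-law index: a moments-based estimate and the maximum-likelihood (Hill-type) estimate for data with density proportional to E^(-alpha) above a threshold. Detector response and energy resolution, which are central to the report, are not modelled here.

```python
# Method-of-moments vs maximum-likelihood estimation of a power-law spectral index.
import numpy as np

rng = np.random.default_rng(42)
alpha_true, e_min, n = 2.7, 1.0, 5000
u = rng.uniform(size=n)
energies = e_min * (1 - u) ** (-1.0 / (alpha_true - 1))   # inverse-CDF sampling of the power law

# maximum-likelihood (Hill-type) estimator
alpha_ml = 1 + n / np.sum(np.log(energies / e_min))

# method of moments (valid only for alpha > 2, where the mean is finite)
m = energies.mean()
alpha_mom = (2 * m - e_min) / (m - e_min)

print(f"true {alpha_true}, ML {alpha_ml:.3f}, moments {alpha_mom:.3f}")
```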
DOE Office of Scientific and Technical Information (OSTI.GOV)
Werth, D.; Chen, K. F.
2013-08-22
The ability of water managers to maintain adequate supplies in coming decades depends, in part, on future weather conditions, as climate change has the potential to alter river flows from their current values, possibly rendering them unable to meet demand. Reliable climate projections are therefore critical to predicting the future water supply for the United States. These projections cannot be provided solely by global climate models (GCMs), however, as their resolution is too coarse to resolve the small-scale climate changes that can affect hydrology, and hence water supply, at regional to local scales. A process is needed to 'downscale' the GCM results to the smaller scales and feed this into a surface hydrology model to help determine the ability of rivers to provide adequate flow to meet future needs. We apply a statistical downscaling to GCM projections of precipitation and temperature through the use of a scaling method. This technique involves the correction of the cumulative distribution functions (CDFs) of the GCM-derived temperature and precipitation results for the 20th century, and the application of the same correction to 21st century GCM projections. This is done for three meteorological stations located within the Coosa River basin in northern Georgia, and is used to calculate future river flow statistics for the upper Coosa River. Results are compared to the historical Coosa River flow upstream from Georgia Power Company’s Hammond coal-fired power plant and to flows calculated with the original, unscaled GCM results to determine the impact of potential changes in meteorology on future flows.
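A minimal quantile-mapping sketch of the CDF-correction idea described above: the empirical quantiles of the 20th-century GCM output are mapped onto the observed quantiles, and the same mapping is applied to the 21st-century projection. All series are synthetic, and this is not presented as the report's exact scaling procedure.

```python
# Quantile mapping: correct GCM output using the mismatch between its historical CDF
# and the observed CDF, then apply the same correction to the future projection.
import numpy as np

rng = np.random.default_rng(7)
obs_hist = rng.gamma(shape=2.0, scale=6.0, size=10950)    # observed daily precipitation, 20th c.
gcm_hist = rng.gamma(shape=2.0, scale=4.5, size=10950)    # GCM historical run (biased dry)
gcm_future = rng.gamma(shape=2.0, scale=5.0, size=10950)  # raw 21st-century projection

def quantile_map(x, model_ref, obs_ref):
    # non-exceedance probability of each value within the historical model run
    ranks = np.searchsorted(np.sort(model_ref), x, side="right") / model_ref.size
    ranks = np.clip(ranks, 1e-6, 1 - 1e-6)
    return np.quantile(obs_ref, ranks)                     # same quantile in the observed CDF

corrected_future = quantile_map(gcm_future, gcm_hist, obs_hist)
print("raw future mean      :", round(float(gcm_future.mean()), 2))
print("corrected future mean:", round(float(corrected_future.mean()), 2))
```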
42 CFR 417.568 - Adequate financial records, statistical data, and cost finding.
Code of Federal Regulations, 2012 CFR
2012-10-01
... ORGANIZATIONS, COMPETITIVE MEDICAL PLANS, AND HEALTH CARE PREPAYMENT PLANS Medicare Payment: Cost Basis § 417... health care industry. (b) Provision of data. (1) The HMO or CMP must provide adequate cost and... 42 Public Health 3 2012-10-01 2012-10-01 false Adequate financial records, statistical data, and...
The Problem of Auto-Correlation in Parasitology
Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick
2012-01-01
Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
Chu, Rong; Walter, Stephen D.; Guyatt, Gordon; Devereaux, P. J.; Walsh, Michael; Thorlund, Kristian; Thabane, Lehana
2012-01-01
Background Chance imbalance in baseline prognosis of a randomized controlled trial can lead to over or underestimation of treatment effects, particularly in trials with small sample sizes. Our study aimed to (1) evaluate the probability of imbalance in a binary prognostic factor (PF) between two treatment arms, (2) investigate the impact of prognostic imbalance on the estimation of a treatment effect, and (3) examine the effect of sample size (n) in relation to the first two objectives. Methods We simulated data from parallel-group trials evaluating a binary outcome by varying the risk of the outcome, effect of the treatment, power and prevalence of the PF, and n. Logistic regression models with and without adjustment for the PF were compared in terms of bias, standard error, coverage of confidence interval and statistical power. Results For a PF with a prevalence of 0.5, the probability of a difference in the frequency of the PF≥5% reaches 0.42 with 125/arm. Ignoring a strong PF (relative risk = 5) leads to underestimating the strength of a moderate treatment effect, and the underestimate is independent of n when n is >50/arm. Adjusting for such PF increases statistical power. If the PF is weak (RR = 2), adjustment makes little difference in statistical inference. Conditional on a 5% imbalance of a powerful PF, adjustment reduces the likelihood of large bias. If an absolute measure of imbalance ≥5% is deemed important, including 1000 patients/arm provides sufficient protection against such an imbalance. Two thousand patients/arm may provide an adequate control against large random deviations in treatment effect estimation in the presence of a powerful PF. Conclusions The probability of prognostic imbalance in small trials can be substantial. Covariate adjustment improves estimation accuracy and statistical power, and hence should be performed when strong PFs are observed. PMID:22629322
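A sketch of the first simulation objective above: the probability that the prevalence of a binary prognostic factor differs by at least 5 percentage points between the arms of a 1:1 trial, as a function of per-arm sample size. A prevalence of 0.5 matches the abstract; the other settings are illustrative.

```python
# Probability of a >=5 percentage-point imbalance in a binary prognostic factor.
import numpy as np

rng = np.random.default_rng(0)

def prob_imbalance(n_per_arm, prevalence=0.5, threshold=0.05, nsim=20000):
    a = rng.binomial(n_per_arm, prevalence, size=nsim) / n_per_arm
    b = rng.binomial(n_per_arm, prevalence, size=nsim) / n_per_arm
    return np.mean(np.abs(a - b) >= threshold)

for n in (50, 125, 500, 1000):
    print(f"n = {n:4d}/arm -> P(|imbalance| >= 5%) = {prob_imbalance(n):.2f}")
```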
Williams, Donald R; Carlsson, Rickard; Bürkner, Paul-Christian
2017-10-01
Developmental studies of hormones and behavior often include littermates-rodent siblings that share early-life experiences and genes. Due to between-litter variation (i.e., litter effects), the statistical assumption of independent observations is untenable. In two literatures-natural variation in maternal care and prenatal stress-entire litters are categorized based on maternal behavior or experimental condition. Here, we (1) review both literatures; (2) simulate false positive rates for commonly used statistical methods in each literature; and (3) characterize small sample performance of multilevel models (MLM) and generalized estimating equations (GEE). We found that the assumption of independence was routinely violated (>85%), false positives (α=0.05) exceeded nominal levels (up to 0.70), and power (1-β) rarely surpassed 0.80 (even for optimistic sample and effect sizes). Additionally, we show that MLMs and GEEs have adequate performance for common research designs. We discuss implications for the extant literature, the field of behavioral neuroendocrinology, and provide recommendations. Copyright © 2017 Elsevier Inc. All rights reserved.
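A sketch of the false-positive problem described above: pups are clustered within litters and whole litters share a treatment label, so a t-test that treats pups as independent inflates the Type I error, while a t-test on litter means does not. Variance components and group sizes are illustrative assumptions.

```python
# Litter effects and Type I error: pup-level vs litter-mean analysis under the null.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def one_dataset(n_litters=10, pups_per_litter=8, icc=0.4):
    litter_sd, pup_sd = np.sqrt(icc), np.sqrt(1 - icc)
    litter_effects = rng.normal(0, litter_sd, size=n_litters)
    y = litter_effects[:, None] + rng.normal(0, pup_sd, size=(n_litters, pups_per_litter))
    group = np.repeat([0, 1], n_litters // 2)               # whole litters assigned to groups
    return y, group

def false_positive_rates(nsim=2000):
    naive, litter_mean = 0, 0
    for _ in range(nsim):
        y, g = one_dataset()
        naive += stats.ttest_ind(y[g == 0].ravel(), y[g == 1].ravel()).pvalue < 0.05
        litter_mean += stats.ttest_ind(y[g == 0].mean(axis=1), y[g == 1].mean(axis=1)).pvalue < 0.05
    return naive / nsim, litter_mean / nsim

fp_naive, fp_litter = false_positive_rates()
print(f"pup-level t-test alpha ~ {fp_naive:.2f}, litter-mean t-test alpha ~ {fp_litter:.2f}")
```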
Wong, Cheuk-Yin; Wilk, Grzegorz; Cirto, Leonardo J. L.; ...
2015-06-22
Transverse spectra of both jets and hadrons obtained in high-energy $pp$ and $p\bar p$ collisions at central rapidity exhibit power-law behavior of $1/p_T^n$ at high $p_T$. The power index $n$ is 4-5 for jet production and is slightly greater for hadron production. Furthermore, the hadron spectra spanning over 14 orders of magnitude down to the lowest $p_T$ region in $pp$ collisions at LHC can be adequately described by a single nonextensive statistical mechanical distribution that is widely used in other branches of science. This suggests indirectly the dominance of the hard-scattering process over essentially the whole $p_T$ region at central rapidity in $pp$ collisions at LHC. We show here direct evidence of such a dominance of the hard-scattering process by investigating the power index of UA1 jet spectra over an extended $p_T$ region and the two-particle correlation data of the STAR and PHENIX Collaborations in high-energy $pp$ and $p\bar p$ collisions at central rapidity. We then study how the showering of the hard-scattering product partons alters the power index of the hadron spectra and leads to a hadron distribution that can be cast into a single-particle non-extensive statistical mechanical distribution. Lastly, because of such a connection, the non-extensive statistical mechanical distribution can be considered as a lowest-order approximation of the hard-scattering of partons followed by the subsequent process of parton showering that turns the jets into hadrons, in high-energy $pp$ and $p\bar p$ collisions.
Statistical inference involving binomial and negative binomial parameters.
García-Pérez, Miguel A; Núñez-Antón, Vicente
2009-05-01
Statistical inference about two binomial parameters implies that they are both estimated by binomial sampling. There are occasions in which one aims at testing the equality of two binomial parameters before and after the occurrence of the first success along a sequence of Bernoulli trials. In these cases, the binomial parameter before the first success is estimated by negative binomial sampling whereas that after the first success is estimated by binomial sampling, and both estimates are related. This paper derives statistical tools to test two hypotheses, namely, that both binomial parameters equal some specified value and that both parameters are equal though unknown. Simulation studies are used to show that in small samples both tests are accurate in keeping the nominal Type-I error rates, and also to determine sample size requirements to detect large, medium, and small effects with adequate power. Additional simulations also show that the tests are sufficiently robust to certain violations of their assumptions.
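One simple way to construct a test of the kind described above, shown as a sketch: combine the geometric likelihood of the trials up to the first success (negative binomial sampling) with the binomial likelihood of the later trials, and compare the two parameters with a likelihood-ratio statistic referred to chi-square with one degree of freedom. This illustrates the setting; it is not necessarily the exact procedure derived in the paper.

```python
# Likelihood-ratio test that the success probability before the first success (estimated
# by negative binomial/geometric sampling) equals the one after it (binomial sampling).
import numpy as np
from scipy.special import xlogy
from scipy.stats import chi2

def loglik(t, k, m, p1, p2):
    # t = trials up to and including the first success; k successes in m later trials
    return xlogy(t - 1, 1 - p1) + np.log(p1) + xlogy(k, p2) + xlogy(m - k, 1 - p2)

def lrt_equal_parameters(t, k, m):
    p1_hat, p2_hat = 1.0 / t, k / m                 # unrestricted MLEs
    p0_hat = (1.0 + k) / (t + m)                    # common MLE under H0: p1 = p2
    stat = 2 * (loglik(t, k, m, p1_hat, p2_hat) - loglik(t, k, m, p0_hat, p0_hat))
    return stat, chi2.sf(stat, df=1)

# example: first success on trial 12, then 9 successes in the next 20 trials
stat, p = lrt_equal_parameters(t=12, k=9, m=20)
print(f"LRT statistic {stat:.2f}, asymptotic p-value {p:.3f}")
```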
The Revictimization of Adult Women With Histories of Childhood Abuse
CHU, JAMES A.
1992-01-01
Both clinical experience and recent research statistics support the observation that childhood abuse survivors are vulnerable to revictimization as adults. The responsibility for revictimization, such as physical or sexual assault, belongs to the perpetrators. However, the factors that make abuse survivors more vulnerable to exploitation need to be examined and understood in order to provide adequate treatment and protection. This discussion integrates an understanding of three powerful forces—the repetition compulsion, post-traumatic syndromes, and profound relational disturbances—that permit the process of revictimization to occur. PMID:22700102
Ahmed, K S
1979-01-01
In Bangladesh the Population Control and Family Planning Division of the Ministry of Health and Population Control has decided to delegate increased financial and administrative powers to the officers of the family planning program at the district level and below. Currently, about 20,000 family planning workers and officials are at work in rural areas. The government believes that the success of the entire family planning program depends on the performance of workers in rural areas, because that is where about 90% of the population lives. Awareness of the need to improve statistical data in Bangladesh has been increasing, particularly in regard to the development of rural areas. An accurate statistical profile of rural Bangladesh is crucial to the formation, implementation and evaluation of rural development programs. A Seminar on Statistics for Rural Development will be held from June 18-20, 1980. The primary objectives of the Seminar are to make an exhaustive analysis of the current availability of statistics required for rural development programs and to consider methodological and operational improvements toward building up an adequate data base.
Charan, J; Saxena, D
2014-01-01
Biased negative studies not only reflect poor research effort but also have an impact on 'patient care' as they prevent further research with similar objectives, leading to potential research areas remaining unexplored. Hence, published 'negative studies' should be methodologically strong. All parameters that may help a reader to judge validity of results and conclusions should be reported in published negative studies. There is a paucity of data on reporting of statistical and methodological parameters in negative studies published in Indian Medical Journals. The present systematic review was designed with an aim to critically evaluate negative studies published in prominent Indian Medical Journals for reporting of statistical and methodological parameters. Systematic review. All negative studies published in 15 Science Citation Indexed (SCI) medical journals published from India were included in present study. Investigators involved in the study evaluated all negative studies for the reporting of various parameters. Primary endpoints were reporting of "power" and "confidence interval." Power was reported in 11.8% studies. Confidence interval was reported in 15.7% studies. Majority of parameters like sample size calculation (13.2%), type of sampling method (50.8%), name of statistical tests (49.1%), adjustment of multiple endpoints (1%), post hoc power calculation (2.1%) were reported poorly. Frequency of reporting was more in clinical trials as compared to other study designs and in journals having impact factor more than 1 as compared to journals having impact factor less than 1. Negative studies published in prominent Indian medical journals do not report statistical and methodological parameters adequately and this may create problems in the critical appraisal of findings reported in these journals by its readers.
Overweight and pregnancy complications.
Abrams, B; Parker, J
1988-01-01
The association between increased prepregnancy weight for height and seven pregnancy complications was studied in a multi-racial sample of more than 4100 recent deliveries. Body mass indices were calculated and used to classify women as average weight (90-119 percent of ideal, or BMI 19.21-25.60), moderately overweight (120-135 percent of ideal, or BMI 25.61-28.90), and very overweight (greater than 135 percent of ideal, or BMI greater than 28.91) prior to pregnancy. Compared to women of average weight for height, very overweight women had a higher risk of diabetes, hypertension, pregnancy-induced hypertension and primary cesarean section delivery. Moderately overweight women were also at higher risk than average for diabetes, pregnancy-induced hypertension and primary cesarean deliveries, but the relative risks were of a smaller magnitude than for very overweight women. With women of average prepregnancy body mass as reference, moderately elevated but not statistically significant relative risks were found for perinatal mortality in the very overweight group and for urinary tract infections in both overweight groups, and a decreased risk for anemia was found in the very overweight group. However, post-hoc power analyses indicated that the number of overweight women in the sample did not allow adequate statistical power to detect these small differences in risk. To overcome limitations associated with low statistical power, the results of three recent studies of these outcomes in very overweight pregnant women were combined and summarized using Mantel-Haenszel techniques. This second, larger analysis suggested that very overweight women are at significantly higher risk for all seven outcomes studied. Summary results for moderately overweight women could not be calculated, since only two of the studies had evaluated moderately overweight women separately. These latter results support other findings that both moderate overweight and very overweight are risk factors during pregnancy, with the highest risk occurring in the heaviest group. Although these results indicate that moderate overweight is a risk factor during pregnancy, additional studies are needed to confirm the impact of being 20-35 percent above ideal weight prior to pregnancy. The results of this analysis also imply that, since the baseline incidence of many perinatal complications is low, studies relating overweight to pregnancy complications should include large enough samples of overweight women to provide adequate statistical power to reliably detect differences in complication rates.
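The pooling step described above used Mantel-Haenszel techniques. The following sketch computes a Mantel-Haenszel pooled risk ratio across strata; the 2x2 counts are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical 2x2 tables, one per study (stratum):
# columns: events_exposed, total_exposed, events_unexposed, total_unexposed
strata = np.array([
    [12, 150, 20, 600],
    [ 8,  90, 15, 450],
    [ 5,  60, 11, 400],
], dtype=float)

a, n1, c, n0 = strata.T          # exposed events, exposed N, unexposed events, unexposed N
N = n1 + n0                      # stratum totals

# Mantel-Haenszel pooled risk ratio
rr_mh = np.sum(a * n0 / N) / np.sum(c * n1 / N)
print(f"Pooled Mantel-Haenszel risk ratio: {rr_mh:.2f}")
```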
Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results
Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.
2016-01-01
Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
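A rough illustration of the mechanism described above: if a fraction of enrolled patients do not truly have ARDS and receive no benefit, the absolute risk reduction is diluted and the required sample size grows. This sketch uses a standard two-proportion sample-size formula and invented inputs; it maps measurement error to a simple dilution fraction rather than to a κ value, so it is only qualitative.

```python
from scipy.stats import norm

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Two-proportion sample size per arm (normal approximation)."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    diff = p_control - p_treated
    return (z_a + z_b) ** 2 * (p_control * (1 - p_control) +
                               p_treated * (1 - p_treated)) / diff ** 2

# Hypothetical numbers: 40% control mortality, 15.2-point absolute risk reduction
# among true ARDS patients, and a fraction of enrollees without ARDS who get no benefit.
p_ctrl, arr_true = 0.40, 0.152
for frac_misclassified in (0.0, 0.2, 0.4):
    arr_diluted = arr_true * (1 - frac_misclassified)
    n = n_per_arm(p_ctrl, p_ctrl - arr_diluted)
    print(f"misclassified={frac_misclassified:.0%}: n per arm ≈ {n:.0f}")
```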
Relating design and environmental variables to reliability
NASA Astrophysics Data System (ADS)
Kolarik, William J.; Landers, Thomas L.
The combination of a space application and a nuclear power source demands high-reliability hardware. The possibilities of failure, either an inability to provide power or a catastrophic accident, must be minimized. Nuclear power experience on the ground has led to highly sophisticated probabilistic risk assessment procedures, most of which require quantitative information to adequately assess such risks. In the area of hardware risk analysis, reliability information plays a key role. One of the lessons learned from the Three Mile Island experience is that thorough analyses of critical components are essential. Nuclear-grade equipment shows some reliability advantages over commercial equipment; however, no statistically significant difference has been found. A recent study pertaining to spacecraft electronics reliability examined some 2500 malfunctions on more than 300 aircraft. The study classified the equipment failures into seven general categories. Design deficiencies and lack of environmental protection accounted for about half of all failures. Within each class, limited reliability modeling was performed using a Weibull failure model.
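The modeling mentioned above used a Weibull failure model. A minimal sketch of fitting a two-parameter Weibull to simulated (not the study's) failure times with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical failure times (hours); shape < 1 suggests infant mortality, shape > 1 wear-out
true_shape, true_scale = 1.6, 2000.0
failures = true_scale * rng.weibull(true_shape, size=80)

# Fit a two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(failures, floc=0)
print(f"estimated shape = {shape:.2f}, scale = {scale:.0f} h")
```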
Vulnerability of water supply systems to cyber-physical attacks
NASA Astrophysics Data System (ADS)
Galelli, Stefano; Taormina, Riccardo; Tippenhauer, Nils; Salomons, Elad; Ostfeld, Avi
2016-04-01
The adoption of smart meters, distributed sensor networks and industrial control systems has largely improved the level of service provided by modern water supply systems. Yet, this progressive computerization exposes these critical infrastructures to cyber-physical attacks, which are generally aimed at stealing critical information (cyber-espionage) or causing service disruption (denial-of-service). Recent statistics show that water and power utilities are undergoing frequent attacks, such as the December 2015 power outage in Ukraine, attracting the interest of operators and security agencies. Taking the security of Water Distribution Networks (WDNs) as the domain of study, our work seeks to characterize the vulnerability of WDNs to cyber-physical attacks, so as to conceive adequate defense mechanisms. We extend the functionality of EPANET, which models hydraulic and water quality processes in pressurized pipe networks, to include a cyber layer vulnerable to repeated attacks. Simulation results on a medium-scale network show that several hydraulic actuators (valves and pumps, for example) can be easily attacked, causing both service disruption, i.e., water spillage and loss of pressure, and structural damage, e.g., pipe bursts. Our work highlights the need for adequate countermeasures, such as attack detection and reactive control systems.
Vecchiato, G; De Vico Fallani, F; Astolfi, L; Toppi, J; Cincotti, F; Mattia, D; Salinari, S; Babiloni, F
2010-08-30
This paper presents some considerations about the use of adequate statistical techniques in the framework of neuroelectromagnetic brain mapping. With the use of advanced EEG/MEG recording setups involving hundreds of sensors, the issue of protection against the type I errors that can occur during the execution of hundreds of univariate statistical tests has gained interest. In the present experiment, we investigated the EEG signals from a mannequin acting as an experimental subject. Data were collected while performing a neuromarketing experiment and analyzed with state-of-the-art computational tools adopted in the specialized literature. Results showed that the electric data from the mannequin's head presented statistically significant differences in power spectra during the visualization of a commercial advertisement compared to the power spectra gathered during a documentary, when no adjustments were made to the alpha level of the multiple univariate tests performed. The use of the Bonferroni or Bonferroni-Holm adjustments correctly returned no differences between the signals gathered from the mannequin in the two experimental conditions. A partial sample of recently published literature in different neuroscience journals suggested that at least 30% of the papers do not use statistical protection against type I errors. While the occurrence of type I errors can easily be managed with appropriate statistical techniques, the use of such techniques is still not widely adopted in the literature. Copyright (c) 2010 Elsevier B.V. All rights reserved.
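A short sketch of the correction discussed above: applying Bonferroni and Bonferroni-Holm adjustments to a family of univariate tests. With null-only (uniform) p-values the corrected procedures flag essentially nothing, while roughly 5% of channels cross an uncorrected 0.05 threshold by chance. The channel count and seed are arbitrary.

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

# 128 "sensors", no true effect: p-values are uniform under the null
pvals = rng.uniform(size=128)

for method in ("bonferroni", "holm"):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(f"{method}: {reject.sum()} of {len(pvals)} channels flagged")

# Without any correction, ~5% of channels would be flagged by chance alone
print("uncorrected:", (pvals < 0.05).sum())
```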
Ma, Li-Xin; Liu, Jian-Ping
2012-01-01
To investigate whether the power of the effect size was based on adequate sample sizes in randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM). The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms such as "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. The search was limited to trials with an intervention course of at least 3 months in order to identify information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement. Independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included, comprising 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported in 9% and 12% of the RCTs, respectively, with a sample size > 150 in each trial. For the outcome of HbA1c, only 10% of the RCTs had more than 80% power. For FPG, 23% of the RCTs had more than 80% power. (3) In the 96 non-inferiority trials, the outcomes FPG and HbA1c were reported in 31% and 36% of trials, respectively, with a sample size > 150 in each trial. For HbA1c, only 36% of the RCTs had more than 80% power. For FPG, only 27% of the studies had more than 80% power. The sample sizes available for statistical analysis were distressingly low, and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measures, and calculate the required sample size to ensure a precise research conclusion.
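For context on the power assessments above, a minimal sketch using statsmodels to compute the power of a two-arm comparison at a given sample size and the sample size needed for 80% power. The effect size and per-arm n are hypothetical, not values taken from the review.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Hypothetical standardized effect size for an HbA1c difference between arms
effect_size = 0.35

# Power achieved with 75 patients per arm
power = analysis.power(effect_size=effect_size, nobs1=75, ratio=1.0, alpha=0.05)
print(f"power with n = 75/arm: {power:.2f}")

# Sample size per arm needed for 80% power
n_needed = analysis.solve_power(effect_size=effect_size, power=0.80, alpha=0.05)
print(f"n per arm for 80% power: {n_needed:.0f}")
```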
46 CFR 199.110 - Survival craft muster and embarkation arrangements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... be adequately illuminated by lighting with power supplied from the vessel's emergency source of electrical power. (d) Each alleyway, stairway, and exit giving access to a muster and embarkation station must be adequately illuminated by lighting that is capable of having its power supplied by the vessel's...
46 CFR 199.110 - Survival craft muster and embarkation arrangements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... be adequately illuminated by lighting with power supplied from the vessel's emergency source of electrical power. (d) Each alleyway, stairway, and exit giving access to a muster and embarkation station must be adequately illuminated by lighting that is capable of having its power supplied by the vessel's...
Identifying the Source of Misfit in Item Response Theory Models.
Liu, Yang; Maydeu-Olivares, Alberto
2014-01-01
When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in Genome Wide Association Studies. We aim to: (i) investigate type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate the statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. Type-I error rate was investigated for Single Nucleotide Polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. Type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rate was adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in type-I error rate.
Zipkin, Elise F.; Kinlan, Brian P.; Sussman, Allison; Rypkema, Diana; Wimer, Mark; O'Connell, Allan F.
2015-01-01
Estimating patterns of habitat use is challenging for marine avian species because seabirds tend to aggregate in large groups and it can be difficult to locate both individuals and groups in vast marine environments. We developed an approach to estimate the statistical power of discrete survey events to identify species-specific hotspots and coldspots of long-term seabird abundance in marine environments. We illustrate our approach using historical seabird data from survey transects in the U.S. Atlantic Ocean Outer Continental Shelf (OCS), an area that has been divided into “lease blocks” for proposed offshore wind energy development. For our power analysis, we examined whether discrete lease blocks within the region could be defined as hotspots (3 × mean abundance in the OCS) or coldspots (1/3 ×) for individual species within a given season. For each of 74 species/season combinations, we determined which of eight candidate statistical distributions (ranging in their degree of skewedness) best fit the count data. We then used the selected distribution and estimates of regional prevalence to calculate and map statistical power to detect hotspots and coldspots, and estimate the p-value from Monte Carlo significance tests that specific lease blocks are in fact hotspots or coldspots relative to regional average abundance. The power to detect species-specific hotspots was higher than that of coldspots for most species because species-specific prevalence was relatively low (mean: 0.111; SD: 0.110). The number of surveys required for adequate power (> 0.6) was large for most species (tens to hundreds) using this hotspot definition. Regulators may need to accept higher proportional effect sizes, combine species into groups, and/or broaden the spatial scale by combining lease blocks in order to determine optimal placement of wind farms. Our power analysis approach provides a general framework for both retrospective analyses and future avian survey design and is applicable to a broad range of research and conservation problems.
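A simplified Monte Carlo sketch in the spirit of the power analysis described above: per-survey counts are drawn from a negative binomial, a null critical value for the block mean is simulated under the regional average, and power is the probability that a true 3x hotspot exceeds it. The mean, dispersion, and number of surveys are invented, and the test is a bare-bones stand-in for the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def nb_params(mean, dispersion):
    """Convert mean/dispersion to the (n, p) parameters of the negative binomial."""
    return dispersion, dispersion / (dispersion + mean)

mu_regional, k_disp = 0.8, 0.3      # hypothetical regional mean count and overdispersion
n_surveys, alpha, n_sims = 20, 0.05, 2000

# Null distribution of the block mean if abundance equals the regional average
n0, p0 = nb_params(mu_regional, k_disp)
null_means = rng.negative_binomial(n0, p0, size=(n_sims, n_surveys)).mean(axis=1)
crit = np.quantile(null_means, 1 - alpha)

# Power: chance that a true 3x hotspot produces a block mean above the critical value
n1, p1 = nb_params(3 * mu_regional, k_disp)
hot_means = rng.negative_binomial(n1, p1, size=(n_sims, n_surveys)).mean(axis=1)
print(f"power to detect a 3x hotspot with {n_surveys} surveys: {(hot_means > crit).mean():.2f}")
```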
Mieth, Bettina; Kloft, Marius; Rodríguez, Juan Antonio; Sonnenburg, Sören; Vobruba, Robin; Morcillo-Suárez, Carlos; Farré, Xavier; Marigorta, Urko M.; Fehr, Ernst; Dickhaus, Thorsten; Blanchard, Gilles; Schunk, Daniel; Navarro, Arcadi; Müller, Klaus-Robert
2016-01-01
The standard approach to the analysis of genome-wide association studies (GWAS) is based on testing each position in the genome individually for statistical significance of its association with the phenotype under investigation. To improve the analysis of GWAS, we propose a combination of machine learning and statistical testing that takes correlation structures within the set of SNPs under investigation in a mathematically well-controlled manner into account. The novel two-step algorithm, COMBI, first trains a support vector machine to determine a subset of candidate SNPs and then performs hypothesis tests for these SNPs together with an adequate threshold correction. Applying COMBI to data from a WTCCC study (2007) and measuring performance as replication by independent GWAS published within the 2008–2015 period, we show that our method outperforms ordinary raw p-value thresholding as well as other state-of-the-art methods. COMBI presents higher power and precision than the examined alternatives while yielding fewer false (i.e. non-replicated) and more true (i.e. replicated) discoveries when its results are validated on later GWAS studies. More than 80% of the discoveries made by COMBI upon WTCCC data have been validated by independent studies. Implementations of the COMBI method are available as a part of the GWASpi toolbox 2.0. PMID:27892471
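The two-step idea can be sketched as below. This is an illustration on simulated genotypes, not the authors' GWASpi implementation: a linear SVM screens SNPs by weight magnitude, and association tests with a threshold correction are applied only to the screened candidates.

```python
import numpy as np
from scipy import stats
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_subjects, n_snps, k = 500, 2000, 20

# Simulated genotypes (0/1/2) and a binary phenotype driven by one causal SNP
X = rng.binomial(2, 0.3, size=(n_subjects, n_snps)).astype(float)
causal = 17
y = (0.6 * X[:, causal] + rng.normal(size=n_subjects) > 1.0).astype(int)

# Step 1: screen SNPs by the magnitude of the linear SVM weights
svm = LinearSVC(C=0.1, dual=False, max_iter=5000).fit(X, y)
candidates = np.argsort(np.abs(svm.coef_[0]))[::-1][:k]

# Step 2: association tests only for the k candidates, threshold corrected over k (not n_snps)
for snp in candidates:
    r, p = stats.pearsonr(X[:, snp], y)      # simple trend-test stand-in
    if p < 0.05 / k:
        print(f"SNP {snp} significant (p = {p:.2e})")
```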
Data series embedding and scale invariant statistics.
Michieli, I; Medved, B; Ristov, S
2010-06-01
Data sequences acquired from bio-systems, such as human gait data, heart rate interbeat data, or DNA sequences, exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. Several methods have been proposed for quantifying scale invariant parameters of physiological signals. Among them the most common are detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, recently, in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed, and scale-free trends over limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility of the method falsely detecting long-range dependence in artificially generated short-range-dependent series was also investigated. (c) 2009 Elsevier B.V. All rights reserved.
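A compact sketch of one of the methods named above, detrended fluctuation analysis, applied to white noise (expected scaling exponent near 0.5). The scales and series length are arbitrary.

```python
import numpy as np

def dfa_exponent(x, scales):
    """Detrended fluctuation analysis: return the scaling exponent alpha."""
    y = np.cumsum(x - np.mean(x))             # integrated (profile) series
    flucts = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # detrend each segment with a least-squares line and take the RMS residual
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)
            rms.append(np.sqrt(np.mean((seg - np.polyval(coef, t)) ** 2)))
        flucts.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha

rng = np.random.default_rng(3)
white = rng.normal(size=4096)
scales = np.array([16, 32, 64, 128, 256])
print(f"white noise alpha ≈ {dfa_exponent(white, scales):.2f}")   # expect ~0.5
```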
Statistical considerations in monitoring birds over large areas
Johnson, D.H.
2000-01-01
The proper design of a monitoring effort depends primarily on the objectives desired, constrained by the resources available to conduct the work. Typically, managers have numerous objectives, such as determining abundance of the species, detecting changes in population size, evaluating responses to management activities, and assessing habitat associations. A design that is optimal for one objective will likely not be optimal for others. Careful consideration of the importance of the competing objectives may lead to a design that adequately addresses the priority concerns, although it may not be optimal for any individual objective. Poor design or inadequate sample sizes may result in such weak conclusions that the effort is wasted. Statistical expertise can be used at several stages, such as estimating power of certain hypothesis tests, but is perhaps most useful in fundamental considerations of describing objectives and designing sampling plans.
Cysique, Lucette A; Waters, Edward K; Brew, Bruce J
2011-11-22
There is conflicting information as to whether antiretroviral drugs with better central nervous system (CNS) penetration (neuroHAART) assist in improving neurocognitive function and suppressing cerebrospinal fluid (CSF) HIV RNA. The current review aims to better synthesise the existing literature by using an innovative two-phase review approach (qualitative and quantitative) to overcome methodological differences between studies. Sixteen studies, all observational, were identified using a standard citation search. They fulfilled the following inclusion criteria: conducted in the HAART era; sample size > 10; the treatment effect involved more than one antiretroviral; and none had a retrospective design. The qualitative phase of the review of these studies consisted of (i) a blind assessment rating studies on features such as sample size, statistical methods and definitions of neuroHAART, and (ii) a non-blind assessment of the sensitivity of the neuropsychological methods to HIV-associated neurocognitive disorder (HAND). During the quantitative evaluation we assessed the statistical power of the studies that achieved a high rating in the qualitative analysis. The objective of the power analysis was to determine the studies' ability to assess their proposed research aims. After studies with at least three limitations were excluded in the qualitative phase, six studies remained. All six found a positive effect of neuroHAART on neurocognitive function or CSF HIV suppression. Of these six studies, only two had statistical power of at least 80%. Studies assessed as using more rigorous methods found that neuroHAART was effective in improving neurocognitive function and decreasing CSF viral load, but only two of those studies were adequately statistically powered. Because all of these studies were observational, they represent a less compelling evidence base than randomised controlled trials for assessing treatment effect. Therefore, large randomised trials are needed to determine the robustness of any neuroHAART effect. However, such trials must be longitudinal, include the full spectrum of HAND, ideally carefully control for co-morbidities, and be based on optimal neuropsychology methods.
Tyrrell, Pascal N; Corey, Paul N; Feldman, Brian M; Silverman, Earl D
2013-06-01
Physicians often assess the effectiveness of treatments on a small number of patients. Multiple-baseline designs (MBDs), based on the Wampold-Worsham (WW) method of randomization and applied to four subjects, have relatively low power. Our objective was to propose another approach with greater power that does not suffer from the time requirements of the WW method applied to a greater number of subjects. The power of a design that involves the combination of two four-subject MBDs was estimated using computer simulation and compared with the four- and eight-subject designs. The effect of a delayed linear response to treatment on the power of the test was also investigated. Power was found to be adequate (>80%) for a standardized mean difference (SMD) greater than 0.8. The effect size associated with 80% power from combined tests was smaller than that of the single four-subject MBD (SMD=1.3) and comparable with the eight-subject MBD (SMD=0.6). A delayed linear response to the treatment resulted in important reductions in power (20-35%). By combining two four-subject MBD tests, an investigator can detect smaller effect sizes (SMD=0.8) and complete a comparatively timely and feasible study. Copyright © 2013 Elsevier Inc. All rights reserved.
Design of portable ultraminiature flow cytometers for medical diagnostics
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Design of portable microfluidic flow/image cytometry devices for measurements in the field (e.g. initial medical diagnostics) requires careful design in terms of power requirements and weight to allow for realistic portability. True portability with high-throughput microfluidic systems also requires sampling systems without the need for sheath hydrodynamic focusing both to avoid the need for sheath fluid and to enable higher volumes of actual sample, rather than sheath/sample combinations. Weight/power requirements dictate use of super-bright LEDs with top-hat excitation beam architectures and very small silicon photodiodes or nanophotonic sensors that can both be powered by small batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. Microfluidic cytometry also requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically in less than 15 minutes) initial medical decisions for patients in the field. This is not something conventional cytometry traditionally worries about, but is very important for development of small, portable microfluidic devices with small-volume throughputs. It also provides a more reasonable alternative to conventional tubes of blood when sampling geriatric and newborn patients for whom a conventional peripheral blood draw can be problematical. Instead one or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the doctor's office or field.
Design of point-of-care (POC) microfluidic medical diagnostic devices
NASA Astrophysics Data System (ADS)
Leary, James F.
2018-02-01
Design of inexpensive and portable hand-held microfluidic flow/image cytometry devices for initial medical diagnostics at the point of initial patient contact by emergency medical personnel in the field requires careful design in terms of power/weight requirements to allow for realistic portability as a hand-held, point-of-care medical diagnostics device. True portability also requires small micro-pumps for high-throughput capability. Weight/power requirements dictate use of super-bright LEDs and very small silicon photodiodes or nanophotonic sensors that can be powered by batteries. Signal-to-noise characteristics can be greatly improved by appropriately pulsing the LED excitation sources and sampling and subtracting noise in between excitation pulses. The requirements for basic computing, imaging, GPS and basic telecommunications can be simultaneously met by use of smartphone technologies, which become part of the overall device. Software for a user-interface system, limited real-time computing, real-time imaging, and offline data analysis can be accomplished through multi-platform software development systems that are well-suited to a variety of currently available cellphone technologies which already contain all of these capabilities. Microfluidic cytometry requires judicious use of small sample volumes and appropriate statistical sampling by microfluidic cytometry or imaging for adequate statistical significance to permit real-time (typically < 15 minutes) medical decisions for patients at the physician's office or real-time decision making in the field. One or two drops of blood obtained by pin-prick should be able to provide statistically meaningful results for use in making real-time medical decisions without the need for blood fractionation, which is not realistic in the field.
Anderson, Samantha F; Maxwell, Scott E
2017-01-01
Psychology is undergoing a replication crisis. The discussion surrounding this crisis has centered on mistrust of previous findings. Researchers planning replication studies often use the original study sample effect size as the basis for sample size planning. However, this strategy ignores uncertainty and publication bias in estimated effect sizes, resulting in overly optimistic calculations. A psychologist who intends to obtain power of .80 in the replication study, and performs calculations accordingly, may have an actual power lower than .80. We performed simulations to reveal the magnitude of the difference between actual and intended power based on common sample size planning strategies and assessed the performance of methods that aim to correct for effect size uncertainty and/or bias. Our results imply that even if original studies reflect actual phenomena and were conducted in the absence of questionable research practices, popular approaches to designing replication studies may result in a low success rate, especially if the original study is underpowered. Methods correcting for bias and/or uncertainty generally had higher actual power, but were not a panacea for an underpowered original study. Thus, it becomes imperative that 1) original studies are adequately powered and 2) replication studies are designed with methods that are more likely to yield the intended level of power.
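A small numeric illustration of the gap described above between intended and actual power: the replication is sized for 80% power at the original (possibly inflated) effect size, and the achieved power is then evaluated at a smaller hypothetical true effect. Both effect sizes are invented.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

d_observed = 0.50   # effect size reported by the original (possibly inflated) study
d_true     = 0.30   # hypothetical true effect

# Replication sample size planned for 80% power at the *observed* effect size
n_planned = analysis.solve_power(effect_size=d_observed, power=0.80, alpha=0.05)

# Power actually achieved if the true effect is smaller
actual = analysis.power(effect_size=d_true, nobs1=n_planned, ratio=1.0, alpha=0.05)
print(f"planned n per arm = {n_planned:.0f}, actual power = {actual:.2f}")
```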
NONPARAMETRIC MANOVA APPROACHES FOR NON-NORMAL MULTIVARIATE OUTCOMES WITH MISSING VALUES
He, Fanyin; Mazumdar, Sati; Tang, Gong; Bhatia, Triptish; Anderson, Stewart J.; Dew, Mary Amanda; Krafty, Robert; Nimgaonkar, Vishwajit; Deshpande, Smita; Hall, Martica; Reynolds, Charles F.
2017-01-01
Between-group comparisons often entail many correlated response variables. The multivariate linear model, with its assumption of multivariate normality, is the accepted standard tool for these tests. When this assumption is violated, the nonparametric multivariate Kruskal-Wallis (MKW) test is frequently used. However, this test requires complete cases with no missing values in response variables. Deletion of cases with missing values likely leads to inefficient statistical inference. Here we extend the MKW test to retain information from partially-observed cases. Results of simulated studies and analysis of real data show that the proposed method provides adequate coverage and superior power to complete-case analyses. PMID:29416225
INTERPRETATION OF THE STRUCTURE FUNCTION OF ROTATION MEASURE IN THE INTERSTELLAR MEDIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Siyao; Zhang, Bing, E-mail: syxu@pku.edu.cn, E-mail: zhang@physics.unlv.edu
2016-06-20
The observed structure function (SF) of rotation measure (RM) varies as a broken power-law function of angular scales. The systematic shallowness of its spectral slope is inconsistent with the standard Kolmogorov scaling. This motivates us to examine the statistical analysis on RM fluctuations. The correlations of RM constructed by Lazarian and Pogosyan are demonstrated to be adequate in explaining the observed features of RM SFs through a direct comparison between the theoretically obtained and observationally measured SF results. By segregating the density and magnetic field fluctuations and adopting arbitrary indices for their respective power spectra, we find that when the SFs of RM and emission measure have a similar form over the same range of angular scales, the statistics of the RM fluctuations reflect the properties of density fluctuations. RM SFs can be used to evaluate the mean magnetic field along the line of sight, but cannot serve as an informative source on the properties of turbulent magnetic field in the interstellar medium. We identify the spectral break of RM SFs as the inner scale of a shallow spectrum of electron density fluctuations, which characterizes the typical size of discrete electron density structures in the observed region.
The art and science of choosing efficacy endpoints for rare disease clinical trials.
Cox, Gerald F
2018-04-01
An important challenge in rare disease clinical trials is to demonstrate a clinically meaningful and statistically significant response to treatment. Selecting the most appropriate and sensitive efficacy endpoints for a treatment trial is part art and part science. The types of endpoints should align with the stage of development (e.g., proof of concept vs. confirmation of clinical efficacy). The patient characteristics and disease stage should reflect the treatment goal of improving disease manifestations or preventing disease progression. For rare diseases, regulatory approval requires demonstration of clinical benefit, defined as how a patient feels, functions, or survives, in at least one adequate and well-controlled pivotal study conducted according to Good Clinical Practice. In some cases, full regulatory approval can occur using a validated surrogate biomarker, while accelerated, or provisional, approval can occur using a biomarker that is likely to predict clinical benefit. Rare disease studies are small by necessity and require the use of endpoints with large effect sizes to demonstrate statistical significance. Understanding the quantitative factors that determine effect size and its impact on powering the study with an adequate sample size is key to the successful choice of endpoints. Interpreting the clinical meaningfulness of an observed change in an efficacy endpoint can be justified by statistical methods, regulatory precedence, and clinical context. Heterogeneous diseases that affect multiple organ systems may be better accommodated by endpoints that assess mean change across multiple endpoints within the same patient rather than mean change in an individual endpoint across all patients. © 2018 Wiley Periodicals, Inc.
Optimal sample sizes for the design of reliability studies: power consideration.
Shieh, Gwowen
2014-09-01
Intraclass correlation coefficients are used extensively to measure the reliability or degree of resemblance among group members in multilevel research. This study concerns the problem of the necessary sample size to ensure adequate statistical power for hypothesis tests concerning the intraclass correlation coefficient in the one-way random-effects model. In view of the incomplete and problematic numerical results in the literature, the approximate sample size formula constructed from Fisher's transformation is reevaluated and compared with an exact approach across a wide range of model configurations. These comprehensive examinations showed that the Fisher transformation method is appropriate only under limited circumstances, and therefore it is not recommended as a general method in practice. For advance design planning of reliability studies, the exact sample size procedures are fully described and illustrated for various allocation and cost schemes. Corresponding computer programs are also developed to implement the suggested algorithms.
NASA Technical Reports Server (NTRS)
Murrow, H. N.; Mccain, W. E.; Rhyne, R. H.
1982-01-01
Measurements of three components of clear air atmospheric turbulence were made with an airplane incorporating a special instrumentation system to provide accurate data resolution to wavelengths of approximately 12,500 m (40,000 ft). Flight samplings covered an altitude range from approximately 500 to 14,000 m (1500 to 46,000 ft) in various meteorological conditions. Individual autocorrelation functions and power spectra for the three turbulence components from 43 data runs taken primarily from mountain wave and jet stream encounters are presented. The flight location (Eastern or Western United States), date, time, run length, intensity level (standard deviation), and values of statistical degrees of freedom for each run are provided in tabular form. The data presented should provide adequate information for detailed meteorological correlations. Some time histories which contain predominant low frequency wave motion are also presented.
2013-01-01
Background: The theoretical basis of genome-wide association studies (GWAS) is statistical inference of linkage disequilibrium (LD) between any polymorphic marker and a putative disease locus. Most methods widely implemented for such analyses are vulnerable to several key demographic factors and deliver poor statistical power for detecting genuine associations as well as a high false positive rate. Here, we present a likelihood-based statistical approach that accounts properly for the non-random nature of case-control samples with regard to genotypic distribution at the loci in the populations under study and confers flexibility to test for genetic association in the presence of different confounding factors such as population structure and non-randomness of samples. Results: We implemented this novel method, together with several popular methods in the GWAS literature, to re-analyze recently published Parkinson's disease (PD) case-control samples. The real data analysis and computer simulation show that the new method confers not only significantly improved statistical power for detecting associations but also robustness to the difficulties stemming from non-random sampling and genetic structures when compared to its rivals. In particular, the new method detected 44 significant SNPs within 25 chromosomal regions of size < 1 Mb, but only 6 SNPs in two of these regions were previously detected by trend-test-based methods. It discovered two SNPs located 1.18 Mb and 0.18 Mb from the PD candidates, FGF20 and PARK8, without invoking false positive risk. Conclusions: We developed a novel likelihood-based method which provides adequate estimation of LD and other population model parameters using case and control samples, eases the integration of such samples from multiple genetically divergent populations, and thus confers statistically robust and powerful analyses of GWAS. On the basis of simulation studies and analysis of real datasets, we demonstrated significant improvement of the new method over the non-parametric trend test, which is the most popularly implemented in the GWAS literature. PMID:23394771
Statistical variances of diffusional properties from ab initio molecular dynamics simulations
NASA Astrophysics Data System (ADS)
He, Xingfeng; Zhu, Yizhou; Epstein, Alexander; Mo, Yifei
2018-12-01
Ab initio molecular dynamics (AIMD) simulation is widely employed in studying diffusion mechanisms and in quantifying diffusional properties of materials. However, AIMD simulations are often limited to a few hundred atoms and a short, sub-nanosecond physical timescale, which leads to models that include only a limited number of diffusion events. As a result, the diffusional properties obtained from AIMD simulations are often plagued by poor statistics. In this paper, we re-examine the process to estimate diffusivity and ionic conductivity from the AIMD simulations and establish the procedure to minimize the fitting errors. In addition, we propose methods for quantifying the statistical variance of the diffusivity and ionic conductivity from the number of diffusion events observed during the AIMD simulation. Since an adequate number of diffusion events must be sampled, AIMD simulations should be sufficiently long and can only be performed on materials with reasonably fast diffusion. We chart the ranges of materials and physical conditions that can be accessible by AIMD simulations in studying diffusional properties. Our work provides the foundation for quantifying the statistical confidence levels of diffusion results from AIMD simulations and for correctly employing this powerful technique.
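A toy Python sketch of the statistical point above: estimating a diffusivity from mean squared displacement and watching the spread of the estimate shrink as more hopping (diffusion) events are sampled. The 1D random-walk model, lag, and event counts are placeholders, not an AIMD workflow.

```python
import numpy as np

rng = np.random.default_rng(11)

def msd_diffusivity(n_events, a=1.0, dt=1.0, lag=10):
    """Estimate 1D diffusivity D = MSD/(2*t) from a single hopping trajectory,
    using displacements over a fixed lag window (time-averaged MSD)."""
    x = np.cumsum(rng.choice([-a, a], size=n_events))
    disp = x[lag:] - x[:-lag]
    return np.mean(disp ** 2) / (2.0 * lag * dt)

# The spread of the estimate shrinks as more diffusion (hop) events are sampled;
# the true value for this walk is D = 0.5.
for n_events in (100, 1000, 10000):
    est = [msd_diffusivity(n_events) for _ in range(300)]
    print(f"{n_events:6d} events: D = {np.mean(est):.3f} ± {np.std(est):.3f}")
```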
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W., Jr.
2003-01-01
A simple power law model consisting of a single spectral index, sigma_1, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index sigma_2 greater than sigma_1 above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma_1 of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine whether a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRBs for both the simple and broken power law energy spectra are derived herein, and the conditions under which they are attained in practice are investigated.
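For the simple (unbroken) power law observed with an ideal detector, the ML estimate of the spectral index has a well-known closed form; the sketch below simulates energies by inverse-CDF sampling and applies it. This illustrates only the single-index case, not the broken-spectrum estimator with detector response developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulate energies from a simple power law p(E) ∝ E^-sigma for E >= E_min
sigma_true, e_min, n = 2.7, 1.0, 5000
u = rng.uniform(size=n)
energies = e_min * (1 - u) ** (-1.0 / (sigma_true - 1.0))   # inverse-CDF sampling

# Maximum likelihood estimate of the spectral index and its asymptotic standard error
sigma_hat = 1.0 + n / np.sum(np.log(energies / e_min))
se = (sigma_hat - 1.0) / np.sqrt(n)
print(f"sigma_hat = {sigma_hat:.3f} ± {se:.3f}")
```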
Applications of statistical physics to technology price evolution
NASA Astrophysics Data System (ADS)
McNerney, James
Understanding how changing technology affects the prices of goods is a problem with both rich phenomenology and important policy consequences. Using methods from statistical physics, I model technology-driven price evolution. First, I examine a model for the price evolution of individual technologies. The price of a good often follows a power law equation when plotted against its cumulative production. This observation turns out to have significant consequences for technology policy aimed at mitigating climate change, where technologies are needed that achieve low carbon emissions at low cost. However, no theory adequately explains why technology prices follow power laws. To understand this behavior, I simplify an existing model that treats technologies as machines composed of interacting components. I find that the power law exponent of the price trajectory is inversely related to the number of interactions per component. I extend the model to allow for more realistic component interactions and make a testable prediction. Next, I conduct a case-study on the cost evolution of coal-fired electricity. I derive the cost in terms of various physical and economic components. The results suggest that commodities and technologies fall into distinct classes of price models, with commodities following martingales, and technologies following exponentials in time or power laws in cumulative production. I then examine the network of money flows between industries. This work is a precursor to studying the simultaneous evolution of multiple technologies. Economies resemble large machines, with different industries acting as interacting components with specialized functions. To begin studying the structure of these machines, I examine 20 economies with an emphasis on finding common features to serve as targets for statistical physics models. I find they share the same money flow and industry size distributions. I apply methods from statistical physics to show that industries cluster the same way according to industry type. Finally, I use these industry money flows to model the price evolution of many goods simultaneously, where network effects become important. I derive a prediction for which goods tend to improve most rapidly. The fastest-improving goods are those with the highest mean path lengths in the money flow network.
Fourier and Wavelet Analysis of Coronal Time Series
NASA Astrophysics Data System (ADS)
Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.
2016-10-01
Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence & Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence & Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.
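A bare-bones sketch of the practice recommended above: compare a periodogram against a pertinent mean-spectrum model, here an AR(1) red-noise spectrum crudely rescaled to the data, with a per-bin 95% chi-squared confidence level. The series, AR coefficient, and injected signal are simulated; note that with on the order of a thousand bins, a per-bin 95% level still yields many chance exceedances, which is why the total number of degrees of freedom matters for global significance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Simulated series: AR(1) "red noise" plus a weak oscillation with a 64-sample period
N, ar1 = 2048, 0.85
noise = np.zeros(N)
for i in range(1, N):
    noise[i] = ar1 * noise[i - 1] + rng.normal()
series = noise + 0.6 * np.sin(2 * np.pi * np.arange(N) / 64.0)

# Periodogram and a theoretical AR(1) mean spectrum, rescaled to the data (crude normalization)
power = np.abs(np.fft.rfft(series - series.mean())) ** 2
k = np.arange(power.size)
red = (1 - ar1 ** 2) / (1 + ar1 ** 2 - 2 * ar1 * np.cos(2 * np.pi * k / N))
red = red * power[1:].mean() / red[1:].mean()

# Per-bin 95% level: each bin behaves like (mean spectrum) * chi^2_2 / 2 under the noise model
conf95 = red * stats.chi2.ppf(0.95, df=2) / 2
above = np.flatnonzero(power[1:] > conf95[1:]) + 1
sig_bin = N // 64
print(f"signal bin {sig_bin} exceeds the level: {sig_bin in above}")
print(f"bins above the per-bin 95% level: {above.size} (some chance exceedances expected)")
```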
Population patterns in World’s administrative units
Miramontes, Pedro; Cocho, Germinal
2017-01-01
Whereas there has been an extended discussion concerning city population distribution, little has been said about that of administrative divisions. In this work, we investigate the population distribution of second-level administrative units of 150 countries and territories and propose the discrete generalized beta distribution (DGBD) rank-size function to describe the data. After testing the balance between the goodness of fit and number of parameters of this function compared with a power law, which is the most common model for city population, the DGBD is a good statistical model for 96% of our datasets and preferred over a power law in almost every case. Moreover, the DGBD is preferred over a power law for fitting country population data, which can be seen as the zeroth-level administrative unit. We present a computational toy model to simulate the formation of administrative divisions in one dimension and give numerical evidence that the DGBD arises from a particular case of this model. This model, along with the fitting of the DGBD, proves adequate in reproducing and describing local unit evolution and its effect on the population distribution. PMID:28791153
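The DGBD rank-size fit described above can be sketched with a log-linear least-squares regression, a common way to fit this function though not necessarily the authors' exact procedure; the populations below are invented.

```python
import numpy as np

def fit_dgbd(sizes):
    """Fit f(r) = A * (N + 1 - r)^b / r^a by least squares in log space."""
    f = np.sort(np.asarray(sizes, dtype=float))[::-1]   # rank-ordered populations
    N = len(f)
    r = np.arange(1, N + 1)
    X = np.column_stack([np.ones(N), np.log(N + 1 - r), -np.log(r)])
    coef, *_ = np.linalg.lstsq(X, np.log(f), rcond=None)
    lnA, b, a = coef
    return np.exp(lnA), a, b

# Hypothetical second-level administrative unit populations
pops = [1_250_000, 640_000, 410_000, 300_000, 150_000, 90_000,
        52_000, 31_000, 18_000, 9_500]
A, a, b = fit_dgbd(pops)
print(f"A = {A:.0f}, a = {a:.2f}, b = {b:.2f}")   # b = 0 recovers a plain power law
```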
Shawna, Wicks; M., Taylor Christopher; Meng, Luo; Eugene, Blanchard IV; David, Ribnicky; T., Cefalu William; L., Mynatt Randall; A., Welsh David
2014-01-01
Objective: The gut microbiome has been implicated in obesity and metabolic syndrome; however, most studies have focused on fecal or colonic samples. Several species of Artemisia have been reported to ameliorate insulin signaling both in vitro and in vivo. The aim of this study was to characterize the mucosal and luminal bacterial populations in the terminal ileum with or without supplementation with Artemisia extracts. Materials/Methods: Following 4 weeks of supplementation with different Artemisia extracts (PMI 5011, Santa or Scopa), diet-induced obese mice were sacrificed and luminal and mucosal samples of terminal ileum were used to evaluate microbial community composition by pyrosequencing of 16S rDNA hypervariable regions. Results: Significant differences in community structure and membership were observed between luminal and mucosal samples, irrespective of diet group. All Artemisia extracts increased the Bacteroidetes:Firmicutes ratio in mucosal samples. This effect was not observed in the luminal compartment. There was high inter-individual variability in the phylogenetic assessments of the ileal microbiota, limiting the statistical power of this pilot investigation. Conclusions: Marked differences in bacterial communities exist dependent upon the biogeographic compartment in the terminal ileum. Future studies testing the effects of Artemisia or other botanical supplements require larger sample sizes for adequate statistical power. PMID:24985102
Herbert, Vanessa; Kyle, Simon D; Pratt, Daniel
2018-06-01
Individuals with insomnia report difficulties pertaining to their cognitive functioning. Cognitive behavioural therapy for insomnia (CBT-I) is associated with robust, long-term improvements in sleep parameters, however less is known about the impact of CBT-I on the daytime correlates of the disorder. A systematic review and narrative synthesis was conducted in order to summarise and evaluate the evidence regarding the impact of CBT-I on cognitive functioning. Reference databases were searched and studies were included if they assessed cognitive performance as an outcome of CBT-I, using either self-report questionnaires or cognitive tests. Eighteen studies met inclusion criteria, comprising 923 individuals with insomnia symptoms. The standardised mean difference was calculated at post-intervention and follow-up. We found preliminary evidence for small to moderate effects of CBT-I on subjective measures of cognitive functioning. Few of the effects were statistically significant, likely due to small sample sizes and limited statistical power. There is a lack of evidence with regards to the impact of CBT-I on objective cognitive performance, primarily due to the small number of studies that administered an objective measure (n = 4). We conclude that adequately powered randomised controlled trials, utilising both subjective and objective measures of cognitive functioning are required. Copyright © 2017 Elsevier Ltd. All rights reserved.
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
How many landmarks are enough to characterize shape and size variation?
Watanabe, Akinobu
2018-01-01
Accurate characterization of morphological variation is crucial for generating reliable results and conclusions concerning changes and differences in form. Despite the prevalence of landmark-based geometric morphometric (GM) data in the scientific literature, a formal treatment of whether sampled landmarks adequately capture shape variation has remained elusive. Here, I introduce LaSEC (Landmark Sampling Evaluation Curve), a computational tool to assess the fidelity of morphological characterization by landmarks. This task is achieved by calculating how subsampled data converge to the pattern of shape variation in the full dataset as landmark sampling is increased incrementally. While the number of landmarks needed to adequately characterize shape variation depends on the individual dataset, LaSEC helps the user (1) identify under- and oversampling of landmarks; (2) assess robustness of morphological characterization; and (3) determine the number of landmarks that can be removed without compromising shape information. In practice, this knowledge could reduce the time and cost associated with data collection, maintain statistical power in certain analyses, and enable the incorporation of incomplete, but important, specimens into the dataset. Results based on simulated shape data also reveal general properties of landmark data, including statistical consistency, whereby sampling additional landmarks tends to asymptotically improve the accuracy of morphological characterization. As landmark-based GM data become more widely adopted, LaSEC provides a systematic approach to evaluate and refine the collection of shape data, a goal paramount for the accumulation and analysis of accurate morphological information.
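[Editor's note] The following is a minimal sketch of the landmark-subsampling idea described above, not the published LaSEC implementation: here the "fit" of a landmark subset is measured as the correlation between specimen-by-specimen distance matrices computed from the subset versus all landmarks, which is a simplifying assumption; the toy data and function names are invented for illustration.

```python
# A minimal sketch of the landmark-subsampling idea behind LaSEC (not the
# published implementation). Fit is approximated by correlating pairwise
# specimen distances computed from a landmark subset vs. from all landmarks.
import numpy as np

rng = np.random.default_rng(0)

def fit_score(sub, full):
    """Correlate pairwise specimen distances from a landmark subset vs. all landmarks."""
    d_sub = np.linalg.norm(sub[:, None] - sub[None, :], axis=(2, 3))
    d_full = np.linalg.norm(full[:, None] - full[None, :], axis=(2, 3))
    iu = np.triu_indices(len(full), k=1)
    return np.corrcoef(d_sub[iu], d_full[iu])[0, 1]

def sampling_curve(coords, n_curves=20):
    """coords: (specimens, landmarks, dims) array of superimposed landmark data."""
    n_lm = coords.shape[1]
    curves = np.empty((n_curves, n_lm - 2))
    for c in range(n_curves):
        order = rng.permutation(n_lm)
        for i, k in enumerate(range(3, n_lm + 1)):   # start from 3 landmarks
            curves[c, i] = fit_score(coords[:, order[:k]], coords)
    return curves.mean(axis=0)                        # average convergence curve

# toy example: 30 specimens, 40 landmarks in 2-D
toy = rng.normal(size=(30, 40, 2))
print(np.round(sampling_curve(toy), 2))
```

A curve that plateaus well before all landmarks are used would indicate oversampling, in the spirit of the convergence assessment described in the abstract.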
Evaluating the One-in-Five Statistic: Women's Risk of Sexual Assault While in College.
Muehlenhard, Charlene L; Peterson, Zoë D; Humphreys, Terry P; Jozkowski, Kristen N
In 2014, U.S. president Barack Obama announced a White House Task Force to Protect Students From Sexual Assault, noting that "1 in 5 women on college campuses has been sexually assaulted during their time there." Since then, this one-in-five statistic has permeated public discourse. It is frequently reported, but some commentators have criticized it as exaggerated. Here, we address the question, "What percentage of women are sexually assaulted while in college?" After discussing definitions of sexual assault, we systematically review available data, focusing on studies that used large, representative samples of female undergraduates and multiple behaviorally specific questions. We conclude that one in five is a reasonably accurate average across women and campuses. We also review studies that are inappropriately cited as either supporting or debunking the one-in-five statistic; we explain why they do not adequately address this question. We identify and evaluate several assumptions implicit in the public discourse (e.g., the assumption that college students are at greater risk than nonstudents). Given the empirical support for the one-in-five statistic, we suggest that the controversy occurs because of misunderstandings about studies' methods and results and because this topic has implications for gender relations, power, and sexuality; this controversy is ultimately about values.
Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.
2014-01-01
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in the discovery and verification stages prior to the clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748
Skates, Steven J; Gillette, Michael A; LaBaer, Joshua; Carr, Steven A; Anderson, Leigh; Liebler, Daniel C; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R; Rodriguez, Henry; Boja, Emily S
2013-12-06
Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor, and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC) with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance and the development of an approach to calculate biospecimen sample size for proteomic studies in the discovery and verification stages prior to the clinical validation stage. This represents a first step toward building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research.
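[Editor's note] The workshop's actual framework is not reproduced here; the sketch below is a generic per-group sample-size calculation for a verification-stage two-group comparison of log2 protein abundance, under assumed values for fold change, coefficient of variation, power, and a crude Bonferroni-style multiplicity adjustment. All parameter values are illustrative assumptions.

```python
# Illustrative only -- not the NCI/NHLBI/AACC workshop framework. A generic
# two-group sample-size calculation on log2 protein abundance, assuming a
# combined technical+biological CV, a clinically relevant fold change, and a
# Bonferroni-adjusted alpha for the number of candidates carried forward.
import numpy as np
from scipy import stats

def n_per_group(fold_change=1.5, cv=0.4, alpha=0.05, power=0.8, n_candidates=100):
    sd_log2 = np.sqrt(np.log(1 + cv**2)) / np.log(2)   # SD on the log2 scale
    delta = np.log2(fold_change) / sd_log2              # standardized effect size
    alpha_adj = alpha / n_candidates                     # crude multiplicity adjustment
    z = stats.norm.ppf(1 - alpha_adj / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z / delta) ** 2))

print(n_per_group())   # biospecimens per group under these assumed inputs
```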
First "glass" education: telementored cardiac ultrasonography using Google Glass- a pilot study.
Russell, Patrick M; Mallin, Michael; Youngquist, Scott T; Cotton, Jennifer; Aboul-Hosn, Nael; Dawson, Matt
2014-11-01
The objective of this study was to determine the feasibility of telementored instruction in bedside ultrasonography (US) using Google Glass. The authors sought to examine whether first-time US users could obtain adequate parasternal long axis (PSLA) views to approximate ejection fraction (EF) using Google Glass telementoring. This was a prospective, randomized, single-blinded study. Eighteen second-year medical students were randomized into three groups and tasked with obtaining PSLA cardiac imaging. Group A received real-time telementored education through Google Glass via Google Hangout from a remotely located expert. Group B received bedside education from the same expert. Group C represented the control and received no instruction. Each subject was given 3 minutes to obtain the best possible PSLA cardiac image using a portable GE Vscan. Image clips obtained by each subject were stored. A second expert, blinded to instructional mode, evaluated images for adequacy and assigned an image quality rating on a 0 to 10 scale. Group A was able to obtain adequate images six out of six times (100%) with a median image quality rating of 7.5 (interquartile range [IQR] = 6 to 10) out of 10. Group B was also able to obtain adequate views six out of six times (100%), with a median image quality rating of 8 (IQR = 7 to 9). Group C was able to obtain adequate views one out of six times (17%), with a median image quality of 0 (IQR = 0 to 2). There were no statistically significant differences between Group A and Group B in the achievement of adequate images for E-point septal separation measurement or in image quality. In this pilot/feasibility study, novice US users were able to obtain adequate imaging to determine a healthy patient's EF through telementored education using Google Glass. These preliminary data suggest that telementoring is an adequate means of medical education in bedside US. This conclusion will need to be validated with larger, more powerful studies including evaluation of pathologic findings and varying body habitus among models. © 2014 by the Society for Academic Emergency Medicine.
NASA Technical Reports Server (NTRS)
Howell, L. W.
2001-01-01
A simple power law model consisting of a single spectral index α1 is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy E_k to a steeper spectral index α2 > α1 above E_k. The maximum likelihood procedure is developed for estimating these three spectral parameters of the broken power law energy spectrum from simulated detector responses. These estimates and their surrounding statistical uncertainty are being used to derive the requirements in energy resolution, calorimeter size, and energy response of a proposed sampling calorimeter for the Advanced Cosmic-ray Composition Experiment for the Space Station (ACCESS). This study thereby permits instrument developers to make important trade studies in design parameters as a function of the science objectives, which is particularly important for space-based detectors where physical parameters, such as dimension and weight, impose rigorous practical limits to the design envelope.
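[Editor's note] A hedged sketch of maximum-likelihood fitting of a broken power-law spectrum to simulated event energies follows. The energy range, true parameter values, and the rejection-sampling simulation are assumptions for illustration, not the ACCESS detector-response model; the spectrum is normalized numerically.

```python
# A minimal sketch of maximum-likelihood estimation for a broken power-law
# energy spectrum (indices a1, a2 and break energy Ek), fit to simulated
# event energies. Normalization is numerical; all values are illustrative.
import numpy as np
from scipy.optimize import minimize
from scipy.integrate import quad

E_MIN, E_MAX = 1e2, 1e6            # assumed energy range (arbitrary units)

def unnorm_pdf(E, a1, a2, Ek):
    # continuous at Ek: E^-a1 below the knee, Ek^(a2-a1) * E^-a2 above it
    return np.where(E < Ek, E**(-a1), Ek**(a2 - a1) * E**(-a2))

def neg_log_like(params, energies):
    a1, a2, logEk = params
    Ek = 10.0**logEk
    norm, _ = quad(unnorm_pdf, E_MIN, E_MAX, args=(a1, a2, Ek), points=(Ek,))
    return -np.sum(np.log(unnorm_pdf(energies, a1, a2, Ek) / norm))

# simulate events from a known spectrum by rejection sampling, then refit
rng = np.random.default_rng(1)
true = (2.7, 3.1, 1e4)
E_grid = np.exp(rng.uniform(np.log(E_MIN), np.log(E_MAX), 200_000))
accept = rng.uniform(size=E_grid.size) < (
    unnorm_pdf(E_grid, *true) * E_grid / (unnorm_pdf(E_MIN, *true) * E_MIN))
energies = E_grid[accept]

fit = minimize(neg_log_like, x0=[2.5, 3.5, 3.5], args=(energies,),
               method="Nelder-Mead")
print(fit.x)   # estimates of (alpha_1, alpha_2, log10 Ek)
```

Repeating the simulation and fit many times would give the spread of the three estimates, which is the kind of statistical uncertainty the study uses to drive the instrument trade studies.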
Recurrence Density Enhanced Complex Networks for Nonlinear Time Series Analysis
NASA Astrophysics Data System (ADS)
Costa, Diego G. De B.; Reis, Barbara M. Da F.; Zou, Yong; Quiles, Marcos G.; Macau, Elbert E. N.
We introduce a new method, entitled Recurrence Density Enhanced Complex Network (RDE-CN), to properly analyze nonlinear time series. Our method first transforms a recurrence plot into a figure with a reduced number of points while preserving the main and fundamental recurrence properties of the original plot. This resulting figure is then reinterpreted as a complex network, which is further characterized by network statistical measures. We illustrate the computational power of the RDE-CN approach with time series from both the logistic map and experimental fluid flows, which show that our method distinguishes different dynamics as well as traditional recurrence analysis. Therefore, the proposed methodology characterizes the recurrence matrix adequately, while using a reduced set of points from the original recurrence plots.
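[Editor's note] The following is a bare-bones recurrence-network sketch for the logistic map. It skips the density-enhancement (point-reduction) step that defines RDE-CN and simply reinterprets a thresholded recurrence matrix as an adjacency matrix; the threshold choice and network measures are assumptions for illustration.

```python
# A bare-bones recurrence-network sketch for the logistic map. This omits the
# density-enhancement step of RDE-CN and directly treats a thresholded
# recurrence matrix as the adjacency matrix of a complex network.
import numpy as np
import networkx as nx

def logistic_map(r=4.0, x0=0.4, n=500):
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1 - x[i - 1])
    return x

x = logistic_map()
dist = np.abs(x[:, None] - x[None, :])       # distance matrix (1-D state space)
eps = np.quantile(dist, 0.1)                 # recurrence threshold: 10% densest pairs
R = (dist <= eps).astype(int)
np.fill_diagonal(R, 0)                       # no self-loops

G = nx.from_numpy_array(R)
print("mean degree    :", np.mean([d for _, d in G.degree()]))
print("transitivity   :", nx.transitivity(G))
print("avg clustering :", nx.average_clustering(G))
```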
ERIC Educational Resources Information Center
Hilton, Sterling C.; Schau, Candace; Olsen, Joseph A.
2004-01-01
In addition to student learning, positive student attitudes have become an important course outcome for many introductory statistics instructors. To adequately assess changes in mean attitudes across introductory statistics courses, the attitude instruments used should be invariant across administration times. Attitudes toward statistics from 4,910…
Design of partially supervised classifiers for multispectral image data
NASA Technical Reports Server (NTRS)
Jeon, Byeungwoo; Landgrebe, David
1993-01-01
A partially supervised classification problem is addressed, especially when the class definition and corresponding training samples are provided a priori for only one particular class. In practical applications of pattern classification techniques, a frequently observed obstacle is the heavy, often nearly impossible, requirement for representative prior statistical class characteristics of all classes in a given data set. Considering the effort in both time and man-power required to have a well-defined, exhaustive list of classes with a corresponding representative set of training samples, this 'partially' supervised capability would be very desirable, assuming adequate classifier performance can be obtained. Two different classification algorithms are developed to achieve simplicity in classifier design by reducing the requirement of prior statistical information without sacrificing significant classifying capability. The first one is based on optimal significance testing, where the optimal acceptance probability is estimated directly from the data set. In the second approach, the partially supervised classification is considered as a problem of unsupervised clustering with initially one known cluster or class. A weighted unsupervised clustering procedure is developed to automatically define other classes and estimate their class statistics. The operational simplicity thus realized should make these partially supervised classification schemes very viable tools in pattern classification.
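[Editor's note] Below is a simplified one-class sketch in the spirit of the first (significance-test) algorithm: model the single labeled class as Gaussian and accept pixels whose Mahalanobis distance falls under a chi-square cutoff. The original work estimates the optimal acceptance probability from the data; here it is a fixed assumed value (95%), and the simulated "bands" and class parameters are invented.

```python
# A simplified one-class sketch: model the known class as Gaussian and accept
# pixels whose Mahalanobis distance is below a chi-square cutoff. The fixed
# 95% acceptance probability stands in for the data-estimated optimum.
import numpy as np
from scipy import stats

def train_one_class(train):                      # train: (n_samples, n_bands)
    mu = train.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(train, rowvar=False))
    return mu, cov_inv

def classify(pixels, mu, cov_inv, accept_prob=0.95):
    diff = pixels - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis distance
    cutoff = stats.chi2.ppf(accept_prob, df=pixels.shape[1])
    return d2 <= cutoff                          # True = assigned to the known class

rng = np.random.default_rng(2)
known = rng.normal(0, 1, size=(200, 4))          # labeled class, 4 spectral bands
scene = np.vstack([rng.normal(0, 1, (50, 4)),    # more of the known class
                   rng.normal(4, 1, (50, 4))])   # an undeclared "other" class
mu, cov_inv = train_one_class(known)
print(classify(scene, mu, cov_inv).mean())       # fraction accepted into the known class
```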
NASA Astrophysics Data System (ADS)
Maolikul, S.; Kiatgamolchai, S.; Chavarnakul, T.
2012-06-01
Amid the worldwide trend toward information and communication technology (ICT), social life is becoming digital, and portable consumer electronic devices (PCED) powered by conventional battery supplies have been evolving through miniaturization and the integration of various functions. Thermoelectric generators (TEG) were hypothesized to have a potential role as battery chargers serving the growing PCED market. Hence, this paper, focusing mainly on the metropolitan market in Thailand, aimed to conduct architectural innovation foresight and to develop scenarios for the potential exploitation of a PCED battery power supply with a TEG charger converting power from ambient heat sources adjacent to individuals' daily lives. After a technical review and assessment of TEG potential and battery aspects, business research was conducted to analyze PCED consumer behavior in terms of utilization patterns, power-supply shortfalls, and heat sources/sinks encountered in 3 modes: daily life, work, and leisure hobbies. Based on secondary data from the literature and the National Statistical Office of Thailand, quantitative analysis was applied using cluster probability sampling with a sample size of 400 at the 0.05 level of significance. In addition, qualitative analysis based on in-depth interviews was conducted to clarify the rationale behind consumer behavior. A scenario planning technique was also used to generate technological and market trend foresight. An innovation field and a potential scenario for matching technology with the market are proposed in this paper. The ingredients for successful commercialization of a battery power supply with a TEG charger for the PCED market consist of 5 factors: (1) PCED characteristics, (2) potential ambient heat sources/sinks, (3) the battery module, (4) the power management module, and, as the final piece, (5) the characteristics and adequate arrangement of the TEG modules. The foresight outcome for the potential innovations represents a case study in the pilot commercialization of TEG technology for some interesting niche markets in the metropolitan area of Thailand, and can thus guide product development related to TEG for market-driven applications in other, similar conditions and contexts.
Statistical analyses to support guidelines for marine avian sampling. Final report
Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris
2012-01-01
Interest in development of offshore renewable energy facilities has led to a need for high-quality, statistically robust information on marine wildlife distributions. A practical approach is described to estimate the amount of sampling effort required to have sufficient statistical power to identify species-specific “hotspots” and “coldspots” of marine bird abundance and occurrence in an offshore environment divided into discrete spatial units (e.g., lease blocks), where “hotspots” and “coldspots” are defined relative to a reference (e.g., regional) mean abundance and/or occurrence probability for each species of interest. For example, a location with average abundance or occurrence that is three times larger than the mean (3x effect size) could be defined as a “hotspot,” and a location that is three times smaller than the mean (1/3x effect size) as a “coldspot.” The choice of the effect size used to define hot and coldspots will generally depend on a combination of ecological and regulatory considerations. A method is also developed for testing the statistical significance of possible hotspots and coldspots. Both methods are illustrated with historical seabird survey data from the USGS Avian Compendium Database. Our approach consists of five main components: 1. A review of the primary scientific literature on statistical modeling of animal group size and avian count data to develop a candidate set of statistical distributions that have been used or may be useful to model seabird counts. 2. Statistical power curves for one-sample, one-tailed Monte Carlo significance tests of differences of observed small-sample means from a specified reference distribution. These curves show the power to detect "hotspots" or "coldspots" of occurrence and abundance at a range of effect sizes, given assumptions which we discuss. 3. A model selection procedure, based on maximum likelihood fits of models in the candidate set, to determine an appropriate statistical distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
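[Editor's note] A hedged sketch of the Monte Carlo power idea in component 2 follows: for a candidate hotspot whose true mean is some multiple of the regional mean, estimate how often a one-sample, one-tailed Monte Carlo test rejects at alpha = 0.05. Negative binomial counts and all parameter values are placeholder assumptions, not the report's fitted distributions.

```python
# Monte Carlo power curve for detecting a "hotspot": a lease block whose true
# mean count is effect_size times the regional reference mean. Counts are
# modeled as negative binomial with placeholder parameters.
import numpy as np

rng = np.random.default_rng(3)
REF_MEAN, DISPERSION = 2.0, 1.0        # reference (regional) mean count and NB size

def nb_draw(mean, size):
    p = DISPERSION / (DISPERSION + mean)
    return rng.negative_binomial(DISPERSION, p, size)

def power(effect_size=3.0, n_surveys=10, n_sims=2000, n_null=2000, alpha=0.05):
    # null distribution of the sample mean under the reference mean
    null_means = nb_draw(REF_MEAN, (n_null, n_surveys)).mean(axis=1)
    crit = np.quantile(null_means, 1 - alpha)            # one-tailed cutoff
    obs_means = nb_draw(effect_size * REF_MEAN, (n_sims, n_surveys)).mean(axis=1)
    return (obs_means > crit).mean()

for n in (5, 10, 20, 40):
    print(f"{n:>3} surveys per block: power ~ {power(n_surveys=n):.2f}")
```

Sweeping the effect size as well as the number of surveys reproduces the kind of power curves described in the report, under these simplified assumptions.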
ON THE FOURIER AND WAVELET ANALYSIS OF CORONAL TIME SERIES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Auchère, F.; Froment, C.; Bocchialini, K.
Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence and Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence and Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.
On the Fourier and Wavelet Analysis of Coronal Time Series
NASA Astrophysics Data System (ADS)
Auchère, F.; Froment, C.; Bocchialini, K.; Buchlin, E.; Solomon, J.
2016-07-01
Using Fourier and wavelet analysis, we critically re-assess the significance of our detection of periodic pulsations in coronal loops. We show that the proper identification of the frequency dependence and statistical properties of the different components of the power spectra provides a strong argument against the common practice of data detrending, which tends to produce spurious detections around the cut-off frequency of the filter. In addition, the white and red noise models built into the widely used wavelet code of Torrence & Compo cannot, in most cases, adequately represent the power spectra of coronal time series, thus also possibly causing false positives. Both effects suggest that several reports of periodic phenomena should be re-examined. The Torrence & Compo code nonetheless effectively computes rigorous confidence levels if provided with pertinent models of mean power spectra, and we describe the appropriate manner in which to call its core routines. We recall the meaning of the default confidence levels output from the code, and we propose new Monte-Carlo-derived levels that take into account the total number of degrees of freedom in the wavelet spectra. These improvements allow us to confirm that the power peaks that we detected have a very low probability of being caused by noise.
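[Editor's note] The sketch below illustrates the general principle discussed above: fit a model of the mean power spectrum (here, an assumed power law plus a white-noise floor) to the periodogram and flag peaks that exceed the chi-squared (2 d.o.f.) confidence level relative to that model. It is not the Torrence & Compo wavelet machinery, and the synthetic series, cadence, and model form are assumptions.

```python
# Fit a mean-spectrum model (power law + constant floor) to a periodogram and
# threshold peaks at the 95% chi^2(2 d.o.f.) confidence level. Illustrative
# only; not the Torrence & Compo wavelet code used by the authors.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import chi2

rng = np.random.default_rng(4)
n, dt = 2048, 12.0                                   # samples and cadence (s), assumed
t = np.arange(n) * dt
# synthetic series: red-ish noise plus a periodic signal with a 1000 s period
x = np.cumsum(rng.normal(size=n)) * 0.05 + 0.5 * np.sin(2 * np.pi * t / 1000.0)
x -= x.mean()

freq = np.fft.rfftfreq(n, dt)[1:]
power = (np.abs(np.fft.rfft(x))**2)[1:] * 2 * dt / n  # one-sided periodogram

def mean_model(f, a, s, c):                           # power law + white-noise floor
    return a * f**(-s) + c

p0 = [power[0] * freq[0]**2, 2.0, np.median(power[freq > freq.max() / 2])]
pars, _ = curve_fit(mean_model, freq, power, p0=p0, bounds=(0, np.inf))

threshold = mean_model(freq, *pars) * chi2.ppf(0.95, 2) / 2   # 95% confidence level
peaks = freq[power > threshold]
print("frequencies above the 95% level (Hz):", np.round(peaks, 5))
```

The key point mirrors the abstract: the confidence level is only meaningful relative to a pertinent model of the mean spectrum, so a poor background model (e.g., pure white noise for a red-noise series) yields spurious "significant" peaks.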
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling
Wood, John
2017-01-01
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered—some very seriously so—but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. PMID:28706080
5 CFR 297.401 - Conditions of disclosure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... with advance adequate written assurance that the record will be used solely as a statistical research... records; and (ii) Certification that the records will be used only for statistical purposes. (2) These... information from records released for statistical purposes, the system manager will reasonably ensure that the...
Statistical Risk Estimation for Communication System Design --- A Preliminary Look
NASA Astrophysics Data System (ADS)
Babuscia, A.; Cheung, K.-M.
2012-02-01
Spacecraft are complex systems that involve different subsystems with multiple relationships among them. For these reasons, the design of a spacecraft is a time-evolving process that starts from requirements and evolves over time across different design phases. During this process, a lot of changes can happen. They can affect mass and power at the component level, at the subsystem level, and even at the system level. Each spacecraft has to respect the overall constraints in terms of mass and power: for this reason, it is important to be sure that the design does not exceed these limitations. Current practice in system models primarily deals with this problem, allocating margins on individual components and on individual subsystems. However, a statistical characterization of the fluctuations in mass and power of the overall system (i.e., the spacecraft) is missing. This lack of adequate statistical characterization would result in a risky spacecraft design that might not fit the mission constraints and requirements, or in a conservative design that might not fully utilize the available resources. Due to the complexity of the problem and to the different expertise and knowledge required to develop a complete risk model for a spacecraft design, this article is focused on risk estimation for a specific spacecraft subsystem: the communication subsystem. The current research aims to be a proof of concept of a risk-based design optimization approach, which can then be further expanded to the design of other subsystems as well as to the whole spacecraft. The objective of this research is to develop a mathematical approach to quantify the likelihood that the major design drivers of mass and power of a space communication system would meet the spacecraft and mission requirements and constraints through the mission design lifecycle. Using this approach, the communication system designers will be able to evaluate and to compare different communication architectures in a risk trade-off perspective. The results described in this article include a baseline communication system design tool and a statistical characterization of the design risks through a combination of historical mission data and expert opinion contributions. An application example of the communication system of a university spacecraft is presented. IPNPR Volume 42-189.
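[Editor's note] The following is a hedged Monte Carlo sketch of the risk-estimation idea: treat each communication-subsystem element's mass and power as a distribution built from a baseline estimate and a growth factor standing in for historical scatter and expert opinion, then report the probability that the totals stay within allocations. Element names, growth distributions, and allocation numbers are invented placeholders, not the article's model.

```python
# Propagate mass/power uncertainty of communication-subsystem elements by
# Monte Carlo and report the probability of meeting the allocations. All
# numbers and the lognormal growth model are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

# baseline estimates per element: (mass kg, power W)
elements = {
    "transponder":  (3.0, 12.0),
    "power_amp":    (1.5, 35.0),
    "antenna":      (2.0,  0.0),
    "harness_misc": (1.0,  2.0),
}
GROWTH_SIGMA = 0.15                    # ~15% lognormal spread, assumed

mass = np.zeros(N)
power = np.zeros(N)
for m0, p0 in elements.values():
    mass += m0 * rng.lognormal(mean=0.0, sigma=GROWTH_SIGMA, size=N)
    power += p0 * rng.lognormal(mean=0.0, sigma=GROWTH_SIGMA, size=N)

MASS_ALLOC, POWER_ALLOC = 8.5, 55.0    # spacecraft allocations, assumed
ok_mass = mass <= MASS_ALLOC
ok_power = power <= POWER_ALLOC
print("P(mass within allocation)  :", ok_mass.mean())
print("P(power within allocation) :", ok_power.mean())
print("P(both within allocation)  :", (ok_mass & ok_power).mean())
```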
ERIC Educational Resources Information Center
Bargagliotti, Anna E.
2012-01-01
Statistics and probability have become an integral part of mathematics education. Therefore it is important to understand whether curricular materials adequately represent statistical ideas. The "Guidelines for Assessment and Instruction in Statistics Education" (GAISE) report (Franklin, Kader, Mewborn, Moreno, Peck, Perry, & Scheaffer, 2007),…
Evaluating and Reporting Statistical Power in Counseling Research
ERIC Educational Resources Information Center
Balkin, Richard S.; Sheperis, Carl J.
2011-01-01
Despite recommendations from the "Publication Manual of the American Psychological Association" (6th ed.) to include information on statistical power when publishing quantitative results, authors seldom include analysis or discussion of statistical power. The rationale for discussing statistical power is addressed, approaches to using "G*Power" to…
Samples in applied psychology: over a decade of research in review.
Shen, Winny; Kiger, Thomas B; Davies, Stacy E; Rasch, Rena L; Simon, Kara M; Ones, Deniz S
2011-09-01
This study examines sample characteristics of articles published in Journal of Applied Psychology (JAP) from 1995 to 2008. At the individual level, the overall median sample size over the period examined was approximately 173, which is generally adequate for detecting the average magnitude of effects of primary interest to researchers who publish in JAP. Samples using higher units of analyses (e.g., teams, departments/work units, and organizations) had lower median sample sizes (Mdn ≈ 65), yet were arguably robust given typical multilevel design choices of JAP authors despite the practical constraints of collecting data at higher units of analysis. A substantial proportion of studies used student samples (~40%); surprisingly, median sample sizes for student samples were smaller than working adult samples. Samples were more commonly occupationally homogeneous (~70%) than occupationally heterogeneous. U.S. and English-speaking participants made up the vast majority of samples, whereas Middle Eastern, African, and Latin American samples were largely unrepresented. On the basis of study results, recommendations are provided for authors, editors, and readers, which converge on 3 themes: (a) appropriateness and match between sample characteristics and research questions, (b) careful consideration of statistical power, and (c) the increased popularity of quantitative synthesis. Implications are discussed in terms of theory building, generalizability of research findings, and statistical power to detect effects. PsycINFO Database Record (c) 2011 APA, all rights reserved
18 CFR 284.503 - Market-power determination.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 18 Conservation of Power and Water Resources 1 2012-04-01 2012-04-01 false Market-power determination. 284.503 Section 284.503 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... that complies with the following: (a) The applicant must set forth its specific request and adequately...
18 CFR 284.503 - Market-power determination.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 18 Conservation of Power and Water Resources 1 2013-04-01 2013-04-01 false Market-power determination. 284.503 Section 284.503 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... that complies with the following: (a) The applicant must set forth its specific request and adequately...
18 CFR 284.503 - Market-power determination.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Market-power determination. 284.503 Section 284.503 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... that complies with the following: (a) The applicant must set forth its specific request and adequately...
18 CFR 284.503 - Market-power determination.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 18 Conservation of Power and Water Resources 1 2011-04-01 2011-04-01 false Market-power determination. 284.503 Section 284.503 Conservation of Power and Water Resources FEDERAL ENERGY REGULATORY... that complies with the following: (a) The applicant must set forth its specific request and adequately...
Power-up: A Reanalysis of 'Power Failure' in Neuroscience Using Mixture Modeling.
Nord, Camilla L; Valton, Vincent; Wood, John; Roiser, Jonathan P
2017-08-23
Recently, evidence for endemically low statistical power has cast neuroscience findings into doubt. If low statistical power plagues neuroscience, then this reduces confidence in the reported effects. However, if statistical power is not uniformly low, then such blanket mistrust might not be warranted. Here, we provide a different perspective on this issue, analyzing data from an influential study reporting a median power of 21% across 49 meta-analyses (Button et al., 2013). We demonstrate, using Gaussian mixture modeling, that the sample of 730 studies included in that analysis comprises several subcomponents so the use of a single summary statistic is insufficient to characterize the nature of the distribution. We find that statistical power is extremely low for studies included in meta-analyses that reported a null result and that it varies substantially across subfields of neuroscience, with particularly low power in candidate gene association studies. Therefore, whereas power in neuroscience remains a critical issue, the notion that studies are systematically underpowered is not the full story: low power is far from a universal problem. SIGNIFICANCE STATEMENT Recently, researchers across the biomedical and psychological sciences have become concerned with the reliability of results. One marker for reliability is statistical power: the probability of finding a statistically significant result given that the effect exists. Previous evidence suggests that statistical power is low across the field of neuroscience. Our results present a more comprehensive picture of statistical power in neuroscience: on average, studies are indeed underpowered, some very seriously so, but many studies show acceptable or even exemplary statistical power. We show that this heterogeneity in statistical power is common across most subfields in neuroscience. This new, more nuanced picture of statistical power in neuroscience could affect not only scientific understanding, but potentially policy and funding decisions for neuroscience research. Copyright © 2017 Nord, Valton et al.
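[Editor's note] A sketch of the mixture-modeling step follows: fit Gaussian mixtures with 1 to 4 components to a vector of study-level power estimates and choose the number of components by BIC. The data below are simulated, not the 730 studies reanalyzed from Button et al. (2013); the component means and weights are invented.

```python
# Fit Gaussian mixtures to study-level power estimates and pick the number of
# components by BIC, mirroring the analysis strategy described above.
# The power values are simulated placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
power = np.concatenate([rng.normal(0.12, 0.05, 400),   # a low-power subcomponent
                        rng.normal(0.45, 0.10, 200),
                        rng.normal(0.85, 0.07, 130)])
power = np.clip(power, 0.01, 0.99).reshape(-1, 1)

fits = {k: GaussianMixture(n_components=k, random_state=0).fit(power)
        for k in range(1, 5)}
best_k = min(fits, key=lambda k: fits[k].bic(power))
best = fits[best_k]
print("components chosen by BIC:", best_k)
print("component means         :", np.round(best.means_.ravel(), 2))
print("mixing weights          :", np.round(best.weights_, 2))
```

The point of the exercise, as in the paper, is that a single median would obscure the separate low-power and high-power subcomponents recovered by the mixture fit.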
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation, as well as permutation and gamma approximations, has been proposed to determine molecule-level significance. These current approaches have serious drawbacks as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
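[Editor's note] To show what the statistic itself is, here is a minimal rank-product sketch with a brute-force permutation p-value on simulated data. The paper's contribution is analytic bounds and a fast approximation, which this simple permutation stand-in does not reproduce; the permutation count and data are assumptions (the authors' own code is in R, linked above, while this sketch is in Python for consistency with the other examples in this compilation).

```python
# Per-molecule rank products across replicates plus a pooled permutation
# p-value. Brute-force illustration of the statistic only; not the paper's
# analytic bounds or approximation.
import numpy as np

rng = np.random.default_rng(7)
n_mol, n_rep = 1000, 5
data = rng.normal(size=(n_mol, n_rep))          # e.g. log fold changes per replicate
data[:10] += 2.0                                 # 10 truly up-regulated molecules

def rank_product(x):
    # rank 1 = largest value in each replicate; statistic = geometric mean of ranks
    ranks = x.shape[0] - np.argsort(np.argsort(x, axis=0), axis=0)
    return np.exp(np.log(ranks).mean(axis=1))

rp_obs = rank_product(data)

n_perm = 200
null = np.empty((n_perm, n_mol))
for b in range(n_perm):
    null[b] = rank_product(rng.normal(size=(n_mol, n_rep)))

pooled_null = np.sort(null.ravel())              # pooled null rank products
p_perm = np.searchsorted(pooled_null, rp_obs, side="right") / pooled_null.size
print(np.round(p_perm[:10], 4))                  # small p-values for the true positives
```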
Sampling populations of humans across the world: ELSI issues.
Knoppers, Bartha Maria; Zawati, Ma'n H; Kirby, Emily S
2012-01-01
There are an increasing number of population studies collecting data and samples to illuminate gene-environment contributions to disease risk and health. The rising affordability of innovative technologies capable of generating large amounts of data helps achieve statistical power and has paved the way for new international research collaborations. Most data and sample collections can be grouped into longitudinal, disease-specific, or residual tissue biobanks, with accompanying ethical, legal, and social issues (ELSI). Issues pertaining to consent, confidentiality, and oversight cannot be examined using a one-size-fits-all approach; the particularities of each biobank must be taken into account. It remains to be seen whether current governance approaches will be adequate to handle the impact of next-generation sequencing technologies on communication with participants in population biobanking studies.
NASA Astrophysics Data System (ADS)
Devetak, Iztok; Aleksij Glažar, Saša
2010-08-01
Submicrorepresentations (SMRs) are a powerful tool for identifying misconceptions of chemical concepts and for generating proper mental models of chemical phenomena in students' long-term memory during chemical education. The main purpose of the study was to determine which independent variables (gender, formal reasoning abilities, visualization abilities, and intrinsic motivation for learning chemistry) have the greatest influence on students' reading and drawing of SMRs. A total of 386 secondary school students (aged 16.3 years) participated in the study. The instruments used in the study were: a test of Chemical Knowledge, the Test of Logical Thinking, two tests of visualization abilities (Patterns and Rotations), and a questionnaire on Intrinsic Motivation for Learning Science. The results show moderate, but statistically significant, correlations between students' intrinsic motivation, formal reasoning abilities, and chemical knowledge at the submicroscopic level based on reading and drawing SMRs. Visualization abilities are not statistically significantly correlated with students' success on items that involve reading or drawing SMRs. It can also be concluded that there is a statistically significant difference between male and female students in solving problems that include reading or drawing SMRs. Based on these statistical results and content analysis of the sample problems, several educational strategies can be implemented for students to develop adequate mental models of chemical concepts on all three levels of representation.
9 CFR 3.50 - Facilities, general.
Code of Federal Regulations, 2012 CFR
2012-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment and Transportation of Rabbits...) Water and electric power. Reliable and adequate electric power, if required to comply with other...
9 CFR 3.50 - Facilities, general.
Code of Federal Regulations, 2014 CFR
2014-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment and Transportation of Rabbits...) Water and electric power. Reliable and adequate electric power, if required to comply with other...
9 CFR 3.50 - Facilities, general.
Code of Federal Regulations, 2013 CFR
2013-01-01
... WELFARE STANDARDS Specifications for the Humane Handling, Care, Treatment and Transportation of Rabbits...) Water and electric power. Reliable and adequate electric power, if required to comply with other...
14 CFR 171.113 - Installation requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... facilities. (c) The facility must have a reliable source of suitable primary power, either from a power distribution system or locally generated. Also, adequate power capacity must be provided for operation of test... a facility will be required to have standby power for the SDF and monitor accessories to supplement...
14 CFR 171.113 - Installation requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... facilities. (c) The facility must have a reliable source of suitable primary power, either from a power distribution system or locally generated. Also, adequate power capacity must be provided for operation of test... a facility will be required to have standby power for the SDF and monitor accessories to supplement...
14 CFR 171.113 - Installation requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... facilities. (c) The facility must have a reliable source of suitable primary power, either from a power distribution system or locally generated. Also, adequate power capacity must be provided for operation of test... a facility will be required to have standby power for the SDF and monitor accessories to supplement...
30 CFR 75.517 - Power wires and cables; insulation and protection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 30 Mineral Resources 1 2014-07-01 2014-07-01 false Power wires and cables; insulation and...-General § 75.517 Power wires and cables; insulation and protection. [Statutory Provisions] Power wires and cables, except trolley wires, trolley feeder wires, and bare signal wires, shall be insulated adequately...
30 CFR 75.517 - Power wires and cables; insulation and protection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 30 Mineral Resources 1 2013-07-01 2013-07-01 false Power wires and cables; insulation and...-General § 75.517 Power wires and cables; insulation and protection. [Statutory Provisions] Power wires and cables, except trolley wires, trolley feeder wires, and bare signal wires, shall be insulated adequately...
30 CFR 75.517 - Power wires and cables; insulation and protection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 30 Mineral Resources 1 2012-07-01 2012-07-01 false Power wires and cables; insulation and...-General § 75.517 Power wires and cables; insulation and protection. [Statutory Provisions] Power wires and cables, except trolley wires, trolley feeder wires, and bare signal wires, shall be insulated adequately...
30 CFR 75.517 - Power wires and cables; insulation and protection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Power wires and cables; insulation and...-General § 75.517 Power wires and cables; insulation and protection. [Statutory Provisions] Power wires and cables, except trolley wires, trolley feeder wires, and bare signal wires, shall be insulated adequately...
30 CFR 77.508 - Lightning arresters, ungrounded and exposed power conductors and telephone wires.
Code of Federal Regulations, 2011 CFR
2011-07-01
... power conductors and telephone wires. 77.508 Section 77.508 Mineral Resources MINE SAFETY AND HEALTH... arresters, ungrounded and exposed power conductors and telephone wires. All ungrounded, exposed power conductors and telephone wires shall be equipped with suitable lightning arresters which are adequately...
30 CFR 75.517 - Power wires and cables; insulation and protection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Power wires and cables; insulation and...-General § 75.517 Power wires and cables; insulation and protection. [Statutory Provisions] Power wires and cables, except trolley wires, trolley feeder wires, and bare signal wires, shall be insulated adequately...
30 CFR 77.508 - Lightning arresters, ungrounded and exposed power conductors and telephone wires.
Code of Federal Regulations, 2010 CFR
2010-07-01
... power conductors and telephone wires. 77.508 Section 77.508 Mineral Resources MINE SAFETY AND HEALTH... arresters, ungrounded and exposed power conductors and telephone wires. All ungrounded, exposed power conductors and telephone wires shall be equipped with suitable lightning arresters which are adequately...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-08
... Company, Davis-Besse Nuclear Power Station; Environmental Assessment And Finding of No Significant Impact... operation of the Davis-Besse Nuclear Power Station, Unit 1 (DBNPS), located in Ottawa County, Ohio. In... the reactor coolant pressure boundary of light-water nuclear power reactors provide adequate margins...
Short-term Periodization Models: Effects on Strength and Speed-strength Performance.
Hartmann, Hagen; Wirth, Klaus; Keiner, Michael; Mickel, Christoph; Sander, Andre; Szilvas, Elena
2015-10-01
Dividing training objectives into consecutive phases to gain morphological adaptations (hypertrophy phase) and neural adaptations (strength and power phases) is called strength-power periodization (SPP). These phases differ in program variables (volume, intensity, and exercise choice or type) and use stepwise intensity progression and concomitant decreasing volume, converging to peak intensity (peaking phase). Undulating periodization strategies rotate these program variables in a bi-weekly, weekly, or daily fashion. The following review addresses the effects of different short-term periodization models on strength and speed-strength both with subjects of different performance levels and with competitive athletes from different sports who use a particular periodization model during off-season, pre-season, and in-season conditioning. In most periodization studies, it is obvious that the strength endurance sessions are characterized by repetition zones (12-15 repetitions) that induce muscle hypertrophy in persons with a low performance level. Strictly speaking, when examining subjects with a low training level, many periodization studies include mainly hypertrophy sessions interspersed with heavy strength/power sessions. Studies have demonstrated equal or statistically significant higher gains in maximal strength for daily undulating periodization compared with SPP in subjects with a low to moderate performance level. The relatively short intervention period and the lack of concomitant sports conditioning call into question the practical value of these findings for competitive athletes. Possibly owing to differences in mesocycle length, conditioning programs, and program variables, competitive athletes either maintained or improved strength and/or speed-strength performance by integrating daily undulating periodization and SPP during off-season, pre-season and in-season conditioning. In high-performance sports, high-repetition strength training (>15) should be avoided because it does not provide an adequate training stimulus for gains in muscle cross-sectional area and strength performance. High-volume circuit strength training performed over 2 years negatively affected the development of the power output and maximal strength of the upper extremities in professional rugby players. Indeed, meta-analyses and results with weightlifters, American Football players, and throwers confirm the necessity of the habitual use of ≥80% 1 RM: (1) to improve maximal strength during the off-season and in-season in American Football, (2) to reach peak performance in maximal strength and vertical jump power during tapering in track-and-field, and (3) to produce hypertrophy and strength improvements in advanced athletes. The integration and extent of hypertrophy strength training in in-season conditioning depend on the duration of the contest period, the frequency of the contests, and the proportion of the conditioning program. Based on the literature, 72 h between hypertrophy strength training and strength-power training should be provided to allow for adequate regeneration times and therefore maximal stimulus intensities in training. This conclusion is only valid if the muscle is not trained otherwise during this regeneration phase. Thus, rotating hypertrophy and strength-power sessions in a microcycle during the season is a viable option. 
Comparative studies in competitive athletes who integrated strength training during pre-season conditioning confirm a tendency for gains in explosive strength and statistically significant improvements in medicine ball throw through SPP but not through daily undulating periodization. These findings indicate that to maximize the speed-strength in the short term (peaking), elite athletes should perform strength-power training twice per week. It is possible to perform a single strength-power session with the method of maximum explosive strength actions moving high-weight loads (90% 1 repetition maximum [RM]) at least 1-2 days before competition because of the shorter regeneration times and potentiation effects. Compared with ballistic strength training (30% 1 RM), this method has been shown to provide statistically superior gains in maximal strength, peak power, impulse size, and explosive strength during tapering in track-and-field throwers. The speed-strength performance in drop jumps of strength-trained subjects showed potentiation effects 48-148 h after a single strength-power training session. Regarding neuromuscular performance, plyometric exercises can even be performed after strength-power training on the same day if a minimum rest period of 3 h is provided.
ERIC Educational Resources Information Center
Sinharay, Sandip
2017-01-01
Karabatsos compared the power of 36 person-fit statistics using receiver operating characteristics curves and found the "H[superscript T]" statistic to be the most powerful in identifying aberrant examinees. He found three statistics, "C", "MCI", and "U3", to be the next most powerful. These four statistics,…
Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall
2018-01-01
Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model for repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes required to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy of the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
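[Editor's note] The sketch below is a simplified illustration of the weighting idea, not the authors' DIAN-specific mixed model: under the assumption that delta is a known treatment effect on each outcome's change score and Sigma is the known covariance of those change scores, the composite weights minimizing the required sample size are proportional to Sigma^-1 delta, and the per-arm n follows from the resulting standardized effect. The effect and covariance values are invented.

```python
# Optimal composite weights for a two-arm comparison of a weighted combination
# of outcomes: w proportional to Sigma^-1 delta maximizes the standardized
# effect sqrt(delta' Sigma^-1 delta), minimizing the required n per arm.
import numpy as np
from scipy import stats

delta = np.array([0.25, 0.15, 0.10])          # assumed effects on 3 cognitive outcomes
Sigma = np.array([[1.00, 0.50, 0.30],
                  [0.50, 1.00, 0.40],
                  [0.30, 0.40, 1.00]])         # assumed covariance of change scores

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    return int(np.ceil(2 * (z / effect_size) ** 2))

w = np.linalg.solve(Sigma, delta)              # optimal weights (up to scale)
w /= w.sum()
composite_es = np.sqrt(delta @ np.linalg.solve(Sigma, delta))   # best achievable effect
single_es = delta / np.sqrt(np.diag(Sigma))                     # per-outcome effects

print("weights                        :", np.round(w, 3))
print("n per arm, weighted composite  :", n_per_arm(composite_es))
print("n per arm, best single outcome :", n_per_arm(single_es.max()))
```

Because the composite's standardized effect can never be smaller than that of the best single outcome, the combined endpoint requires no more, and typically fewer, participants, which is the sample-size reduction the abstract reports.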
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2013 CFR
2013-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2014 CFR
2014-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2010 CFR
2010-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2012 CFR
2012-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
14 CFR 25.1707 - System separation: EWIS.
Code of Federal Regulations, 2011 CFR
2011-01-01
... installed to ensure adequate physical separation and electrical isolation so that damage to circuits... ensure adequate physical separation and electrical isolation so that a fault in any one airplane power... minimize potential for abrasion/chafing, vibration damage, and other types of mechanical damage. ...
ERIC Educational Resources Information Center
Social Policy, 1976
1976-01-01
Priorities on the agenda include fair representation and participation in the political process, equal education and training, meaningful work and adequate compensation, equal access to economic power, adequate housing, physical safety, and fair treatment by and equal access to media and the arts. (Author/AM)
The Relationship between Adequate Yearly Progress and the Quality of Professional Development
ERIC Educational Resources Information Center
Wolff, Lori A.; McClelland, Susan S.; Stewart, Stephanie E.
2010-01-01
Based on publicly available data, the study examined the relationship between adequate yearly progress status and teachers' perceptions of the quality of their professional development. The sample included responses of 5,558 teachers who completed the questionnaire in the 2005-2006 school year. Results of the statistical analysis show a…
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
This document replaces Cape Kennedy empirical wind component statistics which are presently being used for aerospace engineering applications that require component wind probabilities for various flight azimuths and selected altitudes. The normal (Gaussian) distribution is presented as an adequate statistical model to represent component winds at Cape Kennedy. Head-, tail-, and crosswind components are tabulated for all flight azimuths for altitudes from 0 to 70 km by monthly reference periods. Wind components are given for 11 selected percentiles ranging from 0.135 percent to 99.865 percent for each month. Results of statistical goodness-of-fit tests are presented to verify the use of the Gaussian distribution as an adequate model to represent component winds at Cape Kennedy, Florida.
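[Editor's note] The report's tables are not reproduced here; the sketch below only illustrates the component-wind idea: resolve u (eastward) and v (northward) winds along and across a flight azimuth, fit a normal distribution, and read off the extreme percentiles quoted above (0.135% to 99.865%). The data are simulated and the sign convention (positive = tailwind) is an assumption.

```python
# Resolve winds into along-track (tailwind) and cross-track components for a
# flight azimuth, then report Gaussian percentiles. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
u = rng.normal(5.0, 8.0, 5000)        # simulated monthly eastward winds (m/s)
v = rng.normal(1.0, 6.0, 5000)        # simulated monthly northward winds (m/s)

def components(u, v, flight_azimuth_deg):
    a = np.deg2rad(flight_azimuth_deg)          # azimuth clockwise from north
    tail = u * np.sin(a) + v * np.cos(a)        # along-track (positive = tailwind)
    cross = u * np.cos(a) - v * np.sin(a)       # across-track
    return tail, cross

tail, cross = components(u, v, flight_azimuth_deg=90.0)
mu, sd = tail.mean(), tail.std(ddof=1)
for pct in (0.135, 5, 50, 95, 99.865):
    print(f"{pct:7.3f}th percentile tailwind: {stats.norm.ppf(pct / 100, mu, sd):6.2f} m/s")
```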
30 CFR 57.12008 - Insulation and fittings for power wires and cables.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Insulation and fittings for power wires and... NONMETAL MINES Electricity Surface and Underground § 57.12008 Insulation and fittings for power wires and cables. Power wires and cables shall be insulated adequately where they pass into or out of electrical...
30 CFR 57.12008 - Insulation and fittings for power wires and cables.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Insulation and fittings for power wires and... NONMETAL MINES Electricity Surface and Underground § 57.12008 Insulation and fittings for power wires and cables. Power wires and cables shall be insulated adequately where they pass into or out of electrical...
Explanation of random experiment scheduling and its application to space station analysis
NASA Technical Reports Server (NTRS)
Moore, J. E.
1970-01-01
The capability of the McDonnell-Douglas Phase B space station concept to complete the Blue Book Experiment program is analyzed, and the Random Experiment Program with Resource Impact (REPRI), which was used to generate the data, is described. The results indicate that station manpower and electrical power are the two resources which will constrain the amount of the Blue Book program that the station can complete. The station experiment program and its resource requirements are sensitive to the manpower and electrical power levels of 13.5 men and 11 kilowatts. Continuous artificial gravity experiments have much less impact on the experiment program than experiments using separate artificial gravity periods. Station storage volume presently allocated for the FPE's and their supplies (1600 cu ft) is more than adequate. The REPRI program uses the Monte Carlo technique to generate a set of feasible experiment schedules for a space station. The schedules are statistically analyzed to determine the impact of the station experiment program resource requirements on the station concept. Also, the sensitivity of the station concept to one or more resources is assessed.
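[Editor's note] A toy sketch of the Monte Carlo scheduling idea follows: randomly order candidate experiments, greedily pack each period under crew-hour and power limits, repeat many times, and summarize the fraction completed. The experiment list, resource limits, and packing rule are invented placeholders, not the Blue Book program or the REPRI code.

```python
# Monte Carlo experiment scheduling under crew-hour and power constraints,
# repeated many times to give a distribution of completion fractions.
# All numbers and the greedy packing rule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(9)
# (crew-hours per period, kW) for 12 hypothetical experiments
experiments = rng.uniform([10, 0.5], [60, 3.0], size=(12, 2))
CREW_HOURS, POWER_KW, N_PERIODS = 120.0, 11.0, 4

def one_schedule():
    done = 0
    remaining = list(rng.permutation(len(experiments)))
    for _ in range(N_PERIODS):
        crew_left, power_left = CREW_HOURS, POWER_KW
        for idx in remaining[:]:
            c, p = experiments[idx]
            if c <= crew_left and p <= power_left:
                crew_left -= c
                power_left -= p
                remaining.remove(idx)
                done += 1
    return done / len(experiments)

completed = np.array([one_schedule() for _ in range(2000)])
print(f"completed fraction: mean {completed.mean():.2f}, "
      f"5th-95th percentile {np.percentile(completed, [5, 95])}")
```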
Divergence of activity expansions: Is it actually a problem?
NASA Astrophysics Data System (ADS)
Ushcats, M. V.; Bulavin, L. A.; Sysoev, V. M.; Ushcats, S. Yu.
2017-12-01
For realistic interaction models, which include both molecular attraction and repulsion (e.g., Lennard-Jones, modified Lennard-Jones, Morse, and square-well potentials), the asymptotic behavior of the virial expansions for pressure and density in powers of activity has been studied taking power terms of high orders into account on the basis of the known finite-order irreducible integrals as well as the recent approximations of infinite irreducible series. Even in the divergence region (at subcritical temperatures), this behavior stays thermodynamically adequate (in contrast to the behavior of the virial equation of state with the same set of irreducible integrals) and corresponds to the beginning of the first-order phase transition: the divergence yields the jump (discontinuity) in density at constant pressure and chemical potential. In general, it provides a statistical explanation of the condensation phenomenon, but for liquid or solid states, the physically proper description (which can turn the infinite discontinuity into a finite jump of density) still needs further study of high-order cluster integrals and, especially, their real dependence on the system volume (density).
Power Plant Water Intake Assessment.
ERIC Educational Resources Information Center
Zeitoun, Ibrahim H.; And Others
1980-01-01
In order to adequately assess the impact of power plant cooling water intake on an aquatic ecosystem, total ecosystem effects must be considered, rather than merely numbers of impinged or entrained organisms. (Author/RE)
Kılıç, D; Göksu, E; Kılıç, T; Buyurgan, C S
2018-05-01
The aim of this randomized cross-over study was to compare one-minute and two-minute continuous chest compressions in terms of chest-compression-only CPR quality metrics on a mannequin model in the ED. Thirty-six emergency medicine residents participated in this study. In the 1-minute group, there was no statistically significant difference in the mean compression rate (p=0.83), mean compression depth (p=0.61), good compressions (p=0.31), the percentage of complete release (p=0.07), adequate compression depth (p=0.11) or the percentage of good rate (p=0.51) over the four-minute time period. Only flow time was statistically significant among the 1-minute intervals (p<0.001). In the 2-minute group, the mean compression depth (p=0.19), good compression (p=0.92), the percentage of complete release (p=0.28), adequate compression depth (p=0.96), and the percentage of good rate (p=0.09) were not statistically significant over time. In this group, the differences in the number of compressions (248±31 vs 253±33, p=0.01), mean compression rates (123±15 vs 126±17, p=0.01) and flow time (p=0.001) were statistically significant across the two-minute intervals. There was no statistically significant difference in the mean number of chest compressions per minute, mean chest compression depth, the percentage of good compressions, complete release, adequate chest compression depth and percentage of good compression between the 1-minute and 2-minute groups. There was no statistically significant difference in the quality metrics of chest compressions between the 1- and 2-minute chest-compression-only groups. Copyright © 2017 Elsevier Inc. All rights reserved.
Statistical Power in Meta-Analysis
ERIC Educational Resources Information Center
Liu, Jin
2015-01-01
Statistical power is important in a meta-analysis study, although few studies have examined the performance of simulated power in meta-analysis. The purpose of this study is to inform researchers about statistical power estimation on two sample mean difference test under different situations: (1) the discrepancy between the analytical power and…
2016-12-01
KS and AD Statistical Power via Monte Carlo Simulation. Statistical power is the probability of correctly rejecting the null hypothesis when the... Determining the Statistical Power... real-world data to test the accuracy of the simulation. Statistical comparison of these metrics can be necessary when making such a determination.
Importance of hard coal in electricity generation in Poland
NASA Astrophysics Data System (ADS)
Plewa, Franciszek; Strozik, Grzegorz
2017-11-01
The Polish energy sector is facing a number of challenges, in particular as regards the reconstruction of production potential, diversification of energy sources, environmental issues, adequate fuel supplies and other factors. Mandatory implementation of the Europe 2020 strategy in terms of the “3x20” targets (20% reduction of greenhouse gases, 20% of energy from renewable sources, and 20% increase of efficiency in energy production) requires fast decisions, which have to be coordinated with energy security issues, increasing demand for electric energy, and other factors. In Poland almost 80% of power is installed in coal-fired power plants, and energy from hard coal is relatively less expensive than from other sources, especially renewables. Most renewable power plants are unable to generate power in amounts competitive with coal-fired power stations and are highly expensive, which leads to high prices of electric energy. Alternatively, a new generation of coal-fired power plants is able to significantly increase efficiency, reduce carbon dioxide emissions, and generate less expensive electric power in amounts adequate to the demands of the country.
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of statistics as measures of the degree of experimental precision for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
Bazant, Zdenĕk P; Pang, Sze-Dai
2006-06-20
In mechanical design as well as protection from various natural hazards, one must ensure an extremely low failure probability such as 10⁻⁶. How to achieve that goal is adequately understood only for the limiting cases of brittle or ductile structures. Here we present a theory to do that for the transitional class of quasibrittle structures, having brittle constituents and characterized by nonnegligible size of material inhomogeneities. We show that the probability distribution of strength of the representative volume element of material is governed by the Maxwell-Boltzmann distribution of atomic energies and the stress dependence of activation energy barriers; that it is statistically modeled by a hierarchy of series and parallel couplings; and that it consists of a broad Gaussian core having a grafted far-left power-law tail with zero threshold and amplitude depending on temperature and load duration. With increasing structure size, the Gaussian core shrinks and the Weibull tail expands according to the weakest-link model for a finite chain of representative volume elements. The model captures experimentally observed deviations of the strength distribution from Weibull distribution and of the mean strength scaling law from a power law. These deviations can be exploited for verification and calibration. The proposed theory will increase the safety of concrete structures, composite parts of aircraft or ships, microelectronic components, microelectromechanical systems, prosthetic devices, etc. It also will improve protection against hazards such as landslides, avalanches, ice breaks, and rock or soil failures.
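The size effect described above follows from the finite weakest-link formula P_N(σ) = 1 − [1 − P_1(σ)]^N for a chain of N representative volume elements (RVEs). The sketch below is purely illustrative and does not reproduce the authors' grafted Gaussian-Weibull distribution: for simplicity, the one-RVE strength distribution P_1 is taken as a plain two-parameter Weibull with invented scale and shape values.

# Illustrative sketch (not the authors' exact model): weakest-link scaling
# P_N(sigma) = 1 - [1 - P_1(sigma)]^N for a chain of N RVEs, with P_1 taken
# here as a simple Weibull distribution rather than the grafted
# Gaussian-core / power-law-tail distribution derived in the paper.
import numpy as np

def weakest_link_cdf(sigma, n_rve, scale=10.0, shape=24.0):
    """Failure probability of a chain of n_rve elements at stress sigma (arbitrary units)."""
    p1 = 1.0 - np.exp(-(sigma / scale) ** shape)   # Weibull CDF for one RVE
    return 1.0 - (1.0 - p1) ** n_rve

sigma = np.linspace(1.0, 12.0, 200)
for n_rve in (1, 10, 1000):
    # Median strength drops as the structure (number of RVEs) grows.
    median = sigma[np.searchsorted(weakest_link_cdf(sigma, n_rve), 0.5)]
    print(f"N = {n_rve:5d}  median strength ≈ {median:.2f}")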
A Unified Mixed-Effects Model for Rare-Variant Association in Sequencing Studies
Sun, Jianping; Zheng, Yingye; Hsu, Li
2013-01-01
For rare-variant association analysis, due to extreme low frequencies of these variants, it is necessary to aggregate them by a prior set (e.g., genes and pathways) in order to achieve adequate power. In this paper, we consider hierarchical models to relate a set of rare variants to phenotype by modeling the effects of variants as a function of variant characteristics while allowing for variant-specific effect (heterogeneity). We derive a set of two score statistics, testing the group effect by variant characteristics and the heterogeneity effect. We make a novel modification to these score statistics so that they are independent under the null hypothesis and their asymptotic distributions can be derived. As a result, the computational burden is greatly reduced compared with permutation-based tests. Our approach provides a general testing framework for rare variants association, which includes many commonly used tests, such as the burden test [Li and Leal, 2008] and the sequence kernel association test [Wu et al., 2011], as special cases. Furthermore, in contrast to these tests, our proposed test has an added capacity to identify which components of variant characteristics and heterogeneity contribute to the association. Simulations under a wide range of scenarios show that the proposed test is valid, robust and powerful. An application to the Dallas Heart Study illustrates that apart from identifying genes with significant associations, the new method also provides additional information regarding the source of the association. Such information may be useful for generating hypothesis in future studies. PMID:23483651
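A minimal sketch of the burden test mentioned above as a special case of this framework: rare-variant genotypes in a gene are collapsed into a single per-individual burden score, which is then tested against the phenotype. The genotype and phenotype data below are simulated purely for illustration; the unified model in the paper additionally incorporates variant characteristics and a heterogeneity component.

# Burden-test sketch on simulated data: collapse rare variants to a count,
# then regress the phenotype on that count.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_variants = 2000, 25
maf = rng.uniform(0.001, 0.01, n_variants)              # rare minor allele frequencies
genotypes = rng.binomial(2, maf, size=(n_subjects, n_variants))
burden = genotypes.sum(axis=1)                           # aggregated rare-allele count
phenotype = 0.3 * burden + rng.normal(size=n_subjects)   # weak true group effect

result = stats.linregress(burden, phenotype)             # simple regression-based burden test
print(f"burden effect = {result.slope:.3f}, p-value = {result.pvalue:.2e}")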
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
Li, Huanjie; Nickerson, Lisa D; Nichols, Thomas E; Gao, Jia-Hong
2017-03-01
Two powerful methods for statistical inference on MRI brain images have been proposed recently, a non-stationary voxelation-corrected cluster-size test (CST) based on random field theory and threshold-free cluster enhancement (TFCE) based on calculating the level of local support for a cluster, then using permutation testing for inference. Unlike other statistical approaches, these two methods do not rest on the assumptions of a uniform and high degree of spatial smoothness of the statistic image. Thus, they are strongly recommended for group-level fMRI analysis compared to other statistical methods. In this work, the non-stationary voxelation-corrected CST and TFCE methods for group-level analysis were evaluated for both stationary and non-stationary images under varying smoothness levels, degrees of freedom and signal-to-noise ratios. Our results suggest that both methods provide adequate control for the number of voxel-wise statistical tests being performed during inference on fMRI data and they are both superior to current CSTs implemented in popular MRI data analysis software packages. However, TFCE is more sensitive and stable for group-level analysis of VBM data. Thus, the voxelation-corrected CST approach may confer some advantages by being computationally less demanding for fMRI data analysis than TFCE with permutation testing and by also being applicable for single-subject fMRI analyses, while the TFCE approach is advantageous for VBM data. Hum Brain Mapp 38:1269-1280, 2017. © 2016 Wiley Periodicals, Inc.
18 CFR 39.3 - Electric Reliability Organization certification.
Code of Federal Regulations, 2010 CFR
2010-04-01
... operators of the Bulk-Power System, and other interested parties for improvement of the Electric Reliability... Reliability Standards that provide for an adequate level of reliability of the Bulk-Power System, and (2) Has...
14 CFR 171.271 - Installation requirements.
Code of Federal Regulations, 2010 CFR
2010-01-01
... generated. Adequate power capacity must be provided for the operation of test and working equipment of the... restoration of power, the batteries must be restored to full charge within 24 hours. When primary power is... battery must permit continuation of normal operation for at least two hours under the normal operating...
Pitfalls in statistical landslide susceptibility modelling
NASA Astrophysics Data System (ADS)
Schröder, Boris; Vorpahl, Peter; Märker, Michael; Elsenbeer, Helmut
2010-05-01
The use of statistical methods is a well-established approach to predict landslide occurrence probabilities and to assess landslide susceptibility. This is achieved by applying statistical methods relating historical landslide inventories to topographic indices as predictor variables. In our contribution, we compare several new and powerful methods developed in machine learning and well-established in landscape ecology and macroecology for predicting the distribution of shallow landslides in tropical mountain rainforests in southern Ecuador (among others: boosted regression trees, multivariate adaptive regression splines, maximum entropy). Although these methods are powerful, we think it is necessary to follow a basic set of guidelines to avoid some pitfalls regarding data sampling, predictor selection, and model quality assessment, especially if a comparison of different models is contemplated. We therefore suggest applying a novel toolbox to evaluate approaches to the statistical modelling of landslide susceptibility. Additionally, we propose some methods to open the "black box" as an inherent part of machine learning methods in order to achieve further explanatory insights into preparatory factors that control landslides. Sampling of training data should be guided by hypotheses regarding processes that lead to slope failure, taking into account their respective spatial scales. This approach leads to the selection of a set of candidate predictor variables considered on adequate spatial scales. This set should be checked for multicollinearity in order to facilitate model response curve interpretation. Model quality assessment addresses how well a model is able to reproduce independent observations of its response variable. This includes criteria to evaluate different aspects of model performance, i.e. model discrimination, model calibration, and model refinement. In order to assess a possible violation of the assumption of independence in the training samples or a possible lack of explanatory information in the chosen set of predictor variables, the model residuals need to be checked for spatial autocorrelation. Therefore, we calculate spline correlograms. In addition, we investigate partial dependency plots and bivariate interaction plots, considering possible interactions between predictors, to improve model interpretation. Aiming at presenting this toolbox for model quality assessment, we investigate the influence of strategies in the construction of training datasets for statistical models on model quality.
2010-01-01
Background Patient-Reported Outcomes (PRO) are increasingly used in clinical and epidemiological research. Two main types of analytical strategies can be found for these data: classical test theory (CTT) based on the observed scores and models coming from Item Response Theory (IRT). However, whether IRT or CTT would be the most appropriate method to analyse PRO data remains unknown. The statistical properties of CTT and IRT, regarding power and corresponding effect sizes, were compared. Methods Two-group cross-sectional studies were simulated for the comparison of PRO data using IRT or CTT-based analysis. For IRT, different scenarios were investigated according to whether items or person parameters were assumed to be known, to a certain extent for item parameters, from good to poor precision, or unknown and therefore had to be estimated. The powers obtained with IRT or CTT were compared and parameters having the strongest impact on them were identified. Results When person parameters were assumed to be unknown and item parameters to be either known or not, the powers achieved using IRT or CTT were similar and always lower than the expected power using the well-known sample size formula for normally distributed endpoints. The number of items had a substantial impact on power for both methods. Conclusion Without any missing data, IRT and CTT seem to provide comparable power. The classical sample size formula for CTT seems to be adequate under some conditions but is not appropriate for IRT. In IRT, it seems important to take account of the number of items to obtain an accurate formula. PMID:20338031
Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra
NASA Astrophysics Data System (ADS)
Karstens, William; Smith, David
2013-03-01
Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to 1. Use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and 2. Use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
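The parity argument above can be illustrated with a least-squares fit that keeps only even powers of photon energy for the index, rather than an unconstrained low-order polynomial. The sketch below is illustrative only: the "true" index coefficients, energy range, and noise level are invented and do not come from the silicon data discussed in the abstract.

# Parity-restricted fit sketch: model the refractive index with even powers of
# photon energy only (constant, quadratic, quartic) via ordinary least squares.
import numpy as np

energy = np.linspace(0.05, 0.6, 40)                  # photon energy (eV), illustrative
n_true = 3.42 + 0.08 * energy**2 + 0.02 * energy**4  # invented even-parity index
n_meas = n_true + np.random.default_rng(1).normal(0, 1e-3, energy.size)

# Design matrix containing only even-parity terms.
X = np.column_stack([np.ones_like(energy), energy**2, energy**4])
coeffs, residuals, rank, _ = np.linalg.lstsq(X, n_meas, rcond=None)
print("fitted even-parity coefficients:", np.round(coeffs, 4))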
ERIC Educational Resources Information Center
Oslund, Eric L.; Clemens, Nathan H.; Simmons, Deborah C.; Simmons, Leslie E.
2018-01-01
The current study examined statistically significant differences between struggling and adequate readers using a multicomponent model of reading comprehension in 796 sixth through eighth graders, with a primary focus on word reading and vocabulary. Path analyses and Wald tests were used to investigate the direct and indirect relations of word…
Options for Affordable Fission Surface Power Systems
NASA Technical Reports Server (NTRS)
Houts, Mike; Gaddis, Steve; Porter, Ron; VanDyke, Melissa; Martin, Jim; Godfroy, Tom; Bragg-Sitton, Shannon; Garber, Anne; Pearson, Boise
2006-01-01
Fission surface power systems could provide abundant power anywhere on the surface of the moon or Mars. Locations could include permanently shaded regions on the moon and high latitudes on Mars. To be fully utilized, however, fission surface power systems must be safe, have adequate performance, and be affordable. This paper discusses options for the design and development of such systems.
Developing a Resilient Green Cellular Network
2013-12-01
to provide BS autonomy from grid power through alternative energy, such as fuel cells and renewable photovoltaic (PV), wind energy... stations with adequate backup power or utilizing alternative/renewable energy technology such as photovoltaic or wind power to allow them to... mitigating strategies with the consensus view on BSs migrating away from grid power, to renewable energy (photovoltaic), and alternative fuels.
46 CFR 169.672 - Wiring for power and lighting circuits.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Wiring for power and lighting circuits must have copper conductors, of 14 AWG or larger, and— (1) Meet... must have stranded conductors. (c) Conductors must be sized so that— (1) They are adequate for the...
Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information
NASA Technical Reports Server (NTRS)
Howell, L. W.
2002-01-01
A simple power law model consisting of a single spectral index, α₁, is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10^13 eV, with a transition at the knee energy, E_k, to a steeper spectral index α₂ > α₁ above E_k. The maximum likelihood (ML) procedure was developed for estimating the single parameter α₁ of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramér-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectral information, and whether or not the estimator is approximately normally distributed, attainment of the Cramér-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectral information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
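For the simple (unbroken) power law only, the ML estimator of the spectral index has a closed form, and its asymptotic standard error plays the role of the Cramér-Rao bound discussed above. The sketch below uses that standard closed-form estimator on synthetic energies; the broken power law and the folding of the spectrum through a detector response function are not reproduced here, and the true index, threshold energy, and sample size are invented.

# ML estimate of the index alpha for p(E) ∝ E^(-alpha), E >= E_min,
# with its asymptotic standard error.
import numpy as np

rng = np.random.default_rng(2)
alpha_true, e_min, n = 2.7, 1.0, 50_000
# Inverse-transform sampling from the power-law density.
energies = e_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / (alpha_true - 1.0))

alpha_hat = 1.0 + n / np.sum(np.log(energies / e_min))
stderr = (alpha_hat - 1.0) / np.sqrt(n)        # asymptotic (Cramér–Rao-like) error
print(f"alpha_hat = {alpha_hat:.3f} ± {stderr:.3f}")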
41 CFR 51-9.201 - Conditions of disclosure.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provided the agency with advance adequate written assurance that the record will be used solely as a statistical research or reporting record and the record is to be transferred in a form that is not... for requesting the records, and (2) Certification that the records will be used only for statistical...
What Is Missing in Counseling Research? Reporting Missing Data
ERIC Educational Resources Information Center
Sterner, William R.
2011-01-01
Missing data have long been problematic in quantitative research. Despite the statistical and methodological advances made over the past 3 decades, counseling researchers fail to provide adequate information on this phenomenon. Interpreting the complex statistical procedures and esoteric language seems to be a contributing factor. An overview of…
Nateghi, Roshanak; Guikema, Seth D; Wu, Yue Grace; Bruss, C Bayan
2016-01-01
The U.S. federal government regulates the reliability of bulk power systems, while the reliability of power distribution systems is regulated at a state level. In this article, we review the history of regulating electric service reliability and study the existing reliability metrics, indices, and standards for power transmission and distribution networks. We assess the foundations of the reliability standards and metrics, discuss how they are applied to outages caused by large exogenous disturbances such as natural disasters, and investigate whether the standards adequately internalize the impacts of these events. Our reflections shed light on how existing standards conceptualize reliability, question the basis for treating large-scale hazard-induced outages differently from normal daily outages, and discuss whether this conceptualization maps well onto customer expectations. We show that the risk indices for transmission systems used in regulating power system reliability do not adequately capture the risks that transmission systems are prone to, particularly when it comes to low-probability high-impact events. We also point out several shortcomings associated with the way in which regulators require utilities to calculate and report distribution system reliability indices. We offer several recommendations for improving the conceptualization of reliability metrics and standards. We conclude that while the approaches taken in reliability standards have made considerable advances in enhancing the reliability of power systems and may be logical from a utility perspective during normal operation, existing standards do not provide a sufficient incentive structure for the utilities to adequately ensure high levels of reliability for end-users, particularly during large-scale events. © 2015 Society for Risk Analysis.
75 FR 13142 - Florida Power and Light Company; Turkey Point, Units 3 and 4; Exemption
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-18
... Light Company; Turkey Point, Units 3 and 4; Exemption 1.0 Background Florida Power and Light Company... ferritic materials of pressure-retaining components of the reactor coolant pressure boundary of light water... reactor coolant pressure boundary of light water nuclear power reactors to provide adequate margins of...
30 CFR 57.12008 - Insulation and fittings for power wires and cables.
Code of Federal Regulations, 2014 CFR
2014-07-01
... cables. Power wires and cables shall be insulated adequately where they pass into or out of electrical... Section 57.12008 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF...
30 CFR 57.12008 - Insulation and fittings for power wires and cables.
Code of Federal Regulations, 2013 CFR
2013-07-01
... cables. Power wires and cables shall be insulated adequately where they pass into or out of electrical... Section 57.12008 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF...
30 CFR 57.12008 - Insulation and fittings for power wires and cables.
Code of Federal Regulations, 2012 CFR
2012-07-01
... cables. Power wires and cables shall be insulated adequately where they pass into or out of electrical... Section 57.12008 Mineral Resources MINE SAFETY AND HEALTH ADMINISTRATION, DEPARTMENT OF...
26 CFR 20.2038-1 - Revocable transfers.
Code of Federal Regulations, 2011 CFR
2011-04-01
... subject at the date of the decedent's death to any change through the exercise of a power by the decedent to alter, amend, revoke, or terminate, or if the decedent relinquished such a power in contemplation... adequate and full consideration in money or money's worth (see § 20.2043-1); (2) If the decedent's power...
46 CFR 56.50-55 - Bilge pumps.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Each self-propelled vessel must be provided with a power-driven pump or pumps connected to the bilge... power-driven pump is required. (See Part 171 of this chapter for determination of criterion numeral.) 5... available, or where a suitable water supply is available from a power-driven pump of adequate pressure and...
Low statistical power in biomedical science: a review of three human research domains.
Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R
2017-02-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
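The dependence of power on effect size and sample size described above can be made concrete with the usual normal approximation for a two-sided, two-group comparison of means: power ≈ Φ(d·√(n/2) − z₁₋α/₂), where d is the standardized effect size and n the per-group sample size. The sketch below uses illustrative numbers, not values taken from the review.

# Approximate power of a two-sample comparison of means (normal approximation).
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(d * (n_per_group / 2) ** 0.5 - z_crit)

for d in (0.2, 0.5, 0.8):
    for n in (20, 50, 200):
        print(f"d = {d:.1f}, n/group = {n:3d}: power ≈ {approx_power(d, n):.2f}")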
Low statistical power in biomedical science: a review of three human research domains
Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois
2017-01-01
Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409
Model-Based Diagnosis in a Power Distribution Test-Bed
NASA Technical Reports Server (NTRS)
Scarl, E.; McCall, K.
1998-01-01
The Rodon model-based diagnosis shell was applied to a breadboard test-bed, modeling an automated power distribution system. The constraint-based modeling paradigm and diagnostic algorithm were found to adequately represent the selected set of test scenarios.
Toward "Constructing" the Concept of Statistical Power: An Optical Analogy.
ERIC Educational Resources Information Center
Rogers, Bruce G.
This paper presents a visual analogy that may be used by instructors to teach the concept of statistical power in statistical courses. Statistical power is mathematically defined as the probability of rejecting a null hypothesis when that null is false, or, equivalently, the probability of detecting a relationship when it exists. The analogy…
Statistical mechanics of shell models for two-dimensional turbulence
NASA Astrophysics Data System (ADS)
Aurell, E.; Boffetta, G.; Crisanti, A.; Frick, P.; Paladin, G.; Vulpiani, A.
1994-12-01
We study shell models that conserve the analogs of energy and enstrophy and hence are designed to mimic fluid turbulence in two dimensions (2D). The main result is that the observed state is well described as a formal statistical equilibrium, closely analogous to the approach to two-dimensional ideal hydrodynamics of Onsager [Nuovo Cimento Suppl. 6, 279 (1949)], Hopf [J. Rat. Mech. Anal. 1, 87 (1952)], and Lee [Q. Appl. Math. 10, 69 (1952)]. In the presence of forcing and dissipation we observe a forward flux of enstrophy and a backward flux of energy. These fluxes can be understood as mean diffusive drifts from a source to two sinks in a system which is close to local equilibrium with Lagrange multipliers ("shell temperatures") changing slowly with scale. This is clear evidence that the simplest shell models are not adequate to reproduce the main features of two-dimensional turbulence. The dimensional predictions on the power spectra from a supposed forward cascade of enstrophy and from one branch of the formal statistical equilibrium coincide in these shell models in contrast to the corresponding predictions for the Navier-Stokes and Euler equations in 2D. This coincidence has previously led to the mistaken conclusion that shell models exhibit a forward cascade of enstrophy. We also study the dynamical properties of the models and the growth of perturbations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kogalovskii, M.R.
This paper presents a review of problems related to statistical database systems, which are widespread in various fields of activity. Statistical databases (SDB) are referred to as databases that consist of data and are used for statistical analysis. Topics under consideration are: SDB peculiarities, properties of data models adequate for SDB requirements, metadata functions, null-value problems, SDB compromise protection problems, stored data compression techniques, and statistical data representation means. Also examined is whether present Database Management Systems (DBMS) satisfy the SDB requirements. Some actual research directions in SDB systems are considered.
The Promises and Pitfalls of Genoeconomics*
Benjamin, Daniel J.; Cesarini, David; Chabris, Christopher F.; Glaeser, Edward L.; Laibson, David I.; Guðnason, Vilmundur; Harris, Tamara B.; Launer, Lenore J.; Purcell, Shaun; Smith, Albert Vernon; Johannesson, Magnus; Magnusson, Patrik K.E.; Beauchamp, Jonathan P.; Christakis, Nicholas A.; Atwood, Craig S.; Hebert, Benjamin; Freese, Jeremy; Hauser, Robert M.; Hauser, Taissa S.; Grankvist, Alexander; Hultman, Christina M.; Lichtenstein, Paul
2012-01-01
This article reviews existing research at the intersection of genetics and economics, presents some new findings that illustrate the state of genoeconomics research, and surveys the prospects of this emerging field. Twin studies suggest that economic outcomes and preferences, once corrected for measurement error, appear to be about as heritable as many medical conditions and personality traits. Consistent with this pattern, we present new evidence on the heritability of permanent income and wealth. Turning to genetic association studies, we survey the main ways that the direct measurement of genetic variation across individuals is likely to contribute to economics, and we outline the challenges that have slowed progress in making these contributions. The most urgent problem facing researchers in this field is that most existing efforts to find associations between genetic variation and economic behavior are based on samples that are too small to ensure adequate statistical power. This has led to many false positives in the literature. We suggest a number of possible strategies to improve and remedy this problem: (a) pooling data sets, (b) using statistical techniques that exploit the greater information content of many genes considered jointly, and (c) focusing on economically relevant traits that are most proximate to known biological mechanisms. PMID:23482589
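The power problem described above can be quantified with a standard calculation: for a single-variant association test that explains a fraction R² of the variance in an outcome, the 1-degree-of-freedom chi-square statistic has noncentrality roughly N·R². The R² values, sample sizes, and significance threshold below are illustrative placeholders, not estimates from the article.

# Power of a 1-df chi-square association test with noncentrality N·R²,
# at a stringent (genome-wide style) significance threshold.
from scipy.stats import chi2, ncx2

alpha = 5e-8
crit = chi2.ppf(1 - alpha, df=1)
for r2 in (0.0001, 0.001):
    for n in (10_000, 100_000, 1_000_000):
        power = ncx2.sf(crit, df=1, nc=n * r2)
        print(f"R² = {r2:.4f}, N = {n:9,d}: power ≈ {power:.2f}")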
I. Arismendi; S. L. Johnson; J. B. Dunham
2015-01-01
Statistics of central tendency and dispersion may not capture relevant or desired characteristics of the distribution of continuous phenomena and, thus, they may not adequately describe temporal patterns of change. Here, we present two methodological approaches that can help to identify temporal changes in environmental regimes. First, we use higher-order statistical...
Recommended high-tank temperatures for maintenance of high-tank backup support, Revision 3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greager, O.H.
1964-05-20
Purpose of this note is to recommend revised curves for the high-tank temperature required to maintain adequate high-tank backup support at the six small reactors. Compliance with the conditions shown on these curves will ensure adequate high-tank flow rates following the simultaneous loss of electric and steam power.
Etz, Alexander J.
2017-01-01
Psychology journals rarely publish nonsignificant results. At the same time, it is often very unlikely (or “too good to be true”) that a set of studies yields exclusively significant results. Here, we use likelihood ratios to explain when sets of studies that contain a mix of significant and nonsignificant results are likely to be true or “too true to be bad.” As we show, mixed results are not only likely to be observed in lines of research but also, when observed, often provide evidence for the alternative hypothesis, given reasonable levels of statistical power and an adequately controlled low Type 1 error rate. Researchers should feel comfortable submitting such lines of research with an internal meta-analysis for publication. A better understanding of probabilities, accompanied by more realistic expectations of what real sets of studies look like, might be an important step in mitigating publication bias in the scientific literature. PMID:29276574
Lakens, Daniël; Etz, Alexander J
2017-11-01
Psychology journals rarely publish nonsignificant results. At the same time, it is often very unlikely (or "too good to be true") that a set of studies yields exclusively significant results. Here, we use likelihood ratios to explain when sets of studies that contain a mix of significant and nonsignificant results are likely to be true or "too true to be bad." As we show, mixed results are not only likely to be observed in lines of research but also, when observed, often provide evidence for the alternative hypothesis, given reasonable levels of statistical power and an adequately controlled low Type 1 error rate. Researchers should feel comfortable submitting such lines of research with an internal meta-analysis for publication. A better understanding of probabilities, accompanied by more realistic expectations of what real sets of studies look like, might be an important step in mitigating publication bias in the scientific literature.
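A simplified illustration of the core idea above: compare how likely a mixed set of results (k significant studies out of n) is under the null hypothesis (each study significant with probability α) versus under the alternative (each significant with probability equal to the assumed power). The study counts, α, and assumed power below are made-up values, and the independent-binomial treatment is a simplification of the likelihood-ratio argument in the paper.

# Likelihood ratio for a mixed set of results under H0 vs H1.
from scipy.stats import binom

n_studies, k_significant = 4, 3
alpha, assumed_power = 0.05, 0.80

p_h0 = binom.pmf(k_significant, n_studies, alpha)
p_h1 = binom.pmf(k_significant, n_studies, assumed_power)
print(f"P(data | H0) = {p_h0:.4f}, P(data | H1) = {p_h1:.4f}, "
      f"likelihood ratio ≈ {p_h1 / p_h0:.1f} in favour of H1")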
Plante, David T.; Landsness, Eric C.; Peterson, Michael J.; Goldstein, Michael R.; Wanger, Tim; Guokas, Jeff J.; Tononi, Giulio; Benca, Ruth M.
2012-01-01
Hypersomnolence in major depressive disorder (MDD) plays an important role in the natural history of the disorder, but the basis of hypersomnia in MDD is poorly understood. Slow wave activity (SWA) has been associated with sleep homeostasis, as well as sleep restoration and maintenance, and may be altered in MDD. Therefore, we conducted a post-hoc study that utilized high density electroencephalography (hdEEG) to test the hypothesis that MDD subjects with hypersomnia (HYS+) would have decreased SWA relative to age- and sex-matched MDD subjects without hypersomnia (HYS−) and healthy controls (n=7 for each group). After correcting for multiple comparisons using statistical non-parametric mapping, HYS+ subjects demonstrated significantly reduced parieto-occipital all-night SWA relative to HYS− subjects. Our results suggest hypersomnolence may be associated with topographic reductions in SWA in MDD. Further research using an adequately powered prospective design is indicated to confirm these findings. PMID:22512951
Spectral F-test power evaluation in the EEG during intermittent photic stimulation.
de Sá, Antonio Mauricio F L Miranda; Cagy, Mauricio; Lazarev, Vladimir V; Infantosi, Antonio Fernando C
2006-06-01
Intermittent photic stimulation (IPS) is an important functional test, which can induce photic driving in the electroencephalogram (EEG). It is capable of enhancing manifestations of latent oscillations not present in the resting EEG. However, for adequate quantitative evaluation of the photic driving, these changes should be assessed on a statistical basis. With this aim, the sampling distribution of the spectral F-test (SFT) was investigated. On this basis, confidence limits of the SFT estimate could be obtained for different practical situations, in which the signal-to-noise ratio and the number of epochs used in the estimation may vary. The technique was applied to the EEG of 10 normal subjects during IPS and allowed detection of responses not only at the fundamental IPS frequency but also at higher harmonics. It also permitted assessment of the strength of the photic driving responses and their comparison across different derivations and subjects.
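One common form of a spectral F-test compares the periodogram power at the stimulation frequency with the mean power of L neighbouring frequency bins, referring the ratio to an F(2, 2L) distribution under the null hypothesis of no driven response. The sketch below uses that generic formulation on a toy signal; it may differ in detail from the estimator studied in the paper, and the sampling rate, stimulation frequency, and neighbour count are arbitrary choices.

# Generic spectral F-test sketch on a synthetic "EEG" trace.
import numpy as np
from scipy.stats import f as f_dist

fs, duration, f_stim, n_neighbors = 256, 8.0, 10.0, 12
t = np.arange(0, duration, 1 / fs)
rng = np.random.default_rng(3)
eeg = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1, t.size)  # toy signal

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
k = np.argmin(np.abs(freqs - f_stim))                       # bin of the stimulus frequency
neighbors = np.r_[k - n_neighbors // 2 : k, k + 1 : k + 1 + n_neighbors // 2]

sft = spec[k] / spec[neighbors].mean()
p_value = f_dist.sf(sft, 2, 2 * n_neighbors)
print(f"SFT = {sft:.1f}, p = {p_value:.3g}")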
Patient Electronic Health Records as a Means to Approach Genetic Research in Gastroenterology
Ananthakrishnan, Ashwin N; Lieberman, David
2015-01-01
Electronic health records (EHR) are being increasingly utilized and form a unique source of extensive data gathered during routine clinical care. Through use of codified and free text concepts identified using clinical informatics tools, disease labels can be assigned with a high degree of accuracy. Analysis linking such EHR-assigned disease labels to a biospecimen repository has demonstrated that genetic associations identified in prospective cohorts can be replicated with adequate statistical power, and novel phenotypic associations identified. In addition, genetic discovery research can be performed utilizing clinical, laboratory, and procedure data obtained during care. Challenges with such research include the need to tackle variability in quality and quantity of EHR data and importance of maintaining patient privacy and data security. With appropriate safeguards, this novel and emerging field of research offers considerable promise and potential to further scientific research in gastroenterology efficiently, cost-effectively, and with engagement of patients and communities. PMID:26073373
Antweiler, Ronald C.; Taylor, Howard E.
2008-01-01
The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
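Two of the substitution treatments compared above are simple enough to sketch directly: replacing each censored value with one-half the detection limit, or with a uniform random number between zero and the detection limit. The synthetic lognormal "concentrations" and detection limit below are invented for illustration; the Kaplan-Meier and ROS treatments require more machinery and are omitted.

# Substitution treatments for left-censored (below-detection-limit) data.
import numpy as np

rng = np.random.default_rng(4)
true_values = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # synthetic concentrations
detection_limit = 0.5
censored = true_values < detection_limit

half_dl = np.where(censored, detection_limit / 2.0, true_values)
random_sub = np.where(censored, rng.uniform(0.0, detection_limit, true_values.size),
                      true_values)

print(f"{censored.mean():.0%} censored")
print(f"true mean        = {true_values.mean():.3f}")
print(f"half-DL mean     = {half_dl.mean():.3f}")
print(f"random-sub mean  = {random_sub.mean():.3f}")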
The provision of therapy mattresses for pressure ulcer prevention.
Pagnamenta, Fania
2017-03-23
Preventing pressure ulcers is complex and involves skin care, the provision of therapy mattresses, repositioning, the management of incontinence and adequate nutritional support. This article describes a model of therapy mattress provision that is based on non-powered products. Evaluating the efficiency of this model is challenging, due to the complexities of care, but Safety Thermometer data and incident reports offer reassurance that non-powered therapy mattresses can provide adequate pressure ulcer prevention. Therapy mattress provision is only one of the five interventions, and these are described in detail to give readers a fuller picture of the model used at the author's trust.
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
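The five components listed above come together in the familiar a priori calculation for a two-group comparison of means, where (under a normal approximation) the per-group sample size is n ≈ 2·(z₁₋α/₂ + z₁₋β)²/d², with d the standardized effect size. The sketch below is the generic textbook formula, not a calculation taken from the article.

# A priori sample-size calculation for a two-group comparison of means.
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

for d in (0.2, 0.5, 0.8):
    print(f"effect size d = {d:.1f}: n ≈ {n_per_group(d)} per group")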
Progressive statistics for studies in sports medicine and exercise science.
Hopkins, William G; Marshall, Stephen W; Batterham, Alan M; Hanin, Juri
2009-01-01
Statistical guidelines and expert statements are now available to assist in the analysis and reporting of studies in some biomedical disciplines. We present here a more progressive resource for sample-based studies, meta-analyses, and case studies in sports medicine and exercise science. We offer forthright advice on the following controversial or novel issues: using precision of estimation for inferences about population effects in preference to null-hypothesis testing, which is inadequate for assessing clinical or practical importance; justifying sample size via acceptable precision or confidence for clinical decisions rather than via adequate power for statistical significance; showing SD rather than SEM, to better communicate the magnitude of differences in means and nonuniformity of error; avoiding purely nonparametric analyses, which cannot provide inferences about magnitude and are unnecessary; using regression statistics in validity studies, in preference to the impractical and biased limits of agreement; making greater use of qualitative methods to enrich sample-based quantitative projects; and seeking ethics approval for public access to the depersonalized raw data of a study, to address the need for more scrutiny of research and better meta-analyses. Advice on less contentious issues includes the following: using covariates in linear models to adjust for confounders, to account for individual differences, and to identify potential mechanisms of an effect; using log transformation to deal with nonuniformity of effects and error; identifying and deleting outliers; presenting descriptive, effect, and inferential statistics in appropriate formats; and contending with bias arising from problems with sampling, assignment, blinding, measurement error, and researchers' prejudices. This article should advance the field by stimulating debate, promoting innovative approaches, and serving as a useful checklist for authors, reviewers, and editors.
Statistical inference on censored data for targeted clinical trials under enrichment design.
Chen, Chen-Fang; Lin, Jr-Rung; Liu, Jen-Pei
2013-01-01
For the traditional clinical trials, inclusion and exclusion criteria are usually based on some clinical endpoints; the genetic or genomic variability of the trial participants are not totally utilized in the criteria. After completion of the human genome project, the disease targets at the molecular level can be identified and can be utilized for the treatment of diseases. However, the accuracy of diagnostic devices for identification of such molecular targets is usually not perfect. Some of the patients enrolled in targeted clinical trials with a positive result for the molecular target might not have the specific molecular targets. As a result, the treatment effect may be underestimated in the patient population truly with the molecular target. To resolve this issue, under the exponential distribution, we develop inferential procedures for the treatment effects of the targeted drug based on the censored endpoints in the patients truly with the molecular targets. Under an enrichment design, we propose using the expectation-maximization algorithm in conjunction with the bootstrap technique to incorporate the inaccuracy of the diagnostic device for detection of the molecular targets on the inference of the treatment effects. A simulation study was conducted to empirically investigate the performance of the proposed methods. Simulation results demonstrate that under the exponential distribution, the proposed estimator is nearly unbiased with adequate precision, and the confidence interval can provide adequate coverage probability. In addition, the proposed testing procedure can adequately control the size with sufficient power. On the other hand, when the proportional hazard assumption is violated, additional simulation studies show that the type I error rate is not controlled at the nominal level and is an increasing function of the positive predictive value. A numerical example illustrates the proposed procedures. Copyright © 2013 John Wiley & Sons, Ltd.
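A building-block sketch only: for exponentially distributed, right-censored time-to-event data, the maximum likelihood estimate of the hazard rate is the number of observed events divided by the total follow-up time. The paper's EM algorithm (which accounts for imperfect diagnostic classification of the molecular target) and the bootstrap step are not reproduced here, and the true hazard, sample size, and censoring scheme below are invented.

# MLE of the hazard rate for right-censored exponential survival data.
import numpy as np

rng = np.random.default_rng(5)
true_hazard, n = 0.10, 500
event_times = rng.exponential(1.0 / true_hazard, n)
censor_times = rng.uniform(0.0, 20.0, n)                  # administrative censoring

observed = np.minimum(event_times, censor_times)
event_indicator = event_times <= censor_times

hazard_hat = event_indicator.sum() / observed.sum()       # events / total follow-up time
print(f"estimated hazard = {hazard_hat:.3f} (true {true_hazard})")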
The Statistical Power of Planned Comparisons.
ERIC Educational Resources Information Center
Benton, Roberta L.
Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…
Toward a functional definition of a "rare disease" for regulatory authorities and funding agencies.
Clarke, Joe T R; Coyle, Doug; Evans, Gerald; Martin, Janet; Winquist, Eric
2014-12-01
The designation of a disease as "rare" is associated with some substantial benefits for companies involved in new drug development, including expedited review by regulatory authorities and relaxed criteria for reimbursement. How "rare disease" is defined therefore has major financial implications, both for pharmaceutical companies and for insurers or public drug reimbursement programs. All existing definitions are based, somewhat arbitrarily, on disease incidence or prevalence. What is proposed here is a functional definition of rare based on an assessment of the feasibility of measuring the efficacy of a new treatment in conventional randomized controlled trials, to inform regulatory authorities and funding agencies charged with assessing new therapies being considered for public funding. It involves a five-step process, involving significant negotiations between patient advocacy groups, pharmaceutical companies, physicians, and public drug reimbursement programs, designed to establish the feasibility of carrying out a randomized controlled trial with sufficient statistical power to show a clinically significant treatment effect. The steps are as follows: 1) identification of a specific disease, including appropriate genetic definition; 2) identification of clinically relevant outcomes to evaluate efficacy; 3) establishment of the inherent variability of measurements of clinically relevant outcomes; 4) calculation of the sample size required to assess the efficacy of a new treatment with acceptable statistical power; and 5) estimation of the difficulty of recruiting an adequate sample size given the estimated prevalence or incidence of the disorder in the population and the inclusion criteria to be used. Copyright © 2014 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
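Step 5 of the process above amounts to a back-of-the-envelope recruitment feasibility check: compare the sample size required by the power calculation in step 4 against the number of patients that can realistically be recruited given prevalence, catchment population, and eligibility and consent rates. All of the numbers in the sketch below are hypothetical placeholders.

# Recruitment feasibility check for a proposed trial in a rare disease.
prevalence = 1 / 50_000          # cases per person (hypothetical)
catchment_population = 30_000_000
eligible_fraction = 0.40         # meet inclusion criteria (hypothetical)
consent_fraction = 0.50          # agree to participate (hypothetical)
required_n = 120                 # from the power calculation in step 4 (hypothetical)

expected_recruits = (catchment_population * prevalence
                     * eligible_fraction * consent_fraction)
verdict = "feasible" if expected_recruits >= required_n else "not feasible as a conventional RCT"
print(f"expected recruitable patients ≈ {expected_recruits:.0f} (required: {required_n}) -> {verdict}")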
NASA Astrophysics Data System (ADS)
Winiarek, Victor; Vira, Julius; Bocquet, Marc; Sofiev, Mikhail; Saunier, Olivier
2011-06-01
In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is required by the decision makers for the preparation of adequate countermeasures. The accuracy of the forecast plume is highly dependent on the source term estimation. On several academic test cases, including real data, inverse modelling and data assimilation techniques were proven to help in the assessment of the source term. In this paper, a semi-automatic method is proposed for the sequential reconstruction of the plume, by implementing a sequential data assimilation algorithm based on inverse modelling, with care to develop realistic methods for operational risk agencies. The performance of the assimilation scheme has been assessed through the intercomparison between French and Finnish frameworks. Two dispersion models have been used: Polair3D and Silam, developed in two different research centres. Different release locations, as well as different meteorological situations, are tested. The existing and newly planned surveillance networks are used and realistically large multiplicative observational errors are assumed. The inverse modelling scheme accounts for strong error bias encountered with such errors. The efficiency of the data assimilation system is tested via statistical indicators. For France and Finland, the average performance of the data assimilation system is strong. However there are outlying situations where the inversion fails because observability is too poor. In addition, in the case where the power plant responsible for the accidental release is not known, robust statistical tools are developed and tested to discriminate candidate release sites.
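The inverse-modelling idea underlying such source-term estimation can be sketched in its simplest linear form: with a source-receptor relationship y ≈ H s, the vector of release rates s is recovered from observations y by regularized least squares. The sketch below is a toy illustration only; the source-receptor matrix, true release profile, noise level, and regularization weight are all invented, and the paper's sequential scheme and bias handling are not reproduced.

# Toy source-term inversion by ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(6)
n_obs, n_steps = 120, 24
H = rng.uniform(0.0, 1.0, (n_obs, n_steps))           # source-receptor matrix (toy)
s_true = np.zeros(n_steps)
s_true[6:12] = 5.0                                     # six-hour release episode
y = H @ s_true * np.exp(rng.normal(0.0, 0.3, n_obs))   # multiplicative observation error

lam = 1.0                                              # regularization weight
s_hat = np.linalg.solve(H.T @ H + lam * np.eye(n_steps), H.T @ y)
print("recovered release rates (first 12 steps):", np.round(s_hat[:12], 2))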
NASA Astrophysics Data System (ADS)
Werth, D. W.; O'Steen, L.; Chen, K.; Altinakar, M. S.; Garrett, A.; Aleman, S.; Ramalingam, V.
2010-12-01
Global climate change has the potential for profound impacts on society, and poses significant challenges to government and industry in the areas of energy security and sustainability. Given that the ability to exploit energy resources often depends on the climate, the possibility of climate change means we cannot simply assume that the untapped potential of today will still exist in the future. Predictions of future climate are generally based on global climate models (GCMs) which, due to computational limitations, are run at spatial resolutions of hundreds of kilometers. While the results from these models can predict climatic trends averaged over large spatial and temporal scales, their ability to describe the effects of atmospheric phenomena that affect weather on regional to local scales is inadequate. We propose the use of several optimized statistical downscaling techniques that can infer climate change at the local scale from coarse resolution GCM predictions, and apply the results to assess future sustainability for two sources of energy production dependent on adequate water resources: nuclear power (through the dissipation of waste heat from cooling towers, ponds, etc.) and hydroelectric power. All methods will be trained with 20th century data, and applied to data from the years 2040-2049 to get the local-scale changes. Models of cooling tower operation and hydropower potential will then use the downscaled data to predict the possible changes in energy production, and the implications of climate change on plant siting, design, and contribution to the future energy grid can then be examined.
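As a toy illustration of the statistical downscaling idea (not the optimized techniques used in the study), the sketch below trains a simple linear relation between a coarse GCM predictor and a local observable over a historical period and applies it to future predictor values. The synthetic data stand in for the 20th-century training record and the 2040-2049 projections.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "historical" record: coarse GCM temperature vs local station temperature (deg C).
gcm_hist = rng.normal(15.0, 2.0, 600)
local_hist = 0.8 * gcm_hist + 3.0 + rng.normal(0, 0.7, 600)

# Fit the transfer function on the historical period.
slope, intercept = np.polyfit(gcm_hist, local_hist, 1)

# Apply it to a (synthetic) future GCM projection, e.g. 2040-2049.
gcm_future = rng.normal(16.5, 2.0, 120)
local_future = slope * gcm_future + intercept
print(round(local_future.mean() - local_hist.mean(), 2))   # projected local-scale warming
```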
Vinaya, Kundapur; Rakshith, Hegde; Prasad D, Krishna; Manoj, Shetty; Sunil, Mankar; Naresh, Shetty
2015-06-01
To evaluate the retention of complete cast crowns on teeth with adequate and inadequate crown height, and to evaluate the effect of auxiliary retentive features on the retention form of complete cast crowns. Sixty freshly extracted human premolars were divided into 2 major groups depending on the height of the teeth after preparation: Group 1 (H1), prepared teeth with a constant height of 3.5 mm, and Group 2 (H2), prepared teeth with a constant height of 2.5 mm. Each group was further subdivided into 3 subgroups depending on the retentive features incorporated: the first subgroup was prepared conventionally, the second with proximal grooves, and the third with proximal boxes. Castings produced in nickel-chromium alloy were cemented with glass ionomer cement, and the cemented castings were subjected to the tensile forces required to dislodge each casting from its preparation; these forces were used to compare retentive quality. The data obtained were statistically analyzed using a one-way ANOVA test. The results showed a statistically significant difference between the adequate (H1) and inadequate (H2) groups, and an increase in retention when retentive features were incorporated compared with conventional preparations. The increase in retention provided by grooves was statistically significant compared with that obtained from boxes. Results also showed no statistically significant difference between the long conventional and short groove preparations. Complete cast crowns on teeth with adequate crown height exhibited greater retention than those with inadequate crown height. Proximal grooves provided a greater amount of retention when compared with proximal boxes.
Hamilton Jr, David A; Reilly, Danielle; Wipf, Felix; Kamineni, Srinath
2015-01-01
AIM: To determine whether use of a precontoured olecranon plate provides adequate fixation to withstand supraphysiologic force in a comminuted olecranon fracture model. METHODS: Five samples of fourth-generation composite bones and five samples of fresh-frozen human cadaveric left ulnae were utilized for this study. The cadaveric specimens underwent dual-energy X-ray absorptiometry (DEXA) scanning to quantify bone quality. The composite and cadaveric bones were prepared by creating a comminuted olecranon fracture and fixed with a pre-contoured olecranon plate with locking screws. Construct stiffness and failure load were measured by subjecting specimens to cantilever bending moments until failure. Fracture site motion was measured with a differential variable resistance transducer spanning the fracture. Statistical analysis was performed with a two-tailed Mann-Whitney U test with a Monte Carlo exact test. RESULTS: There was a significant difference in fixation stiffness and strength between the composite bones and human cadaver bones. Failure modes differed in cadaveric and composite specimens. The loads to failure for the composite bone (n = 5) and human cadaver bone (n = 5) specimens were 10.67 N·m (range 9.40-11.91 N·m) and 13.05 N·m (range 12.59-15.38 N·m), respectively. This difference was statistically significant (P < 0.007, 97% power). Median stiffness for the composite bone and human cadaver bone specimens was 5.69 N·m/mm (range 4.69-6.80 N·m/mm) and 7.55 N·m/mm (range 6.31-7.72 N·m/mm), respectively. There was a significant difference in stiffness (P < 0.033, 79% power) between composite bones and cadaveric bones. No correlation was found between the DEXA results and stiffness. All cadaveric specimens withstood the physiologic load anticipated postoperatively. Catastrophic failure occurred in all composite specimens. All failures resulted from composite bone failure at the distal screw site and not hardware failure. There were no catastrophic fracture failures in the cadaveric specimens. Failure was defined for 4/5 cadaveric specimens when a fracture gap of 2 mm was observed, but 1/5 cadaveric specimens failed due to failure of the triceps mechanism. All failures occurred at forces greater than those expected in the postoperative period prior to healing. CONCLUSION: The pre-contoured olecranon plate provides adequate fixation to withstand physiologic force in a composite bone and cadaveric comminuted olecranon fracture model. PMID:26495247
Austin, Peter C.
2017-01-01
The use of the Cox proportional hazards regression model is widespread. A key assumption of the model is that of proportional hazards. Analysts frequently test the validity of this assumption using statistical significance testing. However, the statistical power of such assessments is frequently unknown. We used Monte Carlo simulations to estimate the statistical power of two different methods for detecting violations of this assumption. When the covariate was binary, we found that a model-based method had greater power than a method based on cumulative sums of martingale residuals. Furthermore, the parametric nature of the distribution of event times had an impact on power when the covariate was binary. Statistical power to detect a strong violation of the proportional hazards assumption was low to moderate even when the number of observed events was high. In many data sets, power to detect a violation of this assumption is likely to be low to modest. PMID:29321694
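One way to reproduce the flavour of these simulations is sketched below: event times are drawn from Weibull distributions whose shapes differ between the two covariate groups (a non-proportional-hazards scenario), a Cox model is fitted with lifelines, and the fraction of replicates in which a Schoenfeld-residual-based test rejects the assumption estimates the power. This is a minimal sketch under invented parameter values, not the authors' simulation design, and it assumes the lifelines package is available.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import proportional_hazard_test

rng = np.random.default_rng(2)

def one_replicate(n=400):
    x = rng.integers(0, 2, n)                       # binary covariate
    shape = np.where(x == 0, 1.0, 1.6)              # different Weibull shapes -> non-PH
    t = rng.weibull(shape) * 10.0
    c = rng.uniform(5, 20, n)                       # random censoring times
    df = pd.DataFrame({"T": np.minimum(t, c), "E": (t <= c).astype(int), "x": x})
    cph = CoxPHFitter().fit(df, duration_col="T", event_col="E")
    res = proportional_hazard_test(cph, df, time_transform="rank")
    return float(np.ravel(res.p_value)[0]) < 0.05

power = np.mean([one_replicate() for _ in range(200)])
print(f"estimated power to detect the PH violation: {power:.2f}")
```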
USDA-ARS's Scientific Manuscript database
Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...
Explorations in Statistics: Power
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2010-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This fifth installment of "Explorations in Statistics" revisits power, a concept fundamental to the test of a null hypothesis. Power is the probability that we reject the null hypothesis when it is false. Four…
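In the spirit of that exploration, power can be made concrete with a small simulation: draw many samples from a population in which the null hypothesis is false, run the test each time, and record how often it rejects. The sketch below does this for a one-sample t-test; the effect size, sample size, and alpha are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def estimated_power(n=20, true_mean=0.5, sd=1.0, alpha=0.05, reps=10_000):
    """Fraction of simulated samples in which H0: mean = 0 is rejected."""
    rejections = 0
    for _ in range(reps):
        sample = rng.normal(true_mean, sd, n)
        if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
            rejections += 1
    return rejections / reps

print(estimated_power())   # roughly 0.56 for these illustrative settings
```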
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
The Electric Power Annual presents a summary of electric utility statistics at national, regional and State levels. The objective of the publication is to provide industry decisionmakers, government policymakers, analysts and the general public with historical data that may be used in understanding US electricity markets. The Electric Power Annual is prepared by the Survey Management Division; Office of Coal, Nuclear, Electric and Alternate Fuels; Energy Information Administration (EIA); US Department of Energy. "The US Electric Power Industry at a Glance" section presents a profile of the electric power industry ownership and performance, and a review of key statistics for the year. Subsequent sections present data on generating capability, including proposed capability additions; net generation; fossil-fuel statistics; retail sales; revenue; financial statistics; environmental statistics; electric power transactions; demand-side management; and nonutility power producers. In addition, the appendices provide supplemental data on major disturbances and unusual occurrences in US electricity power systems. Each section contains related text and tables and refers the reader to the appropriate publication that contains more detailed data on the subject matter. Monetary values in this publication are expressed in nominal terms.
ERIC Educational Resources Information Center
Martin, Tammy Faith
2012-01-01
The purpose of this study was to examine principal leadership styles and their influence on school performance as measured by adequate yearly progress at selected Title I schools in South Carolina. The main focus of the research study was to complete descriptive statistics on principal leadership styles in schools that met or did not meet adequate…
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-01-01
Context: Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. Objective: To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Data Sources: Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. Study Selection: The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Data Extraction: Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Results: Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, –0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. Conclusions: Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up. PMID:23016017
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-05-01
Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, -0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up.
Masoumi, Hamid Reza Fard; Basri, Mahiran; Kassim, Anuar; Abdullah, Dzulkefly Kuang; Abdollahi, Yadollah; Abd Gani, Siti Salwa; Rezaee, Malahat
2013-01-01
Lipase-catalyzed production of a triethanolamine-based esterquat by esterification of oleic acid (OA) with triethanolamine (TEA) in n-hexane was performed in a 2 L stirred-tank reactor. A set of experiments was designed by central composite design for process modeling and statistical evaluation of the findings. Five independent process variables, including enzyme amount, reaction time, reaction temperature, substrate molar ratio of OA to TEA, and agitation speed, were studied under the conditions designed by the Design Expert software. Experimental data were examined for normality before the data processing stage, and skewness and kurtosis indices were determined. The mathematical model developed was found to be adequate and statistically accurate for predicting the optimum conversion of product. Response surface methodology with central composite design gave the best performance in this study, and the methodology as a whole has proven to be adequate for the design and optimization of the enzymatic process.
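A stripped-down version of the response-surface step is sketched below: a full quadratic model is fitted by least squares to a small designed data set and used to predict the response at a new factor setting. The two factors, their coded levels, and the responses are fabricated; the actual study used five factors and a central composite design generated in Design Expert.

```python
import numpy as np

# Fabricated two-factor design (coded levels) and measured conversions (%).
x1 = np.array([-1, -1,  1,  1, 0, 0, 0, -1.4, 1.4, 0, 0])
x2 = np.array([-1,  1, -1,  1, 0, 0, 0, 0, 0, -1.4, 1.4])
y  = np.array([62, 70, 68, 80, 85, 86, 84, 60, 75, 66, 78], dtype=float)

# Full quadratic response-surface model:
# y ~ b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(a, b):
    return coef @ np.array([1, a, b, a * b, a**2, b**2])

print(round(predict(0.3, 0.5), 1))   # predicted conversion at a new setting
```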
Experimental toxicology: Issues of statistics, experimental design, and replication.
Briner, Wayne; Kirwan, Jeral
2017-01-01
The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
ERIC Educational Resources Information Center
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated…
The Importance of Teaching Power in Statistical Hypothesis Testing
ERIC Educational Resources Information Center
Olinsky, Alan; Schumacher, Phyllis; Quinn, John
2012-01-01
In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
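The attenuation mechanism can be illustrated with a short simulation: pretest and posttest scores are drawn from a bivariate normal with a known correlation, the sample is then directly truncated on the pretest (selecting only low scorers, as in an intervention study), and the pretest-posttest correlation is re-estimated in the restricted sample. The cutoff and correlation below are arbitrary and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Unrestricted population: pretest/posttest correlation of 0.70.
cov = [[1.0, 0.7], [0.7, 1.0]]
pre, post = rng.multivariate_normal([0, 0], cov, size=100_000).T

# Direct range restriction: select students scoring below the 25th percentile at pretest.
cut = np.quantile(pre, 0.25)
selected = pre < cut

print(round(np.corrcoef(pre, post)[0, 1], 2))                      # ~0.70 unrestricted
print(round(np.corrcoef(pre[selected], post[selected])[0, 1], 2))  # noticeably attenuated
```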
New powerful statistics for alignment-free sequence comparison under a pattern transfer model.
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S; Sun, Fengzhu
2011-09-07
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2* and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2* and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. Copyright © 2011 Elsevier Ltd. All rights reserved.
New Powerful Statistics for Alignment-free Sequence Comparison Under a Pattern Transfer Model
Liu, Xuemei; Wan, Lin; Li, Jing; Reinert, Gesine; Waterman, Michael S.; Sun, Fengzhu
2011-01-01
Alignment-free sequence comparison is widely used for comparing gene regulatory regions and for identifying horizontally transferred genes. Recent studies on the power of a widely used alignment-free comparison statistic D2 and its variants D2∗ and D2s showed that their power approximates a limit smaller than 1 as the sequence length tends to infinity under a pattern transfer model. We develop new alignment-free statistics based on D2, D2∗ and D2s by comparing local sequence pairs and then summing over all the local sequence pairs of certain length. We show that the new statistics are much more powerful than the corresponding statistics and the power tends to 1 as the sequence length tends to infinity under the pattern transfer model. PMID:21723298
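The basic D2 statistic that the new local statistics build on is simply the inner product of word counts from the two sequences; a minimal k-mer-counting implementation is sketched below. The sequences and word length are arbitrary, and the centred variants D2* and D2s would additionally standardise the counts by their expected values, which is not shown here.

```python
from collections import Counter

def kmer_counts(seq, k):
    """Count all overlapping words of length k in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def d2(seq_a, seq_b, k):
    """D2: sum over all k-words of the product of their counts in the two sequences."""
    ca, cb = kmer_counts(seq_a, k), kmer_counts(seq_b, k)
    return sum(ca[w] * cb[w] for w in ca.keys() & cb.keys())

print(d2("ACGTACGTTGCA", "TTACGTACGAAC", k=3))
```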
Dark energy homogeneity in general relativity: Are we applying it correctly?
NASA Astrophysics Data System (ADS)
Duniya, Didam G. A.
2016-04-01
Thus far, there does not appear to be an agreed (or adequate) definition of homogeneous dark energy (DE). This paper seeks to define a valid, adequate homogeneity condition for DE. Firstly, it is shown that as long as w_x ≠ -1, DE must have perturbations. It is then argued, independent of w_x, that a correct definition of homogeneous DE is one whose density perturbation vanishes in comoving gauge: and hence, in the DE rest frame. Using phenomenological DE, the consequence of this approach is then investigated in the observed galaxy power spectrum—with the power spectrum being normalized on small scales, at the present epoch z=0. It is found that for high magnification bias, relativistic corrections in the galaxy power spectrum are able to distinguish the concordance model from both a homogeneous DE and a clustering DE—on super-horizon scales.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Lihua; Cui, Jingkun; Tang, Fengjiao
Purpose: Studies of the association between ataxia telangiectasia–mutated (ATM) gene polymorphisms and acute radiation injuries are often small in sample size, and the results are inconsistent. We conducted the first meta-analysis to provide a systematic review of published findings. Methods and Materials: Publications were identified by searching PubMed up to April 25, 2014. Primary meta-analysis was performed for all acute radiation injuries, and subgroup meta-analyses were based on clinical endpoint. The influence of sample size and radiation injury incidence on genetic effects was estimated in sensitivity analyses. Power calculations were also conducted. Results: The meta-analysis was conducted on the ATM polymorphism rs1801516, including 5 studies with 1588 participants. For all studies, the cut-off for differentiating cases from controls was grade 2 acute radiation injuries. The primary meta-analysis showed a significant association with overall acute radiation injuries (allelic model: odds ratio = 1.33, 95% confidence interval: 1.04-1.71). Subgroup analyses detected an association between the rs1801516 polymorphism and a significant increase in urinary and lower gastrointestinal injuries and an increase in skin injury that was not statistically significant. There was no between-study heterogeneity in any meta-analyses. In the sensitivity analyses, small studies did not show larger effects than large studies. In addition, studies with high incidence of acute radiation injuries showed larger effects than studies with low incidence. Power calculations revealed that the statistical power of the primary meta-analysis was borderline, whereas there was adequate power for the subgroup analysis of studies with high incidence of acute radiation injuries. Conclusions: Our meta-analysis showed a consistency of the results from the overall and subgroup analyses. We also showed that the genetic effect of the rs1801516 polymorphism on acute radiation injuries was dependent on the incidence of the injury. These support the evidence of an association between the rs1801516 polymorphism and acute radiation injuries, encouraging further research of this topic.
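The pooled allelic odds ratio reported above comes from a standard meta-analysis; a generic inverse-variance fixed-effect pooling of log odds ratios is sketched below with made-up study estimates, purely to show the mechanics. The per-study values are not the real data, which are in the cited publications.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical per-study odds ratios and 95% CIs (not the actual study data).
or_est = np.array([1.20, 1.45, 1.10, 1.60, 1.35])
ci_low = np.array([0.85, 1.02, 0.70, 0.95, 0.90])
ci_high = np.array([1.70, 2.06, 1.73, 2.70, 2.02])

log_or = np.log(or_est)
se = (np.log(ci_high) - np.log(ci_low)) / (2 * norm.ppf(0.975))

w = 1 / se**2                                  # inverse-variance weights
pooled = np.sum(w * log_or) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f}-{np.exp(pooled + 1.96 * pooled_se):.2f})")
```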
40 CFR 63.10897 - What are my monitoring requirements?
Code of Federal Regulations, 2012 CFR
2012-07-01
... controls for corona power and rapper operation, that the corona wires are energized, and that adequate air... determine the condition and integrity of corona wires, collection plates, hopper, and air diffuser plates... daily inspection to verify the proper functioning of the electronic controls for corona power and rapper...
40 CFR 63.10897 - What are my monitoring requirements?
Code of Federal Regulations, 2011 CFR
2011-07-01
... controls for corona power and rapper operation, that the corona wires are energized, and that adequate air... determine the condition and integrity of corona wires, collection plates, hopper, and air diffuser plates... daily inspection to verify the proper functioning of the electronic controls for corona power and rapper...
40 CFR 63.10897 - What are my monitoring requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... controls for corona power and rapper operation, that the corona wires are energized, and that adequate air... determine the condition and integrity of corona wires, collection plates, hopper, and air diffuser plates... daily inspection to verify the proper functioning of the electronic controls for corona power and rapper...
40 CFR 63.10897 - What are my monitoring requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... controls for corona power and rapper operation, that the corona wires are energized, and that adequate air... determine the condition and integrity of corona wires, collection plates, hopper, and air diffuser plates... daily inspection to verify the proper functioning of the electronic controls for corona power and rapper...
40 CFR 63.10897 - What are my monitoring requirements?
Code of Federal Regulations, 2010 CFR
2010-07-01
... controls for corona power and rapper operation, that the corona wires are energized, and that adequate air... determine the condition and integrity of corona wires, collection plates, hopper, and air diffuser plates... daily inspection to verify the proper functioning of the electronic controls for corona power and rapper...
Photovoltaic power systems workshop
NASA Technical Reports Server (NTRS)
Killian, H. J.; Given, R. W.
1978-01-01
Discussions are presented on apparent deficiencies in NASA planning and technology development relating to a standard power module (25-35 kW) and to future photovoltaic power systems in general. Topics of discussion consider the following: (1) adequate studies on power systems; (2) whether a standard power system module should be developed from a standard spacecraft; (3) identification of proper approaches to cost reduction; (4) energy storage avoidance; (5) attitude control; (6) thermal effects of heat rejection on solar array configuration stability; (7) assembly of large power systems in space; and (8) factoring terrestrial photovoltaic work into space power systems for possible payoff.
Power and efficiency of insect flight muscle.
Ellington, C P
1985-03-01
The efficiency and mechanical power output of insect flight muscle have been estimated from a study of hovering flight. The maximum power output, calculated from the muscle properties, is adequate for the aerodynamic power requirements. However, the power output is insufficient to oscillate the wing mass as well unless there is good elastic storage of the inertial energy, and this is consistent with reports of elastic components in the flight system. A comparison of the mechanical power output with the metabolic power input to the flight muscles suggests that the muscle efficiency is quite low: less than 10%.
Replication Unreliability in Psychology: Elusive Phenomena or “Elusive” Statistical Power?
Tressoldi, Patrizio E.
2012-01-01
The focus of this paper is to analyze whether the unreliability of results related to certain controversial psychological phenomena may be a consequence of their low statistical power. Under Null Hypothesis Statistical Testing (NHST), still the most widely used statistical approach, unreliability derives from the failure to refute the null hypothesis, in particular when exact or quasi-exact replications of experiments are carried out. Taking as examples the results of meta-analyses related to four different controversial phenomena, subliminal semantic priming, incubation effect for problem solving, unconscious thought theory, and non-local perception, it was found that, except for semantic priming on categorization, the statistical power to detect the expected effect size (ES) of the typical study is low or very low. The low power in most studies undermines the use of NHST to study phenomena with moderate or low ESs. We conclude by providing some suggestions on how to increase the statistical power or use different statistical approaches to help discriminate whether the results obtained may or may not be used to support or to refute the reality of a phenomenon with small ES. PMID:22783215
Biostatistics Series Module 5: Determining Sample Size
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 − β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the principles are long known, historically, sample size determination has been difficult, because of relatively complex mathematical considerations and numerous different formulas. However, of late, there has been remarkable improvement in the availability, capability, and user-friendliness of power and sample size determination software. Many can execute routines for determination of sample size and power for a wide variety of research designs and statistical tests. With the drudgery of mathematical calculation gone, researchers must now concentrate on determining appropriate sample size and achieving these targets, so that study conclusions can be accepted as meaningful. PMID:27688437
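Modern power and sample size software wraps exactly this calculation. As a small illustration (not tied to the module above), the statsmodels routine below reproduces the conventional 80% power, two-sided 5% alpha scenario for an assumed standardized effect size of 0.5; the effect size is chosen purely for demonstration.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size needed to detect a standardized effect of 0.5
# with 80% power at a two-sided alpha of 0.05.
n_per_group = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(round(n_per_group))          # ~64 per group

# Conversely, the power achieved with 50 participants per group.
print(round(analysis.solve_power(effect_size=0.5, nobs1=50, alpha=0.05), 2))
```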
Ioannou, Christopher; Knight, Matthew; Daniele, Luca; Flueckiger, Lee; Tan, Ezekiel S L
2016-10-17
The objective of this study is to analyse the effectiveness of the surgical torque limiter during operative use. The study also investigates the potential differences in torque between hand and drill-based screw insertion into locking plates using a standardised torque limiter. Torque for both hand and power screw insertion was measured through a load cell, registering 6.66 points per second. This was performed in a controlled environment using synthetic bone, a locking plate and locking screws to simulate plate fixation. Screws were inserted by hand and by drill with torque values measured. The surgical torque limiter (1.5 Nm) was effective as the highest recorded reading in the study was 1.409 Nm. Comparatively, there is a statistically significant difference between screw insertion methods. Torque produced for manually driven screw insertion into locking plates was 1.289 Nm (95 % CI 1.269-1.308) with drill-powered screw insertion at 0.740 Nm (95 % CI 0.723-0.757). The surgical torque limiter proved to be effective as per product specifications. Screws inserted under power produce significantly less torque when compared to manual insertion by hand. This is likely related to the mechanism of the torque limiter when being used at higher speeds for which it was designed. We conclude that screws may be inserted using power to the plate with the addition of a torque limiter. It is recommended that all screws inserted by drill be hand tightened to achieve adequate torque values.
Kids Count: The State of the Child in Tennessee, 1996. A County-by-County Statistical Report.
ERIC Educational Resources Information Center
Tennessee State Commission on Children and Youth, Nashville.
This Kids Count report examines statewide trends from 1992 to 1996 in the well being of Tennessee's children. The statistical portrait is based on trends in 16 indicators of child well being: (1) enrollment in state health insurance program; (2) births lacking adequate prenatal care; (3) low-birthweight births; (4) infant mortality rate; (5) child…
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
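Power computations of this kind can also be scripted directly. For the simplest two-level case (clusters randomized to two arms), a normal-approximation sketch is shown below: the standardized effect is divided by the design-effect-inflated standard error and compared with the normal critical value. The intraclass correlation, cluster size, and number of clusters are illustrative values, and published tables use the exact noncentral t distribution rather than this approximation.

```python
from math import sqrt
from scipy.stats import norm

def cluster_power(delta, icc, n_per_cluster, clusters_per_arm, alpha=0.05):
    """Approximate power for a two-arm cluster-randomized design (normal approximation)."""
    deff = 1 + (n_per_cluster - 1) * icc                # design effect
    se = sqrt(2 * deff / (clusters_per_arm * n_per_cluster))
    ncp = delta / se                                    # noncentrality for standardized effect delta
    return norm.cdf(ncp - norm.ppf(1 - alpha / 2))

# Illustrative: effect size 0.25, ICC 0.10, 20 students per classroom, 15 classrooms per arm.
print(round(cluster_power(0.25, 0.10, 20, 15), 2))
```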
Topological signatures of interstellar magnetic fields - I. Betti numbers and persistence diagrams
NASA Astrophysics Data System (ADS)
Makarenko, Irina; Shukurov, Anvar; Henderson, Robin; Rodrigues, Luiz F. S.; Bushby, Paul; Fletcher, Andrew
2018-04-01
The interstellar medium (ISM) is a magnetized system in which transonic or supersonic turbulence is driven by supernova explosions. This leads to the production of intermittent, filamentary structures in the ISM gas density, whilst the associated dynamo action also produces intermittent magnetic fields. The traditional theory of random functions, restricted to second-order statistical moments (or power spectra), does not adequately describe such systems. We apply topological data analysis (TDA), sensitive to all statistical moments and independent of the assumption of Gaussian statistics, to the gas density fluctuations in a magnetohydrodynamic simulation of the multiphase ISM. This simulation admits dynamo action, so produces physically realistic magnetic fields. The topology of the gas distribution, with and without magnetic fields, is quantified in terms of Betti numbers and persistence diagrams. Like the more standard correlation analysis, TDA shows that the ISM gas density is sensitive to the presence of magnetic fields. However, TDA gives us important additional information that cannot be obtained from correlation functions. In particular, the Betti numbers per correlation cell are shown to be physically informative. Magnetic fields make the ISM more homogeneous, reducing the abundance of both isolated gas clouds and cavities, with a stronger effect on the cavities. Remarkably, the modification of the gas distribution by magnetic fields is captured by the Betti numbers even in regions more than 300 pc from the mid-plane, where the magnetic field is weaker and correlation analysis fails to detect any signatures of magnetic effects.
Lotfipour, Farzaneh; Valizadeh, Hadi; Shademan, Shahin; Monajjemzadeh, Farnaz
2015-01-01
One of the most significant issues in the pharmaceutical industry, prior to commercialization of a pharmaceutical preparation, is the "preformulation" stage. However, far too little attention has been paid to verification of software-assisted statistical designs in preformulation studies. The main aim of this study was to report a step-by-step preformulation approach for a semisolid preparation based on a statistical mixture design and to verify the predictions made by the software with an in-vitro efficacy bioassay test. An extreme vertices mixture design (4 factors, 4 levels) was applied for preformulation of a semisolid povidone-iodine preparation as a water-removable ointment using different polyethylene glycols. Software-assisted (Minitab) analysis was then performed using four practically assessed response values: available iodine, viscosity (N index and yield value), and water absorption capacity. Subsequently, mixture analysis was performed and, finally, an optimized formulation was proposed. The efficacy of this formulation was bioassayed using microbial tests in vitro, and MIC values were calculated for Escherichia coli, Pseudomonas aeruginosa, Staphylococcus aureus, and Candida albicans. Results indicated acceptable conformity of the measured responses. Thus, it can be concluded that the proposed design had adequate power to predict the responses in practice. Stability studies showed no significant change for the optimized formulation during the one-year study. Efficacy was acceptable against all tested species, and in the case of Staphylococcus aureus, the prepared semisolid formulation was even more effective. PMID:26664368
A Bifactor Approach to Model Multifaceted Constructs in Statistical Mediation Analysis.
Gonzalez, Oscar; MacKinnon, David P
Statistical mediation analysis allows researchers to identify the most important mediating constructs in the causal process studied. Identifying specific mediators is especially relevant when the hypothesized mediating construct consists of multiple related facets. The general definition of the construct and its facets might relate differently to an outcome. However, current methods do not allow researchers to study the relationships between general and specific aspects of a construct and an outcome simultaneously. This study proposes a bifactor measurement model for the mediating construct as a way to parse variance and represent the general aspect and specific facets of a construct simultaneously. Monte Carlo simulation results are presented to help determine the properties of mediated effect estimation when the mediator has a bifactor structure and a specific facet of a construct is the true mediator. This study also investigates the conditions under which researchers can detect the mediated effect when the multidimensionality of the mediator is ignored and the mediator is treated as unidimensional. Simulation results indicated that the mediation model with a bifactor mediator measurement model yielded unbiased estimates and adequate power to detect the mediated effect with a sample size greater than 500 and medium a- and b-paths. Also, results indicate that parameter bias and detection of the mediated effect in both the data-generating model and the misspecified model vary as a function of the amount of facet variance represented in the mediation model. This study contributes to the largely unexplored area of measurement issues in statistical mediation analysis.
A new u-statistic with superior design sensitivity in matched observational studies.
Rosenbaum, Paul R
2011-09-01
In an observational or nonrandomized study of treatment effects, a sensitivity analysis indicates the magnitude of bias from unmeasured covariates that would need to be present to alter the conclusions of a naïve analysis that presumes adjustments for observed covariates suffice to remove all bias. The power of sensitivity analysis is the probability that it will reject a false hypothesis about treatment effects allowing for a departure from random assignment of a specified magnitude; in particular, if this specified magnitude is "no departure" then this is the same as the power of a randomization test in a randomized experiment. A new family of u-statistics is proposed that includes Wilcoxon's signed rank statistic but also includes other statistics with substantially higher power when a sensitivity analysis is performed in an observational study. Wilcoxon's statistic has high power to detect small effects in large randomized experiments-that is, it often has good Pitman efficiency-but small effects are invariably sensitive to small unobserved biases. Members of this family of u-statistics that emphasize medium to large effects can have substantially higher power in a sensitivity analysis. For example, in one situation with 250 pair differences that are Normal with expectation 1/2 and variance 1, the power of a sensitivity analysis that uses Wilcoxon's statistic is 0.08 while the power of another member of the family of u-statistics is 0.66. The topic is examined by performing a sensitivity analysis in three observational studies, using an asymptotic measure called the design sensitivity, and by simulating power in finite samples. The three examples are drawn from epidemiology, clinical medicine, and genetic toxicology. © 2010, The International Biometric Society.
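As the abstract notes, when the allowed departure from random assignment is "no departure", the power of a sensitivity analysis reduces to the power of the usual randomization test. That baseline case is easy to simulate for the example quoted above (250 pair differences, Normal with mean 1/2 and variance 1); the sketch below estimates it with scipy's signed-rank test. Computing the sensitivity-analysis power at larger bias allowances requires the bounding arguments developed in the paper and is not shown here.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)

def wilcoxon_power(n=250, mean=0.5, sd=1.0, alpha=0.05, reps=1_000):
    rejections = 0
    for _ in range(reps):
        d = rng.normal(mean, sd, n)                  # matched-pair differences
        if wilcoxon(d).pvalue < alpha:
            rejections += 1
    return rejections / reps

# With no hidden bias (the randomized-experiment case), power is essentially 1 here;
# the 0.08 figure in the abstract refers to a sensitivity analysis allowing for bias.
print(wilcoxon_power())
```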
The Association of Health Literacy with the Management of Type 2 Diabetes
NASA Astrophysics Data System (ADS)
Kumar, Samita
Introduction: Type 2 Diabetes (T2D) is a chronic metabolic disease characterized by high blood glucose levels. It is associated with microvascular and macrovascular complications which can lead to serious outcomes such as amputation and even death. The irony of the disease is that these complications are preventable with appropriate treatment and self-management. The Emergency Medicine Department (ED) at the University of Southwestern Medical Center conducted this study to assess health literacy in Parkland Memorial Hospital patients with T2D. The objective of the research study was to assess the association of health literacy with management of T2D. Methods: This was a prospective study with collection of personal health information (PHI) and 30-day follow-up for ED recidivism for patients with T2D presenting to the ED with diabetic complications. Eligibility was assessed by pre-screening via EPIC (the electronic medical record system for Parkland). The tool for measuring health literacy was the Short Assessment of Health Literacy (SAHL), and data were collected. The cut-off used for the SAHL to determine adequate or inadequate health literacy was 15; low health literacy is defined as a score of <15 on the SAHL scale. Results: The total number of subjects enrolled was 23, with 43.48% males and 56.52% females who spoke either Spanish or English. Mean age of the subjects was 50 years with a standard deviation of 10 years. About 74% were white Hispanic males. According to the data collected, 30% of the patients demonstrated inadequate health literacy based on the SAHL score survey. The total number of subjects required to have adequate power was 400. Since the study could not reach adequate power due to low enrollment, no significant associations could be drawn from this small sample size. Conclusions: Given the low enrollment at this time, the recommendation is to continue collecting data to obtain a larger sample size that would allow the observation of statistically relevant associations. If any statistically significant associations are found, then future studies will focus on improving diabetes outcomes through the development of educational tools at the individual patient's appropriate literacy level. There are many reasons to improve diabetes care and explore all possible factors that contribute to poor outcomes. Millions of people are living with uncontrolled diabetes, and the burden is not only on the patient but also on the community as a whole. Quality care should aim for improved benchmarks for patients with diabetes and their knowledge about the disease, such as 1) obtaining HbA1c levels below 8%, 2) blood pressure in the normal range, 3) having regular foot exams to keep a check on any developing signs of pressure sores, and 4) most importantly, having a dilated eye exam on a regular basis.
Is Vertical Jump Height an Indicator of Athletes' Power Output in Different Sport Modalities?
Kons, Rafael L; Ache-Dias, Jonathan; Detanico, Daniele; Barth, Jonathan; Dal Pupo, Juliano
2018-03-01
Kons, RL, Ache-Dias, J, Detanico, D, Barth, J, and Dal Pupo, J. Is vertical jump height an indicator of athletes' power output in different sports modalities? J Strength Cond Res 32(3): 708-715, 2018-This study aimed to identify whether the ratio standard is adequate for the scaling of peak power output (PPO) for body mass (BM) in athletes of different sports and to verify classification agreement for athletes involved in different sports using PPO scaled for BM and jump height (JH). One hundred and twenty-four male athletes divided into 3 different groups-combat sports, team sports, and runners-participated in this study. Participants performed the countermovement jump on a force plate. Peak power output and JH were calculated from the vertical ground reaction force. We found different allometric exponents for each modality, allowing the use of the ratio standard for team sports. For combat sports and runners, the ratio standard was not considered adequate, and therefore, a specific allometric exponent for these 2 groups was found. Significant correlations between adjusted PPO for BM (PPOADJ) and JH were found for all modalities, but it was higher for runners (r = 0.81) than team and combat sports (r = 0.63 and 0.65, respectively). Moderate agreement generated by the PPOADJ and JH was verified in team sports (k = 0.47) and running (k = 0.55) and fair agreement in combat sports (k = 0.29). We conclude that the ratio standard seems to be suitable only for team sports; for runners and combat sports, an allometric model seems adequate. The use of JH as an indicator of power output may be considered reasonable only for runners.
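The allometric adjustment described above amounts to dividing peak power by body mass raised to a group-specific exponent, with the exponent estimated from a log-log regression. A minimal sketch with fabricated data is shown below; the body masses, power outputs, and resulting exponent are not the study's values.

```python
import numpy as np

rng = np.random.default_rng(6)

# Fabricated athletes: body mass (kg) and countermovement-jump peak power (W).
bm = rng.uniform(60, 100, 40)
ppo = 55 * bm**0.85 * rng.lognormal(0, 0.05, 40)

# Allometric exponent b from the log-log regression log(PPO) = log(a) + b*log(BM).
b, log_a = np.polyfit(np.log(bm), np.log(ppo), 1)
ppo_adj = ppo / bm**b                     # power output adjusted for body mass

print(round(b, 2))                                 # estimated exponent (true value here: 0.85)
print(round(np.corrcoef(ppo_adj, bm)[0, 1], 2))    # near zero: adjustment removes the BM effect
```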
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled from a two-state Markov chain where bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation due to the statistical calculation complexity to find a radiated power probability density function.
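The bit-state modelling step can be illustrated with a tiny example: given transition probabilities for a two-state (0/1) Markov chain, the long-run bit probabilities follow from the left eigenvector of the transition matrix. The transition probabilities below are invented; the report derives the probabilities appropriate to its interface signals.

```python
import numpy as np

# Hypothetical two-state Markov chain for a data line: states 0 and 1.
# P[i, j] = probability of moving from bit state i to bit state j.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Stationary distribution: left eigenvector of P associated with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()

print(pi)   # long-run probabilities of observing a 0 or a 1 on the line
```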
Turner, Katrina; McCarthy, Valerie Lander
2017-01-01
Undergraduate nursing students experience significant stress and anxiety, inhibiting learning and increasing attrition. Twenty-six intervention studies were identified and evaluated, updating a previous systematic review which categorized interventions targeting: (1) stressors, (2) coping, or (3) appraisal. The majority of interventions in this review aimed to reduce numbers or intensity of stressors through curriculum development (12) or to improve students' coping skills (8). Two studies reported interventions using only cognitive reappraisal while three interventions combined reappraisal with other approaches. Strength of evidence was limited by choice of study design, sample size, and lack of methodological rigor. Some statistically significant support was found for interventions focused on reducing stressors through curriculum development or improving students' coping skills. No statistically significant studies using reappraisal, either alone or in combination with other approaches, were identified, although qualitative findings suggested the potential benefits of this approach do merit further study. Progress was noted since 2008 in the increased number of studies and greater use of validated outcome measures but the review concluded further methodologically sound, adequately powered studies, especially randomized controlled trials, are needed to determine which interventions are effective to address the issue of excessive stress and anxiety among undergraduate nursing students. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Chapman, C. P.; Chapman, P. D.; Lewison, A. H.
1982-01-01
A low power photovoltaic system was constructed with approximately 500 amp hours of battery energy storage to provide power to an emergency amateur radio communications center. The system can power the communications center for about 72 hours of continuous nonsun operation. Complete construction details and a design methodology algorithm are given with abundant engineering data and adequate theory to allow similar systems to be constructed, scaled up or down, with minimum design effort.
Evaluation of Contribution for Voltage Control Ancillary Services Based on Social Surplus
NASA Astrophysics Data System (ADS)
Ueki, Yuji; Hara, Ryoichi; Kita, Hiroyuki; Hasegawa, Jun
Reactive power supply plays an important role in delivering active power at adequate system voltages. Various pricing mechanisms for reactive power supply have been developed, and some of them have been adopted in power systems; however, they remain at a trial stage. The authors also focus on the development of a pricing method for reactive power ancillary services. This problem involves two technical issues: rational estimation of the cost associated with reactive power supply, and fair and transparent allocation of the estimated cost among the market participants. This paper proposes methods for evaluating the contribution of generators and demands.
Relative risk estimates from spatial and space-time scan statistics: Are they biased?
Prates, Marcos O.; Kulldorff, Martin; Assunção, Renato M.
2014-01-01
The purely spatial and space-time scan statistics have been successfully used by many scientists to detect and evaluate geographical disease clusters. Although the scan statistic has high power in correctly identifying a cluster, no study has considered the estimates of the cluster relative risk in the detected cluster. In this paper we evaluate whether there is any bias in these estimated relative risks. Intuitively, one may expect that the estimated relative risks have an upward bias, since the scan statistic cherry-picks high-rate areas to include in the cluster. We show that this intuition is correct for clusters with low statistical power, but with medium to high power the bias becomes negligible. The same behaviour is not observed for the prospective space-time scan statistic, where there is an increasingly conservative downward bias of the relative risk as the power to detect the cluster increases. PMID:24639031
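The Poisson likelihood-ratio statistic that the spatial scan maximizes over candidate zones has a simple closed form; the sketch below evaluates it, together with the in-cluster relative-risk estimate, for one hypothetical candidate cluster. The case counts and expected counts are fabricated for illustration only.

```python
import numpy as np

def poisson_scan_llr(c, e_c, C, E):
    """Log likelihood ratio for a candidate cluster with c of C cases and e_c of E expected."""
    if c / e_c <= (C - c) / (E - e_c):      # only elevated-risk clusters are of interest
        return 0.0
    return (c * np.log(c / e_c)
            + (C - c) * np.log((C - c) / (E - e_c)))

# Hypothetical region: 500 cases overall, 60 of them inside a zone expected to have 35.
c, e_c, C, E = 60, 35.0, 500, 500.0
print(round(poisson_scan_llr(c, e_c, C, E), 2))
print(round((c / e_c) / ((C - c) / (E - e_c)), 2))   # estimated relative risk inside vs outside
```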
78 FR 46549 - Approval and Promulgation of Implementation Plans; Idaho: State Board Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-01
... such board or body or the head of an executive agency with similar powers be adequately disclosed.'' 42... Requirements Idaho Code Sec. 39-107, Board--Composition--Officers-- Compensation--Powers--Subpoena--Depositions... regard to their knowledge of and interest in solid waste; two (2) members shall be chosen for their...
The Power of the Test for Treatment Effects in Three-Level Block Randomized Designs
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2008-01-01
Experiments that involve nested structures may assign treatment conditions either to subgroups (such as classrooms) or individuals within subgroups (such as students). The design of such experiments requires knowledge of the intraclass correlation structure to compute the sample sizes necessary to achieve adequate power to detect the treatment…
reflect it. There are commercially available manual and powered suction devices on the market, and several are specifically advertised for use in...combine to suggest that no device on the market meets even the most basic requirements of being small, lightweight, rugged, and demonstrating adequate
The Checkered History of American Psychiatric Epidemiology
Horwitz, Allan V; Grob, Gerald N
2011-01-01
Context American psychiatry has been fascinated with statistics ever since the specialty was created in the early nineteenth century. Initially, psychiatrists hoped that statistics would reveal the benefits of institutional care. Nevertheless, their fascination with statistics was far removed from the growing importance of epidemiology generally. The impetus to create an epidemiology of mental disorders came from the emerging social sciences, whose members were concerned with developing a scientific understanding of individual and social behavior and applying it to a series of pressing social problems. Beginning in the 1920s, the interest of psychiatric epidemiologists shifted to the ways that social environments contributed to the development of mental disorders. This emphasis dramatically changed after 1980 when the policy focus of psychiatric epidemiology became the early identification and prevention of mental illness in individuals. Methods This article reviews the major developments in psychiatric epidemiology over the past century and a half. Findings The lack of an adequate classification system for mental illness has precluded the field of psychiatric epidemiology from providing causal understandings that could contribute to more adequate policies to remediate psychiatric disorders. Because of this gap, the policy influence of psychiatric epidemiology has stemmed more from institutional and ideological concerns than from knowledge about the causes of mental disorders. Conclusion Most of the problems that have bedeviled psychiatric epidemiology since its inception remain unresolved. In particular, until epidemiologists develop adequate methods to measure mental illnesses in community populations, the policy contributions of this field will not be fully realized. PMID:22188350
Hot rocket plume experiment - Survey and conceptual design. [of rhenium-iridium bipropellants
NASA Technical Reports Server (NTRS)
Millard, Jerry M.; Luan, Taylor W.; Dowdy, Mack W.
1992-01-01
Attention is given to a space-borne engine plume experiment study to fly an experiment which will both verify and quantify the reduced contamination from advanced rhenium-iridium earth-storable bipropellant rockets (hot rockets) and provide a correlation between high-fidelity, in-space measurements and theoretical plume and surface contamination models. The experiment conceptual design is based on survey results from plume and contamination technologists throughout the U.S. With respect to shuttle use, cursory investigations validate Hitchhiker availability and adaptability, adequate remote manipulator system (RMS) articulation and dynamic capability, acceptable RMS attachment capability, adequate power and telemetry capability, and adequate flight altitude and attitude/orbital capability.
Jan, Show-Li; Shieh, Gwowen
2016-08-31
The 2 × 2 factorial design is widely used for assessing the existence of interaction and the extent of generalizability of two factors where each factor has only two levels. Accordingly, research problems associated with the main effects and interaction effects can be analyzed with the selected linear contrasts. To correct for the potential heterogeneity of variance structure, the Welch-Satterthwaite test is commonly used as an alternative to the t test for detecting the substantive significance of a linear combination of mean effects. This study concerns the optimal allocation of group sizes for the Welch-Satterthwaite test in order to minimize the total cost while maintaining adequate power. The existing method suggests that the optimal ratio of sample sizes is proportional to the ratio of the population standard deviations divided by the square root of the ratio of the unit sampling costs. Instead, a systematic approach using optimization technique and screening search is presented to find the optimal solution. Numerical assessments revealed that the current allocation scheme generally does not give the optimal solution. Alternatively, the suggested approaches to power and sample size calculations give accurate and superior results under various treatment and cost configurations. The proposed approach improves upon the current method in both its methodological soundness and overall performance. Supplementary algorithms are also developed to aid the usefulness and implementation of the recommended technique in planning 2 × 2 factorial designs.
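A minimal sketch of the existing allocation heuristic described above (n1/n2 proportional to (sd1/sd2)/sqrt(c1/c2)) together with the corresponding Welch-Satterthwaite power calculation. The means, standard deviations, and unit costs below are hypothetical, and this is not the authors' optimization algorithm.

```python
from math import sqrt
from scipy import stats

def welch_power(n1, n2, delta, sd1, sd2, alpha=0.05):
    """Two-sided power of the Welch-Satterthwaite t test for a mean difference delta."""
    v1, v2 = sd1**2 / n1, sd2**2 / n2
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))   # Welch-Satterthwaite df
    nc = delta / sqrt(v1 + v2)                                   # noncentrality parameter
    tcrit = stats.t.ppf(1 - alpha / 2, df)
    return 1 - stats.nct.cdf(tcrit, df, nc) + stats.nct.cdf(-tcrit, df, nc)

# Existing allocation heuristic: n1/n2 = (sd1/sd2) / sqrt(c1/c2)
sd1, sd2, c1, c2 = 10.0, 20.0, 1.0, 4.0        # hypothetical SDs and unit sampling costs
ratio = (sd1 / sd2) / sqrt(c1 / c2)
n2 = 40
n1 = max(2, round(ratio * n2))
print(n1, n2, round(welch_power(n1, n2, delta=8.0, sd1=sd1, sd2=sd2), 3))
```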
Westfall, Jacob; Kenny, David A; Judd, Charles M
2014-10-01
Researchers designing experiments in which a sample of participants responds to a sample of stimuli are faced with difficult questions about optimal study design. The conventional procedures of statistical power analysis fail to provide appropriate answers to these questions because they are based on statistical models in which stimuli are not assumed to be a source of random variation in the data, models that are inappropriate for experiments involving crossed random factors of participants and stimuli. In this article, we present new methods of power analysis for designs with crossed random factors, and we give detailed, practical guidance to psychology researchers planning experiments in which a sample of participants responds to a sample of stimuli. We extensively examine 5 commonly used experimental designs, describe how to estimate statistical power in each, and provide power analysis results based on a reasonable set of default parameter values. We then develop general conclusions and formulate rules of thumb concerning the optimal design of experiments in which a sample of participants responds to a sample of stimuli. We show that in crossed designs, statistical power typically does not approach unity as the number of participants goes to infinity but instead approaches a maximum attainable power value that is possibly small, depending on the stimulus sample. We also consider the statistical merits of designs involving multiple stimulus blocks. Finally, we provide a simple and flexible Web-based power application to aid researchers in planning studies with samples of stimuli.
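One way to see why power is bounded in crossed designs is a variance decomposition of the estimated condition effect. The sketch below uses generic variance components (participant-by-condition, stimulus-by-condition, residual) and is only an illustration of the argument, not the specific formulas derived in the article.

```latex
% Illustrative variance decomposition for a crossed participants-by-stimuli design,
% with p participants, q stimuli, effect size d (generic components, not the article's exact derivation)
\[
  \mathrm{Var}(\hat{d}) \;\approx\;
  \frac{\sigma^2_{P\times C}}{p} + \frac{\sigma^2_{S\times C}}{q} + \frac{\sigma^2_{E}}{pq},
  \qquad
  \lambda = \frac{d}{\sqrt{\mathrm{Var}(\hat{d})}} .
\]
% As p grows without bound, only the terms in 1/p vanish:
\[
  \lim_{p \to \infty} \lambda \;=\; \frac{d\,\sqrt{q}}{\sigma_{S\times C}},
\]
% so power converges to a ceiling set by the stimulus sample size q rather than to 1.
```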
Monte Carlo based statistical power analysis for mediation models: methods and software.
Zhang, Zhiyong
2014-12-01
The existing literature on statistical power analysis for mediation models often assumes data normality and is based on a less powerful Sobel test instead of the more powerful bootstrap test. This study proposes to estimate statistical power to detect mediation effects on the basis of the bootstrap method through Monte Carlo simulation. Nonnormal data with excessive skewness and kurtosis are allowed in the proposed method. A free R package called bmem is developed to conduct the power analysis discussed in this study. Four examples, including a simple mediation model, a multiple-mediator model with a latent mediator, a multiple-group mediation model, and a longitudinal mediation model, are provided to illustrate the proposed method.
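A minimal Python sketch of the Monte Carlo idea for a simple mediation model. This is not the bmem R package; the path coefficients, sample size, and replication counts are illustrative assumptions. Data are simulated from x -> m -> y, the indirect effect a*b is bootstrapped, and power is the proportion of replications whose bootstrap confidence interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(1)

def indirect(x, m, y):
    """OLS estimates of a (x -> m) and b (m -> y, controlling for x); returns a*b."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return a * beta[2]

def mediation_power(n=100, a=0.3, b=0.3, reps=200, boot=500, alpha=0.05):
    """Monte Carlo power of the percentile-bootstrap test of the indirect effect.
    Modest reps/boot counts are used for illustration; increase for stable estimates."""
    hits = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        m = a * x + rng.normal(size=n)
        y = b * m + rng.normal(size=n)
        ab = np.empty(boot)
        for j in range(boot):
            idx = rng.integers(0, n, n)           # nonparametric bootstrap resample
            ab[j] = indirect(x[idx], m[idx], y[idx])
        lo, hi = np.percentile(ab, [100 * alpha / 2, 100 * (1 - alpha / 2)])
        hits += (lo > 0) or (hi < 0)              # CI excludes zero -> "significant"
    return hits / reps

print(mediation_power())
```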
Pérez-Báez, Wendy; García-Latorre, Ethel A; Maldonado-Martínez, Héctor Aquiles; Coronado-Martínez, Iris; Flores-García, Leonardo; Taja-Chayeb, Lucía
2017-10-01
Treatment in metastatic colorectal cancer (mCRC) has expanded with monoclonal antibodies targeting epidermal growth factor receptor, but is restricted to patients with a wild-type (WT) KRAS mutational status. The most sensitive assays for KRAS mutation detection in formalin-fixed paraffin embedded (FFPE) tissues are based on real-time PCR. Among them, high resolution melting analysis (HRMA) is a simple, fast, highly sensitive, specific and cost-effective method, proposed as an adjunct for KRAS mutation detection. However, the method used to categorize WT vs mutant sequences in HRMA is not clearly specified in available studies, and the impact of FFPE artifacts on HRMA performance has not been addressed either. Avowedly adequate samples from 104 consecutive mCRC patients were tested for KRAS mutations by Therascreen™ (FDA-validated test), HRMA, and HRMA with UDG pre-treatment to reverse FFPE fixation artifacts. Comparisons of KRAS status allocation among the three methods were done. Focusing on HRMA as a screening test, ROC curve analyses were performed for HRMA and HRMA-UDG against Therascreen™, in order to evaluate their discriminative power and to determine the threshold of profile concordance between WT control and sample for KRAS status determination. Comparing HRMA and HRMA-UDG against Therascreen™ as surrogate gold standard, sensitivity was 1 for both HRMA and HRMA-UDG; specificity and positive predictive values were, respectively, 0.838 and 0.939, and 0.777 and 0.913. As evaluated by the McNemar test, HRMA-UDG allocated samples to a WT/mutated genotype in a significantly different way from HRMA (p < 0.001). On the other hand, HRMA-UDG did not differ from Therascreen™ (p = 0.125). ROC-curve analysis showed significant discriminative power for both HRMA and HRMA-UDG against Therascreen™ (respectively, AUC of 0.978, p < 0.0001, 95% CI 0.957-0.999; and AUC of 0.98, p < 0.0001, 95% CI 0.000-1.0). For HRMA as a screening tool, the best threshold (degree of concordance between sample curves and WT control) was attained at 92.14% for HRMA (specificity of 0.887), and at 92.55% for HRMA-UDG (specificity of 0.952). HRMA is a highly sensitive method for KRAS mutation detection, with apparently adequate and statistically significant discriminative power. FFPE sample fixation artifacts have an impact on HRMA results, so pre-treatment with UDG is strongly suggested for HRMA on FFPE samples. The choice of the threshold for melting curve concordance also has a great impact on HRMA performance. A threshold of 93% or greater might be adequate if using HRMA as a screening tool. Further validation of this threshold is required. Copyright © 2017 Elsevier Ltd. All rights reserved.
Quantum fluctuation theorems and power measurements
NASA Astrophysics Data System (ADS)
Prasanna Venkatesh, B.; Watanabe, Gentaro; Talkner, Peter
2015-07-01
Work in the paradigm of the quantum fluctuation theorems of Crooks and Jarzynski is determined by projective measurements of energy at the beginning and end of the force protocol. In analogy to classical systems, we consider an alternative definition of work given by the integral of the supplied power determined by integrating up the results of repeated measurements of the instantaneous power during the force protocol. We observe that such a definition of work, in spite of taking account of the process dependence, has different possible values and statistics from the work determined by the conventional two energy measurement approach (TEMA). In the limit of many projective measurements of power, the system’s dynamics is frozen in the power measurement basis due to the quantum Zeno effect leading to statistics only trivially dependent on the force protocol. In general the Jarzynski relation is not satisfied except for the case when the instantaneous power operator commutes with the total Hamiltonian at all times. We also consider properties of the joint statistics of power-based definition of work and TEMA work in protocols where both values are determined. This allows us to quantify their correlations. Relaxing the projective measurement condition, weak continuous measurements of power are considered within the stochastic master equation formalism. Even in this scenario the power-based work statistics is in general not able to reproduce qualitative features of the TEMA work statistics.
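For reference, the conventional two-energy-measurement (TEMA) definition of work and the Jarzynski equality it satisfies can be written as follows; these are the standard forms, not the power-measurement-based quantities studied in the article.

```latex
% TEMA work for one realization of the protocol, with E_n(0) and E_m(\tau)
% the measured initial and final energy eigenvalues:
\[
  W_{\mathrm{TEMA}} \;=\; E_m(\tau) - E_n(0).
\]
% Jarzynski equality, with \beta = 1/(k_B T) and \Delta F the free-energy difference:
\[
  \langle e^{-\beta W_{\mathrm{TEMA}}} \rangle \;=\; e^{-\beta \Delta F}.
\]
```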
Barber, Julie A; Thompson, Simon G
1998-01-01
Objective To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. Key messages: Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. In at least two thirds of the papers, the main conclusions regarding costs were not justified. The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving. PMID:9794854
Statistical Power of Psychological Research: What Have We Gained in 20 Years?
ERIC Educational Resources Information Center
Rossi, Joseph S.
1990-01-01
Calculated power for 6,155 statistical tests in 221 journal articles published in 1982 volumes of "Journal of Abnormal Psychology,""Journal of Consulting and Clinical Psychology," and "Journal of Personality and Social Psychology." Power to detect small, medium, and large effects was .17, .57, and .83, respectively. Concluded that power of…
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
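As a minimal counterpart to the interactive and SAS programs mentioned above, the statsmodels package can solve the standard two-sample t-test power relations; the effect size and group sizes below are placeholder values, not figures from the paper.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Per-group sample size to detect a standardized effect of d = 0.5
# with 80% power at a two-sided alpha of 0.05
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(round(n_per_group))   # ~64 per group

# Conversely, the power achieved with 20 animals per group for the same effect
power = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05)
print(round(power, 2))
```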
25 CFR 700.267 - Disclosure of records.
Code of Federal Regulations, 2010 CFR
2010-04-01
... system in which the record is maintained with advance adequate written assurance that the record will be used solely as a statistical research or reporting record, and the record is to be transferred in a...
28 CFR 512.15 - Access to Bureau of Prisons records.
Code of Federal Regulations, 2010 CFR
2010-07-01
... may receive records in a form not individually identifiable when advance adequate written assurance that the record will be used solely as a statistical research or reporting record is provided to the...
43 CFR 2.56 - Disclosure of records.
Code of Federal Regulations, 2010 CFR
2010-10-01
... responsible for the system in which the record is maintained with advance adequate written assurance that the record will be used solely as a statistical research or reporting record, and the record is to be...
4 CFR 83.4 - Conditions of disclosure.
Code of Federal Regulations, 2010 CFR
2010-01-01
...); or (d) To a recipient who has provided GAO with advance adequate written assurance that the record will be used solely as a statistical research or reporting record, and the record is to be transferred...
Gene–Physical Activity Interactions: Overview of Human Studies
Rankinen, Tuomo; Bouchard, Claude
2009-01-01
Physical activity level is an important component of the total daily energy expenditure and as such contributes to body weight regulation. A body of data indicates that the level of physical activity plays a role in the risk of excessive weight gain, in weight loss programs, and particularly in the prevention of weight regain. Most studies dealing with potential gene–physical activity interaction effects use an exercise and fitness or performance paradigm as opposed to an obesity-driven model. From these studies, it is clear that there are considerable individual differences in the response to an exercise regimen and that there is a substantial familial aggregation component to the observed heterogeneity. Few studies have focused on the role of specific genes in accounting for the highly prevalent gene–exercise interaction effects. Results for specific genes have been inconsistent with few exceptions. Progress is likely to come when studies will be designed to truly address gene–exercise or physical activity interaction issues and with sample sizes that will provide adequate statistical power. PMID:19037212
[Patient's individuality and application of guidelines in surgery].
Schulte, Michael
2005-01-01
Individual treatment decisions can become considerably conflictual in view of the co-existence of medical professional guidelines, recommendations based on evidence-based medicine (EBM), and juridical and economical directions. Medical guidelines are not subject to an external review process; also, due to reduced practicability, the surgeons' compliance with guidelines remains relatively low. Surgical treatment strategies can rely on randomized clinical trials (RCTs) in approximately 20% of the surgical procedures and on non-randomized trials in approximately 70% of the cases. No evidence is given in approximately 10% of the cases. Specific problems of implementation of EBM in surgical disciplines are represented by the difficulty of standardized procedures, the heterogeneity of the population, the impossibility to conduct double-blinded RCTs, a low statistical power, and a publication bias. Since individual diseases cannot be reduced to surgical cases manageable only by the application of guidelines, adequate treatment of individual patients requires the critical application of both external evidence and surgeon expertise (internal evidence).
Becker, Manuel; Klauer, Karl Christoph; Spruyt, Adriaan
2016-02-01
In a series of articles, Spruyt and colleagues have developed the Feature-Specific Attention Allocation framework, stating that the semantic analysis of task-irrelevant stimuli is critically dependent upon dimension-specific attention allocation. In an adversarial collaboration, we replicate one experiment supporting this theory (Spruyt, de Houwer, & Hermans, 2009; Exp. 3), in which semantic priming effects in the pronunciation task were found to be restricted to stimulus dimensions that were task-relevant on induction trials. Two pilot studies showed the capability of our laboratory to detect priming effects in the pronunciation task, but also suggested that the original effect may be difficult to replicate. In this study, we tried to replicate the original experiment while ensuring adequate statistical power. Results show little evidence for dimension-specific priming effects. The present results provide further insight into the malleability of early semantic encoding processes, but also show the need for further research on this topic.
Meta-STEPP: subpopulation treatment effect pattern plot for individual patient data meta-analysis.
Wang, Xin Victoria; Cole, Bernard; Bonetti, Marco; Gelber, Richard D
2016-09-20
We have developed a method, called Meta-STEPP (subpopulation treatment effect pattern plot for meta-analysis), to explore treatment effect heterogeneity across covariate values in the meta-analysis setting for time-to-event data when the covariate of interest is continuous. Meta-STEPP forms overlapping subpopulations from individual patient data containing similar numbers of events with increasing covariate values, estimates subpopulation treatment effects using standard fixed-effects meta-analysis methodology, displays the estimated subpopulation treatment effect as a function of the covariate values, and provides a statistical test to detect possibly complex treatment-covariate interactions. Simulation studies show that this test has adequate type-I error rate recovery as well as power when reasonable window sizes are chosen. When applied to eight breast cancer trials, Meta-STEPP suggests that chemotherapy is less effective for tumors with high estrogen receptor expression compared with those with low expression. Copyright © 2016 John Wiley & Sons, Ltd.
Patient Electronic Health Records as a Means to Approach Genetic Research in Gastroenterology.
Ananthakrishnan, Ashwin N; Lieberman, David
2015-10-01
Electronic health records (EHRs) are being increasingly utilized and form a unique source of extensive data gathered during routine clinical care. Through use of codified and free text concepts identified using clinical informatics tools, disease labels can be assigned with a high degree of accuracy. Analysis linking such EHR-assigned disease labels to a biospecimen repository has demonstrated that genetic associations identified in prospective cohorts can be replicated with adequate statistical power and novel phenotypic associations identified. In addition, genetic discovery research can be performed utilizing clinical, laboratory, and procedure data obtained during care. Challenges with such research include the need to tackle variability in quality and quantity of EHR data and importance of maintaining patient privacy and data security. With appropriate safeguards, this novel and emerging field of research offers considerable promise and potential to further scientific research in gastroenterology efficiently, cost-effectively, and with engagement of patients and communities. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
Probabilistic risk analysis of building contamination.
Bolster, D T; Tartakovsky, D M
2008-10-01
We present a general framework for probabilistic risk assessment (PRA) of building contamination. PRA provides a powerful tool for the rigorous quantification of risk in contamination of building spaces. A typical PRA starts by identifying relevant components of a system (e.g. ventilation system components, potential sources of contaminants, remediation methods) and proceeds by using available information and statistical inference to estimate the probabilities of their failure. These probabilities are then combined by means of fault-tree analyses to yield probabilistic estimates of the risk of system failure (e.g. building contamination). A sensitivity study of PRAs can identify features and potential problems that need to be addressed with the most urgency. Often PRAs are amenable to approximations, which can significantly simplify the approach. All these features of PRA are presented in this paper via a simple illustrative example, which can be built upon in further studies. The tool presented here can be used to design and maintain adequate ventilation systems to minimize exposure of occupants to contaminants.
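A minimal sketch of the fault-tree combination step described above, with made-up component failure probabilities and a made-up tree structure; independence of the basic events is assumed.

```python
# Toy fault tree: contamination occurs if the source releases AND
# (the filter fails OR the ventilation system fails). Probabilities are hypothetical.
p_source_release = 0.05
p_filter_fail = 0.10
p_ventilation_fail = 0.02

def or_gate(*ps):
    """P(at least one event occurs) for independent events."""
    q = 1.0
    for p in ps:
        q *= (1.0 - p)
    return 1.0 - q

def and_gate(*ps):
    """P(all events occur) for independent events."""
    out = 1.0
    for p in ps:
        out *= p
    return out

p_barrier_fail = or_gate(p_filter_fail, p_ventilation_fail)
p_contamination = and_gate(p_source_release, p_barrier_fail)
print(round(p_contamination, 4))   # ~0.0059 for these inputs
```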
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-03
... at the ACSD's North Plant wastewater treatment plant (WWTP), and to produce electrical power for on... turbine generator manufactured in the United States is of adequate capacity to meet the electrical power..., (2) Ormat Technologies, Inc, in Israel, and (3) Adoratec, in Germany. This is a project specific...
An Ecological Perspective of Power in Transformational Learning: A Case Study of Ethical Vegans.
ERIC Educational Resources Information Center
McDonald, Barbara; Cervero, Ronald M.; Courtenay, Bradley C.
1999-01-01
In-depth interviews with 12 ethical vegans revealed the process of becoming vegetarian. Transformative learning proved to be a journey rather than a one-time decision. Mezirow's transformative theory does not adequately account for the power relations central to this process. Therefore, transformative learning should be viewed more holistically.…
Statistical power analysis of cardiovascular safety pharmacology studies in conscious rats.
Bhatt, Siddhartha; Li, Dingzhou; Flynn, Declan; Wisialowski, Todd; Hemkens, Michelle; Steidl-Nichols, Jill
2016-01-01
Cardiovascular (CV) toxicity and related attrition are a major challenge for novel therapeutic entities and identifying CV liability early is critical for effective derisking. CV safety pharmacology studies in rats are a valuable tool for early investigation of CV risk. Thorough understanding of data analysis techniques and statistical power of these studies is currently lacking and is imperative for enabling sound decision-making. Data from 24 crossover and 12 parallel design CV telemetry rat studies were used for statistical power calculations. Average values of telemetry parameters (heart rate, blood pressure, body temperature, and activity) were logged every 60 s (from 1 h predose to 24 h post-dose) and reduced to 15-min mean values. These data were subsequently binned into super intervals for statistical analysis. A repeated measure analysis of variance was used for statistical analysis of crossover studies and a repeated measure analysis of covariance was used for parallel studies. Statistical power analysis was performed to generate power curves and establish relationships between detectable CV (blood pressure and heart rate) changes and statistical power. Additionally, data from a crossover CV study with phentolamine at 4, 20 and 100 mg/kg are reported as a representative example of data analysis methods. Phentolamine produced a CV profile characteristic of alpha adrenergic receptor antagonism, evidenced by a dose-dependent decrease in blood pressure and reflex tachycardia. Detectable blood pressure changes at 80% statistical power for crossover studies (n=8) were 4-5 mmHg. For parallel studies (n=8), detectable changes at 80% power were 6-7 mmHg. Detectable heart rate changes for both study designs were 20-22 bpm. Based on our results, the conscious rat CV model is a sensitive tool to detect and mitigate CV risk in early safety studies. Furthermore, these results will enable informed selection of appropriate models and study design for early stage CV studies. Copyright © 2016 Elsevier Inc. All rights reserved.
40 CFR 35.713 - Eligible recipients.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Monitoring (section 28) § 35.713 Eligible recipients. (a) The Regional Administrator will treat a Tribe or... an existing government exercising substantial governmental duties and powers; (3) Has adequate...
Solar Power Satellite (SPS) solid-state antenna power combiner
NASA Technical Reports Server (NTRS)
1980-01-01
A low loss power-combining microstrip antenna suitable for solid state solar power satellite (SPS) application was developed. A unique approach for performing both the combining and radiating function in a single cavity-type circuit was verified, representing substantial refinements over previous demonstration models in terms of detailed geometry to obtain good matching and adequate bandwidth at the design frequency. The combiner circuit was designed, built, and tested and the overall results support the view that the solid state power-combining antenna approach is a viable candidate for a solid state SPS antenna building block.
How Many Studies Do You Need? A Primer on Statistical Power for Meta-Analysis
ERIC Educational Resources Information Center
Valentine, Jeffrey C.; Pigott, Therese D.; Rothstein, Hannah R.
2010-01-01
In this article, the authors outline methods for using fixed and random effects power analysis in the context of meta-analysis. Like statistical power analysis for primary studies, power analysis for meta-analysis can be done either prospectively or retrospectively and requires assumptions about parameters that are unknown. The authors provide…
Monitoring Statistics Which Have Increased Power over a Reduced Time Range.
ERIC Educational Resources Information Center
Tang, S. M.; MacNeill, I. B.
1992-01-01
The problem of monitoring trends for changes at unknown times is considered. Statistics that permit one to focus high power on a segment of the monitored period are studied. Numerical procedures are developed to compute the null distribution of these statistics. (Author)
The 1993 Mississippi river flood: A one hundred or a one thousand year event?
Malamud, B.D.; Turcotte, D.L.; Barton, C.C.
1996-01-01
Power-law (fractal) extreme-value statistics are applicable to many natural phenomena under a wide variety of circumstances. Data from a hydrologic station in Keokuk, Iowa, shows the great flood of the Mississippi River in 1993 has a recurrence interval on the order of 100 years using power-law statistics applied to partial-duration flood series and on the order of 1,000 years using a log-Pearson type 3 (LP3) distribution applied to annual series. The LP3 analysis is the federally adopted probability distribution for flood-frequency estimation of extreme events. We suggest that power-law statistics are preferable to LP3 analysis. As a further test of the power-law approach we consider paleoflood data from the Colorado River. We compare power-law and LP3 extrapolations of historical data with these paleofloods. The results are remarkably similar to those obtained for the Mississippi River: Recurrence intervals from power-law statistics applied to Lees Ferry discharge data are generally consistent with inferred 100- and 1,000-year paleofloods, whereas LP3 analysis gives recurrence intervals that are orders of magnitude longer. For both the Keokuk and Lees Ferry gauges, the use of an annual series introduces an artificial curvature in log-log space that leads to an underestimate of severe floods. Power-law statistics are predicting much shorter recurrence intervals than the federally adopted LP3 statistics. We suggest that if power-law behavior is applicable, then the likelihood of severe floods is much higher. More conservative dam designs and land-use restrictions may be required.
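A minimal sketch of a power-law fit to a partial-duration flood series: the empirical recurrence interval of ranked peaks is fit as a straight line in log-log space against discharge and then extrapolated to a target discharge. The synthetic discharges, record length, and target flow below are placeholder values, not the Keokuk or Lees Ferry data.

```python
import numpy as np

# Hypothetical partial-duration series: peak discharges (m^3/s) over a 60-year record
record_years = 60
peaks = np.array([3200, 2900, 2700, 2500, 2400, 2300, 2250, 2200,
                  2150, 2100, 2050, 2000, 1950, 1900, 1850, 1800], dtype=float)
peaks = np.sort(peaks)[::-1]                       # largest flood gets rank 1

# Empirical recurrence interval of each ranked peak: T = record length / rank
ranks = np.arange(1, len(peaks) + 1)
T = record_years / ranks

# Power-law (fractal) model T = C * Q^alpha, i.e. a straight line in log-log space
alpha, logC = np.polyfit(np.log10(peaks), np.log10(T), 1)

def recurrence_interval(q):
    return 10**logC * q**alpha

print(round(recurrence_interval(5000)))            # extrapolated T for a 5000 m^3/s flood
```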
46 CFR 503.61 - Conditions of disclosure.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 13 U.S.C.; (5) To a recipient who has provided the Commission with adequate advance written assurance that the record will be used solely as a statistical research or reporting record, and the record is to...
42 CFR 412.22 - Excluded hospitals and hospital units: General rules.
Code of Federal Regulations, 2010 CFR
2010-10-01
... must meet the governance and control requirements at paragraphs (e)(1)(i) through (e)(1)(iv) of this... allocates costs and maintains adequate statistical data to support the basis of allocation. (G) It reports...
He, Zongxiao; Zhang, Di; Renton, Alan E; Li, Biao; Zhao, Linhai; Wang, Gao T; Goate, Alison M; Mayeux, Richard; Leal, Suzanne M
2017-02-02
Whole-genome and exome sequence data can be cost-effectively generated for the detection of rare-variant (RV) associations in families. Causal variants that aggregate in families usually have larger effect sizes than those found in sporadic cases, so family-based designs can be a more powerful approach than population-based designs. Moreover, some family-based designs are robust to confounding due to population admixture or substructure. We developed a RV extension of the generalized disequilibrium test (GDT) to analyze sequence data obtained from nuclear and extended families. The GDT utilizes genotype differences of all discordant relative pairs to assess associations within a family, and the RV extension combines the single-variant GDT statistic over a genomic region of interest. The RV-GDT has increased power by efficiently incorporating information beyond first-degree relatives and allows for the inclusion of covariates. Using simulated genetic data, we demonstrated that the RV-GDT method has well-controlled type I error rates, even when applied to admixed populations and populations with substructure. It is more powerful than existing family-based RV association methods, particularly for the analysis of extended pedigrees and pedigrees with missing data. We analyzed whole-genome sequence data from families affected by Alzheimer disease to illustrate the application of the RV-GDT. Given the capability of the RV-GDT to adequately control for population admixture or substructure and analyze pedigrees with missing genotype data and its superior power over other family-based methods, it is an effective tool for elucidating the involvement of RVs in the etiology of complex traits. Copyright © 2017 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Radioactivity measurement of radioactive contaminated soil by using a fiber-optic radiation sensor
NASA Astrophysics Data System (ADS)
Joo, Hanyoung; Kim, Rinah; Moon, Joo Hyun
2016-06-01
A fiber-optic radiation sensor (FORS) was developed to measure the gamma radiation from radioactive contaminated soil. The FORS was fabricated using an inorganic scintillator (Lu,Y)2SiO5:Ce (LYSO:Ce), a mixture of epoxy resin and hardener, aluminum foil, and a plastic optical fiber. Before its real application, the FORS was tested to determine if it performed adequately. The test result showed that the measurements by the FORS adequately followed the theoretically estimated values. Then, the FORS was applied to measure the gamma radiation from radioactive contaminated soil. For comparison, a commercial radiation detector was also applied to measure the same soil samples. The measurement data were analyzed by using a statistical parameter, the critical level to determine if net radioactivity statistically different from background was present in the soil sample. The analysis showed that the soil sample had radioactivity distinguishable from background.
Soares, Micaela A R; Andrade, Sandra R; Martins, Rui C; Quina, Margarida J; Quinta-Ferreira, Rosa M
2012-01-01
Composting is one of the technologies recommended for pre-treating industrial eggshells (ES) before its application in soils, for calcium recycling. However, due to the high inorganic content of ES, a mixture of biodegradable materials is required to assure a successful procedure. In this study, an adequate organic blend composition containing potato peel (PP), grass clippings (GC) and wheat straw (WS) was determined by applying the simplex-centroid mixture design method to achieve a desired moisture content, carbon: nitrogen ratio and free air space for effective composting of ES. A blend of 56% PP, 37% GC and 7% WS was selected and tested in a self heating reactor, where 10% (w/w) of ES was incorporated. After 29 days of reactor operation, a dry matter reduction of 46% was achieved and thermophilic temperatures were maintained during 15 days, indicating that the blend selected by statistical approach was adequate for composting of ES.
Analysing Command Challenges Using the Command and Control Framework: Pilot Study Results
2003-02-01
allocation of resources, adequate staff, abuse of power/authority, rank too low, gender, use of power... Advisory Board on Gender Integration and Employment Equity: 2000 Annual Report. Ottawa: Department of National Defence. Adams-Roy, J.E., MacLennan...
HOMER: The Micropower Optimization Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2004-03-01
HOMER, the micropower optimization model, helps users to design micropower systems for off-grid and grid-connected power applications. HOMER models micropower systems with one or more power sources including wind turbines, photovoltaics, biomass power, hydropower, cogeneration, diesel engines, batteries, fuel cells, and electrolyzers. Users can explore a range of design questions such as which technologies are most effective, what size components should be, how project economics are affected by changes in loads or costs, and whether the renewable resource is adequate.
40 CFR 35.693 - Eligible recipients.
Code of Federal Regulations, 2010 CFR
2010-07-01
...(g)) § 35.693 Eligible recipients. (a) The Regional Administrator will treat a Tribe or Intertribal... exercising substantial governmental duties and powers; (3) Has adequate authority to carry out the grant...
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of a statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of 2 P value combining methods for group sequential trials. The emphasis is on time-to-event trials although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests. Copyright © 2017 John Wiley & Sons, Ltd.
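Fisher's method is one standard way to combine P values from several test statistics into a single test; the generic sketch below is not the specific combination rule evaluated in the article, and it assumes independent P values, whereas P values from different statistics computed on the same data are correlated and would need an adjusted reference distribution.

```python
import numpy as np
from scipy import stats

def fisher_combined_p(p_values):
    """Combine P values via Fisher's method: -2 * sum(log p) ~ chi-square with 2k df.
    Valid for independent P values; correlated statistics require an adjusted null
    distribution, which is part of what combination methods for trials must address."""
    p = np.asarray(p_values, dtype=float)
    stat = -2.0 * np.log(p).sum()
    return stats.chi2.sf(stat, df=2 * p.size)

# e.g. hypothetical P values from two different test statistics for the same endpoint
print(round(fisher_combined_p([0.03, 0.20]), 4))
```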
Wicks, J
2000-01-01
The transmission/disequilibrium test (TDT) is a popular, simple, and powerful test of linkage, which can be used to analyze data consisting of transmissions to the affected members of families with any kind of pedigree structure, including affected sib pairs (ASPs). Although it is based on the preferential transmission of a particular marker allele across families, it is not a valid test of association for ASPs. Martin et al. devised a similar statistic for ASPs, Tsp, which is also based on preferential transmission of a marker allele but which is a valid test of both linkage and association for ASPs. It is, however, less powerful than the TDT as a test of linkage for ASPs. What I show is that the differences between the TDT and Tsp are due to the fact that, although both statistics are based on preferential transmission of a marker allele, the TDT also exploits excess sharing in identity-by-descent transmissions to ASPs. Furthermore, I show that both of these statistics are members of a family of "TDT-like" statistics for ASPs. The statistics in this family are based on preferential transmission but also, to varying extents, exploit excess sharing. From this family of statistics, we see that, although the TDT exploits excess sharing to some extent, it is possible to do so to a greater extent, and thus produce a more powerful test of linkage, for ASPs, than is provided by the TDT. Power simulations conducted under a number of disease models are used to verify that the most powerful member of this family of TDT-like statistics is more powerful than the TDT for ASPs. PMID:10788332
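For reference, the classical TDT statistic for a biallelic marker counts transmissions from heterozygous parents to affected offspring; this is the standard textbook form, not the ASP-specific extensions discussed above.

```latex
% Classical TDT: b and c are the numbers of heterozygous parents transmitting
% alleles A and a, respectively, to affected offspring.
\[
  \mathrm{TDT} \;=\; \frac{(b - c)^2}{b + c} \;\sim\; \chi^2_{1}
  \quad \text{under the null hypothesis of no linkage.}
\]
```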
NASA Astrophysics Data System (ADS)
Bourke, Sarah A.; Hermann, Kristian J.; Hendry, M. Jim
2017-11-01
Elevated groundwater salinity associated with produced water, leaching from landfills or secondary salinity can degrade arable soils and potable water resources. Direct-push electrical conductivity (EC) profiling enables rapid, relatively inexpensive, high-resolution in-situ measurements of subsurface salinity, without requiring core collection or installation of groundwater wells. However, because the direct-push tool measures the bulk EC of both solid and liquid phases (ECa), incorporation of ECa data into regional or historical groundwater data sets requires the prediction of pore water EC (ECw) or chloride (Cl-) concentrations from measured ECa. Statistical linear regression and physically based models for predicting ECw and Cl- from ECa profiles were tested on a brine plume in central Saskatchewan, Canada. A linear relationship between ECa/ECw and porosity was more accurate for predicting ECw and Cl- concentrations than a power-law relationship (Archie's Law). Despite clay contents of up to 96%, the addition of terms to account for electrical conductance in the solid phase did not improve model predictions. In the absence of porosity data, statistical linear regression models adequately predicted ECw and Cl- concentrations from direct-push ECa profiles (ECw = 5.48 ECa + 0.78, R² = 0.87; Cl- = 1,978 ECa - 1,398, R² = 0.73). These statistical models can be used to predict ECw in the absence of lithologic data and will be particularly useful for initial site assessments. The more accurate linear physically based model can be used to predict ECw and Cl- as porosity data become available and the site-specific ECw-Cl- relationship is determined.
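Applying the reported statistical regression models is a one-line prediction; the sketch below simply encodes the fitted equations quoted above. The example ECa reading is arbitrary, and the units (EC in dS/m, chloride in mg/L) are assumptions, since the abstract does not state them.

```python
def predict_ecw(eca):
    """Pore-water EC from bulk direct-push EC, using the reported linear fit
    (units assumed dS/m; not stated in the abstract)."""
    return 5.48 * eca + 0.78

def predict_cl(eca):
    """Chloride concentration from bulk EC, using the reported linear fit
    (units assumed mg/L; not stated in the abstract)."""
    return 1978 * eca - 1398

eca = 2.0   # arbitrary example bulk EC reading
print(predict_ecw(eca), predict_cl(eca))
```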
Coronado-Montoya, Stephanie; Levis, Alexander W; Kwakkenbos, Linda; Steele, Russell J; Turner, Erick H; Thombs, Brett D
2016-01-01
A large proportion of mindfulness-based therapy trials report statistically significant results, even in the context of very low statistical power. The objective of the present study was to characterize the reporting of "positive" results in randomized controlled trials of mindfulness-based therapy. We also assessed mindfulness-based therapy trial registrations for indications of possible reporting bias and reviewed recent systematic reviews and meta-analyses to determine whether reporting biases were identified. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS databases were searched for randomized controlled trials of mindfulness-based therapy. The number of positive trials was described and compared to the number that might be expected if mindfulness-based therapy were similarly effective compared to individual therapy for depression. Trial registries were searched for mindfulness-based therapy registrations. CINAHL, Cochrane CENTRAL, EMBASE, ISI, MEDLINE, PsycInfo, and SCOPUS were also searched for mindfulness-based therapy systematic reviews and meta-analyses. 108 (87%) of 124 published trials reported ≥1 positive outcome in the abstract, and 109 (88%) concluded that mindfulness-based therapy was effective, 1.6 times greater than the expected number of positive trials based on effect size d = 0.55 (expected number positive trials = 65.7). Of 21 trial registrations, 13 (62%) remained unpublished 30 months post-trial completion. No trial registrations adequately specified a single primary outcome measure with time of assessment. None of 36 systematic reviews and meta-analyses concluded that effect estimates were overestimated due to reporting biases. The proportion of mindfulness-based therapy trials with statistically significant results may overstate what would occur in practice.
Twenty First Century Cyberbullying Defined: An Analysis of Intent, Repetition and Emotional Response
ERIC Educational Resources Information Center
Walker, Carol Marie
2012-01-01
The purpose of this study was to analyze the extent and impact that cyberbullying has on the undergraduate college student and provide a current definition for the event. A priori power analysis guided this research to provide an 80 percent probability of detecting a real effect with medium effect size. Adequate research power was essential to…
NASA Astrophysics Data System (ADS)
Rimov, A. A.; Chukanova, T. I.; Trofimov, Yu. V.
2016-12-01
Approaches to comparative quality analysis (benchmarking) of power installations in the power industry are systematized. It is shown that the most efficient implementation of the benchmarking technique is the analysis of statistical distributions of indicators within a homogeneous group of comparable power installations. Building on this approach, a benchmarking technique is developed for revealing available reserves for improving the reliability and heat-efficiency indicators of thermal power plant installations. The technique enables reliable quality comparison of power installations within a homogeneous group of limited size and supports well-founded decisions on improving particular technical characteristics of a given installation. It structures the list of comparison indicators and the internal factors affecting them in accordance with sectoral standards and the price-formation characteristics of the Russian power industry; this structuring ensures traceability of the reasons why internal influencing factors deviate from their specified values. The starting point for detailed analysis of a given installation's lag behind best practice, expressed in monetary terms, is its position on the distribution of the key indicator, a convolution of the comparison indicators. The distribution of the key indicator is simulated by the Monte Carlo method from the actual distributions of the comparison indicators: specific lost profit due to undersupply of electric energy and power, specific cost of losses due to non-optimal repair expenditures, and specific cost of excess fuel-equivalent consumption. Quality-loss indicators are developed to facilitate analysis of the benchmarking results, representing the quality loss of a given installation as the difference between the actual value of the key or comparison indicator and the best quartile of the existing distribution. The uncertainty of the resulting quality-loss values was evaluated by transforming the standard uncertainties of the input values into expanded uncertainties of the output values at a 95% confidence level. The efficiency of the technique is demonstrated by benchmarking the main thermal and mechanical equipment of T-250 extraction power-generating units and thermal power plant installations with a main steam pressure of 130 atm.
ERIC Educational Resources Information Center
Cairney, John; Streiner, David L.
2011-01-01
Although statistics such as kappa and phi are commonly used to assess agreement between tests, in situations where the base rate of a disorder in a population is low or high, these statistics tend to underestimate actual agreement. This can occur even if the tests are good and the classification of subjects is adequate. Relative improvement over…
Engineering analysis of LANDSAT 1 data for Southeast Asian agriculture
NASA Technical Reports Server (NTRS)
Mcnair, A. J.; Heydt, H. L.; Liang, T.; Levine, G. (Principal Investigator)
1976-01-01
The author has identified the following significant results. LANDSAT spatial resolution was estimated to be adequate, but barely so, for the purpose of detailed assessment of rice or site status. This was due to the spatially fine-grained, heterogeneous nature of most rice areas. Use of two spectral bands of digital data (MSS 5 and MSS 6 or 7) appeared to be adequate for site recognition and gross site status assessment. Spectral/temporal signatures were found to be more powerful than spectral signatures alone and virtually essential for most analyses of rice growth and rice sites in the Philippine environment. Two-band, two-date signatures were estimated to be adequate for most purposes, although good results were achieved using one-band two- or four-date signatures. A radiometric resolution of 64 levels in each band was found adequate for the analyses of LANDSAT digital data for site recognition and gross site or rice growth assessment.
Spurious correlations and inference in landscape genetics
Samuel A. Cushman; Erin L. Landguth
2010-01-01
Reliable interpretation of landscape genetic analyses depends on statistical methods that have high power to identify the correct process driving gene flow while rejecting incorrect alternative hypotheses. Little is known about statistical power and inference in individual-based landscape genetics. Our objective was to evaluate the power of causal modelling with partial...
el Galta, Rachid; Uitte de Willige, Shirley; de Visser, Marieke C H; Helmer, Quinta; Hsu, Li; Houwing-Duistermaat, Jeanine J
2007-09-24
In this paper, we propose a one degree of freedom test for association between a candidate gene and a binary trait. This method is a generalization of Terwilliger's likelihood ratio statistic and is especially powerful for the situation of one associated haplotype. As an alternative to the likelihood ratio statistic, we derive a score statistic, which has a tractable expression. For haplotype analysis, we assume that phase is known. By means of a simulation study, we compare the performance of the score statistic to Pearson's chi-square statistic and the likelihood ratio statistic proposed by Terwilliger. We illustrate the method on three candidate genes studied in the Leiden Thrombophilia Study. We conclude that the statistic follows a chi-square distribution under the null hypothesis and that the score statistic is more powerful than Terwilliger's likelihood ratio statistic when the associated haplotype has frequency between 0.1 and 0.4 and has a small impact on the studied disorder. With regard to Pearson's chi-square statistic, the score statistic has more power when the associated haplotype has frequency above 0.2 and the number of variants is above five.
A Production System Version of the Hearsay-II Speech Understanding System
1978-04-01
synchronization mechanisms turn out to be adequate with a full complement of KSs. Finally, HSP is found to aid solution of the Small Address Problem, as it...much greater than that if HSP's less powerful synchronization mechanisms turn out to be adequate with a full complement of KSs. Finally, HSP is...earlier version of HSII which had a limited set of KSs. With a richer set of KSs and/or a reduction (or elimination) of synchronization overheads, the
Colorado Springs dedicates zero-discharge coal plant. [Ray D. Nixon plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hennessy, M.; Zeien, C.T.
1980-12-01
The zero-discharge Ray D. Nixon coal-fired power plant was designed to treat and recycle effluents in a region with limited water supplies. The site purchase included groundwater rights and some diversion rights, but a properly-managed local aquifer was determined to be adequate. The closed-loop design recovers 95 percent of the water for reuse. The overall water-management system produces adequate water and treats effluents at less cost and with higher water-quality protection than alternate systems. (DCK)
NASA Technical Reports Server (NTRS)
1977-01-01
A slotted waveguide planar array was established as the baseline design for the spaceborne transmitter antenna. Key aspects of efficient energy conversion at both ends of the power transfer link were analyzed and optimized; alternate approaches in the areas of antenna and tube design are discussed. An integrated design concept was developed which meets design requirements, observes structural and thermal constraints, exhibits good performance and was developed in adequate depth to permit cost estimating at the subsystem/component level.
Dilworth, R.H.; Borkowski, C.J.
1961-12-26
A transistorized, fountain pen type radiation monitor to be worn on the person is described. Radiation produces both light flashes in a small bulb and an audible warning tone, the frequency of both the tone and light flashes being proportional to radiation intensity. The device is powered by a battery and a blocking oscillator step-up power supply. The oscillator frequency is regulated to be proportional to the radiation intensity, to provide adequate power in high radiation fields, yet minimize battery drain at low operating intensities. (AEC)
14 CFR 25.1143 - Engine controls.
Code of Federal Regulations, 2014 CFR
2014-01-01
... means of controlling its engine. (d) For each fluid injection (other than fuel) system and its controls... injection fluid is adequately controlled. (e) If a power or thrust control incorporates a fuel shutoff...
14 CFR 25.1143 - Engine controls.
Code of Federal Regulations, 2013 CFR
2013-01-01
... means of controlling its engine. (d) For each fluid injection (other than fuel) system and its controls... injection fluid is adequately controlled. (e) If a power or thrust control incorporates a fuel shutoff...
Liem, Franziskus; Mérillat, Susan; Bezzola, Ladina; Hirsiger, Sarah; Philipp, Michel; Madhyastha, Tara; Jäncke, Lutz
2015-03-01
FreeSurfer is a tool to quantify cortical and subcortical brain anatomy automatically and noninvasively. Previous studies have reported reliability and statistical power analyses in relatively small samples or only selected one aspect of brain anatomy. Here, we investigated reliability and statistical power of cortical thickness, surface area, volume, and the volume of subcortical structures in a large sample (N=189) of healthy elderly subjects (64+ years). Reliability (intraclass correlation coefficient) of cortical and subcortical parameters is generally high (cortical: ICCs>0.87, subcortical: ICCs>0.95). Surface-based smoothing increases reliability of cortical thickness maps, while it decreases reliability of cortical surface area and volume. Nevertheless, statistical power of all measures benefits from smoothing. When aiming to detect a 10% difference between groups, the number of subjects required to test effects with sufficient power over the entire cortex varies between cortical measures (cortical thickness: N=39, surface area: N=21, volume: N=81; 10mm smoothing, power=0.8, α=0.05). For subcortical regions this number is between 16 and 76 subjects, depending on the region. We also demonstrate the advantage of within-subject designs over between-subject designs. Furthermore, we publicly provide a tool that allows researchers to perform a priori power analysis and sensitivity analysis to help evaluate previously published studies and to design future studies with sufficient statistical power. Copyright © 2014 Elsevier Inc. All rights reserved.
Implantable radio frequency identification sensors: wireless power and communication.
Hutchens, Chriswell; Rennaker, Robert L; Venkataraman, Srinivasan; Ahmed, Rehan; Liao, Ran; Ibrahim, Tamer
2011-01-01
There are significant technical challenges in the development of a fully implantable wirelessly powered neural interface. Challenges include wireless transmission of sufficient power to the implanted device to ensure reliable operation for decades without replacement, minimizing tissue heating, and adequate reliable communications bandwidth. Overcoming these challenges is essential for the development of implantable closed loop systems for the treatment of disorders ranging from epilepsy, incontinence, stroke and spinal cord injury. We discuss the development of the wireless power, communication and control for a Radio-Frequency Identification Sensor (RFIDS) system with a targeted power range for a 700 mV, 30 to 40 uA load attained at -2 dBm.
Ho, Lindsey A; Lange, Ethan M
2010-12-01
Genome-wide association (GWA) studies are a powerful approach for identifying novel genetic risk factors associated with human disease. A GWA study typically requires the inclusion of thousands of samples to have sufficient statistical power to detect single nucleotide polymorphisms that are associated with only modest increases in risk of disease given the heavy burden of a multiple test correction that is necessary to maintain valid statistical tests. Low statistical power and the high financial cost of performing a GWA study remains prohibitive for many scientific investigators anxious to perform such a study using their own samples. A number of remedies have been suggested to increase statistical power and decrease cost, including the utilization of free publicly available genotype data and multi-stage genotyping designs. Herein, we compare the statistical power and relative costs of alternative association study designs that use cases and screened controls to study designs that are based only on, or additionally include, free public control genotype data. We describe a novel replication-based two-stage study design, which uses free public control genotype data in the first stage and follow-up genotype data on case-matched controls in the second stage that preserves many of the advantages inherent when using only an epidemiologically matched set of controls. Specifically, we show that our proposed two-stage design can substantially increase statistical power and decrease cost of performing a GWA study while controlling the type-I error rate that can be inflated when using public controls due to differences in ancestry and batch genotype effects.
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
Signals consisting of a sequence of pulses show that the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses of the pulse sequence. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting a power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits a power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are enclosed in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
Powerlaw: a Python package for analysis of heavy-tailed distributions.
Alstott, Jeff; Bullmore, Ed; Plenz, Dietmar
2014-01-01
Power laws are theoretically interesting probability distributions that are also frequently used to describe empirical data. In recent years, effective statistical methods for fitting power laws have been developed, but appropriate use of these techniques requires significant programming and statistical insight. In order to greatly decrease the barriers to using good statistical methods for fitting power law distributions, we developed the powerlaw Python package. This software package provides easy commands for basic fitting and statistical analysis of distributions. Notably, it also seeks to support a variety of user needs by being exhaustive in the options available to the user. The source code is publicly available and easily extensible.
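As a brief illustration of the workflow this abstract describes, the sketch below shows typical use of the powerlaw package; the synthetic Pareto sample is a hypothetical stand-in for empirical data, and the lognormal comparison is just one of the alternatives the package supports.

```python
import numpy as np
import powerlaw

# Hypothetical stand-in for an empirical heavy-tailed sample
data = np.random.pareto(2.5, 10000) + 1.0

# Fit a power law; xmin is estimated by minimizing the Kolmogorov-Smirnov
# distance unless supplied explicitly (powerlaw.Fit(data, xmin=...))
fit = powerlaw.Fit(data)
print(fit.power_law.alpha, fit.power_law.xmin)

# Likelihood-ratio comparison of the power law against a lognormal alternative:
# R > 0 favors the power law, p gives the significance of that preference
R, p = fit.distribution_compare('power_law', 'lognormal')
print(R, p)
```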
76 FR 30819 - Special Conditions: Turbomeca Arriel 2D Turboshaft Engine
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-27
... power for search and rescue missions. The applicable airworthiness regulations do not contain adequate... rating. This rating was requested by the applicant to support rotorcraft search and rescue missions that...
Stochastic Drought Risk Analysis and Projection Methods For Thermoelectric Power Systems
NASA Astrophysics Data System (ADS)
Bekera, Behailu Belamo
Combined effects of socio-economic, environmental, technological and political factors impact fresh cooling water availability, which is among the most important elements of thermoelectric power plant site selection and evaluation criteria. With increased variability and changes in hydrologic statistical stationarity, one concern is the increased occurrence of extreme drought events that may be attributable to climatic changes. As hydrological systems are altered, operators of thermoelectric power plants need to ensure a reliable supply of water for cooling and generation requirements. The effects of climate change are expected to influence hydrological systems at multiple scales, possibly leading to reduced efficiency of thermoelectric power plants. This study models and analyzes drought characteristics from a thermoelectric systems operational and regulation perspective. A systematic approach to characterize a stream environment in relation to extreme drought occurrence, duration and deficit-volume is proposed and demonstrated. More specifically, the objective of this research is to propose stochastic water supply risk analysis and projection methods from a thermoelectric power systems operation and management perspective. The study defines thermoelectric drought as a shortage of cooling water, due to stressed supply or water temperatures beyond operable limits, for an extended period of time, requiring power plants to reduce production or completely shut down. It presents a thermoelectric drought risk characterization framework that considers heat content and water quantity facets of adequate water availability for uninterrupted operation of such plants and the safety of their surroundings. In addition, it outlines mechanisms to identify the rate of occurrence of the said droughts and stochastically quantify subsequent potential losses to the sector. This mechanism is enabled through a model based on a compound Nonhomogeneous Poisson Process. This study also demonstrates how the systematic approach can be used for better understanding of pertinent vulnerabilities by providing risk-based information to stakeholders in the power sector. Vulnerabilities, as well as our understanding of their extent and likelihood, change over time. Keeping up with the changes and making informed decisions demands a time-dependent method that incorporates new evidence into the risk assessment framework. This study presents a statistical time-dependent risk analysis approach, which allows for life cycle drought risk assessment of thermoelectric power systems. Also, a Bayesian Belief Network (BBN) extension to the proposed framework is developed. The BBN allows for incorporating new evidence, such as observing power curtailments due to extreme heat or low-flow situations, and updating our knowledge and understanding of the pertinent risk. In sum, the proposed approach can help improve the adaptive capacity of the electric power infrastructure, thereby enhancing its resilience to events potentially threatening grid reliability and economic stability. The proposed drought characterization methodology is applied to daily streamflow series obtained from three United States Geological Survey (USGS) water gauges on the Tennessee River basin. The stochastic water supply risk assessment and projection methods are demonstrated for two power plants on the White River, Indiana: Frank E. Ratts and Petersburg, using water temperature and streamflow time series data obtained from a nearby USGS gauge.
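As a rough illustration of the compound nonhomogeneous Poisson process idea mentioned above, the sketch below simulates drought-type events with a seasonally varying intensity (via thinning) and attaches a random loss to each event; the intensity and loss models are hypothetical choices, not those of the dissertation.

```python
import numpy as np

def simulate_compound_nhpp(rate_fn, rate_max, horizon, loss_sampler, rng):
    """Compound nonhomogeneous Poisson process: events arrive with time-varying
    intensity rate_fn(t) <= rate_max (simulated by thinning a homogeneous process),
    and each event carries a random loss. Returns event times and total loss."""
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate_max)          # candidate arrival
        if t > horizon:
            break
        if rng.uniform() < rate_fn(t) / rate_max:     # accept with prob rate(t)/rate_max
            events.append(t)
    losses = [loss_sampler(rng) for _ in events]
    return np.array(events), float(np.sum(losses))

rng = np.random.default_rng(4)
rate = lambda t: 0.2 + 0.15 * np.sin(2 * np.pi * t)    # hypothetical seasonal intensity (events/yr)
loss = lambda rng: rng.lognormal(mean=1.0, sigma=0.8)  # hypothetical curtailment loss per event
times, total = simulate_compound_nhpp(rate, 0.35, horizon=30.0, loss_sampler=loss, rng=rng)
print(len(times), round(total, 2))
```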
Statistical power as a function of Cronbach alpha of instrument questionnaire items.
Heo, Moonseong; Kim, Namhee; Faith, Myles S
2015-10-14
In countless clinical trials, measurement of outcomes relies on instrument questionnaire items, which often suffer from measurement error problems that in turn affect the statistical power of study designs. The Cronbach alpha or coefficient alpha, here denoted by C(α), can be used as a measure of internal consistency of parallel instrument items that are developed to measure a target unidimensional outcome construct. The scale score for the target construct is often represented by the sum of the item scores. However, power functions based on C(α) have been lacking for various study designs. We formulate a statistical model for parallel items to derive power functions as a function of C(α) under several study designs. To this end, we assume a fixed true score variance, as opposed to the usual fixed total variance assumption. That assumption is critical and practically relevant for showing that smaller measurement errors are associated with higher inter-item correlations, and thus that greater C(α) is associated with greater statistical power. We compare the derived theoretical statistical power with empirical power obtained through Monte Carlo simulations for the following comparisons: one-sample comparison of pre- and post-treatment mean differences, two-sample comparison of pre-post mean differences between groups, and two-sample comparison of mean differences between groups. It is shown that C(α) is the same as a test-retest correlation of the scale scores of parallel items, which enables testing the significance of C(α). Closed-form power functions and sample size determination formulas are derived in terms of C(α) for all of the aforementioned comparisons. Power functions are shown to be an increasing function of C(α), regardless of the comparison of interest. The derived power functions are well validated by simulation studies showing that the magnitudes of theoretical power are virtually identical to those of the empirical power. Regardless of research designs or settings, in order to increase statistical power, development and use of instruments with greater C(α), or equivalently with greater inter-item correlations, is crucial for trials that intend to use questionnaire items for measuring research outcomes. Further development of the power functions for binary or ordinal item scores and under more general item correlation structures reflecting more real-world situations would be a valuable future study.
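The qualitative conclusion (power increases with C(α)) can be illustrated with a simple normal-approximation sketch for the two-sample comparison; the attenuation-by-sqrt(alpha) simplification below is an assumption for illustration and is not the paper's exact closed-form power function.

```python
import numpy as np
from scipy.stats import norm

def power_two_sample(d_true, cronbach_alpha, n_per_group, sig_level=0.05):
    """Approximate power of a two-sample z-test on a scale score whose internal
    consistency is cronbach_alpha. Simplifying assumption (illustrative only):
    measurement error attenuates the standardized effect by sqrt(cronbach_alpha)."""
    d_obs = d_true * np.sqrt(cronbach_alpha)       # attenuated effect size
    ncp = d_obs * np.sqrt(n_per_group / 2.0)       # noncentrality of the z statistic
    z_crit = norm.ppf(1 - sig_level / 2)
    return 1 - norm.cdf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# Power rises with Cronbach's alpha for a fixed true effect and sample size
for a in (0.6, 0.7, 0.8, 0.9):
    print(a, round(power_two_sample(d_true=0.5, cronbach_alpha=a, n_per_group=50), 3))
```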
Split-mouth design in Paediatric Dentistry clinical trials.
Pozos-Guillén, A; Chavarría-Bolaños, D; Garrocho-Rangel, A
2017-03-01
The aim of this article was to describe the essential concepts of the split-mouth design, its underlying assumptions, advantages, limitations, statistical considerations, and possible applications in Paediatric Dentistry clinical investigation. In Paediatric Dentistry clinical investigation, and as part of randomised controlled trials, the split-mouth design is commonly used. The design is characterised by subdividing the child's dentition into halves (right and left), where two different treatment modalities are assigned to one side randomly, in order to allow further outcome evaluation. Each participant acts as their own control by making within-patient rather than between-patient comparisons, thus diminishing inter-subject variability and increasing study accuracy and power. However, the main problem with this design comprises the potential contamination of the treatment effect from one side to the other, or the "carry-across effect"; likewise, this design is not indicated when the oral disease to be treated is not symmetrically distributed (e.g. severity) in the mouth of children. Thus, in spite of its advantages, the split-mouth design can only be applied in a limited number of strictly selected cases. In order to obtain valid and reliable data from split-mouth design studies, it is necessary to evaluate the risk of the carry-across effect as well as to carefully analyse and select adequate inclusion criteria, sample-size calculation and method of statistical analysis.
Electrical Prototype Power Processor for the 30-cm Mercury electric propulsion engine
NASA Technical Reports Server (NTRS)
Biess, J. J.; Frye, R. J.
1978-01-01
An Electrical Prototype Power Processor has been designed to the latest electrical and performance requirements for a flight-type 30-cm ion engine and includes all the necessary power, command, telemetry and control interfaces for a typical electric propulsion subsystem. The power processor was configured into seven separate mechanical modules that would allow subassembly fabrication, test and integration into a complete power processor unit assembly. The conceptual mechanical packaging of the electrical prototype power processor unit demonstrated the relative location of power, high voltage and control electronic components to minimize electrical interactions and to provide adequate thermal control in a vacuum environment. Thermal control was accomplished with a heat pipe simulator attached to the base of the modules.
The power and robustness of maximum LOD score statistics.
Yoo, Y J; Mendell, N R
2008-07-01
The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
Power Enhancement in High Dimensional Cross-Sectional Tests
Fan, Jianqing; Liao, Yuan; Yao, Jiawei
2016-01-01
We propose a novel technique to boost the power of testing the high-dimensional hypothesis H0 : θ = 0 against sparse alternatives where the null hypothesis is violated only by a couple of components. Existing tests based on quadratic forms such as the Wald statistic often suffer from low power due to the accumulation of errors in estimating high-dimensional parameters. More powerful tests for sparse alternatives such as thresholding and extreme-value tests, on the other hand, require either stringent conditions or a bootstrap to derive the null distribution and often suffer from size distortions due to the slow convergence. Based on a screening technique, we introduce a “power enhancement component”, which is zero under the null hypothesis with high probability, but diverges quickly under sparse alternatives. The proposed test statistic combines the power enhancement component with an asymptotically pivotal statistic, and strengthens the power under sparse alternatives. The null distribution does not require stringent regularity conditions, and is completely determined by that of the pivotal statistic. As specific applications, the proposed methods are applied to testing factor pricing models and validating the cross-sectional independence in panel data models. PMID:26778846
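The construction can be sketched as follows; the screening threshold and the standardized quadratic-form statistic below are illustrative choices in the spirit of the abstract, not the paper's exact definitions.

```python
import numpy as np

def power_enhanced_stat(theta_hat, se, n):
    """Sketch of a power-enhancement construction: J0 is a screening component that
    is zero with high probability under the null but diverges under sparse
    alternatives; J1 is a conventional standardized quadratic-form statistic."""
    p = len(theta_hat)
    t = theta_hat / se                                       # standardized components
    delta = np.sqrt(2.0 * np.log(p) * np.log(np.log(n)))     # hypothetical screening threshold
    screened = np.abs(t) > delta
    J0 = np.sqrt(p) * np.sum(t[screened] ** 2)               # power enhancement component
    J1 = (np.sum(t ** 2) - p) / np.sqrt(2.0 * p)             # Wald-type quadratic form, standardized
    return J0 + J1

# Sparse alternative: two strong signals among 500 components
rng = np.random.default_rng(0)
theta_hat = rng.normal(0.0, 0.1, 500)
theta_hat[:2] = 1.0
print(power_enhanced_stat(theta_hat, se=np.full(500, 0.1), n=200))
```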
Powered lower limb orthoses for gait rehabilitation
Ferris, Daniel P.; Sawicki, Gregory S.; Domingo, Antoinette
2006-01-01
Bodyweight supported treadmill training has become a prominent gait rehabilitation method in leading rehabilitation centers. This type of locomotor training has many functional benefits, but the labor costs are considerable. To reduce therapist effort, several groups have developed large robotic devices for assisting treadmill stepping. A complementary approach that has not been adequately explored is to use powered lower limb orthoses for locomotor training. Recent advances in robotic technology have made lightweight powered orthoses feasible and practical. An advantage of using powered orthoses as rehabilitation aids is that they allow practice in starting, turning, stopping, and avoiding obstacles during overground walking. PMID:16568153
"Non-cold" dark matter at small scales: a general approach
NASA Astrophysics Data System (ADS)
Murgia, R.; Merle, A.; Viel, M.; Totzauer, M.; Schneider, A.
2017-11-01
Structure formation at small cosmological scales provides an important frontier for dark matter (DM) research. Scenarios with small DM particle masses, large momenta or hidden interactions tend to suppress the gravitational clustering at small scales. The details of this suppression depend on the DM particle nature, allowing for a direct link between DM models and astrophysical observations. However, most of the astrophysical constraints obtained so far refer to a very specific shape of the power suppression, corresponding to thermal warm dark matter (WDM), i.e., candidates with a Fermi-Dirac or Bose-Einstein momentum distribution. In this work we introduce a new analytical fitting formula for the power spectrum, which is simple yet flexible enough to reproduce the clustering signal of large classes of non-thermal DM models, which are not at all adequately described by the oversimplified notion of WDM. We show that the formula is able to fully cover the parameter space of sterile neutrinos (whether resonantly produced or from particle decay), mixed cold and warm models, fuzzy dark matter, as well as other models suggested by the effective theory of structure formation (ETHOS). Based on this fitting formula, we perform a large suite of N-body simulations and we extract important nonlinear statistics, such as the matter power spectrum and the halo mass function. Finally, we present the first preliminary astrophysical constraints, based on linear theory, from both the number of Milky Way satellites and the Lyman-α forest. This paper is a first step towards a general and comprehensive modeling of small-scale departures from the standard cold DM model.
Zhao, Ni; Chen, Jun; Carroll, Ian M.; Ringel-Kulka, Tamar; Epstein, Michael P.; Zhou, Hua; Zhou, Jin J.; Ringel, Yehuda; Li, Hongzhe; Wu, Michael C.
2015-01-01
High-throughput sequencing technology has enabled population-based studies of the role of the human microbiome in disease etiology and exposure response. Distance-based analysis is a popular strategy for evaluating the overall association between microbiome diversity and outcome, wherein the phylogenetic distance between individuals’ microbiome profiles is computed and tested for association via permutation. Despite their practical popularity, distance-based approaches suffer from important challenges, especially in selecting the best distance and extending the methods to alternative outcomes, such as survival outcomes. We propose the microbiome regression-based kernel association test (MiRKAT), which directly regresses the outcome on the microbiome profiles via the semi-parametric kernel machine regression framework. MiRKAT allows for easy covariate adjustment and extension to alternative outcomes while non-parametrically modeling the microbiome through a kernel that incorporates phylogenetic distance. It uses a variance-component score statistic to test for the association with analytical p value calculation. The model also allows simultaneous examination of multiple distances, alleviating the problem of choosing the best distance. Our simulations demonstrated that MiRKAT provides correctly controlled type I error and adequate power in detecting overall association. “Optimal” MiRKAT, which considers multiple candidate distances, is robust in that it suffers from little power loss in comparison to when the best distance is used and can achieve tremendous power gain in comparison to when a poor distance is chosen. Finally, we applied MiRKAT to real microbiome datasets to show that microbial communities are associated with smoking and with fecal protease levels after confounders are controlled for. PMID:25957468
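A stripped-down sketch of the variance-component score idea is given below: the outcome is regressed on covariates, and Q = r'Kr is computed from the residuals and a kernel built from a distance matrix. The permutation p-value replaces MiRKAT's analytical calculation, and all data in the example are synthetic.

```python
import numpy as np

def kernel_score_test(y, K, covariates=None, n_perm=999, seed=0):
    """Kernel-machine score-type test (illustrative): Q = r' K r, where r are
    residuals of y after covariate adjustment and K is a kernel matrix.
    P-value by permutation rather than the analytical approach of MiRKAT."""
    rng = np.random.default_rng(seed)
    y = np.asarray(y, float)
    cols = [np.ones_like(y)] + ([np.asarray(covariates, float)] if covariates is not None else [])
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    q_obs = r @ K @ r
    q_perm = np.empty(n_perm)
    for b in range(n_perm):
        rp = rng.permutation(r)
        q_perm[b] = rp @ K @ rp
    return (1 + np.sum(q_perm >= q_obs)) / (n_perm + 1)

# Synthetic example: kernel from a Euclidean distance matrix via Gower centering
rng = np.random.default_rng(5)
Z = rng.normal(size=(40, 5))                     # stand-in for microbiome profiles
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=2)
J = np.eye(40) - np.ones((40, 40)) / 40
K = -0.5 * J @ (D ** 2) @ J
y = Z[:, 0] + rng.normal(scale=1.0, size=40)     # outcome associated with the profiles
print(kernel_score_test(y, K))
```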
Coman, Emil N; Iordache, Eugen; Dierker, Lisa; Fifield, Judith; Schensul, Jean J; Suggs, Suzanne; Barbour, Russell
2014-05-01
The advantages of modeling the unreliability of outcomes when evaluating the comparative effectiveness of health interventions are illustrated. Adding an action-research intervention component to a regular summer job program for youth was expected to help in preventing risk behaviors. A series of simple two-group alternative structural equation models are compared to test the effect of the intervention on one key attitudinal outcome in terms of model fit and statistical power with Monte Carlo simulations. Some models presuming parameters equal across the intervention and comparison groups were underpowered to detect the intervention effect, yet modeling the unreliability of the outcome measure increased their statistical power and helped in the detection of the hypothesized effect. Comparative Effectiveness Research (CER) could benefit from flexible multi-group alternative structural models organized in decision trees, and modeling unreliability of measures can be of tremendous help for both the fit of statistical models to the data and their statistical power.
Wagner, Tyler; Irwin, Brian J.; James R. Bence,; Daniel B. Hayes,
2016-01-01
Monitoring to detect temporal trends in biological and habitat indices is a critical component of fisheries management. Thus, it is important that management objectives are linked to monitoring objectives. This linkage requires a definition of what constitutes a management-relevant “temporal trend.” It is also important to develop expectations for the amount of time required to detect a trend (i.e., statistical power) and for choosing an appropriate statistical model for analysis. We provide an overview of temporal trends commonly encountered in fisheries management, review published studies that evaluated statistical power of long-term trend detection, and illustrate dynamic linear models in a Bayesian context, as an additional analytical approach focused on shorter term change. We show that monitoring programs generally have low statistical power for detecting linear temporal trends and argue that often management should be focused on different definitions of trends, some of which can be better addressed by alternative analytical approaches.
Statistical Analysis of Large-Scale Structure of Universe
NASA Astrophysics Data System (ADS)
Tugay, A. V.
While galaxy cluster catalogs were compiled many decades ago, other structural elements of the cosmic web have been detected at a definite level only in the newest works. For example, extragalactic filaments have been described by the velocity field and the SDSS galaxy distribution in recent years. The large-scale structure of the Universe could also be mapped in the future using ATHENA observations in X-rays and SKA in the radio band. Until detailed observations are available for most of the volume of the Universe, some integral statistical parameters can be used for its description. Methods such as the galaxy correlation function, power spectrum, statistical moments and peak statistics are commonly used with this aim. The parameters of the power spectrum and other statistics are important for constraining models of dark matter, dark energy, inflation and brane cosmology. In the present work we describe the growth of large-scale density fluctuations in the one- and three-dimensional cases with Fourier harmonics of hydrodynamical parameters. As a result we obtain a power-law relation for the matter power spectrum.
NASA Technical Reports Server (NTRS)
Klebesadel, R. W.; Fenimore, E. E.; Laros, J.
1983-01-01
The log N-log S data acquired by the Pioneer Venus Orbiter Gamma Burst Detector (PVO) are presented and compared to similar data from the Soviet KONUS experiment. Although the PVO data are consistent with and suggestive of a -3/2 power law distribution, the results are not adequate at this stage of observations to differentiate between a -3/2 and a -1 power law slope.
Solar- and wind-powered irrigation systems
NASA Astrophysics Data System (ADS)
Enochian, R. V.
1982-02-01
Five different direct solar and wind energy systems are technically feasible for powering irrigation pumps. However, with projected rates of fossil fuel costs, only two may produce significant unsubsidized energy for irrigation pumping before the turn of the century. These are photovoltaic systems with nonconcentrating collectors (provided that projected costs of manufacturing solar cells prove correct), and wind systems, especially in remote areas where adequate wind is available.
Statistics of multiply scattered broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2003-07-25
We describe the first measurements of the diffusion of broadband single-cycle optical pulses through a highly scattering medium. Using terahertz time-domain spectroscopy, we measure the electric field of a multiply scattered wave with a time resolution shorter than one optical cycle. This time-domain measurement provides information on the statistics of both the amplitude and phase distributions of the diffusive wave. We develop a theoretical description, suitable for broadband radiation, which adequately describes the experimental results.
1980-12-01
career retention rates, and to predict future career retention rates in the Navy. The statistical model utilizes economic variables as predictors... The model developed has a high correlation with Navy career retention rates. The problem of Navy career retention has not been adequately studied... findings indicate Navy policymakers must be cognizant of the relationships of economic factors to Navy career retention rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bao, Rong; Li, Yongdong; Liu, Chunliang
2016-07-15
The output power fluctuations caused by the weights of macro particles used in particle-in-cell (PIC) simulations of a backward wave oscillator and a travelling wave tube are statistically analyzed. It is found that the velocities of electrons that have passed a specific slow-wave structure form a specific electron velocity distribution. The electron velocity distribution obtained in a PIC simulation with a relatively small weight of macro particles is considered as an initial distribution. By analyzing this initial distribution with a statistical method, estimations of the output power fluctuations caused by different weights of macro particles are obtained. The statistical method is verified by comparing the estimations with the simulation results. The fluctuations become stronger with increasing weight of macro particles, which can also be determined reversely from estimations of the output power fluctuations. With the weights of macro particles optimized by the statistical method, the output power fluctuations in PIC simulations are relatively small and acceptable.
Coston, Bethany M
2017-08-01
While just over one in three heterosexual women will experience intimate partner violence (IPV) in her lifetime, 61% of bisexual women and 78% of non-monosexual women will. Combining previous research and theories on power, social resources, binegativity, and gender-based violence, this article analyzes the role of power and inequality in non-monosexual women's IPV victimization. Using data from the National Intimate Partner and Sexual Violence Survey, this article first examines rates of IPV victimization for statistically significant differences between monosexual (e.g., only have dating, romantic, and sexual partners of one sex/gender) and non-monosexual (e.g., have dating, romantic, and sexual partners of multiple sexes/genders) women in the United States and, second, introduces theoretically important variables to logistic regression analyses to determine the correlates of IPV victimization among non-monosexual women (age, race ethnicity, income, education, immigration status, and indigeneity; partner gender; sexual identity). Findings indicate that non-monosexual women are more likely to experience sexual, emotional, and psychological/control violence, and intimate stalking, but have an equivalent risk of experiencing physical violence. Moreover, having an abusive partner who is a man, having a lot of relative social power, and self-identifying as "bisexual" are all significant factors in violence victimization. Importantly, this is the first study using nationally representative data that confirms non-monosexual women are particularly at risk for sexual identity-based violence at the hands of their male/man partners, suggesting binegativity and biphobia may indeed be linked to hegemonic masculinity. Suggestions for moving research forward include improving data collection efforts such that we can disentangle gender from sex and individual aggregate power from relationship inequalities, as well as more adequately account for the timing of sexual identity disclosures within relationships, relative to the timing of violent episodes.
Effectiveness of touch and feel (TAF) technique on first aid measures for visually challenged.
Mary, Helen; Sasikalaz, D; Venkatesan, Latha
2013-01-01
There is a common perception that a blind person cannot even help himself. In order to challenge that view, a workshop for visually-impaired people to develop the skills to be independent and productive members of society was conceived. An experimental study was conducted at the National Institute of Visually Handicapped, Chennai, with the objective of assessing the effectiveness of the Touch and Feel (TAF) technique on first aid measures for the visually challenged. A total of 25 visually challenged people were selected by a non-probability purposive sampling technique, and data were collected using demographic variables and a structured knowledge questionnaire. The score obtained was categorised into three levels: inadequate (0-8), moderately adequate (8-17), adequate (17-25). The study revealed that in the pre-test 40% of the visually challenged had inadequate knowledge, 56% had moderately adequate knowledge, and only a few (4%) had adequate knowledge, whereas most (68%) of them had adequate knowledge in the post-test, which is statistically significant at p < 0.000 with a t-value of 6.779. This proves that the TAF technique was effective for the visually challenged. There was no association between the demographic variables and their level of knowledge regarding first aid.
Jeffrey P. Prestemon
2009-01-01
Timber product markets are subject to large shocks deriving from natural disturbances and policy shifts. Statistical modeling of shocks is often done to assess their economic importance. In this article, I simulate the statistical power of univariate and bivariate methods of shock detection using time series intervention models. Simulations show that bivariate methods...
Properties of different selection signature statistics and a new strategy for combining them.
Ma, Y; Ding, X; Qanbari, S; Weigend, S; Zhang, Q; Simianer, H
2015-11-01
Identifying signatures of recent or ongoing selection is of high relevance in livestock population genomics. From a statistical perspective, determining a proper testing procedure and combining various test statistics is challenging. On the basis of extensive simulations in this study, we discuss the statistical properties of eight different established selection signature statistics. In the considered scenario, we show that a reasonable power to detect selection signatures is achieved with high marker density (>1 SNP/kb) as obtained from sequencing, while rather small sample sizes (~15 diploid individuals) appear to be sufficient. Most selection signature statistics, such as the composite likelihood ratio and cross-population extended haplotype homozygosity, have the highest power when fixation of the selected allele is reached, while the integrated haplotype score has the highest power when selection is ongoing. We suggest a novel strategy, called de-correlated composite of multiple signals (DCMS), to combine different statistics for detecting selection signatures while accounting for the correlation between the different selection signature statistics. When examined with simulated data, DCMS consistently has a higher power than most of the single statistics and shows a reliable positional resolution. We illustrate the new statistic on the established selective sweep around the lactase gene in human HapMap data, providing further evidence of the reliability of this new statistic. Then, we apply it to scan for selection signatures in two chicken samples with diverse skin color. Our analysis suggests that a set of well-known genes such as BCO2, MC1R, ASIP and TYR were involved in the divergent selection for this trait.
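One plausible reading of the DCMS construction is sketched below: each statistic is converted to a fractional-rank p-value, -log10-transformed, and down-weighted in proportion to its total correlation with the other statistics. This is an illustrative reading, not a verbatim reimplementation of the published method.

```python
import numpy as np

def dcms(stats_matrix):
    """De-correlated composite of multiple signals, sketched.
    stats_matrix: (n_loci, n_statistics) array where larger values are assumed to
    indicate stronger evidence of selection."""
    n, k = stats_matrix.shape
    ranks = stats_matrix.argsort(axis=0).argsort(axis=0) + 1      # 1..n per statistic
    p = 1.0 - (ranks - 0.5) / n                                   # empirical p-values
    logp = -np.log10(p)
    r = np.corrcoef(stats_matrix, rowvar=False)                   # correlation among statistics
    weights = 1.0 / np.abs(r).sum(axis=0)   # = 1 / (1 + sum of |correlations| with the others)
    return logp @ weights                                         # composite score per locus

rng = np.random.default_rng(1)
stats = rng.normal(size=(1000, 4))   # hypothetical stand-in for, e.g., iHS, XP-EHH, CLR, Fst
print(dcms(stats)[:5])
```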
Kaplan, Volkan; Eroğlu, Cennet Neslihan
2016-10-01
The aim of the present study was to compare the effects of daily single-dose use of flurbiprofen, diclofenac sodium, and tenoxicam on pain, swelling, and trismus that occur after surgical extraction of impacted wisdom teeth using local anesthesia. The present study included 3 groups with 30 patients in each group. Those volunteering to participate in this double-blind randomized study (n = 90) were selected from a patient population with an indication for extraction of impacted wisdom teeth. Group 1 patients received 200 mg flurbiprofen, group 2 patients received 100 mg diclofenac sodium, and group 3 patients received 20 mg tenoxicam. All doses were once a day, starting preoperatively. Pain was evaluated postoperatively at 1, 2, 3, 6, 8, and 24 hours and at 2 and 7 days using a visual analog scale (VAS). For comparison with the preoperative measurements, the patients were invited to postoperative follow-up visits 2 and 7 days after extraction to evaluate for swelling and trismus. The statistical analysis was performed using descriptive statistics in SAS, version 9.4 (SAS Institute, Cary, NC), software. Statistical analysis of the pain, swelling, and trismus data was performed using the Kruskal-Wallis, Dunn, and Wilcoxon-Mann-Whitney U tests. The statistical level of significance was accepted at P = .05 and power of 0.80. Clinically, tenoxicam showed better analgesic and anti-inflammatory efficacy compared with diclofenac sodium and, in particular, flurbiprofen. Although the VAS scores in the evaluation of pain showed statistically significant differences at 2 days, no statistically significant difference was found for swelling and trismus. Our study evaluated the analgesic and anti-inflammatory effects with a daily single dose of flurbiprofen, diclofenac sodium, and tenoxicam. Daily 20 mg tenoxicam can be accepted as an adequate and safe option for patients after a surgical procedure. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
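A much simplified one-locus illustration of the idea is sketched below: allele frequencies are passed through the nonlinear entropy transform -p log p before being contrasted between cases and controls, with a delta-method variance under the pooled null. The published statistic uses the full covariance matrix of the transformed frequencies, so this is only a sketch of the principle.

```python
import numpy as np

def entropy_transform(freq):
    """Shannon-entropy transform of a frequency vector: f -> -f * log(f)."""
    freq = np.clip(freq, 1e-12, 1.0)
    return -freq * np.log(freq)

def entropy_based_stat(counts_cases, counts_controls):
    """Illustrative entropy-based contrast between case and control allele counts."""
    n1, n2 = counts_cases.sum(), counts_controls.sum()
    p1, p2 = counts_cases / n1, counts_controls / n2
    p0 = (counts_cases + counts_controls) / (n1 + n2)      # pooled frequencies (null)
    h1, h2 = entropy_transform(p1), entropy_transform(p2)
    # delta-method variance of -p*log(p): ((log p + 1)^2) * p(1-p) * (1/n1 + 1/n2)
    deriv_sq = (np.log(np.clip(p0, 1e-12, 1.0)) + 1.0) ** 2
    var = deriv_sq * p0 * (1.0 - p0) * (1.0 / n1 + 1.0 / n2)
    return np.sum((h1 - h2) ** 2 / np.clip(var, 1e-12, None))

# Hypothetical biallelic counts for cases and controls
print(round(entropy_based_stat(np.array([120, 80]), np.array([90, 110])), 3))
```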
NASA Astrophysics Data System (ADS)
Mortuza, M.; Demissie, D.
2013-12-01
According to the U.S. Department of Energy's annual wind technologies market report, the wind power capacity in the country grew from 2.5 gigawatts in early 2000 to 60 gigawatts in 2012, making it one of the largest new sources of electric capacity additions in the U.S. in recent years. With over 2.8 gigawatts of current capacity (eighth largest in the nation), Washington State plays a significant role in this rapidly increasing energy resource. To further expand and/or optimize these capacities, assessment of the wind resource and its spatial and temporal variations is important. However, since at-site frequency analysis using meteorological data is not adequate for extending wind frequency estimates to locations with no data, longer return periods, and heterogeneous topography and surfaces, a regional frequency analysis based on the L-moment method is adopted in this study to estimate regional wind speed patterns and return periods in Washington State using hourly mean wind speed data from 1979 to 2010. The analysis applies the k-means, hierarchical and self-organizing map clustering techniques to explore potential clusters or regions; statistical tests are then applied to identify homogeneous regions and appropriate probability distribution models. The result from the analysis is expected to provide essential knowledge about the areas with potential capacity for constructing wind power plants, which can also be readily extended to assist decisions on their daily operations.
The Ironic Effect of Significant Results on the Credibility of Multiple-Study Articles
ERIC Educational Resources Information Center
Schimmack, Ulrich
2012-01-01
Cohen (1962) pointed out the importance of statistical power for psychology as a science, but statistical power of studies has not increased, while the number of studies in a single article has increased. It has been overlooked that multiple studies with modest power have a high probability of producing nonsignificant results because power…
The Statistical Power of the Cluster Randomized Block Design with Matched Pairs--A Simulation Study
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2010-01-01
This study uses simulation techniques to examine the statistical power of the group-randomized design and the matched-pair (MP) randomized block design under various parameter combinations. Both nearest neighbor matching and random matching are used for the MP design. The power of each design for any parameter combination was calculated from…
Asking Sensitive Questions: A Statistical Power Analysis of Randomized Response Models
ERIC Educational Resources Information Center
Ulrich, Rolf; Schroter, Hannes; Striegel, Heiko; Simon, Perikles
2012-01-01
This article derives the power curves for a Wald test that can be applied to randomized response models when small prevalence rates must be assessed (e.g., detecting doping behavior among elite athletes). These curves enable the assessment of the statistical power that is associated with each model (e.g., Warner's model, crosswise model, unrelated…
Impact of specimen adequacy on the assessment of renal allograft biopsy specimens.
Cimen, S; Geldenhuys, L; Guler, S; Imamoglu, A; Molinari, M
2016-01-01
The Banff classification was introduced to achieve uniformity in the assessment of renal allograft biopsies. The primary aim of this study was to evaluate the impact of specimen adequacy on the Banff classification. All renal allograft biopsies obtained between July 2010 and June 2012 for suspicion of acute rejection were included. Pre-biopsy clinical data on suspected diagnosis and time from renal transplantation were provided to a nephropathologist who was blinded to the original pathological report. Second pathological readings were compared with the original to assess agreement stratified by specimen adequacy. Cohen's kappa test and Fisher's exact test were used for statistical analyses. Forty-nine specimens were reviewed. Among these specimens, 81.6% were classified as adequate, 6.12% as minimal, and 12.24% as unsatisfactory. The agreement analysis between the first and second readings revealed a kappa value of 0.97. Full agreement between readings was found in 75% of the adequate specimens, and in 66.7% and 50% of the minimal and unsatisfactory specimens, respectively. There was no agreement between readings in 5% of the adequate specimens and 16.7% of the unsatisfactory specimens. For the entire sample, full agreement was found in 71.4%, partial agreement in 20.4% and no agreement in 8.2% of the specimens. Statistical analysis using Fisher's exact test yielded a P value above 0.25, showing that, probably due to the small sample size, the results were not statistically significant. Specimen adequacy may be a determinant of diagnostic agreement in renal allograft specimen assessment. While additional studies including larger case numbers are required to further delineate the impact of specimen adequacy on the reliability of histopathological assessments, specimen quality must be considered during clinical decision making while dealing with biopsy reports based on minimal or unsatisfactory specimens.
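For reference, the agreement measure used above can be computed from a cross-tabulation of the two readings as in the sketch below; the 3x3 table is hypothetical, not the study's data.

```python
import numpy as np

def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows = first reading, columns = second reading)."""
    confusion = np.asarray(confusion, float)
    n = confusion.sum()
    po = np.trace(confusion) / n                                     # observed agreement
    pe = (confusion.sum(axis=0) @ confusion.sum(axis=1)) / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical table of diagnostic categories assigned by the two readings
table = [[30, 2, 0],
         [1, 10, 1],
         [0, 1, 4]]
print(round(cohens_kappa(table), 3))
```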
Andrade, Carla Maria Araujo; Araujo Júnior, Edward; Torloni, Maria Regina; Moron, Antonio Fernandes; Guazzelli, Cristina Aparecida Falbo
2016-02-01
To compare the rates of success of two-dimensional (2D) and three-dimensional (3D) sonographic (US) examinations in locating and adequately visualizing levonorgestrel intrauterine devices (IUDs) and to explore factors associated with the unsuccessful viewing on 2D US. Transvaginal 2D and 3D US examinations were performed on all patients 1 month after insertion of levonorgestrel IUDs. The devices were considered adequately visualized on 2D US if both the vertical (shadow, upper and lower extremities) and the horizontal (two echogenic lines) shafts were identified. 3D volumes were also captured to assess the location of levonorgestrel IUDs on 3D US. Thirty women were included. The rates of adequate device visualization were 40% on 2D US (95% confidence interval [CI], 24.6; 57.7) and 100% on 3D US (95% CI, 88.6; 100.0). The device was not adequately visualized in all six women who had a retroflexed uterus, but it was adequately visualized in 12 of the 24 women (50%) who had a nonretroflexed uterus (95% CI, -68.6; -6.8). We found that 3D US is better than 2D US for locating and adequately visualizing levonorgestrel IUDs. Other well-designed studies with adequate power should be conducted to confirm this finding. © 2015 Wiley Periodicals, Inc.
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
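A minimal way to check for the variance-to-mean power relationship Var = a * Mean^b in data is an ordinary least-squares fit on the log-log scale, sketched below with simulated gamma-distributed interspike intervals; the paper develops a full likelihood-based inference method, which this sketch does not reproduce.

```python
import numpy as np

def fit_variance_mean_power(isi_groups):
    """Fit Var = a * Mean**b across groups of interspike intervals
    (e.g., one group per trial or firing-rate condition) by log-log regression."""
    means = np.array([np.mean(g) for g in isi_groups])
    variances = np.array([np.var(g, ddof=1) for g in isi_groups])
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b          # scale factor a, exponent b

# Gamma-distributed ISIs with fixed shape: variance scales as mean**2, so b should be near 2
rng = np.random.default_rng(2)
groups = [rng.gamma(shape=4.0, scale=s, size=500) for s in (0.01, 0.02, 0.05, 0.1)]
print(fit_variance_mean_power(groups))
```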
Joint probability of statistical success of multiple phase III trials.
Zhang, Jianliang; Zhang, Jenny J
2013-01-01
In drug development, after completion of phase II proof-of-concept trials, the sponsor needs to make a go/no-go decision to start expensive phase III trials. The probability of statistical success (PoSS) of the phase III trials based on data from earlier studies is an important factor in that decision-making process. Instead of statistical power, the predictive power of a phase III trial, which takes into account the uncertainty in the estimation of treatment effect from earlier studies, has been proposed to evaluate the PoSS of a single trial. However, regulatory authorities generally require statistical significance in two (or more) trials for marketing licensure. We show that the predictive statistics of two future trials are statistically correlated through use of the common observed data from earlier studies. Thus, the joint predictive power should not be evaluated as a simplistic product of the predictive powers of the individual trials. We develop the relevant formulae for the appropriate evaluation of the joint predictive power and provide numerical examples. Our methodology is further extended to the more complex phase III development scenario comprising more than two (K > 2) trials, that is, the evaluation of the PoSS of at least k₀ (k₀≤ K) trials from a program of K total trials. Copyright © 2013 John Wiley & Sons, Ltd.
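The point that the two trials' predictive statistics are correlated through the common phase II data can be illustrated with a small Monte Carlo sketch: conditional on the true effect the trials are independent, but averaging the squared conditional power over the phase II posterior gives a joint PoSS that differs from the naive product of single-trial predictive powers. The flat-prior posterior and all numbers below are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def joint_poss(delta_hat, se_phase2, n3_per_arm, sigma, n_sims=100000, alpha=0.025, seed=0):
    """Monte Carlo sketch of single-trial and joint probability of statistical success
    for two identically designed phase III trials, given a phase II estimate
    delta_hat with standard error se_phase2 (flat prior assumed)."""
    rng = np.random.default_rng(seed)
    delta = rng.normal(delta_hat, se_phase2, n_sims)     # posterior draws of the true effect
    se3 = sigma * np.sqrt(2.0 / n3_per_arm)              # SE of each phase III estimate
    z_crit = norm.ppf(1 - alpha)
    cond_power = 1 - norm.cdf(z_crit - delta / se3)      # power of one trial given delta
    return np.mean(cond_power), np.mean(cond_power ** 2) # single-trial PoSS, joint PoSS

single, joint = joint_poss(delta_hat=0.3, se_phase2=0.12, n3_per_arm=300, sigma=1.0)
print(single, joint, single ** 2)   # joint PoSS exceeds the naive product single**2
```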
Power of tests for comparing trend curves with application to national immunization survey (NIS).
Zhao, Zhen
2011-02-28
To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and to compare the statistical power of the proposed tests under different trend-curve data, three statistical tests were proposed. For large sample sizes with an independent normal assumption among strata and across consecutive time points, the Z and Chi-square test statistics were developed, which are functions of the outcome estimates and the standard errors at each of the study time points for the two strata. For small sample sizes with an independent normal assumption, the F-test statistic was generated, which is a function of the sample size of the two strata and the estimated parameters across the study period. If the two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-test. If the two trend curves cross at low interaction, the power of the Z-test is higher than or equal to the power of both the Chi-square and F-test; however, at high interaction, the powers of the Chi-square and F-test are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates of standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
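Illustrative large-sample forms of the two tests for comparing trend curves are sketched below, assuming independent estimates across strata and time points; the exact published statistics may differ in detail, and the coverage numbers in the example are hypothetical.

```python
import numpy as np
from scipy.stats import norm, chi2

def compare_trend_curves(est1, se1, est2, se2):
    """Z-test (pooled differences, sensitive to parallel shifts) and chi-square test
    (sum of squared standardized differences, sensitive to crossing curves) for
    comparing two strata across T time points."""
    d = np.asarray(est1) - np.asarray(est2)
    v = np.asarray(se1) ** 2 + np.asarray(se2) ** 2
    T = len(d)
    z = d.sum() / np.sqrt(v.sum())
    chisq = np.sum(d ** 2 / v)
    return 2 * norm.sf(abs(z)), chi2.sf(chisq, df=T)     # two p-values

# Hypothetical coverage trend curves for two strata over eight survey years
est1 = np.array([70, 72, 74, 75, 77, 79, 80, 82]) / 100
est2 = np.array([66, 68, 71, 72, 73, 75, 77, 78]) / 100
se = np.full(8, 0.015)
print(compare_trend_curves(est1, se, est2, se))
```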
NASA Technical Reports Server (NTRS)
Krantz, Timothy L.
2002-01-01
The Weibull distribution has been widely adopted for the statistical description and inference of fatigue data. This document provides user instructions, examples, and verification for software to analyze gear fatigue test data. The software was developed presuming the data are adequately modeled using a two-parameter Weibull distribution. The calculations are based on likelihood methods, and the approach taken is valid for data that include type 1 censoring. The software was verified by reproducing results published by others.
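A compact sketch of the underlying calculation (two-parameter Weibull maximum likelihood with type I, i.e. right, censoring) is shown below; the simulated cycle counts are hypothetical and the optimizer settings are illustrative rather than those of the described software.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_mle_censored(times, observed):
    """Two-parameter Weibull MLE with right censoring: failures contribute the log
    density, censored (suspended) tests contribute the log survival function.
    times: durations; observed: 1 = failure, 0 = censored at that time."""
    times, observed = np.asarray(times, float), np.asarray(observed, int)

    def neg_log_lik(params):
        k, lam = np.exp(params)                 # shape, scale (optimized on log scale)
        z = times / lam
        log_f = np.log(k / lam) + (k - 1) * np.log(z) - z ** k
        log_S = -z ** k
        return -np.sum(observed * log_f + (1 - observed) * log_S)

    res = minimize(neg_log_lik, x0=[0.0, np.log(times.mean())], method="Nelder-Mead")
    return np.exp(res.x)                        # (shape, scale)

# Hypothetical fatigue lives censored at 2e6 cycles
rng = np.random.default_rng(3)
t = rng.weibull(2.0, 40) * 1.5e6
censor = 2.0e6
print(weibull_mle_censored(np.minimum(t, censor), (t <= censor).astype(int)))
```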
NASA Astrophysics Data System (ADS)
Wimmer, G.
2008-01-01
In this paper we introduce two confidence and two prediction regions for the statistical characterization of concentration measurements of product ions, in order to discriminate between various groups of persons for prospectively better detection of primary lung cancer. Two MATLAB algorithms have been created for a more adequate description of concentration measurements of volatile organic compounds in human breath gas for potential detection of primary lung cancer and for evaluation of the appropriate confidence and prediction regions.
d'Entremont, Anna; Corgnale, Claudio; Hardy, Bruce; ...
2018-01-11
Concentrating solar power plants can achieve low cost and efficient renewable electricity production if equipped with adequate thermal energy storage systems. Metal hydride based thermal energy storage systems are appealing candidates due to their demonstrated potential for very high volumetric energy densities, high exergetic efficiencies, and low costs. The feasibility and performance of a thermal energy storage system based on NaMgH2F hydride paired with TiCr1.6Mn0.2 is examined, discussing its integration with a solar-driven ultra-supercritical steam power plant. The simulated storage system is based on a laboratory-scale experimental apparatus. It is analyzed using a detailed transport model accounting for the thermochemical hydrogen absorption and desorption reactions, including kinetics expressions adequate for the current metal hydride system. The results show that the proposed metal hydride pair can suitably be integrated with a high temperature steam power plant. The thermal energy storage system achieves output energy densities of 226 kWh/m3, 9 times the DOE SunShot target, with moderate temperature and pressure swings. Also, simulations indicate that there is significant scope for performance improvement via heat-transfer enhancement strategies.
30 CFR 75.1101-5 - Installation of foam generator systems.
Code of Federal Regulations, 2012 CFR
2012-07-01
.... (b) Foam generator systems shall be equipped with a fire sensor which actuates the system, and each.... (d) Water, power, and chemicals required shall be adequate to maintain water or foam flow for no less...
30 CFR 75.1101-5 - Installation of foam generator systems.
Code of Federal Regulations, 2013 CFR
2013-07-01
.... (b) Foam generator systems shall be equipped with a fire sensor which actuates the system, and each.... (d) Water, power, and chemicals required shall be adequate to maintain water or foam flow for no less...
30 CFR 75.1101-5 - Installation of foam generator systems.
Code of Federal Regulations, 2014 CFR
2014-07-01
.... (b) Foam generator systems shall be equipped with a fire sensor which actuates the system, and each.... (d) Water, power, and chemicals required shall be adequate to maintain water or foam flow for no less...
AeroMACS Interference Simulations for Global Airports
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.; Apaza, Rafael D.
2012-01-01
Eighteen scenarios were run with the Visualyse Professional interference software (the two most realistic scenarios are presented). Scenario A: 85 large airports can transmit 1650 mW on each of 11 channels, 173 medium airports can transmit 825 mW on each of 6 channels, and 5951 small airports can transmit 275 mW on one channel. Reducing the power allowed for small airports in Scenario B increases the allowable power for large and medium airports, but this should not be necessary since the Scenario A levels are more than adequate. These power limitations are conservative because the worst case of a 100% duty cycle is assumed.
31 CFR 1.24 - Disclosure of records to person other than the individual to whom they pertain.
Code of Federal Regulations, 2010 CFR
2010-07-01
... provided the component with advance adequate written assurance that the record will be used solely as a statistical research or reporting record, and the record is to be transferred in a form that is not...
49 CFR 802.5 - Procedures for requests pertaining to individual records in a record system.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) To a recipient who has provided the NTSB with advance adequate assurance that the record will be used solely as a statistical research or reporting record and that it is to be transferred in a form not...
Considerable concern has been raised regarding research reproducibility both within and outside the scientific community. Several factors possibly contribute to a lack of reproducibility, including a failure to adequately employ statistical considerations during study design, bia...
ERIC Educational Resources Information Center
Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.
2010-01-01
This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
ERIC Educational Resources Information Center
Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben
2016-01-01
The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…
Pasaniuc, Bogdan; Zaitlen, Noah; Lettre, Guillaume; Chen, Gary K; Tandon, Arti; Kao, W H Linda; Ruczinski, Ingo; Fornage, Myriam; Siscovick, David S; Zhu, Xiaofeng; Larkin, Emma; Lange, Leslie A; Cupples, L Adrienne; Yang, Qiong; Akylbekova, Ermeg L; Musani, Solomon K; Divers, Jasmin; Mychaleckyj, Joe; Li, Mingyao; Papanicolaou, George J; Millikan, Robert C; Ambrosone, Christine B; John, Esther M; Bernstein, Leslie; Zheng, Wei; Hu, Jennifer J; Ziegler, Regina G; Nyante, Sarah J; Bandera, Elisa V; Ingles, Sue A; Press, Michael F; Chanock, Stephen J; Deming, Sandra L; Rodriguez-Gil, Jorge L; Palmer, Cameron D; Buxbaum, Sarah; Ekunwe, Lynette; Hirschhorn, Joel N; Henderson, Brian E; Myers, Simon; Haiman, Christopher A; Reich, David; Patterson, Nick; Wilson, James G; Price, Alkes L
2011-04-01
While genome-wide association studies (GWAS) have primarily examined populations of European ancestry, more recent studies often involve additional populations, including admixed populations such as African Americans and Latinos. In admixed populations, linkage disequilibrium (LD) exists both at a fine scale in ancestral populations and at a coarse scale (admixture-LD) due to chromosomal segments of distinct ancestry. Disease association statistics in admixed populations have previously considered SNP association (LD mapping) or admixture association (mapping by admixture-LD), but not both. Here, we introduce a new statistical framework for combining SNP and admixture association in case-control studies, as well as methods for local ancestry-aware imputation. We illustrate the gain in statistical power achieved by these methods by analyzing data of 6,209 unrelated African Americans from the CARe project genotyped on the Affymetrix 6.0 chip, in conjunction with both simulated and real phenotypes, as well as by analyzing the FGFR2 locus using breast cancer GWAS data from 5,761 African-American women. We show that, at typed SNPs, our method yields an 8% increase in statistical power for finding disease risk loci compared to the power achieved by standard methods in case-control studies. At imputed SNPs, we observe an 11% increase in statistical power for mapping disease loci when our local ancestry-aware imputation framework and the new scoring statistic are jointly employed. Finally, we show that our method increases statistical power in regions harboring the causal SNP in the case when the causal SNP is untyped and cannot be imputed. Our methods and our publicly available software are broadly applicable to GWAS in admixed populations.
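A deliberately simplified sketch of combining the two sources of signal at a locus is given below: if the SNP-association and admixture-association chi-square statistics were independent 1-df statistics, their sum could be referred to a 2-df distribution. The published framework is a likelihood-based joint score test rather than this naive sum, so the sketch only conveys the intuition.

```python
from scipy.stats import chi2

def combined_snp_admixture_pvalue(chisq_snp, chisq_admixture):
    """Naive combination of a SNP-association signal and an admixture-association
    (local-ancestry) signal at the same locus, assuming independent 1-df statistics."""
    return chi2.sf(chisq_snp + chisq_admixture, df=2)

# Hypothetical per-locus statistics
print(combined_snp_admixture_pvalue(6.5, 3.2))
```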
Evaluation of leaf litter leaching kinetics through commonly-used mathematical models
NASA Astrophysics Data System (ADS)
Montoya, J. V.; Bastianoni, A.; Mendez, C.; Paolini, J.
2012-04-01
Leaching is defined as the abiotic process by which soluble compounds of the litter are released into the water. Most studies dealing with leaf litter breakdown and leaching kinetics apply the single exponential decay model since it corresponds well with the understanding of the biology of decomposition. However, important mass losses occur during leaching, and mathematical models often fail to describe this process adequately. During the initial hours of leaching, leaf litter experiences high decay rates that are not properly modelled. Fitting leaching losses to mathematical models has not been investigated thoroughly, and the use of models that assume constant decay rates leads to inappropriate assessments of leaching kinetics. We aim to describe, assess, and compare different leaching kinetics models fitted to leaf litter mass losses from six Neotropical riparian forest species. Leaf litter from each species was collected in the lower reaches of San Miguel stream in Northern Venezuela. Air-dried leaves from each species were incubated in 250 ml of water in the dark at room temperature. At 1h, 6h, 1d, 2d, 4d, 8d and 15d, three jars were removed from the assay in a no-replacement experimental design. At each time, leaves from each jar were removed and oven-dried; the dried leaves were then weighed and the remaining dry mass was determined and expressed as ash-free dry mass. Mass losses of leaf litter showed steep declines over the first two days followed by a steady decrease in mass loss. Data were fitted to three different models: single-exponential, power and rational. Our results showed that the mass loss predicted with the single-exponential model did not reflect the real data at any stage of the leaching process. The power model showed a better fit, but failed to predict the behavior during the early stages of leaching. To evaluate the performance of our models we used three criteria: Adj-R2, Akaike's Information Criterion (AIC), and residual distribution. Higher Adj-R2 values were obtained for the power and the rational-type models. However, when AIC and residual distribution were used, the only model that could satisfactorily predict the behavior of our dataset was the rational type. Even though the Adj-R2 was higher for some species with the power model than with the rational-type model, our results showed that this criterion alone cannot demonstrate the predictive performance of a model. Adj-R2 is commonly used to assess the goodness of fit of a mathematical model, disregarding the fact that a good Adj-R2 can be obtained even when the statistical assumptions required for the validity of the model are not satisfied. Our results showed that sampling at the initial stages of leaching is necessary to adequately describe this process. We also provided evidence that traditional mathematical models are not the best option for evaluating leaching kinetics because of their inability to properly describe the abrupt changes that occur during the early stages of leaching. We also found it useful to apply different criteria to evaluate the goodness of fit and performance of any model considered, taking into account both the statistical and biological meaning of the results.
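As a rough illustration of the model comparison described above, the sketch below fits three candidate decay curves to hypothetical remaining-mass data and ranks them by AIC. The functional forms and data values are assumptions, not the study's actual models or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# time (days) and remaining ash-free dry mass fraction -- illustrative values only
t = np.array([0.04, 0.25, 1, 2, 4, 8, 15])
m = np.array([0.97, 0.90, 0.80, 0.74, 0.70, 0.66, 0.62])

def single_exp(t, k):    return np.exp(-k * t)          # constant decay rate
def power_mod(t, a, b):  return a * t ** (-b)           # decelerating decay
def rational(t, a, k):   return a / (1.0 + k * t)       # rational-type decay

def aic(y, yhat, n_par):
    """AIC for least-squares fits: n*ln(RSS/n) + 2*k."""
    rss = np.sum((y - yhat) ** 2)
    n = len(y)
    return n * np.log(rss / n) + 2 * n_par

for name, f, p0 in [("single exponential", single_exp, (0.1,)),
                    ("power",              power_mod,  (0.8, 0.1)),
                    ("rational",           rational,   (0.95, 0.05))]:
    popt, _ = curve_fit(f, t, m, p0=p0, maxfev=10000)
    print(name, np.round(popt, 3), "AIC =", round(aic(m, f(t, *popt), len(popt)), 1))
```

Lower AIC indicates the better-supported model for these (made-up) data; inspecting residuals, as the abstract recommends, would complete the comparison.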
Santos-Jasso, Karla Alejandra; Arredondo-García, José Luis; Maza-Vallejos, Jorge; Lezama-Del Valle, Pablo
2017-01-01
Constipation is present in 80% of children with corrected anorectal malformations, usually associated with rectal dilation and hypomotility. Osmotic laxatives are routinely used for idiopathic constipation. Senna is a stimulant laxative that produces contractions improving colonic motility without affecting stool consistency. We designed this trial to study the effectiveness of Senna versus polyethylene glycol for the treatment of constipation in children with anorectal malformation. A randomized controlled crossover clinical trial, including a washout period, was conducted in children with corrected anorectal malformations with fecal continence and constipation. The sample size was calculated for proportions (n=28) according to available data for Senna. Effectiveness of laxative therapy was measured with a three-variable construct: 1) daily bowel movement, 2) fecal soiling, 3) a "clean" abdominal x-ray. Data analysis included descriptive statistics and Fisher's exact test for the outcome variable (effectiveness). The study was terminated early because the interim analysis showed a clear benefit toward Senna (p = 0.026). The sample showed a normal statistical distribution for the variables age and presence of megarectum. The maximum daily dose of Senna (sennosides A and B) was 38.7 mg, and that of polyethylene glycol was 17 g. No adverse effects were identified. Therapy with Senna should be the laxative treatment of choice as part of a bowel management program in children with repaired anorectal malformations and constipation, since the stimulation of colonic propulsion waves can lead to stool evacuation without modifying stool consistency, which could affect fecal continence. Level of evidence: I - randomized controlled trial with adequate statistical power. Copyright © 2017 Elsevier Inc. All rights reserved.
Testosterone replacement therapy and the heart: friend, foe or bystander?
Canfield, Steven; Wang, Run
2016-01-01
The role of testosterone therapy (TTh) in cardiovascular disease (CVD) outcomes is still controversial, and it seems it will remain inconclusive for the moment. An extensive body of literature, including several meta-analyses, has investigated the association of endogenous testosterone and use of TTh with CVD events. In some instances, studies reported beneficial effects of TTh on CVD events, and in other instances the literature reported detrimental effects or no effects at all. Yet, no review article has scrutinized this body of literature using the magnitude of associations and the statistical significance reported for this relationship. We critically reviewed the previous and emerging body of literature that investigated the association of endogenous testosterone and use of TTh with CVD events (fatal and nonfatal only). These studies were divided into three groups, "beneficial (friendly use)", "detrimental (foe)" and "no effects at all (bystander)", based on their magnitude of associations and statistical significance from original research studies and meta-analyses of epidemiological studies and of randomized controlled trials (RCTs). In this review, the studies reporting a significant association of high levels of testosterone with a reduced risk of CVD events in original prospective studies and meta-analyses of cross-sectional and prospective studies seem to be the more consistent. However, the meta-analyses of RCTs do not provide a clear picture after being divided into the beneficial, detrimental, or no-effects-at-all groups according to their magnitudes of association and statistical significance. From this review, we suggest that a study, or a number of studies, with adequate power and the necessary epidemiological and clinical data is needed to provide a definitive conclusion on whether the effect of TTh on the natural history of CVD is real or not. PMID:28078222
Testosterone replacement therapy and the heart: friend, foe or bystander?
Lopez, David S; Canfield, Steven; Wang, Run
2016-12-01
The role of testosterone therapy (TTh) in cardiovascular disease (CVD) outcomes is still controversial, and it seems it will remain inconclusive for the moment. An extensive body of literature, including several meta-analyses, has investigated the association of endogenous testosterone and use of TTh with CVD events. In some instances, studies reported beneficial effects of TTh on CVD events, and in other instances the literature reported detrimental effects or no effects at all. Yet, no review article has scrutinized this body of literature using the magnitude of associations and the statistical significance reported for this relationship. We critically reviewed the previous and emerging body of literature that investigated the association of endogenous testosterone and use of TTh with CVD events (fatal and nonfatal only). These studies were divided into three groups, "beneficial (friendly use)", "detrimental (foe)" and "no effects at all (bystander)", based on their magnitude of associations and statistical significance from original research studies and meta-analyses of epidemiological studies and of randomized controlled trials (RCTs). In this review, the studies reporting a significant association of high levels of testosterone with a reduced risk of CVD events in original prospective studies and meta-analyses of cross-sectional and prospective studies seem to be the more consistent. However, the meta-analyses of RCTs do not provide a clear picture after being divided into the beneficial, detrimental, or no-effects-at-all groups according to their magnitudes of association and statistical significance. From this review, we suggest that a study, or a number of studies, with adequate power and the necessary epidemiological and clinical data is needed to provide a definitive conclusion on whether the effect of TTh on the natural history of CVD is real or not.
Estimating extreme river discharges in Europe through a Bayesian network
NASA Astrophysics Data System (ADS)
Paprotny, Dominik; Morales-Nápoles, Oswaldo
2017-06-01
Large-scale hydrological modelling of flood hazards requires adequate extreme discharge data. In practice, models based on physics are applied alongside those utilizing only statistical analysis. The former require enormous computational power, while the latter are mostly limited in accuracy and spatial coverage. In this paper we introduce an alternative statistical approach based on Bayesian networks (BNs), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables representing the geographical characteristics of their catchments. Annual maxima of daily discharges from more than 1800 river gauges (stations with catchment areas ranging from 1.4 to 807 000 km2) were collected, together with information on terrain, land use and local climate. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that using this method, mean annual maxima and return periods of discharges could be estimated with an accuracy similar to existing studies using physical models for Europe and better than a comparable global statistical model. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods, and remains the same in a split-sample validation. Though discharge prediction under climate change is not the main scope of this paper, the BN was applied, as an example, to a large domain covering rivers of all sizes across the continent for both present and future climate. Results show substantial variation in the influence of climate change on river discharges. The model can be used to provide quick estimates of extreme discharges at any location for the purpose of obtaining input information for hydraulic modelling.
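The sketch below illustrates, under strong simplifying assumptions, the copula idea behind a non-parametric BN: margins are handled empirically via ranks, dependence via a Gaussian copula, and a conditional discharge quantile is read off given covariates. The variables, their joint model, and the saturated dependence structure are invented for illustration; the paper's BN encodes a specific conditional-dependence structure over many more catchment descriptors.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
n = 500
# hypothetical station data: annual-maximum discharge and two catchment descriptors
area = rng.lognormal(5, 1, n)
precip = rng.normal(800, 200, n)
discharge = 0.05 * area * (1 + 0.002 * (precip - 800)) * rng.lognormal(0, 0.3, n)
data = np.column_stack([discharge, area, precip])

# 1) transform each margin to standard normal scores via empirical ranks
u = (rankdata(data, axis=0) - 0.5) / n
z = norm.ppf(u)

# 2) the Gaussian copula: correlation matrix of the normal scores
R = np.corrcoef(z, rowvar=False)
R12, R22 = R[0, 1:], R[1:, 1:]
w = np.linalg.solve(R22, R12)          # regression weights in normal-score space
cond_sd = np.sqrt(1.0 - R12 @ w)

def predict_quantile(area_i, precip_i, q=0.5):
    """Conditional discharge quantile given covariates, mapped back through
    the empirical margin of the observed discharges."""
    z_cov = norm.ppf([np.mean(area <= area_i), np.mean(precip <= precip_i)])
    z_cov = np.clip(z_cov, -8, 8)      # guard against infinite scores at the extremes
    z_cond = z_cov @ w + cond_sd * norm.ppf(q)
    return np.quantile(discharge, norm.cdf(z_cond))

print(predict_quantile(np.median(area), np.median(precip)))
```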
NASA Office of Aeronautics and Space Technology Summer Workshop. Volume 4: Power technology panel
NASA Technical Reports Server (NTRS)
1975-01-01
Technology requirements in the areas of energy sources and conversion, power processing, distribution, conversion, and transmission, and energy storage are identified for space shuttle payloads. It is concluded that the power system technology currently available is adequate to accomplish all missions in the 1973 Mission Model, but that further development is needed to support space opportunities of the future as identified by users. Space experiments are proposed in the following areas: power generation in space, advanced photovoltaic energy converters, solar and nuclear thermoelectric technology, nickel-cadmium batteries, flywheels (mechanical storage), satellite-to-ground transmission and reconversion systems, and regenerative fuel cells.
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
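For readers who want to reproduce this kind of calculation, the sketch below uses statsmodels' two-sample t-test power machinery. The standardized effect size and the group size of 8 are assumptions chosen for illustration, not the values estimated in the review, so the resulting sample size will not necessarily match the article's figure of 15 animals per group.

```python
from statsmodels.stats.power import TTestIndPower

# assumed inputs: a standardized effect size (Cohen's d) of 0.6 and alpha = 0.05
analysis = TTestIndPower()

# sample size per group needed for 80% power
n_per_group = analysis.solve_power(effect_size=0.6, alpha=0.05, power=0.8)

# power actually achieved with a typical group of 8 animals
achieved = analysis.power(effect_size=0.6, nobs1=8, alpha=0.05, ratio=1.0)

print(round(n_per_group, 1), round(achieved, 2))
```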
Effect size and statistical power in the rodent fear conditioning literature – A systematic review
Macleod, Malcolm R.
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science. PMID:29698451
Detecting Genomic Clustering of Risk Variants from Sequence Data: Cases vs. Controls
Schaid, Daniel J.; Sinnwell, Jason P.; McDonnell, Shannon K.; Thibodeau, Stephen N.
2013-01-01
As the ability to measure dense genetic markers approaches the limit of the DNA sequence itself, taking advantage of possible clustering of genetic variants in, and around, a gene would benefit genetic association analyses and likely provide biological insights. The greatest benefit might be realized when multiple rare variants cluster in a functional region. Several statistical tests have been developed, one of which is based on the popular Kulldorff scan statistic for spatial clustering of disease. We extended another popular spatial clustering method – Tango’s statistic – to genomic sequence data. An advantage of Tango’s method is that it is rapid to compute, and when a single test statistic is computed, its distribution is well approximated by a scaled chi-square distribution, making computation of p-values very rapid. We compared the Type-I error rates and power of several clustering statistics, as well as the omnibus sequence kernel association test (SKAT). Although our version of Tango’s statistic, which we call the “Kernel Distance” statistic, took approximately half the time to compute compared with the Kulldorff scan statistic, it had slightly less power than the scan statistic. Our results showed that the Ionita-Laza version of Kulldorff’s scan statistic had the greatest power over a range of clustering scenarios. PMID:23842950
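A toy sketch of a Tango-type clustering statistic on sequence data follows: a quadratic form of per-variant case excesses with a distance-decay kernel over genomic positions, calibrated here by a simple resampling scheme rather than the scaled chi-square approximation mentioned in the abstract. Positions, carrier counts, and the kernel bandwidth are all invented, and this is not the authors' exact Kernel Distance statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

# hypothetical data: positions (bp) of 30 rare variants in a gene region and
# per-variant carrier counts in 500 cases and 500 controls
pos = np.sort(rng.integers(0, 20_000, 30))
cases = rng.binomial(500, 0.01, 30)
ctrls = rng.binomial(500, 0.01, 30)
cases[10:14] += rng.binomial(500, 0.01, 4)      # inject a cluster of excess case carriers

def kernel_stat(cases, ctrls, pos, tau=1_000.0):
    """Tango-type statistic: quadratic form of case excesses with an
    exponential distance-decay kernel over genomic positions."""
    total = cases + ctrls
    frac = cases.sum() / total.sum()
    u = cases - frac * total                    # excess case carriers per variant
    K = np.exp(-np.abs(pos[:, None] - pos[None, :]) / tau)
    return u @ K @ u

obs = kernel_stat(cases, ctrls, pos)

# null reference: re-split each variant's carriers between cases and controls
total = cases + ctrls
p0 = cases.sum() / total.sum()
null = np.array([kernel_stat(rng.binomial(total, p0), None, pos)
                 if False else
                 kernel_stat((c := rng.binomial(total, p0)), total - c, pos)
                 for _ in range(2000)])
print(obs, np.mean(null >= obs))                # statistic and resampling p-value
```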
The light pollution as a surrogate for urban population of the US cities
NASA Astrophysics Data System (ADS)
Operti, Felipe G.; Oliveira, Erneson A.; Carmona, Humberto A.; Machado, Javam C.; Andrade, José S.
2018-02-01
We show that the definition of the city boundaries can have a dramatic influence on the scaling behavior of the night-time light (NTL) as a function of population (POP) in the US. Precisely, our results show that the arbitrary geopolitical definition based on the Metropolitan/Consolidated Metropolitan Statistical Areas (MSA/CMSA) leads to a sublinear power-law growth of NTL with POP. On the other hand, when cities are defined according to a more natural agglomeration criteria, namely, the City Clustering Algorithm (CCA), an isometric relation emerges between NTL and population. This discrepancy is compatible with results from previous works showing that the scaling behaviors of various urban indicators with population can be substantially different for distinct definitions of city boundaries. Moreover, considering the CCA definition as more adequate than the MSA/CMSA one because the former does not violate the expected extensivity between land population and area of their generated clusters, we conclude that, without loss of generality, the CCA measures of light pollution and population could be interchangeably utilized in future studies.
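To make the scaling claim concrete, the following sketch estimates the exponent β in NTL ∝ POP^β by ordinary least squares on log-transformed values. The city-level data are simulated, so the numbers only illustrate how sublinear (β < 1) versus isometric (β ≈ 1) scaling would be detected; they are not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
pop = 10 ** rng.uniform(4, 7, 200)                    # hypothetical city populations
ntl = 2.0 * pop ** 0.92 * rng.lognormal(0, 0.2, 200)  # assume sublinear scaling here

# scaling exponent beta from an OLS fit of log(NTL) on log(POP)
beta, log_a = np.polyfit(np.log10(pop), np.log10(ntl), 1)
print(round(beta, 3))   # beta < 1: sublinear; beta close to 1: isometric
```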
Poggi, L A; Malizia, A; Ciparisse, J F; Gaudio, P
2016-10-01
An open issue still under investigation by several international entities working in the safety and security field for the foreseen nuclear fusion reactors is the estimation of source terms that are a hazard for operators and the public, and for the machine itself in terms of efficiency and integrity in case of severe accident scenarios. Source term estimation is a crucial safety issue to be addressed in future reactor safety assessments, and the estimates available at this time are not sufficiently satisfactory. The lack of neutronic data, along with the insufficiently accurate methodologies used until now, calls for an integrated methodology for source term estimation that can provide predictions with adequate accuracy. This work proposes a complete methodology to estimate dust source terms starting from broad information gathering. The wide number of parameters that can influence dust source term production is reduced with statistical tools using a combination of screening, sensitivity analysis, and uncertainty analysis. Finally, a preliminary and simplified methodology for predicting dust source term production in future devices is presented.
Hoare, Jacqueline; Carey, Paul; Joska, John A; Carrara, Henri; Sorsdahl, Katherine; Stein, Dan J
2014-02-01
Depression can be a chronic and impairing illness in people with human immunodeficiency virus (HIV)/acquired immunodeficiency syndrome. Large randomized studies of newer selective serotonin reuptake inhibitors such as escitalopram in the treatment of depression in HIV, examining comparative treatment efficacy and safety, have yet to be done in HIV-positive patients. This was a fixed-dose, placebo-controlled, randomized, double-blind study to investigate the efficacy of escitalopram in HIV-seropositive subjects with Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, major depressive disorder. One hundred two participants were randomly assigned to either 10 mg of escitalopram or placebo for 6 weeks. An analysis of covariance of the completers found that there was no advantage for escitalopram over placebo on the Montgomery-Asberg Depression Rating Scale (p = 0.93). Sixty-two percent responded to escitalopram and 59% responded to placebo on the Clinical Global Impression Scale. Given the relatively high placebo response, future trials in this area need to be selective in participant recruitment and to be adequately powered.
Ilesanmi, O S; Alele, F O
2015-01-01
The role of Medical Audit in patient care needs to be explored. This study aimed to determine doctors' knowledge and practice of Medical Audit in a tertiary health facility in South West Nigeria. A cross-sectional study of 115 consenting doctors at Federal Medical Centre Owo was conducted. A semi-structured, self-administered questionnaire was used. Data were analyzed using SPSS version 21. Descriptive statistics were presented using frequency tables and a bar chart; age and years of practice were summarized as means and standard deviations. The chi-square test was used to compare sociodemographic variables with doctors' knowledge of Medical Audit. The level of statistical significance was 5%. The mean age of the respondents was 32.5 ± 5.8 years. Males were 78%, and 61.7% were married. The mean duration of practice was 3.3 ± 2.2 years. Adequate knowledge of Medical Audit was found in 79% of the respondents, while only 53% had practiced it. Formal training on Medical Audit had not been received by 91.3% of the respondents; 80.9% requested training on Medical Audit. In all, 88.0% of those with ≥3 years of practice had adequate knowledge compared with only 72.3% of those with less than three years of practice (p = 0.040). The practice of Medical Audit is low even though adequate knowledge exists. Training of doctors on Medical Audit is required.
Egbewale, Bolaji E; Lewis, Martyn; Sim, Julius
2014-04-09
Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. 126 hypothetical trial scenarios were evaluated (126,000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power.
2014-01-01
Background Analysis of variance (ANOVA), change-score analysis (CSA) and analysis of covariance (ANCOVA) respond differently to baseline imbalance in randomized controlled trials. However, no empirical studies appear to have quantified the differential bias and precision of estimates derived from these methods of analysis, and their relative statistical power, in relation to combinations of levels of key trial characteristics. This simulation study therefore examined the relative bias, precision and statistical power of these three analyses using simulated trial data. Methods 126 hypothetical trial scenarios were evaluated (126 000 datasets), each with continuous data simulated by using a combination of levels of: treatment effect; pretest-posttest correlation; direction and magnitude of baseline imbalance. The bias, precision and power of each method of analysis were calculated for each scenario. Results Compared to the unbiased estimates produced by ANCOVA, both ANOVA and CSA are subject to bias, in relation to pretest-posttest correlation and the direction of baseline imbalance. Additionally, ANOVA and CSA are less precise than ANCOVA, especially when pretest-posttest correlation ≥ 0.3. When groups are balanced at baseline, ANCOVA is at least as powerful as the other analyses. Apparently greater power of ANOVA and CSA at certain imbalances is achieved in respect of a biased treatment effect. Conclusions Across a range of correlations between pre- and post-treatment scores and at varying levels and direction of baseline imbalance, ANCOVA remains the optimum statistical method for the analysis of continuous outcomes in RCTs, in terms of bias, precision and statistical power. PMID:24712304
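The single simulated trial below illustrates the pattern the simulation study reports: with a nonzero pre-post correlation and a deliberately injected baseline imbalance, ANOVA and change-score analysis give biased group-effect estimates while ANCOVA does not. All parameter values are assumptions for illustration, and the code is not the authors' simulation protocol.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, rho, effect = 50, 0.6, 0.4            # per-arm size, pre-post correlation, true effect

pre = rng.normal(0, 1, 2 * n)
group = np.repeat([0, 1], n)
pre[group == 1] += 0.3                    # deliberate baseline imbalance favouring group 1
post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, 2 * n) + effect * group
df = pd.DataFrame({"pre": pre, "post": post, "group": group, "change": post - pre})

anova  = smf.ols("post ~ group", data=df).fit()        # ignores baseline
csa    = smf.ols("change ~ group", data=df).fit()      # change-score analysis
ancova = smf.ols("post ~ group + pre", data=df).fit()  # adjusts for baseline

for name, fit in [("ANOVA", anova), ("CSA", csa), ("ANCOVA", ancova)]:
    print(name, round(fit.params["group"], 3), "SE", round(fit.bse["group"], 3))
```

Repeating this over many simulated datasets (as the study does) makes the bias and precision differences explicit rather than anecdotal.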
Code of Federal Regulations, 2012 CFR
2012-01-01
...: (1) Airspeed indicator. (2) Altimeter. (3) Magnetic direction indicator. (4) Tachometer for each... aircraft is operated for hire, one electric landing light. (5) An adequate source of electrical energy for...
Code of Federal Regulations, 2014 CFR
2014-01-01
...: (1) Airspeed indicator. (2) Altimeter. (3) Magnetic direction indicator. (4) Tachometer for each... aircraft is operated for hire, one electric landing light. (5) An adequate source of electrical energy for...
Code of Federal Regulations, 2011 CFR
2011-01-01
...: (1) Airspeed indicator. (2) Altimeter. (3) Magnetic direction indicator. (4) Tachometer for each... aircraft is operated for hire, one electric landing light. (5) An adequate source of electrical energy for...
Code of Federal Regulations, 2013 CFR
2013-01-01
...: (1) Airspeed indicator. (2) Altimeter. (3) Magnetic direction indicator. (4) Tachometer for each... aircraft is operated for hire, one electric landing light. (5) An adequate source of electrical energy for...
Harnessing Pavement Power : Developing Renewable Energy Technology in the Public Right-of-Way
DOT National Transportation Integrated Search
2013-09-18
Intelligent Compaction (IC) of soil and asphalt mixes is an innovative approach that has been utilized to achieve uniform, adequate compaction of pavement layers during construction. Commercially available IC products provide machine specific compact...
29 CFR Section 1607.16 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... action are open to users. T. Skill. A present, observable competence to perform a learned psychomotor act... criterion-related validity studies. These conditions include: (1) An adequate sample of persons available for the study to achieve findings of statistical significance; (2) having or being able to obtain a...
On the structure and phase transitions of power-law Poissonian ensembles
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Oshanin, Gleb
2012-10-01
Power-law Poissonian ensembles are Poisson processes that are defined on the positive half-line, and that are governed by power-law intensities. Power-law Poissonian ensembles are stochastic objects of fundamental significance; they uniquely display an array of fractal features and they uniquely generate a span of important applications. In this paper we apply three different methods—oligarchic analysis, Lorenzian analysis and heterogeneity analysis—to explore power-law Poissonian ensembles. The amalgamation of these analyses, combined with the topology of power-law Poissonian ensembles, establishes a detailed and multi-faceted picture of the statistical structure and the statistical phase transitions of these elemental ensembles.
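A minimal sketch of generating one realization of such an ensemble is given below, assuming a truncated power-law intensity λ(x) = c·x^(−α) on (x_min, x_max) with α ≠ 1 and inverse-transform placement of the points; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)

# assumed intensity lambda(x) = c * x**(-alpha) on (x_min, x_max), alpha != 1
c, alpha, x_min, x_max = 50.0, 1.5, 0.01, 100.0

def cum_intensity(x):
    """Integral of c * t**(-alpha) from x_min to x."""
    return c * (x ** (1 - alpha) - x_min ** (1 - alpha)) / (1 - alpha)

total = cum_intensity(x_max)
n = rng.poisson(total)                      # number of points in the realization
u = rng.uniform(0, total, n)
# invert the cumulative intensity to place the points on the half-line segment
points = (u * (1 - alpha) / c + x_min ** (1 - alpha)) ** (1 / (1 - alpha))

print(n, points.min(), points.max())
```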
Miniature Radioisotope Thermoelectric Power Cubes
NASA Technical Reports Server (NTRS)
Patel, Jagdish U.; Fleurial, Jean-Pierre; Snyder, G. Jeffrey; Caillat, Thierry
2004-01-01
Cube-shaped thermoelectric devices energized by alpha particles from the radioactive decay of Cm-244 have been proposed as long-lived sources of power. These power cubes are intended especially for incorporation into electronic circuits that must operate in dark, extremely cold locations (e.g., polar locations or deep underwater on Earth, or in deep interplanetary space). Unlike conventional radioisotope thermoelectric generators used heretofore as central power sources in some spacecraft, the proposed power cubes would be small enough (volumes would range between 0.1 and 0.2 cm3) to play the roles of batteries that are parts of, and dedicated to, individual electronic-circuit packages. Unlike electrochemical batteries, these power cubes would perform well at low temperatures. They would also last much longer: given that the half-life of Cm-244 is 18 years, a power cube could remain adequate as a power source for years, depending on the power demand in its particular application.
NASA Astrophysics Data System (ADS)
Asal, F. F.
2012-07-01
Digital elevation data obtained from different Engineering Surveying techniques are used to generate a Digital Elevation Model (DEM), which is employed in many Engineering and Environmental applications. These data are usually in discrete point format, making it necessary to use an interpolation approach to create the DEM. Quality assessment of the DEM is a vital issue controlling its use in different applications; however, this assessment relies heavily on statistical methods while neglecting visual methods. This research applies visual analysis to DEMs generated using the IDW interpolator with varying powers in order to examine its potential for assessing the effects of the IDW power on DEM quality. Real elevation data were collected in the field using a total station instrument over corrugated terrain. DEMs were generated from the data at a unified cell size using the IDW interpolator with power values ranging from one to ten. Visual analysis was undertaken using 2D and 3D views of the DEM; in addition, statistical analysis was performed to assess the validity of the visual techniques. Visual analysis showed that smoothing of the DEM decreases as the power increases up to a power of four; however, increasing the power beyond four does not produce noticeable changes in the 2D and 3D views of the DEM. The statistical analysis supported these results, with the standard deviation (SD) of the DEM increasing with the power. More specifically, changing the power from one to two produced 36% of the total increase in SD (the increase due to changing the power from one to ten), while powers of three and four gave 60% and 75%, respectively. This reflects the decrease in DEM smoothing as the IDW power increases. The study also showed that visual methods supported by statistical analysis have good potential for DEM quality assessment.
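The following sketch reproduces the kind of experiment described: IDW surfaces are generated from scattered elevation points at several power values, and the standard deviation of each surface is tracked as a crude smoothness measure. The survey points and grid settings are synthetic assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical survey points: x, y (m) and elevation z (m) over corrugated terrain
xy = rng.uniform(0, 100, (200, 2))
z = 50 + 5 * np.sin(xy[:, 0] / 15) + 3 * np.cos(xy[:, 1] / 10) + rng.normal(0, 0.3, 200)

def idw_grid(xy, z, power, cell=2.0, extent=100.0):
    """Inverse-distance-weighted interpolation of z onto a regular grid."""
    g = np.arange(cell / 2, extent, cell)
    gx, gy = np.meshgrid(g, g)
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    d = np.linalg.norm(pts[:, None, :] - xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-6)                      # avoid division by zero at data points
    w = d ** (-power)
    dem = (w @ z) / w.sum(axis=1)
    return dem.reshape(gx.shape)

for p in (1, 2, 4, 10):
    dem = idw_grid(xy, z, p)
    print(p, round(dem.std(), 3))                # SD rises as smoothing decreases
```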
NASA Astrophysics Data System (ADS)
Chang, Xiaoyen Y.; Sewell, Thomas D.; Raff, Lionel M.; Thompson, Donald L.
1992-11-01
The possibility of utilizing different types of power spectra obtained from classical trajectories as a diagnostic tool to identify the presence of nonstatistical dynamics is explored by using the unimolecular bond-fission reactions of 1,2-difluoroethane and the 2-chloroethyl radical as test cases. In previous studies, the reaction rates for these systems were calculated by using a variational transition-state theory and classical trajectory methods. A comparison of the results showed that 1,2-difluoroethane is a nonstatistical system, while the 2-chloroethyl radical behaves statistically. Power spectra for these two systems have been generated under various conditions. The characteristics of these spectra are as follows: (1) The spectra for the 2-chloroethyl radical are always broader and more coupled to other modes than is the case for 1,2-difluoroethane. This is true even at very low levels of excitation. (2) When an internal energy near or above the dissociation threshold is initially partitioned into a local C-H stretching mode, the power spectra for 1,2-difluoroethane broaden somewhat, but discrete and somewhat isolated bands are still clearly evident. In contrast, the analogous power spectra for the 2-chloroethyl radical exhibit a near complete absence of isolated bands. The general appearance of the spectrum suggests a very high level of mode-to-mode coupling, large intramolecular vibrational energy redistribution (IVR) rates, and global statistical behavior. (3) The appearance of the power spectrum for the 2-chloroethyl radical is unaltered regardless of whether the initial C-H excitation is in the CH2 or the CH2Cl group. This result also suggests statistical behavior. These results are interpreted to mean that power spectra may be used as a diagnostic tool to assess the statistical character of a system. The presence of a diffuse spectrum exhibiting a nearly complete loss of isolated structures indicates that the dissociation dynamics of the molecule will be well described by statistical theories. If, however, the power spectrum maintains its discrete, isolated character, as is the case for 1,2-difluoroethane, the opposite conclusion is suggested. Since power spectra are very easily computed, this diagnostic method may prove to be useful.
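For readers unfamiliar with how such spectra are obtained, the sketch below computes a windowed periodogram of a synthetic bond-coordinate time series with numpy's FFT. The trajectory is a toy two-mode signal with noise rather than output from a real dynamics calculation, and the frequencies are assumptions for illustration.

```python
import numpy as np

# hypothetical trajectory: a C-H stretch coordinate sampled every 0.5 fs for 2**14 steps
dt = 0.5e-15
n = 2 ** 14
t = np.arange(n) * dt
rng = np.random.default_rng(6)
q = (0.1 * np.sin(2 * np.pi * 9.0e13 * t)        # mode near 3000 cm^-1
     + 0.05 * np.sin(2 * np.pi * 4.5e13 * t)     # coupled lower-frequency mode
     + 0.01 * rng.normal(size=n))                # broadband noise

window = np.hanning(n)                           # taper to reduce spectral leakage
spec = np.abs(np.fft.rfft((q - q.mean()) * window)) ** 2
freq_cm = np.fft.rfftfreq(n, dt) / 2.998e10      # Hz -> wavenumbers (cm^-1)

peak = freq_cm[np.argmax(spec)]
print(round(peak))                               # dominant band near 3000 cm^-1
```

Whether the resulting bands stay discrete and isolated or merge into a diffuse, strongly coupled spectrum is the diagnostic feature discussed above.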
A powerful approach for association analysis incorporating imprinting effects
Xia, Fan; Zhou, Ji-Yuan; Fung, Wing Kam
2011-01-01
Motivation: For a diallelic marker locus, the transmission disequilibrium test (TDT) is a simple and powerful design for genetic studies. The TDT was originally proposed for use in families with both parents available (complete nuclear families) and has further been extended to 1-TDT for use in families with only one of the parents available (incomplete nuclear families). Currently, the increasing interest in the influence of parental imprinting on heritability indicates the importance of incorporating imprinting effects into the mapping of association variants. Results: In this article, we extend the TDT-type statistics to incorporate imprinting effects and develop a series of new test statistics in a general two-stage framework for association studies. Our test statistics enjoy the nature of family-based designs that need no assumption of Hardy–Weinberg equilibrium. Also, the proposed methods accommodate complete and incomplete nuclear families with one or more affected children. In the simulation study, we verify the validity of the proposed test statistics under various scenarios, and compare the powers of the proposed statistics with some existing test statistics. It is shown that our methods greatly improve the power for detecting association in the presence of imprinting effects. We further demonstrate the advantage of our methods by the application of the proposed test statistics to a rheumatoid arthritis dataset. Contact: wingfung@hku.hk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:21798962
A powerful approach for association analysis incorporating imprinting effects.
Xia, Fan; Zhou, Ji-Yuan; Fung, Wing Kam
2011-09-15
For a diallelic marker locus, the transmission disequilibrium test (TDT) is a simple and powerful design for genetic studies. The TDT was originally proposed for use in families with both parents available (complete nuclear families) and has further been extended to 1-TDT for use in families with only one of the parents available (incomplete nuclear families). Currently, the increasing interest in the influence of parental imprinting on heritability indicates the importance of incorporating imprinting effects into the mapping of association variants. In this article, we extend the TDT-type statistics to incorporate imprinting effects and develop a series of new test statistics in a general two-stage framework for association studies. Our test statistics enjoy the nature of family-based designs that need no assumption of Hardy-Weinberg equilibrium. Also, the proposed methods accommodate complete and incomplete nuclear families with one or more affected children. In the simulation study, we verify the validity of the proposed test statistics under various scenarios, and compare the powers of the proposed statistics with some existing test statistics. It is shown that our methods greatly improve the power for detecting association in the presence of imprinting effects. We further demonstrate the advantage of our methods by the application of the proposed test statistics to a rheumatoid arthritis dataset. wingfung@hku.hk Supplementary data are available at Bioinformatics online.
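For context, the classical TDT that these extensions build on reduces to a McNemar-type statistic on transmission counts from heterozygous parents; a minimal sketch with made-up counts follows. The imprinting-aware, two-stage statistics proposed in the article are more involved than this.

```python
from scipy.stats import chi2

def tdt(b, c):
    """Classical TDT: b = heterozygous parents transmitting the risk allele to an
    affected child, c = those transmitting the alternative allele."""
    stat = (b - c) ** 2 / (b + c)
    return stat, chi2.sf(stat, 1)

# hypothetical transmission counts from case-parent trios
print(tdt(68, 42))
```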
Dafni, Urania; Karlis, Dimitris; Pedeli, Xanthi; Bogaerts, Jan; Pentheroudakis, George; Tabernero, Josep; Zielinski, Christoph C; Piccart, Martine J; de Vries, Elisabeth G E; Latino, Nicola Jane; Douillard, Jean-Yves; Cherny, Nathan I
2017-01-01
The European Society for Medical Oncology (ESMO) has developed the ESMO Magnitude of Clinical Benefit Scale (ESMO-MCBS), a tool to assess the magnitude of clinical benefit from new cancer therapies. Grading is guided by a dual rule comparing the relative benefit (RB) and the absolute benefit (AB) achieved by the therapy to prespecified threshold values. The ESMO-MCBS v1.0 dual rule evaluates the RB of an experimental treatment based on the lower limit of the 95%CI (LL95%CI) for the hazard ratio (HR) along with an AB threshold. This dual rule addresses two goals: inclusiveness: not unfairly penalising experimental treatments from trials designed with adequate power targeting clinically meaningful relative benefit; and discernment: penalising trials designed to detect a small inconsequential benefit. Based on 50 000 simulations of plausible trial scenarios, the sensitivity and specificity of the LL95%CI rule and the ESMO-MCBS dual rule, the robustness of their characteristics for reasonable power and range of targeted and true HRs, are examined. The per cent acceptance of maximal preliminary grade is compared with other dual rules based on point estimate (PE) thresholds for RB. For particularly small or particularly large studies, the observed benefit needs to be relatively big for the ESMO-MCBS dual rule to be satisfied and the maximal grade awarded. Compared with approaches that evaluate RB using the PE thresholds, simulations demonstrate that the MCBS approach better exhibits the desired behaviour achieving the goals of both inclusiveness and discernment. RB assessment using the LL95%CI for HR rather than a PE threshold has two advantages: it diminishes the probability of excluding big benefit positive studies from achieving due credit and, when combined with the AB assessment, it increases the probability of downgrading a trial with a statistically significant but clinically insignificant observed benefit.
Dafni, Urania; Karlis, Dimitris; Pedeli, Xanthi; Bogaerts, Jan; Pentheroudakis, George; Tabernero, Josep; Zielinski, Christoph C; Piccart, Martine J; de Vries, Elisabeth G E; Latino, Nicola Jane; Douillard, Jean-Yves; Cherny, Nathan I
2017-01-01
Background The European Society for Medical Oncology (ESMO) has developed the ESMO Magnitude of Clinical Benefit Scale (ESMO-MCBS), a tool to assess the magnitude of clinical benefit from new cancer therapies. Grading is guided by a dual rule comparing the relative benefit (RB) and the absolute benefit (AB) achieved by the therapy to prespecified threshold values. The ESMO-MCBS v1.0 dual rule evaluates the RB of an experimental treatment based on the lower limit of the 95%CI (LL95%CI) for the hazard ratio (HR) along with an AB threshold. This dual rule addresses two goals: inclusiveness: not unfairly penalising experimental treatments from trials designed with adequate power targeting clinically meaningful relative benefit; and discernment: penalising trials designed to detect a small inconsequential benefit. Methods Based on 50 000 simulations of plausible trial scenarios, the sensitivity and specificity of the LL95%CI rule and the ESMO-MCBS dual rule, the robustness of their characteristics for reasonable power and range of targeted and true HRs, are examined. The per cent acceptance of maximal preliminary grade is compared with other dual rules based on point estimate (PE) thresholds for RB. Results For particularly small or particularly large studies, the observed benefit needs to be relatively big for the ESMO-MCBS dual rule to be satisfied and the maximal grade awarded. Compared with approaches that evaluate RB using the PE thresholds, simulations demonstrate that the MCBS approach better exhibits the desired behaviour achieving the goals of both inclusiveness and discernment. Conclusions RB assessment using the LL95%CI for HR rather than a PE threshold has two advantages: it diminishes the probability of excluding big benefit positive studies from achieving due credit and, when combined with the AB assessment, it increases the probability of downgrading a trial with a statistically significant but clinically insignificant observed benefit. PMID:29067214
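A stripped-down simulation of the relative-benefit part of the rule is sketched below: trial log-hazard-ratio estimates are drawn around a true value with the usual 2/sqrt(events) standard error, and the fraction of trials whose LL95%CI for the HR falls at or below a threshold is recorded. The 0.65 threshold, event counts, and true HRs are assumptions for illustration, and the absolute-benefit half of the dual rule is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(7)

def ll95_rule_rate(true_hr, n_events, n_sim=20_000, hr_threshold=0.65):
    """Fraction of simulated trials in which the lower limit of the 95% CI for
    the HR is at or below the relative-benefit threshold (LL95%CI rule only)."""
    se = 2.0 / np.sqrt(n_events)                 # usual approximation, 1:1 allocation
    log_hr_hat = rng.normal(np.log(true_hr), se, n_sim)
    ll95 = np.exp(log_hr_hat - 1.96 * se)
    return np.mean(ll95 <= hr_threshold)

# sensitivity-type check: a truly large benefit (HR = 0.65) in a small vs a large trial
print(ll95_rule_rate(0.65, n_events=100), ll95_rule_rate(0.65, n_events=600))
# specificity-type check: a small, arguably inconsequential benefit (HR = 0.90)
print(ll95_rule_rate(0.90, n_events=100), ll95_rule_rate(0.90, n_events=600))
```

The second line of output shows why the absolute-benefit condition is needed: wide confidence intervals in small trials let the LL95%CI rule pass even for modest true effects, which is the behaviour the dual rule is designed to penalise.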
Bodapati, Rohan K; Kizer, Jorge R; Kop, Willem J; Kamel, Hooman; Stein, Phyllis K
2017-07-21
Heart rate variability (HRV) characterizes cardiac autonomic functioning. The association of HRV with stroke is uncertain. We examined whether 24-hour HRV added predictive value to the Cardiovascular Health Study clinical stroke risk score (CHS-SCORE), previously developed at the baseline examination. N=884 stroke-free CHS participants (age 75.3±4.6), with 24-hour Holters adequate for HRV analysis at the 1994-1995 examination, had 68 strokes over ≤8-year follow-up (median 7.3 [interquartile range 7.1-7.6] years). The value of adding HRV to the CHS-SCORE was assessed with stepwise Cox regression analysis. The CHS-SCORE predicted incident stroke (HR=1.06 per unit increment, P=0.005). Two HRV parameters, decreased coefficient of variance of NN intervals (CV%, P=0.031) and decreased power law slope (SLOPE, P=0.033), also entered the model, but these did not significantly improve the c-statistic (P=0.47). In a secondary analysis, dichotomization of CV% (LOWCV% ≤12.8%) was found to maximally stratify higher-risk participants after adjustment for CHS-SCORE. Similarly, dichotomizing SLOPE (LOWSLOPE <-1.4) maximally stratified higher-risk participants. When these HRV categories were combined (eg, HIGHCV% with HIGHSLOPE), the c-statistic for the model with the CHS-SCORE and combined HRV categories was 0.68, significantly higher than 0.61 for the CHS-SCORE alone (P=0.02). In this sample of older adults, 2 HRV parameters, CV% and power law slope, emerged as significantly associated with incident stroke when added to a validated clinical risk score. After each parameter was dichotomized based on its optimal cut point in this sample, their composite significantly improved prediction of incident stroke during ≤8-year follow-up. These findings will require validation in separate, larger cohorts. © 2017 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley.
Succession Planning in State Health Agencies in the United States: A Brief Report.
Harper, Elizabeth; Leider, Jonathon P; Coronado, Fatima; Beck, Angela J
2017-11-02
Approximately 25% of the public health workforce plans to retire by 2020. Succession planning is a core capability of the governmental public health enterprise; however, limited data are available regarding these efforts in state health agencies (SHAs). We analyzed 2016 Workforce Gaps Survey data regarding succession planning in SHAs using the US Office of Personnel Management's (OPM's) succession planning model, including 6 domains and 27 activities. Descriptive statistics were calculated for all 41 responding SHAs. On average, SHAs self-reported adequately addressing 11 of 27 succession planning activities, with 93% of SHAs adequately addressing 1 or more activities and 61% adequately addressing 1 or more activities in each domain. The majority of OPM-recommended succession planning activities are not being addressed, and limited succession planning occurs across SHAs. Greater activity in the OPM-identified succession planning domains may help SHAs contend with significant turnover and better preserve institutional knowledge.
NASA Technical Reports Server (NTRS)
Mitcham, Grady L.
1949-01-01
A preliminary analysis of the flying qualities of the Consolidated Vultee MX-813 delta-wing airplane configuration has been made based on the results obtained from the first two 1/8-scale models flown at the NACA Pilotless Aircraft Research Station, Wallops Island, VA. The Mach number range covered in the tests was from 0.9 to 1.2. The analysis indicates adequate elevator control for trim in level flight over the speed range investigated. Through the transonic range there is a mild trim change with a slight tucking-under tendency. The elevator control effectiveness in the supersonic range is reduced to about one-half the subsonic value, although sufficient control for maneuvering is available, as indicated by the fact that 10 deg of elevator deflection produced 5g acceleration at a Mach number of 1.2 at 40,000 feet. The elevator control forces are high and indicate the power required of the boost system. The damping of the short-period oscillation is adequate at sea level but is reduced at 40,000 feet. The directional stability appears adequate for the speed range and angles of attack covered.
Determination of the cumulus size distribution from LANDSAT pictures
NASA Technical Reports Server (NTRS)
Karg, E.; Mueller, H.; Quenzel, H.
1983-01-01
Varying insolation causes undesirable thermal stress to the receiver of a solar power plant. The rapid change of insolation depends on the size distribution of the clouds; in order to characterize these changes, it is useful to determine typical cumulus size distributions. For this purpose, LANDSAT images are adequate. Several examples of cumulus size distributions are presented and their effects on the operation of a solar power plant are discussed.
Electric vehicle power train instrumentation: Some constraints and considerations
NASA Technical Reports Server (NTRS)
Triner, J. E.; Hansen, I. G.
1977-01-01
The application of pulse modulation control (choppers) to dc motors creates unique instrumentation problems. In particular, the high harmonic components contained in the current waveforms require frequency response accommodations not normally considered in dc instrumentation. In addition to current sensing, accurate power measurement requires not only adequate frequency response but must also address phase errors caused by the finite bandwidths and component characteristics involved. The implications of these problems are assessed.
Building 865 Hypersonic Wind Tunnel Power System Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Larry X.
2015-02-01
This report documents the characterization and analysis of a high current power supply for the building 865 Hypersonic Wind Tunnel at Sandia National Laboratories. The system described in this report became operational in 2013, replacing the original 1968 system which employed an induction voltage regulator. This analysis and testing was completed to help the parent organization understand why an updated and redesigned power system was not delivering adequate power to resistive heater elements in the HWT. This analysis led to an improved understanding of the design and operation of the revised 2013 power supply system and identifies several reasons the revised system failed to achieve the performance of the original power supply installation. Design modifications to improve the performance of this system are discussed.
Power allocation and range performance considerations for a dual-frequency EBPSK/MPPSK system
NASA Astrophysics Data System (ADS)
Yao, Yu; Wu, Lenan; Zhao, Junhui
2017-12-01
Extended binary phase shift keying/M-ary position phase shift keying (EBPSK/MPPSK)-MODEM provides radar and communication functions on a single hardware platform with a single waveform. However, its range estimation accuracy is worse than that of continuous-wave (CW) radar because of the imbalance of power between the two carrier frequencies. In this article, a power allocation method for dual-frequency EBPSK/MPPSK modulated systems is presented. The power of the two signal transmitters is allocated so that the power in the two carrier frequencies is equal, and the power allocation ratios for two types of modulation systems are obtained. Moreover, considerations regarding the operating range of the dual-frequency system are analysed. In addition to theoretical considerations, computer simulations are provided to illustrate the performance.
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
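The contrast between the naive and the precision-weighted summary can be sketched as follows, assuming (for simplicity) no between-subject heterogeneity in the true effect, so that inverse-variance weighting of subject means is appropriate; the approaches reviewed in the paper handle the general case more carefully than this toy example.

```python
import numpy as np

rng = np.random.default_rng(8)

# hypothetical nested data: per-subject trials with unequal trial counts, so
# within-subject variances of the subject means differ across subjects
n_subjects = 12
subject_means, subject_vars = [], []
for _ in range(n_subjects):
    n_trials = rng.integers(20, 200)
    trials = rng.normal(0.3, 2.0, n_trials)     # true group-level effect = 0.3
    subject_means.append(trials.mean())
    subject_vars.append(trials.var(ddof=1) / n_trials)

m = np.array(subject_means)
v = np.array(subject_vars)

# naive approach: one-sample t statistic on the subject means, ignoring precision
t_naive = m.mean() / (m.std(ddof=1) / np.sqrt(n_subjects))

# sufficient-summary-statistic style approach: precision-weight each subject's mean
w = 1.0 / v
z_weighted = (w @ m) / w.sum() / np.sqrt(1.0 / w.sum())

print(round(t_naive, 2), round(z_weighted, 2))
```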
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
10 CFR 1304.110 - Disclosure of records to third parties.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Board with adequate advance written assurance that the record will be used solely as a statistical... under the control of the United States for a civil or criminal law enforcement activity, if the activity... record is disclosed under such compulsory legal process, the Board shall make reasonable efforts to...
The Evaluation and Selection of Adequate Causal Models: A Compensatory Education Example.
ERIC Educational Resources Information Center
Tanaka, Jeffrey S.
1982-01-01
Implications of model evaluation (using traditional chi square goodness of fit statistics, incremental fit indices for covariance structure models, and latent variable coefficients of determination) on substantive conclusions are illustrated with an example examining the effects of participation in a compensatory education program on posttreatment…
Quasi-Experimental Analysis: A Mixture of Methods and Judgment.
ERIC Educational Resources Information Center
Cordray, David S.
1986-01-01
The role of human judgment in the development and synthesis of evidence has not been adequately developed or acknowledged within quasi-experimental analysis. Corrective solutions need to confront the fact that causal analysis within complex environments will require a more active assessment that entails reasoning and statistical modeling.…
Sustaining School Achievement in California's Elementary Schools after State Monitoring
ERIC Educational Resources Information Center
McCabe, Molly
2010-01-01
This study examined the Academic Performance Index (API) and Adequate Yearly Progress (AYP) achievement trends between 2004 and 2006 of 58 California public elementary schools after exiting state monitoring and investigated practices for sustaining consistent achievement growth. Statistical methods were used to analyze statewide achievement trends…
3D Self-Localisation From Angle of Arrival Measurements
2009-04-01
systems can provide precise position information. However, there are situations where GPS is not adequate such as indoor, underwater, extraterrestrial or...
Code of Federal Regulations, 2010 CFR
2010-01-01
... borrower developed an adequate supporting database and analyzed a reasonable range of relevant assumptions and alternative futures; (d) The borrower adopted methods and procedures in general use by the...
NASA Astrophysics Data System (ADS)
Tang, Jiayu; Kayo, Issha; Takada, Masahiro
2011-09-01
We develop a maximum likelihood based method of reconstructing the band powers of the density and velocity power spectra at each wavenumber bin from the measured clustering features of galaxies in redshift space, including marginalization over uncertainties inherent in the small-scale, non-linear redshift distortion, the Fingers-of-God (FoG) effect. The reconstruction can be done assuming that the density and velocity power spectra depend on the redshift-space power spectrum having different angular modulations of μ with μ2n (n= 0, 1, 2) and that the model FoG effect is given as a multiplicative function in the redshift-space spectrum. By using N-body simulations and the halo catalogues, we test our method by comparing the reconstructed power spectra with the spectra directly measured from the simulations. For the spectrum of μ0 or equivalently the density power spectrum Pδδ(k), our method recovers the amplitudes to an accuracy of a few per cent up to k≃ 0.3 h Mpc-1 for both dark matter and haloes. For the power spectrum of μ2, which is equivalent to the density-velocity power spectrum Pδθ(k) in the linear regime, our method can recover, within the statistical errors, the input power spectrum for dark matter up to k≃ 0.2 h Mpc-1 and at both redshifts z= 0 and 1, if the adequate FoG model being marginalized over is employed. However, for the halo spectrum that is least affected by the FoG effect, the reconstructed spectrum shows greater amplitudes than the spectrum Pδθ(k) inferred from the simulations over a range of wavenumbers 0.05 ≤k≤ 0.3 h Mpc-1. We argue that the disagreement may be ascribed to a non-linearity effect that arises from the cross-bispectra of density and velocity perturbations. Using the perturbation theory and assuming Einstein gravity as in simulations, we derive the non-linear correction term to the redshift-space spectrum, and find that the leading-order correction term is proportional to μ2 and increases the μ2-power spectrum amplitudes more significantly at larger k, at lower redshifts and for more massive haloes. We find that adding the non-linearity correction term to the simulation Pδθ(k) can fairly well reproduce the reconstructed Pδθ(k) for haloes up to k≃ 0.2 h Mpc-1.
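The reconstruction idea, stripped of the likelihood machinery, amounts to fitting μ^0, μ^2 and μ^4 band powers at each wavenumber with a multiplicative FoG factor. The sketch below does this by linear least squares at a single k bin with a Lorentzian FoG form and invented amplitudes; the paper's method fits all bins jointly, uses a maximum-likelihood estimator, and marginalizes over the FoG parameter rather than fixing it.

```python
import numpy as np

rng = np.random.default_rng(9)

# one wavenumber bin: "measured" P(k, mu) in mu bins with a Lorentzian FoG factor
mu = np.linspace(0.025, 0.975, 20)
sigma_v, k = 4.0, 0.15                         # Mpc/h and h/Mpc, illustrative values
P0_true, P2_true, P4_true = 8000.0, 6000.0, 1500.0
fog = 1.0 / (1.0 + (k * mu * sigma_v) ** 2)
P_obs = fog * (P0_true + P2_true * mu**2 + P4_true * mu**4)
P_obs = P_obs * (1 + 0.03 * rng.normal(size=mu.size))   # measurement scatter

# linear least squares for the mu^0, mu^2, mu^4 band powers at fixed sigma_v
A = fog[:, None] * np.column_stack([np.ones_like(mu), mu**2, mu**4])
coeffs, *_ = np.linalg.lstsq(A, P_obs, rcond=None)
print(np.round(coeffs))                        # recovered band powers at this k
```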
Determination of Type I Error Rates and Power of Answer Copying Indices under Various Conditions
ERIC Educational Resources Information Center
Yormaz, Seha; Sünbül, Önder
2017-01-01
This study aims to determine the Type I error rates and power of the S₁ and S₂ indices and the kappa statistic at detecting copying on multiple-choice tests under various conditions. It also aims to determine how the formation of copying groups used to calculate the kappa statistic affects Type I error rates and power. In this study,…
ERIC Educational Resources Information Center
Texeira, Antonio; Rosa, Alvaro; Calapez, Teresa
2009-01-01
This article presents statistical power analysis (SPA) based on the normal distribution using Excel, adopting textbook and SPA approaches. The objective is to present the latter in a comparative way within a framework that is familiar to textbook level readers, as a first step to understand SPA with other distributions. The analysis focuses on the…
On the analysis of very small samples of Gaussian repeated measurements: an alternative approach.
Westgate, Philip M; Burchett, Woodrow W
2017-03-15
The analysis of very small samples of Gaussian repeated measurements can be challenging. First, due to a very small number of independent subjects contributing outcomes over time, statistical power can be quite small. Second, nuisance covariance parameters must be appropriately accounted for in the analysis in order to maintain the nominal test size. However, available statistical strategies that ensure valid statistical inference may lack power, whereas more powerful methods may have the potential for inflated test sizes. Therefore, we explore an alternative approach to the analysis of very small samples of Gaussian repeated measurements, with the goal of maintaining valid inference while also improving statistical power relative to other valid methods. This approach uses generalized estimating equations with a bias-corrected empirical covariance matrix that accounts for all small-sample aspects of nuisance correlation parameter estimation in order to maintain valid inference. Furthermore, the approach utilizes correlation selection strategies with the goal of choosing the working structure that will result in the greatest power. In our study, we show that when accurate modeling of the nuisance correlation structure impacts the efficiency of regression parameter estimation, this method can improve power relative to existing methods that yield valid inference. Copyright © 2017 John Wiley & Sons, Ltd.
Précis of statistical significance: rationale, validity, and utility.
Chow, S L
1998-04-01
The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.
Efficacy of mindfulness meditation for smoking cessation: A systematic review and meta-analysis.
Maglione, Margaret A; Maher, Alicia Ruelaz; Ewing, Brett; Colaiaco, Benjamin; Newberry, Sydne; Kandrack, Ryan; Shanman, Roberta M; Sorbero, Melony E; Hempel, Susanne
2017-06-01
Smokers increasingly seek alternative interventions to assist in cessation or reduction efforts. Mindfulness meditation, which facilitates detached observation and paying attention to the present moment with openness, curiosity, and acceptance, has recently been studied as a smoking cessation intervention. This review synthesizes randomized controlled trials (RCTs) of mindfulness meditation (MM) interventions for smoking cessation. Five electronic databases were searched from inception to October 2016 to identify English-language RCTs evaluating the efficacy and safety of MM interventions for smoking cessation, reduction, or a decrease in nicotine cravings. Two independent reviewers screened literature using predetermined eligibility criteria, abstracted study-level information, and assessed the quality of included studies. Meta-analyses used the Hartung-Knapp-Sidik-Jonkman method for random-effects models. The quality of evidence was assessed using the GRADE approach. Ten RCTs of MM interventions for tobacco use met inclusion criteria. Intervention duration, intensity, and comparison conditions varied considerably. Studies used diverse comparators such as the American Lung Association's Freedom from Smoking (FFS) program, quitline counseling, interactive learning, or treatment as usual (TAU). Only one RCT was rated as good quality and reported power calculations indicating sufficient statistical power. Publication bias was detected. Overall, mindfulness meditation did not have significant effects on abstinence or cigarettes per day, relative to comparator groups. The small number of studies and heterogeneity in interventions, comparators, and outcomes precluded detecting systematic differences between adjunctive and monotherapy interventions. No serious adverse events were reported. MM did not differ significantly from comparator interventions in their effects on tobacco use. Low-quality evidence, variability in study design among the small number of existing studies, and publication bias suggest that additional, high-quality adequately powered RCTs should be conducted. Copyright © 2017 Elsevier Ltd. All rights reserved.
Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I
2013-05-01
When phenotypic, but no genotypic data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost the statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modelling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution serves as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets with continuous phenotype data (height) and with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion in association analysis of imputed sibling genotypes does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes. That is, if the power benefit is small, meaning that the change in the distribution of the test statistic under the alternative is relatively small, the probability of obtaining a smaller test statistic is greater. As genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation should be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.
NASA Astrophysics Data System (ADS)
Orhan, K.; Mayerle, R.
2016-12-01
A methodology comprising estimates of power yield, evaluation of the effects of power extraction on flow conditions, and near-field investigations of wake characteristics, recovery and interactions is described and applied to several straits in Indonesia. Site selection is done with high-resolution, three-dimensional flow models providing sufficient spatiotemporal coverage. Much attention has been given to the meteorological forcing and the conditions at the open-sea boundaries to adequately capture the density gradients and flow fields. Model verification using tidal records shows excellent agreement. Sites with adequate depth for energy conversion using horizontal-axis tidal turbines, average kinetic power density greater than 0.5 kW/m2, and surface area larger than 0.5 km2 are defined as energy hotspots. The spatial variation of the average extractable electric power is determined, and the annual tidal energy resource is estimated for the straits in question. The results showed that the potential for tidal power generation in Indonesia is likely to exceed previous predictions, reaching around 4,800 MW. To assess the impact of the devices, flexible-mesh models with higher resolutions have been developed. Effects on flow conditions and near-field turbine wakes are resolved in greater detail with triangular horizontal grids. The energy is assumed to be removed uniformly by sub-grid-scale arrays of turbines, and calculations are based on velocities at the hub heights of the devices. An additional drag force resulting in dissipation of 10% to 60% of the pre-existing kinetic power within a flow cross-section is introduced to capture the impacts. It was found that the effect of power extraction on water levels and flow speeds in adjacent areas is not significant. Results show the effectiveness of the method in capturing wake characteristics and recovery reasonably well at low computational cost.
Traceable measurements of the electrical parameters of solid-state lighting products
NASA Astrophysics Data System (ADS)
Zhao, D.; Rietveld, G.; Braun, J.-P.; Overney, F.; Lippert, T.; Christensen, A.
2016-12-01
In order to perform traceable measurements of the electrical parameters of solid-state lighting (SSL) products, it is necessary to technically adequately define the measurement procedures and to identify the relevant uncertainty sources. The present published written standard for SSL products specifies test conditions, but it lacks an explanation of how adequate these test conditions are. More specifically, both an identification of uncertainty sources and a quantitative uncertainty analysis are absent. This paper fills the related gap in the present written standard. New uncertainty sources with respect to conventional lighting sources are determined and their effects are quantified. It shows that for power measurements, the main uncertainty sources are temperature deviation, power supply voltage distortion, and instability of the SSL product. For current RMS measurements, the influence of bandwidth, shunt resistor, power supply source impedance and ac frequency flatness are significant as well. The measurement uncertainty depends not only on the test equipment but is also a function of the characteristics of the device under test (DUT), for example, current harmonics spectrum and input impedance. Therefore, an online calculation tool is provided to help non-electrical experts. Following our procedures, unrealistic uncertainty estimations, unnecessary procedures and expensive equipment can be prevented.
New heterogeneous test statistics for the unbalanced fixed-effect nested design.
Guo, Jiin-Huarng; Billard, L; Luh, Wei-Ming
2011-05-01
When the underlying variances are unknown and/or unequal, using the conventional F test is problematic in the two-factor hierarchical data structure. Prompted by approximate test statistics (the Welch and Alexander-Govern methods), the authors develop four new heterogeneous test statistics to test factor A and factor B nested within A for the unbalanced fixed-effect two-stage nested design under variance heterogeneity. The actual significance levels and statistical power of the test statistics were compared in a simulation study. The results show that the proposed procedures maintain better Type I error rate control and have greater statistical power than those obtained by the conventional F test in various conditions. Therefore, the proposed test statistics are recommended in terms of robustness and easy implementation. ©2010 The British Psychological Society.
Correlation techniques and measurements of wave-height statistics
NASA Technical Reports Server (NTRS)
Guthart, H.; Taylor, W. C.; Graf, K. A.; Douglas, D. G.
1972-01-01
Statistical measurements of wave height fluctuations have been made in a wind wave tank. The power spectral density function of temporal wave height fluctuations evidenced second-harmonic components and an f to the minus 5th power law decay beyond the second harmonic. The observations of second harmonic effects agreed very well with a theoretical prediction. From the wave statistics, surface drift currents were inferred and compared to experimental measurements with satisfactory agreement. Measurements were made of the two dimensional correlation coefficient at 15 deg increments in angle with respect to the wind vector. An estimate of the two-dimensional spatial power spectral density function was also made.
Identifying predictors of time-inhomogeneous viral evolutionary processes.
Bielejec, Filip; Baele, Guy; Rodrigo, Allen G; Suchard, Marc A; Lemey, Philippe
2016-07-01
Various factors determine the rate at which mutations are generated and fixed in viral genomes. Viral evolutionary rates may vary over the course of a single persistent infection and can reflect changes in replication rates and selective dynamics. Dedicated statistical inference approaches are required to understand how the complex interplay of these processes shapes the genetic diversity and divergence in viral populations. Although evolutionary models accommodating a high degree of complexity can now be formalized, adequately informing these models by potentially sparse data, and assessing the association of the resulting estimates with external predictors, remains a major challenge. In this article, we present a novel Bayesian evolutionary inference method, which integrates multiple potential predictors and tests their association with variation in the absolute rates of synonymous and non-synonymous substitutions along the evolutionary history. We consider clinical and virological measures as predictors, but also changes in population size trajectories that are simultaneously inferred using coalescent modelling. We demonstrate the potential of our method in an application to within-host HIV-1 sequence data sampled throughout the infection of multiple patients. While analyses of individual patient populations lack statistical power, we detect significant evidence for an abrupt drop in non-synonymous rates in late stage infection and a more gradual increase in synonymous rates over the course of infection in a joint analysis across all patients. The former is predicted by the immune relaxation hypothesis while the latter may be in line with increasing replicative fitness during the asymptomatic stage.
Comparative chronic toxicity of three neonicotinoids on New Zealand packaged honey bees.
Wood, Sarah C; Kozii, Ivanna V; Koziy, Roman V; Epp, Tasha; Simko, Elemir
2018-01-01
Thiamethoxam, clothianidin, and imidacloprid are the most commonly used neonicotinoid insecticides on the Canadian prairies. There is widespread contamination of nectar and pollen with neonicotinoids, at concentrations which are sublethal for honey bees (Apis mellifera Linnaeus). We compared the effects of chronic, sublethal exposure to the three most commonly used neonicotinoids on honey bee colonies established from New Zealand packaged bees using colony weight gain, brood area, and population size as measures of colony performance. From May 7 to July 29, 2016 (12 weeks), sixty-eight colonies received weekly feedings of sugar syrup and pollen patties containing 0 nM, 20 nM (median environmental dose), or 80 nM (high environmental dose) of one of three neonicotinoids (thiamethoxam, clothianidin, and imidacloprid). Colonies were weighed at three-week intervals. Brood area and population size were determined from digital images of colonies at week 12. Statistical analyses were performed by ANOVA and mixed models. There was a significant negative effect (-30%, p<0.01) on colony weight gain (honey production) after 9 and 12 weeks of exposure to 80 nM of thiamethoxam, clothianidin, or imidacloprid and on bee cluster size (-21%, p<0.05) after 12 weeks. Analysis of brood area and number of adult bees lacked adequate (>80%) statistical power to detect an effect. Chronic exposure of honey bees to high environmental doses of neonicotinoids has negative effects on honey production. Brood area appears to be less sensitive to detect sublethal effects of neonicotinoids.
Van Nuffel, A; Tuyttens, F A M; Van Dongen, S; Talloen, W; Van Poucke, E; Sonck, B; Lens, L
2007-12-01
Nonidentical development of bilateral traits due to disturbing genetic or developmental factors is called fluctuating asymmetry (FA) if such deviations are continuously distributed. Fluctuating asymmetry is believed to be a reliable indicator of the fitness and welfare of an animal. Despite an increasing body of research, the link between FA and animal performance or welfare is reported to be inconsistent, possibly, among other reasons, due to inaccurate measuring protocols or incorrect statistical analyses. This paper reviews problems of interpreting FA results in poultry and provides guidelines for the measurement and analysis of FA, applied to broilers. A wide range of morphological traits were measured by 7 different techniques (ranging from measurements on living broilers or intact carcasses to X-rays, bones, and digital images) and evaluated for their applicability to estimate FA. Following 4 selection criteria (significant FA, absence of directional asymmetry or antisymmetry, absence of between-trait correlation in signed FA values, and high signal-to-noise ratio), from 3 to 14 measurements per method were found suitable for estimating the degree of FA. The accuracy of FA estimates was positively related to the complexity and time investment of the measuring method. In addition, our study clearly shows the importance of securing adequate statistical power when designing FA studies. Repeatability analyses of FA estimates indicated the need for larger sample sizes, more repeated measurements, or both, than are commonly used in FA studies.
NASA Astrophysics Data System (ADS)
Sadeghi, Hamed; Lavoie, Philippe; Pollard, Andrew
2018-03-01
The effect of finite hot-wire spatial resolution on turbulence statistics and velocity spectra in a round turbulent free jet is investigated. To quantify spatial resolution effects, measurements were taken using a nano-scale thermal anemometry probe (NSTAP) and compared to results from conventional hot-wires with sensing lengths of l=0.5 and 1 mm. The NSTAP has a sensing length significantly smaller than the Kolmogorov length scale η for the present experimental conditions, whereas the sensing lengths for the conventional probes are larger than η. The spatial resolution is found to have a significant impact on the dissipation both on and off the jet centreline with the NSTAP results exceeding those obtained from the conventional probes. The resolution effects along the jet centreline are adequately predicted using a Wyngaard-type spectral technique (Wyngaard in J Sci Instr 1(2):1105-1108,1968), but additional attenuation on the measured turbulence quantities are observed off the centreline. The magnitude of this attenuation is a function of both the ratio of wire length to Kolmogorov length scale and the magnitude of the shear. The effect of spatial resolution is noted to have an impact on the power-law decay parameters for the turbulent kinetic energy that is computed. The effect of spatial filtering on the streamwise dissipation energy spectra is also considered. Empirical functions are proposed to estimate the effect of finite resolution, which take into account the mean shear.
Comparison of existing digital image analysis systems for the analysis of Thematic Mapper data
NASA Technical Reports Server (NTRS)
Likens, W. C.; Wrigley, R. C.
1984-01-01
Most existing image analysis systems were designed with the Landsat Multi-Spectral Scanner in mind, leaving open the question of whether or not these systems could adequately process Thematic Mapper data. In this report, both hardware and software systems have been evaluated for compatibility with TM data. Lack of spectral analysis capability was not found to be a problem, though techniques for spatial filtering and texture varied. Computer processing speed and data storage of currently existing mini-computer based systems may be less than adequate. Upgrading to more powerful hardware may be required for many TM applications.
Power technologies and the space future
NASA Technical Reports Server (NTRS)
Faymon, Karl A.; Fordyce, J. Stuart; Brandhorst, Henry W., Jr.
1991-01-01
Advancements in space power and energy technologies are critical to serve space development needs and help solve problems on Earth. The availability of low cost power and energy in space will be the hallmark of this advance. Space power will undergo a dramatic change for future space missions. The power systems which have served the U.S. space program so well in the past will not suffice for the missions of the future. This is especially true if the space commercialization is to become a reality. New technologies, and new and different space power architectures and topologies will replace the lower power, low-voltage systems of the past. Efficiencies will be markedly improved, specific powers will be greatly increased, and system lifetimes will be markedly extended. Space power technology is discussed - its past, its current status, and predictions about where it will go in the future. A key problem for power and energy is its cost of affordability. Power must be affordable or it will not serve future needs adequately. This aspect is also specifically addressed.
Gaskin, Cadeyrn J; Happell, Brenda
2014-05-01
To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
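To make the small/medium/large benchmarks in this review concrete, the sketch below computes the power of a two-sided, two-sample t-test for Cohen's d of 0.2, 0.5 and 0.8 directly from the noncentral t distribution; the per-group sample size of 64 is an illustrative assumption, not a figure taken from the review.

```python
import numpy as np
from scipy import stats

def ttest_ind_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test for a standardized effect d."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2.0)          # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|T| > t_crit) when T follows the noncentral t distribution
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

for d in (0.2, 0.5, 0.8):                          # Cohen's small/medium/large
    print(f"d = {d}: power ≈ {ttest_ind_power(d, n_per_group=64):.2f}")
```

With 64 participants per group, power is roughly .20, .80 and .99 for the three benchmarks, echoing the review's pattern of adequate power only for medium and large effects.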
Ecological statistics of Gestalt laws for the perceptual organization of contours.
Elder, James H; Goldberg, Richard M
2002-01-01
Although numerous studies have measured the strength of visual grouping cues for controlled psychophysical stimuli, little is known about the statistical utility of these various cues for natural images. In this study, we conducted experiments in which human participants trace perceived contours in natural images. These contours are automatically mapped to sequences of discrete tangent elements detected in the image. By examining relational properties between pairs of successive tangents on these traced curves, and between randomly selected pairs of tangents, we are able to estimate the likelihood distributions required to construct an optimal Bayesian model for contour grouping. We employed this novel methodology to investigate the inferential power of three classical Gestalt cues for contour grouping: proximity, good continuation, and luminance similarity. The study yielded a number of important results: (1) these cues, when appropriately defined, are approximately uncorrelated, suggesting a simple factorial model for statistical inference; (2) moderate image-to-image variation of the statistics indicates the utility of general probabilistic models for perceptual organization; (3) these cues differ greatly in their inferential power, proximity being by far the most powerful; and (4) statistical modeling of the proximity cue indicates a scale-invariant power law in close agreement with prior psychophysics.
Vajawat, Mayuri; Deepika, P. C.; Kumar, Vijay; Rajeshwari, P.
2015-01-01
Aim: To compare the efficacy of powered toothbrushes in improving gingival health and reducing salivary red complex counts, as compared to manual toothbrushes, among autistic individuals. Materials and Methods: Forty autistic individuals were selected. The test group received powered toothbrushes, and the control group received manual toothbrushes. Plaque index and gingival index were recorded. Unstimulated saliva was collected for analysis of red complex organisms using polymerase chain reaction. Results: A statistically significant reduction in the plaque scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.002 for controls). This reduction was statistically more significant in the test group (P = 0.024). A statistically significant reduction in the gingival scores was seen over a period of 12 weeks in both groups (P < 0.001 for tests and P = 0.001 for controls). This reduction was statistically more significant in the test group (P = 0.042). No statistically significant reduction in the detection rate of red complex organisms was seen at 4 weeks in either group. Conclusion: Powered toothbrushes result in a significant overall improvement in gingival health when constant reinforcement of oral hygiene instructions is given. PMID:26681855
NASA Astrophysics Data System (ADS)
Schroeder, C. B.; Fawley, W. M.; Esarey, E.
2003-07-01
We investigate the statistical properties (e.g., shot-to-shot power fluctuations) of the radiation from a high-gain free-electron laser (FEL) operating in the nonlinear regime. We consider the case of an FEL amplifier reaching saturation whose shot-to-shot fluctuations in input radiation power follow a gamma distribution. We analyze the corresponding output power fluctuations at and beyond saturation, including beam energy spread effects, and find that there are well-characterized values of undulator length for which the fluctuations reach a minimum.
Reliability and economy -- Hydro electricity for Iran
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jahromi-Shirazi, M.J.; Zarbakhsh, M.H.
1998-12-31
Reliability is the probability that a device or system will perform its function adequately, for the period of time intended, under the operating conditions intended. Reliability and economy are two important factors in operating any system, especially in power generation. Due to the high rate of population growth in Iran, experts have estimated that the demand for electricity will be about 63,000 MW in the next 25 years, whereas the installed capacity is now about 26,000 MW. Therefore, the energy policy decision made in Iran is to pursue power generation by hydroelectric plants because of reliability, the availability of water resources and the economics of hydroelectric power.
STS/DBS power subsystem end-to-end stability margin
NASA Astrophysics Data System (ADS)
Devaux, R. N.; Vattimo, R. J.; Peck, S. R.; Baker, W. E.
Attention is given to a full-up end-to-end subsystem stability test which was performed with a flight solar array providing power to a fully operational spacecraft. The solar array simulator is described, and a comparison is made between test results obtained with the simulator and those obtained with the actual array. It is concluded that stability testing with a fully integrated spacecraft is necessary to ensure that all elements have been adequately modeled.
Laser-Powered Thrusters for High Efficiency Variable Specific Impulse Missions (Preprint)
2007-04-10
technology. However, a laser-ablation propulsion engine using a set of diode-pumped glass fiber amplifiers with a total of 350-W optical power can… in a single device using low-mass diode-pumped glass fiber laser amplifiers to operate in either long- or short-pulse regimes at will. Adequate fiber… pulsewidth glass fiber oscillator-amplifiers, rather than the diodes used in the µLPT, to achieve…
Phelps, Geoffrey; Kelcey, Benjamin; Jones, Nathan; Liu, Shuangshuang
2016-10-03
Mathematics professional development is widely offered, typically with the goal of improving teachers' content knowledge, the quality of teaching, and ultimately students' achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials. © The Author(s) 2016.
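As a rough companion to these benchmarks, the snippet below treats the pre-post change as a simple paired (one-sample) t-test on standardized gains of 0.16 and 0.26 SDs and asks how many teachers are needed for 80% power. It ignores the cross-classified, multilevel structure of the actual data, so it is only an illustrative lower bound; the `TTestPower` helper is from statsmodels.

```python
from statsmodels.stats.power import TTestPower

paired = TTestPower()                      # one-sample / paired t-test power
for delta in (0.16, 0.26):                 # standardized pre-post change (SDs)
    n = paired.solve_power(effect_size=delta, alpha=0.05, power=0.80,
                           alternative='two-sided')
    print(f"change of {delta:.2f} SD -> roughly {n:.0f} teachers")
```

Consistent with the authors' point, several hundred teachers are needed to detect change at the lower end of the observed distribution.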
Why Does Trigonometric Substitution Work?
ERIC Educational Resources Information Center
Cunningham, Daniel W.
2018-01-01
Modern calculus textbooks carefully illustrate how to perform integration by trigonometric substitution. Unfortunately, most of these books do not adequately justify this powerful technique of integration. In this article, we present an accessible proof that establishes the validity of integration by trigonometric substitution. The proof offers…
Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.
Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg
2009-11-01
G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
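For readers without access to G*Power, power for a simple test of a bivariate correlation can be approximated by hand with the Fisher z transformation. The sketch below is a normal-approximation illustration only (G*Power itself uses exact distributions); the values ρ = 0.3 and n = 84 are assumed for the example.

```python
import numpy as np
from scipy import stats

def corr_test_power(rho, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 via Fisher's z."""
    delta = np.arctanh(rho) * np.sqrt(n - 3)      # standardized effect
    z_crit = stats.norm.ppf(1 - alpha / 2)
    return stats.norm.sf(z_crit - delta) + stats.norm.cdf(-z_crit - delta)

print(f"power ≈ {corr_test_power(rho=0.3, n=84):.2f}")   # roughly 0.80
```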
ERIC Educational Resources Information Center
Amundson, Vickie E.; Bernstein, Ira H.
1973-01-01
Authors note that Fehrer and Biederman's two statistical tests were not of equal power and that their conclusion could be a statistical artifact of both the lesser power of the verbal report comparison and the insensitivity of their particular verbal report indicator. (Editor)
Alignment-free sequence comparison (II): theoretical power of comparison statistics.
Wan, Lin; Reinert, Gesine; Sun, Fengzhu; Waterman, Michael S
2010-11-01
Rapid methods for alignment-free sequence comparison make large-scale comparisons between sequences increasingly feasible. Here we study the power of the statistic D2, which counts the number of matching k-tuples between two sequences, as well as D2*, which uses centralized counts, and D2S, which is a self-standardized version, both from a theoretical viewpoint and numerically, providing an easy to use program. The power is assessed under two alternative hidden Markov models; the first one assumes that the two sequences share a common motif, whereas the second model is a pattern transfer model; the null model is that the two sequences are composed of independent and identically distributed letters and they are independent. Under the first alternative model, the means of the tuple counts in the individual sequences change, whereas under the second alternative model, the marginal means are the same as under the null model. Using the limit distributions of the count statistics under the null and the alternative models, we find that generally, asymptotically D2S has the largest power, followed by D2*, whereas the power of D2 can even be zero in some cases. In contrast, even for sequences of length 140,000 bp, in simulations D2* generally has the largest power. Under the first alternative model of a shared motif, the power of D2* approaches 100% when sufficiently many motifs are shared, and we recommend the use of D2* for such practical applications. Under the second alternative model of pattern transfer, the power for all three count statistics does not increase with sequence length when the sequence is sufficiently long, and hence none of the three statistics under consideration can be recommended in such a situation. We illustrate the approach on 323 transcription factor binding motifs with length at most 10 from JASPAR CORE (October 12, 2009 version), verifying that D2* is generally more powerful than D2. The program to calculate the power of D2, D2* and D2S can be downloaded from http://meta.cmb.usc.edu/d2. Supplementary Material is available at www.liebertonline.com/cmb.
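The core D2 statistic is straightforward to compute: it is the sum over all k-words of the product of their counts in the two sequences. A minimal sketch follows; the centralized (D2*) and self-standardized (D2S) variants additionally subtract and rescale by expected word counts under the null model and are not implemented here.

```python
from collections import Counter

def d2(seq_x, seq_y, k):
    """D2: number of matching k-tuples between two sequences,
    i.e. sum over k-words w of count_x(w) * count_y(w)."""
    count_x = Counter(seq_x[i:i + k] for i in range(len(seq_x) - k + 1))
    count_y = Counter(seq_y[i:i + k] for i in range(len(seq_y) - k + 1))
    return sum(count_x[w] * count_y[w] for w in count_x if w in count_y)

print(d2("ACGTACGT", "CGTACGTT", k=3))   # 8 matching 3-tuple pairs
```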
[Big data in official statistics].
Zwick, Markus
2015-08-01
The concept of "big data" stands to change the face of official statistics over the coming years, having an impact on almost all aspects of data production. The tasks of future statisticians will not necessarily be to produce new data, but rather to identify and make use of existing data to adequately describe social and economic phenomena. Until big data can be used correctly in official statistics, a lot of questions need to be answered and problems solved: the quality of data, data protection, privacy, and the sustainable availability are some of the more pressing issues to be addressed. The essential skills of official statisticians will undoubtedly change, and this implies a number of challenges to be faced by statistical education systems, in universities, and inside the statistical offices. The national statistical offices of the European Union have concluded a concrete strategy for exploring the possibilities of big data for official statistics, by means of the Big Data Roadmap and Action Plan 1.0. This is an important first step and will have a significant influence on implementing the concept of big data inside the statistical offices of Germany.
BioDry: An Inexpensive, Low-Power Method to Preserve Aquatic Microbial Biomass at Room Temperature.
Tuorto, Steven J; Brown, Chris M; Bidle, Kay D; McGuinness, Lora R; Kerkhof, Lee J
2015-01-01
This report describes BioDry (patent pending), a method for reliably preserving the biomolecules associated with aquatic microbial biomass samples, without the need of hazardous materials (e.g. liquid nitrogen, preservatives, etc.), freezing, or bulky storage/sampling equipment. Gel electrophoresis analysis of nucleic acid extracts from samples treated in the lab with the BioDry method indicated that molecular integrity was protected in samples stored at room temperature for up to 30 days. Analysis of 16S/18S rRNA genes for presence/absence and relative abundance of microorganisms using both 454-pyrosequencing and TRFLP profiling revealed statistically indistinguishable communities from control samples that were frozen in liquid nitrogen immediately after collection. Seawater and river water biomass samples collected with a portable BioDry "field unit", constructed from off-the-shelf materials and a battery-operated pumping system, also displayed high levels of community rRNA preservation, despite a slight decrease in nucleic acid recovery over the course of storage for 30 days. Functional mRNA and protein pools from the field samples were also effectively conserved with BioDry, as assessed by respective RT-PCR amplification and western blot of ribulose-1-5-bisphosphate carboxylase/oxygenase. Collectively, these results demonstrate that BioDry can adequately preserve a suite of biomolecules from aquatic biomass at ambient temperatures for up to a month, giving it great potential for high resolution sampling in remote locations or on autonomous platforms where space and power are limited.
Follow-up of colorectal cancer patients after resection with curative intent-the GILDA trial.
Grossmann, Erik M; Johnson, Frank E; Virgo, Katherine S; Longo, Walter E; Fossati, Rolando
2004-01-01
Surgery remains the primary treatment of colorectal cancer. Data are lacking to delineate the optimal surveillance strategy following resection. A large-scale multi-center European study is underway to address this issue (Gruppo Italiano di Lavoro per la Diagnosi Anticipata-GILDA). Following primary surgery with curative intent, stratification, and randomization at GILDA headquarters, colon cancer patients are then assigned to a more intensive or less intensive surveillance regimen. Rectal cancer patients undergoing curative resection are similarly randomized, with their follow-up regimens placing more emphasis on detection of local recurrence. Target recruitment for the study will be 1500 patients to achieve a statistical power of 80% (assuming an alpha of 0.05 and a hazard-rate reduction of >24%). Since the trial opened in 1998, 985 patients have been randomized from 41 centers as of February 2004. There were 496 patients randomized to the less intensive regimens, and 489 randomized to the more intensive regimens. The mean duration of follow-up is 14 months. 75 relapses (15%) and 32 deaths (7%) had been observed in the two more intensive follow-up arms, while 64 relapses (13%) and 24 deaths (5%) had been observed in the two less intensive arms as of February 2004. This trial should provide the first evidence based on an adequately powered randomized trial to determine the optimal follow-up strategy for colorectal cancer patients. This trial is open to US centers, and recruitment continues.
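For orientation, the number of events implied by such a design can be sketched with Schoenfeld's approximation for the log-rank test. The snippet below assumes a two-sided alpha of 0.05, 80% power, 1:1 allocation and a hazard ratio of 0.76 (a 24% hazard reduction); it yields required events rather than patients, so it does not reproduce the trial's 1500-patient target, which also depends on event rates and follow-up.

```python
import numpy as np
from scipy import stats

def required_events(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Schoenfeld's approximation to the number of events needed for a
    two-sided log-rank test to detect a hazard ratio `hr`."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return (z_a + z_b) ** 2 / (allocation * (1 - allocation) * np.log(hr) ** 2)

print(f"events needed ≈ {required_events(hr=0.76):.0f}")
```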
On the origin of non-exponential fluorescence decays in enzyme-ligand complex
NASA Astrophysics Data System (ADS)
Wlodarczyk, Jakub; Kierdaszuk, Borys
2004-05-01
Complex fluorescence decays have usually been analyzed with the aid of a multi-exponential model, but the interpretation of the individual exponential terms has not been adequately characterized. In such cases the intensity decays have also been analyzed in terms of a continuous lifetime distribution, as a consequence of the interaction of the fluorophore with its environment, conformational heterogeneity, or their dynamical nature. We show that the non-exponential fluorescence decay of enzyme-ligand complexes may result from time-dependent energy transport. The latter, in our opinion, may be accounted for by electron transport from the protein tyrosines to their neighboring residues. We introduce a time-dependent hopping rate of the form v(t) ~ (a + bt)^(-1). This in turn leads to a luminescence decay function of the form I(t) = I₀ exp(-t/τ₁)(1 + lt/(γτ₂))^(-γ). Such a decay function provides good fits to highly complex fluorescence decays. The power-like tail implies a time hierarchy in the energy migration process due to the hierarchical energy-level structure. Moreover, such a power-like term is a manifestation of so-called Tsallis nonextensive statistics and is suitable for describing systems with long-range interactions, memory effects, and fluctuations of the characteristic fluorescence lifetime. The proposed decay function was applied in the analysis of fluorescence decays of a tyrosine protein, the enzyme purine nucleoside phosphorylase from E. coli, in a complex with formycin A (an inhibitor) and orthophosphate (a co-substrate).
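To make the quoted decay law concrete, the short sketch below implements I(t) = I₀ exp(-t/τ₁)(1 + lt/(γτ₂))^(-γ), with the scale factor l set to 1 for simplicity, and checks numerically that for very large γ the power-like factor reduces to a second exponential, so the model interpolates between a bi-exponential decay and a heavy power-law tail. All parameter values are hypothetical.

```python
import numpy as np

def decay(t, i0, tau1, tau2, gamma, l=1.0):
    """I(t) = I0 * exp(-t/tau1) * (1 + l*t/(gamma*tau2))**(-gamma)."""
    return i0 * np.exp(-t / tau1) * (1.0 + l * t / (gamma * tau2)) ** (-gamma)

t = np.linspace(0.0, 10.0, 6)
# For gamma -> infinity the power-like factor tends to exp(-t/tau2):
print(np.round(decay(t, i0=1.0, tau1=4.0, tau2=2.0, gamma=1e6), 4))
print(np.round(np.exp(-t / 4.0) * np.exp(-t / 2.0), 4))   # bi-exponential limit
```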
NASA Astrophysics Data System (ADS)
Hnydiuk-Stefan, Anna; Składzień, Jan
2015-03-01
The paper examines, from the thermodynamic point of view, the operation of a coal-fired power unit cooperating with a cryogenic oxygen unit, with particular emphasis on the characteristic performance parameters of the oxygen unit. The relatively high-purity technical oxygen produced in the oxygen unit is used as the oxidant in the fluidized-bed boiler of a modern coal-fired power unit with an electric power output of approximately 460 MW. The analyzed oxygen unit has a classical two-column structure with an expansion turbine (turboexpander), which allows the use of initially compressed air at relatively low pressure. Multivariant calculations were performed, the main results being the loss of power and efficiency of the unit due to the need to ensure adequate driving power for the compressor system of the oxygen-generating plant.
NASA Astrophysics Data System (ADS)
Krokhin, G.; Pestunov, A.; Arakelyan, E.; Mukhin, V.
2017-11-01
Over the last decades, interest has grown in various aspects of intelligent diagnostics and management in thermal power engineering based on the hybrid principle. This is because conventional static methods do not adequately reflect the actual state of a power installation. In order to improve diagnostic quality, we use various fuzzy-systems techniques. In this paper, we introduce an intelligent system, called SKAIS, intended for quick and precise diagnostics of thermal power equipment. This system was developed as the result of research carried out by specialists from the National Research University “Moscow Power Engineering Institute” and Novosibirsk State University of Economics and Management. It substantially increases the level of intelligence of the automatic power plant control system.
Got power? A systematic review of sample size adequacy in health professions education research.
Cook, David A; Hatala, Rose
2015-03-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMD's). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
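The "smallest effect that could be plausibly excluded" idea can also be turned around into the minimum detectable effect for a given sample size. The sketch below inverts a standard noncentral-t power calculation for a two-group comparison; the per-group sizes are illustrative assumptions (about 12-13 per group corresponds to the review's median total of 25).

```python
import numpy as np
from scipy import stats, optimize

def power_two_sample(d, n_per_group, alpha=0.05):
    """Power of a two-sided, two-sample t-test for standardized effect d."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2.0)
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return stats.nct.sf(t_crit, df, ncp) + stats.nct.cdf(-t_crit, df, ncp)

for n in (13, 50, 200):       # per-group sizes (assumed for illustration)
    mde = optimize.brentq(lambda d: power_two_sample(d, n) - 0.80, 0.01, 3.0)
    print(f"n = {n} per group -> smallest SMD detectable at 80% power ≈ {mde:.2f}")
```

With a dozen learners per group, only very large effects (SMD above roughly 1) are detectable, which matches the conclusion that most studies in this sample were powered only for effects of large magnitude.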
Gene-Based Association Analysis for Censored Traits Via Fixed Effect Functional Regressions.
Fan, Ruzong; Wang, Yifan; Yan, Qi; Ding, Ying; Weeks, Daniel E; Lu, Zhaohui; Ren, Haobo; Cook, Richard J; Xiong, Momiao; Swaroop, Anand; Chew, Emily Y; Chen, Wei
2016-02-01
Genetic studies of survival outcomes have been proposed and conducted recently, but statistical methods for identifying genetic variants that affect disease progression are rarely developed. Motivated by our ongoing real studies, here we develop Cox proportional hazard models using functional regression (FR) to perform gene-based association analysis of survival traits while adjusting for covariates. The proposed Cox models are fixed effect models where the genetic effects of multiple genetic variants are assumed to be fixed. We introduce likelihood ratio test (LRT) statistics to test for associations between the survival traits and multiple genetic variants in a genetic region. Extensive simulation studies demonstrate that the proposed Cox RF LRT statistics have well-controlled type I error rates. To evaluate power, we compare the Cox FR LRT with the previously developed burden test (BT) in a Cox model and sequence kernel association test (SKAT), which is based on mixed effect Cox models. The Cox FR LRT statistics have higher power than or similar power as Cox SKAT LRT except when 50%/50% causal variants had negative/positive effects and all causal variants are rare. In addition, the Cox FR LRT statistics have higher power than Cox BT LRT. The models and related test statistics can be useful in the whole genome and whole exome association studies. An age-related macular degeneration dataset was analyzed as an example. © 2016 WILEY PERIODICALS, INC.
The statistical overlap theory of chromatography using power law (fractal) statistics.
Schure, Mark R; Davis, Joe M
2011-12-30
The chromatographic dimensionality was recently proposed as a measure of retention time spacing based on a power law (fractal) distribution. Using this model, a statistical overlap theory (SOT) for chromatographic peaks is developed that estimates the number of peak maxima as a function of the chromatographic dimension, saturation and scale. Power law models exhibit a threshold region whereby below a critical saturation value no loss of peak maxima due to peak fusion occurs as saturation increases. At moderate saturation, behavior is similar to the random (Poisson) peak model. At still higher saturation, the power law model shows loss of peaks nearly independent of the scale and dimension of the model. The physicochemical meaning of the power law scale parameter is discussed and shown to be equal to the Boltzmann-weighted free energy of transfer over the scale limits. The scale is discussed. Small scale range (small β) is shown to generate more uniform chromatograms. Large scale range chromatograms (large β) are shown to give occasional large excursions of retention times; this is a property of power laws where "wild" behavior is noted to occasionally occur. Both cases are shown to be useful depending on the chromatographic saturation. A scale-invariant model of the SOT shows very simple relationships between the fraction of peak maxima and the saturation, peak width and number of theoretical plates. These equations provide much insight into separations which follow power law statistics. Copyright © 2011 Elsevier B.V. All rights reserved.
The relation between statistical power and inference in fMRI
Wager, Tor D.; Yarkoni, Tal
2017-01-01
Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI the combination of a large number of dependent variables, a relatively small number of observations (subjects), and a need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly addressed yet remains controversial—especially in regards to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20–30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resembles the weak diffuse scenario much more than the localized strong scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
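The weak-diffuse scenario can be illustrated with a small Monte Carlo sketch: a true brain-behavior correlation of r = 0.2 tested at a (loosely) corrected threshold. The sample sizes, threshold and effect size here are assumptions chosen for illustration, not values taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def simulated_power(r, n, alpha=0.001, n_sims=2000):
    """Monte Carlo power for detecting a bivariate correlation r with n subjects."""
    cov = [[1.0, r], [r, 1.0]]
    hits = 0
    for _ in range(n_sims):
        x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
        hits += stats.pearsonr(x, y)[1] < alpha
    return hits / n_sims

for n in (25, 100):
    print(f"n = {n}: power ≈ {simulated_power(r=0.2, n=n):.2f}")
```

In this sketch, power at n = 25 is only a few percent, consistent with the article's argument that common sample sizes are severely underpowered for weak, diffuse between-subjects effects.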
Webster, R J; Williams, A; Marchetti, F; Yauk, C L
2018-07-01
Mutations in germ cells pose potential genetic risks to offspring. However, de novo mutations are rare events that are spread across the genome and are difficult to detect. Thus, studies in this area have generally been under-powered, and no human germ cell mutagen has been identified. Whole Genome Sequencing (WGS) of human pedigrees has been proposed as an approach to overcome these technical and statistical challenges. WGS enables analysis of a much wider breadth of the genome than traditional approaches. Here, we performed power analyses to determine the feasibility of using WGS in human families to identify germ cell mutagens. Different statistical models were compared in the power analyses (ANOVA and multiple regression for one-child families, and mixed effect model sampling between two to four siblings per family). Assumptions were made based on parameters from the existing literature, such as the mutation-by-paternal age effect. We explored two scenarios: a constant effect due to an exposure that occurred in the past, and an accumulating effect where the exposure is continuing. Our analysis revealed the importance of modeling inter-family variability of the mutation-by-paternal age effect. Statistical power was improved by models accounting for the family-to-family variability. Our power analyses suggest that sufficient statistical power can be attained with 4-28 four-sibling families per treatment group, when the increase in mutations ranges from 40 to 10% respectively. Modeling family variability using mixed effect models provided a reduction in sample size compared to a multiple regression approach. Much larger sample sizes were required to detect an interaction effect between environmental exposures and paternal age. These findings inform study design and statistical modeling approaches to improve power and reduce sequencing costs for future studies in this area. Crown Copyright © 2018. Published by Elsevier B.V. All rights reserved.
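A drastically simplified version of such a simulation-based power analysis is sketched below: de novo mutation counts are drawn as Poisson with a log-normal family-level random effect, and family mean counts in exposed and control groups are compared with a t-test. The baseline of roughly 60 de novo mutations per offspring, the family-effect spread, and the use of a t-test in place of the paper's mixed-model and regression analyses are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def simulated_power(n_families, increase, kids_per_family=4, base_rate=60.0,
                    family_sd=0.1, n_sims=1000, alpha=0.05):
    """Monte Carlo power for detecting a relative increase in de novo
    mutation counts, comparing family mean counts with a two-sample t-test."""
    def family_means(multiplier):
        fam = rng.lognormal(mean=0.0, sigma=family_sd, size=n_families)
        lam = base_rate * multiplier * fam[:, None] * np.ones(kids_per_family)
        return rng.poisson(lam).mean(axis=1)
    hits = 0
    for _ in range(n_sims):
        control, exposed = family_means(1.0), family_means(1.0 + increase)
        hits += stats.ttest_ind(exposed, control).pvalue < alpha
    return hits / n_sims

print(simulated_power(n_families=4,  increase=0.40))   # large exposure effect
print(simulated_power(n_families=28, increase=0.10))   # small exposure effect
```

Even this toy version reproduces the qualitative pattern that only a handful of multi-sibling families are needed for a 40% increase, while a 10% increase calls for substantially more families.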
Comparison of Time-to-First Event and Recurrent Event Methods in Randomized Clinical Trials.
Claggett, Brian; Pocock, Stuart; Wei, L J; Pfeffer, Marc A; McMurray, John J V; Solomon, Scott D
2018-03-27
Background: Most Phase-3 trials feature time-to-first-event endpoints for their primary and/or secondary analyses. In chronic diseases where a clinical event can occur more than once, recurrent-event methods have been proposed to more fully capture disease burden and have been assumed to improve statistical precision and power compared to conventional "time-to-first" methods. Methods: To better characterize factors that influence statistical properties of recurrent-events and time-to-first methods in the evaluation of randomized therapy, we repeatedly simulated trials with 1:1 randomization of 4000 patients to active vs control therapy, with a true patient-level risk reduction of 20% (i.e. RR=0.80). For patients who discontinued active therapy after a first event, we assumed their risk reverted subsequently to their original placebo-level risk. Through simulation, we varied a) the degree of between-patient heterogeneity of risk and b) the extent of treatment discontinuation. Findings were compared with those from actual randomized clinical trials. Results: As the degree of between-patient heterogeneity of risk was increased, both time-to-first and recurrent-events methods lost statistical power to detect a true risk reduction and confidence intervals widened. The recurrent-events analyses continued to estimate the true RR=0.80 as heterogeneity increased, while the Cox model produced estimates that were attenuated. The power of recurrent-events methods declined as the rate of study drug discontinuation post-event increased. Recurrent-events methods provided greater power than time-to-first methods in scenarios where drug discontinuation was ≤30% following a first event, lesser power with drug discontinuation rates of ≥60%, and comparable power otherwise. We confirmed in several actual trials in chronic heart failure that treatment effect estimates were attenuated when estimated via the Cox model and that the increased statistical power from recurrent-events methods was most pronounced in trials with lower treatment discontinuation rates. Conclusions: We find that the statistical power of both recurrent-events and time-to-first methods is reduced by increasing heterogeneity of patient risk, a parameter not included in conventional power and sample size formulas. Data from real clinical trials are consistent with simulation studies, confirming that the greatest statistical gains from use of recurrent-events methods occur in the presence of high patient heterogeneity and low rates of study drug discontinuation.
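A stripped-down version of the simulation logic is sketched below. The event rates, follow-up time, and gamma frailty variance are assumed values, and, purely for brevity, a comparison of "any event" proportions stands in for the time-to-first analysis while a comparison of mean event counts stands in for the recurrent-event analysis, rather than the Cox and recurrent-event models actually used.

```python
# Hedged sketch: effect of between-patient heterogeneity (gamma frailty) on two
# crude analyses of a simulated 1:1 trial with a true rate ratio of 0.80.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def simulate_trial(n=4000, rr=0.80, base_rate=0.5, frailty_var=1.0, follow_up=2.0):
    arm = rng.integers(0, 2, n)                              # 1 = active therapy
    frailty = rng.gamma(1.0 / frailty_var, frailty_var, n)   # mean 1, variance frailty_var
    rate = base_rate * frailty * np.where(arm == 1, rr, 1.0)
    counts = rng.poisson(rate * follow_up)                   # events per patient
    return arm, counts

def one_rep(alpha=0.05):
    arm, counts = simulate_trial()
    n1, n0 = (arm == 1).sum(), (arm == 0).sum()
    # "time-to-first" proxy: compare the proportions of patients with any event
    p1, p0 = (counts[arm == 1] > 0).mean(), (counts[arm == 0] > 0).mean()
    se_p = np.sqrt(p1 * (1 - p1) / n1 + p0 * (1 - p0) / n0)
    # recurrent-event proxy: compare mean event counts per patient
    m1, m0 = counts[arm == 1].mean(), counts[arm == 0].mean()
    se_m = np.sqrt(counts[arm == 1].var() / n1 + counts[arm == 0].var() / n0)
    crit = norm.ppf(1 - alpha / 2)
    return (p0 - p1) / se_p > crit, (m0 - m1) / se_m > crit

reps = [one_rep() for _ in range(500)]
print("any-event power:", np.mean([r[0] for r in reps]),
      " count-based power:", np.mean([r[1] for r in reps]))
```

In this toy setup, increasing frailty_var lowers the power of both proxies, in line with the paper's qualitative conclusion about heterogeneity of patient risk.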
A Note on Comparing the Power of Test Statistics at Low Significance Levels.
Morris, Nathan; Elston, Robert
2011-01-01
It is an obvious fact that the power of a test statistic is dependent upon the significance (alpha) level at which the test is performed. It is perhaps a less obvious fact that the relative performance of two statistics in terms of power is also a function of the alpha level. Through numerous personal discussions, we have noted that even some competent statisticians have the mistaken intuition that relative power comparisons at traditional levels such as α = 0.05 will be roughly similar to relative power comparisons at very low levels, such as the level α = 5 × 10⁻⁸, which is commonly used in genome-wide association studies. In this brief note, we demonstrate that this notion is in fact quite wrong, especially with respect to comparing tests with differing degrees of freedom. In fact, at very low alpha levels the cost of additional degrees of freedom is often comparatively low. Thus we recommend that statisticians exercise caution when interpreting the results of power comparison studies which use alpha levels that will not be used in practice.
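The phenomenon is easy to check numerically. The sketch below (illustrative, not taken from the note) computes the noncentrality parameter needed for 80% power with a 1-df versus a 2-df chi-square test at a traditional alpha and at a genome-wide alpha.

```python
# Hedged numerical check: the "cost" of an extra degree of freedom, expressed as
# the extra noncentrality needed for 80% power, at alpha = 0.05 and alpha = 5e-8.
from scipy.stats import chi2, ncx2
from scipy.optimize import brentq

def required_nc(df, alpha, power=0.80):
    crit = chi2.ppf(1 - alpha, df)
    return brentq(lambda nc: ncx2.sf(crit, df, nc) - power, 1e-6, 200.0)

for alpha in (0.05, 5e-8):
    nc1, nc2 = required_nc(1, alpha), required_nc(2, alpha)
    print(f"alpha={alpha:g}  nc(1 df)={nc1:.1f}  nc(2 df)={nc2:.1f}  ratio={nc2/nc1:.2f}")
```

Under these assumptions the extra degree of freedom demands roughly 20% more noncentrality at α = 0.05 but under 10% more at α = 5 × 10⁻⁸, consistent with the note's message that power comparisons do not transfer across alpha levels.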
Wang, Zhifang; Zhu, Wenming; Mo, Zhe; Wang, Yuanyang; Mao, Guangming; Wang, Xiaofeng; Lou, Xiaoming
2017-01-01
Universal salt iodization (USI) has been implemented for two decades in China. It is crucial to periodically monitor iodine status in the most vulnerable population, such as pregnant women. A cross-sectional study was carried out in an evidence-proved iodine-sufficient province to evaluate iodine intake in pregnancy. According to the WHO/UNICEF/ICCIDD recommendation criteria of adequate iodine intake in pregnancy (150–249 µg/L), the median urinary iodine concentration (UIC) of the total 8159 recruited pregnant women was 147.5 µg/L, which indicated pregnant women had iodine deficiency at the province level. Overall, 51.0% of the total study participants had iodine deficiency with a UIC < 150 µg/L and only 32.9% of them had adequate iodine. Participants living in coastal areas had iodine deficiency with a median UIC of 130.1 µg/L, while those in inland areas had marginally adequate iodine intake with a median UIC of 158.1 µg/L (p < 0.001). Among the total study participants, 450 pregnant women consuming non-iodized salt had mild-moderate iodine deficiency with a median UIC of 99.6 µg/L; 7363 pregnant women consuming adequately iodized salt had a slightly, but statistically significantly, higher median UIC of 151.9 µg/L, compared with the adequate level recommended by the WHO/UNICEF/ICCIDD (p < 0.001). Consuming adequately iodized salt appeared to slightly increase the median UIC level, but it may not be enough to correct iodine nutrition status to the optimum level recommended by the WHO/UNICEF/ICCIDD. We therefore suggest that, besides strengthening the USI policy, additional intervention measures may be needed to improve iodine intake in pregnancy. PMID:28230748
A Comparison of Latent Growth Models for Constructs Measured by Multiple Items
ERIC Educational Resources Information Center
Leite, Walter L.
2007-01-01
Univariate latent growth modeling (LGM) of composites of multiple items (e.g., item means or sums) has been frequently used to analyze the growth of latent constructs. This study evaluated whether LGM of composites yields unbiased parameter estimates, standard errors, chi-square statistics, and adequate fit indexes. Furthermore, LGM was compared…
Predicting fire spread in Arizona's oak chaparral
A. W. Lindenmuth; James R. Davis
1973-01-01
Five existing fire models, both experimental and theoretical, did not adequately predict rate-of-spread (ROS) when tested on single- and multiclump fires in oak chaparral in Arizona. A statistical model developed using essentially the same input variables but weighted differently accounted for 81 percent of the variation in ROS. A chemical coefficient that accounts for...
Height and Weight of Southeast Asian Preschool Children in Northern California.
ERIC Educational Resources Information Center
Dewey, Kathryn G.; And Others
1986-01-01
Anthropometric data were obtained from 526 Southeast Asian preschool children during 1980-84. Mean weights and heights were substantially below the National Center for Health Statistics (NCHS) 50th percentile, but rates of weight and height gain were similar to reference values, indicating adequate growth after arrival in the United States.…
A Call for a New National Norming Methodology.
ERIC Educational Resources Information Center
Ligon, Glynn; Mangino, Evangelina
Issues related to achieving adequate national norms are reviewed, and a new methodology is proposed that would work to provide a true measure of national achievement levels on an annual basis and would enable reporting results in current-year norms. Statistical methodology and technology could combine to create a national norming process that…
Are Academic Programs Adequate for the Software Profession?
ERIC Educational Resources Information Center
Koster, Alexis
2010-01-01
According to the Bureau of Labor Statistics, close to 1.8 million people, or 77% of all computer professionals, were working in the design, development, deployment, maintenance, and management of software in 2006. The ACM [Association for Computing Machinery] model curriculum for the BS in computer science proposes that about 42% of the core body…
What Response Rates Are Needed to Make Reliable Inferences from Student Evaluations of Teaching?
ERIC Educational Resources Information Center
Zumrawi, Abdel Azim; Bates, Simon P.; Schroeder, Marianne
2014-01-01
This paper addresses the determination of statistically desirable response rates in students' surveys, with emphasis on assessing the effect of underlying variability in the student evaluation of teaching (SET). We discuss factors affecting the determination of adequate response rates and highlight challenges caused by non-response and lack of…
ERIC Educational Resources Information Center
Benton-Borghi, Beatrice Hope; Chang, Young Mi
2011-01-01
The National Center for Educational Statistics (NCES, 2010) continues to report substantial underachievement of diverse student populations in the nation's schools. After decades of focus on diversity and multicultural education, with integrating field and clinical practice, candidates continue to graduate without adequate knowledge, skills and…
Safe delivery of optical power from space.
Smith, M; Fork, R L; Cole, S
2001-05-07
More than a billion gigawatts of sunlight pass through the area extending from Earth out to geostationary orbit. A small fraction of this clean renewable power appears more than adequate to satisfy the projected needs of Earth, and of human exploration and development of space far into the future. Recent studies suggest safe and efficient access to this power can be achieved within 10 to 40 years. Light, enhanced in spatial and temporal coherence, as compared to natural sunlight, offers a means, and probably the only practical means, of usefully transmitting this power to Earth. We describe safety standards for satellite constellations and Earth based sites designed, respectively, to transmit, and receive this power. The spectral properties, number of satellites, and angle subtended at Earth that are required for safe delivery are identified and discussed.
Improved control strategy for wind-powered refrigerated storage of apples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, J.D.C.
1979-01-01
The need for an improved control strategy for the operation of a wind-powered refrigeration system for the storage of apples was investigated. The results are applicable to other systems which employ intermittently available power sources, battery and thermal storage, and an auxiliary, direct current power supply. Tests were conducted on the wind-powered refrigeration system at the Virginia Polytechnic Institute and State University Horticulture Research Farm in Blacksburg, Virginia. Tests were conducted on the individual components of the system. In situ windmill performance testing was also conducted. The results of these tests have been presented. An improved control strategy was developed to improve the utilization of available wind energy and to reduce the need for electrical energy from an external source while maintaining an adequate apple storage environment.
Joining the Dots: The Challenge of Creating Coherent School Improvement
ERIC Educational Resources Information Center
Robinson, Viviane; Bendikson, Linda; McNaughton, Stuart; Wilson, Aaron; Zhu, Tong
2017-01-01
Background/Context: Sustained school improvement requires adequate organizational and instructional coherence, yet, in typical high schools, subject department organization, norms of teacher professional autonomy, and involvement in multiple initiatives present powerful obstacles to forging a coherent approach to improvement. This study examines…
46 CFR 183.360 - Semiconductor rectifier systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 7 2011-10-01 2011-10-01 false Semiconductor rectifier systems. 183.360 Section 183.360... TONS) ELECTRICAL INSTALLATION Power Sources and Distribution Systems § 183.360 Semiconductor rectifier systems. (a) Each semiconductor rectifier system must have an adequate heat removal system that prevents...
Thermal Pollution Impact upon Aquatic Life.
ERIC Educational Resources Information Center
Shiomoto, Gail T.; Olson, Betty H.
1978-01-01
Conventional and nuclear power plants release waste heat to cooling water which then returns to receiving bodies of surface water. This thermal pollution causes a variety of effects in the aquatic ecosystem. More must be learned about these effects to ensure adequate regulation of thermal discharges. (RE)
Advances in electrometer vacuum tube design
NASA Technical Reports Server (NTRS)
1970-01-01
Single-ended, miniature-cathode tube with a relatively low grid current level is constructed. Adequate cathode temperature at relatively low heater power drain is provided by designing the supporting spacers to provide a square cathode hole. Method of assembling the mount and bonding the elements is discussed.
Detecting higher spin fields through statistical anisotropy in the CMB and galaxy power spectra
NASA Astrophysics Data System (ADS)
Bartolo, Nicola; Kehagias, Alex; Liguori, Michele; Riotto, Antonio; Shiraishi, Maresuke; Tansella, Vittorio
2018-01-01
Primordial inflation may represent the most powerful collider to test high-energy physics models. In this paper we study the impact on the inflationary power spectrum of the comoving curvature perturbation in the specific model where massive higher spin fields are rendered effectively massless during a de Sitter epoch through suitable couplings to the inflaton field. In particular, we show that such fields with spin s induce a distinctive statistical anisotropic signal on the power spectrum, in such a way that not only the usual g2M statistical anisotropy coefficients, but also higher-order ones (i.e., g4M, g6M, …, g(2s-2)M and g(2s)M) are nonvanishing. We examine their imprints in the cosmic microwave background and galaxy power spectra. Our Fisher matrix forecasts indicate that the detectability of gLM depends very weakly on L: all coefficients could be detected in the near future if their magnitudes are bigger than about 10⁻³.
Wyka, Joanna; Biernat, Jadwiga; Mikołajczak, Jolanta; Piotrowska, Ewa
2012-01-01
The proportion of elderly people in the global population is rapidly increasing. Their nutritional status indicates many deficiencies that are risky to health. The aim of this paper was to assess the nutrition and nutritional status in elderly individuals above 60 years old living in their family houses in rural areas. Dietary intake and nutritional status were measured in 174 elderly women and 64 men living in the rural areas of Oleśnica (near Wrocław, SW Poland). Energy intake, consumption of nutrients, and selected anthropometric and biochemical indicators were measured in two groups: one at risk of malnutrition and one with adequate nutrition. Using the mini nutritional assessment (MNA) questionnaire, 238 persons over 60 years of age were qualified according to their nutritional status. Anthropometric and biochemical parameters were measured. The group of women at risk of malnutrition (n=30) showed a statistically significantly lower energy intake in their diet (1,127 kcal) compared to women with adequate nutrition (1,351 kcal). The entire group of examined individuals showed a too low consumption of fiber, calcium, vitamins C and D, and folates. Most of the examined women had a too high body mass index (BMI) (on average 28.8), waist circumference was 96.3 cm, and the triceps skinfold (TSF) was 25.2 mm thick. Women at risk of malnutrition had statistically significantly lower lipid parameters than those with adequate nutrition (respectively: TC 191.1 vs. 219.1 mg/dl, p<0.001; LDL-cholesterol 107.1 vs. 125.1 mg/dl, p<0.008; TG 129 vs. 143 mg/dl). Men at risk of malnutrition had a statistically significantly lower BMI (26.0 vs. 28.7, p<0.04), and also lower waist and arm circumferences compared to men with adequate nutrition. According to the Charlson comorbidity index (CCI), 8.2% of persons with adequate nutrition had a poor prognostic indicator for overall survival. All the examined individuals showed many significant nutritional deficiencies. The group at nutritional risk had more pronounced nutritional deficiencies. Despite a too low energy value of foods among individuals with adequate nutrition, their anthropometric parameters paradoxically showed the presence of excessive fatty tissue. The most frequent diseases in the examined group were coronary artery disease and congestive heart failure. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
Hoyle, R H
1991-02-01
Indirect measures of psychological constructs are vital to clinical research. On occasion, however, the meaning of indirect measures of psychological constructs is obfuscated by statistical procedures that do not account for the complex relations between items and latent variables and among latent variables. Covariance structure analysis (CSA) is a statistical procedure for testing hypotheses about the relations among items that indirectly measure a psychological construct and relations among psychological constructs. This article introduces clinical researchers to the strengths and limitations of CSA as a statistical procedure for conceiving and testing structural hypotheses that are not tested adequately with other statistical procedures. The article is organized around two empirical examples that illustrate the use of CSA for evaluating measurement models with correlated error terms, higher-order factors, and measured and latent variables.
Rajadhyaksha, Milind
2012-01-01
Coherent speckle influences the resulting image when narrow spectral line-width and single spatial mode illumination are used, though these are the same light-source properties that provide the best radiance-to-cost ratio. However, a suitable size of the detection pinhole can be chosen to maintain adequate optical sectioning while making the probability density of the speckle noise more normal and reducing its effect. The result is a qualitatively better image with improved contrast, which is easier to read. With theoretical statistics and experimental results, we show that the detection pinhole size is a fundamental parameter for designing imaging systems for use in turbid media. PMID:23224184
NASA Astrophysics Data System (ADS)
Arya, Sabha Raj; Patel, Ashish; Giri, Ashutosh
2018-06-01
This paper deals with a wind-energy-based power generation system using a Permanent Magnet Synchronous Generator (PMSG). It is controlled using an advanced enhanced phase-locked loop for power quality features, with a distribution static compensator used to eliminate harmonics and to provide kVAR compensation as well as load balancing. It also maintains rated potential at the point of common interface under linear and non-linear loads. In order to achieve better efficiency and reliable operation of a PMSG driven by a wind turbine, it is necessary to analyze the governing equations of the wind turbine and the PMSG under fixed and variable wind speed. For handling power quality problems, a power-electronics-based, shunt-connected custom power device is used in the three-wire system. Simulations in the MATLAB/Simulink environment have been carried out in order to demonstrate the model and the control approach used for power quality enhancement. The results show adequate performance of the PMSG-based power generation system and the control algorithm.
1981-02-01
primary parameters affecting the SNR. For an earth-based interferometer, the physical aperture may usually be constructed adequately large to keep the...bandwidth Δν centered on ν₀ by an interferometer with frequency characteristic F(ν) and primary power pattern G(s−s₀) (defined as the product of the...infinitely narrow beam for the primary power pattern, G(s−s₀) = δ(s−s₀)] we have where we have assumed a flat frequency response and included as a
Risk management and regulations for lower limb medical exoskeletons: a review
He, Yongtian; Eguren, David; Luu, Trieu Phat; Contreras-Vidal, Jose L
2017-01-01
Gait disability is a major health care problem worldwide. Powered exoskeletons have recently emerged as devices that can enable users with gait disabilities to ambulate in an upright posture, and potentially bring other clinical benefits. In 2014, the US Food and Drug Administration approved marketing of the ReWalk™ Personal Exoskeleton as a class II medical device with special controls. Since then, Indego™ and Ekso™ have also received regulatory approval. With similar trends worldwide, this industry is likely to grow rapidly. On the other hand, the regulatory science of powered exoskeletons is still developing. The type and extent of probable risks of these devices are yet to be understood, and industry standards are yet to be developed. To address this gap, Manufacturer and User Facility Device Experience, Clinicaltrials.gov, and PubMed databases were searched for reports of adverse events and inclusion and exclusion criteria involving the use of lower limb powered exoskeletons. Current inclusion and exclusion criteria, which can determine probable risks, were found to be diverse. Reported adverse events and identified risks of current devices are also wide-ranging. In light of these findings, current regulations, standards, and regulatory procedures for medical device applications in the USA, Europe, and Japan were also compared. There is a need to raise awareness of probable risks associated with the use of powered exoskeletons and to develop adequate countermeasures, standards, and regulations for these human-machine systems. With appropriate risk mitigation strategies, adequate standards, comprehensive reporting of adverse events, and regulatory oversight, powered exoskeletons may one day allow individuals with gait disabilities to safely and independently ambulate. PMID:28533700
Preliminary design of a space system operating a ground-penetrating radar
NASA Astrophysics Data System (ADS)
D'Errico, Marco; Ponte, Salvatore; Grassi, Michele; Moccia, Antonio
2005-12-01
Ground-penetrating radars (GPR) are currently used only in ground campaigns or in few airborne installations. A feasibility analysis of a space mission operating a GPR for archaeological applications is presented in this work with emphasis on spacecraft critical aspects: antenna dimension and power required for achieving adequate depth and accuracy. Sensor parametric design is performed considering two operating altitudes (250 and 500 km) and user requirements, such as minimum skin depth, vertical and horizontal resolution. A 500-km altitude, 6 a.m.-6 p.m. sun-synchronous orbit is an adequate compromise between atmospheric drag and payload transmitted average power (12 kW) to achieve a 3-m penetration depth. The satellite bus preliminary design is then performed, with focus on critical subsystems and technologies. The payload average power requirement can be kept within feasible limits (1 kW) by using NiH2 batteries to supply the radar transmitter, and with a strong reduction of the mission duty cycle (40 km × 1100 km are observed per orbit). As for the electric power subsystem, a dual-voltage strategy is adopted, with the battery charge regulator supplied at 126 V and the bus loads at 50 V. The overall average power (1.9 kW), accounting for both payload and bus needs, can be supplied by a 20 m² GaAs solar panel for a three-year lifetime. Finally, the satellite mass is kept within reasonable limits (1.6 tons) using inflatable-rigidisable structures for both the payload antenna and the solar panels.
1990-03-01
equation of the statistical energy analysis (SEA) using the procedure indicated in equation (13) [8, 9]. Similarly, one may state the quantities (. (X-)) and...CONGRESS ON ACOUSTICS, July 24-31 1986, Toronto, Canada, Paper D6-1. 5. CUSCHIERI, J.M., Power flow as a complement to statistical energy analysis and..."Random response of identical one-dimensional subsystems", Journal of Sound and Vibration, 1980, Vol. 70, p. 343-353. 8. LYON, R.H., Statistical Energy Analysis of
NASA Technical Reports Server (NTRS)
Zimmerman, G. A.; Olsen, E. T.
1992-01-01
Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.
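As a hedged illustration of the two estimator families discussed (assuming the noise-only spectral power samples are exponentially distributed, as for squared-magnitude noise with 2 degrees of freedom), both a quantile-based order-statistic estimate and a single-pass threshold-and-count estimate can be obtained by inverting the exponential CDF:

```python
# Toy sketch: order-statistic vs threshold-and-count noise power estimation,
# assuming exponentially distributed noise-only power samples.
import numpy as np

rng = np.random.default_rng(42)
true_power = 3.0
x = rng.exponential(true_power, size=100_000)   # noise-only power samples

# order-statistic estimate: invert the exponential CDF at the sample median
p = 0.5
sigma2_os = np.quantile(x, p) / -np.log(1 - p)

# threshold-and-count estimate: count the fraction of samples under a threshold T
T = 2.0
frac_below = np.mean(x < T)
sigma2_tc = -T / np.log(1 - frac_below)

print(sigma2_os, sigma2_tc)   # both should be close to true_power
```

The threshold-and-count variant needs only a comparison and a counter per sample, which is why several such devices in parallel can cover a wide dynamic range with little hardware.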
Palmisano, Aldo N.; Elder, N.E.
2001-01-01
We examined, under standardized conditions, seawater survival of chinook salmon Oncorhynchus tshawytscha at the smolt stage to evaluate the experimental hatchery practices applied to their rearing. The experimental rearing practices included rearing fish at different densities; attempting to control bacterial kidney disease with broodstock segregation, erythromycin injection, and an experimental diet; rearing fish on different water sources; and freeze branding the fish. After application of experimental rearing practices in hatcheries, smolts were transported to a rearing facility for about 2-3 months of seawater rearing. Of 16 experiments, 4 yielded statistically significant differences in seawater survival. In general we found that high variability among replicates, plus the low numbers of replicates available, resulted in low statistical power. We recommend including four or five replicates and using α = 0.10 in 1-tailed tests of hatchery experiments to try to increase the statistical power to 0.80.
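For a rough sense of what that recommendation buys, the calculation below runs a standard two-sample power computation across replicate counts; the standardized effect size of 1.2 is an assumption chosen for illustration, not a value estimated from the study.

```python
# Hedged sketch: power of a one-tailed two-sample comparison at alpha = 0.10
# as a function of the number of replicates per group (assumed effect size).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n in (3, 4, 5, 6):
    power = analysis.power(effect_size=1.2, nobs1=n, alpha=0.10, alternative="larger")
    print(n, round(power, 2))
```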
Marino, Michael J
2018-05-01
There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
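The arithmetic behind this claim can be written out directly. In the sketch below, the fraction of tested hypotheses that are truly non-null is an assumed prior, not a value from the paper:

```python
# Expected false discovery rate as a function of power, for an assumed prior
# probability that a tested hypothesis is truly non-null.
alpha = 0.05
prior_true = 0.25    # assumed fraction of tested hypotheses that are real effects
for power in (0.20, 0.50, 0.80):
    fdr = (alpha * (1 - prior_true)) / (alpha * (1 - prior_true) + power * prior_true)
    print(f"power={power:.2f}  expected false discovery rate={fdr:.2f}")
```

With low power, false positives make up a large share of all "discoveries", which is the kind of calculation behind the "false at least 25% of the time" figure for under-powered subfields.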
Assessment of the relative merits of a few methods to detect evolutionary trends.
Laurin, Michel
2010-12-01
Some of the most basic questions about the history of life concern evolutionary trends. These include determining whether or not metazoans have become more complex over time, whether or not body size tends to increase over time (the Cope-Depéret rule), or whether or not brain size has increased over time in various taxa, such as mammals and birds. Despite the proliferation of studies on such topics, assessment of the reliability of results in this field is hampered by the variability of techniques used and the lack of statistical validation of these methods. To solve this problem, simulations are performed using a variety of evolutionary models (gradual Brownian motion, speciational Brownian motion, and Ornstein-Uhlenbeck), with or without a drift of variable amplitude, with variable variance of tips, and with bounds placed close or far from the starting values and final means of simulated characters. These are used to assess the relative merits (power, Type I error rate, bias, and mean absolute value of error on slope estimate) of several statistical methods that have recently been used to assess the presence of evolutionary trends in comparative data. Results show widely divergent performance of the methods. The simple, nonphylogenetic regression (SR) and variance partitioning using phylogenetic eigenvector regression (PVR) with a broken stick selection procedure have greatly inflated Type I error rate (0.123-0.180 at a 0.05 threshold), which invalidates their use in this context. However, they have the greatest power. Most variants of Felsenstein's independent contrasts (FIC; five of which are presented) have adequate Type I error rate, although two have a slightly inflated Type I error rate with at least one of the two reference trees (0.064-0.090 error rate at a 0.05 threshold). The power of all contrast-based methods is always much lower than that of SR and PVR, except under Brownian motion with a strong trend and distant bounds. Mean absolute value of error on slope of all FIC methods is slightly higher than that of phylogenetic generalized least squares (PGLS), SR, and PVR. PGLS performs well, with low Type I error rate, low error on regression coefficient, and power comparable with some FIC methods. Four variants of skewness analysis are examined, and a new method to assess significance of results is presented. However, all have consistently low power, except in rare combinations of trees, trend strength, and distance between final means and bounds. Globally, the results clearly show that FIC-based methods and PGLS are globally better than nonphylogenetic methods and variance partitioning with PVR. FIC methods and PGLS are sensitive to the model of evolution (and, hence, to branch length errors). Our results suggest that regressing raw character contrasts against raw geological age contrasts yields a good combination of power and Type I error rate. New software to facilitate batch analysis is presented.
NASA Astrophysics Data System (ADS)
Kumar, Jagadish; Ananthakrishna, G.
2018-01-01
Scale-invariant power-law distributions for acoustic emission signals are ubiquitous in several plastically deforming materials. However, power-law distributions for acoustic emission energies are reported in distinctly different plastically deforming situations such as hcp and fcc single and polycrystalline samples exhibiting smooth stress-strain curves and in dilute metallic alloys exhibiting discontinuous flow. This is surprising since the underlying dislocation mechanisms in these two types of deformations are very different. So far, there have been no models that predict the power-law statistics for discontinuous flow. Furthermore, the statistics of the acoustic emission signals in jerky flow is even more complex, requiring multifractal measures for a proper characterization. There has been no model that explains the complex statistics either. Here we address the problem of statistical characterization of the acoustic emission signals associated with the three types of the Portevin-Le Chatelier bands. Following our recently proposed general framework for calculating acoustic emission, we set up a wave equation for the elastic degrees of freedom with a plastic strain rate as a source term. The energy dissipated during acoustic emission is represented by the Rayleigh-dissipation function. Using the plastic strain rate obtained from the Ananthakrishna model for the Portevin-Le Chatelier effect, we compute the acoustic emission signals associated with the three Portevin-Le Chatelier bands and the Lüders-like band. The so-calculated acoustic emission signals are used for further statistical characterization. Our results show that the model predicts power-law statistics for all the acoustic emission signals associated with the three types of Portevin-Le Chatelier bands with the exponent values increasing with increasing strain rate. The calculated multifractal spectra corresponding to the acoustic emission signals associated with the three band types have a maximum spread for the type C bands and decreasing with types B and A. We further show that the acoustic emission signals associated with Lüders-like band also exhibit a power-law distribution and multifractality.
Statistical issues on the analysis of change in follow-up studies in dental research.
Blance, Andrew; Tu, Yu-Kang; Baelum, Vibeke; Gilthorpe, Mark S
2007-12-01
To provide an overview of the problems in study design and associated analyses of follow-up studies in dental research, particularly addressing three issues: treatment-baseline interactions; statistical power; and nonrandomization. Our previous work has shown that many studies claim an interaction between change (from baseline) and baseline values, which is often based on inappropriate statistical analyses. A priori power calculations are essential for randomized controlled trials (RCTs), but in the pre-test/post-test RCT design it is not well known to dental researchers that the choice of statistical method affects power, and that power is affected by treatment-baseline interactions. A common (good) practice in the analysis of RCT data is to adjust for baseline outcome values using ancova, thereby increasing statistical power. However, an important requirement for ancova is that there be no interaction between the groups and baseline outcome (i.e. effective randomization); the patient-selection process should not cause differences in mean baseline values across groups. This assumption is often violated for nonrandomized (observational) studies and the use of ancova is thus problematic, potentially giving biased estimates, invoking Lord's paradox and leading to difficulties in the interpretation of results. Baseline interaction issues can be overcome by the use of statistical methods not widely practiced in dental research: Oldham's method and multilevel modelling; the latter is preferred for its greater flexibility to deal with more than one follow-up occasion as well as additional covariates. To illustrate these three key issues, hypothetical examples are considered from the fields of periodontology, orthodontics, and oral implantology. Caution needs to be exercised when considering the design and analysis of follow-up studies. ancova is generally inappropriate for nonrandomized studies and causal inferences from observational data should be avoided.
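To make the power point concrete, the simulation below compares baseline-adjusted ANCOVA with a simple change-score analysis in a randomized pre-test/post-test design; the sample size, pre-post correlation, and treatment effect are assumptions, and randomization means the no-interaction requirement for ANCOVA holds by construction.

```python
# Hedged sketch: ANCOVA vs change-score power in a randomized pre/post design.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)

def one_trial(n=40, rho=0.6, effect=0.6):
    group = rng.integers(0, 2, n)                 # randomized allocation
    pre = rng.normal(0, 1, n)
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(0, 1, n) + effect * group
    return pd.DataFrame(dict(group=group, pre=pre, post=post, change=post - pre))

hits_ancova = hits_change = 0
n_sim = 500
for _ in range(n_sim):
    df = one_trial()
    ancova = smf.ols("post ~ group + pre", df).fit()      # adjust for baseline
    change = smf.ols("change ~ group", df).fit()          # analyse pre-post change
    hits_ancova += ancova.pvalues["group"] < 0.05
    hits_change += change.pvalues["group"] < 0.05

print("ANCOVA power:", hits_ancova / n_sim, " change-score power:", hits_change / n_sim)
```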
46 CFR 111.75-16 - Lighting of survival craft and rescue boats.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Section 111.75-16 Shipping COAST GUARD, DEPARTMENT OF HOMELAND SECURITY (CONTINUED) ELECTRICAL ENGINEERING ELECTRIC SYSTEMS-GENERAL REQUIREMENTS Lighting Circuits and Protection § 111.75-16 Lighting of survival... be adequately illuminated by lighting supplied from the emergency power source. (b) The arrangement...
29 CFR 1910.254 - Arc welding and cutting.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 80 volts. (c) Installation of arc welding equipment—(1) General. Installation including power supply... mechanically strong and electrically adequate for the required current. (3) Supply connections and conductors...-carrying capacity of the supply conductors shall be not less than the rated primary current of the welding...
46 CFR 129.360 - Semiconductor-rectifier systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Semiconductor-rectifier systems. 129.360 Section 129.360... INSTALLATIONS Power Sources and Distribution Systems § 129.360 Semiconductor-rectifier systems. (a) Each semiconductor-rectifier system must have an adequate heat-removal system to prevent overheating. (b) If a...
46 CFR 120.360 - Semiconductor rectifier systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 4 2011-10-01 2011-10-01 false Semiconductor rectifier systems. 120.360 Section 120.360... INSTALLATION Power Sources and Distribution Systems § 120.360 Semiconductor rectifier systems. (a) Each semiconductor rectifier system must have an adequate heat removal system that prevents overheating. (b) Where a...
46 CFR 97.80-1 - Special operating conditions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... exhausts of power-operated industrial trucks shall have adequate ventilation. The senior deck officer shall... which persons are working, by persons acquainted with the test equipment and procedure. The carbon monoxide concentration in the holds and intermediate decks where persons are working shall be maintained at...
40 CFR 256.21 - Requirements for State regulatory powers.
Code of Federal Regulations, 2010 CFR
2010-07-01
... WASTES GUIDELINES FOR DEVELOPMENT AND IMPLEMENTATION OF STATE SOLID WASTE MANAGEMENT PLANS Solid Waste... be adequate to enforce solid waste disposal standards which are equivalent to or more stringent than the criteria for classification of solid waste disposal facilities (40 CFR part 257). Such authority...
Robust Statistical Detection of Power-Law Cross-Correlation.
Blythe, Duncan A J; Nikulin, Vadim V; Müller, Klaus-Robert
2016-06-02
We show that widely used approaches in statistical physics incorrectly indicate the existence of power-law cross-correlations between financial stock market fluctuations measured over several years and the neuronal activity of the human brain lasting for only a few minutes. While such cross-correlations are nonsensical, no current methodology allows them to be reliably discarded, leaving researchers at greater risk when the spurious nature of cross-correlations is not clear from the unrelated origin of the time series and rather requires careful statistical estimation. Here we propose a theory and method (PLCC-test) which allows us to rigorously and robustly test for power-law cross-correlations, correctly detecting genuine and discarding spurious cross-correlations, thus establishing meaningful relationships between processes in complex physical systems. Our method reveals for the first time the presence of power-law cross-correlations between amplitudes of the alpha and beta frequency ranges of the human electroencephalogram.
Fractal analysis of the short time series in a visibility graph method
NASA Astrophysics Data System (ADS)
Li, Ruixue; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Chen, Yingyuan
2016-05-01
The aim of this study is to evaluate the performance of the visibility graph (VG) method on short fractal time series. In this paper, the time series of fractional Brownian motions (fBm), characterized by different Hurst exponents H, are simulated and then mapped into a scale-free visibility graph, of which the degree distributions show the power-law form. The maximum likelihood estimation (MLE) is applied to estimate the power-law index of the degree distribution, and in this process the Kolmogorov-Smirnov (KS) statistic is used to test the performance of the estimation of the power-law index, aiming to avoid the influence of the drooping head and heavy tail of the degree distribution. As a result, we find that the MLE gives an optimal estimation of the power-law index when the KS statistic reaches its first local minimum. Based on the results from the KS statistic, the relationship between the power-law index and the Hurst exponent is reexamined and then amended to suit short time series. Thus, a method combining VG, MLE and the KS statistic is proposed to estimate Hurst exponents from short time series. Lastly, this paper also offers an example to verify the effectiveness of the combined method. In addition, the corresponding results show that the VG can provide a reliable estimation of Hurst exponents.
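The pipeline the paper combines can be sketched compactly as below. This is only an illustration: a plain random walk (H = 0.5) stands in for the simulated fBm series, the power-law fit uses the common discrete MLE approximation, and the k_min scan picks the global KS minimum rather than the first local minimum described in the paper.

```python
# Hedged sketch: series -> natural visibility graph -> MLE power-law index with a
# Kolmogorov-Smirnov scan over the lower cutoff of the degree distribution.
import numpy as np

rng = np.random.default_rng(3)
x = np.cumsum(rng.normal(size=300))            # stand-in for a short fBm sample path

def visibility_degrees(x):
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            t = np.arange(i + 1, j)
            line = x[j] + (x[i] - x[j]) * (j - t) / (j - i)
            if np.all(x[t] < line):            # natural visibility criterion
                deg[i] += 1
                deg[j] += 1
    return deg

def fit_power_law(deg):
    best = None
    for kmin in np.unique(deg):
        tail = deg[deg >= kmin].astype(float)
        if len(tail) < 20:
            continue
        alpha = 1 + len(tail) / np.sum(np.log(tail / (kmin - 0.5)))  # discrete MLE approx.
        k = np.sort(tail)
        emp = np.arange(1, len(k) + 1) / len(k)
        model = 1 - (k / (kmin - 0.5)) ** (1 - alpha)                # approximate CDF
        ks = np.max(np.abs(emp - model))
        if best is None or ks < best[0]:
            best = (ks, alpha, kmin)
    return best                                 # (KS statistic, power-law index, k_min)

print(fit_power_law(visibility_degrees(x)))
```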
Jiang, Wei; Yu, Weichuan
2017-02-15
In genome-wide association studies (GWASs) of common diseases/traits, we often analyze multiple GWASs with the same phenotype together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular to jointly analyze datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method based on controlling the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods. Also, our method discovers more associations than meta-analysis methods from empirical datasets of four phenotypes. The R-package is available at: http://bioinformatics.ust.hk/Jlfdr.html . eeyu@ust.hk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
Ensuring reliability in expansion schemes.
Kamal-Uddin, Abu Sayed; Williams, Donald Leigh
2005-01-01
Existing electricity power supplies must serve, or be adapted to serve, the expansion of hospital buildings. With the existing power supply assets of many hospitals being up to 20 years old, assessing the security and reliability of the power system must be given appropriate priority to avoid unplanned outages due to overloads and equipment failures. It is imperative that adequate contingency is planned for essential and non-essential electricity circuits. This article describes the methodology undertaken, and the subsequent recommendations that were made, when evaluating the security and reliability of electricity power supplies to a number of major London hospitals. The methodology described aligns with the latest issue of NHS Estates HTM 2011 'Primary Electrical Infrastructure Emergency Electrical Services Design Guidance' (to which ERA Technology has contributed).
Reproducible Growth of High-Quality Cubic-SiC Layers
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Powell, J. Anthony
2004-01-01
Semiconductor electronic devices and circuits based on silicon carbide (SiC) are being developed for use in high-temperature, high-power, and/or high-radiation conditions under which devices made from conventional semiconductors cannot adequately perform. The ability of SiC-based devices to function under such extreme conditions is expected to enable significant improvements in a variety of applications and systems. These include greatly improved high-voltage switching for saving energy in public electric power distribution and electric motor drives; more powerful microwave electronic circuits for radar and communications; and sensors and controls for cleaner-burning, more fuel-efficient jet aircraft and automobile engines.
The ac power line protection for an IEEE 587 Class B environment
NASA Technical Reports Server (NTRS)
Roehr, W. D.; Clark, O. M.
1984-01-01
The 587B series of protectors are unique, low clamping voltage transient suppressors to protect ac-powered equipment from the 6000V peak open-circuit voltage and 3000A short circuit current as defined in IEEE standard 587 for Category B transients. The devices, which incorporate multiple-stage solid-state protector components, were specifically designed to operate under multiple exposures to maximum threat levels in this severe environment. The output voltage peaks are limited to 350V under maximum threat conditions for a 120V ac power line, thus providing adequate protection to vulnerable electronic equipment. The principle of operation and test performance data are discussed.
Graded junction termination extensions for electronic devices
NASA Technical Reports Server (NTRS)
Merrett, J. Neil (Inventor); Isaacs-Smith, Tamara (Inventor); Sheridan, David C. (Inventor); Williams, John R. (Inventor)
2006-01-01
A graded junction termination extension in a silicon carbide (SiC) semiconductor device and method of its fabrication using ion implantation techniques is provided for high power devices. The properties of silicon carbide (SiC) make this wide band gap semiconductor a promising material for high power devices. This potential is demonstrated in various devices such as p-n diodes, Schottky diodes, bipolar junction transistors, thyristors, etc. These devices require adequate and affordable termination techniques to reduce leakage current and increase breakdown voltage in order to maximize power handling capabilities. The graded junction termination extension disclosed is effective, self-aligned, and simplifies the implantation process.
Improved control strategy for wind-powered refrigerated storage of apples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baldwin, J.D.C.; Vaughan, D.H.
1981-01-01
A refrigerated apple storage facility was constructed at the VPI and SU Horticultural Research Farm in Blacksburg, Virginia and began operation in March 1978. The system included a 10-kW electric wind generator, electrical battery storage, thermal (ice) storage, and auxiliary power. The need for an improved control system for the VPI and SU system was determined from tests on the individual components and in situ performance tests. The results of these tests formed the basis for an improved control strategy to improve the utilization of available wind energy and reduce the need for auxiliary power while maintaining an adequate apple storage environment.
Colegrave, Nick
2017-01-01
A common approach to the analysis of experimental data across much of the biological sciences is test-qualified pooling. Here non-significant terms are dropped from a statistical model, effectively pooling the variation associated with each removed term with the error term used to test hypotheses (or estimate effect sizes). This pooling is only carried out if statistical testing on the basis of applying that data to a previous more complicated model provides motivation for this model simplification; hence the pooling is test-qualified. In pooling, the researcher increases the degrees of freedom of the error term with the aim of increasing statistical power to test their hypotheses of interest. Despite this approach being widely adopted and explicitly recommended by some of the most widely cited statistical textbooks aimed at biologists, here we argue that (except in highly specialized circumstances that we can identify) the hoped-for improvement in statistical power will be small or non-existent, and there is likely to be much reduced reliability of the statistical procedures through deviation of type I error rates from nominal levels. We thus call for greatly reduced use of test-qualified pooling across experimental biology, more careful justification of any use that continues, and a different philosophy for initial selection of statistical models in the light of this change in procedure. PMID:28330912
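A minimal simulation of the criticized procedure (the block structure and variance components below are assumed for illustration, and the analysis is a fixed-effects ANOVA rather than any specific published design) shows how test-qualified pooling is typically implemented and lets its type I error rate be checked empirically:

```python
# Hedged sketch: test-qualified pooling in a two-factor design with no true
# treatment effect; the block term is dropped (pooled) when non-significant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(11)

def one_dataset(n_block=6, per_cell=3, block_sd=0.5):
    rows = []
    for b in range(n_block):
        b_eff = rng.normal(0, block_sd)
        for trt in (0, 1):                        # no true treatment effect
            for _ in range(per_cell):
                rows.append(dict(block=b, trt=trt, y=b_eff + rng.normal()))
    return pd.DataFrame(rows)

def p_value_with_pooling(df):
    full = smf.ols("y ~ C(trt) + C(block)", df).fit()
    tab = anova_lm(full)
    if tab.loc["C(block)", "PR(>F)"] > 0.05:      # block "non-significant": pool it
        reduced = smf.ols("y ~ C(trt)", df).fit()
        return anova_lm(reduced).loc["C(trt)", "PR(>F)"]
    return tab.loc["C(trt)", "PR(>F)"]

rejections = np.mean([p_value_with_pooling(one_dataset()) < 0.05 for _ in range(500)])
print("empirical type I error with test-qualified pooling:", rejections)
```

Comparing this empirical rate with the nominal 0.05, and with the rate obtained when the block term is always retained, is the kind of check that motivates the paper's recommendation.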
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
Radiation Hardened, Modulator ASIC for High Data Rate Communications
NASA Technical Reports Server (NTRS)
McCallister, Ron; Putnam, Robert; Andro, Monty; Fujikawa, Gene
2000-01-01
Satellite-based telecommunication services are challenged by the need to generate down-link power levels adequate to support the high quality (BER approx. equals 10(exp -12)) links required for modern broadband data services. Bandwidth-efficient Nyquist signaling, using low values of excess bandwidth (alpha), can exhibit large peak-to-average-power ratio (PAPR) values. High PAPR values necessitate high-power amplifier (HPA) backoff greater than the PAPR, resulting in unacceptably low HPA efficiency. Given the high cost of on-board prime power, this inefficiency represents both an economic burden and a constraint on the rates and quality of data services supportable from satellite platforms. Constant-envelope signals offer improved power-efficiency, but only by imposing a severe bandwidth-efficiency penalty. This paper describes a radiation-hardened modulator which can improve satellite-based broadband data services by combining the bandwidth-efficiency of low-alpha Nyquist signals with high power-efficiency (negligible HPA backoff).
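The PAPR penalty of low-alpha Nyquist signaling is easy to reproduce numerically. The sketch below, a rough illustration rather than anything taken from the paper, pulse-shapes random QPSK symbols with raised-cosine filters of different excess bandwidth and prints the resulting peak-to-average power ratios; the oversampling factor, pulse length, and alpha values are assumptions chosen for the demonstration.

```python
# Peak-to-average power ratio (PAPR) of raised-cosine-shaped QPSK versus the
# excess-bandwidth parameter alpha. All parameters are illustrative assumptions.
import numpy as np

def raised_cosine(t, alpha):
    """Raised-cosine impulse response with symbol period T = 1 (t in symbols)."""
    denom = 1.0 - (2.0 * alpha * t) ** 2
    safe = np.where(np.abs(denom) < 1e-8, 1.0, denom)           # avoid 0/0 at t = 1/(2*alpha)
    shaped = np.where(np.abs(denom) < 1e-8, np.pi / 4.0,
                      np.cos(np.pi * alpha * t) / safe)
    return np.sinc(t) * shaped

rng = np.random.default_rng(0)
n_sym, osf = 4000, 8                                            # symbols, samples per symbol
symbols = (rng.choice([-1.0, 1.0], n_sym)
           + 1j * rng.choice([-1.0, 1.0], n_sym)) / np.sqrt(2)  # unit-power QPSK

t = np.arange(-8 * osf, 8 * osf + 1) / osf                      # +/- 8 symbols of pulse support
for alpha in (0.1, 0.35, 1.0):
    x = np.zeros(n_sym * osf, dtype=complex)
    x[::osf] = symbols                                          # impulse train at symbol instants
    y = np.convolve(x, raised_cosine(t, alpha), mode="same")    # pulse-shaped baseband signal
    papr_db = 10 * np.log10(np.max(np.abs(y) ** 2) / np.mean(np.abs(y) ** 2))
    print(f"alpha = {alpha:4.2f}  PAPR = {papr_db:4.1f} dB")
```

Smaller alpha yields larger PAPR, which is why low-alpha Nyquist signals demand more HPA backoff.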
Agricultural and Food Processing Applications of Pulsed Power Technology
NASA Astrophysics Data System (ADS)
Takaki, Koichi; Ihara, Satoshi
Recent progress in agricultural and food processing applications of pulsed power is described in this paper. Repetitively operated, compact pulsed power generators with moderate peak power have been developed for agricultural and food processing applications. These applications are mainly based on biological effects and can be categorized as decontamination of air and liquid, germination promotion, inhibition of saprophyte growth, extraction of juice from fruits and vegetables, and fertilization of liquid medium, among others. The biological effects of pulsed power arise from gas discharges, water discharges, and electromagnetic fields. The discharges yield free radicals, UV radiation, intense electric fields, and shock waves. Biologically based applications of pulsed power are carried out by selecting, from among these agents and byproducts, the one that produces the desired result in the target objects. For instance, intense electric fields form pores in the cell membrane (a process called electroporation) or influence the nuclei.
A statistical spatial power spectrum of the Earth's lithospheric magnetic field
NASA Astrophysics Data System (ADS)
Thébault, E.; Vervelidou, F.
2015-05-01
The magnetic field of the Earth's lithosphere arises from rock magnetization contrasts that were shaped over geological times. The field can be described mathematically in spherical harmonics or with distributions of magnetization. We exploit this dual representation and assume that the lithospheric field is induced by spatially varying susceptibility values within a shell of constant thickness. By introducing a statistical assumption about the power spectrum of the susceptibility, we then derive a statistical expression for the spatial power spectrum of the crustal magnetic field for spatial scales ranging from 60 to 2500 km. This expression depends on the mean induced magnetization, the thickness of the shell, and a power law exponent for the power spectrum of the susceptibility. We test the relevance of this form with a misfit analysis against the observational NGDC-720 lithospheric magnetic field model power spectrum. This allows us to estimate, at the 95 per cent confidence level, a mean global apparent induced magnetization value between 0.3 and 0.6 A m-1, a mean magnetic crustal thickness value between 23 and 30 km, and a root mean square field value between 190 and 205 nT. These estimates are in good agreement with independent models of crustal magnetization and of seismic crustal thickness. We carry out the same analysis in the continental and oceanic domains separately. We complement the misfit analyses with a Kolmogorov-Smirnov goodness-of-fit test and conclude that, in each case, the observed power spectrum is consistent with being a sample of the statistical one.
Statistical power and effect sizes of depression research in Japan.
Okumura, Yasuyuki; Sakamoto, Shinji
2011-06-01
Few studies have been conducted on the rationales for using interpretive guidelines for effect size, and most previous statistical power surveys have covered broad research domains. The present study aimed to estimate statistical power and to obtain realistic target effect sizes for depression research in Japan. We systematically reviewed 18 leading journals of psychiatry and psychology in Japan and identified 974 depression studies reported in 935 articles published between 1990 and 2006. In 392 studies, logistic regression analyses revealed that using clinical populations was independently associated with having statistical power below 0.80 (odds ratio 5.9, 95% confidence interval 2.9-12.0) and below 0.50 (odds ratio 4.9, 95% confidence interval 2.3-10.5). Of the studies using clinical populations, 80% did not achieve a power of 0.80 or more, and 44% did not achieve a power of 0.50 or more, to detect medium population effect sizes. A predictive model for the proportion of variance explained was developed using a linear mixed-effects model. The model was then used to obtain realistic target effect sizes for defined study characteristics. In the face of a real difference or correlation in the population, many depression researchers are less likely to obtain a valid result than they would be by simply tossing a coin. It is important to educate depression researchers so that they are able to conduct an a priori power analysis. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
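For readers who want to carry out the kind of a priori power analysis the authors call for, the fragment below shows one conventional way to do it with statsmodels, assuming a two-group comparison and Cohen's "medium" standardized difference of 0.5; the sample sizes are illustrative and are not values taken from the review.

```python
# A priori power for a two-sample t-test at a "medium" effect (Cohen's d = 0.5).
# Sample sizes are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for n_per_group in (20, 30, 64, 100):
    power = analysis.power(effect_size=0.5, nobs1=n_per_group, ratio=1.0, alpha=0.05)
    print(f"n = {n_per_group:3d} per group -> power = {power:.2f}")

# Or solve for the per-group n needed to reach 80% power.
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, ratio=1.0, alpha=0.05)
print(f"n per group for 80% power: {n_needed:.0f}")
```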
Targeted On-Demand Team Performance App Development
2016-10-01
Data were collected from three sites. Preliminary analysis indicates a larger than estimated effect size, and the study is sufficiently powered for generalizable outcomes. Planned work includes statistical analyses and examination of any resulting qualitative data for trends or connections to statistical outcomes; the project is reported to be on schedule.
Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity
Beasley, T. Mark
2013-01-01
Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase in the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
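A back-of-the-envelope calculation makes the paradox concrete. The sketch below uses the first-order (Sobel) standard error for the indirect effect ab with standardized variables, where the sampling variance of the b path is inflated by the factor 1/(1 - a^2); the sample size, the fixed b path, the zero direct effect, and the normal-approximation power formula are simplifying assumptions, not the authors' analysis.

```python
# Approximate power of the Sobel test for the indirect effect ab as a grows,
# with standardized X, M, Y, no direct effect, n = 100, and b fixed at 0.30.
# All of these are illustrative assumptions.
import numpy as np
from scipy import stats

n, b = 100, 0.30
for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    var_a = (1 - a**2) / n                           # sampling variance of the a path
    var_b = (1 - b**2) / (n * (1 - a**2))            # VIF = 1/(1 - a^2) inflates var(b)
    se_ab = np.sqrt(a**2 * var_b + b**2 * var_a)     # Sobel (first-order) standard error
    z = a * b / se_ab
    power = stats.norm.sf(1.96 - z) + stats.norm.cdf(-1.96 - z)
    print(f"a = {a:.1f}  ab = {a*b:.2f}  se(ab) = {se_ab:.3f}  approx. power = {power:.2f}")
```

Under these assumptions the approximate power first rises and then falls as a increases, even though the effect size ab keeps growing, which is the decline described in the abstract.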
NASA Astrophysics Data System (ADS)
Ikegami, Takashi; Iwafune, Yumiko; Ogimoto, Kazuhiko
The high penetration of variable renewable generation such as photovoltaic (PV) systems will cause supply-demand imbalance across the whole power system. Activation of residential power usage, storage, and generation through sophisticated scheduling and control using the Home Energy Management System (HEMS) will be needed to balance power supply and demand in the near future. In order to evaluate the applicability of the HEMS as a distributed controller for local and system-wide supply-demand balance, we developed an optimal operation scheduling model of domestic electric appliances using mixed integer linear programming. Applying this model to several houses under dynamic electricity prices reflecting the power balance of the total power system, we found that adequate changes in electricity prices shift residential power usage so as to control the amount of reverse power flow due to excess PV generation.
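A toy version of the scheduling step can be expressed as a very small mixed integer linear program. The sketch below, written with PuLP, shifts a single fixed-length appliance run to the cheapest hours of an assumed 24-hour price signal; the prices, appliance power, and run length are invented for illustration, and the model omits the storage, PV, and comfort constraints that a full HEMS formulation would include.

```python
# Toy MILP: choose the cheapest hours for a fixed-length appliance run under a
# dynamic hourly price signal. Prices and appliance parameters are assumptions.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

prices = [30, 28, 25, 22, 20, 18, 15, 12, 10, 12, 18, 25,
          30, 28, 24, 20, 14, 10, 8, 10, 16, 22, 26, 30]       # price per kWh by hour
hours = range(len(prices))
power_kw, run_hours = 1.2, 3                                   # appliance consumption profile

prob = LpProblem("appliance_shift", LpMinimize)
on = {t: LpVariable(f"on_{t}", cat=LpBinary) for t in hours}

prob += lpSum(prices[t] * power_kw * on[t] for t in hours)     # minimize total energy cost
prob += lpSum(on[t] for t in hours) == run_hours               # appliance must run 3 hours

prob.solve()
schedule = [t for t in hours if on[t].value() > 0.5]
print("hours selected:", schedule)
```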
Real Time Voltage and Current Phase Shift Analyzer for Power Saving Applications
Krejcar, Ondrej; Frischer, Robert
2012-01-01
Nowadays, high importance is given to low-energy devices (such as refrigerators, deep freezers, washing machines, and pumps) that produce reactive power in power lines, which can be optimized (reduced). Reactive power is the main component that overloads power lines and brings excessive thermal stress to conductors. If the reactive power is optimized, electricity consumption can be lowered significantly (by 10 to 30%, varying between countries). This paper examines and discusses the development of a measuring device for analyzing reactive power. The main problem, however, is precise real-time measurement of the input and output voltage and current. Such measurement quality is needed to allow adequate intervention (feedback which reduces or fully compensates the reactive power). Several other issues, such as accuracy and measurement speed, must be examined while designing this device. The price and the size of the final product need to remain low, as these are two important parameters of this solution. PMID:23112662
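The quantities such a device must recover follow directly from the sampled waveforms. A minimal sketch, assuming ideal sampling of pure sinusoids, computes active power as the mean of the instantaneous power and reactive power from the apparent/active power triangle; the mains frequency, amplitudes, and 30-degree current lag are illustrative values only.

```python
# Active power P, reactive power Q, and power factor from sampled voltage and
# current waveforms. Frequency, amplitudes, and phase lag are assumptions.
import numpy as np

fs, f = 10_000, 50.0                                   # sample rate (Hz), mains frequency (Hz)
t = np.arange(0, 0.2, 1 / fs)                          # 0.2 s = 10 full cycles at 50 Hz
v = 325.0 * np.sin(2 * np.pi * f * t)                  # about 230 V rms
i = 7.0 * np.sin(2 * np.pi * f * t - np.deg2rad(30))   # current lagging voltage by 30 degrees

p_active = np.mean(v * i)                              # average instantaneous power (W)
v_rms, i_rms = np.sqrt(np.mean(v**2)), np.sqrt(np.mean(i**2))
s_apparent = v_rms * i_rms                             # apparent power (VA)
q_reactive = np.sqrt(max(s_apparent**2 - p_active**2, 0.0))   # reactive power magnitude (var)
phi = np.arccos(p_active / s_apparent)                 # recovered phase shift

print(f"P = {p_active:.0f} W, Q = {q_reactive:.0f} var, "
      f"power factor = {np.cos(phi):.3f}, phase shift = {np.degrees(phi):.1f} deg")
```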
A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.
Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L
2014-01-01
We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.
Sampling methods for amphibians in streams in the Pacific Northwest.
R. Bruce Bury; Paul Stephen Corn
1991-01-01
Methods describing how to sample aquatic and semiaquatic amphibians in small streams and headwater habitats in the Pacific Northwest are presented. We developed a technique that samples 10-meter stretches of selected streams, which was adequate to detect presence or absence of amphibian species and provided sample sizes statistically sufficient to compare abundance of...
THE RAPID GROWTH OF COMMUNITY COLLEGES AND THEIR ACCESSIBILITY IN RURAL AREAS.
ERIC Educational Resources Information Center
ELDRIDGE, DONALD A.
The course offerings in some junior colleges fail to meet adequately the unique needs of rural youth. A study in 1964 revealed that only twenty of the seventy junior colleges in California offered training in agriculture, although the recently published "Directory of Junior Colleges" shows an increase to sixty. Further statistics reveal that 253…
10 CFR 9.80 - Disclosure of record to persons other than the individual to whom it pertains.
Code of Federal Regulations, 2010 CFR
2010-01-01
... has provided the agency with advance adequate written assurance that the record will be used solely as a statistical research or reporting record and the record is transferred in a form that is not individually identifiable. The advance written statement of assurance shall (i) state the purpose for which the...
?Cuan buenas son nuestras viviendas?: Los hispanos [How Good Is Our Housing? Hispanics].
ERIC Educational Resources Information Center
Yezer, Anthony; Limmer, Ruth
This report provides statistical information regarding the quality and cost of housing occupied by Hispanic Americans throughout the United States. Some of the findings include: (1) Hispanos occupy older and worse dwellings than the general U.S. population, with a significant number of dwellings lacking heat and adequate electricity and plumbing…
Closing the Gender Gap: Girls and Computers.
ERIC Educational Resources Information Center
Fuchs, Lucy
While 15 years ago only a few schools had microcomputers, today a majority of public schools have some computers, although an adequate number of computers for students to use is still in the future. Unfortunately, statistics show that, in many states, a higher percentage of male students are enrolled in computer classes than female; boys seem to…
ERIC Educational Resources Information Center
Bullis, Michael; Reiman, John
1992-01-01
The Transition Competence Battery for Deaf Adolescents and Young Adults (TCB) measures employment and independent living skills. The TCB was standardized on students (N from 180 to 230 for the different subtests) from both mainstreamed and residential settings. Item statistics and subtest reliabilities were adequate; evidence of construct validity…
ERIC Educational Resources Information Center
Yang, Dazhi
2017-01-01
Background: Teaching online is a different experience from that of teaching in a face-to-face setting. Knowledge and skills developed for teaching face-to-face classes are not adequate preparation for teaching online. It is even more challenging to teach science, technology, engineering and math (STEM) courses completely online because these…
Idaho Kids Count Data Book, 1996: Profiles of Child Well-Being.
ERIC Educational Resources Information Center
Idaho KIDS COUNT Project, Boise.
This Kids Count report examines statewide trends in the well-being of Idaho's children. The statistical portrait is based on 15 indicators of child and family well-being: (1) poverty; (2) single parent families; (3) infant mortality; (4) low birth weight babies; (5) percent of all mothers not receiving adequate prenatal care; (6) mothers ages…
A global goodness-of-fit statistic for Cox regression models.
Parzen, M; Lipsitz, S R
1999-06-01
In this paper, a global goodness-of-fit test statistic for a Cox regression model, which has an approximate chi-squared distribution when the model has been correctly specified, is proposed. Our goodness-of-fit statistic is global and has power to detect whether interactions or higher order powers of covariates in the model are needed. The proposed statistic is similar to the Hosmer and Lemeshow (1980, Communications in Statistics A10, 1043-1069) goodness-of-fit statistic for binary data as well as Schoenfeld's (1980, Biometrika 67, 145-153) statistic for the Cox model. The methods are illustrated using data from a Mayo Clinic trial in primary biliary cirrhosis of the liver (Fleming and Harrington, 1991, Counting Processes and Survival Analysis), in which the outcome is the time until liver transplantation or death. There are 17 possible covariates. Two Cox proportional hazards models are fit to the data, and the proposed goodness-of-fit statistic is applied to the fitted models.
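The proposed global statistic itself is not reproduced here, but one of the model defects it is designed to detect, a needed higher-order power of a covariate, can be checked by hand with a likelihood ratio comparison of nested Cox models. The sketch below uses the Rossi recidivism data shipped with lifelines purely as a stand-in dataset; it is not the Mayo primary biliary cirrhosis data analyzed in the paper.

```python
# Checking whether a quadratic term of a covariate improves a Cox model, as a
# rough stand-in for a formal goodness-of-fit assessment. The dataset and the
# choice of the age covariate are illustrative assumptions.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi
from scipy import stats

df = load_rossi()
df["age_sq"] = df["age"] ** 2

base = CoxPHFitter().fit(df.drop(columns="age_sq"), duration_col="week", event_col="arrest")
quad = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

lr = 2 * (quad.log_likelihood_ - base.log_likelihood_)   # likelihood ratio statistic
p = stats.chi2.sf(lr, df=1)                               # one extra parameter (age squared)
print(f"LR = {lr:.2f}, p = {p:.3f}")
```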
ERIC Educational Resources Information Center
Núñez, Anne-Marie
2014-01-01
The theoretical framework of intersectionality shows much promise in exploring how multiple social identities and their relationships with interlocking systems of power influence educational equity, particularly for historically underserved groups in education. Yet, social scientists have critiqued this framework for not adequately specifying how…
30 CFR 77.508 - Lightning arresters, ungrounded and exposed power conductors and telephone wires.
Code of Federal Regulations, 2012 CFR
2012-07-01
30 Mineral Resources 1 (2012-07-01), ... and surface work areas of underground coal mines, Electrical Equipment-General, § 77.508 Lightning ... conductors and telephone wires shall be equipped with suitable lightning arresters which are adequately ...
ERIC Educational Resources Information Center
Boehnlein, Mary Maher
Parents and the extended family are the most influential factors in a child's lifelong eating habits, general health and development, and brain power. Convincing parents of the diet components that ensure adequate nutrition is of prime importance; if the home does not support the content of the school's nutritional curriculum, the child may feel…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... assure that the emergency diesel generator's diesel driven cooling water pumps perform their required... generators will provide required electrical power as assumed in the accident analyses and the cooling water... Technical Specifications to require an adequate emergency diesel generator and diesel driven cooling water...