Sample records for size statistical multifragmentation

  1. The statistical multifragmentation model: Origins and recent advances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donangelo, R., E-mail: donangel@fing.edu.uy; Instituto de Física, Universidade Federal do Rio de Janeiro, C.P. 68528, 21941-972 Rio de Janeiro - RJ; Souza, S. R., E-mail: srsouza@if.ufrj.br

    2016-07-07

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine the probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years before, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
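
    A minimal sketch of the kind of Monte Carlo partition calculation the abstract describes, not the authors' code: it enumerates the integer partitions of a small source mass and weights them with a toy liquid-drop free energy. The source size A0, the temperature T, and the two coefficients are illustrative assumptions; a real SMM weight also carries Coulomb, symmetry, translational, and temperature-dependent surface terms.

    ```python
    import math
    import random

    A0, T = 40, 5.0  # source mass number and breakup temperature (MeV); assumed values

    def partitions(n, max_part=None):
        """Yield all integer partitions of n, read as fragment mass numbers."""
        max_part = max_part or n
        if n == 0:
            yield []
            return
        for k in range(min(n, max_part), 0, -1):
            for rest in partitions(n - k, k):
                yield [k] + rest

    def free_energy(frags):
        """Toy liquid-drop free energy (MeV): volume + surface terms only."""
        a_v, a_s = -16.0, 18.0  # illustrative coefficients
        return sum(a_v * a + a_s * a ** (2.0 / 3.0) for a in frags)

    parts = list(partitions(A0))
    # Weights relative to the unfragmented source, to keep the exponents tame.
    f0 = free_energy([A0])
    weights = [math.exp(-(free_energy(p) - f0) / T) for p in parts]

    # Ensemble-averaged multiplicity and a few sampled breakup partitions.
    total = sum(weights)
    mean_mult = sum(len(p) * w for p, w in zip(parts, weights)) / total
    samples = random.choices(parts, weights=weights, k=3)
    print(f"{len(parts)} partitions of A={A0}, <multiplicity> = {mean_mult:.2f}")
    ```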

  2. The statistical multifragmentation model: Origins and recent advances

    NASA Astrophysics Data System (ADS)

    Donangelo, R.; Souza, S. R.

    2016-07-01

    We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine the probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed the SMM predictions made over 10 years before, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.

  3. WIX: statistical nuclear multifragmentation with collective expansion and Coulomb forces

    NASA Astrophysics Data System (ADS)

    Randrup, Jørgen

    1993-10-01

    By suitable augmentation of the event generator FREESCO, a code WIX has been constructed with which it is possible to simulate the statistical multifragmentation of a specified nuclear source, which may be both hollow and deformed, in the presence of a collective expansion and with the interfragment Coulomb forces included.

  4. Statistical analysis of experimental multifragmentation events in 64Zn+112Sn at 40 MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Lin, W.; Zheng, H.; Ren, P.; Liu, X.; Huang, M.; Wada, R.; Chen, Z.; Wang, J.; Xiao, G. Q.; Qu, G.

    2018-04-01

    A statistical multifragmentation model (SMM) is applied to the experimentally observed multifragmentation events in an intermediate-energy heavy-ion reaction. Using the temperature and symmetry energy extracted with the isobaric yield ratio (IYR) method based on the modified Fisher model (MFM), SMM is applied to the reaction 64Zn+112Sn at 40 MeV/nucleon. The experimental isotope and mass distributions of the primary reconstructed fragments are compared without an afterburner, and they are well reproduced. The temperature T and symmetry energy coefficient asym extracted from the SMM-simulated events with the IYR method are also consistent with those from the experiment. These results strongly suggest that in the multifragmentation process there is a freezeout volume, in which thermal and chemical equilibrium is established before or at the time of intermediate-mass fragment emission.

  5. Incorporation of the statistical multi-fragmentation model in PHITS and its application for simulation of fragmentation by heavy ions and protons

    NASA Astrophysics Data System (ADS)

    Ogawa, Tatsuhiko; Sato, Tatsuhiko; Hashimoto, Shintaro; Niita, Koji

    2014-06-01

    The fragmentation reactions of relativistic-energy nucleus-nucleus and proton-nucleus collisions were simulated using the Statistical Multi-fragmentation Model (SMM) incorporated into the Particle and Heavy Ion Transport code System (PHITS). Comparisons of calculated cross-sections with literature data showed that PHITS-SMM predicts the fragmentation cross-sections of heavy nuclei up to two orders of magnitude more accurately than PHITS for heavy-ion-induced reactions. For proton-induced reactions, noticeable improvements are observed for interactions of heavy targets with protons at energies greater than 1 GeV. Therefore, consideration of multi-fragmentation reactions is necessary for the accurate simulation of energetic fragmentation reactions of heavy nuclei.

  6. Analysis of multi-fragmentation reactions induced by relativistic heavy ions using the statistical multi-fragmentation model

    NASA Astrophysics Data System (ADS)

    Ogawa, T.; Sato, T.; Hashimoto, S.; Niita, K.

    2013-09-01

    The fragmentation cross-sections of relativistic-energy nucleus-nucleus collisions were analyzed using the statistical multi-fragmentation model (SMM) incorporated into the Monte Carlo radiation transport simulation code Particle and Heavy Ion Transport code System (PHITS). Comparison with literature data showed that PHITS-SMM reproduces fragmentation cross-sections of heavy nuclei at relativistic energies better than the original PHITS by up to two orders of magnitude. It was also found that SMM does not degrade the neutron production cross-sections in heavy-ion collisions or the fragmentation cross-sections of light nuclei, for which SMM has not been benchmarked. Therefore, SMM is a robust model that can supplement conventional nucleus-nucleus reaction models, enabling more accurate prediction of fragmentation cross-sections.

  7. Delayed fission and multifragmentation in sub-keV C60 - Au(0 0 1) collisions via molecular dynamics simulations: Mass distributions and activated statistical decay

    NASA Astrophysics Data System (ADS)

    Bernstein, V.; Kolodney, E.

    2017-10-01

    We have recently observed, both experimentally and computationally, the phenomenon of postcollision multifragmentation in sub-keV surface collisions of a C60 projectile: delayed multiparticle breakup of a strongly impact-deformed and vibrationally excited large cluster into several large fragments after it leaves the surface. Molecular dynamics simulations with extensive statistics revealed a nearly simultaneous event, within a sub-psec time window. Here we study, computationally, additional essential aspects of this new delayed collisional fragmentation which were not addressed before. Specifically, we study the delayed (binary) fission channel for different impact energies, both by calculating mass distributions over all fission events and by calculating and analyzing lifetime distributions of the scattered projectile. We observe an asymmetric fission resulting in a most probable fission channel, and we find an activated exponential (statistical) decay. Finally, we also calculate and discuss the fragment mass distribution in (triple) multifragmentation over different time windows, in terms of the most abundant fragments.

  8. Phase transition dynamics for hot nuclei

    NASA Astrophysics Data System (ADS)

    Borderie, B.; Le Neindre, N.; Rivet, M. F.; Désesquelles, P.; Bonnet, E.; Bougault, R.; Chbihi, A.; Dell'Aquila, D.; Fable, Q.; Frankland, J. D.; Galichet, E.; Gruyer, D.; Guinet, D.; La Commara, M.; Lombardo, I.; Lopez, O.; Manduci, L.; Napolitani, P.; Pârlog, M.; Rosato, E.; Roy, R.; St-Onge, P.; Verde, G.; Vient, E.; Vigilante, M.; Wieleczko, J. P.; Indra Collaboration

    2018-07-01

    An abnormal production of events with almost equal-sized fragments was theoretically proposed as a signature of the spinodal instabilities responsible for nuclear multifragmentation in the Fermi energy domain. On the other hand, finite-size effects are predicted to strongly reduce this abnormal production. Hot quasifusion nuclei produced with high statistics in central collisions between Xe and Sn isotopes at 32 and 45 A MeV incident energies have been used to definitively establish, through the experimental measurement of charge correlations, the presence of spinodal instabilities. The influence of N/Z was also studied.

  9. Sensitivity study of experimental measures for the nuclear liquid-gas phase transition in the statistical multifragmentation model

    NASA Astrophysics Data System (ADS)

    Lin, W.; Ren, P.; Zheng, H.; Liu, X.; Huang, M.; Wada, R.; Qu, G.

    2018-05-01

    The experimental measures of the multiplicity derivatives—the moment parameters, the bimodal parameter, the fluctuation of maximum fragment charge number (normalized variance of Zmax, or NVZ), the Fisher exponent (τ), and the Zipf law parameter (ξ)—are examined to search for the liquid-gas phase transition in nuclear multifragmentation processes within the framework of the statistical multifragmentation model (SMM). The sensitivities of these measures are studied. All these measures predict a critical signature at or near the critical point, for both the primary and secondary fragments. Among these measures, the total multiplicity derivative and the NVZ provide accurate measures of the critical point from the final cold fragments as well as the primary fragments. The present study will provide a guide for future experiments and analyses in the study of the nuclear liquid-gas phase transition.
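
    Two of these measures reduce to a few lines of arithmetic on event-by-event fragment lists. The sketch below is an assumed operational reading of the definitions, not the paper's analysis code; the input format (one list of fragment charges per event, plus per-event excitation energies) is hypothetical.

    ```python
    import numpy as np

    def nvz(events):
        """Normalized variance of Zmax: Var(Zmax) / <Zmax>,
        from a list of events, each a list of fragment charges."""
        zmax = np.array([max(ev) for ev in events])
        return zmax.var() / zmax.mean()

    def multiplicity_derivative(e_star, mult, bins=20):
        """Finite-difference dM/dE* from per-event excitation energies and
        multiplicities (assumes every bin is populated)."""
        edges = np.linspace(e_star.min(), e_star.max(), bins + 1)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mean_m = np.array([mult[(e_star >= lo) & (e_star < hi)].mean()
                           for lo, hi in zip(edges[:-1], edges[1:])])
        return 0.5 * (centers[1:] + centers[:-1]), np.diff(mean_m) / np.diff(centers)
    ```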

  10. Nuclear energy release from fragmentation

    NASA Astrophysics Data System (ADS)

    Li, Cheng; Souza, S. R.; Tsang, M. B.; Zhang, Feng-Shou

    2016-08-01

    It is well known that binary fission occurs with a positive energy gain. In this article we examine the energetics of splitting uranium and thorium isotopes into various numbers of fragments (from two to eight) of nearly equal size. We find that the energy released by splitting 230,232Th and 235,238U into three equal-size fragments is largest. The statistical multifragmentation model (SMM) is applied to calculate the probability of different breakup channels for excited nuclei. By weighting the probability distributions of fragment multiplicity at different excitation energies, we find that the peaks of energy release for 230,232Th and 235,238U are around 0.7-0.75 MeV/u at excitation energies between 1.2 and 2 MeV/u in the primary breakup process. Taking into account the secondary de-excitation of primary fragments with the GEMINI code, these energy peaks fall to about 0.45 MeV/u.
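
    The liquid-drop part of this energetics argument can be checked with a textbook Bethe-Weizsäcker binding-energy formula. The coefficients below are common textbook values and an assumption (pairing is omitted); the toy calculation reproduces the qualitative maximum at three fragments, not the SMM-weighted 0.7-0.75 MeV/u quoted above.

    ```python
    def binding(Z, A):
        """Bethe-Weizsacker binding energy (MeV), pairing term omitted;
        Z and A may be fractional for equal-split estimates."""
        a_v, a_s, a_c, a_a = 15.75, 17.8, 0.711, 23.7
        return (a_v * A - a_s * A ** (2 / 3)
                - a_c * Z * (Z - 1) / A ** (1 / 3)
                - a_a * (A - 2 * Z) ** 2 / A)

    def q_split(Z, A, n):
        """Energy released when (Z, A) splits into n equal fragments."""
        return n * binding(Z / n, A / n) - binding(Z, A)

    for n in range(2, 9):  # 238U: the release per nucleon peaks at n = 3
        print(n, f"{q_split(92, 238, n) / 238:.2f} MeV/u")
    ```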

  11. Dynamical and many-body correlation effects in the kinetic energy spectra of isotopes produced in nuclear multifragmentation

    NASA Astrophysics Data System (ADS)

    Souza, S. R.; Donangelo, R.; Lynch, W. G.; Tsang, M. B.

    2018-03-01

    The properties of the kinetic energy spectra of light isotopes produced in the breakup of a nuclear source, and during the de-excitation of its products, are examined. The initial stage, at which the hot fragments are created, is modeled by the statistical multifragmentation model, whereas the Weisskopf-Ewing evaporation treatment is adopted to describe the subsequent fragment de-excitation as the fragments follow their classical trajectories, dictated by the mutual Coulomb repulsion. The energy spectra obtained are compared to available experimental data. The fusion cross section entering the evaporation treatment is investigated and turns out to have only a small influence on the qualitative aspects of the energy spectra. Although these aspects can be fairly well described by the model, the underlying physics associated with the quantitative discrepancies remains to be understood.

  12. Reaction mechanisms and multifragmentation processes in 64Zn+58Ni at 35A-79A MeV

    NASA Astrophysics Data System (ADS)

    Wada, R.; Hagel, K.; Cibor, J.; Gonin, M.; Keutgen, Th.; Murray, M.; Natowitz, J. B.; Ono, A.; Steckmeyer, J. C.; Kerambrum, A.; Angélique, J. C.; Auger, A.; Bizard, G.; Brou, R.; Cabot, C.; Crema, E.; Cussol, D.; Durand, D.; El Masri, Y.; Eudes, P.; He, Z. Y.; Jeong, S. C.; Lebrun, C.; Patry, J. P.; Péghaire, A.; Peter, J.; Régimbart, R.; Rosato, E.; Saint-Laurent, F.; Tamain, B.; Vient, E.

    2000-09-01

    Reaction mechanisms and multifragmentation processes have been studied for 64Zn+58Ni collisions at intermediate energies with the help of antisymmetrized molecular dynamics (AMD-V) model calculations. Experimental energy spectra, angular distributions, charge distributions, and isotope distributions, classified by their associated charged particle multiplicities, are compared with the results of the AMD-V calculations. In general the experimental results are reasonably well reproduced by the calculations. The multifragmentation observed experimentally at all incident energies is also reproduced by the AMD-V calculations. A detailed study of AMD-V events reveals that, in nucleon transport, the reaction shows some transparency, whereas in energy transport the reaction is much less transparent at all incident energies studied here. The transparency in the nucleon transport indicates that, even for central collisions, about 75% of the projectile nucleons appear in the forward direction. In energy transport about 80% of the initial kinetic energy of the projectile in the center-of-mass frame is dissipated. The detailed study of AMD-V events also elucidates the dynamics of the multifragmentation process. The study suggests that, at 35A MeV, the semitransparency and thermal expansion are the dominant mechanisms for the multifragmentation process, whereas at 49A MeV and higher incident energies a nuclear compression occurs at an early stage of the reaction and plays an important role in the multifragmentation process in addition to that of the thermal expansion and the semitransparency.

  13. Production of exotic nuclei in projectile fragmentation at relativistic and Fermi energies

    NASA Astrophysics Data System (ADS)

    Ogul, R.; Ergun, A.; Buyukcizmeci, N.

    2017-02-01

    Isotopic distributions of projectile fragmentation in peripheral heavy-ion collisions of 86Kr on 112Sn are calculated within the statistical multifragmentation model. The obtained results are compared to experimental cross-section measurements. We reproduce the enhancement in the production of neutron-rich isotopes close to the projectile that is observed in the experiments. Our results show the universality of the limitation of the excitation energy induced in the projectile residues.

  14. The AO Pediatric Comprehensive Classification of Long Bone Fractures (PCCF).

    PubMed

    Audigé, Laurent; Slongo, Theddy; Lutz, Nicolas; Blumenthal, Andrea; Joeris, Alexander

    2017-04-01

    Background and purpose - The AO Pediatric Comprehensive Classification of Long Bone Fractures (PCCF) describes the localization and morphology of fractures, and considers severity in 2 categories: (1) simple, and (2) multifragmentary. We evaluated simple and multifragmentary fractures in a large consecutive cohort of children diagnosed with long bone fractures in Switzerland. Patients and methods - Children and adolescents treated for fractures between 2009 and 2011 at 2 tertiary pediatric surgery hospitals were retrospectively included. Fractures were classified according to the AO PCCF. Severity classes were described according to fracture location, patient age and sex, BMI, and cause of trauma. Results - Of all trauma events, 3% (84 of 2,730) were diagnosed with a multifragmentary fracture. This proportion was age-related: 2% of multifragmentary fractures occurred in school-children and 7% occurred in adolescents. In patients diagnosed with a single fracture only, the highest percentage of multifragmentation occurred in the femur (12%, 15 of 123). In fractured paired radius/ulna bones, multifragmentation occurred in 2% (11 of 687); in fractured paired tibia/fibula bones, it occurred in 21% (24 of 115), particularly in schoolchildren (5 of 18) and adolescents (16 of 40). In a multivariable regression model, age, cause of injury, and bone were found to be relevant prognostic factors of multifragmentation (odds ratio (OR) > 2). Interpretation - Overall, multifragmentation in long bone fractures in children was rare and was mostly observed in adolescents. The femur was mostly affected in single fractures and the lower leg was mostly affected in paired-bone fractures. The clinical relevance of multifragmentation regarding growth and long-term functional recovery remains to be determined.

  15. Fragmentation patterns of multicharged C60^r+ (r=3-5) studied with well-controlled internal excitation energy

    NASA Astrophysics Data System (ADS)

    Martin, S.; Chen, L.; Salmoun, A.; Li, B.; Bernard, J.; Brédy, R.

    2008-04-01

    We have studied the relaxation of triply charged C60 obtained in collisions F2+ + C60 → F- + C60^3+* at low impact energy (E = 6.8 keV). Depending on the excitation energy, these initial parent ions decay following a variety of channels, such as thermal electronic ionization, evaporation of C2 units, asymmetrical fission, and multifragmentation. Using a recently developed experimental method, named collision-induced dissociation under energy control, we were able to measure the energy deposited in C60^3+* for each collision event and to obtain an excitation energy profile of the parent ions associated with each decay channel. In our chosen observation time scale, of the order of 1 μs, evaporations and asymmetrical fissions of C60^3+,4+ occur when the internal energy is in the range from 40 to 100 eV. The multifragmentation becomes dominant for multicharged C60^4+,5+ parent ions from 100 to 210 eV. In the case of C60^4+, the multifragmentation channel is opened at low energy (40 eV). Therefore, in the energy range 40-100 eV, the asymmetrical fission, evaporation, and multifragmentation channels are in competition.

  16. The AO Pediatric Comprehensive Classification of Long Bone Fractures (PCCF)

    PubMed Central

    Audigé, Laurent; Slongo, Theddy; Lutz, Nicolas; Blumenthal, Andrea; Joeris, Alexander

    2017-01-01

    Background and purpose The AO Pediatric Comprehensive Classification of Long Bone Fractures (PCCF) describes the localization and morphology of fractures, and considers severity in 2 categories: (1) simple, and (2) multifragmentary. We evaluated simple and multifragmentary fractures in a large consecutive cohort of children diagnosed with long bone fractures in Switzerland. Patients and methods Children and adolescents treated for fractures between 2009 and 2011 at 2 tertiary pediatric surgery hospitals were retrospectively included. Fractures were classified according to the AO PCCF. Severity classes were described according to fracture location, patient age and sex, BMI, and cause of trauma. Results Of all trauma events, 3% (84 of 2,730) were diagnosed with a multifragmentary fracture. This proportion was age-related: 2% of multifragmentary fractures occurred in schoolchildren and 7% occurred in adolescents. In patients diagnosed with a single fracture only, the highest percentage of multifragmentation occurred in the femur (12%, 15 of 123). In fractured paired radius/ulna bones, multifragmentation occurred in 2% (11 of 687); in fractured paired tibia/fibula bones, it occurred in 21% (24 of 115), particularly in schoolchildren (5 of 18) and adolescents (16 of 40). In a multivariable regression model, age, cause of injury, and bone were found to be relevant prognostic factors of multifragmentation (odds ratio (OR) > 2). Interpretation Overall, multifragmentation in long bone fractures in children was rare and was mostly observed in adolescents. The femur was mostly affected in single fractures and the lower leg was mostly affected in paired-bone fractures. The clinical relevance of multifragmentation regarding growth and long-term functional recovery remains to be determined. PMID:27882814

  17. VizieR Online Data Catalog: Supernova matter EOS (Buyukcizmeci+, 2014)

    NASA Astrophysics Data System (ADS)

    Buyukcizmeci, N.; Botvina, A. S.; Mishustin, I. N.

    2017-03-01

    The Statistical Model for Supernova Matter (SMSM) was developed in Botvina & Mishustin (2004, PhLB, 584, 233; 2010, NuPhA, 843, 98) as a direct generalization of the Statistical Multifragmentation Model (SMM; Bondorf et al. 1995, PhR, 257, 133). We treat supernova matter as a mixture of nuclear species, electrons, and photons in statistical equilibrium. The SMSM EOS tables cover the following ranges of control parameters: 1. Temperature: T = 0.2-25 MeV, giving 35 T values. 2. Electron fraction Ye = 0.02-0.56, on a linear mesh with step 0.02, giving 28 Ye values; Ye is equal to the total proton fraction Xp, due to charge neutrality. 3. Baryon number density fraction ρ/ρ0 = 10^-8 to 0.32, giving 31 ρ/ρ0 values. (2 data files).
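
    Written out, the stated meshes look as follows. The Ye grid is fully specified above; the T and density spacings are not stated, so the linear and logarithmic meshes below are assumptions.

    ```python
    import numpy as np

    Ye = np.arange(0.02, 0.56 + 1e-9, 0.02)         # 28 values, as stated
    T = np.linspace(0.2, 25.0, 35)                  # MeV; 35 values, spacing assumed linear
    rho_frac = np.logspace(-8, np.log10(0.32), 31)  # 31 values, spacing assumed logarithmic

    print(len(Ye), len(T), len(rho_frac))           # 28 35 31 grid axes
    ```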

  18. I. Excluded volume effects in Ising cluster distributions and nuclear multifragmentation. II. Multiple-chance effects in alpha-particle evaporation

    NASA Astrophysics Data System (ADS)

    Breus, Dimitry Eugene

    In Part I, geometric clusters of the Ising model are studied as possible model clusters for nuclear multifragmentation. These clusters may not be considered as non-interacting (ideal gas) due to an excluded volume effect, which is predominantly an artifact of the cluster's finite size. Interaction significantly complicates the use of clusters in the analysis of thermodynamic systems. Stillinger's theory is used as a basis for the analysis, which within the RFL (Reiss, Frisch, Lebowitz) fluid-of-spheres approximation produces a prediction for cluster concentrations well obeyed by geometric clusters of the Ising model. If the thermodynamic condition of phase coexistence is met, these concentrations can be incorporated into a differential equation procedure of moderate complexity to elucidate the liquid-vapor phase diagram of the system with cluster interaction included. The drawback of increased complexity is outweighed by the reward of greater accuracy of the phase diagram, as demonstrated by the Ising model. A novel nuclear-cluster analysis procedure is developed by modifying Fisher's model to contain cluster interaction and employing the differential equation procedure to obtain thermodynamic variables. With this procedure applied to geometric clusters, guidelines are developed to look for the excluded volume effect in nuclear multifragmentation. In Part II, an explanation is offered for the recently observed oscillations in the energy spectra of alpha-particles emitted from hot compound nuclei. Contrary to what was previously expected, the oscillations are assumed to be caused by the multiple-chance nature of alpha-evaporation. In a semi-empirical fashion this assumption is successfully confirmed by a technique of two-spectra decomposition which treats experimental alpha-spectra as having contributions from at least two independent emitters. Building upon the success of the multiple-chance explanation of the oscillations, Moretto's single-chance evaporation theory is augmented to include multiple-chance emission and tested on experimental data to yield positive results.

  19. Angular distributions in multifragmentation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stoenner, R.W.; Klobuchar, R.L.; Haustein, P.E.

    2006-04-15

    Angular distributions are reported for 37Ar and 127Xe from 381-GeV 28Si+Au interactions and for products between 24Na and 149Gd from 28-GeV 1H+Au. Sideward peaking and forward deficits for multifragmentation products are significantly enhanced for heavy ions compared with protons. Projectile kinetic energy does not appear to be a satisfactory scaling variable. The data are discussed in terms of a kinetic-focusing model in which sideward peaking is due to transverse motion of the excited product from the initial projectile-target interaction.

  20. Studies of nuclei under the extreme conditions of density, temperature, isospin asymmetry and the phase diagram of hadronic matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mekjian, Aram

    2016-10-18

    The main emphasis of the entire project is on issues related to medium-energy and ultra-relativistic-energy heavy-ion collisions. A major goal of both theory and experiment is to study the properties of hot, dense nuclear matter under various extreme conditions and to map out the phase diagram in density or chemical potential and temperature. My studies in medium-energy nuclear collisions focused on the liquid-gas phase transition and cluster yields from such transitions. Here I developed both a statistical model of nuclear multifragmentation and a mean-field theory.

  1. Dynamics of hot rotating nuclei

    NASA Astrophysics Data System (ADS)

    Garcias, F.; de La Mota, V.; Remaud, B.; Royer, G.; Sébille, F.

    1991-02-01

    The deexcitation of hot rotating nuclei is studied within a microscopic semiclassical transport formalism. This framework allows the study of the competition between the fission and evaporation channels of deexcitation, including the mean-field and two-body interactions, without shape constraints for the fission channel. As a function of the initial angular momentum and excitation energy, the transitions between three regimes [particle evaporation, binary (ternary) fission, and multifragmentation] are analyzed; these regimes correspond to well-defined symmetry breakings in the inertia tensor of the system. The competition between evaporation and binary fission is studied, showing the progressive disappearance of the fission process with increasing excitation energy, up to a critical point where nuclei pass directly from evaporation to multifragmentation channels.

  2. Ratio of shear viscosity to entropy density in multifragmentation of Au + Au

    NASA Astrophysics Data System (ADS)

    Zhou, C. L.; Ma, Y. G.; Fang, D. Q.; Li, S. X.; Zhang, G. Q.

    2012-06-01

    The ratio of shear viscosity (η) to entropy density (s) for intermediate-energy heavy-ion collisions has been calculated using the Green-Kubo method in the framework of the quantum molecular dynamics model. The theoretical curve of η/s as a function of incident energy for head-on Au + Au collisions shows that a minimum region of η/s is approached at higher incident energies, where the minimum η/s value is about 7 times the Kovtun-Son-Starinets (KSS) bound (1/4π). We argue that the onset of the minimum η/s region at higher incident energies corresponds to the nuclear liquid-gas phase transition in nuclear multifragmentation.
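
    The Green-Kubo relation takes the shear viscosity from the time autocorrelation of an off-diagonal stress-tensor component, η = (V/T) ∫ ⟨P_xy(0) P_xy(t)⟩ dt in units with k_B = 1. Below is a generic discrete-time sketch of that integral, not the authors' QMD implementation; the stress time series is an assumed input.

    ```python
    import numpy as np

    def shear_viscosity(pxy, dt, volume, temperature):
        """Green-Kubo eta from a time series of the xy stress component."""
        n = len(pxy)
        pxy = pxy - pxy.mean()
        # Autocorrelation at lags 0..n-1, each normalized by its sample count.
        acf = np.correlate(pxy, pxy, mode="full")[n - 1:] / np.arange(n, 0, -1)
        return volume / temperature * np.trapz(acf, dx=dt)
    ```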

  3. Using experimental data to test an n -body dynamical model coupled with an energy-based clusterization algorithm at low incident energies

    NASA Astrophysics Data System (ADS)

    Kumar, Rohit; Puri, Rajeev K.

    2018-03-01

    Employing the quantum molecular dynamics (QMD) approach for nucleus-nucleus collisions, we test the predictive power of an energy-based clusterization algorithm, the simulated annealing clusterization algorithm (SACA), in describing the experimental data on charge distributions and various event-by-event correlations among fragments. The calculations are constrained to the Fermi-energy domain and/or mildly excited nuclear matter. Our detailed study spans different system masses and system-mass asymmetries of the colliding partners, and shows the importance of the energy-based clusterization algorithm for understanding multifragmentation. The present calculations are also compared with other available calculations, which use one-body models, statistical models, and/or hybrid models.

  4. Effects of medium on nuclear properties in multifragmentation

    NASA Astrophysics Data System (ADS)

    De, J. N.; Samaddar, S. K.; Viñas, X.; Centelles, M.; Mishustin, I. N.; Greiner, W.

    2012-08-01

    In multifragmentation of hot nuclear matter, properties of fragments embedded in a soup of nucleonic gas and other fragments should be modified as compared with isolated nuclei. Such modifications are studied within a simple model where only nucleons and one kind of heavy nuclei are considered. The interaction between different species is described with a momentum-dependent two-body potential whose parameters are fitted to reproduce properties of cold isolated nuclei. The internal energy of heavy fragments is parametrized according to a liquid-drop model with density- and temperature-dependent parameters. Calculations are carried out for several subnuclear densities and moderate temperatures, for isospin-symmetric and asymmetric systems. We find that the fragments get stretched due to interactions with the medium and their binding energies decrease with increasing temperature and density of nuclear matter.

  5. Ion-impact-induced multifragmentation of liquid droplets

    NASA Astrophysics Data System (ADS)

    Surdutovich, Eugene; Verkhovtsev, Alexey; Solov'yov, Andrey V.

    2017-11-01

    An instability of a liquid droplet traversed by an energetic ion is explored theoretically. This instability is brought about by the shock wave predicted to be induced by the ion. An observation of multifragmentation of small droplets traversed by ions with high linear energy transfer is suggested as a demonstration of the existence of shock waves. A number of effects are analysed in an effort to find the conditions under which such an experiment would be conclusive. The presence of shock waves crucially affects the scenario of radiation damage with ions, since the shock waves significantly contribute to the thermomechanical damage of biomolecules as well as to the transport of reactive species. While this scenario has been upheld by analyses of biological experiments, the shock waves have not yet been observed directly, although a number of ideas for experiments to detect them have been exchanged at conferences. Contribution to the Topical Issue "Dynamics of Systems at the Nanoscale", edited by Andrey Solov'yov and Andrei Korol.

  6. Effect of scaled Gaussian width (SGW) on fragment flow and multifragmentation in heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Rajni; Kumar, Suneel

    2012-02-01

    We have analyzed the role of the interaction range in multifragmentation within the isospin-dependent quantum molecular dynamics (IQMD) model. We find that the effect of the width of the Gaussian wave packet associated with a nucleon depends on the mass of the colliding system. For a given set of input parameters, we find that the width has a sizable effect. At the same time, we know that a different set of parameters can influence the reaction dynamics drastically. Hence, in our opinion it may not be possible to pin down the width to a very narrow level. A systematic study of the mass effect (197Au, 124La, 124Sn, 107Sn) in the breakup of a projectile spectator at intermediate energies has been performed. We also studied the disappearance of flow, which demonstrates the effect of the scaled Gaussian width (SGW). Our studies show that the SGW influences the reaction dynamics.

  7. Fusion and reaction mechanism evolution in 24Mg+12C at intermediate energies

    NASA Astrophysics Data System (ADS)

    Samri, M.; Grenier, F.; Ball, G. C.; Beaulieu, L.; Gingras, L.; Horn, D.; Larochelle, Y.; Moustabchir, R.; Roy, R.; St-Pierre, C.; Theriault, D.

    2002-06-01

    The formation and deexcitation of fusionlike events, selected from events with a total charge equal to or greater than 16 in the 24Mg+12C system, have been investigated at 25, 35, and 45 MeV/nucleon with a large multidetector array. Central single-source events are selected by use of the statistical discriminant analysis method applied to a set of 26 global variables. The fusion cross section has been extracted for the three bombarding energies and compared to other experimental data and to theoretical predictions. The total multiplicity is found to first increase to a maximum value and then decrease with increasing beam energy. It is shown that this behavior is connected to the opening of multifragmentation channels at 45 MeV/nucleon and the disappearance of channels with only light charged particles.

  8. Strange quark matter fragmentation in astrophysical events

    NASA Astrophysics Data System (ADS)

    Paulucci, L.; Horvath, J. E.

    2014-06-01

    The conjecture of Bodmer-Witten-Terazawa suggesting a form of quark matter (Strange Quark Matter) as the ground state of hadronic interactions has been studied in laboratory and astrophysical contexts by a large number of authors. If strange stars exist, some violent events involving these compact objects, such as mergers and even their formation process, might eject some strange matter into the interstellar medium that could be detected as a trace signal in the cosmic ray flux. To evaluate this possibility, it is necessary to understand how this matter in bulk would fragment in the form of strangelets (small lumps of strange quark matter in which finite effects become important). We calculate the mass distribution outcome using the statistical multifragmentation model and point out several caveats affecting it. In particular, the possibility that strangelets fragmentation will render a tiny fraction of contamination in the cosmic ray flux is discussed.

  9. Formation of H2 from internally heated polycyclic aromatic hydrocarbons: Excitation energy dependence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, T., E-mail: tao.chen@fysik.su.se, E-mail: henning@fysik.su.se; Gatchell, M.; Stockett, M. H.

    2015-04-14

    We have investigated the effectiveness of molecular hydrogen (H2) formation from polycyclic aromatic hydrocarbons (PAHs) which are internally heated by collisions with keV ions. The present and earlier experimental results are analyzed in view of molecular structure calculations and a simple collision model. We estimate that H2 formation becomes important for internal PAH temperatures exceeding about 2200 K, regardless of the PAH size and the excitation agent. This suggests that keV ions may effectively induce such reactions, while they are unlikely due to, e.g., absorption of single photons with energies below the Lyman limit. The present analysis also suggests that H2 emission is correlated with multi-fragmentation processes, which means that the [PAH-2H]+ peak intensities in the mass spectra may not be used for estimating H2-formation rates.

  10. Study of nuclear multifragmentation induced by ultrarelativistic μ-mesons in nuclear track emulsion

    NASA Astrophysics Data System (ADS)

    Artemenkov, D. A.; Bradnova, V.; Firu, E.; Kornegrutsa, N. K.; Haiduc, M.; Mamatkulov, K. Z.; Kattabekov, R. R.; Neagu, A.; Rukoyatkin, P. A.; Rusakova, V. V.; Stanoeva, R.; Zaitsev, A. A.; Zarubin, P. I.; Zarubina, I. G.

    2016-02-01

    Exposures of test samples of nuclear track emulsion were analyzed. The formation of high-multiplicity nuclear stars was observed upon irradiating nuclear track emulsions with ultrarelativistic muons. Kinematical features studied in this exposure of nuclear track emulsions for events of the muon-induced splitting of carbon nuclei to three α-particles are indicative of the nuclear-diffraction interaction mechanism.

  11. Neutron-rich rare-isotope production from projectile fission of heavy nuclei near 20 MeV/nucleon beam energy

    NASA Astrophysics Data System (ADS)

    Vonta, N.; Souliotis, G. A.; Loveland, W.; Kwon, Y. K.; Tshoo, K.; Jeong, S. C.; Veselsky, M.; Bonasera, A.; Botvina, A.

    2016-12-01

    We investigate the possibilities of producing neutron-rich nuclides in projectile fission of heavy beams in the energy range of 20 MeV/nucleon expected from low-energy facilities. We report our efforts to theoretically describe the reaction mechanism of projectile fission following a multinucleon transfer collision at this energy range. Our calculations are mainly based on a two-step approach: The dynamical stage of the collision is described with either the phenomenological deep-inelastic transfer model (DIT) or with the microscopic constrained molecular dynamics model (CoMD). The de-excitation or fission of the hot heavy projectile fragments is performed with the statistical multifragmentation model (SMM). We compared our model calculations with our previous experimental projectile-fission data of 238U (20 MeV/nucleon) + 208Pb and 197Au (20 MeV/nucleon) + 197Au and found an overall reasonable agreement. Our study suggests that projectile fission following peripheral heavy-ion collisions at this energy range offers an effective route to access very neutron-rich rare isotopes toward and beyond the astrophysical r-process path.

  12. Study of collective flows of protons and π−-mesons in p+C, Ta and He+Li, C collisions at momenta of 4.2, 4.5 and 10 AGeV/c

    NASA Astrophysics Data System (ADS)

    Chkhaidze, L.; Chlachidze, G.; Djobava, T.; Galoyan, A.; Kharkhelauri, L.; Togoo, R.; Uzhinsky, V.

    2016-11-01

    Collective flows of protons and π−-mesons are studied at momenta of 4.2, 4.5 and 10 AGeV/c for p+C, Ta and He+Li, C interactions. The data were obtained from the streamer chamber (SKM-200-GIBS) and Propane Bubble Chamber (PBC-500) systems utilized at JINR. A method of Danielewicz and Odyniec has been employed in determining a directed transverse flow of particles. The values of the transverse flow parameter and the strength of the anisotropic emission were determined for each interacting nuclear pair. It is found that the directed flows of protons and pions decrease with increasing energy and mass number of the colliding nucleus pairs. The π−-meson and proton flows exhibit opposite directions in all studied interactions, and the flows of protons are directed in the reaction plane. The Ultra-relativistic Quantum Molecular Dynamical Model (UrQMD), coupled with the Statistical Multi-fragmentation Model (SMM), satisfactorily describes the obtained experimental results.
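
    The Danielewicz-Odyniec method estimates the reaction plane event by event from a weighted transverse-momentum Q-vector and reads the directed-flow parameter F off the slope of the mean in-plane transverse momentum versus rapidity. The sketch below is a simplified single-event version with assumed inputs; the full prescription also excludes the particle of interest from the Q-vector to remove autocorrelations.

    ```python
    import numpy as np

    def directed_flow(px, py, y, y_cm=0.0, delta=0.1, fit_window=0.5):
        """px, py, y: per-particle arrays for one event.  Weights are +1
        forward / -1 backward of mid-rapidity, 0 inside a gap of half-width
        delta, following the Danielewicz-Odyniec prescription."""
        w = np.where(y > y_cm + delta, 1.0,
                     np.where(y < y_cm - delta, -1.0, 0.0))
        phi = np.arctan2(np.sum(w * py), np.sum(w * px))  # reaction-plane angle
        px_in = px * np.cos(phi) + py * np.sin(phi)       # in-plane momentum
        sel = np.abs(y - y_cm) < fit_window
        F, _ = np.polyfit(y[sel] - y_cm, px_in[sel], 1)   # slope at mid-rapidity
        return F
    ```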

  13. Power law behavior of the isotope yield distributions in the multifragmentation regime of heavy ion reactions

    NASA Astrophysics Data System (ADS)

    Huang, M.; Wada, R.; Chen, Z.; Keutgen, T.; Kowalski, S.; Hagel, K.; Barbui, M.; Bonasera, A.; Bottosso, C.; Materna, T.; Natowitz, J. B.; Qin, L.; Rodrigues, M. R. D.; Sahu, P. K.; Schmidt, K. J.; Wang, J.

    2010-11-01

    Isotope yield distributions in the multifragmentation regime were studied with high-quality isotope identification, focusing on the intermediate mass fragments (IMFs) produced in semiviolent collisions. The yields were analyzed within the framework of a modified Fisher model. Using the ratio of the mass-dependent symmetry energy coefficient relative to the temperature, asym/T, extracted in previous work, and that of the pairing term, ap/T, extracted from this work, and assuming that both reflect secondary decay processes, the experimentally observed isotope yields were corrected for these effects. For a given I = N - Z value, the corrected yields of isotopes relative to the yield of 12C show a power law distribution Y(N,Z)/Y(12C) ~ A^-τ in the mass range 1 ≤ A ≤ 30, and the distributions are almost identical for the different reactions studied. The observed power law distributions change systematically when I of the isotopes changes, and the extracted τ value decreases from 3.9 to 1.0 as I increases from -1 to 3. These observations are well reproduced by a simple deexcitation model, with which the power law distribution of the primary isotopes is determined to be τ_prim = 2.4 ± 0.2, suggesting that the disassembling system at the time of fragment formation is indeed at, or very near, the critical point.
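
    The quoted power law Y(N,Z)/Y(12C) ~ A^-τ amounts to a straight line in log-log space, so τ can be read off a least-squares fit; the arrays below are hypothetical, for illustration only.

    ```python
    import numpy as np

    def fit_tau(A, y_rel):
        """Fit relative yields y_rel = Y(N,Z)/Y(12C) ~ A**(-tau) for one
        I = N - Z family; returns tau from a log-log linear fit."""
        slope, _ = np.polyfit(np.log(A), np.log(y_rel), 1)
        return -slope

    # Synthetic check: data generated as A**(-2.4) returns tau = 2.4.
    A = np.arange(2.0, 31.0)
    print(fit_tau(A, A ** -2.4))
    ```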

  14. Study of collective flows of protons and π−-mesons in p+C, Ta and He+Li, C collisions at momenta of 4.2, 4.5 and 10 AGeV/c

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chkhaidze, L.; Chlachidze, G.; Djobava, T.

    Collective flows of protons and π−-mesons are studied at momenta of 4.2, 4.5 and 10 AGeV/c for p+C, Ta and He+Li, C interactions. The data were obtained from the streamer chamber (SKM-200-GIBS) and Propane Bubble Chamber (PBC-500) systems utilized at JINR. A method of Danielewicz and Odyniec has been employed in determining a directed transverse flow of particles. The values of the transverse flow parameter and the strength of the anisotropic emission were determined for each interacting nuclear pair. It is found that the directed flows of protons and pions decrease with increasing energy and mass number of the colliding nucleus pairs. The π−-meson and proton flows exhibit opposite directions in all studied interactions, and the flows of protons are directed in the reaction plane. Lastly, the Ultra-relativistic Quantum Molecular Dynamical Model (UrQMD), coupled with the Statistical Multi-fragmentation Model (SMM), satisfactorily describes the obtained experimental results.

  15. Study of collective flows of protons and π−-mesons in p+C, Ta and He+Li, C collisions at momenta of 4.2, 4.5 and 10 AGeV/c

    DOE PAGES

    Chkhaidze, L.; Chlachidze, G.; Djobava, T.; ...

    2016-11-01

    Collective flows of protons and π−-mesons are studied at momenta of 4.2, 4.5 and 10 AGeV/c for p+C, Ta and He+Li, C interactions. The data were obtained from the streamer chamber (SKM-200-GIBS) and Propane Bubble Chamber (PBC-500) systems utilized at JINR. A method of Danielewicz and Odyniec has been employed in determining a directed transverse flow of particles. The values of the transverse flow parameter and the strength of the anisotropic emission were determined for each interacting nuclear pair. It is found that the directed flows of protons and pions decrease with increasing energy and mass number of the colliding nucleus pairs. The π−-meson and proton flows exhibit opposite directions in all studied interactions, and the flows of protons are directed in the reaction plane. Lastly, the Ultra-relativistic Quantum Molecular Dynamical Model (UrQMD), coupled with the Statistical Multi-fragmentation Model (SMM), satisfactorily describes the obtained experimental results.

  16. Isotopic dependence of the fragments' internal temperatures determined from multifragment emission

    NASA Astrophysics Data System (ADS)

    Souza, S. R.; Donangelo, R.

    2018-05-01

    The internal temperatures of fragments produced by an excited nuclear source are investigated by using the microcanonical version of the statistical multifragmentation model, with discrete energy. We focus on the fragments' properties at the breakup stage, before they have time to deexcite by particle emission. Since the adopted model provides the excitation energy distribution of these primordial fragments, it allows one to calculate the temperatures of different isotope families and to make inferences about the sensitivity to their isospin composition. It is found that, due to the functional form of the nuclear density of states and the excitation energy distribution of the fragments, proton-rich isotopes are hotter than neutron-rich isotopes. This property has been taken to be an indication of earlier emission of the former from a source that cools down as it expands and emits fragments. Although this scenario is incompatible with the prompt breakup of a thermally equilibrated source, our results reveal that the latter framework also provides the same qualitative features just mentioned. Therefore they suggest that this property cannot be taken as evidence for nonequilibrium emission. We also found that this sensitivity to the isotopic composition of the fragments depends on the isospin composition of the source, and that it is weakened as the excitation energy of the source increases.
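
    A back-of-envelope way to see why fragments drawing larger excitation energies register as hotter is the Fermi-gas relation E* = a T^2. The level-density parameter a = A/8 MeV^-1 used below is a common rule of thumb and an assumption here, not the paper's microcanonical treatment.

    ```python
    import math

    def fragment_temperature(e_star, A, a_denom=8.0):
        """T (MeV) from excitation energy E* (MeV) of a mass-A fragment,
        inverting E* = a T^2 with a = A / a_denom (MeV^-1)."""
        return math.sqrt(e_star * a_denom / A)

    # A 12-nucleon fragment carrying E* = 20 MeV sits near T = 3.7 MeV.
    print(f"{fragment_temperature(20.0, 12):.2f} MeV")
    ```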

  17. Fragment emission from the mass-symmetric reactions 58Fe,58Ni +58Fe,58Ni at Ebeam=30 MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Ramakrishnan, E.; Johnston, H.; Gimeno-Nogues, F.; Rowland, D. J.; Laforest, R.; Lui, Y.-W.; Ferro, S.; Vasal, S.; Yennello, S. J.

    1998-04-01

    The mass-symmetric reactions 58Fe,58Ni +58Fe,58Ni were studied at a beam energy of Ebeam=30 MeV/nucleon in order to investigate the isospin dependence of fragment emission. Ratios of inclusive yields of isotopic fragments from hydrogen through nitrogen were extracted as a function of laboratory angle. A moving source analysis of the data indicates that at laboratory angles around 40° the yield of intermediate mass fragments (IMF's) beyond Z=3 is predominantly from a midrapidity source. The angular dependence of the relative yields of isotopes beyond Z=3 indicates that the IMF's at more central angles originate from a source which is more neutron deficient than the source responsible for fragments emitted at forward angles. The charge distributions and kinetic energy spectra of the IMF's at various laboratory angles were well reproduced by calculations employing a quantum molecular-dynamics code followed by a statistical multifragmentation model for generating fragments. The calculations indicate that the measured IMF's originate mainly from a single source. The isotopic composition of the emitted fragments is, however, not reproduced by the same calculation. The measured isotopic and isobaric ratios indicate an emitting source that is more neutron rich in comparison to the source predicted by model calculations.

  18. Interaction of 160-GeV muon with emulsion nuclei

    NASA Astrophysics Data System (ADS)

    Othman, S. M.; Ghoneim, M. T.; Hussein, M. T.; El-Samman, H.; Hussein, A.

    In this work we present some results on the interaction of high-energy muons with emulsion nuclei. The interaction results in the emission of a number of fragments as a consequence of the electromagnetic dissociation of the excited target nuclei. This excitation is attributed to the absorption of photons by the target nuclei due to the intense electric field of the very fast incident muons. The interactions occur at impact parameters that allow ultra-peripheral collisions, leading to giant resonances and hence multifragmentation of the emulsion targets. Charge identification, range, energy spectra, angular distributions and topological cross-sections of the produced fragments are measured and evaluated.

  19. Radial flow in 40Ar+45Sc reactions at E=35-115 MeV/nucleon

    NASA Astrophysics Data System (ADS)

    Pak, R.; Craig, D.; Gualtieri, E. E.; Hannuschke, S. A.; Lacey, R. A.; Lauret, J.; Llope, W. J.; Stone, N. T. B.; Vander Molen, A. M.; Westfall, G. D.; Yee, J.

    1996-10-01

    Collective radial flow of light fragments from 40Ar+45Sc reactions at beam energies between 35 and 115 MeV/nucleon has been investigated using the Michigan State University 4π Array. The mean transverse kinetic energy ⟨Et⟩ of the different fragment types increases with event centrality and increases as a function of the incident beam energy. Comparison of our measured values of ⟨Et⟩ shows agreement with predictions of Boltzmann-Uehling-Uhlenbeck and WIX multifragmentation model calculations. The radial flow extracted from ⟨Et⟩ accounts for approximately half of the emitted particles' energy for the heavier fragments (Z ≥ 4) at the highest beam energy studied.

  20. The decay of hot nuclei formed in La-induced reactions at E/A=45 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Libby, Bruce

    1993-01-01

    The decay of hot nuclei formed in the reactions 139La + 27Al, 51V, natCu, and 139La was studied by the coincident detection of up to four complex fragments (Z > 3) emitted in these reactions. Fragments were characterized as to their atomic number, energy, and in- and out-of-plane angles. The probability of decay by an event of a given complex fragment multiplicity as a function of excitation energy per nucleon of the source is nearly independent of the system studied. Additionally, there is no large increase in the proportion of multiple fragment events as the excitation energy of the source increases past 5 MeV/nucleon. This is at odds with many prompt multifragmentation models of nuclear decay. The reactions 139La + 27Al, 51V, natCu were also studied by combining a dynamical model calculation that simulates the early stages of nuclear reactions with a statistical model calculation for the latter stages of the reactions. For the reaction 139La + 27Al, these calculations reproduced many of the experimental features, but other features were not reproduced. For the reaction 139La + 51V, the calculation failed to reproduce somewhat more of the experimental features. The calculation failed to reproduce any of the experimental features of the reaction 139La + natCu, with the exception of the source velocity distributions.

  1. The decay of hot nuclei formed in La-induced reactions at E/A=45 MeV

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Libby, B.

    1993-01-01

    The decay of hot nuclei formed in the reactions 139La + 27Al, 51V, natCu, and 139La was studied by the coincident detection of up to four complex fragments (Z > 3) emitted in these reactions. Fragments were characterized as to their atomic number, energy, and in- and out-of-plane angles. The probability of decay by an event of a given complex fragment multiplicity as a function of excitation energy per nucleon of the source is nearly independent of the system studied. Additionally, there is no large increase in the proportion of multiple fragment events as the excitation energy of the source increases past 5 MeV/nucleon. This is at odds with many prompt multifragmentation models of nuclear decay. The reactions 139La + 27Al, 51V, natCu were also studied by combining a dynamical model calculation that simulates the early stages of nuclear reactions with a statistical model calculation for the latter stages of the reactions. For the reaction 139La + 27Al, these calculations reproduced many of the experimental features, but other features were not reproduced. For the reaction 139La + 51V, the calculation failed to reproduce somewhat more of the experimental features. The calculation failed to reproduce any of the experimental features of the reaction 139La + natCu, with the exception of the source velocity distributions.

  2. Fragmentation of endohedral fullerene Ho3N@C80 in an intense femtosecond near-infrared laser field

    DOE PAGES

    Xiong, Hui; Fang, Li; Osipov, Timur; ...

    2018-02-22

    The fragmentation of the gas-phase endohedral fullerene Ho3N@C80 was investigated using femtosecond near-infrared laser pulses with an ion velocity map imaging spectrometer. Here, we observed that the Ho+ abundance associated with carbon cage opening dominates at an intensity of 1.1 × 10^14 W/cm^2. As the intensity increases, the Ho+ yield associated with multifragmentation of the carbon cage exceeds the prominence of Ho+ associated with the gentler carbon cage opening. Moreover, the power law dependence of Ho+ on laser intensity indicates that the transition between the most likely fragmentation mechanisms occurs around 2.0 × 10^14 W/cm^2.

  3. Fragmentation of endohedral fullerene Ho3N@C80 in an intense femtosecond near-infrared laser field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xiong, Hui; Fang, Li; Osipov, Timur

    The fragmentation of the gas-phase endohedral fullerene Ho3N@C80 was investigated using femtosecond near-infrared laser pulses with an ion velocity map imaging spectrometer. Here, we observed that the Ho+ abundance associated with carbon cage opening dominates at an intensity of 1.1 × 10^14 W/cm^2. As the intensity increases, the Ho+ yield associated with multifragmentation of the carbon cage exceeds the prominence of Ho+ associated with the gentler carbon cage opening. Moreover, the power law dependence of Ho+ on laser intensity indicates that the transition between the most likely fragmentation mechanisms occurs around 2.0 × 10^14 W/cm^2.

  4. Analyzing fragment production in mass-asymmetric reactions as a function of density dependent part of symmetry energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaur, Amandeep; Deepshikha; Vinayak, Karan Singh

    2016-07-15

    We performed a theoretical investigation of different mass-asymmetric reactions to assess the direct impact of the density-dependent part of the symmetry energy on multifragmentation. The simulations are performed for a specific set of reactions having the same system mass and N/Z content, using the isospin-dependent quantum molecular dynamics model to estimate the quantitative dependence of fragment production on the mass-asymmetry factor (τ) for various symmetry energy forms. The dynamics associated with different mass-asymmetric reactions is explored and the direct role of the symmetry energy is checked. A comparison with the experimental data (asymmetric reaction) is also presented for different equations of state (symmetry energy forms).

  5. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size.

    PubMed

    Heidel, R Eric

    2016-01-01

    Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
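
    As a worked instance of such an a priori calculation, the sketch below assumes a two-group design analyzed with an independent-samples t test and uses the statsmodels power module (one tool among many); the effect size, alpha, and power values are illustrative, not from the article.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Per-group n for a medium effect (Cohen's d = 0.5),
    # two-sided alpha = 0.05, and target power = 0.80.
    n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                              power=0.80,
                                              alternative="two-sided")
    print(round(n_per_group))  # about 64 per group
    ```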

  6. Absolute Configuration from Different Multifragmentation Pathways in Light-Induced Coulomb Explosion Imaging.

    PubMed

    Pitzer, Martin; Kastirke, Gregor; Kunitski, Maksim; Jahnke, Till; Bauer, Tobias; Goihl, Christoph; Trinter, Florian; Schober, Carl; Henrichs, Kevin; Becht, Jasper; Zeller, Stefan; Gassert, Helena; Waitz, Markus; Kuhlins, Andreas; Sann, Hendrik; Sturm, Felix; Wiegandt, Florian; Wallauer, Robert; Schmidt, Lothar Ph H; Johnson, Allan S; Mazenauer, Manuel; Spenger, Benjamin; Marquardt, Sabrina; Marquardt, Sebastian; Schmidt-Böcking, Horst; Stohner, Jürgen; Dörner, Reinhard; Schöffler, Markus; Berger, Robert

    2016-08-18

    The absolute configuration of individual small molecules in the gas phase can be determined directly by light-induced Coulomb explosion imaging (CEI). Herein, this approach is demonstrated for ionization with a single X-ray photon from a synchrotron light source, leading to enhanced efficiency and faster fragmentation as compared to previous experiments with a femtosecond laser. In addition, it is shown that even incomplete fragmentation pathways of individual molecules from a racemic CHBrClF sample can give access to the absolute configuration in CEI. This leads to a significant increase of the applicability of the method as compared to the previously reported complete break-up into atomic ions and can pave the way for routine stereochemical analysis of larger chiral molecules by light-induced CEI.

  7. The neutrino opacity of neutron rich matter

    NASA Astrophysics Data System (ADS)

    Alcain, P. N.; Dorso, C. O.

    2017-05-01

    The study of neutron rich matter, present in neutron stars, proto-neutron stars and core-collapse supernovae, can lead to further understanding of the behavior of nuclear matter in highly asymmetric nuclei. Heterogeneous structures, often referred to as nuclear pasta, are expected to exist in these systems. We have carried out a systematic study of neutrino opacity for different thermodynamic conditions in order to assess the impact that this structure has on it. We studied the neutrino opacity of the heterogeneous matter at different thermodynamic conditions with a semiclassical molecular dynamics model already used to study nuclear multifragmentation. For different densities, proton fractions and temperatures, we calculate the very long range opacity and the cluster distribution. The neutrino opacity is of crucial importance for the evolution of core-collapse supernovae and for neutrino scattering.

  8. The large sample size fallacy.

    PubMed

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
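
    A small simulation makes the fallacy concrete. The sketch below (an illustration, not from the article) draws two very large samples whose true standardized difference is trivial (d = 0.02); the t-test will usually return a p-value far below 0.05 even though the effect has no practical importance, which is why observed effect sizes should be reported alongside p-values.

        import numpy as np
        from scipy.stats import ttest_ind

        rng = np.random.default_rng(0)
        n = 50_000                                   # very large groups
        a = rng.normal(0.00, 1.0, n)                 # control
        b = rng.normal(0.02, 1.0, n)                 # trivial true effect, d = 0.02

        t, p = ttest_ind(a, b)
        d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        # p is usually well below 0.05 here, yet d stays around 0.02
        print(f"p = {p:.4g}, Cohen's d = {d:.3f}")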

  9. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    PubMed

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition, or learning, the effect sizes were relatively large although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  10. Computer programs for computing particle-size statistics of fluvial sediments

    USGS Publications Warehouse

    Stevens, H.H.; Hubbell, D.W.

    1986-01-01

    Two versions of computer programs for inputting data and computing particle-size statistics of fluvial sediments are presented. The FORTRAN 77 language versions are for use on the Prime computer, and the BASIC language versions are for use on microcomputers. The size-statistics programs compute Inman, Trask, and Folk statistical parameters from phi values and sizes determined for 10 specified percent-finer values from the input size and percent-finer data. The programs also determine the percentage of gravel, sand, silt, and clay, and the Meyer-Peter effective diameter. Documentation and listings for both versions of the programs are included. (Author's abstract)
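
    For orientation, the inclusive graphics measures mentioned above are simple functions of phi percentiles read from the cumulative grain-size curve. The sketch below implements the standard Folk and Ward formulas; it is an illustration of the method, not a port of the USGS programs, and the variable names and example percentiles are assumptions.

        def folk_graphic_stats(phi):
            """Folk and Ward inclusive graphics statistics.

            phi : dict mapping cumulative percentiles (5, 16, 25, 50, 75, 84, 95)
                  to grain sizes in phi units.
            """
            mean = (phi[16] + phi[50] + phi[84]) / 3.0
            sorting = (phi[84] - phi[16]) / 4.0 + (phi[95] - phi[5]) / 6.6
            skewness = ((phi[16] + phi[84] - 2 * phi[50]) / (2 * (phi[84] - phi[16]))
                        + (phi[5] + phi[95] - 2 * phi[50]) / (2 * (phi[95] - phi[5])))
            kurtosis = (phi[95] - phi[5]) / (2.44 * (phi[75] - phi[25]))
            return {"mean": mean, "sorting": sorting,
                    "skewness": skewness, "kurtosis": kurtosis}

        # Example: percentiles read from a cumulative grain-size curve
        print(folk_graphic_stats({5: 0.2, 16: 0.9, 25: 1.2, 50: 2.0,
                                  75: 2.9, 84: 3.3, 95: 4.1}))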

  11. Rasch fit statistics and sample size considerations for polytomous data.

    PubMed

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-05-29

    Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire - 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges.

  12. Rasch fit statistics and sample size considerations for polytomous data

    PubMed Central

    Smith, Adam B; Rush, Robert; Fallowfield, Lesley J; Velikova, Galina; Sharpe, Michael

    2008-01-01

    Background Previous research on educational data has demonstrated that Rasch fit statistics (mean squares and t-statistics) are highly susceptible to sample size variation for dichotomously scored rating data, although little is known about this relationship for polytomous data. These statistics help inform researchers about how well items fit to a unidimensional latent trait, and are an important adjunct to modern psychometrics. Given the increasing use of Rasch models in health research the purpose of this study was therefore to explore the relationship between fit statistics and sample size for polytomous data. Methods Data were collated from a heterogeneous sample of cancer patients (n = 4072) who had completed both the Patient Health Questionnaire – 9 and the Hospital Anxiety and Depression Scale. Ten samples were drawn with replacement for each of eight sample sizes (n = 25 to n = 3200). The Rating and Partial Credit Models were applied and the mean square and t-fit statistics (infit/outfit) derived for each model. Results The results demonstrated that t-statistics were highly sensitive to sample size, whereas mean square statistics remained relatively stable for polytomous data. Conclusion It was concluded that mean square statistics were relatively independent of sample size for polytomous data and that misfit to the model could be identified using published recommended ranges. PMID:18510722

  13. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    PubMed

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Simulated Rasch-model-fitting data for 25-item dichotomous scales, with sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N less than or equal to 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).

  14. [A Review on the Use of Effect Size in Nursing Research].

    PubMed

    Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae

    2015-10-01

    The purpose of this study was to introduce the main concepts of statistical testing and effect size and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis, and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size are explained. Some formulae for calculating the effect size are described with several examples from nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing the G*Power 3 software, the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
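
    The kind of calculation described for G*Power 3 can also be reproduced in code. The sketch below is a generic illustration (not taken from the article) using statsmodels' power module to find the minimum sample size per group for an independent-samples t-test; the chosen effect size and error rates are assumptions.

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        n = analysis.solve_power(effect_size=0.5,   # Cohen's d assumed for illustration
                                 alpha=0.05,        # two-sided significance level
                                 power=0.80,        # desired power
                                 ratio=1.0,         # equal group sizes
                                 alternative='two-sided')
        print(f"required n per group: {n:.1f}")     # about 64 using the t distribution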

  15. An audit of the statistics and the comparison with the parameter in the population

    NASA Astrophysics Data System (ADS)

    Bujang, Mohamad Adam; Sa'at, Nadiah; Joys, A. Reena; Ali, Mariana Mohamad

    2015-10-01

    The sample size that is sufficient to closely estimate the statistics for particular parameters has long been an issue. Although the sample size may have been calculated with reference to the objective of the study, it is difficult to confirm whether the statistics are close to the parameters for a particular population. Meanwhile, the guideline of a p-value less than 0.05 is widely used as inferential evidence. Therefore, this study audited results analyzed from various subsamples and statistical analyses and compared the results with the parameters in three different populations. Eight types of statistical analysis and eight subsamples for each statistical analysis were analyzed. Results showed that the statistics were consistent and close to the parameters when the sample covered at least 15% to 35% of the population. Larger sample sizes are needed to estimate parameters involving categorical variables than those involving numerical variables. Sample sizes of 300 to 500 are sufficient to estimate the parameters for a medium-sized population.

  16. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic.

    PubMed

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set-proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters.

  17. Selection of the Maximum Spatial Cluster Size of the Spatial Scan Statistic by Using the Maximum Clustering Set-Proportion Statistic

    PubMed Central

    Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong

    2016-01-01

    Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set–proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters. PMID:26820646

  18. Précis of statistical significance: rationale, validity, and utility.

    PubMed

    Chow, S L

    1998-04-01

    The null-hypothesis significance-test procedure (NHSTP) is defended in the context of the theory-corroboration experiment, as well as the following contrasts: (a) substantive hypotheses versus statistical hypotheses, (b) theory corroboration versus statistical hypothesis testing, (c) theoretical inference versus statistical decision, (d) experiments versus nonexperimental studies, and (e) theory corroboration versus treatment assessment. The null hypothesis can be true because it is the hypothesis that errors are randomly distributed in data. Moreover, the null hypothesis is never used as a categorical proposition. Statistical significance means only that chance influences can be excluded as an explanation of data; it does not identify the nonchance factor responsible. The experimental conclusion is drawn with the inductive principle underlying the experimental design. A chain of deductive arguments gives rise to the theoretical conclusion via the experimental conclusion. The anomalous relationship between statistical significance and the effect size often used to criticize NHSTP is more apparent than real. The absolute size of the effect is not an index of evidential support for the substantive hypothesis. Nor is the effect size, by itself, informative as to the practical importance of the research result. Being a conditional probability, statistical power cannot be the a priori probability of statistical significance. The validity of statistical power is debatable because statistical significance is determined with a single sampling distribution of the test statistic based on H0, whereas it takes two distributions to represent statistical power or effect size. Sample size should not be determined in the mechanical manner envisaged in power analysis. It is inappropriate to criticize NHSTP for nonstatistical reasons. At the same time, neither effect size, nor confidence interval estimate, nor posterior probability can be used to exclude chance as an explanation of data. Neither can any of them fulfill the nonstatistical functions expected of them by critics.

  19. Extreme value statistics analysis of fracture strengths of a sintered silicon nitride failing from pores

    NASA Technical Reports Server (NTRS)

    Chao, Luen-Yuan; Shetty, Dinesh K.

    1992-01-01

    Statistical analysis and correlation between pore-size distribution and fracture-strength distribution using the theory of extreme-value statistics is presented for a sintered silicon nitride. The pore-size distribution on a polished surface of this material was characterized using an automatic optical image analyzer. The distribution measured on the two-dimensional plane surface was transformed to a population (volume) distribution using the Schwartz-Saltykov diameter method. The population pore-size distribution and the distribution of the pore size at the fracture origin were correlated by extreme-value statistics. The fracture-strength distribution was then predicted from the extreme-value pore-size distribution, using a linear elastic fracture mechanics model of an annular crack around a pore and the fracture toughness of the ceramic. The predicted strength distribution was in good agreement with strength measurements in bending. In particular, the extreme-value statistics analysis explained the nonlinear trend in the linearized Weibull plot of measured strengths without postulating a lower-bound strength.
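
    The fracture-mechanics link between pore size and strength used in such analyses can be written in a few lines. The sketch below is an illustration under textbook assumptions (not the authors' code): it treats the pore plus its annular crack as an effective crack of size a and inverts K_IC = Y·σ·√(π·a); the geometry factor and material values are placeholders.

        import math

        def strength_from_flaw(k_ic, a, y=1.12):
            """Fracture strength (MPa) from an effective flaw size.

            k_ic : fracture toughness in MPa*sqrt(m)
            a    : effective crack size in metres (pore radius + annular crack depth)
            y    : dimensionless geometry factor (placeholder value)
            """
            return k_ic / (y * math.sqrt(math.pi * a))

        # Example: K_IC = 5 MPa*sqrt(m), effective flaw of 40 micrometres
        print(f"{strength_from_flaw(5.0, 40e-6):.0f} MPa")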

  20. The Statistical Power of Planned Comparisons.

    ERIC Educational Resources Information Center

    Benton, Roberta L.

    Basic principles underlying statistical power are examined; and issues pertaining to effect size, sample size, error variance, and significance level are highlighted via the use of specific hypothetical examples. Analysis of variance (ANOVA) and related methods remain popular, although other procedures sometimes have more statistical power against…

  1. Effect size and statistical power in the rodent fear conditioning literature - A systematic review.

    PubMed

    Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.

  2. Effect size and statistical power in the rodent fear conditioning literature – A systematic review

    PubMed Central

    Macleod, Malcolm R.

    2018-01-01

    Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science. PMID:29698451

  3. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    This study statistically analyzed a grain-size based additivity model that has been proposed to scale reaction rates and parameters from laboratory to field. The additivity model assumed that reaction properties in a sediment, including surface area, reactive site concentration, reaction rate, and extent, can be predicted from the field-scale grain size distribution by linearly adding the reaction properties of the individual grain size fractions. This study focused on the statistical analysis of the additivity model with respect to reaction rate constants, using multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment as an example. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the multi-rate parameters for individual grain size fractions. The statistical properties of the rate constants for the individual grain size fractions were then used to analyze the statistical properties of the additivity model to predict rate-limited U(VI) desorption in the composite sediment, and to evaluate the relative importance of individual grain size fractions to the overall U(VI) desorption. The result indicated that the additivity model provided a good prediction of the U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model, and U(VI) desorption in the individual grain size fractions has to be simulated in order to apply the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel size fraction (2-8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.
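
    The additivity idea itself is a mass-fraction-weighted sum over grain size fractions. The sketch below is a schematic illustration of that linear-adding step only (not the multi-rate surface complexation model used in the study); the fraction labels and values are assumptions.

        def additivity(mass_fractions, properties):
            """Composite-sediment property predicted by linearly adding the property
            of each grain size fraction, weighted by its mass fraction."""
            assert abs(sum(mass_fractions.values()) - 1.0) < 1e-9
            return sum(mass_fractions[k] * properties[k] for k in mass_fractions)

        # Example: reactive site concentration (arbitrary units) per size fraction,
        # including a gravel fraction that is often ignored.
        fractions = {"gravel_2_8mm": 0.20, "sand": 0.55, "silt_clay": 0.25}
        sites     = {"gravel_2_8mm": 0.5,  "sand": 1.8,  "silt_clay": 6.0}
        print(additivity(fractions, sites))   # composite-scale estimate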

  4. Data-driven inference for the spatial scan statistic.

    PubMed

    Almeida, Alexandre C L; Duarte, Anderson R; Duczmal, Luiz H; Oliveira, Fernando L P; Takahashi, Ricardo H C

    2011-08-02

    Kulldorff's spatial scan statistic for aggregated area maps searches for clusters of cases without specifying their size (number of areas) or geographic location in advance. Their statistical significance is tested while adjusting for the multiple testing inherent in such a procedure. However, as is shown in this work, this adjustment is not done in an even manner for all possible cluster sizes. A modification is proposed to the usual inference test of the spatial scan statistic, incorporating additional information about the size of the most likely cluster found. A new interpretation of the results of the spatial scan statistic is done, posing a modified inference question: what is the probability that the null hypothesis is rejected for the original observed cases map with a most likely cluster of size k, taking into account only those most likely clusters of size k found under null hypothesis for comparison? This question is especially important when the p-value computed by the usual inference process is near the alpha significance level, regarding the correctness of the decision based in this inference. A practical procedure is provided to make more accurate inferences about the most likely cluster found by the spatial scan statistic.

  5. Investigation of trends in flooding in the Tug Fork basin of Kentucky, Virginia, and West Virginia

    USGS Publications Warehouse

    Hirsch, Robert M.; Scott, Arthur G.; Wyant, Timothy

    1982-01-01

    Statistical analysis indicates that the average size of annual-flood peaks of the Tug Fork (Ky., Va., and W. Va.) has been increasing. However, additional statistical analysis does not indicate that the flood levels that were exceeded typically once or twice a year in the period 1947-79 are any more likely to be exceeded now than in 1947. Possible trends in stream-channel size also are investigated at three locations. No discernible trends in channel size are noted. Further statistical analysis of the trend in the size of annual-flood peaks shows that much of the annual variation is related to local rainfall and to the 'natural' hydrologic response in a relatively undisturbed subbasin. However, some statistical indication of trend persists after accounting for these natural factors, though it is of borderline statistical significance. Further study in the basin may relate flood magnitudes to both rainfall and land use.

  6. Effect Size as the Essential Statistic in Developing Methods for mTBI Diagnosis.

    PubMed

    Gibson, Douglas Brandt

    2015-01-01

    The descriptive statistic known as "effect size" measures the distinguishability of two sets of data. Distinguishability is at the core of diagnosis. This article is intended to point out the importance of effect size in the development of effective diagnostics for mild traumatic brain injury and to point out the applicability of the effect size statistic in comparing diagnostic efficiency across the main proposed TBI diagnostic methods: psychological, physiological, biochemical, and radiologic. Comparing diagnostic approaches is difficult because different researchers in different fields have different approaches to measuring efficacy. Converting diverse measures to effect sizes, as is done in meta-analysis, is a relatively easy way to make studies comparable.
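
    Converting heterogeneous results to a common effect-size scale can be done with standard meta-analytic conversion formulas. The sketch below shows two generic conversions (an independent-groups t statistic and a correlation coefficient to Cohen's d); these are textbook formulas, not material from the article, and the example numbers are assumptions.

        import math

        def d_from_t(t, n1, n2):
            """Cohen's d from an independent-samples t statistic and group sizes."""
            return t * math.sqrt(1.0 / n1 + 1.0 / n2)

        def d_from_r(r):
            """Cohen's d from a point-biserial / Pearson correlation."""
            return 2.0 * r / math.sqrt(1.0 - r ** 2)

        print(d_from_t(2.5, 30, 30))   # ~0.65
        print(d_from_r(0.30))          # ~0.63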

  7. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence be considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
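
    The kappa paradox the authors mention is easy to reproduce numerically. The sketch below (an illustration only, not the paper's common correlation model) computes Cohen's kappa for two raters from a 2x2 agreement table: with skewed marginal prevalence, an observed agreement of 0.85 yields a kappa of only about 0.31.

        def cohens_kappa(p_yes_yes, p_yes_no, p_no_yes, p_no_no):
            """Cohen's kappa for two raters and a binary outcome (cell proportions)."""
            p_o = p_yes_yes + p_no_no                            # observed agreement
            r1_yes = p_yes_yes + p_yes_no                        # rater 1 marginal
            r2_yes = p_yes_yes + p_no_yes                        # rater 2 marginal
            p_e = r1_yes * r2_yes + (1 - r1_yes) * (1 - r2_yes)  # chance agreement
            return (p_o - p_e) / (1 - p_e)

        # High agreement (0.85) but very skewed prevalence -> kappa of roughly 0.31
        print(cohens_kappa(0.80, 0.075, 0.075, 0.05))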

  8. Dissociation of biomolecules in liquid environments during fast heavy-ion irradiation

    NASA Astrophysics Data System (ADS)

    Nomura, Shinji; Tsuchida, Hidetsugu; Kajiwara, Akihiro; Yoshida, Shintaro; Majima, Takuya; Saito, Manabu

    2017-12-01

    The effect of aqueous environment on fast heavy-ion radiation damage of biomolecules was studied by comparative experiments using liquid- and gas-phase amino acid targets. Three types of amino acids with different chemical structures were used: glycine, proline, and hydroxyproline. Ion-induced reaction products were analyzed by time-of-flight secondary-ion mass spectrometry. The results showed that fragments from the amino acids resulting from the C—Cα bond cleavage were the major products for both types of targets. For liquid-phase targets, specific products originating from chemical reactions in solutions were observed. Interestingly, multiple dissociated atomic fragments were negligible for the liquid-phase targets. We found that the ratio of multifragment to total fragment ion yields was approximately half of that for gas-phase targets. This finding agreed with the results of other studies on biomolecular cluster targets. It is concluded that the suppression of molecular multifragmentation is caused by the energy dispersion to numerous water molecules surrounding the biomolecular solutes.

  9. The Effect Size Statistic: Overview of Various Choices.

    ERIC Educational Resources Information Center

    Mahadevan, Lakshmi

    Over the years, methodologists have been recommending that researchers use magnitude of effect estimates in result interpretation to highlight the distinction between statistical and practical significance (cf. R. Kirk, 1996). A magnitude of effect statistic (i.e., effect size) tells to what degree the dependent variable can be controlled,…

  10. Gene flow analysis method, the D-statistic, is robust in a wide parameter space.

    PubMed

    Zheng, Yichen; Janke, Axel

    2018-01-08

    We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space, and thus its applicability to a wide taxonomic range, has not been systematically studied. Divergence time, population size, time of gene flow, distance of the outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting, which dilutes the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, and by the size and number of loci. In addition, we examined the ability of the f-statistics, [Formula: see text] and [Formula: see text], to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology due to lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic background. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times), but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
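
    For readers unfamiliar with the D-statistic, its core is a simple comparison of site-pattern counts. The sketch below computes the textbook ABBA-BABA form from counts over biallelic sites; it is not the authors' simulation pipeline, and the input counts are placeholders.

        def d_statistic(n_abba, n_baba):
            """ABBA-BABA D-statistic from site-pattern counts for taxa (P1, P2, P3, outgroup).

            D near 0 is consistent with incomplete lineage sorting alone;
            a significant excess of one pattern suggests gene flow.
            """
            return (n_abba - n_baba) / (n_abba + n_baba)

        print(d_statistic(n_abba=1200, n_baba=950))   # ~0.12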

  11. EVALUATION OF A NEW MEAN SCALED AND MOMENT ADJUSTED TEST STATISTIC FOR SEM.

    PubMed

    Tong, Xiaoxiao; Bentler, Peter M

    2013-01-01

    Recently a new mean scaled and skewness adjusted test statistic was developed for evaluating structural equation models in small samples and with potentially nonnormal data, but this statistic has received only limited evaluation. The performance of this statistic is compared to normal theory maximum likelihood and two well-known robust test statistics. A modification to the Satorra-Bentler scaled statistic is developed for the condition that sample size is smaller than degrees of freedom. The behavior of the four test statistics is evaluated with a Monte Carlo confirmatory factor analysis study that varies seven sample sizes and three distributional conditions obtained using Headrick's fifth-order transformation to nonnormality. The new statistic performs badly in most conditions except under the normal distribution. The goodness-of-fit χ² test based on maximum-likelihood estimation performed well under normal distributions as well as under a condition of asymptotic robustness. The Satorra-Bentler scaled test statistic performed best overall, while the mean scaled and variance adjusted test statistic outperformed the others at small and moderate sample sizes under certain distributional conditions.

  12. How Large Should a Statistical Sample Be?

    ERIC Educational Resources Information Center

    Menil, Violeta C.; Ye, Ruili

    2012-01-01

    This study serves as a teaching aid for teachers of introductory statistics. The aim of this study was limited to determining various sample sizes when estimating population proportion. Tables on sample sizes were generated using a C++ program, which depends on population size, degree of precision or error level, and confidence…
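
    The dependence on population size, error level, and confidence described above follows from Cochran's formula with a finite population correction. The sketch below is a generic illustration of that calculation (not the C++ program's code); the example inputs are assumptions.

        import math
        from scipy.stats import norm

        def sample_size_proportion(N, margin, confidence=0.95, p=0.5):
            """Sample size for estimating a population proportion.

            N          : population size
            margin     : acceptable error level (e.g. 0.05)
            confidence : confidence level
            p          : anticipated proportion (0.5 is most conservative)
            """
            z = norm.ppf(1 - (1 - confidence) / 2)
            n0 = z ** 2 * p * (1 - p) / margin ** 2      # infinite-population size
            return math.ceil(n0 / (1 + (n0 - 1) / N))    # finite population correction

        print(sample_size_proportion(N=2000, margin=0.05))   # about 323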

  13. A d-statistic for single-case designs that is equivalent to the usual between-groups d-statistic.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E; Boyajian, Jonathan G; Sullivan, Kristynn J; Andrade, Alma; Barrientos, Jeannette L

    2014-01-01

    We describe a standardised mean difference statistic (d) for single-case designs that is equivalent to the usual d in between-groups experiments. We show how it can be used to summarise treatment effects over cases within a study, to do power analyses in planning new studies and grant proposals, and to meta-analyse effects across studies of the same question. We discuss limitations of this d-statistic, and possible remedies to them. Even so, this d-statistic is better founded statistically than other effect size measures for single-case design, and unlike many general linear model approaches such as multilevel modelling or generalised additive models, it produces a standardised effect size that can be integrated over studies with different outcome measures. SPSS macros for both effect size computation and power analysis are available.

  14. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items.

    PubMed

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed.

  15. The Importance of Teaching Power in Statistical Hypothesis Testing

    ERIC Educational Resources Information Center

    Olinsky, Alan; Schumacher, Phyllis; Quinn, John

    2012-01-01

    In this paper, we discuss the importance of teaching power considerations in statistical hypothesis testing. Statistical power analysis determines the ability of a study to detect a meaningful effect size, where the effect size is the difference between the hypothesized value of the population parameter under the null hypothesis and the true value…

  16. Confidence Intervals for Effect Sizes: Applying Bootstrap Resampling

    ERIC Educational Resources Information Center

    Banjanovic, Erin S.; Osborne, Jason W.

    2016-01-01

    Confidence intervals for effect sizes (CIES) provide readers with an estimate of the strength of a reported statistic as well as the relative precision of the point estimate. These statistics offer more information and context than null hypothesis significance testing. Although confidence intervals have been recommended by scholars for many years,…
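
    A bootstrap confidence interval for an effect size takes only a few lines of resampling. The sketch below is a generic percentile-bootstrap illustration (not the article's procedure): it resamples two groups and reports a 95% interval for Cohen's d; the data are simulated placeholders.

        import numpy as np

        def cohens_d(a, b):
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)   # pooled SD (equal n)
            return (a.mean() - b.mean()) / sp

        rng = np.random.default_rng(1)
        x = rng.normal(0.5, 1.0, 40)      # treatment group (simulated)
        y = rng.normal(0.0, 1.0, 40)      # control group (simulated)

        boots = [cohens_d(rng.choice(x, x.size, replace=True),
                          rng.choice(y, y.size, replace=True))
                 for _ in range(5000)]
        lo, hi = np.percentile(boots, [2.5, 97.5])
        print(f"d = {cohens_d(x, y):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")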

  17. Knowledge level of effect size statistics, confidence intervals and meta-analysis in Spanish academic psychologists.

    PubMed

    Badenes-Ribera, Laura; Frias-Navarro, Dolores; Pascual-Soler, Marcos; Monterde-I-Bort, Héctor

    2016-11-01

    The statistical reform movement and the American Psychological Association (APA) defend the use of estimators of the effect size and its confidence intervals, as well as the interpretation of the clinical significance of the findings. A survey was conducted in which academic psychologists were asked about their behavior in designing and carrying out their studies. The sample was composed of 472 participants (45.8% men). The mean number of years as a university professor was 13.56 years (SD = 9.27). The use of effect-size estimators is becoming generalized, as well as the consideration of meta-analytic studies. However, several inadequate practices still persist. A traditional model of methodological behavior based on statistical significance tests is maintained, based on the predominance of Cohen's d and the unadjusted R²/η², which are not immune to outliers or departure from normality and the violations of statistical assumptions, and the under-reporting of confidence intervals of effect-size statistics. The paper concludes with recommendations for improving statistical practice.

  18. Rear-End Crashes: Problem Size Assessment And Statistical Description

    DOT National Transportation Integrated Search

    1993-05-01

    KEYWORDS : RESEARCH AND DEVELOPMENT OR R&D, ADVANCED VEHICLE CONTROL & SAFETY SYSTEMS OR AVCSS, INTELLIGENT VEHICLE INITIATIVE OR IVI : THIS DOCUMENT PRESENTS PROBLEM SIZE ASSESSMENTS AND STATISTICAL CRASH DESCRIPTION FOR REAR-END CRASHES, INC...

  19. Bootstrap versus Statistical Effect Size Corrections: A Comparison with Data from the Finding Embedded Figures Test.

    ERIC Educational Resources Information Center

    Thompson, Bruce; Melancon, Janet G.

    Effect sizes have been increasingly emphasized in research as more researchers have recognized that: (1) all parametric analyses (t-tests, analyses of variance, etc.) are correlational; (2) effect sizes have played an important role in meta-analytic work; and (3) statistical significance testing is limited in its capacity to inform scientific…

  20. Statistical Misconceptions and Rushton's Writings on Race.

    ERIC Educational Resources Information Center

    Cernovsky, Zack Z.

    The term "statistical significance" is often misunderstood or abused to imply a large effect size. A recent example is in the work of J. P. Rushton (1988, 1990) on differences between Negroids and Caucasoids. Rushton used brain size and cranial size as indicators of intelligence, using Pearson "r"s ranging from 0.03 to 0.35.…

  1. Races of Heliconius erato (Nymphalidae: Heliconiinae) found on different sides of the Andes show wing size differences

    USDA-ARS?s Scientific Manuscript database

    Differences in wing size in geographical races of Heliconius erato distributed over the western and eastern sides of the Andes are reported on here. Individuals from the eastern side of the Andes are statistically larger in size than the ones on the western side of the Andes. A statistical differenc...

  2. A Sorting Statistic with Application in Neurological Magnetic Resonance Imaging of Autism.

    PubMed

    Levman, Jacob; Takahashi, Emi; Forgeron, Cynthia; MacDonald, Patrick; Stewart, Natalie; Lim, Ashley; Martel, Anne

    2018-01-01

    Effect size refers to the assessment of the extent of differences between two groups of samples on a single measurement. Assessing effect size in medical research is typically accomplished with Cohen's d statistic. Cohen's d statistic assumes that average values are good estimators of the position of a distribution of numbers and also assumes Gaussian (or bell-shaped) underlying data distributions. In this paper, we present an alternative evaluative statistic that can quantify differences between two data distributions in a manner that is similar to traditional effect size calculations; however, the proposed approach avoids making assumptions regarding the shape of the underlying data distribution. The proposed sorting statistic is compared with Cohen's d statistic and is demonstrated to be capable of identifying feature measurements of potential interest for which Cohen's d statistic implies the measurement would be of little use. This proposed sorting statistic has been evaluated on a large clinical autism dataset from Boston Children's Hospital, Harvard Medical School, demonstrating that it can potentially play a constructive role in future healthcare technologies.
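
    The paper's sorting statistic itself is not reproduced here, but the contrast it draws can be illustrated with a well-known distribution-free effect measure. The sketch below compares Cohen's d with Cliff's delta, which, like the proposed statistic, makes no Gaussian assumption; the data are simulated placeholders and the comparison is illustrative only.

        import numpy as np

        def cohens_d(a, b):
            sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
            return (a.mean() - b.mean()) / sp

        def cliffs_delta(a, b):
            """Distribution-free effect size: P(a > b) - P(a < b) over all pairs."""
            diff = a[:, None] - b[None, :]
            return ((diff > 0).sum() - (diff < 0).sum()) / diff.size

        rng = np.random.default_rng(2)
        a = rng.lognormal(0.3, 1.0, 200)   # heavy-tailed, non-Gaussian group
        b = rng.lognormal(0.0, 1.0, 200)
        print(f"Cohen's d = {cohens_d(a, b):.2f}, Cliff's delta = {cliffs_delta(a, b):.2f}")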

  3. A Sorting Statistic with Application in Neurological Magnetic Resonance Imaging of Autism

    PubMed Central

    Takahashi, Emi; Lim, Ashley; Martel, Anne

    2018-01-01

    Effect size refers to the assessment of the extent of differences between two groups of samples on a single measurement. Assessing effect size in medical research is typically accomplished with Cohen's d statistic. Cohen's d statistic assumes that average values are good estimators of the position of a distribution of numbers and also assumes Gaussian (or bell-shaped) underlying data distributions. In this paper, we present an alternative evaluative statistic that can quantify differences between two data distributions in a manner that is similar to traditional effect size calculations; however, the proposed approach avoids making assumptions regarding the shape of the underlying data distribution. The proposed sorting statistic is compared with Cohen's d statistic and is demonstrated to be capable of identifying feature measurements of potential interest for which Cohen's d statistic implies the measurement would be of little use. This proposed sorting statistic has been evaluated on a large clinical autism dataset from Boston Children's Hospital, Harvard Medical School, demonstrating that it can potentially play a constructive role in future healthcare technologies. PMID:29796236

  4. Is There a Common Summary Statistical Process for Representing the Mean and Variance? A Study Using Illustrations of Familiar Items

    PubMed Central

    Yang, Yi; Tokita, Midori; Ishiguchi, Akira

    2018-01-01

    A number of studies revealed that our visual system can extract different types of summary statistics, such as the mean and variance, from sets of items. Although the extraction of such summary statistics has been studied well in isolation, the relationship between these statistics remains unclear. In this study, we explored this issue using an individual differences approach. Observers viewed illustrations of strawberries and lollypops varying in size or orientation and performed four tasks in a within-subject design, namely mean and variance discrimination tasks with size and orientation domains. We found that the performances in the mean and variance discrimination tasks were not correlated with each other and demonstrated that extractions of the mean and variance are mediated by different representation mechanisms. In addition, we tested the relationship between performances in size and orientation domains for each summary statistic (i.e. mean and variance) and examined whether each summary statistic has distinct processes across perceptual domains. The results illustrated that statistical summary representations of size and orientation may share a common mechanism for representing the mean and possibly for representing variance. Introspections for each observer performing the tasks were also examined and discussed. PMID:29399318

  5. Mesopic pupil size in a refractive surgery population (13,959 eyes).

    PubMed

    Linke, Stephan J; Baviera, Julio; Munzer, Gur; Fricke, Otto H; Richard, Gisbert; Katz, Toam

    2012-08-01

    To evaluate factors that may affect mesopic pupil size in refractive surgery candidates. Medical records of 13,959 eyes of 13,959 refractive surgery candidates were reviewed, and one eye per subject was selected randomly for statistical analysis. Detailed ophthalmological examination data were obtained from medical records. Preoperative measurements included uncorrected distance visual acuity, corrected distance visual acuity, manifest and cycloplegic refraction, topography, slit lamp examination, and funduscopy. Mesopic pupil size measurements were performed with Colvard pupillometer. Relationship between mesopic pupil size and age, gender, refractive state, average keratometry, and pachymetry (thinnest point) were analyzed by means of ANOVA (+ANCOVA) and multivariate regression analyses. Overall mesopic pupil size was 6.45 ± 0.82 mm, and mean age was 36.07 years. Mesopic pupil size was 5.96 ± 0.8 mm in hyperopic astigmatism, 6.36 ± 0.83 mm in high astigmatism, and 6.51 ± 0.8 mm in myopic astigmatism. The difference in mesopic pupil size between all refractive subgroups was statistically significant (p < 0.001). Age revealed the strongest correlation (r = -0.405, p < 0.001) with mesopic pupil size. Spherical equivalent showed a moderate correlation (r = -0.136), whereas keratometry (r = -0.064) and pachymetry (r = -0.057) had a weak correlation with mesopic pupil size. No statistically significant difference in mesopic pupil size was noted regarding gender and ocular side. The sum of all analyzed factors (age, refractive state, keratometry, and pachymetry) can only predict the expected pupil size in <20% (R = 0.179, p < 0.001). Our analysis confirmed that age and refractive state are determinative factors on mesopic pupil size. Average keratometry and minimal pachymetry exhibited a statistically significant, but clinically insignificant, impact on mesopic pupil size.

  6. Measuring an Effect Size from Dichotomized Data: Contrasted Results Whether Using a Correlation or an Odds Ratio

    ERIC Educational Resources Information Center

    Rousson, Valentin

    2014-01-01

    It is well known that dichotomizing continuous data has the effect to decrease statistical power when the goal is to test for a statistical association between two variables. Modern researchers however are focusing not only on statistical significance but also on an estimation of the "effect size" (i.e., the strength of association…

  7. Modeling Cell Size Regulation: From Single-Cell-Level Statistics to Molecular Mechanisms and Population-Level Effects.

    PubMed

    Ho, Po-Yi; Lin, Jie; Amir, Ariel

    2018-05-20

    Most microorganisms regulate their cell size. In this article, we review some of the mathematical formulations of the problem of cell size regulation. We focus on coarse-grained stochastic models and the statistics that they generate. We review the biologically relevant insights obtained from these models. We then describe cell cycle regulation and its molecular implementations, protein number regulation, and population growth, all in relation to size regulation. Finally, we discuss several future directions for developing understanding beyond phenomenological models of cell size regulation.

  8. Spatial analyses for nonoverlapping objects with size variations and their application to coral communities.

    PubMed

    Muko, Soyoka; Shimatani, Ichiro K; Nozawa, Yoko

    2014-07-01

    Spatial distributions of individuals are conventionally analysed by representing objects as dimensionless points, in which spatial statistics are based on centre-to-centre distances. However, if organisms expand without overlapping and show size variations, such as is the case for encrusting corals, interobject spacing is crucial for spatial associations where interactions occur. We introduced new pairwise statistics using minimum distances between objects and demonstrated their utility when examining encrusting coral community data. We also calculated the conventional point process statistics and the grid-based statistics to clarify the advantages and limitations of each spatial statistical method. For simplicity, coral colonies were approximated by disks in these demonstrations. Focusing on short-distance effects, the use of minimum distances revealed that almost all coral genera were aggregated at a scale of 1-25 cm. However, when fragmented colonies (ramets) were treated as a genet, a genet-level analysis indicated weak or no aggregation, suggesting that most corals were randomly distributed and that fragmentation was the primary cause of colony aggregations. In contrast, point process statistics showed larger aggregation scales, presumably because centre-to-centre distances included both intercolony spacing and colony sizes (radius). The grid-based statistics were able to quantify the patch (aggregation) scale of colonies, but the scale was strongly affected by the colony size. Our approach quantitatively showed repulsive effects between an aggressive genus and a competitively weak genus, while the grid-based statistics (covariance function) also showed repulsion although the spatial scale indicated from the statistics was not directly interpretable in terms of ecological meaning. The use of minimum distances together with previously proposed spatial statistics helped us to extend our understanding of the spatial patterns of nonoverlapping objects that vary in size and the associated specific scales. © 2013 The Authors. Journal of Animal Ecology © 2013 British Ecological Society.
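
    The key quantity in the disk approximation described above is the edge-to-edge spacing rather than the centre-to-centre distance. The sketch below shows that minimum inter-object distance for two colonies approximated as disks; it is a schematic illustration of the idea, not the authors' pairwise statistics, and the example values are assumptions.

        import math

        def min_disk_distance(c1, r1, c2, r2):
            """Minimum (edge-to-edge) distance between two non-overlapping disks.

            c1, c2 : (x, y) centre coordinates
            r1, r2 : disk radii
            Returns 0 if the disks touch or overlap.
            """
            centre_dist = math.hypot(c1[0] - c2[0], c1[1] - c2[1])
            return max(0.0, centre_dist - r1 - r2)

        # Two colonies 30 cm apart centre-to-centre, with radii 10 cm and 8 cm
        print(min_disk_distance((0, 0), 10, (30, 0), 8))   # 12 cm of free substrate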

  9. Characterizing the Joint Effect of Diverse Test-Statistic Correlation Structures and Effect Size on False Discovery Rates in a Multiple-Comparison Study of Many Outcome Measures

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Ploutz-Snyder, Robert; Fiedler, James

    2011-01-01

    In their 2009 Annals of Statistics paper, Gavrilov, Benjamini, and Sarkar report the results of a simulation assessing the robustness of their adaptive step-down procedure (GBS) for controlling the false discovery rate (FDR) when normally distributed test statistics are serially correlated. In this study we extend the investigation to the case of multiple comparisons involving correlated non-central t-statistics, in particular when several treatments or time periods are being compared to a control in a repeated-measures design with many dependent outcome measures. In addition, we consider several dependence structures other than serial correlation and illustrate how the FDR depends on the interaction between effect size and the type of correlation structure as indexed by Foerstner's distance metric from an identity matrix. The relationship between the correlation matrix R of the original dependent variables and the correlation matrix of the associated t-statistics is also studied. In general, the latter depends not only on R, but also on sample size and the signed effect sizes for the multiple comparisons.

  10. A visual basic program to generate sediment grain-size statistics and to extrapolate particle distributions

    USGS Publications Warehouse

    Poppe, L.J.; Eliason, A.H.; Hastings, M.E.

    2004-01-01

    Measures that describe and summarize sediment grain-size distributions are important to geologists because of the large amount of information contained in textural data sets. Statistical methods are usually employed to simplify the necessary comparisons among samples and quantify the observed differences. The two statistical methods most commonly used by sedimentologists to describe particle distributions are mathematical moments (Krumbein and Pettijohn, 1938) and inclusive graphics (Folk, 1974). The choice of which of these statistical measures to use is typically governed by the amount of data available (Royse, 1970). If the entire distribution is known, the method of moments may be used; if the next-to-last accumulated percent is greater than 95, inclusive graphics statistics can be generated. Unfortunately, earlier programs designed to describe sediment grain-size distributions statistically do not run in a Windows environment, do not allow extrapolation of the distribution's tails, or do not generate both moment and graphic statistics (Kane and Hubert, 1963; Collias et al., 1963; Schlee and Webster, 1967; Poppe et al., 2000). Owing to analytical limitations, electro-resistance multichannel particle-size analyzers, such as Coulter Counters, commonly truncate the tails of the fine-fraction part of grain-size distributions. These devices do not detect fine clay in the 0.6–0.1 μm range (part of the 11-phi and all of the 12-phi and 13-phi fractions). Although size analyses performed down to 0.6 μm are adequate for most freshwater and nearshore marine sediments, samples from many deeper-water marine environments (e.g., rise and abyssal plain) may contain significant material in the fine clay fraction, and these analyses benefit from extrapolation. The program (GSSTAT) described herein generates statistics to characterize sediment grain-size distributions and can extrapolate the fine-grained end of the particle distribution. It is written in Microsoft Visual Basic 6.0 and provides a window to facilitate program execution. The input for the sediment fractions is weight percentages in whole-phi notation (Krumbein, 1934; Inman, 1952), and the program permits the user to select output in either method-of-moments or inclusive graphics statistics (Fig. 1). Users select options primarily with mouse-click events, or through interactive dialogue boxes.
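
    As a complement to the graphic measures sketched earlier in this listing, the method-of-moments statistics that such programs report can be computed directly from whole-phi weight percentages. The sketch below is a generic implementation of the moment formulas using phi class midpoints; it is an illustration of the method, not a translation of the Visual Basic program, and the example histogram is an assumption.

        import numpy as np

        def moment_stats(midpoints_phi, weight_pct):
            """Method-of-moments grain-size statistics from a weight-percent histogram."""
            m = np.asarray(midpoints_phi, dtype=float)
            w = np.asarray(weight_pct, dtype=float) / 100.0
            mean = np.sum(w * m)
            var = np.sum(w * (m - mean) ** 2)
            sd = np.sqrt(var)
            skew = np.sum(w * (m - mean) ** 3) / sd ** 3
            kurt = np.sum(w * (m - mean) ** 4) / sd ** 4
            return mean, sd, skew, kurt

        # Example: whole-phi classes from -1 phi to 4 phi
        print(moment_stats([-1, 0, 1, 2, 3, 4], [5, 15, 30, 30, 15, 5]))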

  11. The other half of the story: effect size analysis in quantitative research.

    PubMed

    Maher, Jessica Middlemis; Markey, Jonathan C; Ebert-May, Diane

    2013-01-01

    Statistical significance testing is the cornerstone of quantitative research, but studies that fail to report measures of effect size are potentially missing a robust part of the analysis. We provide a rationale for why effect size measures should be included in quantitative discipline-based education research. Examples from both biological and educational research demonstrate the utility of effect size for evaluating practical significance. We also provide details about some effect size indices that are paired with common statistical significance tests used in educational research and offer general suggestions for interpreting effect size measures. Finally, we discuss some inherent limitations of effect size measures and provide further recommendations about reporting confidence intervals.

  12. The Statistics of Urban Scaling and Their Connection to Zipf’s Law

    PubMed Central

    Gomez-Lievano, Andres; Youn, HyeJin; Bettencourt, Luís M. A.

    2012-01-01

    Urban scaling relations characterizing how diverse properties of cities vary on average with their population size have recently been shown to be a general quantitative property of many urban systems around the world. However, in previous studies the statistics of urban indicators were not analyzed in detail, raising important questions about the full characterization of urban properties and how scaling relations may emerge in these larger contexts. Here, we build a self-consistent statistical framework that characterizes the joint probability distributions of urban indicators and city population sizes across an urban system. To develop this framework empirically we use one of the most granular and stochastic urban indicators available, specifically measuring homicides in cities of Brazil, Colombia and Mexico, three nations with high and fast changing rates of violent crime. We use these data to derive the conditional probability of the number of homicides per year given the population size of a city. To do this we use Bayes’ rule together with the estimated conditional probability of city size given their number of homicides and the distribution of total homicides. We then show that scaling laws emerge as expectation values of these conditional statistics. Knowledge of these distributions implies, in turn, a relationship between scaling and population size distribution exponents that can be used to predict Zipf’s exponent from urban indicator statistics. Our results also suggest how a general statistical theory of urban indicators may be constructed from the stochastic dynamics of social interaction processes in cities. PMID:22815745

  13. Robust functional statistics applied to Probability Density Function shape screening of sEMG data.

    PubMed

    Boudaoud, S; Rix, H; Al Harrach, M; Marin, F

    2014-01-01

    Recent studies have pointed out possible shape modifications of the Probability Density Function (PDF) of surface electromyographical (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using High Order Statistics (HOS) parameters such as skewness and kurtosis. Under experimental conditions, these parameters must be estimated from small samples, and the small sample size induces errors in the estimated HOS parameters that hinder real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate the behavior of both skewness and kurtosis. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even in the presence of small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic observed sEMG PDF shape behavior during muscle contraction. According to the results, the functional statistics are more robust than HOS parameters to the small-sample-size effect and more accurate in sEMG PDF shape screening applications.

  14. Finite-data-size study on practical universal blind quantum computation

    NASA Astrophysics Data System (ADS)

    Zhao, Qiang; Li, Qiong

    2018-07-01

    The universal blind quantum computation with weak coherent pulses protocol is a practical scheme that allows a client to delegate a computation to a remote server while keeping the computation hidden. In the practical protocol, however, a finite data size influences the preparation efficiency of the remote blind qubit state preparation (RBSP). In this paper, a modified RBSP protocol with two decoy states is studied in the finite-data-size regime, and the issue of its statistical fluctuations is analyzed thoroughly. The theoretical analysis and simulation results show that the two-decoy-state case with statistical fluctuation is closer to the asymptotic case than the one-decoy-state case. In particular, the two-decoy-state protocol can achieve a longer communication distance than the one-decoy-state case in this statistical-fluctuation situation.

  15. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    PubMed

    Lackey, Leila G; Stein, Eric D

    2014-04-01

    Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.

  16. Using the Bootstrap Method for a Statistical Significance Test of Differences between Summary Histograms

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man

    2006-01-01

    A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
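
    A minimal sketch of the proposed procedure, assuming each cloud object contributes one histogram (one row per object) and using the Euclidean distance; the data layout and resampling details here are simplifying assumptions, not the exact implementation behind the study.

      import numpy as np

      def euclidean(h1, h2):
          """Distance between two summary histograms as relative frequencies."""
          h1, h2 = h1 / h1.sum(), h2 / h2.sum()
          return np.sqrt(((h1 - h2) ** 2).sum())

      def bootstrap_test(group_a, group_b, n_boot=2000, seed=0):
          """group_a, group_b: (n_objects, n_bins) arrays, one histogram per
          cloud object. Returns the achieved significance level of the
          observed summary-histogram distance under a pooled null."""
          rng = np.random.default_rng(seed)
          observed = euclidean(group_a.sum(axis=0), group_b.sum(axis=0))
          pooled = np.vstack([group_a, group_b])
          na, nb = len(group_a), len(group_b)
          exceed = 0
          for _ in range(n_boot):
              ia = rng.integers(0, len(pooled), na)  # resample objects with replacement
              ib = rng.integers(0, len(pooled), nb)
              exceed += euclidean(pooled[ia].sum(axis=0), pooled[ib].sum(axis=0)) >= observed
          return exceed / n_boot

      rng = np.random.default_rng(1)
      a = rng.poisson(5.0, size=(40, 12))  # 40 cloud objects, 12 bins (synthetic)
      b = rng.poisson(5.5, size=(55, 12))
      print(bootstrap_test(a, b))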

  17. Confidence Interval Coverage for Cohen's Effect Size Statistic

    ERIC Educational Resources Information Center

    Algina, James; Keselman, H. J.; Penfield, Randall D.

    2006-01-01

    Kelley compared three methods for setting a confidence interval (CI) around Cohen's standardized mean difference statistic: the noncentral-"t"-based, percentile (PERC) bootstrap, and biased-corrected and accelerated (BCA) bootstrap methods under three conditions of nonnormality, eight cases of sample size, and six cases of population…

  18. An Introduction to Confidence Intervals for Both Statistical Estimates and Effect Sizes.

    ERIC Educational Resources Information Center

    Capraro, Mary Margaret

    This paper summarizes methods of estimating confidence intervals, including classical intervals and intervals for effect sizes. The recent American Psychological Association (APA) Task Force on Statistical Inference report suggested that confidence intervals should always be reported, and the fifth edition of the APA "Publication Manual"…

  19. Qualitative Meta-Analysis on the Hospital Task: Implications for Research

    ERIC Educational Resources Information Center

    Noll, Jennifer; Sharma, Sashi

    2014-01-01

    The "law of large numbers" indicates that as sample size increases, sample statistics become less variable and more closely estimate their corresponding population parameters. Different research studies investigating how people consider sample size when evaluating the reliability of a sample statistic have found a wide range of…

  20. Recurrence time statistics for finite size intervals

    NASA Astrophysics Data System (ADS)

    Altmann, Eduardo G.; da Silva, Elton C.; Caldas, Iberê L.

    2004-12-01

    We investigate the statistics of recurrences to finite size intervals for chaotic dynamical systems. We find that the typical distribution presents an exponential decay for almost all recurrence times except for a few short times affected by a kind of memory effect. We interpret this effect as being related to the unstable periodic orbits inside the interval. Although it is restricted to a few short times, it changes the whole distribution of recurrences. We show that for systems with strong mixing properties the exponential decay converges to Poissonian statistics when the width of the interval goes to zero. However, we caution that special attention to the size of the interval is required in order to guarantee that the short-time memory effect is negligible when one is interested in numerically or experimentally calculated Poincaré recurrence time statistics.
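
    The exponential limit is easy to reproduce numerically. The sketch below collects recurrence times of the fully chaotic logistic map to a small interval; the map, the interval, and the sample length are arbitrary choices for illustration, not the systems studied in the paper.

      import numpy as np

      def recurrence_times(x0=0.2, n=10**6, lo=0.30, hi=0.31):
          """Recurrence times of x -> 4x(1-x) to the interval [lo, hi]."""
          times, x, last = [], x0, None
          for t in range(n):
              x = 4.0 * x * (1.0 - x)
              if lo <= x <= hi:
                  if last is not None:
                      times.append(t - last)
                  last = t
          return np.array(times)

      taus = recurrence_times()
      # For a strongly mixing map the histogram of taus is close to an
      # exponential, and by Kac's lemma the mean is ~1/mu(I) under the
      # invariant measure.
      print("mean recurrence time:", taus.mean())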

  1. Intercomparison of textural parameters of intertidal sediments generated by different statistical procedures, and implications for a unifying descriptive nomenclature

    NASA Astrophysics Data System (ADS)

    Fan, Daidu; Tu, Junbiao; Cai, Guofu; Shang, Shuai

    2015-06-01

    Grain-size analysis is a basic routine in sedimentology and related fields, but diverse methods of sample collection, processing and statistical analysis often make direct comparisons and interpretations difficult or even impossible. In this paper, 586 published grain-size datasets from the Qiantang Estuary (East China Sea) sampled and analyzed by the same procedures were merged and their textural parameters calculated by a percentile and two moment methods. The aim was to explore which of the statistical procedures performed best in the discrimination of three distinct sedimentary units on the tidal flats of the middle Qiantang Estuary. A Gaussian curve-fitting method served to simulate mixtures of two normal populations having different modal sizes, sorting values and size distributions, enabling a better understanding of the impact of finer tail components on textural parameters, as well as the proposal of a unifying descriptive nomenclature. The results show that percentile and moment procedures yield almost identical results for mean grain size, and that sorting values are also highly correlated. However, more complex relationships exist between percentile and moment skewness (kurtosis), changing from positive to negative correlations when the proportions of the finer populations decrease below 35% (10%). This change results from the overweighting of tail components in moment statistics, which stands in sharp contrast to the underweighting or complete amputation of small tail components by the percentile procedure. Intercomparisons of bivariate plots suggest an advantage of the Friedman & Johnson moment procedure over the McManus moment method in terms of the description of grain-size distributions, and over the percentile method by virtue of a greater sensitivity to small variations in tail components. The textural parameter scalings of Folk & Ward were translated into their Friedman & Johnson moment counterparts by application of mathematical functions derived by regression analysis of measured and modeled grain-size data, or by determining the abscissa values of intersections between auxiliary lines running parallel to the x-axis and vertical lines corresponding to the descriptive percentile limits along the ordinate of representative bivariate plots. Twofold limits were extrapolated for the moment statistics in relation to single descriptive terms in the cases of skewness and kurtosis by considering both positive and negative correlations between percentile and moment statistics. The extrapolated descriptive scalings were further validated by examining entire size-frequency distributions simulated by mixing two normal populations of designated modal size and sorting values, but varying in mixing ratios. These were found to match well in most of the proposed scalings, although platykurtic and very platykurtic categories were questionable when the proportion of the finer population was below 5%. Irrespective of the statistical procedure, descriptive nomenclatures should therefore be cautiously used when tail components contribute less than 5% to grain-size distributions.
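
    The sign reversal between percentile and moment skewness can be reproduced with a simple simulated mixture of two normal populations in phi units; the modal sizes, sorting values, and mixing ratios below are illustrative stand-ins for the paper's Gaussian curve-fitting simulations.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      def mixture(p_fine, n=20000):
          """Coarse mode at 2 phi plus a finer population at 6 phi
          (assumed values, for illustration only)."""
          n_fine = int(n * p_fine)
          coarse = rng.normal(2.0, 0.5, n - n_fine)
          fine = rng.normal(6.0, 0.8, n_fine)
          return np.concatenate([coarse, fine])

      def folk_ward_skew(x):
          p5, p16, p50, p84, p95 = np.percentile(x, [5, 16, 50, 84, 95])
          return ((p16 + p84 - 2 * p50) / (2 * (p84 - p16))
                  + (p5 + p95 - 2 * p50) / (2 * (p95 - p5)))

      # Compare percentile (Folk & Ward) and moment skewness as the
      # proportion of the finer tail population shrinks.
      for p_fine in (0.35, 0.10, 0.02):
          x = mixture(p_fine)
          print(p_fine, round(folk_ward_skew(x), 3), round(stats.skew(x), 3))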

  2. Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin

    NASA Astrophysics Data System (ADS)

    He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu

    2017-06-01

    This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta from a tin sample with an etched V-shaped groove in its free surface are collected by a soft recovery technique. The produced fragments are then automatically detected with fine post-shot analysis techniques, including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem; it yields a linear combination of exponential distributions. The experimental fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison proves that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
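
    A hedged sketch of the fitting step: synthetic fragment sizes are drawn from a two-component exponential mixture (standing in for the tomography measurements), and the linear-combination-of-exponentials form is fit to the empirical complementary CDF. All weights and scales are invented for illustration.

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(0)
      # Synthetic fragment sizes from two exponential components
      sizes = np.concatenate([rng.exponential(5.0, 700),
                              rng.exponential(25.0, 300)])

      def two_exp_ccdf(s, w, s1, s2):
          """CCDF of a linear combination of two exponential distributions."""
          return w * np.exp(-s / s1) + (1 - w) * np.exp(-s / s2)

      s = np.sort(sizes)
      ccdf = 1.0 - np.arange(1, len(s) + 1) / len(s)
      popt, _ = curve_fit(two_exp_ccdf, s, ccdf, p0=(0.5, 1.0, 10.0),
                          bounds=([0, 0.1, 0.1], [1, 100, 100]))
      print("weight, scale1, scale2:", popt)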

  3. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data.

    PubMed

    Kim, Sehwi; Jung, Inkyung

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns.
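
    The Gini coefficient itself is a short computation; the sketch below shows the generic form which, in a simplified reading of the Han et al. approach, would be applied to the case counts of the clusters reported at each candidate maximum size. The case counts here are made up.

      import numpy as np

      def gini(x):
          """Gini coefficient via the sorted mean-difference formula."""
          x = np.sort(np.asarray(x, dtype=float))
          n = len(x)
          return np.sum((2 * np.arange(1, n + 1) - n - 1) * x) / (n * x.sum())

      # Hypothetical case counts of reported clusters at two candidate
      # maximum reported cluster sizes; the size giving the larger Gini
      # indicates the more refined collection of clusters.
      print(gini([40, 35, 30, 5, 4, 3]), gini([60, 55, 2]))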

  4. Optimizing the maximum reported cluster size in the spatial scan statistic for ordinal data

    PubMed Central

    Kim, Sehwi

    2017-01-01

    The spatial scan statistic is an important tool for spatial cluster detection. There have been numerous studies on scanning window shapes. However, little research has been done on the maximum scanning window size or maximum reported cluster size. Recently, Han et al. proposed to use the Gini coefficient to optimize the maximum reported cluster size. However, the method has been developed and evaluated only for the Poisson model. We adopt the Gini coefficient to be applicable to the spatial scan statistic for ordinal data to determine the optimal maximum reported cluster size. Through a simulation study and application to a real data example, we evaluate the performance of the proposed approach. With some sophisticated modification, the Gini coefficient can be effectively employed for the ordinal model. The Gini coefficient most often picked the optimal maximum reported cluster sizes that were the same as or smaller than the true cluster sizes with very high accuracy. It seems that we can obtain a more refined collection of clusters by using the Gini coefficient. The Gini coefficient developed specifically for the ordinal model can be useful for optimizing the maximum reported cluster size for ordinal data and helpful for properly and informatively discovering cluster patterns. PMID:28753674

  5. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    NASA Astrophysics Data System (ADS)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
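
    The paper's non-parametric comparison of cumulative aspect-ratio distributions can be approximated with a two-sample Kolmogorov-Smirnov test. The synthetic aspect ratios below are placeholders for TEM measurements, and KS is one reasonable choice of non-parametric test, not necessarily the study's exact statistic.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      # Synthetic aspect-ratio (length/width) measurements for two samples
      aspect_a = rng.normal(3.9, 0.45, 400)
      aspect_b = rng.normal(4.1, 0.60, 400)

      # The two-sample KS test compares the full cumulative distributions,
      # not just their means.
      stat, p = stats.ks_2samp(aspect_a, aspect_b)
      print(f"KS statistic = {stat:.3f}, p = {p:.2e}")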

  6. Statistical Significance and Effect Size: Two Sides of a Coin.

    ERIC Educational Resources Information Center

    Fan, Xitao

    This paper suggests that statistical significance testing and effect size are two sides of the same coin; they complement each other, but do not substitute for one another. Good research practice requires that both should be taken into consideration to make sound quantitative decisions. A Monte Carlo simulation experiment was conducted, and a…

  7. The Misdirection of Public Policy: Comparing and Combining Standardised Effect Sizes

    ERIC Educational Resources Information Center

    Simpson, Adrian

    2017-01-01

    Increased attention on "what works" in education has led to an emphasis on developing policy from evidence based on comparing and combining a particular statistical summary of intervention studies: the standardised effect size. It is assumed that this statistical summary provides an estimate of the educational impact of interventions and…

  8. Assessing the Disconnect between Grade Expectation and Achievement in a Business Statistics Course

    ERIC Educational Resources Information Center

    Berenson, Mark L.; Ramnarayanan, Renu; Oppenheim, Alan

    2015-01-01

    In an institutional review board--approved study aimed at evaluating differences in learning between a large-sized introductory business statistics course section using courseware assisted examinations compared with small-sized sections using traditional paper-and-pencil examinations, there appeared to be a severe disconnect between the final…

  9. Standardized Effect Sizes for Moderated Conditional Fixed Effects with Continuous Moderator Variables

    PubMed Central

    Bodner, Todd E.

    2017-01-01

    Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404

  10. Usage Statistics

    MedlinePlus

    MedlinePlus usage statistics (quarterly page views and unique visitors): https://medlineplus.gov/usestatistics.html

  11. Methods for flexible sample-size design in clinical trials: Likelihood, weighted, dual test, and promising zone approaches.

    PubMed

    Shih, Weichung Joe; Li, Gang; Wang, Yining

    2016-03-01

    Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
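
    As a flavor of how the weighted method preserves the type-I error rate, the sketch below combines two stage-wise Z statistics with prespecified weights; under the null the combination is standard normal even if the stage-2 sample size was re-estimated at the interim look. The weights and Z values are illustrative.

      import math
      from scipy import stats

      def weighted_z(z1, z2, w1=0.5):
          """Prespecified-weight combination of independent stage-wise Z
          statistics; sqrt(w1)*Z1 + sqrt(1-w1)*Z2 is N(0,1) under H0
          regardless of any data-driven change to the stage-2 sample size."""
          return math.sqrt(w1) * z1 + math.sqrt(1 - w1) * z2

      z = weighted_z(z1=1.1, z2=1.7)
      print(f"combined Z = {z:.3f}, one-sided p = {1 - stats.norm.cdf(z):.4f}")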

  12. [Practical aspects regarding sample size in clinical research].

    PubMed

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size allows us to judge whether the results published in medical papers rest on a suitable design and whether their conclusions follow properly from the statistical analysis. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance and power of the test. To decide which formula will be used, we must define what kind of study we have: a prevalence study, a study of mean values, or a comparative one. In this paper we explain some basic topics of statistics and work through four simple examples of sample size estimation.
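
    A minimal sketch of the kind of calculation the paper walks through, for the common case of comparing two means with a normal approximation; alpha, power, the standard deviation sigma, and the clinically relevant difference delta are exactly the inputs the abstract lists.

      import math
      from scipy import stats

      def n_per_group(delta, sigma, alpha=0.05, power=0.80):
          """Per-group n for a two-sided two-sample comparison of means:
          n = 2 * sigma^2 * (z_{1-alpha/2} + z_{1-beta})^2 / delta^2."""
          za = stats.norm.ppf(1 - alpha / 2)
          zb = stats.norm.ppf(power)
          return math.ceil(2 * (sigma * (za + zb) / delta) ** 2)

      print(n_per_group(delta=5.0, sigma=10.0))  # 63 per group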

  13. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    PubMed Central

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197

  14. Power, effects, confidence, and significance: an investigation of statistical practices in nursing research.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2014-05-01

    To (a) assess the statistical power of nursing research to detect small, medium, and large effect sizes; (b) estimate the experiment-wise Type I error rate in these studies; and (c) assess the extent to which (i) a priori power analyses, (ii) effect sizes (and interpretations thereof), and (iii) confidence intervals were reported. Statistical review. Papers published in the 2011 volumes of the 10 highest ranked nursing journals, based on their 5-year impact factors. Papers were assessed for statistical power, control of experiment-wise Type I error, reporting of a priori power analyses, reporting and interpretation of effect sizes, and reporting of confidence intervals. The analyses were based on 333 papers, from which 10,337 inferential statistics were identified. The median power to detect small, medium, and large effect sizes was .40 (interquartile range [IQR]=.24-.71), .98 (IQR=.85-1.00), and 1.00 (IQR=1.00-1.00), respectively. The median experiment-wise Type I error rate was .54 (IQR=.26-.80). A priori power analyses were reported in 28% of papers. Effect sizes were routinely reported for Spearman's rank correlations (100% of papers in which this test was used), Poisson regressions (100%), odds ratios (100%), Kendall's tau correlations (100%), Pearson's correlations (99%), logistic regressions (98%), structural equation modelling/confirmatory factor analyses/path analyses (97%), and linear regressions (83%), but were reported less often for two-proportion z tests (50%), analyses of variance/analyses of covariance/multivariate analyses of variance (18%), t tests (8%), Wilcoxon's tests (8%), Chi-squared tests (8%), and Fisher's exact tests (7%), and not reported for sign tests, Friedman's tests, McNemar's tests, multi-level models, and Kruskal-Wallis tests. Effect sizes were infrequently interpreted. Confidence intervals were reported in 28% of papers. The use, reporting, and interpretation of inferential statistics in nursing research need substantial improvement. Most importantly, researchers should abandon the misleading practice of interpreting the results from inferential tests based solely on whether they are statistically significant (or not) and, instead, focus on reporting and interpreting effect sizes, confidence intervals, and significance levels. Nursing researchers also need to conduct and report a priori power analyses, and to address the issue of Type I experiment-wise error inflation in their studies.

  15. Perception of ensemble statistics requires attention.

    PubMed

    Jackson-Nielsen, Molly; Cohen, Michael A; Pitts, Michael A

    2017-02-01

    To overcome inherent limitations in perceptual bandwidth, many aspects of the visual world are represented as summary statistics (e.g., average size, orientation, or density of objects). Here, we investigated the relationship between summary (ensemble) statistics and visual attention. Recently, it was claimed that one ensemble statistic in particular, color diversity, can be perceived without focal attention. However, a broader debate exists over the attentional requirements of conscious perception, and it is possible that some form of attention is necessary for ensemble perception. To test this idea, we employed a modified inattentional blindness paradigm and found that multiple types of summary statistics (color and size) often go unnoticed without attention. In addition, we found attentional costs in dual-task situations, further implicating a role for attention in statistical perception. Overall, we conclude that while visual ensembles may be processed efficiently, some amount of attention is necessary for conscious perception of ensemble statistics.

  16. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models.

    PubMed

    Hillier, John K; Kougioumtzoglou, Ioannis A; Stokes, Chris R; Smith, Michael J; Clark, Chris D; Spagnolo, Matteo S

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A 'stochastic instability' (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models.
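
    A toy rendering of the preferred 'stochastic instability' idea: multiplicative growth with random increments, sampled at random ages, produces the heavy-tailed size-frequency distributions observed for drumlins. All parameter values below are invented for illustration; this is not the paper's fitted model.

      import numpy as np

      rng = np.random.default_rng(3)

      n_forms = 50000
      ages = rng.exponential(1.0, n_forms)  # random exposure times
      drift, noise = 0.8, 0.6               # mean growth rate, volatility
      # Exponential growth on average, with random growth/shrink increments
      log_size = drift * ages + noise * np.sqrt(ages) * rng.normal(size=n_forms)
      sizes = np.exp(log_size)              # multiplicative growth

      # The result is a heavy-tailed, roughly log-normal-like size-frequency
      # distribution that can be compared against mapped bedform inventories.
      print(np.percentile(sizes, [25, 50, 75, 95]))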

  17. Targeted On-Demand Team Performance App Development

    DTIC Science & Technology

    2016-10-01

    Data were collected from three sites. Preliminary analysis indicates a larger-than-estimated effect size, and the study is sufficiently powered for generalizable outcomes. Planned statistical analyses will include examination of any resulting qualitative data for trends or connections to statistical outcomes. Status: on schedule.

  18. A General Model for Estimating and Correcting the Effects of Nonindependence in Meta-Analysis.

    ERIC Educational Resources Information Center

    Strube, Michael J.

    A general model is described which can be used to represent the four common types of meta-analysis: (1) estimation of effect size by combining study outcomes; (2) estimation of effect size by contrasting study outcomes; (3) estimation of statistical significance by combining study outcomes; and (4) estimation of statistical significance by…

  19. Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics

    ERIC Educational Resources Information Center

    Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas

    2014-01-01

    Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
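
    In the spirit of the article, a small Python sketch that finds the smallest n whose expected t-based confidence interval width meets a target; guaranteeing the width with a specified probability (the "power of the confidence interval" the abstract mentions) requires a further adjustment not shown here.

      import math
      from scipy import stats

      def n_for_ci_width(sigma, width, conf=0.95):
          """Smallest n such that the expected full width of a conf-level
          t-based CI for a mean, 2 * t * sigma / sqrt(n), is <= width."""
          n = 2
          while 2 * stats.t.ppf(1 - (1 - conf) / 2, df=n - 1) * sigma / math.sqrt(n) > width:
              n += 1
          return n

      print(n_for_ci_width(sigma=10.0, width=5.0))  # CI spanning +/- 2.5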

  20. Interpretation of correlations in clinical research.

    PubMed

    Hung, Man; Bounsanga, Jerry; Voss, Maren Wright

    2017-11-01

    Critically analyzing research is a key skill in evidence-based practice and requires knowledge of research methods, results interpretation, and applications, all of which rely on a foundation based in statistics. Evidence-based practice makes high demands on trained medical professionals to interpret an ever-expanding array of research evidence. As clinical training emphasizes medical care rather than statistics, it is useful to review the basics of statistical methods and what they mean for interpreting clinical studies. We reviewed the basic concepts of correlational associations, violations of normality, unobserved variable bias, sample size, and alpha inflation. The foundations of causal inference were discussed and sound statistical analyses were examined. We discuss four ways in which correlational analysis is misused, including causal inference overreach, over-reliance on significance, alpha inflation, and sample size bias. Recent published studies in the medical field provide evidence of causal assertion overreach drawn from correlational findings. The findings present a primer on the assumptions and nature of correlational methods of analysis and urge clinicians to exercise appropriate caution as they critically analyze the evidence before them and evaluate evidence that supports practice. Critically analyzing new evidence requires statistical knowledge in addition to clinical knowledge. Studies can overstate relationships, expressing causal assertions when only correlational evidence is available. Failure to account for the effect of sample size in the analyses tends to overstate the importance of predictive variables. It is important not to overemphasize the statistical significance without consideration of effect size and whether differences could be considered clinically meaningful.
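
    One of the four misuses, alpha inflation, is easy to quantify: the family-wise chance of at least one false positive grows geometrically with the number of uncorrected tests.

      # P(at least one false positive) = 1 - (1 - alpha)^k
      alpha = 0.05
      for k in (1, 5, 10, 20, 50):
          print(k, round(1 - (1 - alpha) ** k, 3))
      # 20 uncorrected tests at alpha = .05 already carry a ~64% chance
      # of at least one spurious "significant" result.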

  1. Role of microstructure on twin nucleation and growth in HCP titanium: A statistical study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arul Kumar, M.; Wroński, M.; McCabe, Rodney James

    In this study, a detailed statistical analysis is performed using Electron Back Scatter Diffraction (EBSD) to establish the effect of microstructure on twin nucleation and growth in deformed commercial purity hexagonal close packed (HCP) titanium. Rolled titanium samples are compressed along rolling, transverse and normal directions to establish statistical correlations for {10–12}, {11–21}, and {11–22} twins. A recently developed automated EBSD-twinning analysis software is employed for the statistical analysis. Finally, the analysis provides the following key findings: (I) grain size and strain dependence is different for twin nucleation and growth; (II) twinning statistics can be generalized for the HCP metals magnesium, zirconium and titanium; and (III) complex microstructure, where grain shape and size distribution is heterogeneous, requires multi-point statistical correlations.

  2. Role of microstructure on twin nucleation and growth in HCP titanium: A statistical study

    DOE PAGES

    Arul Kumar, M.; Wroński, M.; McCabe, Rodney James; ...

    2018-02-01

    In this study, a detailed statistical analysis is performed using Electron Back Scatter Diffraction (EBSD) to establish the effect of microstructure on twin nucleation and growth in deformed commercial purity hexagonal close packed (HCP) titanium. Rolled titanium samples are compressed along rolling, transverse and normal directions to establish statistical correlations for {10–12}, {11–21}, and {11–22} twins. A recently developed automated EBSD-twinning analysis software is employed for the statistical analysis. Finally, the analysis provides the following key findings: (I) grain size and strain dependence is different for twin nucleation and growth; (II) twinning statistics can be generalized for the HCP metals magnesium, zirconium and titanium; and (III) complex microstructure, where grain shape and size distribution is heterogeneous, requires multi-point statistical correlations.

  3. Using Patient Demographics and Statistical Modeling to Predict Knee Tibia Component Sizing in Total Knee Arthroplasty.

    PubMed

    Ren, Anna N; Neher, Robert E; Bell, Tyler; Grimm, James

    2018-06-01

    Preoperative planning is important to achieve successful implantation in primary total knee arthroplasty (TKA). However, traditional TKA templating techniques are not accurate enough to predict the component size to a very close range. With the goal of developing a general predictive statistical model using patient demographic information, ordinal logistic regression was applied to build a proportional odds model to predict the tibia component size. The study retrospectively collected the data of 1992 primary Persona Knee System TKA procedures. Of them, 199 procedures were randomly selected as testing data and the rest of the data were randomly partitioned between model training data and model evaluation data with a ratio of 7:3. Different models were trained and evaluated on the training and validation data sets after data exploration. The final model had patient gender, age, weight, and height as independent variables and predicted the tibia size within 1 size difference 96% of the time on the validation data, 94% of the time on the testing data, and 92% on a prospective cadaver data set. The study results indicated the statistical model built by ordinal logistic regression can increase the accuracy of tibia sizing information for Persona Knee preoperative templating. This research shows statistical modeling may be used with radiographs to dramatically enhance the templating accuracy, efficiency, and quality. In general, this methodology can be applied to other TKA products when the data are applicable.
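
    A hedged sketch of a proportional-odds model for component size from demographics, using statsmodels' OrderedModel on synthetic data; the coefficients, size levels, and records are invented (the real model was fit to 1992 Persona Knee cases).

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(4)
      n = 1000
      # Synthetic patient demographics (illustrative values only)
      df = pd.DataFrame({
          "female": rng.integers(0, 2, n),
          "age": rng.normal(68, 9, n),
          "weight": rng.normal(85, 18, n),
          "height": rng.normal(170, 10, n),
      })
      # Latent size score driven mostly by height/weight, binned into 5 sizes
      latent = 0.06 * df.height + 0.02 * df.weight - 1.5 * df.female + rng.normal(0, 1, n)
      df["size"] = pd.cut(latent, 5, labels=False)

      cols = ["female", "age", "weight", "height"]
      model = OrderedModel(df["size"], df[cols], distr="logit")
      res = model.fit(method="bfgs", disp=False)
      pred = np.asarray(res.predict(df[cols])).argmax(axis=1)
      print("within one size:", np.mean(np.abs(pred - df["size"]) <= 1))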

  4. The role of drop velocity in statistical spray description

    NASA Technical Reports Server (NTRS)

    Groeneweg, J. F.; El-Wakil, M. M.; Myers, P. S.; Uyehara, O. A.

    1978-01-01

    The justification for describing a spray by treating drop velocity as a random variable on an equal statistical basis with drop size was studied experimentally. A double exposure technique using fluorescent drop photography was used to make size and velocity measurements at selected locations in a steady ethanol spray formed by a swirl atomizer. The size velocity data were categorized to construct bivariate spray density functions to describe the spray immediately after formation and during downstream propagation. Bimodal density functions were formed by environmental interaction during downstream propagation. Large differences were also found between spatial mass density and mass flux size distribution at the same location.

  5. Sample size, confidence, and contingency judgement.

    PubMed

    Clément, Mélanie; Mercier, Pierre; Pastò, Luigi

    2002-06-01

    According to statistical models, the acquisition function of contingency judgement is due to confidence increasing with sample size. According to associative models, the function reflects the accumulation of associative strength on which the judgement is based. Which view is right? Thirty university students assessed the relation between a fictitious medication and a symptom of skin discoloration in conditions that varied sample size (4, 6, 8 or 40 trials) and contingency (delta P = .20, .40, .60 or .80). Confidence was also collected. Contingency judgement was lower for smaller samples, while confidence level correlated inversely with sample size. This dissociation between contingency judgement and confidence contradicts the statistical perspective.

  6. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    PubMed

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume for increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume for increasing crowd size for concerts. A comparison of similar crowd size for each of the three events showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size and not considering other influences such as event type and duration may give poor results.

  7. Approximate sample size formulas for the two-sample trimmed mean test with unequal variances.

    PubMed

    Luh, Wei-Ming; Guo, Jiin-Huarng

    2007-05-01

    Yuen's two-sample trimmed mean test statistic is one of the most robust methods to apply when variances are heterogeneous. The present study develops formulas for the sample size required for the test. The formulas are applicable for the cases of unequal variances, non-normality and unequal sample sizes. Given the specified alpha and the power (1-beta), the minimum sample size needed by the proposed formulas under various conditions is less than is given by the conventional formulas. Moreover, given a specified size of sample calculated by the proposed formulas, simulation results show that Yuen's test can achieve statistical power which is generally superior to that of the approximate t test. A numerical example is provided.
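
    For reference, Yuen's procedure itself is available in SciPy (version 1.7 or later) through the trim argument of ttest_ind; the sketch applies it to synthetic samples with unequal variances and unequal sizes.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(5)
      a = rng.normal(0.0, 1.0, 30)
      b = rng.normal(0.8, 3.0, 45)  # unequal variance, unequal n

      # Yuen's test: Welch-type t on 20%-trimmed means
      res = stats.ttest_ind(a, b, equal_var=False, trim=0.2)
      print(res.statistic, res.pvalue)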

  8. Sample Size Estimation: The Easy Way

    ERIC Educational Resources Information Center

    Weller, Susan C.

    2015-01-01

    This article presents a simple approach to making quick sample size estimates for basic hypothesis tests. Although there are many sources available for estimating sample sizes, methods are not often integrated across statistical tests, levels of measurement of variables, or effect sizes. A few parameters are required to estimate sample sizes and…

  9. Statistical test for ΔρDCCA cross-correlation coefficient

    NASA Astrophysics Data System (ADS)

    Guedes, E. F.; Brito, A. A.; Oliveira Filho, F. M.; Fernandez, B. F.; de Castro, A. P. N.; da Silva Filho, A. M.; Zebende, G. F.

    2018-07-01

    In this paper we propose a new statistical test for ΔρDCCA, the Detrended Cross-Correlation Coefficient Difference, a tool to measure contagion/interdependence effects in time series of size N at different time scales n. For this purpose we analyzed simulated and real time series. The results show that the statistical significance of ΔρDCCA depends on the size N and the time scale n, and that we can define critical values for this dependency at the 90%, 95%, and 99% confidence levels, as shown in this paper.

  10. Statistical Analysis Techniques for Small Sample Sizes

    NASA Technical Reports Server (NTRS)

    Navard, S. E.

    1984-01-01

    The small-sample-size problem encountered in the analysis of space-flight data is examined. Because of the small amount of data available, careful analyses are essential to extract the maximum amount of information with acceptable accuracy. Statistical analysis of small samples is described. The background material necessary for understanding statistical hypothesis testing is outlined, and the various tests which can be done on small samples are explained. Emphasis is on the underlying assumptions of each test and on considerations needed to choose the most appropriate test for a given type of analysis.

  11. Enhancing pediatric clinical trial feasibility through the use of Bayesian statistics.

    PubMed

    Huff, Robin A; Maca, Jeff D; Puri, Mala; Seltzer, Earl W

    2017-11-01

    Background: Pediatric clinical trials commonly experience recruitment challenges, including a limited number of patients and investigators, inclusion/exclusion criteria that further reduce the patient pool, and a competitive research landscape created by pediatric regulatory commitments. To overcome these challenges, innovative approaches are needed. Methods: This article explores the use of Bayesian statistics to improve pediatric trial feasibility, using pediatric Type-2 diabetes as an example. Data for six therapies approved for adults were used to perform simulations to determine the impact on pediatric trial size. Results: When the number of adult patients contributing to the simulation was assumed to be the same as the number of patients to be enrolled in the pediatric trial, the pediatric trial size was reduced by 75-78% when compared with a frequentist statistical approach, but was associated with a 34-45% false-positive rate. In subsequent simulations, greater control was exerted over the false-positive rate by decreasing the contribution of the adult data. A 30-33% reduction in trial size was achieved when false positives were held to less than 10%. Conclusion: Reducing the trial size through the use of Bayesian statistics would facilitate completion of pediatric trials, enabling drugs to be labeled appropriately for children.
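
    A minimal sketch of the borrowing idea with a conjugate beta-binomial model: adult data form an informative prior that is downweighted (here via a power-prior factor) before being combined with a small pediatric sample. The counts and the discount factor are invented; the actual simulations used data from six approved adult therapies.

      from scipy import stats

      # Adult evidence, downweighted by a power-prior factor a0 in [0, 1];
      # smaller a0 reduces the adult contribution and the false-positive risk.
      adult_x, adult_n, a0 = 120, 300, 0.3
      prior_a = 1 + a0 * adult_x
      prior_b = 1 + a0 * (adult_n - adult_x)

      # Small pediatric trial combined with the discounted prior
      ped_x, ped_n = 18, 40
      posterior = stats.beta(prior_a + ped_x, prior_b + ped_n - ped_x)
      print("posterior mean:", posterior.mean())
      print("P(response rate > 0.30):", 1 - posterior.cdf(0.30))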

  12. Characterization of Inclusion Populations in Mn-Si Deoxidized Steel

    NASA Astrophysics Data System (ADS)

    García-Carbajal, Alfonso; Herrera-Trejo, Martín; Castro-Cedeño, Edgar-Ivan; Castro-Román, Manuel; Martinez-Enriquez, Arturo-Isaias

    2017-12-01

    Four plant heats of Mn-Si deoxidized steel were conducted to follow the evolution of the inclusion population through ladle furnace (LF) treatment and subsequent vacuum treatment (VT). The liquid steel was sampled, and the chemical composition and size distribution of the inclusion populations were characterized. The Gumbel generalized extreme-value (GEV) and generalized Pareto (GP) distributions were used for the statistical analysis of the inclusion size distributions. The inclusions found at the beginning of the LF treatment were mostly fully liquid SiO2-Al2O3-MnO inclusions, which then evolved into fully liquid SiO2-Al2O3-CaO-MgO and partly liquid SiO2-CaO-MgO-(Al2O3-MgO) inclusions detected at the end of the VT. The final fully liquid inclusions had a desirable chemical composition for plastic behavior in subsequent metallurgical operations. The GP distribution was found to be undesirable for statistical analysis. The GEV distribution approach led to shape parameter values different from the zero value hypothesized from the Gumbel distribution. According to the GEV approach, some of the final inclusion size distributions had statistically significant differences, whereas the Gumbel approach predicted no statistically significant differences. The heats were organized according to indicators of inclusion cleanliness and a statistical comparison of the size distributions.
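
    The GEV-versus-Gumbel question reduces to whether the fitted shape parameter differs from zero. A hedged sketch with scipy.stats.genextreme on synthetic maxima: note that SciPy's shape parameter c equals minus the conventional GEV xi, and c = 0 recovers the Gumbel special case. The parameter values are invented.

      from scipy import stats

      # Synthetic block maxima of inclusion sizes (microns), illustrative only
      maxima = stats.genextreme.rvs(c=-0.15, loc=8.0, scale=2.0, size=200,
                                    random_state=6)

      shape, loc, scale = stats.genextreme.fit(maxima)
      # A fitted c far from 0 argues against the Gumbel hypothesis,
      # echoing the paper's finding for the GEV approach.
      print(f"c = {shape:.3f}, loc = {loc:.2f}, scale = {scale:.2f}")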

  13. Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling

    PubMed Central

    Fiori, Simone

    2007-01-01

    Bivariate statistical modeling from incomplete data is a useful statistical tool that allows one to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are "holes" in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to show relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure. PMID:18566641

  14. A random-sum Wilcoxon statistic and its application to analysis of ROC and LROC data.

    PubMed

    Tang, Liansheng Larry; Balakrishnan, N

    2011-01-01

    The Wilcoxon-Mann-Whitney statistic is commonly used for a distribution-free comparison of two groups. One requirement for its use is that the sample sizes of the two groups are fixed. This is violated in some of the applications such as medical imaging studies and diagnostic marker studies; in the former, the violation occurs since the number of correctly localized abnormal images is random, while in the latter the violation is due to some subjects not having observable measurements. For this reason, we propose here a random-sum Wilcoxon statistic for comparing two groups in the presence of ties, and derive its variance as well as its asymptotic distribution for large sample sizes. The proposed statistic includes the regular Wilcoxon rank-sum statistic. Finally, we apply the proposed statistic for summarizing location response operating characteristic data from a liver computed tomography study, and also for summarizing diagnostic accuracy of biomarker data.

  15. Exploring Explanations of Subglacial Bedform Sizes Using Statistical Models

    PubMed Central

    Kougioumtzoglou, Ioannis A.; Stokes, Chris R.; Smith, Michael J.; Clark, Chris D.; Spagnolo, Matteo S.

    2016-01-01

    Sediments beneath modern ice sheets exert a key control on their flow, but are largely inaccessible except through geophysics or boreholes. In contrast, palaeo-ice sheet beds are accessible, and typically characterised by numerous bedforms. However, the interaction between bedforms and ice flow is poorly constrained and it is not clear how bedform sizes might reflect ice flow conditions. To better understand this link we present a first exploration of a variety of statistical models to explain the size distribution of some common subglacial bedforms (i.e., drumlins, ribbed moraine, MSGL). By considering a range of models, constructed to reflect key aspects of the physical processes, it is possible to infer that the size distributions are most effectively explained when the dynamics of ice-water-sediment interaction associated with bedform growth is fundamentally random. A ‘stochastic instability’ (SI) model, which integrates random bedform growth and shrinking through time with exponential growth, is preferred and is consistent with other observations of palaeo-bedforms and geophysical surveys of active ice sheets. Furthermore, we give a proof-of-concept demonstration that our statistical approach can bridge the gap between geomorphological observations and physical models, directly linking measurable size-frequency parameters to properties of ice sheet flow (e.g., ice velocity). Moreover, statistically developing existing models as proposed allows quantitative predictions to be made about sizes, making the models testable; a first illustration of this is given for a hypothesised repeat geophysical survey of bedforms under active ice. Thus, we further demonstrate the potential of size-frequency distributions of subglacial bedforms to assist the elucidation of subglacial processes and better constrain ice sheet models. PMID:27458921

  16. Explanation of Two Anomalous Results in Statistical Mediation Analysis.

    PubMed

    Fritz, Matthew S; Taylor, Aaron B; Mackinnon, David P

    2012-01-01

    Previous studies of different methods of testing mediation models have consistently found two anomalous results. The first result is elevated Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap tests not found in nonresampling tests or in resampling tests that did not include a bias correction. This is of special concern as the bias-corrected bootstrap is often recommended and used due to its higher statistical power compared with other tests. The second result is statistical power reaching an asymptote far below 1.0 and in some conditions even declining slightly as the size of the relationship between X and M, a, increased. Two computer simulations were conducted to examine these findings in greater detail. Results from the first simulation found that the increased Type I error rates for the bias-corrected and accelerated bias-corrected bootstrap are a function of an interaction between the size of the individual paths making up the mediated effect and the sample size, such that elevated Type I error rates occur when the sample size is small and the effect size of the nonzero path is medium or larger. Results from the second simulation found that stagnation and decreases in statistical power as a function of the effect size of the a path occurred primarily when the path between M and Y, b, was small. Two empirical mediation examples are provided using data from a steroid prevention and health promotion program aimed at high school football players (Athletes Training and Learning to Avoid Steroids; Goldberg et al., 1996), one to illustrate a possible Type I error for the bias-corrected bootstrap test and a second to illustrate a loss in power related to the size of a. Implications of these findings are discussed.

  17. Stable statistical representations facilitate visual search.

    PubMed

    Corbett, Jennifer E; Melcher, David

    2014-10-01

    Observers represent the average properties of object ensembles even when they cannot identify individual elements. To investigate the functional role of ensemble statistics, we examined how modulating statistical stability affects visual search. We varied the mean and/or individual sizes of an array of Gabor patches while observers searched for a tilted target. In "stable" blocks, the mean and/or local sizes of the Gabors were constant over successive displays, whereas in "unstable" baseline blocks they changed from trial to trial. Although there was no relationship between the context and the spatial location of the target, observers found targets faster (as indexed by faster correct responses and fewer saccades) as the global mean size became stable over several displays. Building statistical stability also facilitated scanning the scene, as measured by larger saccadic amplitudes, faster saccadic reaction times, and shorter fixation durations. These findings suggest a central role for peripheral visual information, creating context to free resources for detailed processing of salient targets and maintaining the illusion of visual stability.

  18. General herpetological collecting is size-based for five Pacific lizards

    USGS Publications Warehouse

    Rodda, Gordon H.; Yackel Adams, Amy A.; Campbell, Earl W.; Fritts, Thomas H.

    2015-01-01

    Accurate estimation of a species’ size distribution is a key component of characterizing its ecology, evolution, physiology, and demography. We compared the body size distributions of five Pacific lizards (Carlia ailanpalai, Emoia caeruleocauda, Gehyra mutilata, Hemidactylus frenatus, and Lepidodactylus lugubris) from general herpetological collecting (including visual surveys and glue boards) with those from complete censuses obtained by total removal. All species exhibited the same pattern: general herpetological collecting undersampled juveniles and oversampled mid-sized adults. The bias was greatest for the smallest juveniles and was not statistically evident for newly maturing and very large adults. All of the true size distributions of these continuously breeding species were skewed heavily toward juveniles, more so than the detections obtained from general collecting. A strongly skewed size distribution is not well characterized by the mean or maximum, though those are the statistics routinely reported for species’ sizes. We found body mass to be distributed more symmetrically than was snout–vent length, providing an additional rationale for collecting and reporting that size measure.

  19. Modified Distribution-Free Goodness-of-Fit Test Statistic.

    PubMed

    Chun, So Yeon; Browne, Michael W; Shapiro, Alexander

    2018-03-01

    Covariance structure analysis and its structural equation modeling extensions have become one of the most widely used methodologies in social sciences such as psychology, education, and economics. An important issue in such analysis is to assess the goodness of fit of a model under analysis. One of the most popular test statistics used in covariance structure analysis is the asymptotically distribution-free (ADF) test statistic introduced by Browne (Br J Math Stat Psychol 37:62-83, 1984). The ADF statistic can be used to test models without any specific distribution assumption (e.g., multivariate normal distribution) of the observed data. Despite its advantage, it has been shown in various empirical studies that unless sample sizes are extremely large, this ADF statistic could perform very poorly in practice. In this paper, we provide a theoretical explanation for this phenomenon and further propose a modified test statistic that improves the performance in samples of realistic size. The proposed statistic deals with the possible ill-conditioning of the involved large-scale covariance matrices.

  20. Improvement on Fermionic properties and new isotope production in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Wang, Ning; Wu, Tong; Zeng, Jie; Yang, Yongxu; Ou, Li

    2016-06-01

    By considering momentum transfer in the Fermi constraint procedure, the stability of the initial nuclei and fragments produced in heavy-ion collisions can be further improved in quantum molecular dynamics simulations. Cases of phase-space occupation probability larger than one are effectively reduced with the proposed procedure. Simultaneously, energy conservation is better preserved for both individual nuclei and heavy-ion reactions. With the revised version of the improved quantum molecular dynamics model, the fusion excitation functions of 16O+186W and the central collisions of Au+Au at 35 AMeV are re-examined. The fusion cross sections at sub-barrier energies and the charge distribution of fragments are relatively better reproduced due to the reduction of spurious nucleon emission. The charge and isotope distributions of fragments in Xe+Sn, U+U and Zr+Sn at intermediate energies are also predicted. More unmeasured, extremely neutron-rich fragments with Z = 16-28 are observed in the central collisions of 238U+238U than in those of 96Zr+124Sn, which indicates that multi-fragmentation of U+U may offer a fruitful pathway to new neutron-rich isotopes.

  1. The relative effects of habitat loss and fragmentation on population genetic variation in the red-cockaded woodpecker (Picoides borealis).

    PubMed

    Bruggeman, Douglas J; Wiegand, Thorsten; Fernández, Néstor

    2010-09-01

    The relative influence of habitat loss, fragmentation and matrix heterogeneity on the viability of populations is a critical area of conservation research that remains unresolved. Using simulation modelling, we provide an analysis of the influence both patch size and patch isolation have on abundance, effective population size (N(e)) and F(ST). An individual-based, spatially explicit population model based on 15 years of field work on the red-cockaded woodpecker (Picoides borealis) was applied to different landscape configurations. The variation in landscape patterns was summarized using spatial statistics based on O-ring statistics. By regressing demographic and genetic attributes that emerged across the landscape treatments against the proportion of total habitat and O-ring statistics, we show that O-ring statistics provide an explicit link between population processes, habitat area, and critical thresholds of fragmentation that affect those processes. Spatial distances among land cover classes that affect biological processes translated into critical scales at which the measures of landscape structure correlated best with genetic indices. Therefore, our study infers pattern from process, which contrasts with past studies of landscape genetics. We found that population genetic structure was more strongly affected by fragmentation than population size, which suggests that examining only population size may limit recognition of fragmentation effects that erode genetic variation. If effective population size is used to set recovery goals for endangered species, then habitat fragmentation effects may be sufficiently strong to prevent evaluation of recovery based on the ratio of census:effective population size alone.

  2. Sample Size in Clinical Cardioprotection Trials Using Myocardial Salvage Index, Infarct Size, or Biochemical Markers as Endpoint.

    PubMed

    Engblom, Henrik; Heiberg, Einar; Erlinge, David; Jensen, Svend Eggert; Nordrehaug, Jan Erik; Dubois-Randé, Jean-Luc; Halvorsen, Sigrun; Hoffmann, Pavel; Koul, Sasha; Carlsson, Marcus; Atar, Dan; Arheden, Håkan

    2016-03-09

    Cardiac magnetic resonance (CMR) can quantify myocardial infarct (MI) size and myocardium at risk (MaR), enabling assessment of myocardial salvage index (MSI). We assessed how MSI impacts the number of patients needed to reach statistical power in relation to MI size alone and levels of biochemical markers in clinical cardioprotection trials, and how scan day affects sample size. Controls (n=90) from the recent CHILL-MI and MITOCARE trials were included. MI size, MaR, and MSI were assessed from CMR. High-sensitivity troponin T (hsTnT) and creatine kinase isoenzyme MB (CKMB) levels were assessed in CHILL-MI patients (n=50). Utilizing the distributions of these variables, 100 000 clinical trials were simulated for calculation of the sample size required to reach sufficient power. For a treatment effect of 25% decrease in outcome variables, 50 patients were required in each arm using MSI compared to 93, 98, 120, 141, and 143 for MI size alone, hsTnT (area under the curve [AUC] and peak), and CKMB (AUC and peak) in order to reach a power of 90%. If the average CMR scan day differs by 1 day between treatment and control arms, the sample size needs to be increased by 54% (77 vs. 50) to avoid scan-day bias masking a treatment effect of 25%. Sample size in cardioprotection trials can be reduced 46% to 65% without compromising statistical power when using MSI by CMR as an outcome variable instead of MI size alone or biochemical markers. It is essential to ensure lack of bias in scan day between treatment and control arms to avoid compromising statistical power. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.

  3. Analysis of Longitudinal Outcome Data with Missing Values in Total Knee Arthroplasty.

    PubMed

    Kang, Yeon Gwi; Lee, Jang Taek; Kang, Jong Yeal; Kim, Ga Hye; Kim, Tae Kyun

    2016-01-01

    We sought to determine the influence of missing data on the statistical results, and to determine which statistical method is most appropriate for the analysis of longitudinal outcome data of TKA with missing values among repeated measures ANOVA, the generalized estimating equation (GEE) and mixed effects model repeated measures (MMRM). Data sets with missing values were generated with different proportions of missing data, sample sizes and missing-data generation mechanisms. Each data set was analyzed with the three statistical methods. The influence of missing data was greater with a higher proportion of missing data and a smaller sample size. MMRM tended to show the least change in the statistics. When missing values were generated by a 'missing not at random' mechanism, no statistical method could fully avoid deviations in the results. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Pupil Size in Outdoor Environments

    DTIC Science & Technology

    2007-04-06

    No abstract is available for this record; the indexed excerpt preserves only table captions from the report: descriptive statistics for pupils measured over a luminance range, sample sizes in each stratum for all pupil measurements, and descriptive statistics stratified by eye color and by gender.

  5. Inferring Demographic History Using Two-Locus Statistics.

    PubMed

    Ragsdale, Aaron P; Gutenkunst, Ryan N

    2017-06-01

    Population demographic history may be learned from contemporary genetic variation data. Methods based on aggregating the statistics of many single loci into an allele frequency spectrum (AFS) have proven powerful, but such methods ignore potentially informative patterns of linkage disequilibrium (LD) between neighboring loci. To leverage such patterns, we developed a composite-likelihood framework for inferring demographic history from aggregated statistics of pairs of loci. Using this framework, we show that two-locus statistics are more sensitive to demographic history than single-locus statistics such as the AFS. In particular, two-locus statistics escape the notorious confounding of depth and duration of a bottleneck, and they provide a means to estimate effective population size based on the recombination rather than mutation rate. We applied our approach to a Zambian population of Drosophila melanogaster. Notably, using both single- and two-locus statistics, we inferred a substantially lower ancestral effective population size than previous works and did not infer a bottleneck history. Together, our results demonstrate the broad potential for two-locus statistics to enable powerful population genetic inference. Copyright © 2017 by the Genetics Society of America.

  6. Appropriate Domain Size for Groundwater Flow Modeling with a Discrete Fracture Network Model.

    PubMed

    Ji, Sung-Hoon; Koh, Yong-Kwon

    2017-01-01

    When a discrete fracture network (DFN) is constructed from statistical conceptualization, uncertainty in simulating the hydraulic characteristics of a fracture network can arise due to the domain size. In this study, the appropriate domain size, where less significant uncertainty in the stochastic DFN model is expected, was suggested for the Korea Atomic Energy Research Institute Underground Research Tunnel (KURT) site. The stochastic DFN model for the site was established, and the appropriate domain size was determined with the density of the percolating cluster and the percolation probability using the stochastically generated DFNs for various domain sizes. The applicability of the appropriate domain size to our study site was evaluated by comparing the statistical properties of stochastically generated fractures of varying domain sizes and estimating the uncertainty in the equivalent permeability of the generated DFNs. Our results show that the uncertainty of the stochastic DFN model is acceptable when the modeling domain is larger than the determined appropriate domain size, and the appropriate domain size concept is applicable to our study site. © 2016, National Ground Water Association.

  7. Interpreting and Reporting Effect Sizes in Research Investigations.

    ERIC Educational Resources Information Center

    Tapia, Martha; Marsh, George E., II

    Since 1994, the American Psychological Association (APA) has advocated the inclusion of effect size indices in reporting research to elucidate the statistical significance of studies based on sample size. In 2001, the fifth edition of the APA "Publication Manual" stressed the importance of including an index of effect size to clarify…

  8. Effect Sizes in Gifted Education Research

    ERIC Educational Resources Information Center

    Gentry, Marcia; Peters, Scott J.

    2009-01-01

    Recent calls for reporting and interpreting effect sizes have been numerous, with the 5th edition of the "Publication Manual of the American Psychological Association" (2001) calling for the inclusion of effect sizes to interpret quantitative findings. Many top journals have required that effect sizes accompany claims of statistical significance.…

  9. Experimental design, power and sample size for animal reproduction experiments.

    PubMed

    Chapman, Phillip L; Seidel, George E

    2008-01-01

    The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
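
    As an illustration of the kind of power calculation the paper walks through with SAS, the short Python sketch below solves the same two problems (sample size needed per group, and power achieved at a fixed n); the effect size, alpha, and power targets are invented for illustration and are not taken from the paper.

        # Hypothetical power calculations for a two-arm design; numbers are
        # illustrative assumptions, not values from the paper.
        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()

        # Per-group sample size to detect a medium effect (Cohen's d = 0.5)
        # in a two-sided two-sample t-test with alpha = 0.05 and 80% power.
        n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
        print(f"n per group: {n_per_group:.1f}")  # about 64 per group

        # Conversely, the power achieved with only 30 animals per group.
        power = analysis.solve_power(effect_size=0.5, nobs1=30, alpha=0.05)
        print(f"power with n = 30: {power:.2f}")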

  10. Studies in Support of the Application of Statistical Theory to Design and Evaluation of Operational Tests. Annex D. An Application of Bayesian Statistical Methods in the Determination of Sample Size for Operational Testing in the U.S. Army

    DTIC Science & Technology

    1977-07-01

    No abstract is available for this record; the indexed excerpt preserves only fragments of FORTRAN source code from a Bayesian sample-size program, including comments defining the utility of experiments of different sizes and a check for a termination condition.

  11. The most dangerous hospital or the most dangerous equation?

    PubMed

    Tu, Yu-Kang; Gilthorpe, Mark S

    2007-11-15

    Hospital mortality rates are one of the most frequently selected indicators for measuring the performance of NHS Trusts. A recent article in a national newspaper named the hospital with the highest or lowest mortality in the 2005/6 financial year; a report by the organization Dr Foster Intelligence provided information with regard to the performance of all NHS Trusts in England. Basic statistical theory and computer simulations were used to explore the relationship between the variations in the performance of NHS Trusts and the sizes of the Trusts. Data of hospital standardised mortality ratio (HSMR) of 152 English NHS Trusts for 2005/6 were re-analysed. A close examination of the information reveals a pattern which is consistent with a statistical phenomenon, discovered by the French mathematician de Moivre nearly 300 years ago, described in every introductory statistics textbook: namely that variation in performance indicators is expected to be greater in small Trusts and smaller in large Trusts. From a statistical viewpoint, the number of deaths in a hospital is not in proportion to the size of the hospital, but is proportional to the square root of its size. Therefore, it is not surprising to note that small hospitals are more likely to occur at the top and the bottom of league tables, whilst mortality rates are independent of hospital sizes. This statistical phenomenon needs to be taken into account in the comparison of hospital Trusts performance, especially with regard to policy decisions.
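
    The "dangerous equation" the authors invoke is de Moivre's square-root law for sampling variation, stated here in its usual textbook form rather than quoted from the paper:

        \mathrm{SE}(\bar{x}) = \frac{\sigma}{\sqrt{n}}, \qquad
        \mathrm{SD}(\text{death count}) = \sqrt{n\,p(1-p)} \propto \sqrt{n}, \qquad
        \mathrm{SD}(\text{mortality rate}) = \sqrt{\frac{p(1-p)}{n}} \propto \frac{1}{\sqrt{n}}

    For a hospital treating n patients with death probability p, the death count grows like n but its random scatter grows only like the square root of n, so the observed mortality rate of a small hospital fluctuates far more, pushing small Trusts toward both extremes of a league table.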

  12. Box-Cox transformation of firm size data in statistical analysis

    NASA Astrophysics Data System (ADS)

    Chen, Ting Ting; Takaishi, Tetsuya

    2014-03-01

    Firm size data usually do not show the normality that is often assumed in statistical analysis such as regression analysis. In this study we focus on two firm size measures: the number of employees and sales. Those data deviate considerably from a normal distribution. To improve the normality of those data we transform them by the Box-Cox transformation with appropriate parameters. The Box-Cox transformation parameters are determined so that the transformed data best show the kurtosis of a normal distribution. It is found that the two firm size measures transformed by the Box-Cox transformation show strong linearity. This indicates that the number of employees and sales have similar properties as firm size indicators. The Box-Cox parameters obtained for the firm size data are found to be very close to zero. In this case the Box-Cox transformation is approximately a log-transformation. This suggests that the firm size data we used are approximately log-normally distributed.
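
    A minimal sketch of the transformation step described above, using synthetic log-normal "firm size" data in place of the study's data; note that scipy chooses lambda by maximum likelihood, whereas the paper tunes it to match the kurtosis of a normal distribution.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        # Synthetic stand-in for a heavily right-skewed firm-size variable.
        employees = rng.lognormal(mean=4.0, sigma=1.2, size=5000)

        transformed, lmbda = stats.boxcox(employees)
        print(f"estimated lambda: {lmbda:.3f}")  # near 0 => ~log-transformation
        print(f"skewness before: {stats.skew(employees):.2f}, "
              f"after: {stats.skew(transformed):.2f}")

    For genuinely log-normal data the fitted lambda comes out near zero, mirroring the paper's finding that the Box-Cox transformation of firm size data is approximately a log-transformation.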

  13. Reproductive potential of Spodoptera eridania (Stoll) (Lepidoptera: Noctuidae) in the laboratory: effect of multiple couples and the size.

    PubMed

    Specht, A; Montezano, D G; Sosa-Gómez, D R; Paula-Moraes, S V; Roque-Specht, V F; Barros, N M

    2016-06-01

    This study aimed to evaluate the effect of keeping three couples in the same cage, and of the size of adults that emerged from small, medium-sized and large pupae (278.67 mg, 333.20 mg and 381.58 mg, respectively), on the reproductive potential of S. eridania (Stoll, 1782) adults under controlled conditions (25 ± 1 °C, 70% RH and 14 hour photophase). We evaluated the survival, number of copulations, fecundity and fertility of the adult females. The survival of females from these different pupal sizes did not differ statistically, but the survival of males from large pupae was statistically shorter than that of males from small pupae. Fecundity differed significantly and correlated positively with size. The number of effective copulations (spermatophores) and fertility did not vary significantly with pupal size. Our results emphasize the importance of reporting the number of copulations and the size of the insects when reproductive parameters are compared.

  14. Robust Covariate-Adjusted Log-Rank Statistics and Corresponding Sample Size Formula for Recurrent Events Data

    PubMed Central

    Song, Rui; Kosorok, Michael R.; Cai, Jianwen

    2009-01-01

    Summary Recurrent events data are frequently encountered in clinical trials. This article develops robust covariate-adjusted log-rank statistics applied to recurrent events data with arbitrary numbers of events under independent censoring and the corresponding sample size formula. The proposed log-rank tests are robust with respect to different data-generating processes and are adjusted for predictive covariates. It reduces to the Kong and Slud (1997, Biometrika 84, 847–862) setting in the case of a single event. The sample size formula is derived based on the asymptotic normality of the covariate-adjusted log-rank statistics under certain local alternatives and a working model for baseline covariates in the recurrent event data context. When the effect size is small and the baseline covariates do not contain significant information about event times, it reduces to the same form as that of Schoenfeld (1983, Biometrics 39, 499–503) for cases of a single event or independent event times within a subject. We carry out simulations to study the control of type I error and the comparison of powers between several methods in finite samples. The proposed sample size formula is illustrated using data from an rhDNase study. PMID:18162107

  15. How big should a mammal be? A macroecological look at mammalian body size over space and time

    PubMed Central

    Smith, Felisa A.; Lyons, S. Kathleen

    2011-01-01

    Macroecology was developed as a big picture statistical approach to the study of ecology and evolution. By focusing on broadly occurring patterns and processes operating at large spatial and temporal scales rather than on localized and/or fine-scaled details, macroecology aims to uncover general mechanisms operating at organism, population, and ecosystem levels of organization. Macroecological studies typically involve the statistical analysis of fundamental species-level traits, such as body size, area of geographical range, and average density and/or abundance. Here, we briefly review the history of macroecology and use the body size of mammals as a case study to highlight current developments in the field, including the increasing linkage with biogeography and other disciplines. Characterizing the factors underlying the spatial and temporal patterns of body size variation in mammals is a daunting task and moreover, one not readily amenable to traditional statistical analyses. Our results clearly illustrate remarkable regularities in the distribution and variation of mammalian body size across both geographical space and evolutionary time that are related to ecology and trophic dynamics and that would not be apparent without a broader perspective. PMID:21768152

  16. Relation Between Intelligence and Family Size, Position, and Income in Adolescent Girls in Saudi Arabia.

    PubMed

    Osman, Habab; Alahmadi, Maryam; Bakhiet, Salaheldin; Lynn, Richard

    2016-12-01

    Data are reported showing statistically significant negative correlations between intelligence and family size, position, and income in a sample of 604 adolescent girls in Saudi Arabia. There were no statistically significant correlations or associations between intelligence and whether the mother, the father, or both parents were alive or deceased, or whether the parents were living together or divorced. © The Author(s) 2016.

  17. Statistical shear lag model - unraveling the size effect in hierarchical composites.

    PubMed

    Wei, Xiaoding; Filleter, Tobin; Espinosa, Horacio D

    2015-05-01

    Numerous experimental and computational studies have established that the hierarchical structures encountered in natural materials, such as the brick-and-mortar structure observed in sea shells, are essential for achieving defect tolerance. Due to this hierarchy, the mechanical properties of natural materials have a different size dependence compared to that of typical engineered materials. This study aimed to explore size effects on the strength of bio-inspired staggered hierarchical composites and to define the influence of the geometry of constituents on their outstanding defect tolerance capability. A statistical shear lag model is derived by extending the classical shear lag model to account for the statistics of the constituents' strength. A general solution emerges from rigorous mathematical derivations, unifying the various empirical formulations for the fundamental link length used in previous statistical models. The model shows that the staggered arrangement of constituents grants composites a unique size effect on mechanical strength in contrast to homogeneous continuous materials. The model is applied to hierarchical yarns consisting of double-walled carbon nanotube bundles to assess its predictive capabilities for novel synthetic materials. Interestingly, the model predicts that yarn gauge length does not significantly influence the yarn strength, in close agreement with experimental observations. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  18. (Finite) statistical size effects on compressive strength.

    PubMed

    Weiss, Jérôme; Girard, Lucas; Gimbert, Florent; Amitrano, David; Vandembroucq, Damien

    2014-04-29

    The larger structures are, the lower their mechanical strength. Already discussed by Leonardo da Vinci and Edmé Mariotte several centuries ago, size effects on strength remain of crucial importance in modern engineering for the elaboration of safety regulations in structural design or the extrapolation of laboratory results to geophysical field scales. Under tensile loading, statistical size effects are traditionally modeled with a weakest-link approach. One of its prominent results is a prediction of vanishing strength at large scales that can be quantified in the framework of extreme value statistics. Despite a frequent use outside its range of validity, this approach remains the dominant tool in the field of statistical size effects. Here we focus on compressive failure, which concerns a wide range of geophysical and geotechnical situations. We show on historical and recent experimental data that weakest-link predictions are not obeyed. In particular, the mechanical strength saturates at a nonzero value toward large scales. Accounting explicitly for the elastic interactions between defects during the damage process, we build a formal analogy of compressive failure with the depinning transition of an elastic manifold. This critical transition interpretation naturally entails finite-size scaling laws for the mean strength and its associated variability. Theoretical predictions are in remarkable agreement with measurements reported for various materials such as rocks, ice, coal, or concrete. This formalism, which can also be extended to the flowing instability of granular media under multiaxial compression, has important practical consequences for future design rules.
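
    For reference, the weakest-link prediction that the authors show is violated under compression is usually written in the Weibull form (standard notation, not quoted from the paper):

        P_f(\sigma, V) = 1 - \exp\!\left[-\frac{V}{V_0}\left(\frac{\sigma}{\sigma_0}\right)^{m}\right]

    so that the mean strength scales as \bar{\sigma} \propto V^{-1/m} and vanishes as the volume V grows. The saturation at a nonzero strength reported above is what motivates replacing this picture with the depinning-transition analogy and its finite-size scaling laws.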

  19. A New Method for Estimating the Effective Population Size from Allele Frequency Changes

    PubMed Central

    Pollak, Edward

    1983-01-01

    A new procedure is proposed for estimating the effective population size, given that information is available on changes in frequencies of the alleles at one or more independently segregating loci and the population is observed at two or more separate times. Approximate expressions are obtained for the variances of the new statistic, as well as others, also based on allele frequency changes, that have been discussed in the literature. This analysis indicates that the new statistic will generally have a smaller variance than the others. Estimates of effective population sizes and of the standard errors of the estimates are computed for data on two fly populations that have been discussed in earlier papers. In both cases, there is evidence that the effective population size is very much smaller than the minimum census size of the population. PMID:17246147
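
    The general logic of such temporal estimators, sketched here in generic textbook form rather than as Pollak's exact statistic, is that the standardized variance F of allele-frequency change accumulates at a rate of roughly 1/(2Ne) per generation:

        \mathrm{E}[\hat{F}] \approx \frac{t}{2N_e} + \text{(sampling terms)}, \qquad
        \hat{N}_e \approx \frac{t}{2\,(\hat{F} - \text{sampling correction})}

    where t is the number of generations between the two samples; the competing statistics compared in the paper differ mainly in how F is computed from the observed allele frequencies and in how the correction for finite sample sizes is applied, which is what drives the differences in variance.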

  20. Metrological characterization of X-ray diffraction methods at different acquisition geometries for determination of crystallite size in nano-scale materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna

    2013-11-15

    Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software. The results were statistically treated to estimate the significance of the differences in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano and parallel-beam geometries, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software packages (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that despite the fact that the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was a good correlation in size only for crystallites smaller than 50–60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods. • The obtained results were subject to statistical treatment. • Results obtained with Bragg–Brentano and parallel-beam geometries were compared. • The influence of XRD pattern acquisition conditions on the results was estimated. • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
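
    For orientation, the Scherrer equation referred to throughout relates crystallite size D to diffraction line broadening (standard form, not quoted from the paper):

        D = \frac{K\lambda}{\beta\cos\theta}

    where lambda is the X-ray wavelength, beta the instrument-corrected line width (FWHM, in radians), theta the Bragg angle, and K a shape factor typically near 0.9. The breakdown of broadening-based estimates above roughly 50-100 nm, where the size-induced broadening becomes comparable to the instrumental width, is exactly the regime the study quantifies.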

  1. Adaptive interference cancel filter for evoked potential using high-order cumulants.

    PubMed

    Lin, Bor-Shyh; Lin, Bor-Shing; Chong, Fok-Ching; Lai, Feipei

    2004-01-01

    This paper presents evoked potential (EP) processing using an adaptive interference cancellation (AIC) filter with second- and higher-order cumulants. In the conventional ensemble averaging method, experiments must be repeated many times to record the required data. The use of an AIC structure with second-order statistics for processing EPs has proved more efficient than the traditional averaging method, but it is sensitive to both the statistics of the reference signal and the choice of step size. We therefore propose a higher-order-statistics-based AIC method to overcome these disadvantages. The method was tested on somatosensory EPs corrupted with EEG, using a gradient-type algorithm in the AIC filter. Comparisons of AIC filters based on second-, third- and fourth-order statistics are also presented. We observed that the AIC filter with third-order statistics converges better for EP processing and is not sensitive to the selection of step size or reference input.
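
    A minimal second-order-statistics version of such a gradient-type canceller is the classic LMS filter sketched below; the paper's contribution is a third- and fourth-order-cumulant variant of this update, which is not reproduced here, and all signal parameters are invented.

        import numpy as np

        def lms_cancel(primary, reference, n_taps=8, mu=0.01):
            """Subtract the reference-correlated interference from `primary`."""
            w = np.zeros(n_taps)
            out = np.zeros_like(primary)
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]  # tap-delay line
                y = w @ x                          # interference estimate
                e = primary[n] - y                 # error = cleaned signal
                w += mu * e * x                    # gradient (LMS) update
                out[n] = e
            return out

        rng = np.random.default_rng(1)
        eeg = rng.normal(0.0, 1.0, 4000)                     # interference stand-in
        ep = np.sin(2 * np.pi * 7 * np.arange(4000) / 1000)  # evoked-potential stand-in
        cleaned = lms_cancel(ep + eeg, eeg)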

  2. 7 CFR 295.5 - Program statistical reports.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 4 2011-01-01 2011-01-01 false Program statistical reports. 295.5 Section 295.5 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... statistical reports. Current and historical information on FNS food assistance program size, monetary outlays...

  3. 7 CFR 295.5 - Program statistical reports.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 4 2010-01-01 2010-01-01 false Program statistical reports. 295.5 Section 295.5 Agriculture Regulations of the Department of Agriculture (Continued) FOOD AND NUTRITION SERVICE, DEPARTMENT OF... statistical reports. Current and historical information on FNS food assistance program size, monetary outlays...

  4. Group size and nest success in red-cockaded woodpeckers in the West Gulf Coastal Plain: helpers make a difference

    Treesearch

    Richard N. Conner; Daniel Saenz; Richard R. Schaefer; James R. McCormick; D. Craig Rudolph; D. Brent Burt

    2004-01-01

    We studied the relationships between Red-cockaded Woodpecker (Picoides borealis) group size and nest productivity. Red-cockaded Woodpecker group size was positively correlated with fledging success. Although the relationships between woodpecker group size and nest productivity measures were not statistically significant, a pattern of...

  5. Improving Research Clarity and Usefulness with Effect Size Indices as Supplements to Statistical Significance Tests.

    ERIC Educational Resources Information Center

    Thompson, Bruce

    1999-01-01

    A study examined effect-size reporting in 23 quantitative articles reported in "Exceptional Children". Findings reveal that effect sizes are rarely being reported, although exemplary reporting practices were also noted. Reasons why encouragement by the American Psychological Association to report effect size has been ineffective are…

  6. A statistical test of unbiased evolution of body size in birds.

    PubMed

    Bokma, Folmer

    2002-12-01

    Of the approximately 9500 bird species, the vast majority is small-bodied. That is a general feature of evolutionary lineages, also observed for instance in mammals and plants. The avian interspecific body size distribution is right-skewed even on a logarithmic scale. That has previously been interpreted as evidence that body size evolution has been biased. However, a procedure to test for unbiased evolution from the shape of body size distributions was lacking. In the present paper unbiased body size evolution is defined precisely, and a statistical test is developed based on Monte Carlo simulation of unbiased evolution. Application of the test to birds suggests that it is highly unlikely that avian body size evolution has been unbiased as defined. Several possible explanations for this result are discussed. A plausible explanation is that the general model of unbiased evolution assumes that population size and generation time do not affect the evolutionary variability of body size; that is, that micro- and macroevolution are decoupled, which theory suggests is not likely to be the case.
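
    The flavor of such a Monte Carlo test can be sketched as follows, under a deliberately crude null in which each lineage drifts independently in log body size (the paper's null models the branching history explicitly; every number here is invented):

        import numpy as np
        from scipy.stats import skew

        rng = np.random.default_rng(2)
        n_species, n_steps, n_sims, step_sd = 9500, 200, 1000, 0.05

        null_skews = np.empty(n_sims)
        for i in range(n_sims):
            # Endpoint of an unbiased Gaussian random walk of n_steps steps.
            log_sizes = rng.normal(0.0, step_sd * np.sqrt(n_steps), n_species)
            null_skews[i] = skew(log_sizes)

        observed_skew = 0.7  # placeholder for the skew of real avian log body sizes
        p_value = np.mean(null_skews >= observed_skew)
        print(f"one-sided Monte Carlo p = {p_value:.4f}")

    Under unbiased evolution the skewness of log sizes is centred on zero, so a strongly right-skewed observed distribution yields a vanishingly small p-value, mirroring the paper's conclusion.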

  7. Seven ways to increase power without increasing N.

    PubMed

    Hansen, W B; Collins, L M

    1994-01-01

    Many readers of this monograph may wonder why a chapter on statistical power was included. After all, by now the issue of statistical power is in many respects mundane. Everyone knows that statistical power is a central research consideration, and certainly most National Institute on Drug Abuse grantees or prospective grantees understand the importance of including a power analysis in research proposals. However, there is ample evidence that, in practice, prevention researchers are not paying sufficient attention to statistical power. If they were, the findings observed by Hansen (1992) in a recent review of the prevention literature would not have emerged. Hansen (1992) examined statistical power based on 46 cohorts followed longitudinally, using nonparametric assumptions given the subjects' age at posttest and the numbers of subjects. Results of this analysis indicated that, in order for a study to attain 80-percent power for detecting differences between treatment and control groups, the difference between groups at posttest would need to be at least 8 percent (in the best studies) and as much as 16 percent (in the weakest studies). In order for a study to attain 80-percent power for detecting group differences in pre-post change, 22 of the 46 cohorts would have needed relative pre-post reductions of greater than 100 percent. Thirty-three of the 46 cohorts had less than 50-percent power to detect a 50-percent relative reduction in substance use. These results are consistent with other review findings (e.g., Lipsey 1990) that have shown a similar lack of power in a broad range of research topics. Thus, it seems that, although researchers are aware of the importance of statistical power (particularly of the necessity for calculating it when proposing research), they somehow are failing to end up with adequate power in their completed studies. This chapter argues that the failure of many prevention studies to maintain adequate statistical power is due to an overemphasis on sample size (N) as the only, or even the best, way to increase statistical power. It is easy to see how this overemphasis has come about. Sample size is easy to manipulate, has the advantage of being related to power in a straight-forward way, and usually is under the direct control of the researcher, except for limitations imposed by finances or subject availability. Another option for increasing power is to increase the alpha used for hypothesis-testing but, as very few researchers seriously consider significance levels much larger than the traditional .05, this strategy seldom is used. Of course, sample size is important, and the authors of this chapter are not recommending that researchers cease choosing sample sizes carefully. Rather, they argue that researchers should not confine themselves to increasing N to enhance power. It is important to take additional measures to maintain and improve power over and above making sure the initial sample size is sufficient. The authors recommend two general strategies. One strategy involves attempting to maintain the effective initial sample size so that power is not lost needlessly. The other strategy is to take measures to maximize the third factor that determines statistical power: effect size.

  8. Statistical aspects of genetic association testing in small samples, based on selective DNA pooling data in the arctic fox.

    PubMed

    Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna

    2008-01-01

    We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), which originated from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, in hypothesis testing we could not rely on the asymptotic distributions of the tests. Instead, we tried to account for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests, with different approaches for calculating the moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.

  9. Testing homogeneity of proportion ratios for stratified correlated bilateral data in two-arm randomized clinical trials.

    PubMed

    Pei, Yanbo; Tian, Guo-Liang; Tang, Man-Lai

    2014-11-10

    Stratified data analysis is an important research topic in many biomedical studies and clinical trials. In this article, we develop five test statistics for testing the homogeneity of proportion ratios for stratified correlated bilateral binary data based on an equal correlation model assumption. Bootstrap procedures based on these test statistics are also considered. To evaluate the performance of these statistics and procedures, we conduct Monte Carlo simulations to study their empirical sizes and powers under various scenarios. Our results suggest that the procedure based on the score statistic generally performs well and is highly recommended. When the sample size is large, procedures based on the commonly used weighted least-squares estimate and on the logarithmic transformation with the Mantel-Haenszel estimate are recommended, as they do not involve computing maximum likelihood estimates that require iterative algorithms. We also derive approximate sample size formulas based on the recommended test procedures. Finally, we apply the proposed methods to analyze a multi-center randomized clinical trial for scleroderma patients. Copyright © 2014 John Wiley & Sons, Ltd.

  10. Empirical Reference Distributions for Networks of Different Size

    PubMed Central

    Smith, Anna; Calder, Catherine A.; Browning, Christopher R.

    2016-01-01

    Network analysis has become an increasingly prevalent research tool across a vast range of scientific fields. Here, we focus on the particular issue of comparing network statistics, i.e. graph-level measures of network structural features, across multiple networks that differ in size. Although “normalized” versions of some network statistics exist, we demonstrate via simulation why direct comparison is often inappropriate. We consider normalizing network statistics relative to a simple fully parameterized reference distribution and demonstrate via simulation how this is an improvement over direct comparison, but still sometimes problematic. We propose a new adjustment method based on a reference distribution constructed as a mixture model of random graphs which reflect the dependence structure exhibited in the observed networks. We show that using simple Bernoulli models as mixture components in this reference distribution can provide adjusted network statistics that are relatively comparable across different network sizes but still describe interesting features of networks, and that this can be accomplished at relatively low computational expense. Finally, we apply this methodology to a collection of ecological networks derived from the Los Angeles Family and Neighborhood Survey activity location data. PMID:27721556
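
    A minimal sketch of the normalization idea, using a single Bernoulli (Erdős–Rényi) reference model where the paper builds a mixture of such models; the statistic, example graph, and replication count are arbitrary illustrative choices.

        import networkx as nx
        import numpy as np

        def normalized_stat(G, stat=nx.transitivity, n_ref=500, seed=0):
            """z-score of a graph statistic against a density-matched Bernoulli reference."""
            rng = np.random.default_rng(seed)
            n, p = G.number_of_nodes(), nx.density(G)
            ref = [stat(nx.gnp_random_graph(n, p, seed=int(rng.integers(2**31 - 1))))
                   for _ in range(n_ref)]
            return (stat(G) - np.mean(ref)) / np.std(ref)

        G = nx.karate_club_graph()
        print(f"normalized transitivity: {normalized_stat(G):.2f}")

    Because the reference graphs share the observed network's size and density, the resulting z-scores are more nearly comparable across networks of different sizes than the raw statistics are.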

  11. Low statistical power in biomedical science: a review of three human research domains.

    PubMed

    Dumas-Mallet, Estelle; Button, Katherine S; Boraud, Thomas; Gonon, Francois; Munafò, Marcus R

    2017-02-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0-10% or 11-20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.

  12. Low statistical power in biomedical science: a review of three human research domains

    PubMed Central

    Dumas-Mallet, Estelle; Button, Katherine S.; Boraud, Thomas; Gonon, Francois

    2017-01-01

    Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation. PMID:28386409

  13. A Guerilla Guide to Common Problems in ‘Neurostatistics’: Essential Statistical Topics in Neuroscience

    PubMed Central

    Smith, Paul F.

    2017-01-01

    Effective inferential statistical analysis is essential for high quality studies in neuroscience. However, recently, neuroscience has been criticised for the poor use of experimental design and statistical analysis. Many of the statistical issues confronting neuroscience are similar to other areas of biology; however, there are some that occur more regularly in neuroscience studies. This review attempts to provide a succinct overview of some of the major issues that arise commonly in the analyses of neuroscience data. These include: the non-normal distribution of the data; inequality of variance between groups; extensive correlation in data for repeated measurements across time or space; excessive multiple testing; inadequate statistical power due to small sample sizes; pseudo-replication; and an over-emphasis on binary conclusions about statistical significance as opposed to effect sizes. Statistical analysis should be viewed as just another neuroscience tool, which is critical to the final outcome of the study. Therefore, it needs to be done well and it is a good idea to be proactive and seek help early, preferably before the study even begins. PMID:29371855

  14. A Guerilla Guide to Common Problems in 'Neurostatistics': Essential Statistical Topics in Neuroscience.

    PubMed

    Smith, Paul F

    2017-01-01

    Effective inferential statistical analysis is essential for high quality studies in neuroscience. However, recently, neuroscience has been criticised for the poor use of experimental design and statistical analysis. Many of the statistical issues confronting neuroscience are similar to other areas of biology; however, there are some that occur more regularly in neuroscience studies. This review attempts to provide a succinct overview of some of the major issues that arise commonly in the analyses of neuroscience data. These include: the non-normal distribution of the data; inequality of variance between groups; extensive correlation in data for repeated measurements across time or space; excessive multiple testing; inadequate statistical power due to small sample sizes; pseudo-replication; and an over-emphasis on binary conclusions about statistical significance as opposed to effect sizes. Statistical analysis should be viewed as just another neuroscience tool, which is critical to the final outcome of the study. Therefore, it needs to be done well and it is a good idea to be proactive and seek help early, preferably before the study even begins.

  15. Models and Measurements for Multi-Layer Displays

    DTIC Science & Technology

    2006-07-26

    No abstract is available for this record; the indexed excerpt preserves only fragments noting that the observed statistical variation in the measurement data results from laser speckle, a remark on statistical techniques for this type of experiment, and MATLAB-style comments defining trace-width and transistor-size parameters in tens of micrometers.

  16. Statistical distribution of time to crack initiation and initial crack size using service data

    NASA Technical Reports Server (NTRS)

    Heller, R. A.; Yang, J. N.

    1977-01-01

    Crack growth inspection data gathered during the service life of the C-130 Hercules airplane were used in conjunction with a crack propagation rule to estimate the distribution of crack initiation times and of initial crack sizes. A Bayesian statistical approach was used to calculate the fraction of undetected initiation times as a function of the inspection time and the reliability of the inspection procedure used.

  17. Automated grain mapping using wide angle convergent beam electron diffraction in transmission electron microscope for nanomaterials.

    PubMed

    Kumar, Vineet

    2011-12-01

    The grain size statistics, commonly derived from the grain map of a material sample, are important microstructure characteristics that greatly influence its properties. The grain map for nanomaterials is usually obtained manually by visual inspection of the transmission electron microscope (TEM) micrographs because automated methods do not perform satisfactorily. While the visual inspection method provides reliable results, it is a labor intensive process and is often prone to human errors. In this article, an automated grain mapping method is developed using TEM diffraction patterns. The presented method uses wide angle convergent beam diffraction in the TEM. The automated technique was applied on a platinum thin film sample to obtain the grain map and subsequently derive grain size statistics from it. The grain size statistics obtained with the automated method were found in good agreement with the visual inspection method.

  18. An Improved Rank Correlation Effect Size Statistic for Single-Case Designs: Baseline Corrected Tau.

    PubMed

    Tarlow, Kevin R

    2017-07-01

    Measuring treatment effects when an individual's pretreatment performance is improving poses a challenge for single-case experimental designs. It may be difficult to determine whether improvement is due to the treatment or due to the preexisting baseline trend. Tau-U is a popular single-case effect size statistic that purports to control for baseline trend. However, despite its strengths, Tau-U has substantial limitations: Its values are inflated and not bound between -1 and +1, it cannot be visually graphed, and its relatively weak method of trend control leads to unacceptable levels of Type I error wherein ineffective treatments appear effective. An improved effect size statistic based on rank correlation and robust regression, Baseline Corrected Tau, is proposed and field-tested with both published and simulated single-case time series. A web-based calculator for Baseline Corrected Tau is also introduced for use by single-case investigators.
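
    A minimal sketch of the idea with invented data (the published procedure also pretests whether the baseline trend is statistically significant before correcting, which is omitted here): fit a robust Theil-Sen trend to the baseline phase, subtract its projection from the whole series, then take Kendall's tau between phase membership and the corrected scores.

        import numpy as np
        from scipy.stats import kendalltau, theilslopes

        baseline = np.array([2, 3, 3, 4, 5])    # improving pre-treatment trend
        treatment = np.array([7, 8, 8, 9, 10])
        y = np.concatenate([baseline, treatment])
        t = np.arange(len(y))
        phase = np.concatenate([np.zeros(len(baseline)), np.ones(len(treatment))])

        # Robust trend fitted on the baseline only, projected over all sessions.
        slope, intercept, _, _ = theilslopes(baseline, t[:len(baseline)])
        corrected = y - (intercept + slope * t)

        tau, p = kendalltau(phase, corrected)
        print(f"Baseline Corrected Tau = {tau:.2f} (p = {p:.3f})")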

  19. Chapter two: Phenomenology of tsunamis II: scaling, event statistics, and inter-event triggering

    USGS Publications Warehouse

    Geist, Eric L.

    2012-01-01

    Observations related to tsunami catalogs are reviewed and described in a phenomenological framework. An examination of scaling relationships between earthquake size (as expressed by scalar seismic moment and mean slip) and tsunami size (as expressed by mean and maximum local run-up and maximum far-field amplitude) indicates that scaling is significant at the 95% confidence level, although there is uncertainty in how well earthquake size can predict tsunami size (R2 ~ 0.4-0.6). In examining tsunami event statistics, current methods used to estimate the size distribution of earthquakes and landslides and the inter-event time distribution of earthquakes are first reviewed. These methods are adapted to estimate the size and inter-event distribution of tsunamis at a particular recording station. Using a modified Pareto size distribution, the best-fit power-law exponents of tsunamis recorded at nine Pacific tide-gauge stations exhibit marked variation, in contrast to the approximately constant power-law exponent for inter-plate thrust earthquakes. With regard to the inter-event time distribution, significant temporal clustering of tsunami sources is demonstrated. For tsunami sources occurring in close proximity to other sources in both space and time, a physical triggering mechanism, such as static stress transfer, is a likely cause for the anomalous clustering. Mechanisms of earthquake-to-earthquake and earthquake-to-landslide triggering are reviewed. Finally, a modification of statistical branching models developed for earthquake triggering is introduced to describe triggering among tsunami sources.

  20. Analyzing the efficiency of small and medium-sized enterprises of a national technology innovation research and development program.

    PubMed

    Park, Sungmin

    2014-01-01

    This study analyzes the efficiency of small and medium-sized enterprises (SMEs) of a national technology innovation research and development (R&D) program. In particular, an empirical analysis is presented that aims to answer the following question: "Is there a difference in the efficiency between R&D collaboration types and between government R&D subsidy sizes?" Methodologically, the efficiency of a government-sponsored R&D project (i.e., GSP) is measured by Data Envelopment Analysis (DEA), and a nonparametric analysis of variance method, the Kruskal-Wallis (KW) test is adopted to see if the efficiency differences between R&D collaboration types and between government R&D subsidy sizes are statistically significant. This study's major findings are as follows. First, contrary to our hypothesis, when we controlled the influence of government R&D subsidy size, there was no statistically significant difference in the efficiency between R&D collaboration types. However, the R&D collaboration type, "SME-University-Laboratory" Joint-Venture was superior to the others, achieving the largest median and the smallest interquartile range of DEA efficiency scores. Second, the differences in the efficiency were statistically significant between government R&D subsidy sizes, and the phenomenon of diseconomies of scale was identified on the whole. As the government R&D subsidy size increases, the central measures of DEA efficiency scores were reduced, but the dispersion measures rather tended to get larger.
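
    The second-stage group comparison can be sketched as below with invented efficiency scores; computing real DEA scores requires solving a linear program per firm, which is not shown.

        # Hypothetical DEA efficiency scores grouped by subsidy size.
        from scipy.stats import kruskal

        small = [0.91, 0.85, 0.78, 0.88, 0.95, 0.81]
        medium = [0.72, 0.69, 0.80, 0.75, 0.66, 0.77]
        large = [0.58, 0.64, 0.61, 0.70, 0.55, 0.67]

        H, p = kruskal(small, medium, large)
        print(f"Kruskal-Wallis H = {H:.2f}, p = {p:.4f}")

    A small p-value indicates that at least one subsidy-size group differs in its distribution of efficiency scores, which is the pattern behind the diseconomies-of-scale finding above.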

  1. Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.

    PubMed

    Wang, Zuozhen

    2018-01-01

    The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial from a relatively small sample. In this paper, bootstrap sample size estimation for comparing two parallel-design arms on continuous data is presented for each test type (inequality, non-inferiority, superiority, and equivalence). Sample sizes for the identical data were also calculated by mathematical formulas that rest on a normal distribution assumption. The power difference between the two calculation methods is acceptably small for all test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal-distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the outset, and to employ the same statistical method as planned for the subsequent statistical analysis on each bootstrap sample during the sample size estimation, provided historical data are available that are well representative of the population to which the proposed trial will extrapolate.
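
    A minimal sketch of the bootstrap procedure described above, using synthetic skewed pilot data and applying the same Wilcoxon rank-sum test inside the bootstrap loop that would be planned for the trial; all distributions and sizes are invented.

        import numpy as np
        from scipy.stats import ranksums

        rng = np.random.default_rng(3)
        pilot_a = rng.exponential(1.0, 40)   # skewed pilot data, arm A
        pilot_b = rng.exponential(1.6, 40)   # arm B with a shifted scale

        def bootstrap_power(a, b, n_per_arm, n_boot=2000, alpha=0.05):
            """Rejection rate of the planned test over bootstrap resamples."""
            hits = 0
            for _ in range(n_boot):
                ra = rng.choice(a, size=n_per_arm, replace=True)
                rb = rng.choice(b, size=n_per_arm, replace=True)
                if ranksums(ra, rb).pvalue < alpha:
                    hits += 1
            return hits / n_boot

        for n in (30, 60, 90):
            est = bootstrap_power(pilot_a, pilot_b, n)
            print(f"n = {n:3d} per arm -> estimated power {est:.2f}")

    The smallest n whose estimated power clears the target (say 0.80 or 0.90) is then taken as the required sample size per arm.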

  2. Supervised classification in the presence of misclassified training data: a Monte Carlo simulation study in the three group case.

    PubMed

    Bolin, Jocelyn Holden; Finch, W Holmes

    2014-01-01

    Statistical classification of phenomena into observed groups is very common in the social and behavioral sciences. Statistical classification methods, however, are affected by the characteristics of the data under study, and classification can be further complicated by initial misclassification of the observed groups. The purpose of this study is to investigate the impact of initial training-data misclassification on several statistical classification and data mining techniques. Misclassification conditions in the three-group case are simulated, and results are presented in terms of overall as well as subgroup classification accuracy. Results show decreased classification accuracy as sample size, group separation, and group size ratio decrease and as the misclassification percentage increases, with random forests demonstrating the highest accuracy across conditions.
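
    A toy version of such a misclassification simulation, assuming scikit-learn's random forest and a synthetic three-group dataset; the noise levels and dataset parameters are illustrative only.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=900, n_features=10, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for noise in (0.0, 0.1, 0.2, 0.3):
    y_noisy = y_tr.copy()
    flip = rng.random(y_noisy.size) < noise
    # Reassign a random (possibly unchanged) label to the flipped cases.
    y_noisy[flip] = rng.integers(0, 3, flip.sum())
    acc = RandomForestClassifier(random_state=0).fit(X_tr, y_noisy).score(X_te, y_te)
    print(f"training misclassification {noise:.0%}: test accuracy {acc:.3f}")
```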

  3. Nonlinear dynamics of the cellular-automaton ``game of Life''

    NASA Astrophysics Data System (ADS)

    Garcia, J. B. C.; Gomes, M. A. F.; Jyh, T. I.; Ren, T. I.; Sales, T. R. M.

    1993-11-01

    A statistical analysis of the ``game of Life'' due to Conway [Berlekamp, Conway, and Guy, Winning Ways for Your Mathematical Plays (Academic, New York, 1982), Vol. 2] is reported. The results are based on extensive computer simulations starting with uncorrelated distributions of live sites at t=0. The number n(s,t) of clusters of s live sites at time t, the mean cluster size s̄(t), and the diversity of sizes, among other statistical functions, are obtained. The dependence of the statistical functions on the initial density of live sites is examined. Several scaling relations as well as static and dynamic critical exponents are found.
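
    Cluster statistics of this kind are straightforward to reproduce on a small lattice; the wrap-around boundary, grid size, initial density, and run length below are arbitrary choices, and scipy's connected-component labeling defines the clusters.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import label

KERNEL = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])

def life_step(grid):
    """One synchronous update of Conway's rules on a toroidal grid."""
    n = convolve2d(grid, KERNEL, mode="same", boundary="wrap")
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

rng = np.random.default_rng(1)
grid = (rng.random((256, 256)) < 0.3).astype(int)  # uncorrelated live sites at t=0
for t in range(200):
    grid = life_step(grid)

labels, n_clusters = label(grid)          # connected clusters of live sites
sizes = np.bincount(labels.ravel())[1:]   # drop the background count
print(f"clusters: {n_clusters}, mean size: {sizes.mean():.2f}, "
      f"diversity (distinct sizes): {np.unique(sizes).size}")
```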

  4. Performance of digital RGB reflectance color extraction for plaque lesion

    NASA Astrophysics Data System (ADS)

    Hashim, Hadzli; Taib, Mohd Nasir; Jailani, Rozita; Sulaiman, Saadiah; Baba, Roshidah

    2005-01-01

    Several clinical psoriasis lesion groups have been studied for digital RGB color feature extraction. Previous work used sample sizes that included all outliers lying beyond set standard deviation distances from the histogram peaks. This paper describes the statistical performance of the RGB model with and without these outliers removed. Plaque lesions are compared against the other types of psoriasis studied. The statistical tests are compared across three sample sizes: the original 90 samples, a first reduction that removes outliers beyond a 2 standard deviation distance (2SD), and a second reduction that removes outliers beyond a 1 standard deviation distance (1SD). Quantification of the image data through the normal/direct and the differential variants of the conventional reflectance method is considered. Performance is assessed from error plots with 95% confidence intervals and from the inference T-tests applied. The statistical test outcomes show that the B component of the conventional differential method can distinctively classify plaque lesions from the other psoriasis groups, consistent with the error plot findings, with an improvement in p-value greater than 0.5.

  5. Assessment of Problem-Based Learning in the Undergraduate Statistics Course

    ERIC Educational Resources Information Center

    Karpiak, Christie P.

    2011-01-01

    Undergraduate psychology majors (N = 51) at a mid-sized private university took a statistics examination on the first day of the research methods course, a course for which a grade of "C" or higher in statistics is a prerequisite. Students who had taken a problem-based learning (PBL) section of the statistics course (n = 15) were compared to those…

  6. Regression modeling of particle size distributions in urban storm water: advancements through improved sample collection methods

    USGS Publications Warehouse

    Fienen, Michael N.; Selbig, William R.

    2012-01-01

    A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. This statistical modeling technique was compared against results from traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.

  7. Class Size.

    ERIC Educational Resources Information Center

    Ellis, Thomas I.

    1985-01-01

    After a brief introduction identifying current issues and trends in research on class size, this brochure reviews five recent studies bearing on the relationship of class size to educational effectiveness. Part 1 is a review of two interrelated and highly controversial "meta-analyses" or statistical integrations of research findings on…

  8. Distribution of the two-sample t-test statistic following blinded sample size re-estimation.

    PubMed

    Lu, Kaifeng

    2016-05-01

    We consider blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis and describe a simulation algorithm for evaluating the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and the final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
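
    A simulation in the spirit of the procedure above can be sketched as follows. The planning parameters are hypothetical, and for brevity the internal pilot observations are not pooled into the final test, so this illustrates the mechanics rather than the paper's exact distributional treatment.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def one_trial(delta, sigma, n_pilot=40, alpha=0.05, power=0.90, planned_effect=0.5):
    # Internal pilot: data pooled across arms, treatment labels ignored (blinded).
    pilot = np.concatenate([rng.normal(0.0, sigma, n_pilot // 2),
                            rng.normal(delta, sigma, n_pilot // 2)])
    s2 = pilot.var(ddof=1)  # simple one-sample variance estimator
    z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
    n_arm = max(n_pilot // 2, int(np.ceil(2 * s2 * (z / planned_effect) ** 2)))
    a = rng.normal(0.0, sigma, n_arm)
    b = rng.normal(delta, sigma, n_arm)
    return stats.ttest_ind(a, b).pvalue < alpha

# Empirical type I error under the null (delta = 0).
rejections = [one_trial(delta=0.0, sigma=1.0) for _ in range(5000)]
print(f"empirical type I error: {np.mean(rejections):.4f}")
```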

  9. 7 CFR 3550.10 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... adjusted for household size) for the county or Metropolitan Statistical Area where the property is or will... excess of 10,000 but not in excess of 20,000, is not contained within a Metropolitan Statistical Area..., 1990, (even if within a Metropolitan Statistical Area), with a population exceeding 10,000, but not in...

  10. Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size

    ERIC Educational Resources Information Center

    Shieh, Gwowen

    2015-01-01

    Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…

  11. A Simple Effect Size Estimator for Single Case Designs Using WinBUGS

    ERIC Educational Resources Information Center

    Rindskopf, David; Shadish, William; Hedges, Larry

    2012-01-01

    Data from single case designs (SCDs) have traditionally been analyzed by visual inspection rather than statistical models. As a consequence, effect sizes have been of little interest. Lately, some effect-size estimators have been proposed, but most are either (i) nonparametric, and/or (ii) based on an analogy incompatible with effect sizes from…

  12. A U-statistics based approach to sample size planning of two-arm trials with discrete outcome criterion aiming to establish either superiority or noninferiority.

    PubMed

    Wellek, Stefan

    2017-02-28

    In current practice, the most frequently applied approach to the handling of ties in the Mann-Whitney-Wilcoxon (MWW) test is based on the conditional distribution of the sum of mid-ranks, given the observed pattern of ties. Starting from this conditional version of the testing procedure, a sample size formula was derived and investigated by Zhao et al. (Stat Med 2008). In contrast, the approach we pursue here is a nonconditional one exploiting explicit representations for the variances of, and the covariance between, the two U-statistics estimators involved in the Mann-Whitney form of the test statistic. The accuracy of both ways of approximating the sample sizes required for attaining a prespecified level of power in the MWW test for superiority with arbitrarily tied data is comparatively evaluated by means of simulation. The key qualitative conclusions to be drawn from these numerical comparisons are as follows. With the sample sizes calculated by means of the respective formula, both versions of the test maintain the level and the prespecified power with about the same degree of accuracy. Despite this equivalence in accuracy, the sample size estimates obtained with the new formula are in many cases markedly lower than those calculated for the conditional test. Perhaps a still more important advantage of the nonconditional approach based on U-statistics is that it can also be adopted for noninferiority trials. Copyright © 2016 John Wiley & Sons, Ltd.
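
    A simulation-based check of MWW power with heavily tied data, which is how the accuracy of the competing sample size formulas is evaluated above; the ordinal outcome distributions are hypothetical, and scipy's tie-corrected mannwhitneyu stands in for the exact procedures compared in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def mww_power(n_per_arm, p_control, p_treat, alpha=0.05, n_sim=4000):
    """Simulated power of the MWW test for an ordered categorical outcome."""
    k = len(p_control)
    hits = 0
    for _ in range(n_sim):
        x = rng.choice(k, size=n_per_arm, p=p_control)  # heavily tied data
        y = rng.choice(k, size=n_per_arm, p=p_treat)
        _, p = stats.mannwhitneyu(x, y, alternative="two-sided")
        hits += p < alpha
    return hits / n_sim

# Hypothetical 4-level outcome with a shift toward better categories.
p_c = [0.40, 0.30, 0.20, 0.10]
p_t = [0.25, 0.25, 0.30, 0.20]
for n in (40, 60, 80, 100):
    print(f"n = {n} per arm: power = {mww_power(n, p_c, p_t):.3f}")
```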

  13. Global statistics of microphysical properties of cloud-top ice crystals

    NASA Astrophysics Data System (ADS)

    van Diedenhoven, B.; Fridlind, A. M.; Cairns, B.; Ackerman, A. S.; Riedi, J.

    2017-12-01

    Ice crystals in clouds are highly complex. Their sizes, macroscale shape (i.e., habit), mesoscale shape (i.e., aspect ratio of components) and microscale shape (i.e., surface roughness) determine optical properties and affect physical properties such as fall speeds, growth rates and aggregation efficiency. Our current understanding of the formation and evolution of ice crystals under various conditions can be considered poor. Commonly, ice crystal size and shape are related to ambient temperature and humidity, but global observational statistics on the variation of ice crystal size and particularly shape have not been available. Here we show results of a project aiming to infer ice crystal size, shape and scattering properties from a combination of MODIS measurements and POLDER-PARASOL multi-angle polarimetry. The shape retrieval procedure infers the mean aspect ratios of components of ice crystals and the mean microscale surface roughness levels, which are quantifiable parameters that mostly affect the scattering properties, in contrast to "habit". We present global statistics on the variation of ice effective radius, component aspect ratio, microscale surface roughness and scattering asymmetry parameter as a function of cloud top temperature, latitude, location, cloud type, season, etc. Generally, with increasing height, sizes decrease, roughness increases, asymmetry parameters decrease and aspect ratios increase towards unity. Some systematic differences are observed for clouds warmer and colder than the homogeneous freezing level. Uncertainties in the retrievals are discussed. These statistics can be used as observational targets for modeling efforts and to better constrain other satellite remote sensing applications and their uncertainties.

  14. Statistical power and effect sizes of depression research in Japan.

    PubMed

    Okumura, Yasuyuki; Sakamoto, Shinji

    2011-06-01

    Few studies have been conducted on the rationales for using interpretive guidelines for effect size, and most previous statistical power surveys have covered broad research domains. The present study aimed to estimate the statistical power and to obtain realistic target effect sizes of depression research in Japan. We systematically reviewed 18 leading journals of psychiatry and psychology in Japan and identified 974 depression studies mentioned in 935 articles published between 1990 and 2006. In 392 studies, logistic regression analyses revealed that using clinical populations was independently associated with a statistical power of <0.80 (odds ratio 5.9, 95% confidence interval 2.9-12.0) and of <0.50 (odds ratio 4.9, 95% confidence interval 2.3-10.5). Of the studies using clinical populations, 80% did not achieve a power of 0.80 or more, and 44% did not achieve a power of 0.50 or more, to detect medium population effect sizes. A predictive model for the proportion of variance explained was developed using a linear mixed-effects model and then used to obtain realistic target effect sizes for defined study characteristics. In the face of a real difference or correlation in the population, many depression researchers are less likely to give a valid result than simply tossing a coin. It is important to educate depression researchers in order to enable them to conduct an a priori power analysis. © 2011 The Authors. Psychiatry and Clinical Neurosciences © 2011 Japanese Society of Psychiatry and Neurology.
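
    The a priori power analysis recommended above takes only a few lines; the medium effect size (d = 0.5) and the group sizes are illustrative, and statsmodels' standard two-sample power routines are assumed.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Power to detect a medium standardized effect (d = 0.5) at alpha = 0.05.
for n_per_group in (20, 40, 64, 100):
    power = analysis.power(effect_size=0.5, nobs1=n_per_group,
                           alpha=0.05, ratio=1.0)
    print(f"n = {n_per_group} per group -> power = {power:.2f}")

# Required n per group for 80% power at d = 0.5:
n_req = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05)
print(f"required n per group: {n_req:.1f}")
```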

  15. Global Statistics of Microphysical Properties of Cloud-Top Ice Crystals

    NASA Technical Reports Server (NTRS)

    Van Diedenhoven, Bastiaan; Fridlind, Ann; Cairns, Brian; Ackerman, Andrew; Riedl, Jerome

    2017-01-01

    Ice crystals in clouds are highly complex. Their sizes, macroscale shape (i.e., habit), mesoscale shape (i.e., aspect ratio of components) and microscale shape (i.e., surface roughness) determine optical properties and affect physical properties such as fall speeds, growth rates and aggregation efficiency. Our current understanding of the formation and evolution of ice crystals under various conditions can be considered poor. Commonly, ice crystal size and shape are related to ambient temperature and humidity, but global observational statistics on the variation of ice crystal size and particularly shape have not been available. Here we show results of a project aiming to infer ice crystal size, shape and scattering properties from a combination of MODIS measurements and POLDER-PARASOL multi-angle polarimetry. The shape retrieval procedure infers the mean aspect ratios of components of ice crystals and the mean microscale surface roughness levels, which are quantifiable parameters that mostly affect the scattering properties, in contrast to "habit". We present global statistics on the variation of ice effective radius, component aspect ratio, microscale surface roughness and scattering asymmetry parameter as a function of cloud top temperature, latitude, location, cloud type, season, etc. Generally, with increasing height, sizes decrease, roughness increases, asymmetry parameters decrease and aspect ratios increase towards unity. Some systematic differences are observed for clouds warmer and colder than the homogeneous freezing level. Uncertainties in the retrievals are discussed. These statistics can be used as observational targets for modeling efforts and to better constrain other satellite remote sensing applications and their uncertainties.

  16. Statistical power analysis in wildlife research

    USGS Publications Warehouse

    Steidl, R.J.; Hayes, J.P.

    1997-01-01

    Statistical power analysis can be used to increase the efficiency of research efforts and to clarify research results. Power analysis is most valuable in the design or planning phases of research efforts. Such prospective (a priori) power analyses can be used to guide research design and to estimate the number of samples necessary to achieve a high probability of detecting biologically significant effects. Retrospective (a posteriori) power analysis has been advocated as a method to increase information about hypothesis tests that were not rejected. However, estimating power for tests of null hypotheses that were not rejected with the effect size observed in the study is incorrect; these power estimates will always be ≤0.50 when bias adjusted and have no relation to true power. Therefore, retrospective power estimates based on the observed effect size for hypothesis tests that were not rejected are misleading; retrospective power estimates are only meaningful when based on effect sizes other than the observed effect size, such as those effect sizes hypothesized to be biologically significant. Retrospective power analysis can be used effectively to estimate the number of samples or effect size that would have been necessary for a completed study to have rejected a specific null hypothesis. Simply presenting confidence intervals can provide additional information about null hypotheses that were not rejected, including information about the size of the true effect and whether or not there is adequate evidence to 'accept' a null hypothesis as true. We suggest that (1) statistical power analyses be routinely incorporated into research planning efforts to increase their efficiency, (2) confidence intervals be used in lieu of retrospective power analyses for null hypotheses that were not rejected to assess the likely size of the true effect, (3) minimum biologically significant effect sizes be used for all power analyses, and (4) if retrospective power estimates are to be reported, then the α-level, effect sizes, and sample sizes used in calculations must also be reported.

  17. An assessment of the effects of cell size on AGNPS modeling of watershed runoff

    USGS Publications Warehouse

    Wu, S.-S.; Usery, E.L.; Finn, M.P.; Bosch, D.D.

    2008-01-01

    This study investigates the changes in simulated watershed runoff from the Agricultural NonPoint Source (AGNPS) pollution model as a function of model input cell size resolution for eight different cell sizes (30 m, 60 m, 120 m, 210 m, 240 m, 480 m, 960 m, and 1920 m) for the Little River Watershed (Georgia, USA). Overland cell runoff (area-weighted cell runoff), total runoff volume, clustering statistics, and hot spot patterns were examined for the different cell sizes and trends identified. Total runoff volumes decreased with increasing cell size. Using data sets of 210-m cell size or smaller in conjunction with a representative watershed boundary allows one to model the runoff volumes to within 0.2 percent accuracy. The runoff clustering statistics decrease with increasing cell size; a cell size of 960 m or smaller is necessary to indicate significant high-runoff clustering. Runoff hot spot areas show a decreasing trend with increasing cell size; a cell size of 240 m or smaller is required to detect important hot spots. Conclusions regarding cell size effects on runoff estimation cannot be applied to local watershed areas due to the inconsistent changes of runoff volume with cell size, but optimal cell sizes for clustering and hot spot analyses are applicable to local watershed areas due to the consistent trends.

  18. Four hundred or more participants needed for stable contingency table estimates of clinical prediction rule performance.

    PubMed

    Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan

    2017-02-01

    To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
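
    The stabilization around 400 participants can be illustrated by repeatedly subsampling a large cohort; the prevalence and rule accuracy below are hypothetical, and only sensitivity is tracked for brevity.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical cohort: outcome prevalence 25%; the rule is positive in 70%
# of cases with the outcome and 30% of cases without it.
N = 8000
outcome = rng.random(N) < 0.25
rule_pos = np.where(outcome, rng.random(N) < 0.70, rng.random(N) < 0.30)

def sensitivity(idx):
    o, r = outcome[idx], rule_pos[idx]
    return (r & o).sum() / o.sum()

for n in (100, 200, 400, 800):
    draws = [sensitivity(rng.choice(N, n, replace=False)) for _ in range(100)]
    print(f"n = {n}: sensitivity {np.mean(draws):.3f} (SD {np.std(draws):.3f})")
```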

  19. Statistics 101 for Radiologists.

    PubMed

    Anvari, Arash; Halpern, Elkan F; Samir, Anthony E

    2015-10-01

    Diagnostic tests have wide clinical applications, including screening, diagnosis, measuring treatment effect, and determining prognosis. Interpreting diagnostic test results requires an understanding of key statistical concepts used to evaluate test efficacy. This review explains descriptive statistics and discusses probability, including mutually exclusive and independent events and conditional probability. In the inferential statistics section, a statistical perspective on study design is provided, together with an explanation of how to select appropriate statistical tests. Key concepts in recruiting study samples are discussed, including representativeness and random sampling. Variable types are defined, including predictor, outcome, and covariate variables, and the relationship of these variables to one another. In the hypothesis testing section, we explain how to determine if observed differences between groups are likely to be due to chance. We explain type I and II errors, statistical significance, and study power, followed by an explanation of effect sizes and how confidence intervals can be used to generalize observed effect sizes to the larger population. Statistical tests are explained in four categories: t tests and analysis of variance, proportion analysis tests, nonparametric tests, and regression techniques. We discuss sensitivity, specificity, accuracy, receiver operating characteristic analysis, and likelihood ratios. Measures of reliability and agreement, including κ statistics, intraclass correlation coefficients, and Bland-Altman graphs and analysis, are introduced. © RSNA, 2015.

  20. [An investigation of the statistical power of the effect size in randomized controlled trials for the treatment of patients with type 2 diabetes mellitus using Chinese medicine].

    PubMed

    Ma, Li-Xin; Liu, Jian-Ping

    2012-01-01

    To investigate whether the power of the effect size was based on adequate sample sizes in randomized controlled trials (RCTs) of Chinese medicine for the treatment of patients with type 2 diabetes mellitus (T2DM). The China Knowledge Resource Integrated Database (CNKI), VIP Database for Chinese Technical Periodicals (VIP), Chinese Biomedical Database (CBM), and Wanfang Data were systematically searched using terms such as "Xiaoke" or diabetes, Chinese herbal medicine, patent medicine, traditional Chinese medicine, randomized, controlled, blinded, and placebo-controlled. Inclusion was limited to trials with an intervention course of 3 months or longer in order to identify information on outcome assessment and sample size. Data collection forms were made according to the checklists found in the CONSORT statement, and independent double data extraction was performed on all included trials. The statistical power of the effect size for each RCT was assessed using sample size calculation equations. (1) A total of 207 RCTs were included: 111 superiority trials and 96 non-inferiority trials. (2) Among the 111 superiority trials, the fasting plasma glucose (FPG) and glycosylated hemoglobin (HbA1c) outcome measures were reported in 9% and 12% of the RCTs, respectively, with a sample size > 150 per trial. For HbA1c, only 10% of the RCTs had more than 80% power; for FPG, 23% did. (3) In the 96 non-inferiority trials, FPG and HbA1c were reported in 31% and 36% of the RCTs, respectively, with a sample size > 150. For HbA1c, only 36% of the RCTs had more than 80% power; for FPG, only 27% did. The sample sizes used for statistical analysis were distressingly low, and most RCTs did not achieve 80% power. In order to obtain sufficient statistical power, it is recommended that clinical trials first establish a clear research objective and hypothesis, choose a scientific and evidence-based study design and outcome measurements, and calculate the required sample size to ensure a precise research conclusion.

  1. Modeling of LEO Orbital Debris Populations in Centimeter and Millimeter Size Regimes

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Hill, . M.; Horstman, M.; Krisko, P. H.; Liou, J.-C.; Matney, M.; Stansbery, E. G.

    2010-01-01

    The building of the NASA Orbital Debris Engineering Model, whether ORDEM2000 or its recently updated version ORDEM2010, uses as its foundation a number of model debris populations, each truncated at a minimum object size ranging from 10 microns to 1 m. This paper discusses the development of the ORDEM2010 model debris populations in LEO (low Earth orbit), focusing on the centimeter (smaller than 10 cm) and millimeter size regimes. Primary data sets used in the statistical derivation of the cm- and mm-size model populations are from the Haystack radar operated in a staring mode. Unlike cataloged objects of sizes greater than approximately 10 cm, ground-based radars monitor smaller-size debris only in a statistical manner instead of tracking every piece. The mono-static Haystack radar can detect debris as small as approximately 5 mm at moderate LEO altitudes. Estimation of millimeter debris populations (for objects smaller than approximately 6 mm) rests largely on Goldstone radar measurements; the bi-static Goldstone radar can detect 2- to 3-mm objects. The modeling of the cm- and mm-debris populations follows the general approach to developing the other ORDEM2010-required model populations for the various components and types of debris. It relies on appropriate reference populations to provide the necessary prior information on the orbital structures and other important characteristics of the debris objects. NASA's LEO-to-GEO Environment Debris (LEGEND) model is capable of furnishing such reference populations in the desired size range. A Bayesian statistical inference process, commonly adopted in ORDEM2010 model-population derivations, changes the a priori distribution into an a posteriori distribution and thus refines the reference populations in terms of the data. This paper describes the key elements and major steps in the statistical derivations of the cm- and mm-size debris populations and presents results. Due to the lack of data for sizes near 1 mm, the model populations of 1- to 3.16-mm objects are an empirical extension from larger debris. The extension takes into account the results of micro-debris (10 microns to 1 mm) population modeling based on shuttle impact data, in the hope of making a smooth transition between the micron and millimeter size regimes. This paper also includes a brief discussion of issues and potential future work concerning the analysis and interpretation of Goldstone radar data.

  2. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    PubMed

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination distance θ between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2θ)^4.

  3. Enterprise size and return to work after stroke.

    PubMed

    Hannerz, Harald; Ferm, Linnea; Poulsen, Otto M; Pedersen, Betina Holbæk; Andersen, Lars L

    2012-12-01

    It has been hypothesised that return to work rates among sick-listed workers increase with enterprise size. The aim of the present study was to estimate the effect of enterprise size on the odds of returning to work among previously employed stroke patients in Denmark, 2000-2006. We used a prospective design with a 2 year follow-up period. The study population consisted of 13,178 stroke patients divided into four enterprise size categories, according to the place of their employment prior to the stroke: micro (1-9 employees), small (10-49 employees), medium (50-249 employees) and large (>250 employees). The analysis was based on nationwide data on enterprise size from Statistics Denmark merged with data from the Danish occupational hospitalisation register. We found a statistically significant association (p = 0.034); each increase in enterprise size category was followed by an increase in the estimated odds of returning to work. The chances of returning to work after stroke increase as the size of the enterprise increases. Preventive efforts and research aimed at finding ways of mitigating the effect are warranted.

  4. Equivalent statistics and data interpretation.

    PubMed

    Francis, Gregory

    2017-08-01

    Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
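
    The equivalence for the two-sample t test with known sample sizes can be made concrete; the numbers are illustrative, and the variance formula for Cohen's d is the usual large-sample approximation.

```python
import numpy as np
from scipy import stats

def equivalents(t, n1, n2):
    """Convert a two-sample t statistic (known n's) into equivalent summaries."""
    df = n1 + n2 - 2
    p = 2 * stats.t.sf(abs(t), df)        # two-sided p value
    d = t * np.sqrt(1 / n1 + 1 / n2)      # Cohen's d
    # Approximate 95% CI for d (large-sample variance approximation).
    se_d = np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * df))
    return p, d, (d - 1.96 * se_d, d + 1.96 * se_d)

p, d, ci = equivalents(t=2.3, n1=30, n2=30)
print(f"p = {p:.4f}, d = {d:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

    All three summaries are computed from the same (t, n1, n2) triple, which is the sense in which they carry the same information about the data.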

  5. Trial Sequential Analysis in systematic reviews with meta-analysis.

    PubMed

    Wetterslev, Jørn; Jakobsen, Janus Christian; Gluud, Christian

    2017-03-06

    Most meta-analyses in systematic reviews, including Cochrane ones, do not have sufficient statistical power to detect or refute even large intervention effects. This is why a meta-analysis ought to be regarded as an interim analysis on its way towards a required information size. The results of the meta-analyses should relate the total number of randomised participants to the estimated required meta-analytic information size accounting for statistical diversity. When the number of participants and the corresponding number of trials in a meta-analysis are insufficient, the use of the traditional 95% confidence interval or the 5% statistical significance threshold will lead to too many false positive conclusions (type I errors) and too many false negative conclusions (type II errors). We developed a methodology for interpreting meta-analysis results, using generally accepted, valid evidence on how to adjust thresholds for significance in randomised clinical trials when the required sample size has not been reached. The Lan-DeMets trial sequential monitoring boundaries in Trial Sequential Analysis offer adjusted confidence intervals and restricted thresholds for statistical significance when the diversity-adjusted required information size and the corresponding number of required trials for the meta-analysis have not been reached. Trial Sequential Analysis provides a frequentistic approach to control both type I and type II errors. We define the required information size and the corresponding number of required trials in a meta-analysis and the diversity (D²) measure of heterogeneity. We explain the reasons for using Trial Sequential Analysis of meta-analysis when the actual information size fails to reach the required information size. We present examples drawn from traditional meta-analyses using unadjusted naïve 95% confidence intervals and 5% thresholds for statistical significance. Spurious conclusions in systematic reviews with traditional meta-analyses can be reduced using Trial Sequential Analysis. Several empirical studies have demonstrated that the Trial Sequential Analysis provides better control of type I errors and of type II errors than the traditional naïve meta-analysis. Trial Sequential Analysis represents analysis of meta-analytic data, with transparent assumptions, and better control of type I and type II errors than the traditional meta-analysis using naïve unadjusted confidence intervals.
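
    One common choice for the Lan-DeMets boundaries mentioned above is an O'Brien-Fleming-type spending function, sketched below; the information fractions are illustrative, and this shows the alpha-spending step only, not a full Trial Sequential Analysis.

```python
import numpy as np
from scipy import stats

def obf_alpha_spent(t, alpha=0.05):
    """O'Brien-Fleming-type spending: alpha*(t) = 2 * (1 - Phi(z_{alpha/2} / sqrt(t)))."""
    z = stats.norm.ppf(1 - alpha / 2)
    return 2.0 * (1.0 - stats.norm.cdf(z / np.sqrt(t)))

# Cumulative alpha spent at increasing fractions of the required information size.
for t in (0.25, 0.50, 0.75, 1.00):
    print(f"information fraction {t:.2f}: cumulative alpha = {obf_alpha_spent(t):.5f}")
```

    Very little alpha is spent early, which is why interim meta-analyses far short of the required information size face much stricter significance thresholds than the naïve 5%.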

  6. Reexamining Sample Size Requirements for Multivariate, Abundance-Based Community Research: When Resources are Limited, the Research Does Not Have to Be.

    PubMed

    Forcino, Frank L; Leighton, Lindsey R; Twerdy, Pamela; Cahill, James F

    2015-01-01

    Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 newly created. Using meta-analytic and simulation-based approaches, this paper seeks (1) to determine the minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research, and (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.

  7. Reporting Point and Interval Estimates of Effect-Size for Planned Contrasts: Fixed within Effect Analyses of Variance

    ERIC Educational Resources Information Center

    Robey, Randall R.

    2004-01-01

    The purpose of this tutorial is threefold: (a) review the state of statistical science regarding effect-sizes, (b) illustrate the importance of effect-sizes for interpreting findings in all forms of research and particularly for results of clinical-outcome research, and (c) demonstrate just how easily a criterion on reporting effect-sizes in…

  8. Do Effect-Size Measures Measure up?: A Brief Assessment

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Levin, Joel R.; Leech, Nancy L.

    2003-01-01

    Because of criticisms leveled at statistical hypothesis testing, some researchers have argued that measures of effect size should replace the significance-testing practice. We contend that although effect-size measures have logical appeal, they are also associated with a number of limitations that may result in problematic interpretations of them…

  9. Effect-Size Measures and Meta-Analytic Thinking in Counseling Psychology Research

    ERIC Educational Resources Information Center

    Henson, Robin K.

    2006-01-01

    Effect sizes are critical to result interpretation and synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret the effect sizes…

  10. Effects of Subbasin Size on Topographic Characteristics and Simulated Flow Paths in Sleepers River Watershed, Vermont

    NASA Astrophysics Data System (ADS)

    Wolock, David M.

    1995-08-01

    The effects of subbasin size on topographic characteristics and simulated flow paths were determined for the 111.5-km² Sleepers River Research Watershed in Vermont using the watershed model TOPMODEL. Topography is parameterized in TOPMODEL as the spatial and statistical distribution of the index ln(a/tan B), where ln is the Napierian (natural) logarithm, a is the upslope area per unit contour length, and tan B is the slope gradient. The mean, variance, and skew of the ln(a/tan B) distribution were computed for several sets of nested subbasins (0.05 to 111.5 km²) along streams in the watershed and used as input to TOPMODEL. In general, the statistics of the ln(a/tan B) distribution and the simulated percentage of overland flow in total streamflow increased rapidly for some nested subbasins and decreased rapidly for others as subbasin size increased from 0.05 to 1 km², generally increased up to a subbasin size of 5 km², and remained relatively constant for subbasin sizes greater than 5 km². Differences in simulated flow paths among subbasins of all sizes (0.05 to 111.5 km²) were caused by differences in the statistics of the ln(a/tan B) distribution, not by differences in the explicit spatial arrangement of ln(a/tan B) values within the subbasins. Analysis of streamflow chemistry data from the Neversink River watershed in southeastern New York supports the hypothesis that subbasin size affects flow-path characteristics.
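
    Computing the three ln(a/tan B) statistics used as TOPMODEL input is straightforward once grids of a and tan B are available; the grids below are synthetic stand-ins, not Sleepers River data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Hypothetical grids for one subbasin: upslope area per unit contour
# length a (m) and slope gradient tan B.
a = rng.lognormal(mean=5.0, sigma=1.2, size=(200, 200))
tanb = np.clip(rng.normal(0.15, 0.05, (200, 200)), 0.01, None)

ti = np.log(a / tanb)  # the ln(a/tan B) topographic index
print(f"mean = {ti.mean():.2f}, variance = {ti.var():.2f}, "
      f"skew = {stats.skew(ti.ravel()):.2f}")
```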

  11. Substance-dependence rehab treatment in Thailand: a meta analysis.

    PubMed

    Verachai, Viroj; Kittipichai, Wirin; Konghom, Suwapat; Lukanapichonchut, Lumsum; Sinlapasacran, Narong; Kimsongneun, Nipa; Rergarun, Prachern; Doungnimit, Amawasee

    2009-12-01

    To synthesize the substance-dependence research focusing on the rehab treatment phase. Several criteria were used to select studies for meta-analysis: first, the research must have focused on the rehab period of substance-dependence treatment; second, only quantitative studies that used statistics from which effect sizes could be calculated were selected; and third, all studies came from Thai libraries and were conducted during 1997-2006. The instrument used for data collection comprised two parts: the first collected general information about the studies, including the crucial statistics and test statistics, and the second assessed the quality of the studies. Synthesis of 32 separate studies yielded 323 effect sizes computed in terms of the correlation coefficient "r". The psychology-approach rehab programs had a larger effect size than the network approach (p < 0.05). Additionally, quasi-experimental studies had larger effect sizes than correlation studies (p < 0.05). Among the quasi-experimental studies, TCs revealed the highest effect size (r = 0.76); among the correlation studies, the motivation program revealed the highest effect size (r = 0.84). The substance-use rehab treatment programs in Thailand that revealed high effect sizes should be incorporated into current programs. Moreover, studies focusing on the rehab phase should be synthesized every 5-10 years in order to integrate new concepts into the development of future substance-dependence rehab treatment programs, especially at the research units of the Drug Dependence Treatment Institute/Centers in Thailand.

  12. Item Analysis Appropriate for Domain-Referenced Classroom Testing. (Project Technical Report Number 1).

    ERIC Educational Resources Information Center

    Nitko, Anthony J.; Hsu, Tse-chi

    Item analysis procedures appropriate for domain-referenced classroom testing are described. A conceptual framework within which item statistics can be considered and promising statistics in light of this framework are presented. The sampling fluctuations of the more promising item statistics for sample sizes comparable to the typical classroom…

  13. Mini-Digest of Education Statistics, 2008. NCES 2009-021

    ERIC Educational Resources Information Center

    Snyder, Thomas D.

    2009-01-01

    This publication is the 14th edition of the "Mini-Digest of Education Statistics," a pocket-sized compilation of statistical information covering the broad field of American education from kindergarten through graduate school. The "Mini-Digest" is designed as an easy reference for materials found in much greater detail in the…

  14. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power

    PubMed Central

    Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943

  15. Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power.

    PubMed

    Miciak, Jeremy; Taylor, W Pat; Stuebing, Karla K; Fletcher, Jack M; Vaughn, Sharon

    2016-01-01

    An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%-155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%-71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power.
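
    The direct-truncation effect on the pretest-posttest correlation is easy to reproduce by simulation; the unrestricted correlation of 0.7 and the 20% selection fraction are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(9)
rho, n = 0.7, 100_000  # assumed unrestricted pretest-posttest correlation
pre = rng.standard_normal(n)
post = rho * pre + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)

# Direct truncation: select only the lowest-scoring 20% on the pretest,
# as when a sample is chosen on the basis of poor pretest performance.
sel = pre <= np.quantile(pre, 0.20)
print(f"unrestricted r = {np.corrcoef(pre, post)[0, 1]:.3f}, "
      f"restricted r = {np.corrcoef(pre[sel], post[sel])[0, 1]:.3f}")
```

    The attenuated correlation shrinks the variance the pretest covariate can explain, which is what drives the large sample size increases reported above.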

  16. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and less than a third of the articles reported on data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, reporting quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.

  17. GLIMMPSE Lite: Calculating Power and Sample Size on Smartphone Devices

    PubMed Central

    Munjal, Aarti; Sakhadeo, Uttara R.; Muller, Keith E.; Glueck, Deborah H.; Kreidler, Sarah M.

    2014-01-01

    Researchers seeking to develop complex statistical applications for mobile devices face a common set of difficult implementation issues. In this work, we discuss general solutions to the design challenges. We demonstrate the utility of the solutions for a free mobile application designed to provide power and sample size calculations for univariate, one-way analysis of variance (ANOVA), GLIMMPSE Lite. Our design decisions provide a guide for other scientists seeking to produce statistical software for mobile platforms. PMID:25541688
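
    The univariate one-way ANOVA calculation that such an app performs can be approximated with statsmodels; the effect size (Cohen's f = 0.25) and group count are illustrative, and this is not the GLIMMPSE code itself.

```python
from statsmodels.stats.power import FTestAnovaPower

anova = FTestAnovaPower()
# Total N for 80% power in a one-way ANOVA with k = 3 groups, f = 0.25.
n_total = anova.solve_power(effect_size=0.25, k_groups=3, alpha=0.05, power=0.80)
print(f"total N for 80% power: {n_total:.0f}")

# Power achieved at a fixed total sample size of 90.
print(f"power at N = 90: {anova.power(effect_size=0.25, nobs=90, alpha=0.05, k_groups=3):.2f}")
```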

  18. Combining censored and uncensored data in a U-statistic: design and sample size implications for cell therapy research.

    PubMed

    Moyé, Lemuel A; Lai, Dejian; Jing, Kaiyan; Baraniuk, Mary Sarah; Kwak, Minjung; Penn, Marc S; Wu, Colon O

    2011-01-01

    The assumptions that anchor large clinical trials are rooted in smaller, Phase II studies. In addition to specifying the target population, intervention delivery, and patient follow-up duration, physician-scientists who design these Phase II studies must select the appropriate response variables (endpoints). However, endpoint measures can be problematic. If the endpoint assesses the change in a continuous measure over time, then the occurrence of an intervening significant clinical event (SCE), such as death, can preclude the follow-up measurement. In addition, the ideal continuous endpoint measurement may be contraindicated in a fraction of the study patients, requiring a less precise substitute in this subset of participants. A score function based on the U-statistic can address both issues: 1) intercurrent SCEs, and 2) response variable ascertainment using measurements of differing precision. The scoring statistic is easy to apply, clinically relevant, and provides flexibility for the investigators' prospective design decisions. Sample size and power formulations for this statistic are provided as functions of clinical event rates and effect size estimates that are easy for investigators to identify and discuss. Examples are provided from current cardiovascular cell therapy research.
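
    A toy pairwise score in the general family the abstract describes (compare on the clinical event first, then on the continuous change when neither pair member had an event); the data, scoring rules, and tie handling are hypothetical simplifications, not the paper's exact construction.

```python
import numpy as np

def pairwise_score(trt, ctl):
    """Average pairwise score over all treatment-control pairs, in [-1, 1]."""
    u = 0.0
    for e_i, y_i in trt:          # (SCE indicator, continuous change)
        for e_j, y_j in ctl:
            if e_i != e_j:        # an SCE trumps the continuous measure
                u += 1.0 if (e_j and not e_i) else -1.0
            elif not e_i:         # neither had an SCE: compare changes
                u += np.sign(y_i - y_j)
            # both had an SCE: treated as a tie, contributes 0
    return u / (len(trt) * len(ctl))

trt = [(0, 5.1), (0, 3.2), (1, np.nan), (0, 4.4)]  # nan: follow-up precluded
ctl = [(0, 2.0), (1, np.nan), (0, 2.9), (0, 1.5)]
print(f"score = {pairwise_score(trt, ctl):+.2f}")
```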

  19. A Statistical Skull Geometry Model for Children 0-3 Years Old

    PubMed Central

    Li, Zhigang; Park, Byoung-Keon; Liu, Weiguo; Zhang, Jinhuan; Reed, Matthew P.; Rupp, Jonathan D.; Hoff, Carrie N.; Hu, Jingwen

    2015-01-01

    Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0–3 YO population. In this study, head CT scans from fifty-six 0–3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models. PMID:25992998

  20. A statistical skull geometry model for children 0-3 years old.

    PubMed

    Li, Zhigang; Park, Byoung-Keon; Liu, Weiguo; Zhang, Jinhuan; Reed, Matthew P; Rupp, Jonathan D; Hoff, Carrie N; Hu, Jingwen

    2015-01-01

    Head injury is the leading cause of fatality and long-term disability for children. Pediatric heads change rapidly in both size and shape during growth, especially for children under 3 years old (YO). To accurately assess the head injury risks for children, it is necessary to understand the geometry of the pediatric head and how morphologic features influence injury causation within the 0-3 YO population. In this study, head CT scans from fifty-six 0-3 YO children were used to develop a statistical model of pediatric skull geometry. Geometric features important for injury prediction, including skull size and shape, skull thickness and suture width, along with their variations among the sample population, were quantified through a series of image and statistical analyses. The size and shape of the pediatric skull change significantly with age and head circumference. The skull thickness and suture width vary with age, head circumference and location, which will have important effects on skull stiffness and injury prediction. The statistical geometry model developed in this study can provide a geometrical basis for future development of child anthropomorphic test devices and pediatric head finite element models.

  1. Statistical Estimation of Orbital Debris Populations with a Spectrum of Object Size

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Horstman, M.; Krisko, P. H.; Liou, J.-C.; Matney, M.; Stansbery, E. G.; Stokely, C. L.; Whitlock, D.

    2008-01-01

    Orbital debris is a real concern for the safe operation of satellites. In general, the hazard of debris impact is a function of the size and spatial distributions of the debris populations. To describe and characterize the debris environment as reliably as possible, the current NASA Orbital Debris Engineering Model (ORDEM2000) is being upgraded to a new version based on new and better quality data. The data-driven ORDEM model covers a wide range of object sizes, from 10 microns to greater than 1 meter. This paper reviews the statistical process for the estimation of the debris populations in the new ORDEM upgrade, and discusses the representation of large-size (≥ 1 m and ≥ 10 cm) populations by SSN catalog objects and the validation of the statistical approach. It also presents results for the populations with sizes ≥ 3.3 cm, ≥ 1 cm, ≥ 100 micrometers, and ≥ 10 micrometers. The orbital debris populations used in the new version of ORDEM are inferred from data based upon appropriate reference (or benchmark) populations instead of the binning of the multi-dimensional orbital-element space. This paper describes all of the major steps used in the population-inference procedure for each size range. Detailed discussions on data analysis, parameter definition, the correlation between parameters and data, and uncertainty assessment are included.

  2. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study.

    PubMed

    Nour-Eldein, Hebatallah

    2016-01-01

    Given most physicians' limited statistical knowledge, it is not uncommon to find statistical errors in research articles. To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles published between 2010 and 2014. This was a cross-sectional study. All 66 FM research articles published over 5 years by FM authors affiliated with Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of the FM articles. Inferential methods were recorded in 62/66 (93.9%) of the FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables 38/66 (57.6%), regression (logistic, linear) 26/66 (39.4%), and t-tests 17/66 (25.8%) were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the deficiencies were: no prior sample size calculation, 19/60 (31.7%); application of the wrong statistical test, 17/60 (28.3%); incomplete documentation of statistics, 59/60 (98.3%); reporting a P value without the test statistic, 32/60 (53.3%); no confidence interval reported with effect size measures, 12/60 (20.0%); use of the mean (standard deviation) to describe ordinal/nonnormal data, 8/60 (13.3%); and interpretation errors, mainly conclusions unsupported by the study data, 5/60 (8.3%). Inferential statistics were used in the majority of FM articles. Data analysis and reporting statistics are areas for improvement in FM research articles.

  3. Statistical methods and errors in family medicine articles between 2010 and 2014-Suez Canal University, Egypt: A cross-sectional study

    PubMed Central

    Nour-Eldein, Hebatallah

    2016-01-01

    Background: Given the limited statistical knowledge of most physicians, it is not uncommon to find statistical errors in research articles. Objectives: To determine the statistical methods used and to assess the statistical errors in family medicine (FM) research articles published between 2010 and 2014. Methods: This was a cross-sectional study. All 66 FM research articles published over 5 years by FM authors affiliated with Suez Canal University were screened by the researcher between May and August 2015. Types and frequencies of statistical methods were reviewed in all 66 FM articles. All 60 articles with identified inferential statistics were examined for statistical errors and deficiencies. A comprehensive 58-item checklist based on statistical guidelines was used to evaluate the statistical quality of FM articles. Results: Inferential methods were recorded in 62/66 (93.9%) of FM articles. Advanced analyses were used in 29/66 (43.9%). Contingency tables, 38/66 (57.6%); regression (logistic, linear), 26/66 (39.4%); and t-tests, 17/66 (25.8%), were the most commonly used inferential tests. Within the 60 FM articles with identified inferential statistics, the errors and deficiencies were: no prior sample size calculation, 19/60 (31.7%); application of the wrong statistical test, 17/60 (28.3%); incomplete documentation of statistics, 59/60 (98.3%); reporting a P value without the test statistic, 32/60 (53.3%); no confidence interval reported with effect size measures, 12/60 (20.0%); use of the mean (standard deviation) to describe ordinal/non-normal data, 8/60 (13.3%); and errors of interpretation, mainly conclusions unsupported by the study data, 5/60 (8.3%). Conclusion: Inferential statistics were used in the majority of FM articles. Data analysis and the reporting of statistics are areas for improvement in FM research articles. PMID:27453839

  4. Assessing Statistical Competencies in Clinical and Translational Science Education: One Size Does Not Fit All

    PubMed Central

    Lindsell, Christopher J.; Welty, Leah J.; Mazumdar, Madhu; Thurston, Sally W.; Rahbar, Mohammad H.; Carter, Rickey E.; Pollock, Bradley H.; Cucchiara, Andrew J.; Kopras, Elizabeth J.; Jovanovic, Borko D.; Enders, Felicity T.

    2014-01-01

    Introduction: Statistics is an essential training component for a career in clinical and translational science (CTS). Given the increasing complexity of statistics, learners may have difficulty selecting appropriate courses. Our question was: what depth of statistical knowledge do different CTS learners require? Methods: For three types of CTS learners (principal investigator, co-investigator, informed reader of the literature), each with different backgrounds in research (no previous research experience, reader of the research literature, previous research experience), 18 experts in biostatistics, epidemiology, and research design proposed levels for 21 statistical competencies. Results: Statistical competencies were categorized as fundamental, intermediate, or specialized. CTS learners who intend to become independent principal investigators require more specialized training, while those intending to become informed consumers of the medical literature require more fundamental education. For most competencies, less training was proposed for those with more research background. Discussion: When selecting statistical coursework, the learner's research background and career goal should guide the decision. Some statistical competencies are considered to be more important than others. Baseline knowledge assessments may help learners identify appropriate coursework. Conclusion: Rather than one size fits all, tailoring education to baseline knowledge, learner background, and future goals increases learning potential while minimizing classroom time. PMID:25212569

  5. "Magnitude-based inference": a statistical review.

    PubMed

    Welsh, Alan H; Knight, Emma J

    2015-04-01

    We consider "magnitude-based inference" and its interpretation by examining in detail its use in the problem of comparing two means. We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how "magnitude-based inference" is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. We show that "magnitude-based inference" is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with "magnitude-based inference" and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using "magnitude-based inference," a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis.

  6. Discovery sequence and the nature of low permeability gas accumulations

    USGS Publications Warehouse

    Attanasi, E.D.

    2005-01-01

    There is an ongoing discussion regarding the geologic nature of accumulations that host gas in low-permeability sandstone environments. This note examines the discovery sequence of the accumulations in low permeability sandstone plays that were classified as continuous-type by the U.S. Geological Survey for the 1995 National Oil and Gas Assessment. It compares the statistical character of historical discovery sequences of accumulations associated with continuous-type sandstone gas plays to those of conventional plays. The seven sandstone plays with sufficient data exhibit declining size with sequence order, on average, and in three of the seven the trend is statistically significant. Simulation experiments show that both a skewed endowment size distribution and a discovery process that mimics sampling proportional to size are necessary to generate a discovery sequence that consistently produces a statistically significant negative size order relationship. The empirical findings suggest that discovery sequence could be used to constrain assessed gas in untested areas. The plays examined represent 134 of the 265 trillion cubic feet of recoverable gas assessed in undeveloped areas of continuous-type gas plays in low permeability sandstone environments reported in the 1995 National Assessment. © 2005 International Association for Mathematical Geology.
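
    A minimal sketch of the kind of simulation experiment described: a skewed (here lognormal) endowment of accumulation sizes is "discovered" by sampling proportional to size without replacement, and the size-order trend is tested with a rank correlation. All distributions and parameters are illustrative, not those of the USGS analysis.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)

# Hypothetical skewed endowment of accumulation sizes
sizes = rng.lognormal(mean=3.0, sigma=1.5, size=200)

# Discovery process: sampling proportional to size, without replacement
remaining = sizes.copy()
discovered = []
for _ in range(len(sizes)):
    p = remaining / remaining.sum()
    i = rng.choice(len(sizes), p=p)
    discovered.append(remaining[i])
    remaining[i] = 0.0

# Declining size with discovery order shows up as a negative rank correlation
rho, pval = spearmanr(np.arange(len(discovered)), discovered)
print(f"Spearman rho = {rho:.2f}, p = {pval:.3g}")
```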

  7. Set size manipulations reveal the boundary conditions of perceptual ensemble learning.

    PubMed

    Chetverikov, Andrey; Campana, Gianluca; Kristjánsson, Árni

    2017-11-01

    Recent evidence suggests that observers can grasp patterns of feature variations in the environment with surprising efficiency. During visual search tasks where all distractors are randomly drawn from a certain distribution rather than all being homogeneous, observers are capable of learning highly complex statistical properties of distractor sets. After only a few trials (learning phase), the statistical properties of distributions - mean, variance and crucially, shape - can be learned, and these representations affect search during a subsequent test phase (Chetverikov, Campana, & Kristjánsson, 2016). To assess the limits of such distribution learning, we varied the information available to observers about the underlying distractor distributions by manipulating set size during the learning phase in two experiments. We found that robust distribution learning only occurred for large set sizes. We also used set size to assess whether the learning of distribution properties makes search more efficient. The results reveal how a certain minimum of information is required for learning to occur, thereby delineating the boundary conditions of learning of statistical variation in the environment. However, the benefits of distribution learning for search efficiency remain unclear. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Tooth-size discrepancy: A comparison between manual and digital methods

    PubMed Central

    Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge

    2014-01-01

    Introduction: Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective: This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and to compare these measurements with those obtained from plaster models. Material and Methods: To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models, acquired with a 3Shape R700T scanner, were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model: two by manual methods, using calipers and brass wire, and two by digital methods, using linear measurements and parabolas. Results: Data were statistically assessed using the Friedman test, and no statistically significant differences were found between the methods (P > 0.05); the linear digital method showed only a slight difference that did not reach statistical significance. Conclusions: Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529

  9. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and to compare the statistical power of the proposed tests under different trend-curve data, three statistical tests were proposed. For large sample sizes, with independent normal assumptions among strata and across consecutive time points, Z and Chi-square test statistics were developed, which are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes, with independent normal assumptions, an F-test statistic was generated, which is a function of the sample sizes of the two strata and the estimated parameters across the study period. If two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If two trend curves cross at low interaction, the power of the Z-test is higher than or equal to the power of both the Chi-square and F-tests; however, at high interaction, the powers of the Chi-square and F-tests are higher than that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to the comparison of trend curves of vaccination coverage estimates for standard vaccine series with National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
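
    One plausible form of the large-sample Z test is sketched below (the paper's exact construction may differ): pool the stratum differences across time points, assuming independent normal estimates at each point. The coverage numbers are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

def trend_z_test(est1, se1, est2, se2):
    """Pooled Z comparing two trend curves across time points, assuming
    independent normal estimates at each point (an illustrative form)."""
    est1, se1, est2, se2 = map(np.asarray, (est1, se1, est2, se2))
    z = (est1 - est2).sum() / np.sqrt((se1**2 + se2**2).sum())
    return z, 2 * norm.sf(abs(z))   # two-sided p-value

# Hypothetical vaccination coverage (%) for two strata over 8 years
z, p = trend_z_test([70, 72, 74, 75, 77, 78, 80, 81], [1.2] * 8,
                    [66, 69, 70, 72, 73, 75, 76, 78], [1.4] * 8)
print(f"Z = {z:.2f}, p = {p:.3g}")
```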

  10. The use of imputed sibling genotypes in sibship-based association analysis: on modeling alternatives, power and model misspecification.

    PubMed

    Minică, Camelia C; Dolan, Conor V; Hottenga, Jouke-Jan; Willemsen, Gonneke; Vink, Jacqueline M; Boomsma, Dorret I

    2013-05-01

    When phenotypic, but no genotypic, data are available for relatives of participants in genetic association studies, previous research has shown that family-based imputed genotypes can boost statistical power when included in such studies. Here, using simulations, we compared the performance of two statistical approaches suitable for modeling imputed genotype data: the mixture approach, which involves the full distribution of the imputed genotypes, and the dosage approach, where the mean of the conditional distribution features as the imputed genotype. Simulations were run by varying sibship size, the size of the phenotypic correlations among siblings, imputation accuracy, and the minor allele frequency of the causal SNP. Furthermore, as imputing sibling data and extending the model to include sibships of size two or greater requires modeling the familial covariance matrix, we inquired whether model misspecification affects power. Finally, the results obtained via simulations were empirically verified in two datasets with continuous phenotype data (height) and with a dichotomous phenotype (smoking initiation). Across the settings considered, the mixture and the dosage approach are equally powerful and both produce unbiased parameter estimates. In addition, the likelihood-ratio test in the linear mixed model appears to be robust to the considered misspecification in the background covariance structure, given low to moderate phenotypic correlations among siblings. Empirical results show that the inclusion of imputed sibling genotypes in association analysis does not always result in a larger test statistic. The actual test statistic may drop in value due to small effect sizes. That is, if the power benefit is small (i.e., the change in the distribution of the test statistic under the alternative is relatively small), the probability of obtaining a smaller test statistic is greater. As the genetic effects are typically hypothesized to be small, in practice the decision on whether family-based imputation could be used as a means to increase power should be informed by prior power calculations and by consideration of the background correlation.

  11. Statistical Analyses of Satellite Cloud Object Data from CERES. Part II; Tropical Convective Cloud Objects During 1998 El Nino and Validation of the Fixed Anvil Temperature Hypothesis

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce a.; Parker, Lindsay; Lin, Bing; Eitzen, Zachary A.; Branson, Mark

    2006-01-01

    Characteristics of tropical deep convective cloud objects observed over the tropical Pacific during January-August 1998 are examined using the Tropical Rainfall Measuring Mission/Clouds and the Earth's Radiant Energy System single scanner footprint (SSF) data. These characteristics include the frequencies of occurrence and statistical distributions of cloud physical properties. Their variations with cloud-object size, sea surface temperature (SST), and satellite precessing cycle are analyzed in detail. A cloud object is defined as a contiguous patch of the Earth composed of satellite footprints within a single dominant cloud-system type. It is found that statistical distributions of cloud physical properties are significantly different among three size categories of cloud objects with equivalent diameters of 100 - 150 km (small), 150 - 300 km (medium), and > 300 km (large), respectively, except for the distributions of ice particle size. The distributions for the larger-size category of cloud objects are more skewed towards high SSTs, high cloud tops, low cloud-top temperature, large ice water path, high cloud optical depth, low outgoing longwave (LW) radiation, and high albedo than the smaller-size category. As SST varied from one satellite precessing cycle to another, the changes in macrophysical properties of cloud objects over the entire tropical Pacific were small for the large-size category of cloud objects, relative to those of the small- and medium-size categories. This result suggests that the fixed anvil temperature hypothesis of Hartmann and Larson may be valid for the large-size category. Combined with the result that a higher percentage of the large-size category of cloud objects occurs during higher SST subperiods, this implies that macrophysical properties of cloud objects would be less sensitive to further warming of the climate. On the other hand, when cloud objects are classified according to SSTs, where large-scale dynamics plays important roles, statistical characteristics of cloud microphysical properties, optical depth, and albedo are not sensitive to the SST, but those of cloud macrophysical properties are strongly dependent upon the SST. Frequency distributions of vertical velocity from the European Centre for Medium-Range Weather Forecasts model that is matched to each cloud object are used to interpret some of the findings in this study.

  12. Perception of the average size of multiple objects in chimpanzees (Pan troglodytes).

    PubMed

    Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki

    2017-08-30

    Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd, without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutionary origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. © 2017 The Authors.

  13. Perception of the average size of multiple objects in chimpanzees (Pan troglodytes)

    PubMed Central

    Imura, Tomoko; Kawakami, Fumito; Shirai, Nobu; Tomonaga, Masaki

    2017-01-01

    Humans can extract statistical information, such as the average size of a group of objects or the general emotion of faces in a crowd, without paying attention to any individual object or face. To determine whether summary perception is unique to humans, we investigated the evolutionary origins of this ability by assessing whether chimpanzees, which are closely related to humans, can also determine the average size of multiple visual objects. Five chimpanzees and 18 humans were able to choose the array in which the average size was larger when presented with a pair of arrays, each containing 12 circles of different or the same sizes. Furthermore, both species were more accurate in judging the average size of arrays consisting of 12 circles of different or the same sizes than in judging the average size of arrays consisting of a single circle. Our findings could not be explained by the use of a strategy in which the chimpanzee detected the largest or smallest circle among those in the array. Our study provides the first evidence that chimpanzees can perceive the average size of multiple visual objects. This indicates that the ability to compute the statistical properties of a complex visual scene is not unique to humans, but is shared between both species. PMID:28835550

  14. A Statistical Model for Estimation of Fish Density Including Correlation in Size, Space, Time and between Species from Research Survey Data

    PubMed Central

    Bastardie, Francois

    2014-01-01

    Trawl survey data with high spatial and seasonal coverage were analysed using a variant of the Log Gaussian Cox Process (LGCP) statistical model to estimate unbiased relative fish densities. The model estimates correlations between observations according to time, space, and fish size, and includes zero observations and over-dispersion. The model utilises the fact that the correlation between numbers of fish caught increases when the distance in space and time between the fish decreases, and that the correlation between size groups in a haul increases when the difference in size decreases. Here the model is extended in two ways. Instead of assuming a natural-scale size correlation, the model is further developed to allow for a transformed length scale. Furthermore, in the present application, the spatial- and size-dependent correlation between species was included. For cod (Gadus morhua) and whiting (Merlangius merlangus), a common structured size correlation was fitted, and a separable structure between the time and space-size correlations was found for each species, whereas more complex structures were required to describe the correlation between species (and space-size). The within-species time correlation is strong, whereas the correlations between the species are weaker over time but strong within the year. PMID:24911631

  15. Geant4 models for simulation of hadron/ion nuclear interactions at moderate and low energies.

    NASA Astrophysics Data System (ADS)

    Ivantchenko, Anton; Ivanchenko, Vladimir; Quesada, Jose-Manuel; Wright, Dennis

    The Geant4 toolkit is intended for Monte Carlo simulation of particle transport in media. It was initially designed for High Energy Physics purposes such as experiments at the Large Hadron Collider (LHC) at CERN. The toolkit offers a set of models allowing effective simulation of cosmic ray interactions with different materials. For moderate- and low-energy hadron/ion interactions with nuclei there are a number of competitive models: the Binary and Bertini intra-nuclear cascade models, the quantum molecular dynamics model (QMD), the INCL/ABLA cascade model, and the Chiral Invariant Phase Space decay model (CHIPS). We report the status of these models for the recent version of Geant4 (release 9.3, December 2009). The Bertini cascade internal cross sections were upgraded. The native Geant4 precompound and de-excitation models were used in the Binary cascade and QMD. They were significantly improved, including emission of light fragments, the Fermi break-up model, the Generalized Evaporation Model (GEM), the multifragmentation model, and the fission model. Comparisons between model predictions and data from thin-target experiments for neutron, proton, light ion, and isotope production are presented and discussed. The focus of these validations is on target materials important for space missions.

  16. New Development on Modelling Fluctuations and Fragmentation in Heavy-Ion Collisions

    NASA Astrophysics Data System (ADS)

    Lin, Hao; Danielewicz, Pawel

    2017-09-01

    During heavy-ion collisions (HIC), the colliding nuclei form an excited composite system. Instabilities present in the system may deform its shape exotically, leading to a break-up into fragments. Many experimental efforts have been devoted to the nuclear multifragmentation phenomenon, while traditional HIC models, lacking a proper treatment of fluctuations, fall short of explaining it. In view of this, we are developing a new model that implements realistic fluctuations in transport simulations. The new model is motivated by the Brownian-motion description of colliding particles. The effects of two-body collisions are recast as one-body diffusion processes. Vastly different dynamical paths are sampled by solving Langevin equations in momentum space. It is the stochastic sampling of dynamical paths that leads to a wide spread of exit channels. In addition, the nucleon degree of freedom is used to enhance the fluctuations. The model has been tested in reactions such as 112Sn + 112Sn and 58Ni + 58Ni, where it yields reasonable results. An exploratory comparison of the 112Sn + 112Sn reaction at 50 MeV/nucleon with two other models, the stochastic mean-field (SMF) and the antisymmetrized molecular dynamics (AMD) models, has also been conducted. Work supported by NSF Grant No. PHY-1403906.
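
    A momentum-space Langevin step of this general kind can be sketched with an Euler-Maruyama update; the drag and diffusion coefficients below are illustrative placeholders, not values from the model itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (hypothetical, not from the transport model)
gamma, D = 0.1, 0.5          # drag and momentum-diffusion coefficients
dt, steps, n = 0.01, 2000, 1000

p = rng.normal(0.0, 2.0, size=(n, 3))   # initial momenta of n test particles
for _ in range(steps):
    # Euler-Maruyama: deterministic drift plus a stochastic diffusive kick
    p += -gamma * p * dt + np.sqrt(2 * D * dt) * rng.normal(size=p.shape)

# For this Ornstein-Uhlenbeck process the stationary variance is D/gamma
print(p.var(axis=0), D / gamma)
```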

  17. Statistical distributions of avalanche size and waiting times in an inter-sandpile cascade model

    NASA Astrophysics Data System (ADS)

    Batac, Rene; Longjas, Anthony; Monterola, Christopher

    2012-02-01

    Sandpile-based models have successfully shed light on key features of nonlinear relaxational processes in nature, particularly the occurrence of fat-tailed magnitude distributions and exponential return times, from simple local stress redistributions. In this work, we extend the existing sandpile paradigm into an inter-sandpile cascade, wherein the avalanches emanating from a uniformly driven sandpile (first layer) are used to trigger the next (second layer), and so on, in a successive fashion. Statistical characterizations reveal that avalanche size distributions evolve from a power law p(S) ∝ S^(-1.3) for the first layer to gamma distributions p(S) ∝ S^α exp(-S/S₀) for layers far away from the uniformly driven sandpile. The resulting avalanche size statistics is found to be associated with the corresponding waiting time distribution, as explained in an accompanying analytic formulation. Interestingly, both the numerical and analytic models show good agreement with actual inventories of non-uniformly driven events in nature.
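
    For intuition, here is a minimal uniformly driven sandpile with open boundaries that records avalanche sizes, corresponding to the first layer only; the lattice size and toppling threshold are illustrative, and the inter-layer cascade described in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
L, zc = 20, 4                        # illustrative lattice size and threshold
grid = np.zeros((L, L), dtype=int)

def drive_and_relax(grid):
    """Drop one grain at a random site, then topple until stable.
    Returns the avalanche size (total number of topplings)."""
    r, c = rng.integers(L, size=2)
    grid[r, c] += 1
    size = 0
    while (grid >= zc).any():
        for r, c in np.argwhere(grid >= zc):
            grid[r, c] -= zc
            size += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < L and 0 <= nc < L:  # grains leave at open edges
                    grid[nr, nc] += 1
    return size

sizes = np.array([drive_and_relax(grid) for _ in range(10000)])
sizes = sizes[sizes > 0]
# Crude look at the fat tail: counts per logarithmic size bin
bins = np.logspace(0, np.log10(sizes.max() + 1), 15)
print(np.histogram(sizes, bins=bins)[0])
```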

  18. OCT Amplitude and Speckle Statistics of Discrete Random Media.

    PubMed

    Almasian, Mitra; van Leeuwen, Ton G; Faber, Dirk J

    2017-11-01

    Speckle, amplitude fluctuations in optical coherence tomography (OCT) images, contains information on sub-resolution structural properties of the imaged sample. Speckle statistics could therefore be utilized in the characterization of biological tissues. However, a rigorous theoretical framework relating OCT speckle statistics to structural tissue properties has yet to be developed. As a first step, we present a theoretical description of OCT speckle, relating the OCT amplitude variance to size and organization for samples of discrete random media (DRM). Starting the calculations from the size and organization of the scattering particles, we analytically find expressions for the OCT amplitude mean, amplitude variance, the backscattering coefficient and the scattering coefficient. We assume fully developed speckle and verify the validity of this assumption by experiments on controlled samples of silica microspheres suspended in water. We show that the OCT amplitude variance is sensitive to sub-resolution changes in size and organization of the scattering particles. Experimentally determined and theoretically calculated optical properties are compared and in good agreement.

  19. Solar granulation and statistical crystallography: A modeling approach using size-shape relations

    NASA Technical Reports Server (NTRS)

    Noever, D. A.

    1994-01-01

    The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.

  20. How conservative is Fisher's exact test? A quantitative evaluation of the two-sample comparative binomial trial.

    PubMed

    Crans, Gerald G; Shuster, Jonathan J

    2008-08-15

    The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level α = 0.05. Additionally, this numerical method is used to define new significance levels α* = α + ε, where ε is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified α (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using α* instead of α) in the two-sample comparative binomial trial. Copyright © 2008 John Wiley & Sons, Ltd.
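
    A brute-force version of the size calculation is easy to sketch for small n (the paper's algorithm for n up to 125 is surely more efficient): enumerate the rejection region of the two-sided FET and maximize the rejection probability over the common success probability p.

```python
import numpy as np
from scipy.stats import fisher_exact, binom

def fet_size(n, alpha=0.05, grid=200):
    """Actual size of the two-sided Fisher exact test for two binomials
    with n subjects per group: max over p of P(reject | p1 = p2 = p)."""
    reject = np.zeros((n + 1, n + 1), dtype=float)
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            _, pval = fisher_exact([[x1, n - x1], [x2, n - x2]])
            reject[x1, x2] = pval <= alpha
    size = 0.0
    for p in np.linspace(0.01, 0.99, grid):
        pmf = binom.pmf(np.arange(n + 1), n, p)
        size = max(size, pmf @ reject @ pmf)
    return size

print(fet_size(15))   # well below the nominal 0.05, i.e. conservative
```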

  1. Weighting by Inverse Variance or by Sample Size in Random-Effects Meta-Analysis

    ERIC Educational Resources Information Center

    Marin-Martinez, Fulgencio; Sanchez-Meca, Julio

    2010-01-01

    Most of the statistical procedures in meta-analysis are based on the estimation of average effect sizes from a set of primary studies. The optimal weight for averaging a set of independent effect sizes is the inverse variance of each effect size, but in practice these weights have to be estimated, being affected by sampling error. When assuming a…
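
    A sketch of the two weighting schemes being compared, on made-up effect sizes; with estimated variances the inverse-variance weights are themselves noisy, which is the issue the abstract raises.

```python
import numpy as np

# Hypothetical effect sizes (d), within-study variances, and sample sizes
d = np.array([0.30, 0.45, 0.12, 0.60, 0.25])
v = np.array([0.040, 0.055, 0.020, 0.090, 0.035])
n = np.array([100, 75, 200, 45, 120])

w_iv = 1 / v              # inverse-variance weights (optimal if v were known)
w_n  = n.astype(float)    # sample-size weights (a simpler, more stable proxy)

for name, w in (("inverse-variance", w_iv), ("sample-size", w_n)):
    mean = (w * d).sum() / w.sum()
    print(f"{name:>16s} weighted mean d = {mean:.3f}")
```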

  2. Statistical considerations for agroforestry studies

    Treesearch

    James A. Baldwin

    1993-01-01

    Statistical topics related to agroforestry studies are discussed. These include study objectives, populations of interest, sampling schemes, sample sizes, estimation vs. hypothesis testing, and P-values. In addition, a relatively new and much-improved histogram display is described.

  3. North American transportation : statistics on Canadian, Mexican, and United States transportation

    DOT National Transportation Integrated Search

    1994-05-01

    North American Transportation: Statistics on Canadian, Mexican, and United States transportation contains extensive data on the size and scope, use, employment, fuel consumption, and economic role of each country's transportation system. It was publi...

  4. 2012 Market Report on Wind Technologies in Distributed Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orrell, Alice C.

    2013-08-01

    An annual report on U.S. wind power in distributed applications – expanded to include small, mid-size, and utility-scale installations – covering key economic data; installation, capacity, and generation statistics; and more.

  5. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications

    PubMed Central

    Chaibub Neto, Elias

    2015-01-01

    In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation in real and simulated data sets, when bootstrapping Pearson’s sample correlation coefficient, and compared its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and number of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably/considerably faster for small/moderate sample sizes. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower than the straightforward implementation due to increased time expenditures in the generation of weight matrices via multinomial sampling. PMID:26125965
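
    The multinomial-weighting idea translates directly to NumPy (the paper's implementations are in R); here is a sketch for bootstrapping Pearson's correlation, with all data synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, B = 50, 10000
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)

# Multinomial bootstrap weights: each row is one replication, rows sum to 1
W = rng.multinomial(n, np.full(n, 1 / n), size=B) / n        # shape (B, n)

# Weighted sample moments for all replications via matrix products
mx, my = W @ x, W @ y
mxx, myy, mxy = W @ (x * x), W @ (y * y), W @ (x * y)
r = (mxy - mx * my) / np.sqrt((mxx - mx**2) * (myy - my**2))

print(np.percentile(r, [2.5, 97.5]))   # bootstrap interval for Pearson's r
```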

  6. Generalizations and Extensions of the Probability of Superiority Effect Size Estimator

    ERIC Educational Resources Information Center

    Ruscio, John; Gera, Benjamin Lee

    2013-01-01

    Researchers are strongly encouraged to accompany the results of statistical tests with appropriate estimates of effect size. For 2-group comparisons, a probability-based effect size estimator ("A") has many appealing properties (e.g., it is easy to understand, robust to violations of parametric assumptions, insensitive to outliers). We review…
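
    The estimator itself reduces to a count over all pairs; a minimal sketch with synthetic scores (A = P(X > Y) + 0.5 P(X = Y), the probability-of-superiority statistic the review discusses).

```python
import numpy as np

def prob_superiority(x, y):
    """A = P(X > Y) + 0.5 * P(X = Y), estimated over all (x, y) pairs."""
    x, y = np.asarray(x), np.asarray(y)
    gt = (x[:, None] > y[None, :]).mean()
    eq = (x[:, None] == y[None, :]).mean()
    return gt + 0.5 * eq

rng = np.random.default_rng(3)
treat = rng.normal(0.5, 1, 40)   # hypothetical treatment-group scores
ctrl = rng.normal(0.0, 1, 40)    # hypothetical control-group scores
print(f"A = {prob_superiority(treat, ctrl):.2f}")   # 0.5 means no effect
```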

  7. Beyond Cohen's "d": Alternative Effect Size Measures for Between-Subject Designs

    ERIC Educational Resources Information Center

    Peng, Chao-Ying Joanne; Chen, Li-Ting

    2014-01-01

    Given the long history of discussion of issues surrounding statistical testing and effect size indices and various attempts by the American Psychological Association and by the American Educational Research Association to encourage the reporting of effect size, most journals in education and psychology have witnessed an increase in effect size…

  8. 75 FR 48815 - Medicaid Program and Children's Health Insurance Program (CHIP); Revisions to the Medicaid...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-11

    ... size may be reduced by the finite population correction factor. The finite population correction is a statistical formula utilized to determine sample size where the population is considered finite rather than... program may notify us and the annual sample size will be reduced by the finite population correction...
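
    The excerpt does not quote the formula itself; the finite population correction is commonly applied to a required sample size as in this sketch of the standard textbook form.

```python
def fpc_sample_size(n0, N):
    """Reduce an infinite-population sample size n0 using the finite
    population correction for a population of size N."""
    return n0 / (1 + (n0 - 1) / N)

# e.g. a nominal n0 = 384 shrinks to about 322 for a population of 2000
print(round(fpc_sample_size(384, 2000)))
```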

  9. Estimation of census and effective population sizes: the increasing usefulness of DNA-based approaches

    Treesearch

    Gordon Luikart; Nils Ryman; David A. Tallmon; Michael K. Schwartz; Fred W. Allendorf

    2010-01-01

    Population census size (N_C) and effective population size (N_e) are two crucial parameters that influence population viability, wildlife management decisions, and conservation planning. Genetic estimators of both N_C and N_e are increasingly widely used because molecular markers are increasingly available, statistical methods are improving rapidly, and genetic estimators...

  10. Sampling stratospheric aerosols with impactors

    NASA Technical Reports Server (NTRS)

    Oberbeck, Verne R.

    1989-01-01

    Derivation of statistically significant size distributions from impactor samples of rarefied stratospheric aerosols imposes difficult sampling constraints on collector design. It is shown that it is necessary to design impactors of a different size for each range of aerosol size collected so as to obtain acceptable levels of uncertainty with a reasonable amount of data reduction.

  11. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature.

    PubMed

    Szucs, Denes; Ioannidis, John P A

    2017-03-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64-1.46) for nominally statistically significant results and D = 0.24 (0.11-0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience.
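
    Power figures like these can be reproduced from the noncentral t distribution; below is a sketch for a two-sided, two-sample t-test with an illustrative n = 20 per group (not a sample size taken from the paper).

```python
import numpy as np
from scipy.stats import nct, t as t_dist

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test for standardized effect d."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)          # noncentrality parameter
    tcrit = t_dist.ppf(1 - alpha / 2, df)
    return nct.sf(tcrit, df, ncp) + nct.cdf(-tcrit, df, ncp)

for d in (0.2, 0.5, 0.8):   # Cohen's small, medium, large effects
    print(f"d = {d}: power = {two_sample_power(d, 20):.2f}")
```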

  12. Improved population estimates through the use of auxiliary information

    USGS Publications Warehouse

    Johnson, D.H.; Ralph, C.J.; Scott, J.M.

    1981-01-01

    When estimating the size of a population of birds, the investigator may have, in addition to an estimator based on a statistical sample, information on one of several auxiliary variables, such as: (1) estimates of the population made on previous occasions, (2) measures of habitat variables associated with the size of the population, and (3) estimates of the population sizes of other species that correlate with the species of interest. Although many studies have described the relationships between each of these kinds of data and the population size to be estimated, very little work has been done to improve the estimator by incorporating such auxiliary information. A statistical methodology termed 'empirical Bayes' seems to be appropriate to these situations. The potential that empirical Bayes methodology has for improved estimation of the population size of the Mallard (Anas platyrhynchos) is explored. In the example considered, three empirical Bayes estimators were found to reduce the error by one-fourth to one-half of that of the usual estimator.

  13. Grain-Size Based Additivity Models for Scaling Multi-rate Uranyl Surface Complexation in Subsurface Sediments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Xiaoying; Liu, Chongxuan; Hu, Bill X.

    The additivity model assumes that field-scale reaction properties in a sediment, including surface area, reactive site concentration, and reaction rate, can be predicted from the field-scale grain-size distribution by linearly adding reaction properties estimated in the laboratory for individual grain-size fractions. This study evaluated the additivity model in scaling mass transfer-limited, multi-rate uranyl (U(VI)) surface complexation reactions in a contaminated sediment. Experimental data of rate-limited U(VI) desorption in a stirred flow-cell reactor were used to estimate the statistical properties of the rate constants for individual grain-size fractions, which were then used to predict rate-limited U(VI) desorption in the composite sediment. The result indicated that the additivity model with respect to the rate of U(VI) desorption provided a good prediction of U(VI) desorption in the composite sediment. However, the rate constants were not directly scalable using the additivity model. An approximate additivity model for directly scaling rate constants was subsequently proposed and evaluated. The result showed that the approximate model provided a good prediction of the experimental results within statistical uncertainty. This study also found that a gravel-size fraction (2 to 8 mm), which is often ignored in modeling U(VI) sorption and desorption, is statistically significant to the U(VI) desorption in the sediment.

  14. Confidence crisis of results in biomechanics research.

    PubMed

    Knudson, Duane

    2017-11-01

    Many biomechanics studies have small sample sizes and incorrect statistical analyses, so reporting of inaccurate inferences and inflated magnitude of effects are common in the field. This review examines these issues in biomechanics research and summarises potential solutions from research in other fields to increase the confidence in the experimental effects reported in biomechanics. Authors, reviewers and editors of biomechanics research reports are encouraged to improve sample sizes and the resulting statistical power, improve reporting transparency, improve the rigour of statistical analyses used, and increase the acceptance of replication studies to improve the validity of inferences from data in biomechanics research. The application of sports biomechanics research results would also improve if a larger percentage of unbiased effects and their uncertainty were reported in the literature.

  15. Origin of Pareto-like spatial distributions in ecosystems.

    PubMed

    Manor, Alon; Shnerb, Nadav M

    2008-12-31

    Recent studies of cluster distribution in various ecosystems revealed Pareto statistics for the size of spatial colonies. These results were supported by cellular automata simulations that yield robust criticality for endogenous pattern formation based on positive feedback. We show that this patch statistics is a manifestation of the law of proportionate effect. Mapping the stochastic model to a Markov birth-death process, the transition rates are shown to scale linearly with cluster size. This mapping provides a connection between patch statistics and the dynamics of the ecosystem; the "first passage time" for different colonies emerges as a powerful tool that discriminates between endogenous and exogenous clustering mechanisms. Imminent catastrophic shifts (such as desertification) manifest themselves in a drastic change of the stability properties of spatial colonies.

  16. Tests of Mediation: Paradoxical Decline in Statistical Power as a Function of Mediator Collinearity

    PubMed Central

    Beasley, T. Mark

    2013-01-01

    Increasing the correlation between the independent variable and the mediator (the a coefficient) increases the effect size (ab) for mediation analysis; however, increasing a by definition increases collinearity in mediation models. As a result, the standard errors of product tests increase. The variance inflation due to increases in a at some point outweighs the increase in the effect size (ab) and results in a loss of statistical power. This phenomenon also occurs with nonparametric bootstrapping approaches, because the variance of the bootstrap distribution of ab approximates the variance expected from normal theory. Both variances increase dramatically when a exceeds the b coefficient, thus explaining the power decline with increases in a. Implications for statistical analysis and applied researchers are discussed. PMID:24954952
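
    The paradox can be reproduced in a few lines: simulate a standardized mediation model, compute the Sobel product test, and watch power rise and then fall as a grows while b stays fixed. All settings here are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def sobel_z(x, m, y):
    """Sobel z for the mediated effect a*b (normal-theory product test)."""
    ca = np.polyfit(x, m, 1)                      # a path: m ~ x
    a = ca[0]
    res_a = m - np.polyval(ca, x)
    sa = np.sqrt(res_a.var(ddof=2) / ((x - x.mean()) ** 2).sum())
    X = np.column_stack([np.ones_like(x), m, x])  # b path: y ~ m + x
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    cov = resid.var(ddof=3) * np.linalg.inv(X.T @ X)
    b, sb = beta[1], np.sqrt(cov[1, 1])
    return (a * b) / np.sqrt(a**2 * sb**2 + b**2 * sa**2)

n, b_true = 100, 0.3
for a_true in (0.3, 0.6, 0.9, 0.99):
    hits = 0
    for _ in range(500):
        x = rng.normal(size=n)
        m = a_true * x + rng.normal(scale=np.sqrt(1 - a_true**2), size=n)
        y = b_true * m + rng.normal(size=n)
        hits += abs(sobel_z(x, m, y)) > 1.96
    print(f"a = {a_true:.2f}: power = {hits / 500:.2f}")
```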

  17. An Investigation of Integrated Sizing for US Army Men and Women

    DTIC Science & Technology

    1981-08-01

    Descriptors: anthropometry; U.S. Army; field clothing; measurement(s); military personnel; sizing (clothing); body size; men; clothing design; sizes. Cited reports include: White, Robert M. and Edmund Churchill, 1971, The Body Size of Soldiers, U.S. Army Anthropometry - 1966, Technical Report 72-51; and 1977, Anthropometry of Women in the U.S. Army - 1977; Report No. 2 - The Basic Univariate Statistics, Technical Report NATICK/TR-77/024, U.S. Army.

  18. True external diameter better predicts hemodynamic performance of bioprosthetic aortic valves than the manufacturers' stated size.

    PubMed

    Cevasco, Marisa; Mick, Stephanie L; Kwon, Michael; Lee, Lawrence S; Chen, Edward P; Chen, Frederick Y

    2013-05-01

    Currently, there is no universal standard for sizing bioprosthetic aortic valves; hence, a standardized comparison was performed to clarify this issue. Every size of four commercially available bioprosthetic aortic valves marketed in the United States (Biocor Supra; Mosaic Ultra; Magna Ease; Mitroflow) was obtained. Custom sizers, accurate to 0.0025 mm, were then created to represent aortic roots of 18 mm through 32 mm, and these were used to measure the external diameter of each valve. Using the effective orifice area (EOA) and transvalvular pressure gradient (TPG) data submitted to the FDA, a comparison was made between the hemodynamic properties of valves with equivalent manufacturer-stated sizes and valves with equivalent measured external diameters. Based on manufacturer size alone, the valves at first seemed to be hemodynamically different from each other, with Mitroflow valves appearing hemodynamically superior, having a larger EOA and an equivalent or superior TPG (p < 0.05). However, Mitroflow valves had a larger measured external diameter than the other valves of a given numerical manufacturer size. Valves with equivalent external diameters were then compared, regardless of the stated manufacturer sizes. For truly equivalently sized valves (i.e., by measured external diameter) there was no clear hemodynamic difference. There was no statistical difference in the EOAs between the Biocor Supra, Mosaic Ultra, and Mitroflow valves, and the Magna Ease valve had a statistically smaller EOA (p < 0.05). Comparing the mean TPG, the Biocor Supra and Mitroflow valves had statistically equivalent gradients to each other, as did the Mosaic Ultra and Magna Ease valves. When comparing valves of the same numerical manufacturer size, there appears to be a difference in hemodynamic performance across different manufacturers' valves according to FDA data. However, comparing equivalently measured valves eliminates the differences between valves produced by different manufacturers.

  19. Inflammation response and cytotoxic effects in human THP-1 cells of size-fractionated PM10 extracts in a polluted urban site.

    PubMed

    Schilirò, T; Alessandria, L; Bonetta, S; Carraro, E; Gilli, G

    2016-02-01

    To contribute to a greater characterization of the toxicity of airborne particulate matter, size-fractionated PM10 was sampled during different seasons at a polluted urban site in Torino, a northern Italian city. Extracts (organic and aqueous) of three main size fractions (PM 10-3 μm; PM 3-0.95 μm; PM < 0.95 μm) were assayed with THP-1 cells to evaluate their effects on cell proliferation, LDH activity, and TNFα, IL-8, and CYP1A1 expression. The mean PM10 concentrations were statistically different in summer and in winter, and the finest fraction, PM < 0.95 μm, was always higher than the others. Size-fractionated PM10 extracts, sampled at an urban traffic meteorological-chemical station, produced size-related toxicological effects in relation to season and particle extraction. The PM summer extracts induced a significant release of LDH compared with winter extracts and produced a size-related effect, with higher values measured with PM 10-3 μm. Exposure to size-fractionated PM10 extracts did not induce significant expression of TNFα. IL-8 expression was influenced by exposure to size-fractionated PM10 extracts, and statistically significant differences were found between the kinds of extract for both seasons. The mean fold increases in CYP1A1 expression were statistically different in summer and in winter; winter fraction extracts produced a size-related effect, in particular for organic samples, with higher values measured with PM < 0.95 μm extracts. Our results confirm that the measurement of PM alone can be misleading for the assessment of air quality; moreover, we support efforts toward identifying potential effect-based tools (e.g., in vitro tests) that could be used in the context of different monitoring programs. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. The Attenuation of Correlation Coefficients: A Statistical Literacy Issue

    ERIC Educational Resources Information Center

    Trafimow, David

    2016-01-01

    Much of the science reported in the media depends on correlation coefficients. But the size of correlation coefficients depends, in part, on the reliability with which the correlated variables are measured. Understanding this is a statistical literacy issue.
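
    The standard (Spearman) correction for attenuation makes the point concrete; the numbers below are made up for illustration.

```python
def disattenuate(r_xy, r_xx, r_yy):
    """Spearman's correction for attenuation: the estimated correlation
    between true scores, given the observed r and the two reliabilities."""
    return r_xy / (r_xx * r_yy) ** 0.5

# An observed r of 0.30 with reliabilities 0.70 and 0.80 implies ~0.40
print(f"{disattenuate(0.30, 0.70, 0.80):.2f}")
```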

  1. Seabed mapping and characterization of sediment variability using the usSEABED data base

    USGS Publications Warehouse

    Goff, J.A.; Jenkins, C.J.; Williams, S. Jeffress

    2008-01-01

    We present a methodology for statistical analysis of randomly located marine sediment point data, and apply it to the US continental shelf portions of usSEABED mean grain size records. The usSEABED database, like many modern, large environmental datasets, is heterogeneous and interdisciplinary. We statistically test the database as a source of mean grain size data, and from it provide a first examination of regional seafloor sediment variability across the entire US continental shelf. Data derived from laboratory analyses ("extracted") and from word-based descriptions ("parsed") are treated separately, and they are compared statistically and deterministically. Data records are selected for spatial analysis by their location within sample regions: polygonal areas defined in ArcGIS chosen by geography, water depth, and data sufficiency. We derive isotropic, binned semivariograms from the data, and invert these for estimates of noise variance, field variance, and decorrelation distance. The highly erratic nature of the semivariograms is a result both of the random locations of the data and of the high level of data uncertainty (noise). This decorrelates the data covariance matrix for the inversion, and largely prevents robust estimation of the fractal dimension. Our comparison of the extracted and parsed mean grain size data demonstrates important differences between the two. In particular, extracted measurements generally produce finer mean grain sizes, lower noise variance, and lower field variance than parsed values. Such relationships can be used to derive a regionally dependent conversion factor between the two. Our analysis of sample regions on the US continental shelf revealed considerable geographic variability in the estimated statistical parameters of field variance and decorrelation distance. Some regional relationships are evident, and overall there is a tendency for field variance to be higher where the average mean grain size is finer grained. Surprisingly, parsed and extracted noise magnitudes correlate with each other, which may indicate that some portion of the data variability that we identify as "noise" is caused by real grain size variability at very short scales. Our analyses demonstrate that by applying a bias-correction proxy, usSEABED data can be used to generate reliable interpolated maps of regional mean grain size and sediment character. 
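
    A sketch of the semivariogram workflow on synthetic data: simulate a spatially correlated field plus measurement noise at random locations, bin the empirical semivariogram, and invert an exponential model for the nugget (noise variance), partial sill (field variance), and decorrelation distance. Everything here, including the covariance parameters, is hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(5)

# Synthetic stand-in for irregularly located mean grain-size records
pts = rng.uniform(0, 100, size=(300, 2))               # locations (km)
D = squareform(pdist(pts))
cov = 0.04 * np.exp(-D / 15.0) + 0.02 * np.eye(300)    # field 0.04, noise 0.02
phi = 2.0 + np.linalg.cholesky(cov) @ rng.normal(size=300)

# Binned empirical semivariogram from the random locations
d = pdist(pts)
g = 0.5 * pdist(phi[:, None], metric="sqeuclidean")    # semivariance per pair
bins = np.linspace(0, 50, 11)
idx = np.digitize(d, bins)
h  = np.array([d[idx == k].mean() for k in range(1, len(bins))])
gh = np.array([g[idx == k].mean() for k in range(1, len(bins))])

# Invert an exponential model for nugget, partial sill, and range
expmodel = lambda h, c0, c1, a: c0 + c1 * (1.0 - np.exp(-h / a))
(c0, c1, a), _ = curve_fit(expmodel, h, gh, p0=[0.01, 0.05, 10.0])
print(f"noise {c0:.3f}, field {c1:.3f}, decorrelation {a:.1f} km")
```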

  2. Statistical analyses support power law distributions found in neuronal avalanches.

    PubMed

    Klaus, Andreas; Yu, Shan; Plenz, Dietmar

    2011-01-01

    The size distribution of neuronal avalanches in cortical networks has been reported to follow a power law distribution with exponent close to -1.5, which is a reflection of long-range spatial correlations in spontaneous neuronal activity. However, identifying power law scaling in empirical data can be difficult and sometimes controversial. In the present study, we tested the power law hypothesis for neuronal avalanches by using more stringent statistical analyses. In particular, we performed the following steps: (i) analysis of finite-size scaling to identify scale-free dynamics in neuronal avalanches, (ii) model parameter estimation to determine the specific exponent of the power law, and (iii) comparison of the power law to alternative model distributions. Consistent with critical state dynamics, avalanche size distributions exhibited robust scaling behavior in which the maximum avalanche size was limited only by the spatial extent of sampling ("finite size" effect). This scale-free dynamics suggests the power law as a model for the distribution of avalanche sizes. Using both the Kolmogorov-Smirnov statistic and a maximum likelihood approach, we found the slope to be close to -1.5, which is in line with previous reports. Finally, the power law model for neuronal avalanches was compared to the exponential and to various heavy-tail distributions based on the Kolmogorov-Smirnov distance and by using a log-likelihood ratio test. Both the power law distribution without and with exponential cut-off provided significantly better fits to the cluster size distributions in neuronal avalanches than the exponential, the lognormal and the gamma distribution. In summary, our findings strongly support the power law scaling in neuronal avalanches, providing further evidence for critical state dynamics in superficial layers of cortex.
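
    The maximum-likelihood exponent and KS distance mentioned are straightforward to sketch for the continuous case (following the standard Clauset-Shalizi-Newman recipe, not the authors' exact code); the avalanche sizes below are synthetic.

```python
import numpy as np

def powerlaw_mle(x, xmin=1.0):
    """Continuous power-law MLE, alpha = 1 + n / sum(log(x/xmin)),
    plus the KS distance between the data and the fitted CDF."""
    x = np.sort(np.asarray(x, float))
    x = x[x >= xmin]
    n = len(x)
    alpha = 1 + n / np.log(x / xmin).sum()
    cdf_fit = 1 - (x / xmin) ** (1 - alpha)
    cdf_emp = np.arange(1, n + 1) / n
    return alpha, np.abs(cdf_emp - cdf_fit).max()

# Synthetic avalanche sizes with true exponent 1.5, via inverse-CDF sampling:
# P(X > x) = x^(-0.5) corresponds to a density exponent of 1.5
rng = np.random.default_rng(2)
x = (1 - rng.uniform(size=5000)) ** (-1 / 0.5)
alpha, ks = powerlaw_mle(x)
print(f"alpha = {alpha:.2f} (true 1.5), KS = {ks:.3f}")
```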

  3. Orphan therapies: making best use of postmarket data.

    PubMed

    Maro, Judith C; Brown, Jeffrey S; Dal Pan, Gerald J; Li, Lingling

    2014-08-01

    Postmarket surveillance of the comparative safety and efficacy of orphan therapeutics is challenging, particularly when multiple therapeutics are licensed for the same orphan indication. To make best use of product-specific registry data collected to fulfill regulatory requirements, we propose the creation of a distributed electronic health data network among registries. Such a network could support sequential statistical analyses designed to detect early warnings of excess risks. We use a simulated example to explore the circumstances under which a distributed network may prove advantageous. We perform sample size calculations for sequential and non-sequential statistical studies aimed at comparing the incidence of hepatotoxicity following initiation of two newly licensed therapies for homozygous familial hypercholesterolemia. We calculate the sample size savings ratio, or the proportion of sample size saved if one conducted a sequential study as compared to a non-sequential study. Then, using models to describe the adoption and utilization of these therapies, we simulate when these sample sizes are attainable in calendar years. We then calculate the analytic calendar time savings ratio, analogous to the sample size savings ratio. We repeat these analyses for numerous scenarios. Sequential analyses detect effect sizes earlier or at the same time as non-sequential analyses. The most substantial potential savings occur when the market share is more imbalanced (i.e., 90% for therapy A) and the effect size is closest to the null hypothesis. However, due to low exposure prevalence, these savings are difficult to realize within the 30-year time frame of this simulation for scenarios in which the outcome of interest occurs at or more frequently than one event/100 person-years. We illustrate a process to assess whether sequential statistical analyses of registry data performed via distributed networks may prove a worthwhile infrastructure investment for pharmacovigilance.

  4. Identification of optimal mask size parameter for noise filtering in 99mTc-methylene diphosphonate bone scintigraphy images.

    PubMed

    Pandey, Anil K; Bisht, Chandan S; Sharma, Param D; ArunRaj, Sreedharan Thankarajan; Taywade, Sameer; Patel, Chetan; Bal, Chandrashekhar; Kumar, Rakesh

    2017-11-01

    99mTc-methylene diphosphonate (99mTc-MDP) bone scintigraphy images have a limited number of counts per pixel. A noise-filtering method based on the local statistics of the image produces better results than a linear filter. However, the mask size has a significant effect on image quality. In this study, we have identified the optimal mask size that yields a good smooth bone scan image. Forty-four bone scan images were processed using mask sizes of 3, 5, 7, 9, 11, 13, and 15 pixels. The input and processed images were reviewed in two steps. In the first step, the images were inspected, and the mask sizes that produced images with a significant loss of clinical detail in comparison with the input image were excluded. In the second step, the image quality of the 40 sets of images (each set had the input image and its corresponding three processed images with 3-, 5-, and 7-pixel masks) was assessed by two nuclear medicine physicians. They selected one good smooth image from each set of images. The image quality was also assessed quantitatively with a line profile. Fisher's exact test was used to find statistically significant differences between the image quality obtained with the 5- and 7-pixel masks at a 5% cut-off. A statistically significant difference was found between the image quality obtained with the 5- and 7-pixel masks (P=0.00528). The identified optimal mask size to produce a good smooth image was 7 pixels. The best mask size for the Jong-Sen Lee filter was found to be 7×7 pixels, which yielded 99mTc-MDP bone scan images with the highest acceptable smoothness.
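
    A minimal sketch of a local-statistics (Lee-type) adaptive filter with a 7×7 mask, applied to a toy Poisson-noise image; this illustrates the filter family referred to, not the authors' implementation, and the noise-variance estimate is a deliberately crude placeholder.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, noise_var=None):
    """Local-statistics adaptive filter: smooths flat (noise-dominated)
    regions while preserving high-variance structure."""
    img = img.astype(float)
    mean = uniform_filter(img, size)
    var = np.clip(uniform_filter(img**2, size) - mean**2, 0, None)
    if noise_var is None:
        noise_var = var.mean()               # crude global noise estimate
    gain = np.clip(var - noise_var, 0, None) / np.where(var > 0, var, 1)
    return mean + gain * (img - mean)

# Toy "scan": a hot region on a low background with Poisson counting noise
rng = np.random.default_rng(4)
truth = np.zeros((64, 64)); truth[20:40, 20:40] = 30
img = rng.poisson(truth + 5).astype(float)
print(np.abs(lee_filter(img, 7) - (truth + 5)).mean(),   # filtered error
      np.abs(img - (truth + 5)).mean())                  # raw error
```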

  5. Power of mental health nursing research: a statistical analysis of studies in the International Journal of Mental Health Nursing.

    PubMed

    Gaskin, Cadeyrn J; Happell, Brenda

    2013-02-01

    Having sufficient power to detect effect sizes of an expected magnitude is a core consideration when designing studies in which inferential statistics will be used. The main aim of this study was to investigate the statistical power in studies published in the International Journal of Mental Health Nursing. From volumes 19 (2010) and 20 (2011) of the journal, studies were analysed for their power to detect small, medium, and large effect sizes, according to Cohen's guidelines. The power of the 23 studies included in this review to detect small, medium, and large effects was 0.34, 0.79, and 0.94, respectively. In 90% of papers, no adjustments for experiment-wise error were reported. With a median of nine inferential tests per paper, the mean experiment-wise error rate was 0.51. A priori power analyses were only reported in 17% of studies. Although effect sizes for correlations and regressions were routinely reported, effect sizes for other tests (χ²-tests, t-tests, ANOVA/MANOVA) were largely absent from the papers. All types of effect sizes were infrequently interpreted. Researchers are strongly encouraged to conduct power analyses when designing studies, and to avoid scattergun approaches to data analysis (i.e. undertaking large numbers of tests in the hope of finding 'significant' results). Because reviewing effect sizes is essential for determining the clinical significance of study findings, researchers would better serve the field of mental health nursing if they reported and interpreted effect sizes. © 2012 The Authors. International Journal of Mental Health Nursing © 2012 Australian College of Mental Health Nurses Inc.
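    As a rough illustration of the quantities discussed above, the sketch below (using statsmodels, an assumed library choice; the review's own computations are not reproduced) evaluates two-sample t-test power at Cohen's small, medium, and large effects, together with the standard experiment-wise error formula 1 - (1 - α)^k for k independent tests:

    ```python
    from statsmodels.stats.power import TTestIndPower

    n_per_group = 25                     # hypothetical group size
    for label, d in [("small", 0.2), ("medium", 0.5), ("large", 0.8)]:
        pw = TTestIndPower().power(effect_size=d, nobs1=n_per_group,
                                   ratio=1.0, alpha=0.05)
        print(f"{label}: power = {pw:.2f}")

    # experiment-wise (family-wise) error for k independent tests at alpha = 0.05
    k = 9                                # the journal's median tests per paper
    print(f"experiment-wise error rate: {1 - (1 - 0.05)**k:.2f}")
    ```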

  6. SEDPAK—A comprehensive operational system and data-processing package in APPLESOFT BASIC for a settling tube, sediment analyzer

    NASA Astrophysics Data System (ADS)

    Goldbery, R.; Tehori, O.

    SEDPAK provides a comprehensive software package for operation of a settling tube and sand analyzer (2-0.063 mm) and includes data-processing programs for statistical and graphic output of results. The programs are menu-driven and written in APPLESOFT BASIC, conforming with APPLE 3.3 DOS. Data storage and retrieval from disc is an important feature of SEDPAK. Additional features of SEDPAK include condensation of raw settling data via standard size-calibration curves to yield statistical grain-size parameters, and plots of grain-size frequency distributions and cumulative log/probability curves. The program also has a module for processing grain-size frequency data from sieved samples. A further feature of SEDPAK is the option for automatic data processing and graphic output of a sequential or nonsequential array of samples on one side of a disc.
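    The statistical grain-size parameters that packages like SEDPAK derive from a cumulative curve are typically the Folk and Ward graphic measures; the abstract does not state SEDPAK's exact formulas, so the sketch below (in Python rather than APPLESOFT BASIC, for brevity, and on illustrative data) uses those standard measures as an assumed stand-in:

    ```python
    import numpy as np

    # phi = -log2(diameter in mm); cum = cumulative weight percent (increasing)
    phi = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    cum = np.array([5.0, 25.0, 55.0, 85.0, 100.0])

    def percentile_phi(p):
        return np.interp(p, cum, phi)   # linear interpolation on the cumulative curve

    p5, p16, p50, p84, p95 = [percentile_phi(p) for p in (5, 16, 50, 84, 95)]
    graphic_mean = (p16 + p50 + p84) / 3
    sorting = (p84 - p16) / 4 + (p95 - p5) / 6.6   # inclusive graphic std. deviation
    print(round(graphic_mean, 2), round(sorting, 2))
    ```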

  7. Sample size in psychological research over the past 30 years.

    PubMed

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  8. Effective Thermal Conductivity of an Aluminum Foam + Water Two Phase System

    NASA Technical Reports Server (NTRS)

    Moskito, John

    1996-01-01

    This study examined the effect of volume fraction and pore size on the effective thermal conductivity of an aluminum foam and water system. Nine specimens of aluminum foam representing a matrix of three volume fractions (4-8% by vol.) and three pore sizes (2-4 mm) were tested with water to determine relationships to the effective thermal conductivity. It was determined that increases in volume fraction of the aluminum phase were correlated to increases in the effective thermal conductivity. It was not statistically possible to prove that changes in pore size of the aluminum foam correlated to changes in the effective thermal conductivity. However, interaction effects between the volume fraction and pore size of the foam were statistically significant. Ten theoretical models were selected from the published literature to compare against the experimental data. Models by Asaad, Hadley, and de Vries provided effective thermal conductivity predictions within a 95% confidence interval.

  9. Statistical Modeling of Robotic Random Walks on Different Terrain

    NASA Astrophysics Data System (ADS)

    Naylor, Austin; Kinnaman, Laura

    Issues of public safety, especially with crowd dynamics and pedestrian movement, have been modeled by physicists using methods from statistical mechanics over the last few years. Complex decision making of humans moving on different terrains can be modeled using random walks (RW) and correlated random walks (CRW). The effect of different terrains, such as a constant increasing slope, on RW and CRW was explored. LEGO robots were programmed to make RW and CRW with uniform step sizes. Level ground tests demonstrated that the robots had the expected step size distribution and correlation angles (for CRW). The mean square displacement was calculated for each RW and CRW on different terrains and matched expected trends. The step size distribution was determined to change based on the terrain; theoretical predictions for the step size distribution were made for various simple terrains.
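    A minimal sketch of the simulation idea, assuming von Mises-distributed turning angles for the CRW (the abstract does not specify the angle distribution used):

    ```python
    import numpy as np

    def walk(n_steps, step=1.0, kappa=None, rng=None):
        """Uniform-step 2D walk; kappa=None gives an uncorrelated RW,
        otherwise turning angles are von Mises distributed (a CRW)."""
        rng = rng or np.random.default_rng()
        if kappa is None:
            headings = rng.uniform(0.0, 2*np.pi, n_steps)
        else:
            headings = np.cumsum(rng.vonmises(0.0, kappa, n_steps))
        steps = step * np.column_stack([np.cos(headings), np.sin(headings)])
        return np.cumsum(steps, axis=0)

    def ensemble_msd(n_walks, n_steps, **kw):
        """Mean square displacement of the endpoint over an ensemble of walks."""
        return np.mean([np.sum(walk(n_steps, **kw)[-1]**2) for _ in range(n_walks)])

    print(ensemble_msd(1000, 100))              # RW: MSD ~ n_steps * step^2
    print(ensemble_msd(1000, 100, kappa=4.0))   # CRW: larger MSD from persistence
    ```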

  10. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  11. A Statistical Analysis of the Economic Drivers of Battery Energy Storage in Commercial Buildings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Long, Matthew; Simpkins, Travis; Cutler, Dylan

    There is significant interest in using battery energy storage systems (BESS) to reduce peak demand charges, and therefore the life cycle cost of electricity, in commercial buildings. This paper explores the drivers of economic viability of BESS in commercial buildings through statistical analysis. A sample population of buildings was generated, a techno-economic optimization model was used to size and dispatch the BESS, and the resulting optimal BESS sizes were analyzed for relevant predictor variables. Explanatory regression analyses were used to demonstrate that peak demand charges are the most significant predictor of an economically viable battery, and that the shape of the load profile is the most significant predictor of the size of the battery.

  12. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator for a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation is not only computationally intensive but can also be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.

  13. Computing physical properties with quantum Monte Carlo methods with statistical fluctuations independent of system size.

    PubMed

    Assaraf, Roland

    2014-12-01

    We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.

  14. Normal Approximations to the Distributions of the Wilcoxon Statistics: Accurate to What "N"? Graphical Insights

    ERIC Educational Resources Information Center

    Bellera, Carine A.; Julien, Marilyse; Hanley, James A.

    2010-01-01

    The Wilcoxon statistics are usually taught as nonparametric alternatives for the 1- and 2-sample Student-"t" statistics in situations where the data appear to arise from non-normal distributions, or where sample sizes are so small that we cannot check whether they do. In the past, critical values, based on exact tail areas, were…
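    The question in the title can be explored directly by comparing exact and normal-approximation p-values; the sketch below does this for the rank-sum case via SciPy's mannwhitneyu (assuming SciPy ≥ 1.7 for the method argument), on small simulated samples:

    ```python
    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(0)
    x, y = rng.normal(0.0, 1.0, 8), rng.normal(0.5, 1.0, 9)   # small samples

    p_exact = mannwhitneyu(x, y, method="exact").pvalue        # exact tail areas
    p_normal = mannwhitneyu(x, y, method="asymptotic").pvalue  # normal approximation
    print(f"exact p = {p_exact:.4f}, normal-approximation p = {p_normal:.4f}")
    ```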

  15. "What If" Analyses: Ways to Interpret Statistical Significance Test Results Using EXCEL or "R"

    ERIC Educational Resources Information Center

    Ozturk, Elif

    2012-01-01

    The present paper aims to review two motivations to conduct "what if" analyses using Excel and "R" to understand the statistical significance tests through the sample size context. "What if" analyses can be used to teach students what statistical significance tests really do and in applied research either prospectively to estimate what sample size…
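    A minimal "what if" sketch of the kind described, in Python rather than Excel or R (a substitution for illustration): holding a hypothetical effect size fixed and recomputing the two-sample t-test p-value as the per-group sample size grows shows that any nonzero effect eventually becomes "significant".

    ```python
    import numpy as np
    from scipy import stats

    d = 0.4                                   # hypothetical fixed Cohen's d
    for n in [10, 20, 50, 100, 200]:          # per-group sample size
        t = d * np.sqrt(n / 2)                # two-sample t statistic, equal groups
        p = 2 * stats.t.sf(abs(t), df=2*n - 2)
        print(f"n = {n:3d}  p = {p:.4f}")
    ```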

  16. Illinois Public Library Statistics: A Guide for Librarians and Trustees, 1990-1991.

    ERIC Educational Resources Information Center

    Illinois Univ., Urbana. Graduate School of Library and Information Science.

    The fourth in a series, this publication presents a statistical picture of Illinois Public Libraries during the 1990-1991 fiscal year. Its purpose is to provide librarians and trustees with statistics that can be compared to those of other libraries of similar size and environment to determine whether the library is above or below the average for…

  17. Texture Classification by Texton: Statistical versus Binary

    PubMed Central

    Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane

    2014-01-01

    Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, these methods have two limitations. First, they need a training stage to build a texton library, so recognition accuracy depends heavily on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time-consuming when the library is large and the feature dimension is high. To address these two issues, this paper proposes three binary texton counterpart methods: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features into binary representations directly. Experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons achieve sound results with fast feature extraction, especially when the images are not large and image quality is not poor. PMID:24520346

  18. Post-stratified estimation: with-in strata and total sample size recommendations

    Treesearch

    James A. Westfall; Paul L. Patterson; John W. Coulston

    2011-01-01

    Post-stratification is used to reduce the variance of estimates of the mean. Because the stratification is not fixed in advance, within-strata sample sizes can be quite small. The survey statistics literature provides some guidance on minimum within-strata sample sizes; however, the recommendations and justifications are inconsistent and apply broadly for many...
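    For reference, the post-stratified estimator under discussion is the weighted combination of within-stratum means; a minimal sketch on illustrative data (not from the paper):

    ```python
    import numpy as np

    def post_stratified_mean(y, strata, pop_weights):
        """Post-stratified mean: sum_h W_h * ybar_h, with W_h the known population
        stratum proportions. Tiny within-strata samples make the ybar_h (and the
        variance estimate) unstable -- the issue raised above."""
        return sum(W * y[strata == h].mean() for h, W in pop_weights.items())

    y = np.array([3.0, 5.0, 4.0, 8.0, 9.0])       # illustrative responses
    strata = np.array(["a", "a", "a", "b", "b"])  # post-hoc stratum labels
    print(post_stratified_mean(y, strata, {"a": 0.7, "b": 0.3}))
    ```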

  19. The Future of Small- and Medium-Sized Communities in the Prairie Region.

    ERIC Educational Resources Information Center

    Wellar, Barry S., Ed.

    Four papers are featured. The first is a statistical overview and analysis of past, present and future happenings to small communities in the Region; it focuses on two indicators: (1) population growth or declining community class size and, (2) the changing distribution of commercial outlets by community class size. The other three papers report…

  20. Class Size and Student Evaluations in Sweden

    ERIC Educational Resources Information Center

    Westerlund, Joakim

    2008-01-01

    This paper examines the effect of class size on student evaluations of the quality of an introductory mathematics course at Lund University in Sweden. In contrast to many other studies, we find a large negative, and statistically significant, effect of class size on the quality of the course. This result appears to be quite robust, as almost all…

  1. Consomic mouse strain selection based on effect size measurement, statistical significance testing and integrated behavioral z-scoring: focus on anxiety-related behavior and locomotion.

    PubMed

    Labots, M; Laarakker, M C; Ohl, F; van Lith, H A

    2016-06-29

    Selecting chromosome substitution strains (CSSs, also called consomic strains/lines) used in the search for quantitative trait loci (QTLs) consistently requires the identification of the respective phenotypic trait of interest and is usually based simply on a significant difference between a consomic and host strain. However, statistical significance as represented by P values does not necessarily imply practical importance. We therefore propose a method that pays attention to both the statistical significance and the actual size of the observed effect. The present paper expands on this approach and describes in more detail the use of effect size measures (Cohen's d and partial eta squared, η²p) together with the P value as statistical selection parameters for the chromosomal assignment of QTLs influencing anxiety-related behavior and locomotion in laboratory mice. The effect size measures were based on integrated behavioral z-scoring and were calculated in three experiments: (A) a complete consomic male mouse panel with A/J as the donor strain and C57BL/6J as the host strain. This panel, including host and donor strains, was analyzed in the modified Hole Board (mHB). The consomic line with chromosome 19 from A/J (CSS-19A) was selected since it showed increased anxiety-related behavior, but similar locomotion, compared to its host. (B) Following experiment A, female CSS-19A mice were compared with their C57BL/6J counterparts; however, no significant differences were found and effect sizes were close to zero. (C) A different consomic mouse strain (CSS-19PWD), with chromosome 19 from PWD/PhJ transferred onto the genetic background of C57BL/6J, was compared with its host strain. Here, in contrast with CSS-19A, CSS-19PWD males showed decreased overall anxiety compared to C57BL/6J, but no difference in locomotion. This new method provides an improved way to identify CSSs for QTL analysis of anxiety-related behavior using a combination of statistical significance testing and effect sizes. In addition, an intercross between CSS-19A and CSS-19PWD may be of interest for future studies on the genetic background of anxiety-related behavior.
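    A minimal sketch of the two effect size measures used as selection parameters (standard textbook formulas; the paper's integrated behavioral z-scoring pipeline is not reproduced here):

    ```python
    import numpy as np

    def cohens_d(a, b):
        """Standardized mean difference with a pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled = np.sqrt(((na - 1)*np.var(a, ddof=1) + (nb - 1)*np.var(b, ddof=1))
                         / (na + nb - 2))
        return (np.mean(a) - np.mean(b)) / pooled

    def partial_eta_squared(ss_effect, ss_error):
        """Partial eta squared from ANOVA sums of squares."""
        return ss_effect / (ss_effect + ss_error)
    ```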

  2. Quasi-experimental study designs series-paper 10: synthesizing evidence for effects collected from quasi-experimental studies presents surmountable challenges.

    PubMed

    Becker, Betsy Jane; Aloe, Ariel M; Duvendack, Maren; Stanley, T D; Valentine, Jeffrey C; Fretheim, Atle; Tugwell, Peter

    2017-09-01

    To outline issues of importance to analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in analysis. We drew on studies of statistics, epidemiology, and social-science methodology to outline methods for synthesis of QE studies. The design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes were discussed. Biases, confounding, design complexities, and comparisons across designs offer serious challenges to syntheses of QEs. Key components of meta-analyses of QEs were identified, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls and any potential sources of bias and confounding must be modeled in analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. A statistical meta-regression model that incorporates important features of the QEs under review was presented. Meta-analyses of QEs provide particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. [Laser biostimulation in the healing of crural ulcerations].

    PubMed

    Król, P; Franek, A; Huńka-Zurawińska, W; Bil, J; Swist, D; Polak, A; Bendkowski, W

    2001-11-01

    The objective of this paper was to evaluate the effect of laser biostimulation on the healing of crural ulcerations. Patients with venous crural ulcerations were randomly assigned to three comparative groups: group A (17 patients), group B (15 patients), and group C (17 patients). Patients in all groups were treated pharmacologically and received compression therapy. In group A, the ulcerations were additionally irradiated with a biostimulation laser (810 nm) so that each session delivered an energy dose of 4 J/cm². Patients in group B additionally received a blinded placebo treatment in the form of sham laser therapy. The evaluated outcomes were changes in ulcer size and in the volume of the tissue defect, and the weekly rates of change of both were calculated. After treatment there was a statistically significant decrease in ulcer size in all comparative groups, with no statistically significant differences observed between the groups. A statistically significant decrease in ulcer volume occurred only in groups A and C, again with no statistically significant difference observed between the groups.

  4. Balance exercise for persons with multiple sclerosis using Wii games: a randomised, controlled multi-centre study.

    PubMed

    Nilsagård, Ylva E; Forsberg, Anette S; von Koch, Lena

    2013-02-01

    The use of interactive video games is expanding within rehabilitation. The evidence base is, however, limited. Our aim was to evaluate the effects of a Nintendo Wii Fit® balance exercise programme on balance function and walking ability in people with multiple sclerosis (MS). A multi-centre, randomised, controlled single-blinded trial with random allocation to exercise or no exercise. The exercise group participated in a programme of 12 supervised 30-min sessions of balance exercises using Wii games, twice a week for 6-7 weeks. The primary outcome was the Timed Up and Go test (TUG). In total, 84 participants were enrolled; four were lost to follow-up. After the intervention, there were no statistically significant differences between groups, but effect sizes for the TUG, TUG-cognitive and the Dynamic Gait Index (DGI) were moderate, and small for all other measures. Statistically significant improvements within the exercise group were present for all measures (large to moderate effect sizes) except walking speed and balance confidence. The non-exercise group showed statistically significant improvements for the Four Square Step Test and the DGI. In comparison with no intervention, a programme of supervised balance exercise using Nintendo Wii Fit® did not render statistically significant differences, but presented moderate effect sizes for several measures of balance performance.

  5. What Should Researchers Expect When They Replicate Studies? A Statistical View of Replicability in Psychological Science.

    PubMed

    Patil, Prasad; Peng, Roger D; Leek, Jeffrey T

    2016-07-01

    A recent study of the replicability of key psychological findings is a major contribution toward understanding the human side of the scientific process. Despite the careful and nuanced analysis reported, the simple narrative disseminated by the mass, social, and scientific media was that in only 36% of the studies were the original results replicated. In the current study, however, we showed that 77% of the replication effect sizes reported were within a 95% prediction interval calculated using the original effect size. Our analysis suggests two critical issues in understanding replication of psychological studies. First, researchers' intuitive expectations for what a replication should show do not always match with statistical estimates of replication. Second, when the results of original studies are very imprecise, they create wide prediction intervals-and a broad range of replication effects that are consistent with the original estimates. This may lead to effects that replicate successfully, in that replication results are consistent with statistical expectations, but do not provide much information about the size (or existence) of the true effect. In this light, the results of the Reproducibility Project: Psychology can be viewed as statistically consistent with what one might expect when performing a large-scale replication experiment. © The Author(s) 2016.
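    The prediction-interval check described above can be sketched as follows (the standard formula for the prediction interval of a replication estimate, with hypothetical numbers):

    ```python
    import numpy as np

    def replication_pi(theta_orig, se_orig, se_rep, z=1.96):
        """95% prediction interval for a replication effect, given the original
        estimate and the standard errors of both studies."""
        half = z * np.sqrt(se_orig**2 + se_rep**2)
        return theta_orig - half, theta_orig + half

    lo, hi = replication_pi(theta_orig=0.35, se_orig=0.15, se_rep=0.12)
    theta_rep = 0.10                 # hypothetical replication estimate
    print(lo <= theta_rep <= hi)     # True: statistically consistent with original
    ```

    Note how an imprecise original (large se_orig) widens the interval, so "successful replication" in this sense says little about the size of the true effect.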

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics (which we discussed in [1]) where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
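    The map-reduce pattern described can be sketched with per-processor partial tables merged by addition; the message size grows with the table size, which is the scaling issue the paper analyzes. A minimal single-process illustration (not the parallel implementation):

    ```python
    from collections import Counter
    from functools import reduce

    def map_contingency(chunk):
        """Build a partial contingency table (category-pair counts) for one chunk."""
        return Counter(chunk)

    def reduce_contingency(tables):
        """Merge partial tables by summing counts; the merged message grows with
        the number of distinct category pairs, not with the raw data size."""
        return reduce(lambda a, b: a + b, tables, Counter())

    chunks = [[("x0", "y1"), ("x1", "y0")], [("x0", "y1"), ("x0", "y0")]]
    table = reduce_contingency(map(map_contingency, chunks))
    total = sum(table.values())
    joint_prob = {k: v / total for k, v in table.items()}   # one derived statistic
    print(joint_prob)
    ```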

  7. Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens

    NASA Astrophysics Data System (ADS)

    Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl

    2016-01-01

    As samples of ever decreasing size are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ Am², the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
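    A toy Monte Carlo illustrating why angular scatter shrinks with grain number (roughly as 1/√N); this is only a caricature, not the paper's thermodynamic model, and the per-grain field-alignment bias is an invented parameter:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def angular_scatter(n_grains, bias=0.3, n_trials=500):
        """Mean angular deviation of the net moment of n grains whose unit
        moments are weakly biased toward +z (the ambient field direction)."""
        errs = []
        for _ in range(n_trials):
            v = rng.normal(size=(n_grains, 3))
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            v[:, 2] += bias                          # field-induced alignment bias
            net = v.sum(axis=0)
            errs.append(np.degrees(np.arccos(net[2] / np.linalg.norm(net))))
        return np.mean(errs)

    for n in [10, 100, 1000, 10000]:
        print(n, round(angular_scatter(n), 2))       # shrinks roughly like 1/sqrt(n)
    ```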

  8. Use of disposable graduated biopsy forceps improves accuracy of polyp size measurements during endoscopy.

    PubMed

    Jin, Hei-Ying; Leng, Qiang

    2015-01-14

    To determine the accuracy of endoscopic polyp size measurements using disposable graduated biopsy forceps (DGBF). The wire of the disposable graduated biopsy forceps carried gradations accurate to 1 mm. When a polyp was noted, endoscopists first estimated its width visually; then the graduated biopsy forceps was inserted and the largest diameter of the polyp was measured. After excision, during surgery or endoscopy, the polyp was measured using a vernier caliper. One hundred and thirty-three colorectal polyps from 119 patients were studied. The mean diameter, by post-polypectomy measurement, was 0.92 ± 0.69 cm; 83 polyps were < 1 cm, 36 were between 1 and 2 cm, and 14 were > 2 cm. The mean diameter by visual estimation was 1.15 ± 0.88 cm; compared to the actual size measured with the vernier caliper, the difference was statistically significant. The mean diameter measured using the DGBF was 0.93 ± 0.68 cm; compared to the actual size, this difference was not statistically significant. The ratio between the mean size by visual estimation and the actual size differed significantly from the ratio between the mean size measured using the DGBF and the actual size (1.26 ± 0.30 vs 1.02 ± 0.11). The accuracy of polyp size estimation was low by visual assessment, but improved when the DGBF was used.

  9. Adaptive evolution toward larger size in mammals

    PubMed Central

    Baker, Joanna; Meade, Andrew; Pagel, Mark; Venditti, Chris

    2015-01-01

    The notion that large body size confers some intrinsic advantage to biological species has been debated for centuries. Using a phylogenetic statistical approach that allows the rate of body size evolution to vary across a phylogeny, we find a long-term directional bias toward increasing size in the mammals. This pattern holds separately in 10 of 11 orders for which sufficient data are available and arises from a tendency for accelerated rates of evolution to produce increases, but not decreases, in size. On a branch-by-branch basis, increases in body size have been more than twice as likely as decreases, yielding what amounts to millions and millions of years of rapid and repeated increases in size away from the small ancestral mammal. These results are the first evidence, to our knowledge, from extant species that are compatible with Cope’s rule: the pattern of body size increase through time observed in the mammalian fossil record. We show that this pattern is unlikely to be explained by several nonadaptive mechanisms for increasing size and most likely represents repeated responses to new selective circumstances. By demonstrating that it is possible to uncover ancient evolutionary trends from a combination of a phylogeny and appropriate statistical models, we illustrate how data from extant species can complement paleontological accounts of evolutionary history, opening up new avenues of investigation for both. PMID:25848031

  10. Methodological quality of behavioural weight loss studies: a systematic review

    PubMed Central

    Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.

    2018-01-01

    Summary This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January, 2009 and December, 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature of behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Identified for improvement include utilization of more rigorous statistical approaches to loss to follow up and better fidelity reporting. PMID:27071775

  11. “Magnitude-based Inference”: A Statistical Review

    PubMed Central

    Welsh, Alan H.; Knight, Emma J.

    2015-01-01

    ABSTRACT Purpose We consider “magnitude-based inference” and its interpretation by examining in detail its use in the problem of comparing two means. Methods We extract from the spreadsheets, which are provided to users of the analysis (http://www.sportsci.org/), a precise description of how “magnitude-based inference” is implemented. We compare the implemented version of the method with general descriptions of it and interpret the method in familiar statistical terms. Results and Conclusions We show that “magnitude-based inference” is not a progressive improvement on modern statistics. The additional probabilities introduced are not directly related to the confidence interval but, rather, are interpretable either as P values for two different nonstandard tests (for different null hypotheses) or as approximate Bayesian calculations, which also lead to a type of test. We also discuss sample size calculations associated with “magnitude-based inference” and show that the substantial reduction in sample sizes claimed for the method (30% of the sample size obtained from standard frequentist calculations) is not justifiable so the sample size calculations should not be used. Rather than using “magnitude-based inference,” a better solution is to be realistic about the limitations of the data and use either confidence intervals or a fully Bayesian analysis. PMID:25051387

  12. Variations in Eyeball Diameters of the Healthy Adults

    PubMed Central

    Bekerman, Inessa; Gottlieb, Paul

    2014-01-01

    The purpose of the current research was to reevaluate the normative data on eyeball diameters. Methods. In a prospective cohort study, the CT data of 250 consecutive adults with healthy eyes were collected and analyzed, and the sagittal, transverse, and axial diameters of both eyeballs were measured. The data obtained from the left eye and from the right eye were compared. The correlation analysis was performed with the following variables: orbit size, gender, age, and ethnic background. Results. We did not find statistically significant differences correlated with the gender of the patients or their age. The right eyeball was slightly smaller than the left one, but this difference was statistically insignificant (P = 0.17). We did not find statistically significant differences in eyeball size among the ethnicities we dealt with. A strong correlation was found between the transverse diameter and the width of the orbit (r = 0.88). Conclusion. The size of a human adult eye is approximately 24.2 mm (transverse) × 23.7 mm (sagittal) × 22.0–24.8 mm (axial), with no significant difference between sexes and age groups. In the transverse diameter, the eyeball size may vary from 21 mm to 27 mm. These data might be useful in ophthalmological, oculoplastic, and neurological practice. PMID:25431659

  13. Variations in eyeball diameters of the healthy adults.

    PubMed

    Bekerman, Inessa; Gottlieb, Paul; Vaiman, Michael

    2014-01-01

    The purpose of the current research was to reevaluate the normative data on eyeball diameters. Methods. In a prospective cohort study, the CT data of 250 consecutive adults with healthy eyes were collected and analyzed, and the sagittal, transverse, and axial diameters of both eyeballs were measured. The data obtained from the left eye and from the right eye were compared. The correlation analysis was performed with the following variables: orbit size, gender, age, and ethnic background. Results. We did not find statistically significant differences correlated with the gender of the patients or their age. The right eyeball was slightly smaller than the left one, but this difference was statistically insignificant (P = 0.17). We did not find statistically significant differences in eyeball size among the ethnicities we dealt with. A strong correlation was found between the transverse diameter and the width of the orbit (r = 0.88). Conclusion. The size of a human adult eye is approximately 24.2 mm (transverse) × 23.7 mm (sagittal) × 22.0-24.8 mm (axial), with no significant difference between sexes and age groups. In the transverse diameter, the eyeball size may vary from 21 mm to 27 mm. These data might be useful in ophthalmological, oculoplastic, and neurological practice.

  14. Fitting statistical distributions to sea duck count data: implications for survey design and abundance estimation

    USGS Publications Warehouse

    Zipkin, Elise F.; Leirness, Jeffery B.; Kinlan, Brian P.; O'Connell, Allan F.; Silverman, Emily D.

    2014-01-01

    Determining appropriate statistical distributions for modeling animal count data is important for accurate estimation of abundance, distribution, and trends. In the case of sea ducks along the U.S. Atlantic coast, managers want to estimate local and regional abundance to detect and track population declines, to define areas of high and low use, and to predict the impact of future habitat change on populations. In this paper, we used a modified marked point process to model survey data that recorded flock sizes of Common eiders, Long-tailed ducks, and Black, Surf, and White-winged scoters. The data come from an experimental aerial survey, conducted by the United States Fish & Wildlife Service (USFWS) Division of Migratory Bird Management, during which east-west transects were flown along the Atlantic Coast from Maine to Florida during the winters of 2009–2011. To model the number of flocks per transect (the points), we compared the fit of four statistical distributions (zero-inflated Poisson, zero-inflated geometric, zero-inflated negative binomial and negative binomial) to data on the number of species-specific sea duck flocks that were recorded for each transect flown. To model the flock sizes (the marks), we compared the fit of flock size data for each species to seven statistical distributions: positive Poisson, positive negative binomial, positive geometric, logarithmic, discretized lognormal, zeta and Yule–Simon. Akaike’s Information Criterion and Vuong’s closeness tests indicated that the negative binomial and discretized lognormal were the best distributions for all species for the points and marks, respectively. These findings have important implications for estimating sea duck abundances as the discretized lognormal is a more skewed distribution than the Poisson and negative binomial, which are frequently used to model avian counts; the lognormal is also less heavy-tailed than the power law distributions (e.g., zeta and Yule–Simon), which are becoming increasingly popular for group size modeling. Choosing appropriate statistical distributions for modeling flock size data is fundamental to accurately estimating population summaries, determining required survey effort, and assessing and propagating uncertainty through decision-making processes.
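    The model-comparison step can be sketched for two candidate distributions for the flocks-per-transect counts; the example below fits Poisson and negative binomial to toy data and compares AICs (illustrative only; the paper also fits zero-inflated and heavy-tailed alternatives, which are omitted here):

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)
    counts = rng.negative_binomial(2, 0.3, size=200)   # toy flocks-per-transect data

    # Poisson: the MLE of the rate is the sample mean
    ll_pois = stats.poisson.logpmf(counts, counts.mean()).sum()
    aic_pois = 2*1 - 2*ll_pois

    # Negative binomial: fit (n, p) by maximum likelihood
    nll = lambda th: -stats.nbinom.logpmf(counts, th[0], th[1]).sum()
    res = optimize.minimize(nll, x0=[1.0, 0.5],
                            bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
    aic_nb = 2*2 + 2*res.fun
    print(f"AIC Poisson {aic_pois:.1f} vs negative binomial {aic_nb:.1f}")
    ```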

  15. Probabilistic Mesomechanical Fatigue Model

    NASA Technical Reports Server (NTRS)

    Tryon, Robert G.

    1997-01-01

    A probabilistic mesomechanical fatigue life model is proposed to link the microstructural material heterogeneities to the statistical scatter in the macrostructural response. The macrostructure is modeled as an ensemble of microelements. Cracks nucleate within the microelements and grow from the microelements to final fracture. Variations of the microelement properties are defined using statistical parameters. A micromechanical slip band decohesion model is used to determine the crack nucleation life and size. A crack tip opening displacement model is used to determine the small crack growth life and size. The Paris law is used to determine the long crack growth life. The models are combined in a Monte Carlo simulation to determine the statistical distribution of total fatigue life for the macrostructure. The modeled response is compared to trends in experimental observations from the literature.

  16. The Two-Dimensional Gabor Function Adapted to Natural Image Statistics: A Model of Simple-Cell Receptive Fields and Sparse Structure in Images.

    PubMed

    Loxley, P N

    2017-10-01

    The two-dimensional Gabor function is adapted to natural image statistics, leading to a tractable probabilistic generative model that can be used to model simple cell receptive field profiles, or generate basis functions for sparse coding applications. Learning is found to be most pronounced in three Gabor function parameters representing the size and spatial frequency of the two-dimensional Gabor function and characterized by a nonuniform probability distribution with heavy tails. All three parameters are found to be strongly correlated, resulting in a basis of multiscale Gabor functions with similar aspect ratios and size-dependent spatial frequencies. A key finding is that the distribution of receptive-field sizes is scale invariant over a wide range of values, so there is no characteristic receptive field size selected by natural image statistics. The Gabor function aspect ratio is found to be approximately conserved by the learning rules and is therefore not well determined by natural image statistics. This allows for three distinct solutions: a basis of Gabor functions with sharp orientation resolution at the expense of spatial-frequency resolution, a basis of Gabor functions with sharp spatial-frequency resolution at the expense of orientation resolution, or a basis with unit aspect ratio. Arbitrary mixtures of all three cases are also possible. Two parameters controlling the shape of the marginal distributions in a probabilistic generative model fully account for all three solutions. The best-performing probabilistic generative model for sparse coding applications is found to be a gaussian copula with Pareto marginal probability density functions.
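    For reference, the two-dimensional Gabor function in question is a Gaussian envelope multiplied by a sinusoidal carrier; a minimal sketch of the standard parameterization (not the paper's learned model):

    ```python
    import numpy as np

    def gabor_2d(size, wavelength, theta, sigma_x, sigma_y, phase=0.0):
        """Gaussian envelope times a cosine carrier; aspect ratio = sigma_y/sigma_x."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x*np.cos(theta) + y*np.sin(theta)      # rotate into the carrier's frame
        yr = -x*np.sin(theta) + y*np.cos(theta)
        envelope = np.exp(-0.5 * ((xr/sigma_x)**2 + (yr/sigma_y)**2))
        return envelope * np.cos(2*np.pi*xr/wavelength + phase)

    patch = gabor_2d(size=33, wavelength=8.0, theta=np.pi/4, sigma_x=4.0, sigma_y=6.0)
    ```

    The three strongly correlated parameters identified in the abstract correspond here to sigma_x, sigma_y, and wavelength.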

  17. Got power? A systematic review of sample size adequacy in health professions education research.

    PubMed

    Cook, David A; Hatala, Rose

    2015-03-01

    Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.

  18. Using the bootstrap to establish statistical significance for relative validity comparisons among patient-reported outcome measures

    PubMed Central

    2013-01-01

    Background Relative validity (RV), a ratio of ANOVA F-statistics, is often used to compare the validity of patient-reported outcome (PRO) measures. We used the bootstrap to establish the statistical significance of the RV and to identify key factors affecting its significance. Methods Based on responses from 453 chronic kidney disease (CKD) patients to 16 CKD-specific and generic PRO measures, RVs were computed to determine how well each measure discriminated across clinically-defined groups of patients compared to the most discriminating (reference) measure. Statistical significance of RV was quantified by the 95% bootstrap confidence interval. Simulations examined the effects of sample size, denominator F-statistic, correlation between comparator and reference measures, and number of bootstrap replicates. Results The statistical significance of the RV increased as the magnitude of denominator F-statistic increased or as the correlation between comparator and reference measures increased. A denominator F-statistic of 57 conveyed sufficient power (80%) to detect an RV of 0.6 for two measures correlated at r = 0.7. Larger denominator F-statistics or higher correlations provided greater power. Larger sample size with a fixed denominator F-statistic or more bootstrap replicates (beyond 500) had minimal impact. Conclusions The bootstrap is valuable for establishing the statistical significance of RV estimates. A reasonably large denominator F-statistic (F > 57) is required for adequate power when using the RV to compare the validity of measures with small or moderate correlations (r < 0.7). Substantially greater power can be achieved when comparing measures of a very high correlation (r > 0.9). PMID:23721463
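    A condensed sketch of the bootstrap procedure (patient-level resampling of an RV, the ratio of ANOVA F-statistics; illustrative, not the authors' code):

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    def f_statistic(scores, groups, labels):
        return f_oneway(*[scores[groups == l] for l in labels]).statistic

    def bootstrap_rv_ci(comp, ref, groups, n_boot=500, seed=0):
        """95% bootstrap CI for RV = F_comparator / F_reference."""
        rng = np.random.default_rng(seed)
        labels = np.unique(groups)
        rvs = []
        for _ in range(n_boot):
            idx = rng.integers(0, len(groups), len(groups))  # resample patients
            g = groups[idx]
            if any((g == l).sum() < 2 for l in labels):      # keep groups populated
                continue
            rvs.append(f_statistic(comp[idx], g, labels) /
                       f_statistic(ref[idx], g, labels))
        return np.percentile(rvs, [2.5, 97.5])
    ```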

  19. [Study of the reliability in one dimensional size measurement with digital slit lamp microscope].

    PubMed

    Wang, Tao; Qi, Chaoxiu; Li, Qigen; Dong, Lijie; Yang, Jiezheng

    2010-11-01

    To study the reliability of the digital slit lamp microscope as a tool for quantitative analysis in one-dimensional size measurement. Three single-blinded observers used a China-made digital slit lamp microscope to acquire and repeatedly measure images of 4.00 mm and 10.00 mm targets on a vernier caliper, simulating the human pupil and corneal diameter, at objective magnifications of 4×, 10×, 16×, 25×, and 40× for the 4.00 mm target and 4×, 10×, and 16× for the 10.00 mm target. The accuracy and precision of the measurements were compared. For the 4.00 mm images, the average values measured by the three investigators ranged from 3.98 to 4.06 mm; for the 10.00 mm images, the average values fell within 10.00 to 10.04 mm. For the 4.00 mm images, the measured values differed significantly from the true value except for A4, B25, C16 and C25; for the 10.00 mm images, they differed significantly except for A10. When the same investigator measured the same size at different magnifications, the results differed significantly across magnifications, except for investigator A's measurements of the 10.00 mm target. Across investigators, measurements of the 4.00 mm target at 4× magnification showed no significant difference, while all remaining comparisons were statistically significant. The coefficient of variation of all measurement results was less than 5%, and it decreased as magnification increased. One-dimensional size measurement with the digital slit lamp microscope is reliable, but a reliability analysis should be performed before it is used for quantitative analysis, to reduce systematic errors.

  20. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
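    The lognormal fitting step can be sketched with SciPy on toy diameters (the RM8012 nominal value is used here only to parameterize the simulated data):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    diam = rng.lognormal(mean=np.log(27.6), sigma=0.1, size=500)   # toy diameters, nm

    shape, loc, scale = stats.lognorm.fit(diam, floc=0)   # loc fixed at 0: pure lognormal
    mu, sigma = np.log(scale), shape
    print(f"geometric mean = {np.exp(mu):.1f} nm, lognormal sigma = {sigma:.3f}")
    ```

    The relative standard errors reported in the study would come from the uncertainty of the fitted parameters (e.g., via nonlinear regression output or a bootstrap), which this sketch omits.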

  1. Clusters in irregular areas and lattices.

    PubMed

    Wieczorek, William F; Delmerico, Alan M; Rogerson, Peter A; Wong, David W S

    2012-01-01

    Geographic areas of different sizes and shapes of polygons that represent counts or rate data are often encountered in social, economic, health, and other information. Often political or census boundaries are used to define these areas because the information is available only for those geographies. Therefore, these types of boundaries are frequently used to define neighborhoods in spatial analyses using geographic information systems and related approaches such as multilevel models. When point data can be geocoded, it is possible to examine the impact of polygon shape on spatial statistical properties, such as clustering. We utilized point data (alcohol outlets) to examine the issue of polygon shape and size on visualization and statistical properties. The point data were allocated to regular lattices (hexagons and squares) and census areas for zip-code tabulation areas and tracts. The number of units in the lattices was set to be similar to the number of tract and zip-code areas. A spatial clustering statistic and visualization were used to assess the impact of polygon shape for zip- and tract-sized units. Results showed substantial similarities and notable differences across shape and size. The specific circumstances of a spatial analysis that aggregates points to polygons will determine the size and shape of the areal units to be used. The irregular polygons of census units may reflect underlying characteristics that could be missed by large regular lattices. Future research to examine the potential for using a combination of irregular polygons and regular lattices would be useful.

  2. The impact of particle size and initial solid loading on thermochemical pretreatment of wheat straw for improving sugar recovery.

    PubMed

    Rojas-Rejón, Oscar A; Sánchez, Arturo

    2014-07-01

    This work studies the effect of initial solid loading (4-32%; w/v, DS) and particle size (0.41-50 mm) on the monosaccharide yield of wheat straw subjected to dilute H₂SO₄ (0.75%, v/v) pretreatment and enzymatic saccharification. Response surface methodology (RSM) based on a full factorial design (FFD) was used for the statistical analysis of pretreatment and enzymatic hydrolysis. The highest xylose yield obtained during pretreatment (ca. 86% of theoretical) was achieved at 4% (w/v, DS) and 25 mm. The solid fraction obtained from the first set of experiments was subjected to enzymatic hydrolysis at a constant enzyme dosage (17 FPU/g); statistical analysis revealed that glucose yield was favored by solids pretreated at low initial solid loadings and small particle sizes. Dynamic experiments showed that glucose yield did not increase after 48 h of enzymatic hydrolysis. Once the pretreatment conditions were established, experiments were carried out with several initial solid loadings (4-24%; w/v, DS) and enzyme dosages (5-50 FPU/g). Two straw sizes (0.41 and 50 mm) were used for verification purposes. The highest glucose yield (ca. 55% of theoretical) was achieved at 4% (w/v, DS), 0.41 mm and 50 FPU/g. Statistical analysis of the experiments showed that at low enzyme dosages, particle size had a remarkable effect on glucose yield, and initial solid loading was the main factor for glucose yield.

  3. Sample size and power considerations in network meta-analysis

    PubMed Central

    2012-01-01

    Background Network meta-analysis is becoming increasingly popular for establishing comparative effectiveness among multiple interventions for the same disease. Network meta-analysis inherits all methodological challenges of standard pairwise meta-analysis, but with increased complexity due to the multitude of intervention comparisons. One issue that is now widely recognized in pairwise meta-analysis is the issue of sample size and statistical power. This issue, however, has so far only received little attention in network meta-analysis. To date, no approaches have been proposed for evaluating the adequacy of the sample size, and thus power, in a treatment network. Findings In this article, we develop easy-to-use flexible methods for estimating the ‘effective sample size’ in indirect comparison meta-analysis and network meta-analysis. The effective sample size for a particular treatment comparison can be interpreted as the number of patients in a pairwise meta-analysis that would provide the same degree and strength of evidence as that which is provided in the indirect comparison or network meta-analysis. We further develop methods for retrospectively estimating the statistical power for each comparison in a network meta-analysis. We illustrate the performance of the proposed methods for estimating effective sample size and statistical power using data from a network meta-analysis on interventions for smoking cessation including over 100 trials. Conclusion The proposed methods are easy to use and will be of high value to regulatory agencies and decision makers who must assess the strength of the evidence supporting comparative effectiveness estimates. PMID:22992327

  4. Metro U.S.A. Data Sheet: Population Estimates and Selected Demographic Indicators for the Metropolitan Areas of the United States. Special edition of the United States Population Data Sheet.

    ERIC Educational Resources Information Center

    Population Reference Bureau, Inc., Washington, DC.

    This poster-size data sheet presents population estimates and selected demographic indicators for the nation's 281 metropolitan areas. These areas are divided into 261 Metropolitan Statistical Areas (MSAs) and 20 Consolidated Metropolitan Statistical Areas (CMSAs), reporting units which replace the Standard Metropolitan Statistical Areas (SMSAs)…

  5. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment

    PubMed Central

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P.; Patterson, Nick; Price, Alkes L.

    2014-01-01

    Motivation: Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. Results: In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1–5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case–control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ² association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Availability and implementation: Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:24990607
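    The core of Gaussian imputation from summary statistics is a conditional-expectation formula, z_untyped = Σ_ut Σ_tt⁻¹ z_typed, with the LD matrices Σ taken from a reference panel. A minimal sketch (the ridge term lam is an assumed stand-in for the paper's finite-panel adjustment):

    ```python
    import numpy as np

    def impute_z(z_typed, ld_tt, ld_ut, lam=0.1):
        """Impute z-scores at untyped SNPs from typed ones.
        z_typed: typed z-scores (t,); ld_tt: LD among typed SNPs (t x t);
        ld_ut: LD between untyped and typed SNPs (u x t);
        lam: assumed ridge regularizer for a finite reference panel."""
        ld_reg = ld_tt + lam * np.eye(len(z_typed))
        return ld_ut @ np.linalg.solve(ld_reg, z_typed)
    ```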

  6. Estimating size and scope economies in the Portuguese water sector using the Bayesian stochastic frontier analysis.

    PubMed

    Carvalho, Pedro; Marques, Rui Cunha

    2016-02-15

    This study aims to search for economies of size and scope in the Portuguese water sector applying Bayesian and classical statistics to make inference in stochastic frontier analysis (SFA). This study proves the usefulness and advantages of the application of Bayesian statistics for making inference in SFA over traditional SFA, which uses only classical statistics. The Bayesian methods allow one to overcome some problems that arise in the application of the traditional SFA, such as the bias in small samples and skewness of residuals. In the present case study of the water sector in Portugal, these Bayesian methods provide more plausible and acceptable results. Based on the results obtained, we found that there are important economies of output density, economies of size, economies of vertical integration and economies of scope in the Portuguese water sector, pointing to the considerable advantages of undertaking mergers by joining the retail and wholesale components and by joining the drinking water and wastewater services. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Predicting Slag Generation in Sub-Scale Test Motors Using a Neural Network

    NASA Technical Reports Server (NTRS)

    Wiesenberg, Brent

    1999-01-01

    Generation of slag (aluminum oxide) is an important issue for the Reusable Solid Rocket Motor (RSRM). Thiokol performed testing to quantify the relationship between raw material variations and slag generation in solid propellants by testing sub-scale motors cast with propellant containing various combinations of aluminum fuel and ammonium perchlorate (AP) oxidizer particle sizes. The test data were analyzed using statistical methods and an artificial neural network. This paper primarily addresses the neural network results with some comparisons to the statistical results. The neural network showed that the particle sizes of both the aluminum and unground AP have a measurable effect on slag generation. The neural network analysis showed that aluminum particle size is the dominant driver in slag generation, about 40% more influential than AP. The network predictions of the amount of slag produced during firing of sub-scale motors were 16% better than the predictions of a statistically derived empirical equation. Another neural network successfully characterized the slag generated during full-scale motor tests. The success is attributable to the ability of neural networks to characterize multiple complex factors including interactions that affect slag generation.

  8. Tests of Independence in Contingency Tables with Small Samples: A Comparison of Statistical Power.

    ERIC Educational Resources Information Center

    Parshall, Cynthia G.; Kromrey, Jeffrey D.

    1996-01-01

    Power and Type I error rates were estimated for contingency tables with small sample sizes for the following four types of tests: (1) Pearson's chi-square; (2) chi-square with Yates's continuity correction; (3) the likelihood ratio test; and (4) Fisher's Exact Test. Various marginal distributions, sample sizes, and effect sizes were examined. (SLD)
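
    For readers who want to reproduce this kind of comparison, all four tests are available in SciPy; the 2x2 counts below are made up.

        import numpy as np
        from scipy import stats

        table = np.array([[3, 7],
                          [9, 5]])  # small hypothetical 2x2 table

        chi2, p_pearson, _, _ = stats.chi2_contingency(table, correction=False)
        chi2_y, p_yates, _, _ = stats.chi2_contingency(table, correction=True)
        g, p_lr, _, _ = stats.chi2_contingency(table, correction=False,
                                                lambda_="log-likelihood")  # likelihood ratio (G) test
        _, p_fisher = stats.fisher_exact(table)

        print(f"Pearson p={p_pearson:.3f}  Yates p={p_yates:.3f}  "
              f"LR p={p_lr:.3f}  Fisher p={p_fisher:.3f}")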

  9. Correct Effect Size Estimates for Strength of Association Statistics: Comment on Odgaard and Fowler (2010)

    ERIC Educational Resources Information Center

    Lerner, Matthew D.; Mikami, Amori Yee

    2013-01-01

    Odgaard and Fowler (2010) articulated the importance of reporting confidence intervals (CIs) on effect size estimates, and they provided useful formulas for doing so. However, one of their reported formulas, pertaining to the calculation of CIs on strength of association effect sizes (e.g., R² or η²), is erroneous. This comment…
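
    One standard, easily automated way to obtain such CIs is to invert the noncentral-F distribution; the sketch below assumes a one-way, fixed-effects ANOVA and the usual conversion eta² = λ/(λ + N), and is not the corrected formula from the comment itself.

        from scipy.stats import ncf, f as f_dist
        from scipy.optimize import brentq

        def eta_sq_ci(f_obs, df1, df2, conf=0.95):
            alpha = 1.0 - conf
            N = df1 + df2 + 1                     # total n in a one-way ANOVA
            def cdf(nc):                          # P(F <= f_obs | noncentrality nc)
                return ncf.cdf(f_obs, df1, df2, nc) if nc > 0 else f_dist.cdf(f_obs, df1, df2)
            def limit(target):                    # largest nc with cdf(nc) >= target
                if cdf(0.0) < target:
                    return 0.0
                return brentq(lambda nc: cdf(nc) - target, 0.0, 1e6)
            nc_lo, nc_hi = limit(1 - alpha / 2), limit(alpha / 2)
            return nc_lo / (nc_lo + N), nc_hi / (nc_hi + N)

        print(eta_sq_ci(f_obs=4.5, df1=2, df2=27))   # hypothetical F(2, 27) = 4.5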

  10. Effects of epidemic threshold definition on disease spread statistics

    NASA Astrophysics Data System (ADS)

    Lagorio, C.; Migueles, M. V.; Braunstein, L. A.; López, E.; Macri, P. A.

    2009-03-01

    We study the statistical properties of SIR epidemics in random networks, when an epidemic is defined as only those SIR propagations that reach or exceed a minimum size sc. Using percolation theory to calculate the average fractional size of an epidemic, we find that the strength of the spanning link percolation cluster P∞ is an upper bound to the average fractional epidemic size. For small values of sc, P∞ is no longer a good approximation, and the average fractional size has to be computed directly. We find that the choice of sc is generally (but not always) guided by the network structure and the value of the transmissibility T of the disease in question. If the goal is to always obtain P∞ as the average epidemic size, one should choose sc to be the typical size of the largest percolation cluster at the critical percolation threshold for the transmissibility. We also study Q, the probability that an SIR propagation reaches the epidemic mass sc, and find that it is well characterized by percolation theory. We apply our results to real networks (DIMES and Tracerouter) to measure the consequences of the choice of sc on predictions of average outcome sizes of computer failure epidemics.
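
    The mapping of SIR outbreaks onto bond percolation makes the role of the cutoff sc easy to explore numerically; the sketch below uses an Erdos-Renyi toy network and made-up parameters, not the DIMES/Tracerouter data.

        import random
        import networkx as nx

        def outbreak_sizes(G, T, n_runs=300):
            # Bond-percolation picture of SIR: each edge transmits with probability T;
            # the outbreak from a random seed is the seed's component in the kept edges.
            nodes, sizes = list(G.nodes()), []
            for _ in range(n_runs):
                H = nx.Graph()
                H.add_nodes_from(G)
                H.add_edges_from(e for e in G.edges() if random.random() < T)
                sizes.append(len(nx.node_connected_component(H, random.choice(nodes))))
            return sizes

        G = nx.erdos_renyi_graph(2000, 3.0 / 2000)        # toy network
        sizes = outbreak_sizes(G, T=0.6)
        for sc in (1, 10, 50):                            # different minimum-size cutoffs
            epi = [s for s in sizes if s >= sc]
            frac = sum(epi) / (len(epi) * len(G)) if epi else 0.0
            print(f"s_c={sc:3d}  epidemics={len(epi):4d}  mean fractional size={frac:.3f}")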

  11. Informal Statistics Help Desk

    NASA Technical Reports Server (NTRS)

    Young, M.; Koslovsky, M.; Schaefer, Caroline M.; Feiveson, A. H.

    2017-01-01

    Back by popular demand, the JSC Biostatistics Laboratory and LSAH statisticians are offering an opportunity to discuss your statistical challenges and needs. Take the opportunity to meet the individuals offering expert statistical support to the JSC community. Join us for an informal conversation about any questions you may have encountered with issues of experimental design, analysis, or data visualization. Get answers to common questions about sample size, repeated measures, statistical assumptions, missing data, multiple testing, time-to-event data, and when to trust the results of your analyses.

  12. ELISPOTs Produced by CD8 and CD4 Cells Follow Log Normal Size Distribution Permitting Objective Counting

    PubMed Central

    Karulin, Alexey Y.; Karacsony, Kinga; Zhang, Wenji; Targoni, Oleg S.; Moldovan, Ioana; Dittrich, Marcus; Sundararaman, Srividya; Lehmann, Paul V.

    2015-01-01

    Each positive well in ELISPOT assays contains spots of variable sizes that can range from tens of micrometers up to a millimeter in diameter. Therefore, when it comes to counting these spots the decision on setting the lower and the upper spot size thresholds to discriminate between non-specific background noise, spots produced by individual T cells, and spots formed by T cell clusters is critical. If the spot sizes follow a known statistical distribution, precise predictions on minimal and maximal spot sizes, belonging to a given T cell population, can be made. We studied the size distributional properties of IFN-γ, IL-2, IL-4, IL-5 and IL-17 spots elicited in ELISPOT assays with PBMC from 172 healthy donors, upon stimulation with 32 individual viral peptides representing defined HLA Class I-restricted epitopes for CD8 cells, and with protein antigens of CMV and EBV activating CD4 cells. A total of 334 CD8 and 80 CD4 positive T cell responses were analyzed. In 99.7% of the test cases, spot size distributions followed Log Normal function. These data formally demonstrate that it is possible to establish objective, statistically validated parameters for counting T cell ELISPOTs. PMID:25612115
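
    If spot sizes are log-normal, objective counting gates follow directly from the fitted distribution; the snippet below fits simulated spot areas (all numbers are invented) and derives lower and upper thresholds from its quantiles.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        areas = rng.lognormal(mean=3.0, sigma=0.6, size=500)     # simulated spot areas (a.u.)

        shape, loc, scale = stats.lognorm.fit(areas, floc=0)     # location fixed at 0
        lo, hi = stats.lognorm.ppf([0.01, 0.99], shape, loc, scale)
        print(f"sigma={shape:.2f}  median={scale:.1f}  counting gates=({lo:.1f}, {hi:.1f})")

        # quick goodness-of-fit check
        print("KS p-value:", stats.kstest(areas, 'lognorm', args=(shape, loc, scale)).pvalue)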

  13. Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature

    PubMed Central

    Szucs, Denes; Ioannidis, John P. A.

    2017-01-01

    We have empirically assessed the distribution of published effect sizes and estimated power by analyzing 26,841 statistical records from 3,801 cognitive neuroscience and psychology papers published recently. The reported median effect size was D = 0.93 (interquartile range: 0.64–1.46) for nominally statistically significant results and D = 0.24 (0.11–0.42) for nonsignificant results. Median power to detect small, medium, and large effects was 0.12, 0.44, and 0.73, reflecting no improvement through the past half-century. This is so because sample sizes have remained small. Assuming similar true effect sizes in both disciplines, power was lower in cognitive neuroscience than in psychology. Journal impact factors negatively correlated with power. Assuming a realistic range of prior probabilities for null hypotheses, false report probability is likely to exceed 50% for the whole literature. In light of our findings, the recently reported low replication success in psychology is realistic, and worse performance may be expected for cognitive neuroscience. PMID:28253258
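
    The kind of power estimates quoted above can be reproduced for any assumed effect size and group size with an off-the-shelf power routine; the group size below is arbitrary.

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for d in (0.2, 0.5, 0.8):        # Cohen's small / medium / large effects
            power = analysis.power(effect_size=d, nobs1=20, alpha=0.05, ratio=1.0)
            print(f"d={d:.1f}, n=20 per group -> power={power:.2f}")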

  14. Two decades of change in transportation reflections from transportation statistics annual reports 1994-2014.

    DOT National Transportation Integrated Search

    2015-01-01

    The Bureau of Transportation Statistics (BTS) provides information to support understanding and decision-making related to the transportation system, including the size and extent of the system, how it is used, how well it works, and its co...

  15. Principles of Statistics: What the Sports Medicine Professional Needs to Know.

    PubMed

    Riemann, Bryan L; Lininger, Monica R

    2018-07-01

    Understanding the results and statistics reported in original research remains a large challenge for many sports medicine practitioners and, in turn, may be one of the biggest barriers to integrating research into sports medicine practice. The purpose of this article is to provide the minimal essentials a sports medicine practitioner needs to know about interpreting statistics and research results to facilitate the incorporation of the latest evidence into practice. Topics covered include the difference between statistical significance and clinical meaningfulness; effect sizes and confidence intervals; reliability statistics, including the minimal detectable difference and minimal important difference; and statistical power. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Angiographic lesion size associated with LOC387715 A69S genotype in subfoveal polypoidal choroidal vasculopathy.

    PubMed

    Sakurada, Yoichi; Kubota, Takeo; Imasawa, Mitsuhiro; Tsumura, Toyoaki; Mabuchi, Fumihiko; Tanabe, Naohiko; Iijima, Hiroyuki

    2009-01-01

    To investigate whether the LOC387715/ARMS2 variants are associated with an angiographic phenotype, including lesion size and composition, in subfoveal polypoidal choroidal vasculopathy. Ninety-two subjects with symptomatic subfoveal polypoidal choroidal vasculopathy, whose visual acuity was from 0.1 to 0.5 on the Landolt chart, were genotyped for the LOC387715 polymorphism (rs10490924) using denaturing high-performance chromatography. The angiographic phenotype, including lesion composition and size, was evaluated by evaluators who were masked for the genotype. Lesion size was assessed by the greatest linear dimension based on fluorescein or indocyanine green angiography. Although there was no statistically significant difference in lesion size on indocyanine green angiography (P = 0.36, Kruskal-Wallis test) and in lesion composition (P = 0.59, chi-square test) among the 3 genotypes, there was a statistically significant difference in lesion size on fluorescein angiography (P = 0.0022, Kruskal-Wallis test). The LOC387715 A69S genotype is not associated with lesion composition or size on indocyanine green angiography but with lesion size on fluorescein angiography in patients with subfoveal polypoidal choroidal vasculopathy. Because fluorescein angiography findings represent secondary exudative changes, including subretinal hemorrhages and retinal pigment epithelial detachment, the results in the present study likely indicate that the T allele at the LOC387715 gene is associated with the exudative activity of polypoidal lesions.

  17. Investigation of pore size and energy distributions by statistical physics formalism applied to agriculture products

    NASA Astrophysics Data System (ADS)

    Aouaini, Fatma; Knani, Salah; Yahia, Manel Ben; Bahloul, Neila; Ben Lamine, Abdelmottaleb; Kechaou, Nabil

    2015-12-01

    In this paper, we present a new investigation that allows one to determine the pore size distribution (PSD) in a porous medium. The PSD is obtained from the desorption isotherms of four varieties of olive leaves, by means of a statistical physics formalism and Kelvin's law. The results are compared with those obtained with scanning electron microscopy. The effect of temperature on the pore distribution function has been studied, and the influence of each parameter on the PSD is interpreted. A similar function, the adsorption energy distribution (AED), is deduced from the PSD.
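
    The Kelvin relation that links a desorption isotherm point to a pore radius can be evaluated directly; the constants below are room-temperature values for water and a zero contact angle is assumed, which may differ from the study's exact treatment.

        import numpy as np

        def kelvin_radius(aw, T=298.15, gamma=0.072, Vm=1.8e-5):
            # r = -2*gamma*Vm / (R*T*ln(p/p0)); gamma in N/m, Vm in m^3/mol
            R = 8.314
            return -2.0 * gamma * Vm / (R * T * np.log(aw))

        for aw in (0.5, 0.8, 0.95):      # relative pressure (water activity)
            print(f"p/p0 = {aw:.2f} -> pore radius ~ {kelvin_radius(aw) * 1e9:.2f} nm")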

  18. Designing image segmentation studies: Statistical power, sample size and reference standard quality.

    PubMed

    Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C

    2017-12-01

    Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
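
    A simplified stand-in for such a calculation (not the paper's derivation, which also models reference-standard quality): treat the per-subject accuracy difference between the two algorithms as a paired outcome and size the study with a paired t-test; delta and sd_diff below are assumed numbers.

        from statsmodels.stats.power import TTestPower

        delta, sd_diff = 0.01, 0.025     # assumed accuracy difference and SD of differences
        n = TTestPower().solve_power(effect_size=delta / sd_diff, alpha=0.05, power=0.8)
        print(f"~{n:.0f} subjects for 80% power at alpha = 0.05")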

  19. A normative inference approach for optimal sample sizes in decisions from experience

    PubMed Central

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  20. Transmission Bottleneck Size Estimation from Pathogen Deep-Sequencing Data, with an Application to Human Influenza A Virus.

    PubMed

    Sobel Leonard, Ashley; Weissman, Daniel B; Greenbaum, Benjamin; Ghedin, Elodie; Koelle, Katia

    2017-07-15

    The bottleneck governing infectious disease transmission describes the size of the pathogen population transferred from the donor to the recipient host. Accurate quantification of the bottleneck size is particularly important for rapidly evolving pathogens such as influenza virus, as narrow bottlenecks reduce the amount of transferred viral genetic diversity and, thus, may decrease the rate of viral adaptation. Previous studies have estimated bottleneck sizes governing viral transmission by using statistical analyses of variants identified in pathogen sequencing data. These analyses, however, did not account for variant calling thresholds and stochastic viral replication dynamics within recipient hosts. Because these factors can skew bottleneck size estimates, we introduce a new method for inferring bottleneck sizes that accounts for these factors. Through the use of a simulated data set, we first show that our method, based on beta-binomial sampling, accurately recovers transmission bottleneck sizes, whereas other methods fail to do so. We then apply our method to a data set of influenza A virus (IAV) infections for which viral deep-sequencing data from transmission pairs are available. We find that the IAV transmission bottleneck size estimates in this study are highly variable across transmission pairs, while the mean bottleneck size of 196 virions is consistent with a previous estimate for this data set. Furthermore, regression analysis shows a positive association between estimated bottleneck size and donor infection severity, as measured by temperature. These results support findings from experimental transmission studies showing that bottleneck sizes across transmission events can be variable and influenced in part by epidemiological factors. IMPORTANCE The transmission bottleneck size describes the size of the pathogen population transferred from the donor to the recipient host and may affect the rate of pathogen adaptation within host populations. Recent advances in sequencing technology have enabled bottleneck size estimation from pathogen genetic data, although there is not yet a consistency in the statistical methods used. Here, we introduce a new approach to infer the bottleneck size that accounts for variant identification protocols and noise during pathogen replication. We show that failing to account for these factors leads to an underestimation of bottleneck sizes. We apply this method to an existing data set of human influenza virus infections, showing that transmission is governed by a loose, but highly variable, transmission bottleneck whose size is positively associated with the severity of infection of the donor. Beyond advancing our understanding of influenza virus transmission, we hope that this work will provide a standardized statistical approach for bottleneck size estimation for viral pathogens. Copyright © 2017 Sobel Leonard et al.
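
    A stripped-down version of the beta-binomial likelihood for a single donor variant is sketched below; it omits the variant-calling thresholds and within-host growth that the paper explicitly models, so it only illustrates the sampling structure.

        import numpy as np
        from scipy.stats import binom, betabinom

        def variant_loglik(k_rec, n_rec, p_donor, Nb):
            # j of the Nb founding virions carry the variant: j ~ Binomial(Nb, p_donor);
            # given 0 < j < Nb, recipient variant reads ~ BetaBinomial(n_rec, j, Nb - j);
            # j = 0 / j = Nb correspond to loss / fixation of the variant.
            lik = 0.0
            for j in range(Nb + 1):
                w = binom.pmf(j, Nb, p_donor)
                if j == 0:
                    lik += w * float(k_rec == 0)
                elif j == Nb:
                    lik += w * float(k_rec == n_rec)
                else:
                    lik += w * betabinom.pmf(k_rec, n_rec, j, Nb - j)
            return np.log(lik)

        # toy example: donor frequency 10%, 40 variant reads out of 1000 in the recipient
        profile = {Nb: variant_loglik(40, 1000, 0.10, Nb) for Nb in range(1, 51)}
        print("max-likelihood bottleneck size:", max(profile, key=profile.get))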

  1. A Binomial Test of Group Differences with Correlated Outcome Measures

    ERIC Educational Resources Information Center

    Onwuegbuzie, Anthony J.; Levin, Joel R.; Ferron, John M.

    2011-01-01

    Building on previous arguments for why educational researchers should not provide effect-size estimates in the face of statistically nonsignificant outcomes (Robinson & Levin, 1997), Onwuegbuzie and Levin (2005) proposed a 3-step statistical approach for assessing group differences when multiple outcome measures are individually analyzed…

  2. Double asymptotics for the chi-square statistic.

    PubMed

    Rempała, Grzegorz A; Wesołowski, Jacek

    2016-12-01

    Consider the distributional limit of the Pearson chi-square statistic when the number of classes m_n increases with the sample size n so that n/m_n → λ. Under mild moment conditions, the limit is Gaussian for λ = ∞, Poisson for finite λ > 0, and degenerate for λ = 0.

  3. A Civilian/Military Trauma Institute: National Trauma Coordinating Center

    DTIC Science & Technology

    2015-12-01

    zip codes was used in “proximity to violence” analysis. Data were analyzed using SPSS (version 20.0, SPSS Inc., Chicago, IL). Multivariable linear... number of adverse events and serious events was not statistically higher in one group, the incidence of deep venous thrombosis (DVT) was statistically... subjects the lack of statistical difference on multivariate analysis may be related to an underpowered sample size. It was recommended that the

  4. Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics

    PubMed Central

    Dowding, Irene; Haufe, Stefan

    2018-01-01

    Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
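
    In its simplest form, the idea amounts to replacing the naive group-level t-test on subject means with an inverse-variance weighted combination of per-subject effect estimates; the sketch below illustrates that weighting with toy numbers and is not the paper's full derivation.

        import numpy as np
        from scipy import stats

        def weighted_group_test(effects, variances):
            effects, variances = np.asarray(effects), np.asarray(variances)
            w = 1.0 / variances                       # inverse-variance weights
            pooled = np.sum(w * effects) / np.sum(w)  # precision-weighted group effect
            se = np.sqrt(1.0 / np.sum(w))
            z = pooled / se
            return pooled, z, 2 * stats.norm.sf(abs(z))

        # toy per-subject class-mean differences and their estimated variances
        print(weighted_group_test([0.8, 1.1, 0.3, 0.9, 0.6],
                                  [0.10, 0.05, 0.40, 0.08, 0.20]))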

  5. The use of the effect size in JCR Spanish journals of psychology: from theory to fact.

    PubMed

    García García, Juan; Ortega Campos, Elena; De la Fuente Sánchez, Leticia

    2011-11-01

    In 1999, Wilkinson and the Task Force on Statistical Inference published "Statistical Methods in Psychology Journals: Guidelines and Explanations." The authors made several recommendations about how to improve the quality of psychology research papers. One of these was to report some effect-size index in the results of the research. In 2001, the fifth edition of the Publication Manual of the American Psychological Association included this recommendation. In Spain, in 2003, scientific journals like Psicothema or the International Journal of Clinical and Health Psychology (IJCHP) published editorials and papers expressing the need to calculate the effect size in research papers. The aim of this study is to determine whether the papers published from 2003 to 2008 in the four Spanish journals indexed in the Journal Citation Reports have reported some effect-size index for their results. The findings indicate that, in general, compliance with this recommendation has been scant, though the trend over the period analyzed differs across journals.

  6. Use of disposable graduated biopsy forceps improves accuracy of polyp size measurements during endoscopy

    PubMed Central

    Jin, Hei-Ying; Leng, Qiang

    2015-01-01

    AIM: To determine the accuracy of endoscopic polyp size measurements using disposable graduated biopsy forceps (DGBF). METHODS: The wire of the disposable graduated biopsy forceps carried gradations accurate to 1 mm. When a polyp was noted, endoscopists visually estimated the width of the polyp; then, the graduated biopsy forceps was inserted and the largest diameter of the polyp was measured. After excision, during surgery or endoscopy, the polyp was measured using a vernier caliper. RESULTS: One hundred and thirty-three colorectal polyps from 119 patients were studied. The mean diameter, by post-polypectomy measurement, was 0.92 ± 0.69 cm; 83 were < 1 cm, 36 were between 1 and 2 cm, and 14 were > 2 cm. The mean diameter, by visual estimation, was 1.15 ± 0.88 cm; compared to the actual size measured using vernier calipers, the difference was statistically significant. The mean diameter measured using the DGBF was 0.93 ± 0.68 cm; compared to the actual size measured using vernier calipers, this difference was not statistically significant. The ratio between the mean size estimated by visual estimation and the actual size was significantly different from that between the mean size estimated using the DGBF and the actual size (1.26 ± 0.30 vs 1.02 ± 0.11). CONCLUSION: The accuracy of polyp size estimation was low by visual assessment; however, it improved when the DGBF was used. PMID:25605986

  7. Effects of the turnover rate on the size distribution of firms: An application of the kinetic exchange models

    NASA Astrophysics Data System (ADS)

    Chakrabarti, Anindya S.

    2012-12-01

    We address the issue of the distribution of firm size. To this end we propose a model of firms in a closed, conserved economy populated with zero-intelligence agents who continuously move from one firm to another. We then analyze the size distribution and related statistics obtained from the model. There are three well known statistical features obtained from the panel study of the firms i.e., the power law in size (in terms of income and/or employment), the Laplace distribution in the growth rates and the slowly declining standard deviation of the growth rates conditional on the firm size. First, we show that the model generalizes the usual kinetic exchange models with binary interaction to interactions between an arbitrary number of agents. When the number of interacting agents is in the order of the system itself, it is possible to decouple the model. We provide exact results on the distributions which are not known yet for binary interactions. Our model easily reproduces the power law for the size distribution of firms (Zipf’s law). The fluctuations in the growth rate falls with increasing size following a power law (though the exponent does not match with the data). However, the distribution of the difference of the firm size in this model has Laplace distribution whereas the real data suggests that the difference of the log of sizes has the same distribution.

  8. Sampling surface and subsurface particle-size distributions in wadable gravel- and cobble-bed streams for analyses in sediment transport, hydraulics, and streambed monitoring

    Treesearch

    Kristin Bunte; Steven R. Abt

    2001-01-01

    This document provides guidance for sampling surface and subsurface sediment from wadable gravel- and cobble-bed streams. After a short introduction to stream types and classifications in gravel-bed rivers, the document explains the field and laboratory measurement of particle sizes and the statistical analysis of particle-size distributions. Analysis of particle...

  9. Comparison of Double-Freeze versus Modified Triple-Freeze Pulmonary Cryoablation and Hemorrhage Volume Using Different Probe Sizes in an In Vivo Porcine Lung.

    PubMed

    Pan, Patrick J; Bansal, Anshuman K; Genshaft, Scott J; Kim, Grace H; Suh, Robert D; Abtin, Fereidoun

    2018-05-01

    To determine size of ablation zone and pulmonary hemorrhage in double-freeze (DF) vs modified triple-freeze (mTF) cryoablation protocols with different probe sizes in porcine lung. In 10 healthy adult pigs, 20 pulmonary cryoablations were performed using either a 2.4-mm or a 1.7-mm probe. Either conventional DF or mTF protocol was used. Serial noncontrast CT scans were performed during ablations. Ablation iceball and hemorrhage volumes were measured and compared between protocols and probe sizes. With 1.7-mm probe, greater peak iceball volume was observed with DF compared with mTF, although difference was not statistically significant (16.1 mL ± 1.9 vs 8.8 mL ± 3.6, P = .07). With 2.4-mm probe, DF and mTF produced similar peak iceball volumes (14.0 mL ± 2.8 vs 14.6 mL ± 2.7, P = .88). Midcycle hemorrhage was significantly larger with DF with the 1.7-mm probe (94.3 mL ± 22.2 vs 19.6 mL ± 2.1, P = .02) and with both sizes combined (93.2 mL ± 17.5 vs. 50.9 mL ± 12.6, P = .048). Rate of hemorrhage increase was significantly higher in DF (10.4 mL/min vs 5.1 mL/min, P = .003). End-cycle hemorrhage was visibly larger in DF compared with mTF across probe sizes, although differences were not statistically significant (P = .14 for 1.7 mm probe, P = .18 for 2.4 mm probe, and P = .07 for both probes combined). Rate of increase in hemorrhage during the last thaw period was not statistically different between DF and mTF (3.0 mL/min vs 2.8 mL/min, P = .992). mTF reduced rate of midcycle hemorrhage compared with DF. With mTF, midcycle hemorrhage was significantly smaller with 1.7-mm probe; although noticeably smaller with 2.4-mm probe, statistical significance was not achieved. Iceball size was not significantly different across both protocols and probe types. Copyright © 2017 SIR. Published by Elsevier Inc. All rights reserved.

  10. The Ups and Downs of Repeated Cleavage and Internal Fragment Production in Top-Down Proteomics.

    PubMed

    Lyon, Yana A; Riggs, Dylan; Fornelli, Luca; Compton, Philip D; Julian, Ryan R

    2018-01-01

    Analysis of whole proteins by mass spectrometry, or top-down proteomics, has several advantages over methods relying on proteolysis. For example, proteoforms can be unambiguously identified and examined. However, from a gas-phase ion-chemistry perspective, proteins are enormous molecules that present novel challenges relative to peptide analysis. Herein, the statistics of cleaving the peptide backbone multiple times are examined to evaluate the inherent propensity for generating internal versus terminal ions. The raw statistics reveal an inherent bias favoring production of terminal ions, which holds true regardless of protein size. Importantly, even if the full suite of internal ions is generated by statistical dissociation, terminal ions are predicted to account for at least 50% of the total ion current, regardless of protein size, if there are three backbone dissociations or fewer. Top-down analysis should therefore be a viable approach for examining proteins of significant size. Comparison of the purely statistical analysis with actual top-down data derived from ultraviolet photodissociation (UVPD) and higher-energy collisional dissociation (HCD) reveals that terminal ions account for much of the total ion current in both experiments. Terminal ion production is more favored in UVPD relative to HCD, which is likely due to differences in the mechanisms controlling fragmentation. Importantly, internal ions are not found to dominate from either the theoretical or experimental point of view.
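
    A back-of-the-envelope version of the 50% claim, under the simplifying assumption that a precursor cleaved at k backbone positions yields k + 1 equally abundant fragments, of which exactly two retain a terminus:

        for k in range(1, 7):
            terminal, internal = 2, k - 1
            frac = terminal / (terminal + internal)
            print(f"{k} backbone cleavages -> terminal-ion fraction = {frac:.2f}")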

  11. Simulating statistics of lightning-induced and man-made fires

    NASA Astrophysics Data System (ADS)

    Krenn, R.; Hergarten, S.

    2009-04-01

    The frequency-area distributions of forest fires show power-law behavior with scaling exponents α in a quite narrow range, relating wildfire research to the theoretical framework of self-organized criticality. Examples of self-organized critical behavior can be found in computer simulations of simple cellular automata. The established self-organized critical Drossel-Schwabl forest fire model (DS-FFM) is one of the most widespread models in this context. Despite its qualitative agreement with event-size statistics from nature, its applicability is still questioned. Apart from general concerns that the DS-FFM apparently oversimplifies the complex nature of forest dynamics, it significantly overestimates the frequency of large fires. We present a straightforward modification of the model rules that increases the scaling exponent α by approximately 1/3 and brings the simulated event-size statistics close to those observed in nature. In addition, combined simulations of both the original and the modified model predict a dependence of the overall distribution on the ratio of lightning-induced and man-made fires as well as a difference between their respective event-size statistics. The increase of the scaling exponent with decreasing lightning probability as well as the splitting of the partial distributions are confirmed by the analysis of the Canadian Large Fire Database. As a consequence, lightning-induced and man-made forest fires cannot be treated separately in wildfire modeling, hazard assessment and forest management.
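
    A toy implementation of the standard DS-FFM (not the modified rules proposed in this record) shows how fire-size statistics are generated; theta plays the role of the growth-to-lightning ratio p/f and all parameter values are arbitrary.

        import numpy as np
        from scipy import ndimage

        def dsffm_fire_sizes(L=128, theta=200, n_fires=500, seed=0):
            # Between successive lightning strikes, theta trees grow on random empty
            # sites; each strike hits one random site and, if it holds a tree, the
            # whole 4-connected tree cluster burns and its size is recorded.
            rng = np.random.default_rng(seed)
            trees = np.zeros((L, L), dtype=bool)
            sizes = []
            while len(sizes) < n_fires:
                empty = np.flatnonzero(~trees)
                grow = rng.choice(empty, size=min(theta, empty.size), replace=False)
                trees.ravel()[grow] = True
                i, j = rng.integers(L), rng.integers(L)
                if trees[i, j]:
                    labels, _ = ndimage.label(trees)
                    cluster = labels == labels[i, j]
                    sizes.append(int(cluster.sum()))
                    trees[cluster] = False
            return sizes

        sizes = dsffm_fire_sizes()
        print(f"{len(sizes)} fires, median size {np.median(sizes):.0f}, largest {max(sizes)}")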

  12. The Ups and Downs of Repeated Cleavage and Internal Fragment Production in Top-Down Proteomics

    NASA Astrophysics Data System (ADS)

    Lyon, Yana A.; Riggs, Dylan; Fornelli, Luca; Compton, Philip D.; Julian, Ryan R.

    2018-01-01

    Analysis of whole proteins by mass spectrometry, or top-down proteomics, has several advantages over methods relying on proteolysis. For example, proteoforms can be unambiguously identified and examined. However, from a gas-phase ion-chemistry perspective, proteins are enormous molecules that present novel challenges relative to peptide analysis. Herein, the statistics of cleaving the peptide backbone multiple times are examined to evaluate the inherent propensity for generating internal versus terminal ions. The raw statistics reveal an inherent bias favoring production of terminal ions, which holds true regardless of protein size. Importantly, even if the full suite of internal ions is generated by statistical dissociation, terminal ions are predicted to account for at least 50% of the total ion current, regardless of protein size, if there are three backbone dissociations or fewer. Top-down analysis should therefore be a viable approach for examining proteins of significant size. Comparison of the purely statistical analysis with actual top-down data derived from ultraviolet photodissociation (UVPD) and higher-energy collisional dissociation (HCD) reveals that terminal ions account for much of the total ion current in both experiments. Terminal ion production is more favored in UVPD relative to HCD, which is likely due to differences in the mechanisms controlling fragmentation. Importantly, internal ions are not found to dominate from either the theoretical or experimental point of view. [Figure not available: see fulltext.

  13. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  14. Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garten, C.T. Jr.; Wullschleger, S.D.

    Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ~50 mg SOC/cm² or 5 Mg SOC/ha, which is ~10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ~2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
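
    The MDD described above can be approximated with the usual two-sample formula, MDD = (t_alpha/2 + t_beta) * sqrt(2 s² / n); the sketch below uses assumed values for the residual SD and plot numbers, not the study's data.

        import numpy as np
        from scipy import stats

        def mdd_two_sample(sd, n, alpha=0.05, power=0.80):
            df = 2 * (n - 1)
            t_a = stats.t.ppf(1 - alpha / 2, df)
            t_b = stats.t.ppf(power, df)
            return (t_a + t_b) * np.sqrt(2 * sd ** 2 / n)

        for n in (4, 8, 16, 32):                              # plots per treatment
            print(n, round(mdd_two_sample(sd=5.0, n=n), 2))   # sd in Mg SOC/ha (assumed)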

  15. Maxillary sinus augmentation by crestal access: a retrospective study on cavity size and outcome correlation.

    PubMed

    Spinato, Sergio; Bernardello, Fabio; Galindo-Moreno, Pablo; Zaffe, Davide

    2015-12-01

    Cone-beam computed tomography (CBCT) and radiographic outcomes of crestal sinus elevation, performed using mineralized human bone allograft, were analyzed to correlate results with maxillary sinus size. A total of 60 sinus augmentations in 60 patients, with initial bone ≤5 mm, were performed. Digital radiographs were taken at surgical implant placement time up to post-prosthetic loading follow-up (12-72 months), when CBCT evaluation was carried out. Marginal bone loss (MBL) was radiographically analyzed at 6 months and follow-up time post-loading. Sinus size (BPD), implant distance from palatal (PID) and buccal wall (BID), and absence of bone coverage of implant (intra-sinus bone loss--IBL) were evaluated and statistically evaluated by ANOVA and linear regression analyses. MBL increased as a function of time. MBL at final follow-up was statistically associated with MBL at 6 months. A statistically significant correlation of IBL with wall distance and of IBL/mm with time was identified with greater values in wide sinuses (WS ≥ 13.27 mm) than in narrow sinuses (NS < 13.27 mm). This study is the first quantitative and statistically significant confirmation that crestal technique with residual ridge height <5 mm is more appropriate and predictable, in terms of intra-sinus bone coverage, in narrow than in WS. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  16. Effect of various binning methods and ROI sizes on the accuracy of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT

    NASA Astrophysics Data System (ADS)

    Kim, Namkug; Seo, Joon Beom; Sung, Yu Sub; Park, Bum-Woo; Lee, Youngjoo; Park, Seong Hoon; Lee, Young Kyung; Kang, Suk-Ho

    2008-03-01

    To find optimal binning, variable binning size linear binning (LB) and non-linear binning (NLB) methods were tested. In the case of small binning sizes (Q <= 10), NLB shows significantly better accuracy than LB. K-means NLB (Q = 26) is statistically significantly better than every LB. To find the optimal binning method and ROI size for the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of textural analysis at HRCT, six hundred circular regions of interest (ROIs) with 10, 20, and 30 pixel diameters, comprising 100 ROIs for each of six regional disease patterns (normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS), were marked by an experienced radiologist on HRCT images. Histogram (mean) and co-occurrence matrix (mean and SD of angular second moment, contrast, correlation, entropy, and inverse difference momentum) features were employed to test binning and ROI effects. To find optimal binning, variable binning size LB (bin size Q: 4~30, 32, 64, 128, 144, 196, 256, 384) and NLB (Q: 4~30) methods (K-means and Fuzzy C-means clustering) were tested. For automated classification, an SVM classifier was implemented. For cross-validation of the system, a five-fold method was used. Each test was repeatedly performed twenty times. Overall accuracies for every combination of ROI and binning size were statistically compared. In the case of small binning sizes (Q <= 10), NLB shows significantly better accuracy than LB. K-means NLB (Q = 26) is statistically significantly better than every LB. In the case of the 30x30 ROI size and most binning sizes, the K-means method performed better than the other NLB and LB methods. When optimal binning and other parameters were set, overall sensitivity of the classifier was 92.85%. The sensitivity and specificity of the system for each class were as follows: NL, 95%, 97.9%; GGO, 80%, 98.9%; RO 85%, 96.9%; HC, 94.7%, 97%; EMPH, 100%, 100%; and CONS, 100%, 100%, respectively. We determined the optimal binning method and ROI size of the automatic classification system for differentiation between diffuse infiltrative lung diseases on the basis of texture features at HRCT.
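
    The difference between linear and K-means (non-linear) binning is easy to see on raw intensity values; the snippet below quantizes a fake HU patch into Q bins both ways and is only a sketch of the idea, not the study's pipeline.

        import numpy as np
        from sklearn.cluster import KMeans

        def linear_binning(img, Q):
            edges = np.linspace(img.min(), img.max(), Q + 1)    # equal-width bins
            return np.digitize(img, edges[1:-1])

        def kmeans_binning(img, Q, seed=0):
            km = KMeans(n_clusters=Q, n_init=10, random_state=seed)
            labels = km.fit_predict(img.reshape(-1, 1))          # 1-D K-means on intensities
            order = np.argsort(km.cluster_centers_.ravel())      # relabel bins by intensity
            remap = np.empty(Q, dtype=int)
            remap[order] = np.arange(Q)
            return remap[labels].reshape(img.shape)

        roi = np.random.default_rng(0).normal(-700, 150, (30, 30))   # fake HRCT ROI (HU)
        print("linear bin counts: ", np.bincount(linear_binning(roi, 8).ravel(), minlength=8))
        print("k-means bin counts:", np.bincount(kmeans_binning(roi, 8).ravel(), minlength=8))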

  17. Tumor size measured by preoperative ultrasonography and postoperative pathologic examination in papillary thyroid carcinoma: relative differences according to size, calcification and coexisting thyroiditis.

    PubMed

    Yoon, Young Hoon; Kwon, Ki Ryun; Kwak, Seo Young; Ryu, Kyeung A; Choi, Bobae; Kim, Jin-Man; Koo, Bon Seok

    2014-05-01

    Ultrasonography (US) is a useful diagnostic modality for evaluation of the size and features of thyroid nodules. Tumor size is a key indicator of the surgical extent of thyroid cancer. We evaluated the difference in tumor sizes measured by preoperative US and postoperative pathologic examination in papillary thyroid carcinoma (PTC). We reviewed the medical records of 172 consecutive patients who underwent thyroidectomy for PTC treatment. We compared tumor size, as measured by preoperative US, with that in postoperative specimens, and we analyzed a number of factors potentially influencing the size measurement, including cancer size, calcification and coexisting thyroiditis. The mean size of the tumor measured by preoperative US was 11.4 mm, and 10.2 mm by postoperative pathologic examination. The mean percentage difference (US-pathology/US) of tumor sizes measured by preoperative US and postoperative pathologic examination was 9.9 ± 19.3%, which was statistically significant (p < 0.001). When the effect of tumor size (≤10.0 vs. 10.1-20.0 vs. >20.0 mm) and the presence of calcification or coexisting thyroiditis on the tumor size discrepancy between the two measurements was analyzed, the mean percentage differences according to tumor size (9.1% vs. 11.2% vs. 9.8%, p = 0.842), calcification (9.2% vs. 10.2%, p = 0.756) and coexisting thyroiditis (17.6% vs. 9.5%, p = 0.223) did not show statistical significance. Tumor sizes measured in postoperative pathology were ~90% of those measured by preoperative US in PTC; this was not affected by tumor size, the presence of calcification or coexisting thyroiditis. When the surgical extent of PTC treatment according to tumor size measured by US is determined, the relative difference between tumor sizes measured by preoperative US and postoperative pathologic examination should be considered.

  18. Standard and reduced radiation dose liver CT images: adaptive statistical iterative reconstruction versus model-based iterative reconstruction-comparison of findings and image quality.

    PubMed

    Shuman, William P; Chan, Keith T; Busey, Janet M; Mitsumori, Lee M; Choi, Eunice; Koprowicz, Kent M; Kanal, Kalpana M

    2014-12-01

    To investigate whether reduced radiation dose liver computed tomography (CT) images reconstructed with model-based iterative reconstruction (MBIR) might compromise depiction of clinically relevant findings or might have decreased image quality when compared with clinical standard radiation dose CT images reconstructed with adaptive statistical iterative reconstruction (ASIR). With institutional review board approval, informed consent, and HIPAA compliance, 50 patients (39 men, 11 women) were prospectively included who underwent liver CT. After a portal venous pass with ASIR images, a 60% reduced radiation dose pass was added with MBIR images. One reviewer scored ASIR image quality and marked findings. Two additional independent reviewers noted whether marked findings were present on MBIR images and assigned scores for relative conspicuity, spatial resolution, image noise, and image quality. Liver and aorta Hounsfield units and image noise were measured. Volume CT dose index and size-specific dose estimate (SSDE) were recorded. Qualitative reviewer scores were summarized. Formal statistical inference for signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), volume CT dose index, and SSDE was made (paired t tests), with Bonferroni adjustment. Two independent reviewers identified all 136 ASIR image findings (n = 272) on MBIR images, scoring them as equal or better for conspicuity, spatial resolution, and image noise in 94.1% (256 of 272), 96.7% (263 of 272), and 99.3% (270 of 272), respectively. In 50 image sets, two reviewers (n = 100) scored overall image quality as sufficient or good with MBIR in 99% (99 of 100). Liver SNR was significantly greater for MBIR (10.8 ± 2.5 [standard deviation] vs 7.7 ± 1.4, P < .001); there was no difference for CNR (2.5 ± 1.4 vs 2.4 ± 1.4, P = .45). For ASIR and MBIR, respectively, volume CT dose index was 15.2 mGy ± 7.6 versus 6.2 mGy ± 3.6; SSDE was 16.4 mGy ± 6.6 versus 6.7 mGy ± 3.1 (P < .001). Liver CT images reconstructed with MBIR may allow up to 59% radiation dose reduction compared with the dose with ASIR, without compromising depiction of findings or image quality. © RSNA, 2014.

  19. Bone repair of critical size defects treated with mussel powder associated or not with bovine bone graft: histologic and histomorphometric study in rat calvaria.

    PubMed

    Trotta, Daniel Rizzo; Gorny, Clayton; Zielak, João César; Gonzaga, Carla Castiglia; Giovanini, Allan Fernando; Deliberador, Tatiana Miranda

    2014-09-01

    The objective of this study was to evaluate the bone repair of critical size defects treated with mussel powder with or without additional bovine bone. Critical size defects of 5 mm were created in the calvaria of 70 rats, which were randomly divided into 5 groups: Control (C), Autogenous Bone (AB), Mussel Powder (MP), Mussel Powder and Bovine Bone (MP-BB) and Bovine Bone (BB). Histological and histomorphometric analyses were performed 30 and 90 days after the surgical procedures (ANOVA and Tukey, p < 0.05). After 30 days, the measures of remaining particles were 28.36% (MP-BB), 26.63% (BB) and 8.64% (MP), with a statistically significant difference between BB and MP. The percentages of osseous matrix after 30 days were 55.17% (AB), 23.31% (BB), 11.66% (MP) and 10.71% (MP-BB), with statistically significant differences among all groups. After 90 days the figures were 25.05% (BB), 21.53% (MP-BB) and 1.97% (MP), with statistically significant differences between MP-BB and MP. Percentages of new bone formation after 90 days were 89.47% (AB), 35.70% (BB), 26.48% (MP-BB) and 7.37% (MP), with statistically significant differences between AB and the other groups. Within the limits of this study, we conclude that mussel powder, with or without additional bovine bone, did not induce new bone formation and did not repair critical size defects in rat calvaria. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  20. Radiographic Response to Yttrium-90 Radioembolization in Anterior Versus Posterior Liver Segments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ibrahim, Saad M.; Lewandowski, Robert J.; Ryu, Robert K.

    2008-11-15

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm³, respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.

  1. Radiographic response to yttrium-90 radioembolization in anterior versus posterior liver segments.

    PubMed

    Ibrahim, Saad M; Lewandowski, Robert J; Ryu, Robert K; Sato, Kent T; Gates, Vanessa L; Mulcahy, Mary F; Kulik, Laura; Larson, Andrew C; Omary, Reed A; Salem, Riad

    2008-01-01

    The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm(3), respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.

  2. Methodological issues with adaptation of clinical trial design.

    PubMed

    Hung, H M James; Wang, Sue-Jane; O'Neill, Robert T

    2006-01-01

    Adaptation of clinical trial design generates many issues that have not been resolved for practical applications, though statistical methodology has advanced greatly. This paper focuses on some methodological issues. In one type of adaptation, such as sample size re-estimation, only the postulated value of a parameter for planning the trial size may be altered. In another type, the originally intended hypothesis for testing may be modified using the internal data accumulated at an interim time of the trial, such as changing the primary endpoint and dropping a treatment arm. For sample size re-estimation, we contrast an adaptive test that weights the two-stage test statistics by the statistical information given by the original design with the original sample mean test using a properly corrected critical value. We point out the difficulty in planning a confirmatory trial based on the crude information generated by exploratory trials. With regard to selecting a primary endpoint, we argue that a selection process that allows switching from one endpoint to the other with the internal data of the trial is not very likely to gain a power advantage over the simple process of selecting one of the two endpoints by testing them with an equal split of alpha (Bonferroni adjustment). For dropping a treatment arm, distributing the remaining sample size of the discontinued arm to other treatment arms can substantially improve the statistical power of identifying a superior treatment arm in the design. A common difficult methodological issue is that of how to select an adaptation rule in the trial planning stage. Pre-specification of the adaptation rule is important for practical reasons. Changing the originally intended hypothesis for testing with the internal data raises great concern among clinical trial researchers.
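
    The adaptive test mentioned above, in its common inverse-normal form, fixes the stage weights by the originally planned stage sizes so that the type I error is controlled even if the second-stage size is re-estimated; the numbers below are illustrative only.

        import numpy as np
        from scipy import stats

        def two_stage_adaptive_test(z1, z2, n1_planned, n2_planned, alpha=0.025):
            # weights are pre-specified from the planned stage sizes, not the actual ones
            w1 = np.sqrt(n1_planned / (n1_planned + n2_planned))
            w2 = np.sqrt(n2_planned / (n1_planned + n2_planned))
            z_comb = w1 * z1 + w2 * z2
            return z_comb, bool(z_comb > stats.norm.ppf(1 - alpha))

        print(two_stage_adaptive_test(z1=1.2, z2=1.8, n1_planned=100, n2_planned=100))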

  3. A Review of Meta-Analysis Packages in R

    ERIC Educational Resources Information Center

    Polanin, Joshua R.; Hennessy, Emily A.; Tanner-Smith, Emily E.

    2017-01-01

    Meta-analysis is a statistical technique that allows an analyst to synthesize effect sizes from multiple primary studies. For estimating meta-analysis models, the open-source statistical environment R is quickly becoming a popular choice. The meta-analytic community has contributed to this growth by developing numerous packages specific to…
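
    The packages reviewed are R-specific, but the fixed-effect computation underneath is a few lines in any language. A sketch with made-up study effect sizes and variances:

      import numpy as np

      d = np.array([0.30, 0.12, 0.45, 0.21])   # study effect sizes (e.g., Cohen's d)
      v = np.array([0.02, 0.05, 0.04, 0.01])   # their sampling variances

      w = 1.0 / v                              # inverse-variance weights
      d_pooled = np.sum(w * d) / np.sum(w)
      se = np.sqrt(1.0 / np.sum(w))
      lo, hi = d_pooled - 1.96 * se, d_pooled + 1.96 * se
      print(f"pooled d = {d_pooled:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")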

  4. Min and Max Exponential Extreme Interval Values and Statistics

    ERIC Educational Resources Information Center

    Jance, Marsha; Thomopoulos, Nick

    2009-01-01

    The extreme interval values and statistics (expected value, median, mode, standard deviation, and coefficient of variation) for the smallest (min) and largest (max) values of exponentially distributed variables with parameter λ = 1 are examined for different observation (sample) sizes. An extreme interval value g[subscript a] is defined as a…
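
    For Exp(λ = 1) the expected extremes have simple closed forms: E[min] = 1/n and E[max] = H_n, the n-th harmonic number. A quick simulation check (the sample size n and replication count below are arbitrary choices):

      import numpy as np

      rng = np.random.default_rng(0)
      n, reps = 10, 200_000
      x = rng.exponential(1.0, size=(reps, n))

      print("E[min]: theory", 1 / n, "sim", x.min(axis=1).mean())
      print("E[max]: theory", sum(1 / k for k in range(1, n + 1)),
            "sim", x.max(axis=1).mean())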

  5. Statistical Significance vs. Practical Significance: An Exploration through Health Education

    ERIC Educational Resources Information Center

    Rosen, Brittany L.; DeMaria, Andrea L.

    2012-01-01

    The purpose of this paper is to examine the differences between statistical and practical significance, including strengths and criticisms of both methods, as well as provide information surrounding the application of various effect sizes and confidence intervals within health education research. Provided are recommendations, explanations and…

  6. 76 FR 70451 - Agency Information Collection Activities; Proposed Collection; Comment Request; Extension

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-14

    ... commodities to keep records that substantiate ``cents off,'' ``introductory offer,'' and/or ``economy size... United States, 2010, U.S. Department of Labor, U.S. Bureau of Labor Statistics (May 2011) (``BLS National... information such as costs, sales statistics, inventories, formulas, patterns devices, manufacturing processes...

  7. An analytic treatment of gravitational microlensing for sources of finite size at large optical depths

    NASA Technical Reports Server (NTRS)

    Deguchi, Shuji; Watson, William D.

    1988-01-01

    Statistical methods are developed for gravitational lensing in order to obtain analytic expressions for the average surface brightness that include the effects of microlensing by stellar (or other compact) masses within the lensing galaxy. The primary advance here is in utilizing a Markoff technique to obtain expressions that are valid for sources of finite size when the surface density of mass in the lensing galaxy is large. The finite size of the source is probably the key consideration for the occurrence of microlensing by individual stars. For the intensity from a particular location, the parameter which governs the importance of microlensing is determined. Statistical methods are also formulated to assess the time variation of the surface brightness due to the random motion of the masses that cause the microlensing.

  8. EFFECTS OF LASER RADIATION ON MATTER: Influence of fluctuations of the size and number of surface microdefects on the thresholds of laser plasma formation

    NASA Astrophysics Data System (ADS)

    Borets-Pervak, I. Yu; Vorob'ev, V. S.

    1990-08-01

    An analysis is made of the influence of the statistical scatter of the size of thermally insulated microdefects and of their number in the focusing spot on the threshold energies of plasma formation by microsecond laser pulses interacting with metal surfaces. The coordinates of the laser pulse intensity and the surface density of the laser energy are used in constructing plasma formation regions corresponding to different numbers of microdefects within the focusing spot area; the same coordinates are used to represent laser pulses. Various threshold and nonthreshold plasma formation mechanisms are discussed. The sizes of microdefects and their statistical characteristics deduced from limited experimental data provide a consistent description of the characteristics of plasma formation near polished and nonpolished surfaces.

  9. Using the Bootstrap Method to Evaluate the Critical Range of Misfit for Polytomous Rasch Fit Statistics.

    PubMed

    Seol, Hyunsoo

    2016-06-01

    The purpose of this study was to apply the bootstrap procedure to evaluate how the bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample sizes and test lengths in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement and then a total of 1,000 replications were conducted to compute the bootstrapped CIs under each of 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, they also do not share the same critical range for the item and person misfit. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.
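
    A generic percentile-bootstrap CI of the kind applied above fits in a few lines; the stand-in statistic below is a simple mean square on synthetic residuals, not the actual polytomous Rasch infit/outfit computation:

      import numpy as np

      def bootstrap_ci(stat, data, reps=1000, alpha=0.05, seed=0):
          """Percentile CI for stat(data) from resamples with replacement."""
          rng = np.random.default_rng(seed)
          boot = [stat(rng.choice(data, size=len(data), replace=True))
                  for _ in range(reps)]
          return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

      residuals = np.random.default_rng(1).normal(0, 1, size=300)  # hypothetical
      msq = lambda r: np.mean(r ** 2)     # mean-square, like an infit statistic
      print("95% bootstrap CI:", bootstrap_ci(msq, residuals))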

  10. A Virtual Study of Grid Resolution on Experiments of a Highly-Resolved Turbulent Plume

    NASA Astrophysics Data System (ADS)

    Maisto, Pietro M. F.; Marshall, Andre W.; Gollner, Michael J.; Fire Protection Engineering Department Collaboration

    2017-11-01

    An accurate representation of sub-grid scale turbulent mixing is critical for modeling fire plumes and smoke transport. In this study, PLIF and PIV diagnostics are used with the saltwater modeling technique to provide highly-resolved instantaneous field measurements in unconfined turbulent plumes useful for statistical analysis, physical insight, and model validation. The effect of resolution was investigated employing a virtual interrogation window (of varying size) applied to the high-resolution field measurements. Motivated by LES low-pass filtering concepts, the high-resolution experimental data in this study can be analyzed within the interrogation windows (i.e. statistics at the sub-grid scale) and on interrogation windows (i.e. statistics at the resolved scale). A dimensionless resolution threshold (L/D*) criterion was determined to achieve converged statistics on the filtered measurements. Such a criterion was then used to establish the relative importance between large and small-scale turbulence phenomena while investigating specific scales for the turbulent flow. First order data sets start to collapse at a resolution of 0.3D*, while for second and higher order statistical moments the interrogation window size drops down to 0.2D*.

  11. MISR Global Aerosol Product Assessment by Comparison with AERONET

    NASA Technical Reports Server (NTRS)

    Kahn, Ralph A.; Gaitley, Barbara J.; Garay, Michael J.; Diner, David J.; Eck, Thomas F.; Smirnov, Alexander; Holben, Brent N.

    2010-01-01

    A statistical approach is used to assess the quality of the MISR Version 22 (V22) aerosol products. Aerosol Optical Depth (AOD) retrieval results are improved relative to the early post-launch values reported by Kahn et al. [2005a], varying with particle type category. Overall, about 70% to 75% of MISR AOD retrievals fall within 0.05 or 20% AOD of the paired validation data, and about 50% to 55% are within 0.03 or 10% AOD, except at sites where dust, or mixed dust and smoke, are commonly found. Retrieved particle microphysical properties amount to categorical values, such as three groupings in size: "small," "medium," and "large." For particle size, ground-based AERONET sun photometer Angstrom Exponents are used to assess statistically the corresponding MISR values, which are interpreted in terms of retrieved size categories. Coincident Single-Scattering Albedo (SSA) and fraction AOD spherical data are too limited for statistical validation. V22 distinguishes two or three size bins, depending on aerosol type, and about two bins in SSA (absorbing vs. non-absorbing), as well as spherical vs. non-spherical particles, under good retrieval conditions. Particle type sensitivity varies considerably with conditions, and is diminished for mid-visible AOD below about 0.15 or 0.2. Based on these results, specific algorithm upgrades are proposed, and are being investigated by the MISR team for possible implementation in future versions of the product.

  12. Statistical analyses to support guidelines for marine avian sampling. Final report

    USGS Publications Warehouse

    Kinlan, Brian P.; Zipkin, Elise; O'Connell, Allan F.; Caldow, Chris

    2012-01-01

    Interest in development of offshore renewable energy facilities has led to a need for high-quality, statistically robust information on marine wildlife distributions. A practical approach is described to estimate the amount of sampling effort required to have sufficient statistical power to identify species-specific “hotspots” and “coldspots” of marine bird abundance and occurrence in an offshore environment divided into discrete spatial units (e.g., lease blocks), where “hotspots” and “coldspots” are defined relative to a reference (e.g., regional) mean abundance and/or occurrence probability for each species of interest. For example, a location with average abundance or occurrence that is three times larger than the mean (3x effect size) could be defined as a “hotspot,” and a location that is three times smaller than the mean (1/3x effect size) as a “coldspot.” The choice of the effect size used to define hot and coldspots will generally depend on a combination of ecological and regulatory considerations. A method is also developed for testing the statistical significance of possible hotspots and coldspots. Both methods are illustrated with historical seabird survey data from the USGS Avian Compendium Database. Our approach consists of five main components: 1. A review of the primary scientific literature on statistical modeling of animal group size and avian count data to develop a candidate set of statistical distributions that have been used or may be useful to model seabird counts. 2. Statistical power curves for one-sample, one-tailed Monte Carlo significance tests of differences of observed small-sample means from a specified reference distribution. These curves show the power to detect "hotspots" or "coldspots" of occurrence and abundance at a range of effect sizes, given assumptions which we discuss. 3. A model selection procedure, based on maximum likelihood fits of models in the candidate set, to determine an appropriate statistical distribution to describe counts of a given species in a particular region and season. 4. Using a large database of historical at-sea seabird survey data, we applied this technique to identify appropriate statistical distributions for modeling a variety of species, allowing the distribution to vary by season. For each species and season, we used the selected distribution to calculate and map retrospective statistical power to detect hotspots and coldspots, and map p-values from Monte Carlo significance tests of hotspots and coldspots, in discrete lease blocks designated by the U.S. Department of Interior, Bureau of Ocean Energy Management (BOEM). 5. Because our definition of hotspots and coldspots does not explicitly include variability over time, we examine the relationship between the temporal scale of sampling and the proportion of variance captured in time series of key environmental correlates of marine bird abundance, as well as available marine bird abundance time series, and use these analyses to develop recommendations for the temporal distribution of sampling to adequately represent both short-term and long-term variability. We conclude by presenting a schematic “decision tree” showing how this power analysis approach would fit in a general framework for avian survey design, and discuss implications of model assumptions and results. We discuss avenues for future development of this work, and recommendations for practical implementation in the context of siting and wildlife assessment for offshore renewable energy development projects.
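
    Component 2 above (Monte Carlo power for one-sample, one-tailed significance tests) can be sketched as follows; the negative binomial reference distribution and all parameter values are hypothetical placeholders:

      import numpy as np

      rng = np.random.default_rng(0)
      mu, k = 2.0, 0.5        # reference mean count and dispersion (hypothetical)
      effect = 3.0            # "hotspot" = 3x the reference mean
      n_surveys = 20          # visits to the candidate lease block
      reps = 5000

      def nb_draws(mean, k, size):
          # numpy parameterizes the negative binomial by (n, p); convert
          p = k / (k + mean)
          return rng.negative_binomial(k, p, size=size)

      # Null distribution of the small-sample mean, then the 95% critical value
      null_means = nb_draws(mu, k, (reps, n_surveys)).mean(axis=1)
      crit = np.quantile(null_means, 0.95)

      # Power: fraction of simulated hotspot samples whose mean exceeds crit
      hot_means = nb_draws(effect * mu, k, (reps, n_surveys)).mean(axis=1)
      print("power:", np.mean(hot_means > crit))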

  13. Survival analysis and classification methods for forest fire size

    PubMed Central

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at “being held” (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at “being held” exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on an a priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances. PMID:29320497

  14. Survival analysis and classification methods for forest fire size.

    PubMed

    Tremblay, Pier-Olivier; Duchesne, Thierry; Cumming, Steven G

    2018-01-01

    Factors affecting wildland-fire size distribution include weather, fuels, and fire suppression activities. We present a novel application of survival analysis to quantify the effects of these factors on a sample of sizes of lightning-caused fires from Alberta, Canada. Two events were observed for each fire: the size at initial assessment (by the first fire fighters to arrive at the scene) and the size at "being held" (a state when no further increase in size is expected). We developed a statistical classifier to try to predict cases where there will be a growth in fire size (i.e., the size at "being held" exceeds the size at initial assessment). Logistic regression was preferred over two alternative classifiers, with covariates consistent with similar past analyses. We conducted survival analysis on the group of fires exhibiting a size increase. A screening process selected three covariates: an index of fire weather at the day the fire started, the fuel type burning at initial assessment, and a factor for the type and capabilities of the method of initial attack. The Cox proportional hazards model performed better than three accelerated failure time alternatives. Both fire weather and fuel type were highly significant, with effects consistent with known fire behaviour. The effects of initial attack method were not statistically significant, but did suggest a reverse causality that could arise if fire management agencies were to dispatch resources based on an a priori assessment of fire growth potentials. We discuss how a more sophisticated analysis of larger data sets could produce unbiased estimates of fire suppression effect under such circumstances.
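
    As an illustration of the survival step, the sketch below fits a Cox proportional hazards model with the lifelines package on a synthetic data frame; the covariate names and all values are invented, and fire size beyond initial assessment would play the role of the "time" variable in the actual analysis:

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      n = 200
      df = pd.DataFrame({
          "fwi": rng.normal(10, 3, n),          # fire-weather index (synthetic)
          "fuel_grass": rng.integers(0, 2, n),  # dummy-coded fuel type
          "duration": rng.exponential(5, n),    # growth until "being held"
          "held": rng.integers(0, 2, n),        # 1 = event observed
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="duration", event_col="held")
      cph.print_summary()                       # hazard ratios, p-values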

  15. Characteristic fragment size distributions in dynamic fragmentation

    NASA Astrophysics Data System (ADS)

    Zhou, Fenghua; Molinari, Jean-François; Ramesh, K. T.

    2006-06-01

    The one-dimensional fragmentation of a dynamically expanding ring (Mott's problem) is studied numerically to obtain the fragment signatures under different strain rates. An empirical formula is proposed to calculate an average fragment size. A Rayleigh distribution is found to describe the statistical properties of the fragment populations.
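
    Checking a Rayleigh description of fragment sizes against data is a few lines with scipy; the synthetic sample below stands in for measured fragment sizes:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      fragments = stats.rayleigh.rvs(scale=2.0, size=1000, random_state=rng)

      loc, scale = stats.rayleigh.fit(fragments, floc=0)   # fix location at 0
      print(f"fitted scale = {scale:.3f}")

      # Goodness of fit via Kolmogorov-Smirnov
      ks = stats.kstest(fragments, "rayleigh", args=(0, scale))
      print(f"KS statistic = {ks.statistic:.4f}, p = {ks.pvalue:.3f}")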

  16. A reliability evaluation methodology for memory chips for space applications when sample size is small

    NASA Technical Reports Server (NTRS)

    Chen, Y.; Nguyen, D.; Guertin, S.; Berstein, J.; White, M.; Menke, R.; Kayali, S.

    2003-01-01

    This paper presents a reliability evaluation methodology for obtaining statistical reliability information on memory chips for space applications when the test sample size must be kept small because of the high cost of radiation-hardened memories.

  17. Exploring Organizational Learning Mechanisms in Small-Size Business Enterprises

    ERIC Educational Resources Information Center

    Graham, Carroll M.; Nafukho, Fredrick M.

    2008-01-01

    The primary purpose of this study was to determine the importance of existing organizational learning mechanisms and establish the size and magnitude of the relationship among the organizational learning mechanisms. Of great import also was to determine whether statistically significant relationships existed among the organizational learning…

  18. Statistical electric field and switching time distributions in PZT 1Nb2Sr ceramics: Crystal- and microstructure effects

    NASA Astrophysics Data System (ADS)

    Zhukov, Sergey; Kungl, Hans; Genenko, Yuri A.; von Seggern, Heinz

    2014-01-01

    Dispersive polarization response of ferroelectric PZT ceramics is analyzed assuming the inhomogeneous field mechanism of polarization switching. In terms of this model, the local polarization switching proceeds according to the Kolmogorov-Avrami-Ishibashi scenario with the switching time determined by the local electric field. As a result, the total polarization reversal is dominated by the statistical distribution of the local field magnitudes. Microscopic parameters of this model (the high-field switching time and the activation field) as well as the statistical field and consequent switching time distributions due to disorder at a mesoscopic scale can be directly determined from a set of experiments measuring the time dependence of the total polarization switching, when applying electric fields of different magnitudes. PZT 1Nb2Sr ceramics with Zr/Ti ratios 51.5/48.5, 52.25/47.75, and 60/40 with four different grain sizes each were analyzed following this approach. Pronounced differences of field and switching time distributions were found depending on the Zr/Ti ratios. Varying grain size also affects polarization reversal parameters, but in another way. The field distributions remain almost constant with grain size whereas switching times and activation field tend to decrease with increasing grain size. The quantitative changes of the latter parameters with grain size are very different depending on composition. The origin of the effects on the field and switching time distributions are related to differences in structural and microstructural characteristics of the materials and are discussed with respect to the hysteresis loops observed under bipolar electrical cycling.

  19. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    PubMed

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have a significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure of relative efficiency might be less than the measure in the literature under some conditions, underestimating the relative efficiency. The relative efficiency of unequal versus equal cluster sizes defined using the noncentrality parameter suggests a sample size approach that is a flexible alternative and a useful complement to existing methods.
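
    A sketch of the equal-cluster-size power computation that anchors this approach: with k clusters per arm of size m and intraclass correlation rho, power follows from a noncentral t distribution. All design values below are hypothetical:

      import numpy as np
      from scipy import stats

      def power_equal_clusters(delta, k, m, rho, alpha=0.05):
          """Power with k clusters/arm of size m, ICC rho, standardized effect delta."""
          deff = 1 + (m - 1) * rho                  # design effect
          nc = delta / np.sqrt(2 * deff / (k * m))  # noncentrality parameter
          df = 2 * (k - 1)
          t_crit = stats.t.ppf(1 - alpha / 2, df)
          return stats.nct.sf(t_crit, df, nc)       # P(T' > t_crit)

      print(power_equal_clusters(delta=0.35, k=12, m=20, rho=0.05))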

  20. Analysis of Noise Mechanisms in Cell-Size Control.

    PubMed

    Modi, Saurabh; Vargas-Garcia, Cesar Augusto; Ghusinga, Khem Raj; Singh, Abhyudai

    2017-06-06

    At the single-cell level, noise arises from multiple sources, such as inherent stochasticity of biomolecular processes, random partitioning of resources at division, and fluctuations in cellular growth rates. How these diverse noise mechanisms combine to drive variations in cell size within an isoclonal population is not well understood. Here, we investigate the contributions of different noise sources in well-known paradigms of cell-size control, such as adder (division occurs after adding a fixed size from birth), sizer (division occurs after reaching a size threshold), and timer (division occurs after a fixed time from birth). Analysis reveals that variation in cell size is most sensitive to errors in partitioning of volume among daughter cells, and not surprisingly, this process is well regulated among microbes. Moreover, depending on the dominant noise mechanism, different size-control strategies (or a combination of them) provide efficient buffering of size variations. We further explore mixer models of size control, where a timer phase precedes/follows an adder, as has been proposed in Caulobacter crescentus. Although mixing a timer and an adder can sometimes attenuate size variations, it invariably leads to higher-order moments growing unboundedly over time. This results in a power-law distribution for the cell size, with an exponent that depends inversely on the noise in the timer phase. Consistent with theory, we find evidence of power-law statistics in the tail of C. crescentus cell-size distribution, although there is a discrepancy between the observed power-law exponent and that predicted from the noise parameters. The discrepancy, however, is removed after data reveal that the size added by individual newborns in the adder phase itself exhibits power-law statistics. Taken together, this study provides key insights into the role of noise mechanisms in size homeostasis, and suggests an inextricable link between timer-based models of size control and heavy-tailed cell-size distributions. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  1. Criteria for a State-of-the-Art Vision Test System

    DTIC Science & Technology

    1985-05-01

    tests are enumerated for possible inclusion in a battery of candidate vision tests to be statistically examined for validity as predictors of aircrew...derived subset thereof) of vision tests may be given to a series of individuals, and statistical tests may be used to determine which visual functions...no target. Statistical analysis of the responses would set a threshold level, which would define the smallest size - (most distant target) or least

  2. A basic introduction to statistics for the orthopaedic surgeon.

    PubMed

    Bertrand, Catherine; Van Riet, Roger; Verstreken, Frederik; Michielsen, Jef

    2012-02-01

    Orthopaedic surgeons should review the orthopaedic literature in order to keep pace with the latest insights and practices. A good understanding of basic statistical principles is of crucial importance to the ability to read articles critically, to interpret results and to arrive at correct conclusions. This paper explains some of the key concepts in statistics, including hypothesis testing, Type I and Type II errors, testing of normality, sample size and p values.

  3. A procedure for classifying textural facies in gravel‐bed rivers

    USGS Publications Warehouse

    Buffington, John M.; Montgomery, David R.

    1999-01-01

    Textural patches (i.e., grain‐size facies) are commonly observed in gravel‐bed channels and are of significance for both physical and biological processes at subreach scales. We present a general framework for classifying textural patches that allows modification for particular study goals, while maintaining a basic degree of standardization. Textures are classified using a two‐tier system of ternary diagrams that identifies the relative abundance of major size classes and subcategories of the dominant size. An iterative procedure of visual identification and quantitative grain‐size measurement is used. A field test of our classification indicates that it affords reasonable statistical discrimination of median grain size and variance of bed‐surface textures. We also explore the compromise between classification simplicity and accuracy. We find that statistically meaningful textural discrimination requires use of both tiers of our classification. Furthermore, we find that simplified variants of the two‐tier scheme are less accurate but may be more practical for field studies which do not require a high level of textural discrimination or detailed description of grain‐size distributions. Facies maps provide a natural template for stratifying other physical and biological measurements and produce a retrievable and versatile database that can be used as a component of channel monitoring efforts.

  4. The size of a pilot study for a clinical trial should be calculated in relation to considerations of precision and efficiency.

    PubMed

    Sim, Julius; Lewis, Martyn

    2012-03-01

    The aim was to investigate methods for determining the size of a pilot study to inform a power calculation for a randomized controlled trial (RCT) using an interval/ratio outcome measure. Calculations were based on confidence intervals (CIs) for the sample standard deviation (SD). Methods are demonstrated whereby (1) the observed SD can be adjusted to secure the desired level of statistical power in the main study with a specified level of confidence; (2) the sample for the main study, if calculated using the observed SD, can be adjusted, again to obtain the desired level of statistical power in the main study; (3) the power of the main study can be calculated for the situation in which the SD in the pilot study proves to be an underestimate of the true SD; and (4) an "efficient" pilot size can be determined to minimize the combined size of the pilot and main RCT. Trialists should calculate the appropriate size of a pilot study, just as they should the size of the main RCT, taking into account the twin needs to demonstrate efficiency in terms of recruitment and to produce precise estimates of treatment effect. Copyright © 2012 Elsevier Inc. All rights reserved.
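
    Method (1) can be sketched as follows, assuming the standard chi-square confidence limit for a sample SD; the pilot SD, pilot size, and target difference below are hypothetical:

      import numpy as np
      from scipy import stats

      def adjusted_sd(s_pilot, n_pilot, confidence=0.80):
          """One-sided upper confidence limit for sigma from a pilot SD."""
          df = n_pilot - 1
          chi2_low = stats.chi2.ppf(1 - confidence, df)
          return s_pilot * np.sqrt(df / chi2_low)

      def n_per_group(sd, delta, alpha=0.05, power=0.90):
          z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
          return int(np.ceil(2 * (z * sd / delta) ** 2))

      s, n_pilot, delta = 10.0, 30, 5.0
      print("naive n/group:   ", n_per_group(s, delta))
      print("adjusted SD:     ", round(adjusted_sd(s, n_pilot), 2))
      print("adjusted n/group:", n_per_group(adjusted_sd(s, n_pilot), delta))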

  5. Fundamental Investigation of the Microstructural Parameters to Improve Dynamic Response in Al-Cu Model System

    DTIC Science & Technology

    2014-05-01

    grain size. Recrystallization was then induced via annealing just above the solvus temperature. After quenching, the bars were immediately placed into...that the values were statistically significant. Precipitate sizes ranged from approximately 100 nanometers in diameter up to 2-5 microns in diameter

  6. A D-Estimator for Single-Case Designs

    ERIC Educational Resources Information Center

    Shadish, William; Hedges, Larry; Pustejovsky, James; Rindskopf, David

    2012-01-01

    Over the last 10 years, numerous authors have proposed effect size estimators for single-case designs. None, however, has been shown to be equivalent to the usual between-groups standardized mean difference statistic, sometimes called d. The present paper remedies that omission. Most effect size estimators for single-case designs use the…

  7. Random Distribution Pattern and Non-adaptivity of Genome Size in a Highly Variable Population of Festuca pallens

    PubMed Central

    Šmarda, Petr; Bureš, Petr; Horová, Lucie

    2007-01-01

    Background and Aims The spatial and statistical distribution of genome sizes and the adaptivity of genome size to some types of habitat, vegetation or microclimatic conditions were investigated in a tetraploid population of Festuca pallens. The population was previously documented to vary highly in genome size and is assumed as a model for the study of the initial stages of genome size differentiation. Methods Using DAPI flow cytometry, samples were measured repeatedly with diploid Festuca pallens as the internal standard. Altogether 172 plants from 57 plots (2.25 m²), distributed in contrasting habitats over the whole locality in South Moravia, Czech Republic, were sampled. The differences in DNA content were confirmed by the double peaks of simultaneously measured samples. Key Results At maximum, a 1.115-fold difference in genome size was observed. The statistical distribution of genome sizes was found to be continuous and best fits the extreme (Gumbel) distribution with rare occurrences of extremely large genomes (positive-skewed), as it is similar for the log-normal distribution of the whole Angiosperms. Even plants from the same plot frequently varied considerably in genome size and the spatial distribution of genome sizes was generally random and unautocorrelated (P > 0.05). The observed spatial pattern and the overall lack of correlations of genome size with recognized vegetation types or microclimatic conditions indicate the absence of ecological adaptivity of genome size in the studied population. Conclusions These experimental data on intraspecific genome size variability in Festuca pallens argue for the absence of natural selection and the selective non-significance of genome size in the initial stages of genome size differentiation, and corroborate the current hypothetical model of genome size evolution in Angiosperms (Bennetzen et al., 2005, Annals of Botany 95: 127–132). PMID:17565968

  8. Aggregate and individual replication probability within an explicit model of the research process.

    PubMed

    Miller, Jeff; Schwarz, Wolf

    2011-09-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by obtaining either a statistically significant result in the same direction or any effect in that direction. We analyze both the probability of successfully replicating a particular experimental effect (i.e., the individual replication probability) and the average probability of successful replication across different studies within some research context (i.e., the aggregate replication probability), and we identify the conditions under which the latter can be approximated using the formulas of Killeen (2005a, 2007). We show how both of these probabilities depend on parameters of the research context that would rarely be known in practice. In addition, we show that the statistical uncertainty associated with the size of an initial observed effect would often prevent accurate estimation of the desired individual replication probability even if these research context parameters were known exactly. We conclude that accurate estimates of replication probability are generally unattainable.

  9. Raindrop Size Distribution in Different Climatic Regimes from Disdrometer and Dual-Polarized Radar Analysis.

    NASA Astrophysics Data System (ADS)

    Bringi, V. N.; Chandrasekar, V.; Hubbert, J.; Gorgucci, E.; Randeu, W. L.; Schoenhuber, M.

    2003-01-01

    The application of polarimetric radar data to the retrieval of raindrop size distribution parameters and rain rate in samples of convective and stratiform rain types is presented. Data from the Colorado State University (CSU), CHILL, NCAR S-band polarimetric (S-Pol), and NASA Kwajalein radars are analyzed for the statistics and functional relation of these parameters with rain rate. Surface drop size distribution measurements using two different disdrometers (2D video and RD-69) from a number of climatic regimes are analyzed and compared with the radar retrievals in a statistical and functional approach. The composite statistics based on disdrometer and radar retrievals suggest that, on average, the two parameters (generalized intercept and median volume diameter) for stratiform rain distributions lie on a straight line with negative slope, which appears to be consistent with variations in the microphysics of stratiform precipitation (melting of larger, dry snow particles versus smaller, rimed ice particles). In convective rain, 'maritime-like' and 'continental-like' clusters could be identified in the same two-parameter space that are consistent with the different multiplicative coefficients in the Z = aR^1.5 relations quoted in the literature for maritime and continental regimes.

  10. Bright high z SnIa: A challenge for ΛCDM

    NASA Astrophysics Data System (ADS)

    Perivolaropoulos, L.; Shafieloo, A.

    2009-06-01

    It has recently been pointed out by Kowalski et al. [Astrophys. J. 686, 749 (2008); doi:10.1086/589937] that there is “an unexpected brightness of the SnIa data at z>1.” We quantify this statement by constructing a new statistic which is applicable directly on the type Ia supernova (SnIa) distance moduli. This statistic is designed to pick up systematic brightness trends of SnIa data points with respect to a best fit cosmological model at high redshifts. It is based on binning the normalized differences between the SnIa distance moduli and the corresponding best fit values in the context of a specific cosmological model (e.g. ΛCDM). These differences are normalized by the standard errors of the observed distance moduli. We then focus on the highest redshift bin and extend its size toward lower redshifts until the binned normalized difference (BND) changes sign (crosses 0) at a redshift zc (bin size Nc). The bin size Nc of this crossing (the statistical variable) is then compared with the corresponding crossing bin size Nmc for Monte Carlo data realizations based on the best fit model. We find that the crossing bin size Nc obtained from the Union08 and Gold06 data with respect to the best fit ΛCDM model is anomalously large compared to Nmc of the corresponding Monte Carlo data sets obtained from the best fit ΛCDM in each case. In particular, only 2.2% of the Monte Carlo ΛCDM data sets are consistent with the Gold06 value of Nc while the corresponding probability for the Union08 value of Nc is 5.3%. Thus, according to this statistic, the probability that the high redshift brightness bias of the Union08 and Gold06 data sets is realized in the context of a (w0,w1)=(-1,0) model (ΛCDM cosmology) is less than 6%. The corresponding realization probability in the context of a (w0,w1)=(-1.4,2) model is more than 30% for both the Union08 and the Gold06 data sets indicating a much better consistency for this model with respect to the BND statistic.
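
    The BND construction lends itself to a short sketch. The mock data below have no systematic trend, and the distance-modulus model is a crude placeholder, so N_c here simply illustrates the mechanics:

      import numpy as np

      def bnd_crossing(z, mu_obs, mu_fit, sigma):
          """Bin size N_c at which the binned normalized difference crosses zero."""
          order = np.argsort(z)[::-1]                 # highest redshift first
          nd = ((mu_obs - mu_fit) / sigma)[order]     # normalized differences
          first_sign = np.sign(nd[0])
          for n in range(2, len(nd) + 1):
              if np.sign(nd[:n].mean()) != first_sign:
                  return n                            # bin size at the sign change
          return len(nd)

      rng = np.random.default_rng(0)
      z = rng.uniform(0.01, 1.7, 300)
      sigma = np.full(300, 0.2)
      mu_fit = 5 * np.log10(4285 * z) + 25    # crude stand-in distance modulus
      mu_obs = mu_fit + rng.normal(0, sigma)  # mock data, no systematic trend
      print("N_c =", bnd_crossing(z, mu_obs, mu_fit, sigma))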

  11. Inferred Lunar Boulder Distributions at Decimeter Scales

    NASA Technical Reports Server (NTRS)

    Baloga, S. M.; Glaze, L. S.; Spudis, P. D.

    2012-01-01

    Block size distributions of impact deposits on the Moon are diagnostic of the impact process and environmental effects, such as target lithology and weathering. Block size distributions are also important factors in trafficability, habitability, and possibly the identification of indigenous resources. Lunar block sizes have been investigated for many years for many purposes [e.g., 1-3]. An unresolved issue is the extent to which lunar block size distributions can be extrapolated to scales smaller than limits of resolution of direct measurement. This would seem to be a straightforward statistical application, but it is complicated by two issues. First, the cumulative size frequency distribution of observable boulders rolls over due to resolution limitations at the small end. Second, statistical regression provides the best fit only around the centroid of the data [4]. Confidence and prediction limits splay away from the best fit at the endpoints resulting in inferences in the boulder density at the CPR scale that can differ by many orders of magnitude [4]. These issues were originally investigated by Cintala and McBride [2] using Surveyor data. The objective of this study was to determine whether the measured block size distributions from Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC-NAC) images (m-scale resolution) can be used to infer the block size distribution at length scales comparable to Mini-RF Circular Polarization Ratio (CPR) scales, nominally taken as 10 cm. This would set the stage for assessing correlations of inferred block size distributions with CPR returns [6].

  12. Outbreak statistics and scaling laws for externally driven epidemics.

    PubMed

    Singh, Sarabjeet; Myers, Christopher R

    2014-04-01

    Power-law scalings are ubiquitous to physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ∼ n^(-3/2) at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^(1/3)) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1]∖{2/3}, and O((N/ln N)^(2/3)) at the multicritical point.
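
    A simplified Gillespie-style sketch of the driven process: a stochastic SIR model with an extra force of infection eps per susceptible, run until infections die out. Parameters are illustrative, and this simplification ignores re-ignition of outbreaks after extinction:

      import numpy as np

      def outbreak_size(N, beta, gamma, eps, seed):
          rng = np.random.default_rng(seed)
          S, I, R = N - 1, 1, 0
          while I > 0:
              infect = beta * S * I / N + eps * S   # internal + external forcing
              recover = gamma * I
              if rng.random() < infect / (infect + recover):
                  S, I = S - 1, I + 1
              else:
                  I, R = I - 1, R + 1
          return R

      sizes = [outbreak_size(N=1000, beta=1.0, gamma=1.0, eps=1e-4, seed=s)
               for s in range(500)]                 # beta = gamma: critical point
      print("mean outbreak size:", np.mean(sizes), "max:", np.max(sizes))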

  13. Processing statistics: an examination of focused and distributed attention using event related potentials.

    PubMed

    Baijal, Shruti; Nakatani, Chie; van Leeuwen, Cees; Srinivasan, Narayanan

    2013-06-07

    Human observers show remarkable efficiency in statistical estimation; they are able, for instance, to estimate the mean size of visual objects, even if their number exceeds the capacity limits of focused attention. This ability has been understood as the result of a distinct mode of attention, i.e. distributed attention. Compared to the focused attention mode, working memory representations under distributed attention are proposed to be more compressed, leading to reduced working memory loads. An alternate proposal is that distributed attention uses less structured, feature-level representations. These would fill up working memory (WM) more, even when target set size is low. Using event-related potentials, we compared WM loading in a typical distributed attention task (mean size estimation) to that in a corresponding focused attention task (object recognition), using a measure called contralateral delay activity (CDA). Participants performed both tasks on 2, 4, or 8 different-sized target disks. In the recognition task, CDA amplitude increased with set size; notably, however, in the mean estimation task the CDA amplitude was high regardless of set size. In particular for set-size 2, the amplitude was higher in the mean estimation task than in the recognition task. The result showed that the task involves full WM loading even with a low target set size. This suggests that in the distributed attention mode, representations are not compressed, but rather less structured than under focused attention conditions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Statistical inference for tumor growth inhibition T/C ratio.

    PubMed

    Wu, Jianrong

    2010-09-01

    The tumor growth inhibition T/C ratio is commonly used to quantify treatment effects in drug screening tumor xenograft experiments. The T/C ratio is converted to an antitumor activity rating using an arbitrary cutoff point and often without any formal statistical inference. Here, we applied a nonparametric bootstrap method and a small sample likelihood ratio statistic to make a statistical inference of the T/C ratio, including both hypothesis testing and a confidence interval estimate. Furthermore, sample size and power are also discussed for statistical design of tumor xenograft experiments. Tumor xenograft data from an actual experiment were analyzed to illustrate the application.

  15. After p Values: The New Statistics for Undergraduate Neuroscience Education.

    PubMed

    Calin-Jageman, Robert J

    2017-01-01

    Statistical inference is a methodological cornerstone for neuroscience education. For many years this has meant inculcating neuroscience majors into null hypothesis significance testing with p values. There is increasing concern, however, about the pervasive misuse of p values. It is time to start planning statistics curricula for neuroscience majors that replace or de-emphasize p values. One promising alternative approach is what Cumming has dubbed the "New Statistics", an approach that emphasizes effect sizes, confidence intervals, meta-analysis, and open science. I give an example of the New Statistics in action and describe some of the key benefits of adopting this approach in neuroscience education.

  16. Touch Precision Modulates Visual Bias.

    PubMed

    Misceo, Giovanni F; Jones, Maurice D

    2018-01-01

    The sensory precision hypothesis holds that different seen and felt cues about the size of an object resolve themselves in favor of the more reliable modality. To examine this precision hypothesis, 60 college students were asked to look at one size while manually exploring another unseen size either with their bare fingers or, to lessen the reliability of touch, with their fingers sleeved in rigid tubes. Afterwards, the participants estimated either the seen size or the felt size by finding a match from a visual display of various sizes. Results showed that the seen size biased the estimates of the felt size when the reliability of touch decreased. This finding supports the interaction between touch reliability and visual bias predicted by statistically optimal models of sensory integration.
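
    The "statistically optimal" integration referred to above is commonly modeled as inverse-variance (reliability-weighted) averaging of the two cues. A sketch with hypothetical sizes and variances, where inflating the touch variance (the sleeved condition) pulls the combined estimate toward vision:

      def integrate(s_vision, var_vision, s_touch, var_touch):
          """Reliability-weighted combination of a seen and a felt size."""
          w_v = (1 / var_vision) / (1 / var_vision + 1 / var_touch)
          estimate = w_v * s_vision + (1 - w_v) * s_touch
          var = 1 / (1 / var_vision + 1 / var_touch)
          return estimate, var

      # Bare fingers: touch as reliable as vision, estimate splits the difference
      print(integrate(s_vision=60, var_vision=4, s_touch=50, var_touch=4))
      # Sleeved fingers: touch variance inflated, estimate biased toward vision
      print(integrate(s_vision=60, var_vision=4, s_touch=50, var_touch=16))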

  17. Avalanches and generalized memory associativity in a network model for conscious and unconscious mental functioning

    NASA Astrophysics Data System (ADS)

    Siddiqui, Maheen; Wedemann, Roseli S.; Jensen, Henrik Jeldtoft

    2018-01-01

    We explore statistical characteristics of avalanches associated with the dynamics of a complex-network model, where two modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's ideas regarding the neuroses and that consciousness is related with symbolic and linguistic memory activity in the brain. It incorporates the Stariolo-Tsallis generalization of the Boltzmann Machine in order to model memory retrieval and associativity. In the present work, we define and measure avalanche size distributions during memory retrieval, in order to gain insight regarding basic aspects of the functioning of these complex networks. The avalanche sizes defined for our model should be related to the time consumed and also to the size of the neuronal region which is activated, during memory retrieval. This allows the qualitative comparison of the behaviour of the distribution of cluster sizes, obtained during fMRI measurements of the propagation of signals in the brain, with the distribution of avalanche sizes obtained in our simulation experiments. This comparison corroborates the indication that the Nonextensive Statistical Mechanics formalism may indeed be more well suited to model the complex networks which constitute brain and mental structure.

  18. A pilot randomized trial of two cognitive rehabilitation interventions for mild cognitive impairment: caregiver outcomes.

    PubMed

    Cuc, Andrea V; Locke, Dona E C; Duncan, Noah; Fields, Julie A; Snyder, Charlene Hoffman; Hanna, Sherrie; Lunde, Angela; Smith, Glenn E; Chandler, Melanie

    2017-12-01

    This study aims to provide effect size estimates of the impact of two cognitive rehabilitation interventions provided to patients with mild cognitive impairment: computerized brain fitness exercise and memory support system on support partners' outcomes of depression, anxiety, quality of life, and partner burden. A randomized controlled pilot trial was performed. At 6 months, the partners from both treatment groups showed stable to improved depression scores, while partners in an untreated control group showed worsening depression over 6 months. There were no statistically significant differences on anxiety, quality of life, or burden outcomes in this small pilot trial; however, effect sizes were moderate, suggesting that the sample sizes in this pilot study were not adequate to detect statistical significance. Either form of cognitive rehabilitation may help partners' mood, compared with providing no treatment. However, effect size estimates related to other partner outcomes (i.e., burden, quality of life, and anxiety) suggest that follow-up efficacy trials will need sample sizes of at least 30-100 people per group to accurately determine significance. Copyright © 2017 John Wiley & Sons, Ltd.
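
    The closing sample-size claim can be sanity-checked with the usual normal-approximation formula for a two-sample comparison; alpha and power below are conventional choices, not values from the study:

      from math import ceil
      from scipy.stats import norm

      def n_per_group(d, alpha=0.05, power=0.80):
          """Per-group n to detect standardized effect d with a two-sample test."""
          z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
          return ceil(2 * (z / d) ** 2)

      for d in (0.4, 0.5, 0.8):
          print(f"d = {d}: n = {n_per_group(d)} per group")   # ~99, 63, 25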

  19. Generalized Sample Size Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    PubMed

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed for investigating contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of indices included in the formulas on the standard errors of contextual effect estimates are investigated with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased due to the complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of indices and to evaluate their potential bias, as illustrated in the example.

  20. A probabilistic mechanical model for prediction of aggregates’ size distribution effect on concrete compressive strength

    NASA Astrophysics Data System (ADS)

    Miled, Karim; Limam, Oualid; Sab, Karam

    2012-06-01

    To predict the effect of aggregates' size distribution on the concrete compressive strength, a probabilistic mechanical model is proposed. Within this model, a Voronoi tessellation of a set of non-overlapping, rigid, spherical aggregates is used to describe the concrete microstructure. Moreover, aggregates' diameters are defined as statistical variables and their size distribution function is identified with the experimental sieve curve. Then, an inter-aggregate failure criterion is proposed to describe the compressive-shear crushing of the hardened cement paste when concrete is subjected to uniaxial compression. Using a homogenization approach based on statistical homogenization and on geometrical simplifications, an analytical formula predicting the concrete compressive strength is obtained. This formula highlights the effects of cement paste strength and of aggregates' size distribution and volume fraction on the concrete compressive strength. According to the proposed model, increasing the concrete strength for the same cement paste and the same aggregates' volume fraction is achieved by decreasing both the aggregates' maximum size and the percentage of coarse aggregates. Finally, the validity of the model is discussed through a comparison with experimental results (15 concrete compressive strengths ranging between 46 and 106 MPa) taken from the literature, which show good agreement with the model predictions.

  1. Volatility measurement with directional change in Chinese stock market: Statistical property and investment strategy

    NASA Astrophysics Data System (ADS)

    Ma, Junjun; Xiong, Xiong; He, Feng; Zhang, Wei

    2017-04-01

    Stock price fluctuations are studied in this paper from an intrinsic-time perspective. Events, directional changes (DC) or overshoots, are taken as the time scale of the price series. Under this directional change law, the corresponding statistical properties and parameter estimation are tested in the Chinese stock market. Furthermore, a directional change trading strategy is proposed for investing in the market portfolio in the Chinese stock market, and both in-sample and out-of-sample performance are compared among the different methods of model parameter estimation. We conclude that the DC method can capture important fluctuations in the Chinese stock market and gain profit owing to the statistical property that the average upturn overshoot size is larger than the average downturn directional-change size. The optimal parameter of the DC method is not fixed, and we obtained a 1.8% annual excess return with this DC-based trading strategy.
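
    A directional-change dissection of a price series reduces to tracking running extremes against a threshold theta. The sketch below uses a synthetic random-walk price path and a placeholder threshold; the initial trend direction is an arbitrary choice:

      import numpy as np

      def directional_changes(prices, theta=0.018):   # e.g., 1.8% threshold
          events, mode = [], "up"        # mode: current trend being tracked
          ext = prices[0]                # running extreme (high in up, low in down)
          for t, p in enumerate(prices):
              if mode == "up":
                  ext = max(ext, p)
                  if p <= ext * (1 - theta):          # downturn DC confirmed
                      events.append(("down", t, p))
                      mode, ext = "down", p
              else:
                  ext = min(ext, p)
                  if p >= ext * (1 + theta):          # upturn DC confirmed
                      events.append(("up", t, p))
                      mode, ext = "up", p
          return events

      rng = np.random.default_rng(0)
      prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))
      print(len(directional_changes(prices)), "DC events")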

  2. PSYCHOLOGY. Estimating the reproducibility of psychological science.

    PubMed

    2015-08-28

    Reproducibility is a defining feature of science, but the extent to which it characterizes current research is unknown. We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects. Correlational tests suggest that replication success was better predicted by the strength of original evidence than by characteristics of the original and replication teams. Copyright © 2015, American Association for the Advancement of Science.

  3. Fast and accurate imputation of summary statistics enhances evidence of functional enrichment.

    PubMed

    Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P; Patterson, Nick; Price, Alkes L

    2014-10-15

    Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov model (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1-5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case-control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ² association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Availability: a publicly available software package is at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu. Supplementary materials are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  4. What is a species? A new universal method to measure differentiation and assess the taxonomic rank of allopatric populations, using continuous variables

    PubMed Central

    Donegan, Thomas M.

    2018-01-01

    Existing models for assigning species, subspecies, or no taxonomic rank to populations which are geographically separated from one another were analyzed. This was done by subjecting over 3,000 pairwise comparisons of vocal or biometric data based on birds to a variety of statistical tests that have been proposed as measures of differentiation. One current model which aims to test diagnosability (Isler et al. 1998) is highly conservative, applying a hard cut-off which excludes from consideration differentiation below the level of diagnosis. It also includes non-overlap as a requirement, a measure which penalizes increases in sample size. The “species scoring” model of Tobias et al. (2010) involves less drastic cut-offs but, unlike Isler et al. (1998), does not control adequately for sample size and in many cases attributes scores to differentiation which is not statistically significant. Four different models of assessing effect sizes were analyzed: using both pooled and unpooled standard deviations, and either controlling for sample size using t-distributions or omitting to do so. Pooled standard deviations produced more conservative effect sizes when uncontrolled for sample size but less conservative effect sizes when so controlled. Pooled models require assumptions to be made that are typically elusive or unsupported for taxonomic studies. Modifications to improve these frameworks are proposed, including: (i) introducing statistical significance as a gateway to attributing any weighting to findings of differentiation; (ii) abandoning non-overlap as a test; (iii) recalibrating Tobias et al. (2010) scores based on effect sizes controlled for sample size using t-distributions. A new universal method is proposed for measuring differentiation in taxonomy using continuous variables, and a formula is proposed for ranking allopatric populations. This is based first on calculating effect sizes using unpooled standard deviations, controlled for sample size using t-distributions, for a series of different variables. All non-significant results are excluded by scoring them as zero. The distance between any two populations is calculated using Euclidean summation of the non-zeroed effect size scores (see the sketch below). If the score of an allopatric pair exceeds that of a related sympatric pair, then the allopatric population can be ranked as a species; if not, then at most subspecies rank should be assigned. A spreadsheet has been programmed and made available which allows this and the other tests of differentiation and rank studied in this paper to be analyzed rapidly. PMID:29780266
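
    A minimal sketch of the proposed scoring pipeline, under stated assumptions where the abstract is not explicit: Welch's t-test stands in for the t-distribution significance gateway, the unpooled SD is taken as the root mean square of the two group SDs, and all names are hypothetical.

        import numpy as np
        from scipy import stats

        def differentiation_score(pop_a, pop_b, alpha=0.05):
            """Euclidean differentiation score across continuous variables.

            pop_a, pop_b : dicts mapping variable name -> 1-D array of
            measurements. Non-significant effect sizes are zeroed before
            the Euclidean summation, as the paper proposes.
            """
            effects = []
            for var in pop_a:
                a = np.asarray(pop_a[var], float)
                b = np.asarray(pop_b[var], float)
                _, p = stats.ttest_ind(a, b, equal_var=False)  # significance gateway
                if p >= alpha:
                    effects.append(0.0)          # score zero if not significant
                    continue
                sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)  # unpooled SD
                effects.append((a.mean() - b.mean()) / sd)
            return np.sqrt(np.sum(np.square(effects)))  # Euclidean summation

    A pair would then be ranked as a species if its score exceeds that of a related sympatric pair, per the proposed formula.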

  5. Student Performance in an Introductory Business Statistics Course: Does Delivery Mode Matter?

    ERIC Educational Resources Information Center

    Haughton, Jonathan; Kelly, Alison

    2015-01-01

    Approximately 600 undergraduates completed an introductory business statistics course in 2013 in one of two learning environments at Suffolk University, a mid-sized private university in Boston, Massachusetts. The comparison group completed the course in a traditional classroom-based environment, whereas the treatment group completed the course in…

  6. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    PubMed

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  7. Distinct polymer physics principles govern chromatin dynamics in mouse and Drosophila topological domains.

    PubMed

    Ea, Vuthy; Sexton, Tom; Gostan, Thierry; Herviou, Laurie; Baudement, Marie-Odile; Zhang, Yunzhe; Berlivet, Soizik; Le Lay-Taha, Marie-Noëlle; Cathala, Guy; Lesne, Annick; Victor, Jean-Marc; Fan, Yuhong; Cavalli, Giacomo; Forné, Thierry

    2015-08-15

    In higher eukaryotes, the genome is partitioned into large "Topologically Associating Domains" (TADs) in which the chromatin displays favoured long-range contacts. While a crumpled/fractal globule organization has received experimental support at higher-order levels, the organization principles that govern chromatin dynamics within these TADs remain unclear. Using simple polymer models, we previously showed that, in mouse liver cells, gene-rich domains tend to adopt a statistical helix shape when no significant locus-specific interaction takes place. Here, we use data from diverse 3C-derived methods to explore chromatin dynamics within mouse and Drosophila TADs. In mouse Embryonic Stem Cells (mESC), which possess large TADs (median size of 840 kb), we show that the statistical helix model, but not globule models, is relevant not only in gene-rich TADs, but also in gene-poor and gene-desert TADs. Interestingly, this statistical helix organization is considerably relaxed in mESC compared to liver cells, indicating that the impact of the constraints responsible for this organization is weaker in pluripotent cells. Finally, depletion of histone H1 in mESC alters local chromatin flexibility but not the statistical helix organization. In Drosophila, which possesses TADs of smaller sizes (median size of 70 kb), we show that, while chromatin compaction and flexibility are finely tuned according to the epigenetic landscape, chromatin dynamics within TADs are generally compatible with an unconstrained polymer configuration. Models derived from polymer physics can thus accurately describe the organization principles governing chromatin dynamics in both mouse and Drosophila TADs. However, the constraints applied to these dynamics within mammalian TADs have a distinctive impact, resulting in a statistical helix organization.

  8. Multiple category-lot quality assurance sampling: a new classification system with application to schistosomiasis control.

    PubMed

    Olives, Casey; Valadez, Joseph J; Brooker, Simon J; Pagano, Marcello

    2012-01-01

    Originally a binary classifier, Lot Quality Assurance Sampling (LQAS) has proven to be a useful tool for classification of the prevalence of Schistosoma mansoni into multiple categories (≤10%, >10 and <50%, ≥50%), and semi-curtailed sampling has been shown to effectively reduce the number of observations needed to reach a decision. To date, the statistical underpinnings of Multiple Category-LQAS (MC-LQAS) have not received full treatment. We explore the analytical properties of MC-LQAS and validate its use for the classification of S. mansoni prevalence in multiple settings in East Africa. We outline MC-LQAS design principles and formulae for operating characteristic curves. In addition, we derive the average sample number for MC-LQAS when utilizing semi-curtailed sampling and introduce curtailed sampling in this setting. We also assess the performance of MC-LQAS designs with maximum sample sizes of n=15 and n=25 via a weighted kappa-statistic using S. mansoni data collected in 388 schools from four studies in East Africa. Overall performance of MC-LQAS classification was high (kappa-statistic of 0.87). In three of the studies, the kappa-statistic for a design with n=15 was greater than 0.75. In the fourth study, where these designs performed poorly (kappa-statistic less than 0.50), the majority of observations fell in regions where potential error is known to be high. Employment of semi-curtailed and curtailed sampling further reduced the sample size by up to 0.5 and 3.5 observations per school, respectively, without increasing classification error. This work provides the needed analytics to understand the properties of MC-LQAS for assessing the prevalence of S. mansoni and shows that in most settings a sample size of 15 children provides a reliable classification of schools.
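
    The three-category decision rule lends itself to a short operating-characteristic sketch under the binomial model; the thresholds d1 and d2 below are illustrative, not the designs evaluated in the paper.

        from scipy.stats import binom

        def mc_lqas_probs(p, n=15, d1=2, d2=7):
            """Classification probabilities for a three-category LQAS rule:
            low (<=10%) if positives <= d1, high (>=50%) if positives >= d2,
            otherwise moderate."""
            p_low = binom.cdf(d1, n, p)
            p_high = binom.sf(d2 - 1, n, p)
            return p_low, 1.0 - p_low - p_high, p_high

        # Operating characteristics across true prevalences
        for p in (0.05, 0.10, 0.30, 0.50, 0.70):
            print(p, mc_lqas_probs(p))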

  9. The intriguing evolution of effect sizes in biomedical research over time: smaller but more often statistically significant.

    PubMed

    Monsarrat, Paul; Vergnes, Jean-Noel

    2018-01-01

    In medicine, effect sizes (ESs) allow the effects of independent variables (including risk/protective factors or treatment interventions) on dependent variables (e.g., health outcomes) to be quantified. Given that many public health decisions and health care policies are based on ES estimates, it is important to assess how ESs are used in the biomedical literature and to investigate potential trends in their reporting over time. Through a big data approach, the text mining process automatically extracted 814 120 ESs from 13 322 754 PubMed abstracts. Eligible ESs were risk ratio, odds ratio, and hazard ratio, along with their confidence intervals. Here we show a remarkable decrease in ES values in PubMed abstracts between 1990 and 2015 while, concomitantly, results become more often statistically significant. Medians of ES values have decreased over time for both "risk" and "protective" values. This trend was found in nearly all fields of biomedical research, with the most marked downward tendency in genetics. Over the same period, the proportion of statistically significant ESs increased regularly: among the abstracts with at least 1 ES, 74% were statistically significant in 1990-1995, vs 85% in 2010-2015. Whereas decreasing ESs could be an intrinsic evolution in biomedical research, the concomitant increase of statistically significant results is more intriguing. Although it is likely that growing sample sizes in biomedical research could explain these results, another explanation may lie in the "publish or perish" context of scientific research, with a probable growing orientation toward sensationalism in research reports. Important provisions must be made to improve the credibility of biomedical research and limit waste of resources. © The Authors 2017. Published by Oxford University Press.

  10. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. The method uses prior knowledge of the channel impulse response statistics. Accordingly, the optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In the VSS-SPU adaptive algorithms the filter coefficients are partially updated, which reduces the computational complexity. In the VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms feature good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
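
    For orientation, a plain fixed-step affine projection algorithm (APA) is sketched below; the paper's contribution is to replace the fixed mu with an MSD-minimizing step-size vector, which is not reproduced here. The filter length, projection order, and toy identification setup are illustrative.

        import numpy as np

        def apa_identify(x, d, num_taps=8, proj_order=4, mu=0.5, delta=1e-3):
            """Fixed step-size affine projection algorithm for system
            identification (delta is a small regularizer)."""
            w = np.zeros(num_taps)
            err = np.zeros(len(x))
            for n in range(num_taps + proj_order, len(x)):
                # Columns are the last proj_order input vectors
                X = np.array([x[n - k - num_taps + 1 : n - k + 1][::-1]
                              for k in range(proj_order)]).T
                d_vec = d[n - proj_order + 1 : n + 1][::-1]
                e = d_vec - X.T @ w
                w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(proj_order), e)
                err[n] = e[0]
            return w, err

        rng = np.random.default_rng(0)
        h = rng.standard_normal(8)                       # unknown channel
        x = rng.standard_normal(5000)
        d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
        w, err = apa_identify(x, d)                      # w converges toward h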

  11. Automated sampling assessment for molecular simulations using the effective sample size

    PubMed Central

    Zhang, Xin; Bhatt, Divesh; Zuckerman, Daniel M.

    2010-01-01

    To quantify the progress in the development of algorithms and forcefields used in molecular simulations, a general method for the assessment of sampling quality is needed. Statistical mechanics principles suggest that the populations of physical states characterize equilibrium sampling in a fundamental way. We therefore develop an approach for analyzing the variances in state populations, which quantifies the degree of sampling in terms of the effective sample size (ESS). The ESS estimates the number of statistically independent configurations contained in a simulated ensemble. The method is applicable to both traditional dynamics simulations and more modern (e.g., multi-canonical) approaches. Our procedure is tested in a variety of systems, from toy models to atomistic protein simulations. We also introduce a simple automated procedure to obtain approximate physical states from dynamic trajectories: this allows sample-size estimation in systems for which physical states are not known in advance. PMID:21221418
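
    The core idea, converting variance in state populations into an effective number of independent samples, can be sketched with a block estimator. This is a simplified reading of the approach, not the authors' exact estimator; the block count and the conservative min-over-states reduction are assumptions.

        import numpy as np

        def effective_sample_size(state_traj, n_blocks=10):
            """ESS from the variance of state populations across blocks.

            state_traj : 1-D array of integer state labels along a trajectory.
            For ESS independent samples per block, Var(p_hat) = p(1-p)/ESS.
            """
            blocks = np.array_split(np.asarray(state_traj), n_blocks)
            ess = []
            for s in np.unique(state_traj):
                fracs = np.array([np.mean(b == s) for b in blocks])
                p, var = fracs.mean(), fracs.var(ddof=1)
                if 0 < p < 1 and var > 0:
                    ess.append(p * (1 - p) / var)   # per-block ESS for state s
            return n_blocks * min(ess) if ess else float('nan')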

  12. Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes.

    PubMed

    Uhl, Jonathan T; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A W; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R; Liaw, P K; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A

    2015-11-17

    Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.

  13. Universal Quake Statistics: From Compressed Nanocrystals to Earthquakes

    PubMed Central

    Uhl, Jonathan T.; Pathak, Shivesh; Schorlemmer, Danijel; Liu, Xin; Swindeman, Ryan; Brinkman, Braden A. W.; LeBlanc, Michael; Tsekenis, Georgios; Friedman, Nir; Behringer, Robert; Denisov, Dmitry; Schall, Peter; Gu, Xiaojun; Wright, Wendelin J.; Hufnagel, Todd; Jennings, Andrew; Greer, Julia R.; Liaw, P. K.; Becker, Thorsten; Dresen, Georg; Dahmen, Karin A.

    2015-01-01

    Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or “quakes”. We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects “tuned critical” behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes. PMID:26572103
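
    The slip-size law quoted in both records, p(s) proportional to s^(-tau) * exp(-s/s_c), can be fit by maximum likelihood with a numerically computed normalization. The sketch below is a generic estimator with illustrative starting values, not the authors' analysis pipeline.

        import numpy as np
        from scipy import integrate, optimize

        def fit_truncated_powerlaw(s, s_min=1.0):
            """MLE for p(s) ~ s^(-tau) * exp(-s/s_c), s >= s_min."""
            s = np.asarray(s, float)
            s = s[s >= s_min]

            def nll(params):
                tau, log_sc = params
                sc = np.exp(log_sc)
                if tau <= 0:
                    return np.inf
                # Normalization Z = integral_{s_min}^inf x^-tau exp(-x/sc) dx
                Z, _ = integrate.quad(lambda x: x**(-tau) * np.exp(-x / sc),
                                      s_min, np.inf)
                return -(np.sum(-tau * np.log(s) - s / sc) - s.size * np.log(Z))

            res = optimize.minimize(nll, x0=[1.5, np.log(s.mean())],
                                    method='Nelder-Mead')
            return res.x[0], np.exp(res.x[1])   # (tau, s_c)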

  14. Adsorption of diclofenac and nimesulide on activated carbon: Statistical physics modeling and effect of adsorbate size

    NASA Astrophysics Data System (ADS)

    Sellaoui, Lotfi; Mechi, Nesrine; Lima, Éder Cláudio; Dotto, Guilherme Luiz; Ben Lamine, Abdelmottaleb

    2017-10-01

    Based on statistical physics elements, the equilibrium adsorption of diclofenac (DFC) and nimesulide (NM) on activated carbon was analyzed by a multilayer model with saturation. The paper aimed to describe the adsorption process experimentally and theoretically, and to study the effect of adsorbate size using the model parameters. From numerical simulation, the number of molecules per site showed that the adsorbate molecules (DFC and NM) were mostly anchored on both sides of the pore walls. The increase in receptor-site density suggested that additional sites appeared during the process to participate in DFC and NM adsorption. The behavior of the adsorption energy indicated that the process was physisorption. Finally, from a correlation of the model parameters, the effect of adsorbate size was deduced, indicating that molecular dimension has a negligible effect on DFC and NM adsorption.

  15. Experimental toxicology: Issues of statistics, experimental design, and replication.

    PubMed

    Briner, Wayne; Kirwan, Jeral

    2017-01-01

    The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Some challenges with statistical inference in adaptive designs.

    PubMed

    Hung, H M James; Wang, Sue-Jane; Yang, Peiling

    2014-01-01

    Adaptive designs have generated a great deal of attention in clinical trial communities. The literature contains many statistical methods to deal with the added statistical uncertainties concerning the adaptations. Increasingly encountered in regulatory applications are adaptive statistical information designs that allow modification of the sample size or related statistical information, and adaptive selection designs that allow selection of doses or patient populations during the course of a clinical trial. For adaptive statistical information designs, a few statistical testing methods are mathematically equivalent, as a number of articles have stipulated, but arguably there are large differences in their practical ramifications. We pinpoint some undesirable features of these methods in this work. For adaptive selection designs, selection based on biomarker data for testing the correlated clinical endpoints may increase statistical uncertainty in terms of type I error probability, and most importantly the increased statistical uncertainty may be impossible to assess.

  17. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data

    PubMed Central

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-01-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and produce false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic in an R package, T2GA, which is available at https://github.com/roqe/T2GA. PMID:28622336

  18. A knowledge-based T2-statistic to perform pathway analysis for quantitative proteomic data.

    PubMed

    Lai, En-Yu; Chen, Yi-Hau; Wu, Kun-Pin

    2017-06-01

    Approaches to identify significant pathways from high-throughput quantitative data have been developed in recent years. Still, the analysis of proteomic data remains difficult because of limited sample size. This limitation also leads to the common practice of using a competitive null, which fundamentally treats genes or proteins as independent units. The independence assumption ignores the associations among biomolecules with similar functions or cellular localization, as well as the interactions among them manifested as changes in expression ratios. Consequently, these methods often underestimate the associations among biomolecules and produce false positives in practice. Some studies incorporate the sample covariance matrix into the calculation to address this issue. However, the sample covariance may not be a precise estimate if the sample size is very limited, which is usually the case for data produced by mass spectrometry. In this study, we introduce a multivariate test under a self-contained null to perform pathway analysis for quantitative proteomic data. The covariance matrix used in the test statistic is constructed from the confidence scores retrieved from the STRING database or the HitPredict database. We also design an integrating procedure to retain pathways of sufficient evidence as a pathway group. The performance of the proposed T2-statistic is demonstrated using five published experimental datasets: the T-cell activation, the cAMP/PKA signaling, the myoblast differentiation, and the effect of dasatinib on the BCR-ABL pathway are proteomic datasets produced by mass spectrometry; the protective effect of myocilin via the MAPK signaling pathway is a gene expression dataset of limited sample size. Compared with other popular statistics, the proposed T2-statistic yields more accurate descriptions in agreement with the discussion of the original publication. We implemented the T2-statistic in an R package, T2GA, which is available at https://github.com/roqe/T2GA.
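
    If the knowledge-based covariance is treated as known under the self-contained null, the statistic reduces to a quadratic form with an approximate chi-square reference. The sketch below illustrates that reading; it is not the T2GA implementation, and the chi-square null is an assumption.

        import numpy as np
        from scipy import stats

        def pathway_t2(log_ratios, sigma0):
            """Self-contained pathway test with a knowledge-based covariance.

            log_ratios : (n, p) expression log-ratios for p pathway proteins
            sigma0     : (p, p) covariance built from interaction confidence
                         scores (e.g. STRING), replacing the sample covariance.
            """
            X = np.asarray(log_ratios, float)
            n, p = X.shape
            xbar = X.mean(axis=0)
            t2 = n * xbar @ np.linalg.solve(sigma0, xbar)
            return t2, stats.chi2.sf(t2, df=p)   # approximate p-value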

  19. Confidence intervals for effect sizes: compliance and clinical significance in the Journal of Consulting and Clinical Psychology.

    PubMed

    Odgaard, Eric C; Fowler, Robert L

    2010-06-01

    In 2005, the Journal of Consulting and Clinical Psychology (JCCP) became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest editorial effort to improve statistical reporting practices in any APA journal in at least a decade, in this article we investigate the efficacy of that change. All intervention studies published in JCCP in 2003, 2004, 2007, and 2008 were reviewed. Each article was coded for method of clinical significance, type of ES, and type of associated CI, broken down by statistical test (F, t, chi-square, r/R², and multivariate modeling). By 2008, clinical significance compliance was 75% (up from 31%), with 94% of studies reporting some measure of ES (reporting improved for individual statistical tests, with effect sizes ranging from η² = .05 to .17, with reasonable CIs). Reporting of CIs for ESs also improved, although only to 40%. Also, the vast majority of reported CIs used approximations, which become progressively less accurate for smaller sample sizes and larger ESs (cf. Algina & Keselman, 2003). Changes are near asymptote for ESs and clinical significance, but CIs lag behind. As CIs for ESs are required for primary outcomes, we show how to compute CIs for the vast majority of ESs reported in JCCP, with an example of how to use CIs for ESs as a method to assess clinical significance.
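
    For the exact construction the authors favor over approximations, a CI for Cohen's d from a two-sample t statistic can be obtained by inverting the noncentral t-distribution. This is the generic textbook construction, not code from the article.

        from scipy import stats, optimize

        def cohens_d_ci(t_obs, n1, n2, conf=0.95):
            """Exact CI for Cohen's d by noncentral-t inversion."""
            df = n1 + n2 - 2
            scale = (1 / n1 + 1 / n2) ** 0.5      # d = ncp * scale
            alpha = 1 - conf
            span = abs(t_obs) + 10
            # Lower limit: observed t sits at the (1 - alpha/2) quantile
            ncp_lo = optimize.brentq(
                lambda ncp: stats.nct.cdf(t_obs, df, ncp) - (1 - alpha / 2),
                -span, span)
            # Upper limit: observed t sits at the (alpha/2) quantile
            ncp_hi = optimize.brentq(
                lambda ncp: stats.nct.cdf(t_obs, df, ncp) - alpha / 2,
                -span, span)
            return ncp_lo * scale, ncp_hi * scale

        print(cohens_d_ci(2.5, 20, 20))   # e.g. t = 2.5 with n1 = n2 = 20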

  20. The relation between statistical power and inference in fMRI

    PubMed Central

    Wager, Tor D.; Yarkoni, Tal

    2017-01-01

    Statistically underpowered studies can result in experimental failure even when all other experimental considerations have been addressed impeccably. In fMRI, the combination of a large number of dependent variables, a relatively small number of observations (subjects), and the need to correct for multiple comparisons can decrease statistical power dramatically. This problem has been clearly described yet remains controversial, especially with regard to the expected effect sizes in fMRI, and especially for between-subjects effects such as group comparisons and brain-behavior correlations. We aimed to clarify the power problem by considering and contrasting two simulated scenarios of such possible brain-behavior correlations: weak diffuse effects and strong localized effects. Sampling from these scenarios shows that, particularly in the weak diffuse scenario, common sample sizes (n = 20-30) display extremely low statistical power, poorly represent the actual effects in the full sample, and show large variation on subsequent replications. Empirical data from the Human Connectome Project resemble the weak diffuse scenario much more than the strong localized scenario, which underscores the extent of the power problem for many studies. Possible solutions to the power problem include increasing the sample size, using less stringent thresholds, or focusing on a region-of-interest. However, these approaches are not always feasible and some have major drawbacks. The most prominent solutions that may help address the power problem include model-based (multivariate) prediction methods and meta-analyses with related synthesis-oriented approaches. PMID:29155843
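
    The scale of the problem is easy to reproduce: under the Fisher-z approximation, power to detect a weak brain-behavior correlation at a corrected threshold collapses for n = 20-30. The alpha below is an illustrative stand-in for a corrected voxelwise threshold.

        import numpy as np
        from scipy import stats

        def correlation_power(r_true, n, alpha=0.001):
            """Two-sided power for detecting a correlation via Fisher's z."""
            z_crit = stats.norm.isf(alpha / 2)
            ncp = np.arctanh(r_true) * np.sqrt(n - 3)  # mean of the z statistic
            return stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)

        for n in (20, 30, 100, 500):
            print(n, round(float(correlation_power(0.2, n)), 4))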

  1. How Many Subjects are Needed for a Visual Field Normative Database? A Comparison of Ground Truth and Bootstrapped Statistics.

    PubMed

    Phu, Jack; Bui, Bang V; Kalloniatis, Michael; Khuu, Sieu K

    2018-03-01

    The number of subjects needed to establish the normative limits for visual field (VF) testing is not known. Using bootstrap resampling, we determined whether the ground truth mean, distribution limits, and standard deviation (SD) could be approximated using different set size (x) levels, in order to provide guidance on the number of healthy subjects required to obtain robust VF normative data. We analyzed the 500 Humphrey Field Analyzer (HFA) SITA-Standard results of 116 healthy subjects and 100 HFA full threshold results of 100 psychophysically experienced healthy subjects. These VFs were resampled (bootstrapped) to determine mean sensitivity, distribution limits (5th and 95th percentiles), and SD for different values of x and numbers of resamples. We also used the VF results of 122 glaucoma patients to determine the performance of ground truth and bootstrapped results in identifying and quantifying VF defects. An x of 150 (for SITA-Standard) and 60 (for full threshold) produced bootstrapped descriptive statistics that were no longer different from the original distribution limits and SD. Removing outliers produced similar results. Differences between original and bootstrapped limits in detecting glaucomatous defects were minimized at x = 250. Ground truth statistics of VF sensitivities could be approximated using set sizes that are significantly smaller than the original cohort. Outlier removal facilitates the use of Gaussian statistics and does not significantly affect the distribution limits. We provide guidance for choosing the cohort size for different levels of error when performing normative comparisons with glaucoma patients.
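
    The resampling logic is straightforward to emulate. The sketch below bootstraps distribution limits and SD from a synthetic "ground truth" cohort; the normal sensitivities are placeholders, not HFA data.

        import numpy as np

        rng = np.random.default_rng(1)
        ground_truth = rng.normal(30, 2, size=500)   # stand-in for 500 VF results

        def bootstrap_limits(data, set_size, n_resamples=1000):
            """Mean 5th/95th percentiles and SD over bootstrapped sets."""
            out = np.array([
                (np.percentile(s, 5), np.percentile(s, 95), s.std(ddof=1))
                for s in (rng.choice(data, size=set_size, replace=True)
                          for _ in range(n_resamples))])
            return out.mean(axis=0)

        for x in (30, 60, 150, 250):   # compare against the full cohort's limits
            print(x, bootstrap_limits(ground_truth, x))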

  2. [Relationship between finger dermatoglyphics and body size indicators in adulthood among Chinese twin population from Qingdao and Lishui cities].

    PubMed

    Sun, Luanluan; Yu, Canqing; Lyu, Jun; Cao, Weihua; Pang, Zengchang; Chen, Weijian; Wang, Shaojie; Chen, Rongfu; Gao, Wenjing; Li, Liming

    2014-01-01

    To study the correlation between fingerprints and body size indicators in adulthood. Samples comprised twins from two sub-registries of the Chinese National Twin Registry (CNTR): 405 twin pairs in Lishui and 427 twin pairs in Qingdao. All participants completed the field survey, consisting of a questionnaire, physical examination, and blood collection. From the 832 twin pairs, those with complete and clear fingerprints were selected as the target population; fingerprint data and adulthood body size indicators from 100 twin pairs were finally included in this study. Descriptive statistics and mixed linear models were used for data analyses. In the mixed linear models adjusted for age and sex, the body fat percentage of those who had arches was higher than that of those who did not (P = 0.002), and those who had radial loops had a higher body fat percentage compared with those who did not (P = 0.041). After adjustment for age, there was no statistically significant correlation between radial loops and systolic pressure, but the correlations of arches (P = 0.031) and radial loops (P = 0.022) with diastolic pressure remained statistically significant. Statistically significant correlations were found between fingerprint types and body size indicators, and fingerprint types appear to be a useful tool to explore the effects of the uterine environment on health status in adulthood.

  3. Optimization method of superpixel analysis for multi-contrast Jones matrix tomography (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Miyazawa, Arata; Hong, Young-Joo; Makita, Shuichi; Kasaragod, Deepa K.; Miura, Masahiro; Yasuno, Yoshiaki

    2017-02-01

    Local statistics are widely utilized for quantification and image processing in OCT. For example, the local mean is used to reduce speckle, and the local variation of the polarization state (degree of polarization uniformity, DOPU) is used to visualize melanin. Conventionally, these statistics are calculated in a rectangular kernel whose size is uniform over the image. However, the fixed size and shape of the kernel result in a tradeoff between image sharpness and statistical accuracy. A superpixel is a cluster of pixels generated by grouping image pixels based on spatial proximity and similarity of signal values. Superpixels have varying sizes and flexible shapes that preserve tissue structure. Here we demonstrate a new superpixel method tailored for multifunctional Jones matrix OCT (JM-OCT). This method forms superpixels by clustering image pixels in a 6-dimensional (6-D) feature space (two spatial dimensions and four dimensions of optical features). All image pixels were clustered based on their spatial proximity and optical feature similarity. The optical features are scattering, OCT-A, birefringence, and DOPU. The method is applied to retinal OCT. The generated superpixels preserve tissue structures such as retinal layers, sclera, vessels, and the retinal pigment epithelium. Hence, a superpixel can be utilized as a local statistics kernel that is more suitable than a uniform rectangular kernel. The superpixelized image can also be used for further image processing and analysis; since it reduces the number of pixels to be analyzed, it reduces the computational cost of such processing.
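
    A minimal sketch of clustering pixels in such a 6-D space (two spatial coordinates plus four optical features) with k-means; the spatial weighting, feature scaling, and cluster count are assumptions, and the authors' actual superpixel algorithm may differ.

        import numpy as np
        from sklearn.cluster import KMeans

        def jm_superpixels(feats, n_superpixels=500, spatial_weight=0.2):
            """Cluster OCT pixels on (x, z, scattering, OCT-A, birefringence,
            DOPU); feats is an (H, W, 4) array of optical features."""
            H, W, _ = feats.shape
            zz, xx = np.mgrid[0:H, 0:W]
            X = np.column_stack([spatial_weight * xx.ravel(),
                                 spatial_weight * zz.ravel(),
                                 feats.reshape(-1, 4)])
            X[:, 2:] /= X[:, 2:].std(axis=0)   # balance the optical features
            labels = KMeans(n_clusters=n_superpixels, n_init=4).fit_predict(X)
            return labels.reshape(H, W)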

  4. [Biomechanical significance of the acetabular roof and its reaction to mechanical injury].

    PubMed

    Domazet, N; Starović, D; Nedeljković, R

    1999-01-01

    The introduction of morphometry into the quantitative analysis of the bone system and the functional adaptation of the acetabulum to mechanical damages and injuries enabled a relatively simple and acceptable examination of morphological acetabular changes in patients with damaged hip joints. Measurements of the depth and form of the acetabulum can be done by radiological methods, computerized tomography and ultrasound (1-9). The aim of the study was to obtain data on the behaviour of the acetabular roof, the so-called "eyebrow", by morphometric analyses during different mechanical injuries. Clinical studies of the effect of different loads on the acetabular roof were carried out in 741 patients. Radiographic findings of 400 men and 341 women were analysed. The control group was composed of 148 patients with normal hip joints. The average age of the patients was 54.7 years and that of control subjects 52.0 years. Data processing was done for all examined patients. On the basis of our measurements, the average size of the female "eyebrow" ranged from 24.8 mm to 31.5 mm with a standard deviation of 0.93, and in men from 29.4 mm to 40.3 mm with a standard deviation of 1.54. The average size in the whole population was 32.1 mm with a standard deviation of 15.61. Statistical analyses revealed a statistically significant correlation between age and "eyebrow" size in men (r = 0.124; p < 0.05), which was inverse (Graph 1). However, in female patients the correlation coefficient was not statistically significant (r = 0.060; p > 0.05). The examination of the size of the collodiaphyseal angle and the length of the "eyebrow" revealed that "eyebrow" length was in inverse proportion to the size of the collodiaphyseal angle (r = 0.113; p < 0.05). The average "eyebrow" length in relation to the size of the collodiaphyseal angle ranged from 21.3 mm to 35.2 mm with a standard deviation of 1.60. There was no statistically significant correlation between "eyebrow" size and Wiberg's angle in male (r = 0.049; p > 0.05) or female (r = 0.005; p > 0.05) patients. The "eyebrow" length was proportionally dependent on the size of the shortened extremity in all examined subjects. This dependence was statistically significant both in female (r = 0.208; p < 0.05) and male (r = 0.193; p < 0.05) patients. The study revealed that the fossa acetabuli was directed forward, downward and laterally. The size, form and cross-section of the acetabulum changed under different loads. Dimensions and morphological changes in the acetabulum showed some, but unimportant, differences in comparison with the control group. These findings are presented graphically in Figure 5 and numerically in Tables 1 and 2. The study of the spatial orientation of hip joints revealed that the fossa acetabuli was directed forward, downward and laterally; this was in accordance with the results of other authors (1, 7, 9, 15, 18). There was a statistically significant difference in "eyebrow" size between patients and normal subjects (t = 3.88; p < 0.05). The average difference in "eyebrow" size was 6.892 mm. A larger "eyebrow" was found in patients with a normally loaded hip. There was also a significant difference in "eyebrow" size between patients and healthy female subjects (t = 4.605; p < 0.05). A larger "eyebrow", by 8.79 mm, was found in female subjects with a normally loaded hip. On the basis of our study it can be concluded that findings related to changes in the acetabular roof, the so-called "eyebrow", are important in the diagnosis, follow-up and therapy of the pathogenetic processes of these disorders.

  5. Bayesian evaluation of effect size after replicating an original study

    PubMed Central

    van Aert, Robbie C. M.; van Assen, Marcel A. L. M.

    2017-01-01

    The vast majority of published results in the literature are statistically significant, which raises concerns about their reliability. The Reproducibility Project: Psychology (RPP) and the Experimental Economics Replication Project (EE-RP) both replicated a large number of published studies in psychology and economics. The original study and replication were both statistically significant in 36.1% of cases in RPP and 68.8% in EE-RP, suggesting many null effects among the replicated studies. However, evidence in favor of the null hypothesis cannot be examined with null hypothesis significance testing. We developed a Bayesian meta-analysis method called snapshot hybrid that is easy to use and understand and quantifies the amount of evidence in favor of a zero, small, medium, and large effect. The method computes posterior model probabilities for a zero, small, medium, and large effect and adjusts for publication bias by taking into account that the original study is statistically significant. We first analytically approximate the method's performance and demonstrate the necessity of controlling for the original study's significance to enable the accumulation of evidence for a true zero effect. We then apply the method to the data of RPP and EE-RP, showing that the underlying effect sizes of the studies included in EE-RP are generally larger than in RPP, but that the sample sizes, especially of the studies included in RPP, are often too small to draw definite conclusions about the true effect size. We also illustrate how snapshot hybrid can be used to determine the required sample size of a replication, akin to power analysis in null hypothesis significance testing, and present an easy-to-use web application (https://rvanaert.shinyapps.io/snapshot/) and R code for applying the method. PMID:28388646
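
    In the spirit of the snapshot hybrid, posterior probabilities over a zero, small, medium, and large true effect can be computed by truncating the original study's likelihood to its significant region. The sketch assumes known standard errors, equal prior weights, and a standardized effect scale; it is a simplification, not the authors' R code.

        import numpy as np
        from scipy import stats

        def snapshot_posterior(y_orig, se_orig, y_rep, se_rep,
                               effects=(0.0, 0.2, 0.5, 0.8), alpha=0.05):
            """Posterior model probabilities given a significant original
            study (truncated likelihood) and its replication."""
            crit = stats.norm.isf(alpha / 2) * se_orig   # significance bound
            post = []
            for theta in effects:
                # P(original significant | theta), both tails
                p_sig = (stats.norm.sf(crit, theta, se_orig)
                         + stats.norm.cdf(-crit, theta, se_orig))
                lik_orig = stats.norm.pdf(y_orig, theta, se_orig) / p_sig
                lik_rep = stats.norm.pdf(y_rep, theta, se_rep)
                post.append(lik_orig * lik_rep)
            post = np.array(post)
            return post / post.sum()

        print(snapshot_posterior(0.45, 0.15, 0.10, 0.12))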

  6. Evolution of sociality by natural selection on variances in reproductive fitness: evidence from a social bee.

    PubMed

    Stevens, Mark I; Hogendoorn, Katja; Schwarz, Michael P

    2007-08-29

    The Central Limit Theorem (CLT) is a statistical principle which states that as the number of repeated samples from any population increases, the variance among sample means will decrease and the means will become more normally distributed. It has been conjectured that the CLT has the potential to provide benefits for group living in some animals via greater predictability in food acquisition, if the number of foraging bouts increases with group size. The potential existence of benefits for group living derived from a purely statistical principle is highly intriguing, and it has implications for the origins of sociality. Here we show that in a social allodapine bee the relationship between cumulative food acquisition (measured as total brood weight) and colony size accords with the CLT. We show that deviations from expected food income decrease with group size, and that brood weights become more normally distributed both over time and with increasing colony size, as predicted by the CLT. Larger colonies are better able to match egg production to expected food intake, and better able to avoid the costs associated with producing more brood than can be reared while reducing the risk of under-exploiting the food resources that may be available. These benefits to group living derive from a purely statistical principle, rather than from ecological, ergonomic or genetic factors, and could apply to a wide variety of species. This in turn suggests that the CLT may provide benefits at the early evolutionary stages of sociality and that the evolution of group size could result from selection on variances in reproductive fitness. In addition, these findings may help explain why sociality has evolved in some groups and not others.
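
    The predicted effect is quick to verify numerically: averaging income over more foraging bouts shrinks the spread of per-capita returns roughly as 1/sqrt(k). The lognormal income model is illustrative only.

        import numpy as np

        rng = np.random.default_rng(0)
        # Per-bout food income: a skewed (lognormal) distribution
        for group_size in (1, 2, 4, 8, 16):
            per_capita = rng.lognormal(0.0, 1.0,
                                       size=(100_000, group_size)).mean(axis=1)
            print(group_size, round(per_capita.std(), 3))  # spread shrinks with size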

  7. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance

    PubMed Central

    Kramer, Karen L.; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children’s growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children’s monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children’s growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children’s growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children’s growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance. PMID:26938742

  8. Sibling Competition & Growth Tradeoffs. Biological vs. Statistical Significance.

    PubMed

    Kramer, Karen L; Veile, Amanda; Otárola-Castillo, Erik

    2016-01-01

    Early childhood growth has many downstream effects on future health and reproduction and is an important measure of offspring quality. While a tradeoff between family size and child growth outcomes is theoretically predicted in high-fertility societies, empirical evidence is mixed. This is often attributed to phenotypic variation in parental condition. However, inconsistent study results may also arise because family size confounds the potentially differential effects that older and younger siblings can have on young children's growth. Additionally, inconsistent results might reflect that the biological significance associated with different growth trajectories is poorly understood. This paper addresses these concerns by tracking children's monthly gains in height and weight from weaning to age five in a high fertility Maya community. We predict that: 1) as an aggregate measure family size will not have a major impact on child growth during the post weaning period; 2) competition from young siblings will negatively impact child growth during the post weaning period; 3) however because of their economic value, older siblings will have a negligible effect on young children's growth. Accounting for parental condition, we use linear mixed models to evaluate the effects that family size, younger and older siblings have on children's growth. Congruent with our expectations, it is younger siblings who have the most detrimental effect on children's growth. While we find statistical evidence of a quantity/quality tradeoff effect, the biological significance of these results is negligible in early childhood. Our findings help to resolve why quantity/quality studies have had inconsistent results by showing that sibling competition varies with sibling age composition, not just family size, and that biological significance is distinct from statistical significance.

  9. An Introductory Summary of Various Effect Size Choices.

    ERIC Educational Resources Information Center

    Cromwell, Susan

    This paper provides a tutorial summary of some of the many effect size choices so that members of the Southwest Educational Research Association would be better able to follow the recommendations of the American Psychological Association (APA) publication manual, the APA Task Force on Statistical Inference, and the publication requirements of some…

  10. Estimating an Effect Size in One-Way Multivariate Analysis of Variance (MANOVA)

    ERIC Educational Resources Information Center

    Steyn, H. S., Jr.; Ellis, S. M.

    2009-01-01

    When two or more univariate population means are compared, the proportion of variation in the dependent variable accounted for by population group membership is eta-squared. This effect size can be generalized by using multivariate measures of association, based on the multivariate analysis of variance (MANOVA) statistics, to establish whether…

  11. 12 CFR 208.22 - Community development and public welfare investments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... purposes of this section: (1) Low- or moderate-income area means: (i) One or more census tracts in a Metropolitan Statistical Area where the median family income adjusted for family size in each census tract is less than 80 percent of the median family income adjusted for family size of the Metropolitan...

  12. The American Arts Industry: Size and Significance.

    ERIC Educational Resources Information Center

    Chartrand, Harry Hillman

    In this study, the U.S. arts industry is conceptually defined and measured with respect to statistical size. The contribution and significance of the arts industry to the economy is then assessed within the context of national competitiveness and the emerging knowledge economy. Study findings indicate that the arts industry contributes between 5%…

  13. An Investigation of Sample Size Splitting on ATFIND and DIMTEST

    ERIC Educational Resources Information Center

    Socha, Alan; DeMars, Christine E.

    2013-01-01

    Modeling multidimensional test data with a unidimensional model can result in serious statistical errors, such as bias in item parameter estimates. Many methods exist for assessing the dimensionality of a test. The current study focused on DIMTEST. Using simulated data, the effects of sample size splitting for use with the ATFIND procedure for…

  14. Relationship of Class-Size to Classroom Processes, Teacher Satisfaction and Pupil Affect: A Meta-Analysis.

    ERIC Educational Resources Information Center

    Smith, Mary Lee; Glass, Gene V.

    Using data from previously completed research, the authors of this report attempted to examine the relationship between class size and measures of outcomes such as student attitudes and behavior, classroom processes and learning environment, and teacher satisfaction. The authors report that statistical integration of the existing research…

  15. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  16. Technology Tips: Sample Too Small? Probably Not!

    ERIC Educational Resources Information Center

    Strayer, Jeremy F.

    2013-01-01

    Statistical studies are referenced in the news every day, so frequently that people are sometimes skeptical of reported results. Often, no matter how large a sample size researchers use in their studies, people believe that the sample size is too small to make broad generalizations. The tasks presented in this article use simulations of repeated…

  17. The False Promise of Class-Size Reduction

    ERIC Educational Resources Information Center

    Chingos, Matthew M.

    2011-01-01

    Class-size reduction, or CSR, is enormously popular with parents, teachers, and the public in general. Many parents believe that their children will benefit from more individualized attention in a smaller class and many teachers find smaller classes easier to manage. The pupil-teacher ratio is an easy statistic for the public to monitor as a…

  18. State Estimates of Disability in America. Disability Statistics Report 3.

    ERIC Educational Resources Information Center

    LaPlante, Mitchell P.

    This study presents and discusses existing data on disability by state, from the 1980 and 1990 censuses, the Current Population Survey (CPS), and the National Health Interview Survey (NHIS). The study used direct methods for states with large sample sizes and synthetic estimates for states with low sample sizes. The study's highlighted findings…

  19. Statistics Clinic

    NASA Technical Reports Server (NTRS)

    Feiveson, Alan H.; Foy, Millennia; Ploutz-Snyder, Robert; Fiedler, James

    2014-01-01

    Do you have elevated p-values? Is the data analysis process getting you down? Do you experience anxiety when you need to respond to criticism of statistical methods in your manuscript? You may be suffering from Insufficient Statistical Support Syndrome (ISSS). For symptomatic relief of ISSS, come for a free consultation with JSC biostatisticians at our help desk during the poster sessions at the HRP Investigators Workshop. Get answers to common questions about sample size, missing data, multiple testing, when to trust the results of your analyses and more. Side effects may include sudden loss of statistics anxiety, improved interpretation of your data, and increased confidence in your results.

  20. The endothelial sample size analysis in corneal specular microscopy clinical examinations.

    PubMed

    Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci

    2012-05-01

    To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze the images of counted endothelial cells, called samples. The mean sample size was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and a customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had a sufficient number of endothelial cells (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had a sufficient number of endothelial cells (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had a sufficient number of endothelial cells (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had a sufficient number of endothelial cells (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sampling errors based on the Cells Analyzer software. The endothelial sample (examination) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
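
    One generic way to turn a reliability degree and relative error into a required cell count uses RE = z * CV / sqrt(n). This is a textbook formula offered purely as an illustration; the abstract does not spell out the Cells Analyzer algorithm.

        from scipy import stats

        def required_cells(cv, re=0.05, reliability=0.95):
            """Cells needed so the mean's relative error is within `re`,
            assuming RE = z * CV / sqrt(n)."""
            z = stats.norm.isf((1 - reliability) / 2)
            return (z * cv / re) ** 2

        # e.g. a cell-area CV of 30% needs ~139 cells for RE < 0.05
        print(round(required_cells(0.30)))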

  1. Will Outer Tropical Cyclone Size Change due to Anthropogenic Warming?

    NASA Astrophysics Data System (ADS)

    Schenkel, B. A.; Lin, N.; Chavas, D. R.; Vecchi, G. A.; Knutson, T. R.; Oppenheimer, M.

    2017-12-01

    Prior research has shown significant interbasin and intrabasin variability in outer tropical cyclone (TC) size. Moreover, outer TC size has been shown to vary substantially over the lifetime of the majority of TCs. However, the factors responsible both for setting initial outer TC size and for determining its evolution throughout the TC lifetime remain uncertain. Given these gaps in our physical understanding, there remains uncertainty in how outer TC size will change, if at all, due to anthropogenic warming. The present study seeks to quantify whether outer TC size will change significantly in response to anthropogenic warming using data from a high-resolution global climate model and a regional hurricane model. Similar to prior work, the outer TC size metric used in this study is the radius at which the azimuthal-mean surface azimuthal wind equals 8 m/s. The initial results from the high-resolution global climate model data suggest that the distribution of outer TC size shifts significantly towards larger values in each global TC basin in future climates, as revealed by 1) a statistically significant increase of the median outer TC size by 5-10% (p<0.05) according to a 1,000-sample bootstrap resampling approach with replacement and 2) statistically significant differences between the distributions of outer TC size from current and future climate simulations as shown by two-sample Kolmogorov-Smirnov testing (p<<0.01). Additional analysis of the high-resolution global climate model data reveals that outer TC size does not increase uniformly within each basin in future climates, but rather shows substantial locational dependence. Future work will incorporate the regional mesoscale hurricane model data to help identify the source of the spatial variability in outer TC size increases within each basin in future climates and, more importantly, why outer TC size changes in response to anthropogenic warming.
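
    Both tests map directly onto SciPy; the lognormal sizes below are placeholders for model output, not climate-model data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        size_current = rng.lognormal(5.50, 0.4, 800)   # illustrative radii (km)
        size_future = rng.lognormal(5.57, 0.4, 800)

        # Two-sample Kolmogorov-Smirnov test on the full distributions
        print(stats.ks_2samp(size_current, size_future))

        # 1,000-sample bootstrap (with replacement) of the median difference
        diffs = [np.median(rng.choice(size_future, 800)) -
                 np.median(rng.choice(size_current, 800)) for _ in range(1000)]
        lo, hi = np.percentile(diffs, [2.5, 97.5])
        print(f"median increase 95% CI: [{lo:.1f}, {hi:.1f}] km")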

  2. How often should we expect to be wrong? Statistical power, P values, and the expected prevalence of false discoveries.

    PubMed

    Marino, Michael J

    2018-05-01

    There is a clear perception in the literature that there is a crisis in reproducibility in the biomedical sciences. Many underlying factors contributing to the prevalence of irreproducible results have been highlighted with a focus on poor design and execution of experiments along with the misuse of statistics. While these factors certainly contribute to irreproducibility, relatively little attention outside of the specialized statistical literature has focused on the expected prevalence of false discoveries under idealized circumstances. In other words, when everything is done correctly, how often should we expect to be wrong? Using a simple simulation of an idealized experiment, it is possible to show the central role of sample size and the related quantity of statistical power in determining the false discovery rate, and in accurate estimation of effect size. According to our calculations, based on current practice many subfields of biomedical science may expect their discoveries to be false at least 25% of the time, and the only viable course to correct this is to require the reporting of statistical power and a minimum of 80% power (1 - β = 0.80) for all studies. Copyright © 2017 Elsevier Inc. All rights reserved.
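
    The arithmetic behind such estimates follows from Bayes' rule. With alpha = 0.05 and an assumed prior fraction of true hypotheses (0.25 here, an illustrative choice), the expected fraction of false discoveries drops as power rises:

        def expected_false_discovery_rate(power, alpha=0.05, prior_true=0.25):
            """Fraction of 'discoveries' that are false, given test power,
            the significance threshold, and the prior fraction of true
            hypotheses."""
            false_pos = alpha * (1 - prior_true)
            true_pos = power * prior_true
            return false_pos / (false_pos + true_pos)

        for power in (0.2, 0.5, 0.8):
            print(power, round(expected_false_discovery_rate(power), 3))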

  3. "TNOs are Cool": A survey of the trans-Neptunian region. XIII. Statistical analysis of multiple trans-Neptunian objects observed with Herschel Space Observatory

    NASA Astrophysics Data System (ADS)

    Kovalenko, I. D.; Doressoundiram, A.; Lellouch, E.; Vilenius, E.; Müller, T.; Stansberry, J.

    2017-11-01

    Context. Gravitationally bound multiple systems provide an opportunity to estimate the mean bulk density of the objects, whereas this characteristic is not available for single objects. Being a primitive population of the outer solar system, binary and multiple trans-Neptunian objects (TNOs) provide unique information about bulk density and internal structure, improving our understanding of their formation and evolution. Aims: The goal of this work is to analyse parameters of multiple trans-Neptunian systems, observed with Herschel and Spitzer space telescopes. Particularly, statistical analysis is done for radiometric size and geometric albedo, obtained from photometric observations, and for estimated bulk density. Methods: We use Monte Carlo simulation to estimate the real size distribution of TNOs. For this purpose, we expand the dataset of diameters by adopting the Minor Planet Center database list with available values of the absolute magnitude therein, and the albedo distribution derived from Herschel radiometric measurements. We use the 2-sample Anderson-Darling non-parametric statistical method for testing whether two samples of diameters, for binary and single TNOs, come from the same distribution. Additionally, we use the Spearman's coefficient as a measure of rank correlations between parameters. Uncertainties of estimated parameters together with lack of data are taken into account. Conclusions about correlations between parameters are based on statistical hypothesis testing. Results: We have found that the difference in size distributions of multiple and single TNOs is biased by small objects. The test on correlations between parameters shows that the effective diameter of binary TNOs strongly correlates with heliocentric orbital inclination and with magnitude difference between components of binary system. The correlation between diameter and magnitude difference implies that small and large binaries are formed by different mechanisms. Furthermore, the statistical test indicates, although not significant with the sample size, that a moderately strong correlation exists between diameter and bulk density. Herschel is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA.
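
    A minimal sketch of the two named procedures, the two-sample Anderson-Darling test and Spearman's rank correlation, is shown below using SciPy. The diameter and inclination samples are synthetic placeholders, not the Herschel/Spitzer measurements.

```python
# Illustrative sketch of the two tests named in the abstract, run on synthetic
# diameter samples (km) standing in for binary and single TNOs; not the authors' code.
import numpy as np
from scipy.stats import anderson_ksamp, spearmanr

rng = np.random.default_rng(1)
d_binary = rng.lognormal(mean=5.0, sigma=0.6, size=80)    # hypothetical diameters (km)
d_single = rng.lognormal(mean=4.8, sigma=0.7, size=120)

# k-sample Anderson-Darling test: do the two diameter samples share a distribution?
ad = anderson_ksamp([d_binary, d_single])
print("AD statistic:", ad.statistic, "significance level:", ad.significance_level)

# Spearman rank correlation, e.g. between diameter and orbital inclination
incl = rng.uniform(0, 35, size=d_binary.size)             # hypothetical inclinations (deg)
rho, p = spearmanr(d_binary, incl)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```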

  4. Probability of detection of internal voids in structural ceramics using microfocus radiography

    NASA Technical Reports Server (NTRS)

    Baaklini, G. Y.; Roth, D. J.

    1986-01-01

    The reliability of microfocus X-radiography for detecting subsurface voids in structural ceramic test specimens was statistically evaluated. The microfocus system was operated in the projection mode using low X-ray photon energies (20 keV) and a 10 micro m focal spot. The statistics were developed for implanted subsurface voids in green and sintered silicon carbide and silicon nitride test specimens. These statistics were compared with previously-obtained statistics for implanted surface voids in similar specimens. Problems associated with void implantation are discussed. Statistical results are given as probability-of-detection curves at a 95 percent confidence level for voids ranging in size from 20 to 528 micro m in diameter.

  5. Probability of detection of internal voids in structural ceramics using microfocus radiography

    NASA Technical Reports Server (NTRS)

    Baaklini, G. Y.; Roth, D. J.

    1985-01-01

    The reliability of microfocus x-radiography for detecting subsurface voids in structural ceramic test specimens was statistically evaluated. The microfocus system was operated in the projection mode using low X-ray photon energies (20 keV) and a 10 micro m focal spot. The statistics were developed for implanted subsurface voids in green and sintered silicon carbide and silicon nitride test specimens. These statistics were compared with previously-obtained statistics for implanted surface voids in similar specimens. Problems associated with void implantation are discussed. Statistical results are given as probability-of-detection curves at a 95 percent confidence level for voids ranging in size from 20 to 528 micro m in diameter.

  6. [Influence of hyperprolactinemia and tumoral size in the postoperative pituitary function in clinically nonfunctioning pituitary macroadenomas].

    PubMed

    Fonseca, Ana Luiza Vidal; Chimelli, Leila; Santos, Mario José C Felippe; Santos, Alair Augusto S M Damas dos; Violante, Alice Helena Dutra

    2002-09-01

    To study the influence of hyperprolactinemia and tumoral size on pituitary function in clinically nonfunctioning pituitary macroadenomas. Twenty-three patients with clinically nonfunctioning pituitary macroadenomas were evaluated by imaging studies (computed tomography or magnetic resonance) and basal hormone levels; 16 had preoperative hypothalamic-hypophysial function tests (megatests). All tumors had a histological diagnosis, and in seventeen an immunohistochemical study for adenohypophysial hormones was also performed. Student's t-test, the chi-square test, Fisher's exact test and McNemar's test were used for the statistical analysis. The level of significance adopted was 5% (p<0.05). Tumoral diameter varied from 1.1 to 4.7 cm (average=2.99 cm +/- 1.04). Preoperatively, 5 (21.7%) patients did not show a laboratory hormonal deficit, 9 (39.1%) developed hyperprolactinemia, 13 (56.5%) had normal levels of prolactin (PRL) and 1 (4.3%) subnormal levels; 18 (78.3%) patients developed hypopituitarism (4 pan-hypopituitarism). Nineteen patients (82.6%) underwent a transsphenoidal approach, 3 (13%) craniotomy and 1 (4.4%) a combined access. Only 6 patients had total tumoral resection. Of the 17 immunohistochemical studies, 5 tumours were immunonegative, 1 compound, 1 LH+, 1 FSH+, 1 alpha sub-unit and 8 showed focal or isolated immunoreactivity for one of the pituitary hormones or sub-units; of the other six tumours, 5 were chromophobe and 1 chromophobe/acidophile. No statistically significant difference was noted between tumoral size and preoperative PRL levels (p=0.82), nor between tumoral size and postoperative hormonal state, except for the GH and gonadal axes. Statistically significant associations were noted between tumoral size and preoperative hormonal state (except for the gonadal axis), and between normal PRL levels, associated with no or little preoperative hypophysial dysfunction, and recovery of postoperative pituitary function. Isolated preoperative hyperprolactinemia and tumoral size were not predictive of the recovery of postoperative pituitary function.

  7. Performance of Bootstrapping Approaches To Model Test Statistics and Parameter Standard Error Estimation in Structural Equation Modeling.

    ERIC Educational Resources Information Center

    Nevitt, Jonathan; Hancock, Gregory R.

    2001-01-01

    Evaluated the bootstrap method under varying conditions of nonnormality, sample size, model specification, and number of bootstrap samples drawn from the resampling space. Results for the bootstrap suggest the resampling-based method may be conservative in its control over model rejections, thus having an impact on the statistical power associated…

  8. The Probability of Obtaining Two Statistically Different Test Scores as a Test Index

    ERIC Educational Resources Information Center

    Muller, Jorg M.

    2006-01-01

    A new test index is defined as the probability that two randomly selected test scores are statistically different (PDTS). After giving a concept definition of the test index, two simulation studies are presented. The first analyzes the influence of the distribution of test scores, test reliability, and sample size on PDTS within classical…

  9. Disability Statistics in the Developing World: A Reflection on the Meanings in Our Numbers

    ERIC Educational Resources Information Center

    Fujiura, Glenn T.; Park, Hye J.; Rutkowski-Kmitta, Violet

    2005-01-01

    Background: The imbalance between the sheer size of the developing world and what little is known about the lives and life circumstances of persons with disabilities living there should command our attention. Method: International development initiatives routinely give great priority to the collection of statistical indicators yet even the most…

  10. The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups

    ERIC Educational Resources Information Center

    Pero-Cebollero, Maribel; Guardia-Olmos, Joan

    2013-01-01

    In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…

  11. Application of binomial and multinomial probability statistics to the sampling design process of a global grain tracing and recall system

    USDA-ARS?s Scientific Manuscript database

    Small, coded, pill-sized tracers embedded in grain are proposed as a method for grain traceability. A sampling process for a grain traceability system was designed and investigated by applying probability statistics using a science-based sampling approach to collect an adequate number of tracers fo...

  12. Statistical Clustering and the Contents of the Infant Vocabulary

    ERIC Educational Resources Information Center

    Swingley, Daniel

    2005-01-01

    Infants parse speech into word-sized units according to biases that develop in the first year. One bias, present before the age of 7 months, is to cluster syllables that tend to co-occur. The present computational research demonstrates that this statistical clustering bias could lead to the extraction of speech sequences that are actual words,…

  13. The three-dimensional structure of cumulus clouds over the ocean. 1: Structural analysis

    NASA Technical Reports Server (NTRS)

    Kuo, Kwo-Sen; Welch, Ronald M.; Weger, Ronald C.; Engelstad, Mark A.; Sengupta, S. K.

    1993-01-01

    Thermal channel (channel 6, 10.4-12.5 micrometers) images of five Landsat thematic mapper cumulus scenes over the ocean are examined. These images are thresholded using the standard International Satellite Cloud Climatology Project (ISCCP) thermal threshold algorithm. The individual clouds in the cloud fields are segmented to obtain their structural statistics which include size distribution, orientation angle, horizontal aspect ratio, and perimeter-to-area (PtA) relationship. The cloud size distributions exhibit a double power law with the smaller clouds having a smaller absolute exponent. The cloud orientation angles, horizontal aspect ratios, and PtA exponents are found in good agreement with earlier studies. A technique is also developed to recognize individual cells within a cloud so that statistics of cloud cellular structure can be obtained. Cell structural statistics are computed for each cloud. Unicellular clouds are generally smaller (less than or equal to 1 km) and have smaller PtA exponents, while multicellular clouds are larger (greater than or equal to 1 km) and have larger PtA exponents. Cell structural statistics are similar to those of the smaller clouds. When each cell is approximated as a quadric surface using a linear least squares fit, most cells have the shape of a hyperboloid of one sheet, but about 15% of the cells are best modeled by a hyperboloid of two sheets. Less than 1% of the clouds are ellipsoidal. The number of cells in a cloud increases slightly faster than linearly with increasing cloud size. The mean nearest neighbor distance between cells in a cloud, however, appears to increase linearly with increasing cloud size and to reach a maximum when the cloud effective diameter is about 10 km; then it decreases with increasing cloud size. Sensitivity studies of threshold and lapse rate show that neither has a significant impact upon the results. A goodness-of-fit ratio is used to provide a quantitative measure of the individual cloud results. Significantly improved results are obtained after applying a smoothing operator, suggesting that eliminating subresolution-scale variations with higher spatial resolution may yield even better shape analyses.
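
    One of the structural statistics above, the perimeter-to-area (PtA) relationship, is typically summarised by the exponent of a power law P ∝ A^(D/2), estimated from a linear fit in log-log space. The sketch below performs that fit on synthetic area/perimeter pairs with an assumed exponent; it is only meant to make the quantity concrete and does not use the Landsat data.

```python
# Sketch of one structural statistic named above: the perimeter-to-area (PtA)
# exponent, obtained from a linear fit in log-log space, P ~ A**(D/2).
# The areas and perimeters below are synthetic placeholders, not Landsat data.
import numpy as np

rng = np.random.default_rng(2)
area = rng.lognormal(mean=0.0, sigma=1.0, size=500)             # km^2, hypothetical
fractal_dim = 1.35                                              # assumed "true" PtA dimension
perimeter = 4.0 * area**(fractal_dim / 2) * rng.lognormal(0, 0.05, size=500)

slope, intercept = np.polyfit(np.log(area), np.log(perimeter), 1)
print(f"fitted PtA exponent D/2 = {slope:.2f}  (so D = {2*slope:.2f})")
```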

  14. Pupil size in Jewish theological seminary students.

    PubMed

    Shemesh, G; Kesler, A; Lazar, M; Rothkoff, L

    2004-01-01

    To investigate the authors' clinical impression that pupil size among myopic Jewish theological seminary students is different from pupil size of similar secular subjects. This cross-sectional study was conducted on 28 male Jewish theological seminary students and 28 secular students or workers who were matched for age and refraction. All participants were consecutively enrolled. Scotopic and photopic pupil size was measured by means of a Colvard pupillometer. Comparisons of various parameters between the groups were performed using the two-sample t-test, Fisher exact test, a paired-sample t-test, a two-way analysis of variance, and Pearson correlation coefficients as appropriate. The two groups were statistically matched for age, refraction, and visual acuity. The seminary students were undercorrected by an average of 2.35 diopters (D), while the secular subjects were undercorrected by only 0.65 D (p<0.01). The average pupil size was larger in the religious group under both scotopic and photopic luminance. This difference was maintained when the two groups were compared according to iris color under both conditions, reaching a level of statistical significance (p<0.0001). There was a significant difference in photopic pupil size between dark and light irises (p=0.049), but this difference was not maintained under scotopic conditions. The average pupil size of young ultraorthodox seminary students was significantly larger than that of matched secular subjects. Whether this is the result of intensive close-up work or of apparently characteristic undercorrection of the myopia is undetermined.

  15. A geometrical optics approach for modeling aperture averaging in free space optical communication applications

    NASA Astrophysics Data System (ADS)

    Yuksel, Heba; Davis, Christopher C.

    2006-09-01

    Intensity fluctuations at the receiver in free space optical (FSO) communication links lead to a received power variance that depends on the size of the receiver aperture. Increasing the size of the receiver aperture reduces the power variance. This effect of the receiver size on power variance is called aperture averaging. If there were no aperture size limitation at the receiver, then there would be no turbulence-induced scintillation. In practice, there is always a tradeoff between aperture size, transceiver weight, and potential transceiver agility for pointing, acquisition and tracking (PAT) of FSO communication links. We have developed a geometrical simulation model to predict the aperture averaging factor. This model is used to simulate the aperture averaging effect at given range by using a large number of rays, Gaussian as well as uniformly distributed, propagating through simulated turbulence into a circular receiver of varying aperture size. Turbulence is simulated by filling the propagation path with spherical bubbles of varying sizes and refractive index discontinuities statistically distributed according to various models. For each statistical representation of the atmosphere, the three-dimensional trajectory of each ray is analyzed using geometrical optics. These Monte Carlo techniques have proved capable of assessing the aperture averaging effect, in particular, the quantitative expected reduction in intensity fluctuations with increasing aperture diameter. In addition, beam wander results have demonstrated the range-cubed dependence of mean-squared beam wander. An effective turbulence parameter can also be determined by correlating beam wander behavior with the path length.

  16. Brain size growth in wild and captive chimpanzees (Pan troglodytes).

    PubMed

    Cofran, Zachary

    2018-05-24

    Despite many studies of chimpanzee brain size growth, intraspecific variation is under-explored. Brain size data from chimpanzees of the Taï Forest and the Yerkes Primate Research Center enable a unique glimpse into brain growth variation as age at death is known for individuals, allowing cross-sectional growth curves to be estimated. Because Taï chimpanzees are from the wild but Yerkes apes are captive, potential environmental effects on neural development can also be explored. Previous research has revealed differences in growth and health between wild and captive primates, but such habitat effects have yet to be investigated for brain growth. Here, I use an iterative curve fitting procedure to estimate brain growth and regression parameters for each population, statistically comparing growth models using bootstrapped confidence intervals. Yerkes and Taï brain sizes overlap at all ages, although the sole Taï newborn is at the low end of captive neonatal variation. Growth rate and duration are statistically indistinguishable between the two populations. Resampling the Yerkes sample to match the Taï sample size and age group composition shows that ontogenetic variation in the two groups is remarkably similar despite the latter's limited size. Best fit growth curves for each sample indicate cessation of brain size growth at around 2 years, earlier than has previously been reported. The overall similarity between wild and captive chimpanzees points to the canalization of brain growth in this species. © 2018 Wiley Periodicals, Inc.
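
    A minimal sketch of the fitting-plus-bootstrap workflow described above: a simple saturating growth model is fitted with nonlinear least squares and the fit is bootstrapped to obtain confidence intervals on its parameters. The growth model, parameter values, and data are assumptions chosen for illustration, not the study's model or the Taï/Yerkes measurements.

```python
# Minimal sketch: fit a saturating growth curve to cross-sectional brain-size data
# and bootstrap the fit to get confidence intervals on growth rate and asymptote.
# The data below are synthetic, not the Tai or Yerkes samples.
import numpy as np
from scipy.optimize import curve_fit

def growth(age, adult, neonate, rate):
    """Monomolecular growth: size rises from 'neonate' toward 'adult' at 'rate'."""
    return adult - (adult - neonate) * np.exp(-rate * age)

rng = np.random.default_rng(3)
age = rng.uniform(0, 10, 60)                                    # years
size = growth(age, 390.0, 150.0, 1.4) + rng.normal(0, 10, 60)   # cc, hypothetical

p0 = (380.0, 140.0, 1.0)
params, _ = curve_fit(growth, age, size, p0=p0)

# Bootstrap: resample individuals, refit, collect parameter draws
draws = []
for _ in range(1000):
    idx = rng.integers(0, age.size, age.size)
    try:
        b, _ = curve_fit(growth, age[idx], size[idx], p0=p0, maxfev=5000)
        draws.append(b)
    except RuntimeError:
        continue                                                # skip non-converged resamples
draws = np.array(draws)
lo, hi = np.percentile(draws, [2.5, 97.5], axis=0)
print("adult size, neonate size, rate:", np.round(params, 2))
print("95% bootstrap CIs:", np.round(lo, 2), np.round(hi, 2))
```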

  17. Determination of cardiac size following space missions of different durations: the second manned Skylab mission.

    PubMed

    Nicogossian, A; Hoffler, G W; Johnson, R L; Gowen, R J

    1976-04-01

    A simple method to estimate cardiac size from single frontal plane chest roentgenograms has been described. Pre- and postflight chest X-rays from Apollo 17, and Skylab 2 and 3 have been analyzed for changes in the cardiac silhouette size. The data obtained from the computed cardiothoracic areal ratios compared well with the clinical cardiothoracic diametral ratios (r = .86). Though an overall postflight decrease in cardiac size is evident, the mean difference was not statistically significant (n = 8). The individual decreases in the cardiac silhouette size postflight are thought to be due to decrements in intracardiac chamber volumes rather than in myocardial muscle mass.

  18. Estimation of sample size and testing power (Part 4).

    PubMed

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.
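
    For the one-factor, two-level design discussed above, the usual normal-approximation sample-size formula for comparing two means is n per group ≈ 2(z_{1−α/2} + z_{1−β})²σ²/δ². A small sketch follows, with illustrative inputs rather than values from the article.

```python
# Sketch of the standard normal-approximation formula behind a difference test with
# one factor at two levels: n per group for a two-sided two-sample comparison of means.
import math
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Approximate n per group to detect a mean difference 'delta' with common SD 'sd'."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Example: detect a 5-unit difference with SD 10 at alpha = 0.05 and 80% power
print(n_per_group(delta=5.0, sd=10.0))   # about 63 per group
```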

  19. Determination of cardiac size following space missions of different durations - The second manned Skylab mission

    NASA Technical Reports Server (NTRS)

    Nicogossian, A.; Hoffler, G. W.; Johnson, R. L.; Gowen, R. J.

    1976-01-01

    A simple method to estimate cardiac size from single frontal plane chest roentgenograms has been described. Pre- and postflight chest X-rays from Apollo 17, and Skylab 2 and 3 have been analyzed for changes in the cardiac silhouette size. The data obtained from the computed cardiothoracic areal ratios compared well with the clinical cardiothoracic diametral ratios (r = .86). Though an overall postflight decrease in cardiac size is evident, the mean difference was not statistically significant (n = 8). The individual decreases in the cardiac silhouette size postflight are thought to be due to decrements in intracardiac chamber volumes rather than in myocardial muscle mass.

  20. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    PubMed

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their usage is discussed controversially in public. Thus, an optimal sample size for these projects should be aimed at from a biometrical point of view. Statistical sample size calculation is usually the appropriate methodology in planning medical research projects. However, required information is often not valid or only available during the course of an animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  1. Statistical computation of tolerance limits

    NASA Technical Reports Server (NTRS)

    Wheeler, J. T.

    1993-01-01

    Based on a new theory, two computer codes were developed specifically to calculate the exact statistical tolerance limits for normal distributions with unknown means and variances for the one-sided and two-sided cases for the tolerance factor, k. The quantity k is defined equivalently in terms of the noncentral t-distribution by the probability equation. Two of the four mathematical methods employ the theory developed for the numerical simulation. Several algorithms for numerically integrating and iteratively root-solving the working equations are written to augment the program simulation. The program codes generate tables of k values associated with varying values of the proportion and sample size for each given probability, to show the accuracy obtained for small sample sizes.
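
    The one-sided tolerance factor defined through the noncentral t-distribution can be evaluated in a few lines; the sketch below uses the textbook relation k = t'_{γ; n−1, δ}/√n with noncentrality δ = z_p√n, so that x̄ + k·s covers at least a proportion p of the population with confidence γ. It reproduces the kind of k tables the abstract describes, but it is not the NASA code.

```python
# Sketch of the one-sided normal tolerance factor k via the noncentral t-distribution:
# xbar + k*s covers at least proportion p of the population with confidence gamma.
import numpy as np
from scipy.stats import norm, nct

def k_one_sided(n, p=0.90, gamma=0.95):
    """Exact one-sided tolerance factor for a normal sample of size n."""
    delta = norm.ppf(p) * np.sqrt(n)              # noncentrality parameter
    return nct.ppf(gamma, df=n - 1, nc=delta) / np.sqrt(n)

for n in (5, 10, 30, 100):
    print(n, round(k_one_sided(n), 4))
```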

  2. The analysis of morphometric data on rocky mountain wolves and arctic wolves using statistical methods

    NASA Astrophysics Data System (ADS)

    Ammar Shafi, Muhammad; Saifullah Rusiman, Mohd; Hamzah, Nor Shamsidah Amir; Nor, Maria Elena; Ahmad, Noor’ani; Azia Hazida Mohamad Azmi, Nur; Latip, Muhammad Faez Ab; Hilmi Azman, Ahmad

    2018-04-01

    Morphometrics is a quantitative analysis that depends on the shape and size of several specimens. Morphometric quantitative analyses are commonly used to analyse the fossil record, the shape and size of specimens, and similar data. The aim of the study is to find the differences between rocky mountain wolves and arctic wolves based on gender. The sample utilised secondary data which included seven independent variables and two dependent variables. Statistical modelling was used in the analysis, namely the analysis of variance (ANOVA) and multivariate analysis of variance (MANOVA). The results showed differences between arctic wolves and rocky mountain wolves based on the independent factors and gender.
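
    A small illustration of the ANOVA/MANOVA comparison described above is sketched below on a synthetic morphometric table; the variable names, group labels, and effect sizes are made up and are not the wolf dataset.

```python
# Illustrative sketch of an ANOVA/MANOVA comparison on a small synthetic
# morphometric table; column names and values are invented, not the wolf data.
import numpy as np
import pandas as pd
from scipy.stats import f_oneway
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
n = 40
df = pd.DataFrame({
    "group": np.repeat(["rocky_mountain", "arctic"], n),
    "sex": np.tile(["male", "female"], n),
})
df["skull_length"] = 23 + (df["group"] == "arctic") * 1.2 + rng.normal(0, 1.0, 2 * n)
df["skull_width"] = 13 + (df["group"] == "arctic") * 0.6 + rng.normal(0, 0.7, 2 * n)

# One-way ANOVA per variable
for col in ("skull_length", "skull_width"):
    groups = [g[col].to_numpy() for _, g in df.groupby("group")]
    print(col, f_oneway(*groups))

# MANOVA on both variables jointly, with group and sex as factors
fit = MANOVA.from_formula("skull_length + skull_width ~ group + sex", data=df)
print(fit.mv_test())
```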

  3. A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.

    PubMed

    Westgard, James O

    2017-03-01

    A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January, 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provides a simple way to select the right control rules and the right number of control measurements. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. EXTENDING MULTIVARIATE DISTANCE MATRIX REGRESSION WITH AN EFFECT SIZE MEASURE AND THE ASYMPTOTIC NULL DISTRIBUTION OF THE TEST STATISTIC

    PubMed Central

    McArtor, Daniel B.; Lubke, Gitta H.; Bergeman, C. S.

    2017-01-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains. PMID:27738957

  5. Extending multivariate distance matrix regression with an effect size measure and the asymptotic null distribution of the test statistic.

    PubMed

    McArtor, Daniel B; Lubke, Gitta H; Bergeman, C S

    2017-12-01

    Person-centered methods are useful for studying individual differences in terms of (dis)similarities between response profiles on multivariate outcomes. Multivariate distance matrix regression (MDMR) tests the significance of associations of response profile (dis)similarities and a set of predictors using permutation tests. This paper extends MDMR by deriving and empirically validating the asymptotic null distribution of its test statistic, and by proposing an effect size for individual outcome variables, which is shown to recover true associations. These extensions alleviate the computational burden of permutation tests currently used in MDMR and render more informative results, thus making MDMR accessible to new research domains.
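
    The core MDMR computation, a pseudo-F statistic on a Gower-centered distance matrix calibrated against a permutation null, can be sketched generically as below. This illustrates the permutation route whose computational burden the paper's asymptotic results alleviate; it is a generic implementation, not the authors' package, and the outcome and predictor matrices are random placeholders.

```python
# Generic sketch of the MDMR pseudo-F statistic with a permutation null; not the
# authors' implementation. Outcomes Y and predictors X below are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform

def mdmr_pseudo_f(D, X):
    """Pseudo-F for distance matrix D (n x n) and predictor matrix X (n x p, no intercept)."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                    # Gower-centered inner-product matrix
    Xd = np.column_stack([np.ones(n), X])          # add intercept
    H = Xd @ np.linalg.pinv(Xd.T @ Xd) @ Xd.T      # hat matrix
    p = X.shape[1]
    num = np.trace(H @ G @ H) / p
    den = np.trace((np.eye(n) - H) @ G @ (np.eye(n) - H)) / (n - p - 1)
    return num / den

rng = np.random.default_rng(5)
Y = rng.normal(size=(60, 8))                       # hypothetical multivariate outcomes
X = rng.normal(size=(60, 2))                       # hypothetical predictors
D = squareform(pdist(Y))                           # Euclidean response-profile distances

obs = mdmr_pseudo_f(D, X)
perms = [mdmr_pseudo_f(D, X[rng.permutation(60)]) for _ in range(499)]
p_value = (1 + sum(f >= obs for f in perms)) / (1 + len(perms))
print(f"pseudo-F = {obs:.3f}, permutation p = {p_value:.3f}")
```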

  6. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öztürk, Hande; Noyan, I. Cevdet

    A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.

  7. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE PAGES

    Öztürk, Hande; Noyan, I. Cevdet

    2017-08-24

    A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.
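
    In the simplest picture, the counting statistics at the heart of the abstract reduce to binomial sampling: with N crystallites, each having probability p of being oriented to diffract into a given Bragg peak, the diffracting-grain count has mean Np and standard deviation √(Np(1−p)), so the relative intensity fluctuation grows as grains become fewer and larger. The sketch below evaluates these moments for an assumed p; the numbers are illustrative only and do not reproduce the paper's derived expressions.

```python
# Sketch of the underlying counting statistics: for N crystallites each with
# probability p of satisfying the diffraction condition for a given Bragg peak,
# the number of diffracting grains is binomial. The p value below is arbitrary.
import numpy as np

def diffracting_grain_stats(n_grains, p_diffract):
    mean = n_grains * p_diffract
    std = np.sqrt(n_grains * p_diffract * (1.0 - p_diffract))
    return mean, std, std / mean     # relative fluctuation of the grain-count intensity proxy

for n_grains in (1e4, 1e6, 1e9):     # fewer, larger grains -> fewer, noisier counts
    mean, std, rel = diffracting_grain_stats(n_grains, p_diffract=1e-4)
    print(f"N={n_grains:.0e}: <m>={mean:.1f}, sd={std:.2f}, relative fluctuation={rel:.1%}")
```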

  8. Small numbers, disclosure risk, security, and reliability issues in Web-based data query systems.

    PubMed

    Rudolph, Barbara A; Shah, Gulzar H; Love, Denise

    2006-01-01

    This article describes the process for developing consensus guidelines and tools for releasing public health data via the Web and highlights approaches leading agencies have taken to balance disclosure risk with public dissemination of reliable health statistics. An agency's choice of statistical methods for improving the reliability of released data for Web-based query systems is based upon a number of factors, including query system design (dynamic analysis vs preaggregated data and tables), population size, cell size, data use, and how data will be supplied to users. The article also describes those efforts that are necessary to reduce the risk of disclosure of an individual's protected health information.

  9. Benchmarking protein-protein interface predictions: why you should care about protein size.

    PubMed

    Martin, Juliette

    2014-07-01

    A number of predictive methods have been developed to predict protein-protein binding sites. Each new method is traditionally benchmarked using sets of protein structures of various sizes, and global statistics are used to assess the quality of the prediction. Little attention has been paid to the potential bias due to protein size on these statistics. Indeed, small proteins involve proportionally more residues at interfaces than large ones. If a predictive method is biased toward small proteins, this can lead to an over-estimation of its performance. Here, we investigate the bias due to the size effect when benchmarking protein-protein interface prediction on the widely used docking benchmark 4.0. First, we simulate random scores that favor small proteins over large ones. Instead of the 0.5 AUC (Area Under the Curve) value expected by chance, these biased scores result in an AUC equal to 0.6 using hypergeometric distributions, and up to 0.65 using constant scores. We then use real prediction results to illustrate how to detect the size bias by shuffling, and subsequently correct it using a simple conversion of the scores into normalized ranks. In addition, we investigate the scores produced by eight published methods and show that they are all affected by the size effect, which can change their relative ranking. The size effect also has an impact on linear combination scores by modifying the relative contributions of each method. In the future, systematic corrections should be applied when benchmarking predictive methods using data sets with mixed protein sizes. © 2014 Wiley Periodicals, Inc.
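
    The correction proposed above, converting scores into normalized ranks within each protein before pooling, is easy to sketch. The example below builds synthetic per-protein scores and labels, then compares the pooled AUC computed from raw scores and from per-protein normalized ranks; the data and the assumed 20% interface rate are placeholders for illustration.

```python
# Sketch of the size-bias correction described above: convert per-protein residue
# scores to normalized ranks before pooling, so small and large proteins contribute
# comparably to the benchmark AUC. Scores and labels below are synthetic.
import numpy as np
from scipy.stats import rankdata
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)

def normalized_ranks(scores):
    """Map scores within one protein to (0, 1] by rank, removing scale and size effects."""
    r = rankdata(scores)
    return r / r.size

all_scores, all_ranked, all_labels = [], [], []
for n_res in (50, 120, 400):                        # proteins of mixed sizes
    labels = (rng.random(n_res) < 0.2).astype(int)  # ~20% interface residues, hypothetical
    scores = labels * 0.3 + rng.normal(0, 1, n_res)
    all_scores.append(scores)
    all_ranked.append(normalized_ranks(scores))
    all_labels.append(labels)

y = np.concatenate(all_labels)
print("pooled AUC, raw scores:      ", round(roc_auc_score(y, np.concatenate(all_scores)), 3))
print("pooled AUC, normalized ranks:", round(roc_auc_score(y, np.concatenate(all_ranked)), 3))
```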

  10. Only pick the right grains: Modelling the bias due to subjective grain-size interval selection for chronometric and fingerprinting approaches.

    NASA Astrophysics Data System (ADS)

    Dietze, Michael; Fuchs, Margret; Kreutzer, Sebastian

    2016-04-01

    Many modern approaches to radiometric dating or geochemical fingerprinting rely on sampling sedimentary deposits. A key assumption of most concepts is that the extracted grain-size fraction of the sampled sediment adequately represents the actual process to be dated or the source area to be fingerprinted. However, these assumptions are not always well constrained. Rather, they have to align with arbitrary, method-determined size intervals, such as "coarse grain" or "fine grain", whose definitions partly differ between methods. Such arbitrary intervals violate principal process-based concepts of sediment transport and can thus introduce significant bias to the analysis outcome (i.e., a deviation of the measured from the true value). We present a flexible numerical framework (numOlum) for the statistical programming language R that allows quantifying the bias due to any given analysis size interval for different types of sediment deposits. This framework is applied to synthetic samples from the realms of luminescence dating and geochemical fingerprinting, i.e. a virtual reworked loess section. We show independent validation data from artificially dosed and subsequently mixed grain-size proportions and we present a statistical approach (end-member modelling analysis, EMMA) that allows accounting for the effect of measuring the compound dosimetric history or geochemical composition of a sample. EMMA separates polymodal grain-size distributions into the underlying transport process-related distributions and their contribution to each sample. These underlying distributions can then be used to adjust grain-size preparation intervals to minimise the incorporation of "undesired" grain-size fractions.

  11. Perceptual Averaging in Individuals with Autism Spectrum Disorder.

    PubMed

    Corbett, Jennifer E; Venuti, Paola; Melcher, David

    2016-01-01

    There is mounting evidence that observers rely on statistical summaries of visual information to maintain stable and coherent perception. Sensitivity to the mean (or other prototypical value) of a visual feature (e.g., mean size) appears to be a pervasive process in human visual perception. Previous studies in individuals diagnosed with Autism Spectrum Disorder (ASD) have uncovered characteristic patterns of visual processing that suggest they may rely more on enhanced local representations of individual objects instead of computing such perceptual averages. To further explore the fundamental nature of abstract statistical representation in visual perception, we investigated perceptual averaging of mean size in a group of 12 high-functioning individuals diagnosed with ASD using simplified versions of two identification and adaptation tasks that elicited characteristic perceptual averaging effects in a control group of neurotypical participants. In Experiment 1, participants performed with above chance accuracy in recalling the mean size of a set of circles ( mean task ) despite poor accuracy in recalling individual circle sizes ( member task ). In Experiment 2, their judgments of single circle size were biased by mean size adaptation. Overall, these results suggest that individuals with ASD perceptually average information about sets of objects in the surrounding environment. Our results underscore the fundamental nature of perceptual averaging in vision, and further our understanding of how autistic individuals make sense of the external environment.

  12. Analysis and meta-analysis of single-case designs with a standardized mean difference statistic: a primer and applications.

    PubMed

    Shadish, William R; Hedges, Larry V; Pustejovsky, James E

    2014-04-01

    This article presents a d-statistic for single-case designs that is in the same metric as the d-statistic used in between-subjects designs such as randomized experiments and offers some reasons why such a statistic would be useful in SCD research. The d has a formal statistical development, is accompanied by appropriate power analyses, and can be estimated using user-friendly SPSS macros. We discuss both advantages and disadvantages of d compared to other approaches such as previous d-statistics, overlap statistics, and multilevel modeling. It requires at least three cases for computation and assumes normally distributed outcomes and stationarity, assumptions that are discussed in some detail. We also show how to test these assumptions. The core of the article then demonstrates in depth how to compute d for one study, including estimation of the autocorrelation and the ratio of between case variance to total variance (between case plus within case variance), how to compute power using a macro, and how to use the d to conduct a meta-analysis of studies using single-case designs in the free program R, including syntax in an appendix. This syntax includes how to read data, compute fixed and random effect average effect sizes, prepare a forest plot and a cumulative meta-analysis, estimate various influence statistics to identify studies contributing to heterogeneity and effect size, and do various kinds of publication bias analyses. This d may prove useful for both the analysis and meta-analysis of data from SCDs. Copyright © 2013 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
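
    For the meta-analytic step, a standard random-effects (DerSimonian-Laird) pooling of per-study d values and their sampling variances looks as follows. This is a generic sketch with made-up inputs, not the SPSS macros or R syntax that accompany the article.

```python
# Sketch of a standard random-effects (DerSimonian-Laird) meta-analysis of per-study
# d-type effect sizes; inputs below are hypothetical, not values from the article.
import numpy as np

def dersimonian_laird(effects, variances):
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                                    # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)                 # Cochran's Q
    df = effects.size - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = 1.0 / (variances + tau2)
    mean = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return mean, se, tau2

d = [0.8, 1.1, 0.4, 0.9, 1.5]          # hypothetical per-study d values
v = [0.10, 0.08, 0.12, 0.09, 0.15]     # their sampling variances
mean, se, tau2 = dersimonian_laird(d, v)
print(f"pooled d = {mean:.2f} +/- {1.96*se:.2f} (95% CI), tau^2 = {tau2:.3f}")
```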

  13. Digital mammography: observer performance study of the effects of pixel size on radiologists' characterization of malignant and benign microcalcifications

    NASA Astrophysics Data System (ADS)

    Chan, Heang-Ping; Helvie, Mark A.; Petrick, Nicholas; Sahiner, Berkman; Adler, Dorit D.; Blane, Caroline E.; Joynt, Lynn K.; Paramagul, Chintana; Roubidoux, Marilyn A.; Wilson, Todd E.; Hadjiiski, Lubomir M.; Goodsitt, Mitchell M.

    1999-05-01

    A receiver operating characteristic (ROC) experiment was conducted to evaluate the effects of pixel size on the characterization of mammographic microcalcifications. Digital mammograms were obtained by digitizing screen-film mammograms with a laser film scanner. One hundred twelve two-view mammograms with biopsy-proven microcalcifications were digitized at a pixel size of 35 micrometer X 35 micrometer. A region of interest (ROI) containing the microcalcifications was extracted from each image. ROI images with pixel sizes of 70 micrometers, 105 micrometers, and 140 micrometers were derived from the ROI of 35 micrometer pixel size by averaging 2 X 2, 3 X 3, and 4 X 4 neighboring pixels, respectively. The ROI images were printed on film with a laser imager. Seven MQSA-approved radiologists participated as observers. The likelihood of malignancy of the microcalcifications was rated on a 10-point confidence rating scale and analyzed with ROC methodology. The classification accuracy was quantified by the area, Az, under the ROC curve. The statistical significance of the differences in the Az values for different pixel sizes was estimated with the Dorfman-Berbaum-Metz (DBM) method for multi-reader, multi-case ROC data. It was found that five of the seven radiologists demonstrated a higher classification accuracy with the 70 micrometer or 105 micrometer images. The average Az also showed a higher classification accuracy in the range of 70 to 105 micrometer pixel size. However, the differences in Az between different pixel sizes did not achieve statistical significance. The low specificity of image features of microcalcifications and the large interobserver and intraobserver variabilities may have contributed to the relatively weak dependence of classification accuracy on pixel size.

  14. The relationship between national-level carbon dioxide emissions and population size: an assessment of regional and temporal variation, 1960-2005.

    PubMed

    Jorgenson, Andrew K; Clark, Brett

    2013-01-01

    This study examines the regional and temporal differences in the statistical relationship between national-level carbon dioxide emissions and national-level population size. The authors analyze panel data from 1960 to 2005 for a diverse sample of nations, and employ descriptive statistics and rigorous panel regression modeling techniques. Initial descriptive analyses indicate that all regions experienced overall increases in carbon emissions and population size during the 45-year period of investigation, but with notable differences. For carbon emissions, the sample of countries in Asia experienced the largest percent increase, followed by countries in Latin America, Africa, and lastly the sample of relatively affluent countries in Europe, North America, and Oceania combined. For population size, the sample of countries in Africa experienced the largest percent increase, followed by countries in Latin America, Asia, and the combined sample of countries in Europe, North America, and Oceania. Findings for two-way fixed effects panel regression elasticity models of national-level carbon emissions indicate that the estimated elasticity coefficient for population size is much smaller for nations in Africa than for nations in other regions of the world. Regarding potential temporal changes, from 1960 to 2005 the estimated elasticity coefficient for population size decreased by 25% for the sample of African countries, 14% for the sample of Asian countries, 6.5% for the sample of Latin American countries, but remained the same in size for the sample of countries in Europe, North America, and Oceania. Overall, while population size continues to be the primary driver of total national-level anthropogenic carbon dioxide emissions, the findings of this study highlight the need for future research and policies to recognize that the actual impacts of population size on national-level carbon emissions differ across both time and region.

  15. Cluster size statistic and cluster mass statistic: two novel methods for identifying changes in functional connectivity between groups or conditions.

    PubMed

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73%N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
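
    The logic of the cluster size statistic, thresholding elementwise tests, grouping suprathreshold elements into clusters, and calibrating the maximum cluster size against a permutation null, can be illustrated on a one-dimensional toy problem. The sketch below uses sign-flipping permutations of synthetic paired differences; it is a deliberately simplified analogue of the connectome-wide procedure, not the authors' implementation.

```python
# Simplified 1-D sketch of the cluster-size-statistic logic: threshold an elementwise
# test, group suprathreshold elements into clusters, and build a family-wise-error
# null from the maximum cluster size under sign-flipping permutations. Synthetic data.
import numpy as np
from scipy.ndimage import label
from scipy.stats import t as t_dist

rng = np.random.default_rng(7)
n_sub, n_el = 12, 500
diff = rng.normal(0, 1, (n_sub, n_el))
diff[:, 200:230] += 1.0                        # a block of truly changed "connections"

def t_map(d):
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

thr = t_dist.ppf(0.975, df=n_sub - 1)          # primary (cluster-forming) threshold

def max_cluster_size(tvals):
    labels, n_clust = label(np.abs(tvals) > thr)
    return max((np.sum(labels == i) for i in range(1, n_clust + 1)), default=0)

# Null distribution of the maximum cluster size via random sign flips
null = []
for _ in range(500):
    signs = rng.choice([-1, 1], size=(n_sub, 1))
    null.append(max_cluster_size(t_map(diff * signs)))
crit = np.percentile(null, 95)

obs = max_cluster_size(t_map(diff))
print(f"observed max cluster = {obs}, FWE 5% critical size = {crit:.0f}")
```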

  16. Across-cohort QC analyses of GWAS summary statistics from complex traits.

    PubMed

    Chen, Guo-Bo; Lee, Sang Hong; Robinson, Matthew R; Trzaskowski, Maciej; Zhu, Zhi-Xiang; Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Kutalik, Zoltán; Loos, Ruth J F; Frayling, Timothy M; Hirschhorn, Joel N; Yang, Jian; Wray, Naomi R; Visscher, Peter M

    2016-01-01

    Genome-wide association studies (GWASs) have been successful in discovering SNP trait associations for many quantitative traits and common diseases. Typically, the effect sizes of SNP alleles are very small and this requires large genome-wide association meta-analyses (GWAMAs) to maximize statistical power. A trend towards ever-larger GWAMA is likely to continue, yet dealing with summary statistics from hundreds of cohorts increases logistical and quality control problems, including unknown sample overlap, and these can lead to both false positive and false negative findings. In this study, we propose four metrics and visualization tools for GWAMA, using summary statistics from cohort-level GWASs. We propose methods to examine the concordance between demographic information, and summary statistics and methods to investigate sample overlap. (I) We use the population genetics F st statistic to verify the genetic origin of each cohort and their geographic location, and demonstrate using GWAMA data from the GIANT Consortium that geographic locations of cohorts can be recovered and outlier cohorts can be detected. (II) We conduct principal component analysis based on reported allele frequencies, and are able to recover the ancestral information for each cohort. (III) We propose a new statistic that uses the reported allelic effect sizes and their standard errors to identify significant sample overlap or heterogeneity between pairs of cohorts. (IV) To quantify unknown sample overlap across all pairs of cohorts, we propose a method that uses randomly generated genetic predictors that does not require the sharing of individual-level genotype data and does not breach individual privacy.

  17. Cluster Size Statistic and Cluster Mass Statistic: Two Novel Methods for Identifying Changes in Functional Connectivity Between Groups or Conditions

    PubMed Central

    Ing, Alex; Schwarzbauer, Christian

    2014-01-01

    Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods – the cluster size statistic (CSS) and cluster mass statistic (CMS) – are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73%N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity. PMID:24906136

  18. Across-cohort QC analyses of GWAS summary statistics from complex traits

    PubMed Central

    Chen, Guo-Bo; Lee, Sang Hong; Robinson, Matthew R; Trzaskowski, Maciej; Zhu, Zhi-Xiang; Winkler, Thomas W; Day, Felix R; Croteau-Chonka, Damien C; Wood, Andrew R; Locke, Adam E; Kutalik, Zoltán; Loos, Ruth J F; Frayling, Timothy M; Hirschhorn, Joel N; Yang, Jian; Wray, Naomi R; Visscher, Peter M

    2017-01-01

    Genome-wide association studies (GWASs) have been successful in discovering SNP trait associations for many quantitative traits and common diseases. Typically, the effect sizes of SNP alleles are very small and this requires large genome-wide association meta-analyses (GWAMAs) to maximize statistical power. A trend towards ever-larger GWAMA is likely to continue, yet dealing with summary statistics from hundreds of cohorts increases logistical and quality control problems, including unknown sample overlap, and these can lead to both false positive and false negative findings. In this study, we propose four metrics and visualization tools for GWAMA, using summary statistics from cohort-level GWASs. We propose methods to examine the concordance between demographic information, and summary statistics and methods to investigate sample overlap. (I) We use the population genetics Fst statistic to verify the genetic origin of each cohort and their geographic location, and demonstrate using GWAMA data from the GIANT Consortium that geographic locations of cohorts can be recovered and outlier cohorts can be detected. (II) We conduct principal component analysis based on reported allele frequencies, and are able to recover the ancestral information for each cohort. (III) We propose a new statistic that uses the reported allelic effect sizes and their standard errors to identify significant sample overlap or heterogeneity between pairs of cohorts. (IV) To quantify unknown sample overlap across all pairs of cohorts, we propose a method that uses randomly generated genetic predictors that does not require the sharing of individual-level genotype data and does not breach individual privacy. PMID:27552965

  19. DISTMIX: direct imputation of summary statistics for unmeasured SNPs from mixed ethnicity cohorts.

    PubMed

    Lee, Donghyung; Bigdeli, T Bernard; Williamson, Vernell S; Vladimirov, Vladimir I; Riley, Brien P; Fanous, Ayman H; Bacanu, Silviu-Alin

    2015-10-01

    To increase the signal resolution for large-scale meta-analyses of genome-wide association studies, genotypes at unmeasured single nucleotide polymorphisms (SNPs) are commonly imputed using large multi-ethnic reference panels. However, the ever increasing size and ethnic diversity of both reference panels and cohorts makes genotype imputation computationally challenging for moderately sized computer clusters. Moreover, genotype imputation requires subject-level genetic data, which unlike summary statistics provided by virtually all studies, is not publicly available. While there are much less demanding methods which avoid the genotype imputation step by directly imputing SNP statistics, e.g. Directly Imputing summary STatistics (DIST) proposed by our group, their implicit assumptions make them applicable only to ethnically homogeneous cohorts. To decrease computational and access requirements for the analysis of cosmopolitan cohorts, we propose DISTMIX, which extends DIST capabilities to the analysis of mixed ethnicity cohorts. The method uses a relevant reference panel to directly impute unmeasured SNP statistics based only on statistics at measured SNPs and estimated/user-specified ethnic proportions. Simulations show that the proposed method adequately controls the Type I error rates. The 1000 Genomes panel imputation of summary statistics from the ethnically diverse Psychiatric Genetic Consortium Schizophrenia Phase 2 suggests that, when compared to genotype imputation methods, DISTMIX offers comparable imputation accuracy for only a fraction of computational resources. DISTMIX software, its reference population data, and usage examples are publicly available at http://code.google.com/p/distmix. dlee4@vcu.edu Supplementary Data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.
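
    The general idea behind direct summary-statistic imputation, of which DISTMIX is a mixed-ethnicity extension, is the conditional mean of a multivariate-normal vector of z-scores given the measured ones and an LD correlation matrix. The generic sketch below implements that relation (z_u = R_um R_mm⁻¹ z_m, with a small ridge term for numerical stability); the LD matrix, z-scores, and ridge value are illustrative assumptions, and this is not the DISTMIX software.

```python
# Generic sketch of direct z-score imputation from summary statistics under a
# multivariate-normal model with LD correlation matrix R; not the DISTMIX software.
import numpy as np

def impute_z(z_measured, R_mm, R_um, ridge=0.1):
    """z_u = R_um (R_mm + ridge*I)^-1 z_m ; the ridge regularizes a noisy LD estimate."""
    n_m = R_mm.shape[0]
    return R_um @ np.linalg.solve(R_mm + ridge * np.eye(n_m), z_measured)

rng = np.random.default_rng(8)
# Hypothetical LD correlation matrix for 5 SNPs (4 measured, 1 unmeasured)
R = np.array([[1.0, 0.8, 0.5, 0.2, 0.6],
              [0.8, 1.0, 0.6, 0.3, 0.7],
              [0.5, 0.6, 1.0, 0.4, 0.5],
              [0.2, 0.3, 0.4, 1.0, 0.3],
              [0.6, 0.7, 0.5, 0.3, 1.0]])
measured, unmeasured = [0, 1, 2, 3], [4]
z_m = rng.normal(size=4) + 2.0                 # made-up measured z-scores
z_u = impute_z(z_m, R[np.ix_(measured, measured)], R[np.ix_(unmeasured, measured)])
print("imputed z for unmeasured SNP:", np.round(z_u, 2))
```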

  20. Risk factors for persistent gestational trophoblastic neoplasia.

    PubMed

    Kuyumcuoglu, Umur; Guzel, Ali Irfan; Erdemoglu, Mahmut; Celik, Yusuf

    2011-01-01

    This retrospective study evaluated the risk factors for persistent gestational trophoblastic neoplasia (GTN) and determined their odds ratios. This study included 100 cases with GTN admitted to our clinic. Possible risk factors recorded were age, gravidity, parity, size of the neoplasia, and beta-human chorionic gonadotropin (beta-hCG) levels before and after the procedure. Statistical analyses consisted of the independent-sample t-test and logistic regression using the statistical package SPSS ver. 15.0 for Windows (SPSS, Chicago, IL, USA). Twenty of the cases had persistent GTN, and the differences between these and the other cases were evaluated. The size of the neoplasia and the histopathological type of GTN had no statistical relationship with persistence, whereas age, gravidity, and beta-hCG levels were significant risk factors for persistent GTN (p < 0.05). The odds ratios (95% confidence interval (CI)) for age, gravidity, and pre- and post-evacuation beta-hCG levels determined using logistic regression were 4.678 (0.97-22.44), 7.315 (1.16-46.16), 2.637 (1.41-4.94), and 2.339 (1.52-3.60), respectively. Patient age, gravidity, and beta-hCG levels were risk factors for persistent GTN, whereas the size of the neoplasia and histopathological type of GTN were not significant risk factors.
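
    The odds-ratio estimation described above corresponds to a logistic regression of persistence on the candidate risk factors, with odds ratios and confidence intervals read off the exponentiated coefficients. The sketch below does this on a synthetic data frame; the variable names, coefficients, and cohort are invented for illustration and do not reproduce the study's results.

```python
# Sketch of odds-ratio estimation via logistic regression on a synthetic cohort;
# variable names and effects are invented, not the study's data or results.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 100
df = pd.DataFrame({
    "age": rng.normal(30, 7, n),
    "gravidity": rng.integers(1, 6, n),
    "log_beta_hcg": rng.normal(10, 1.5, n),
})
logit_true = -12 + 0.05 * df["age"] + 0.4 * df["gravidity"] + 0.7 * df["log_beta_hcg"]
df["persistent"] = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(int)

X = sm.add_constant(df[["age", "gravidity", "log_beta_hcg"]])
fit = sm.Logit(df["persistent"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)                 # exponentiated coefficients = odds ratios
ci = np.exp(fit.conf_int())
print(pd.DataFrame({"OR": odds_ratios, "2.5%": ci[0], "97.5%": ci[1]}))
```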

  1. Statistical framework and noise sensitivity of the amplitude radial correlation contrast method.

    PubMed

    Kipervaser, Zeev Gideon; Pelled, Galit; Goelman, Gadi

    2007-09-01

    A statistical framework for the amplitude radial correlation contrast (RCC) method, which integrates a conventional pixel threshold approach with cluster-size statistics, is presented. The RCC method uses functional MRI (fMRI) data to group neighboring voxels in terms of their degree of temporal cross correlation and compares coherences in different brain states (e.g., stimulation OFF vs. ON). By defining the RCC correlation map as the difference between two RCC images, the map distribution of two OFF states is shown to be normal, enabling the definition of the pixel cutoff. The empirical cluster-size null distribution obtained after the application of the pixel cutoff is used to define a cluster-size cutoff that allows 5% false positives. Assuming that the fMRI signal equals the task-induced response plus noise, an analytical expression of amplitude-RCC dependency on noise is obtained and used to define the pixel threshold. In vivo and ex vivo data obtained during rat forepaw electric stimulation are used to fine-tune this threshold. Calculating the spatial coherences within in vivo and ex vivo images shows enhanced coherence in the in vivo data, but no dependency on the anesthesia method, magnetic field strength, or depth of anesthesia, strengthening the generality of the proposed cutoffs. Copyright (c) 2007 Wiley-Liss, Inc.

  2. [Evaluation of the quality of Anales Españoles de Pediatría versus Medicina Clínica].

    PubMed

    Bonillo Perales, A

    2002-08-01

    To compare the scientific methodology and quality of articles published in Anales Españoles de Pediatría and Medicina Clínica. A stratified and randomized selection of 40 original articles published in 2001 in Anales Españoles de Pediatría and Medicina Clínica was made. Methodological errors in the critical analysis of original articles (21 items), epidemiological design, sample size, statistical complexity and levels of scientific evidence in both journals were compared using the chi-squared and/or Student's t-test. No differences were found between Anales Españoles de Pediatría and Medicina Clínica in the critical evaluation of original articles (p > 0.2). In original articles published in Anales Españoles de Pediatría, the designs were of lower scientific evidence (a lower proportion of clinical trials, cohort and case-control studies) (17.5 vs 42.5 %, p 0.05), sample sizes were smaller (p 0.003) and there was less statistical complexity in the results section (p 0.03). To improve the scientific quality of Anales Españoles de Pediatría, improved study designs, larger sample sizes and greater statistical complexity are required in its articles.

  3. Statistical Modelling of Temperature and Moisture Uptake of Biochars Exposed to Selected Relative Humidity of Air.

    PubMed

    Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel

    2018-02-09

    New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens , Cyclobalanopsis glauca , Trigonostemon huangmosun , and Bambusa vulgaris , and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two mass samples (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables including water adsorption and temperature increase were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced the two qualitative variables, while particle size and biochar type only influenced the temperature.

  4. Evaluation of photosynthetic efficacy and CO2 removal of microalgae grown in an enriched bicarbonate medium.

    PubMed

    Abinandan, S; Shanthakumar, S

    2016-06-01

    Bicarbonate species in the aqueous phase are the primary source of CO₂ for the growth of microalgae. The potential of carbon dioxide (CO₂) fixation by Chlorella pyrenoidosa in an enriched bicarbonate medium was evaluated. In the present study, the effects of parameters such as pH, sodium bicarbonate concentration and inoculum size were assessed for the removal of CO₂ by C. pyrenoidosa under mixotrophic conditions. The central composite design tool from response surface methodology was used to validate the statistical methods and study the influence of these parameters. The results reveal that the maximum removal of CO₂ was attained at pH 8 with a sodium bicarbonate concentration of 3.33 g/l and an inoculum size of 30 %. The experimental results were statistically significant, with R² values of 0.9527 and 0.960 for CO₂ removal and accumulation of chlorophyll content, respectively. Among the various interactions, the interactive effect between pH and inoculum size was statistically significant (P < 0.05) for CO₂ removal and chlorophyll accumulation. Based on these studies, the application of C. pyrenoidosa as a potential source for carbon dioxide removal from a bicarbonate source at alkaline pH is highlighted.

  5. Sample Size Requirements for Studies of Treatment Effects on Beta-Cell Function in Newly Diagnosed Type 1 Diabetes

    PubMed Central

    Lachin, John M.; McGee, Paula L.; Greenbaum, Carla J.; Palmer, Jerry; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8–12 years of age, adolescents (13–17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13–17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes. PMID:22102862

  6. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    PubMed

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately evaluate the sample size for studies of new agents to preserve C-peptide levels in newly diagnosed type 1 diabetes.
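
    A minimal sketch of the sample-size logic these two records describe: on a log scale a relative (percentage) treatment difference becomes an approximately additive shift, so a standard two-sample power calculation applies. The residual SD on the log scale and the 25% target difference below are placeholders, not the TrialNet estimates.

        # Sketch: per-arm sample size for detecting a relative difference in log-transformed C-peptide AUC.
        import numpy as np
        from statsmodels.stats.power import TTestIndPower

        sd_log = 0.45                                    # assumed residual SD of log-transformed AUC
        relative_reduction = 0.25                        # target: detect a 25% relative difference
        delta_log = abs(np.log(1 - relative_reduction))  # approximate additive shift on the log scale
        effect_size = delta_log / sd_log                 # Cohen's d

        n_per_arm = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.05,
                                                power=0.80, alternative="two-sided")
        print(f"d = {effect_size:.2f}, n per arm = {int(np.ceil(n_per_arm))}")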

  7. IGESS: a statistical approach to integrating individual-level genotype data and summary statistics in genome-wide association studies.

    PubMed

    Dai, Mingwei; Ming, Jingsi; Cai, Mingxuan; Liu, Jin; Yang, Can; Wan, Xiang; Xu, Zongben

    2017-09-15

    Results from genome-wide association studies (GWAS) suggest that a complex phenotype is often affected by many variants with small effects, known as 'polygenicity'. Tens of thousands of samples are often required to ensure statistical power of identifying these variants with small effects. However, it is often the case that a research group can only get approval for the access to individual-level genotype data with a limited sample size (e.g. a few hundreds or thousands). Meanwhile, summary statistics generated using single-variant-based analysis are becoming publicly available. The sample sizes associated with the summary statistics datasets are usually quite large. How to make the most efficient use of existing abundant data resources largely remains an open question. In this study, we propose a statistical approach, IGESS, to increasing statistical power of identifying risk variants and improving accuracy of risk prediction by i ntegrating individual level ge notype data and s ummary s tatistics. An efficient algorithm based on variational inference is developed to handle the genome-wide analysis. Through comprehensive simulation studies, we demonstrated the advantages of IGESS over the methods which take either individual-level data or summary statistics data as input. We applied IGESS to perform integrative analysis of Crohns Disease from WTCCC and summary statistics from other studies. IGESS was able to significantly increase the statistical power of identifying risk variants and improve the risk prediction accuracy from 63.2% ( ±0.4% ) to 69.4% ( ±0.1% ) using about 240 000 variants. The IGESS software is available at https://github.com/daviddaigithub/IGESS . zbxu@xjtu.edu.cn or xwan@comp.hkbu.edu.hk or eeyang@hkbu.edu.hk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  8. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    PubMed Central

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF), the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher powered studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
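
    The roughly 50% power quoted above can be checked with a standard power routine; the "typical" effect size of d = 0.40 used below is an assumption for illustration, not a value taken from the article.

        # Sketch: power of a two-group comparison at the reported average total N of 104.
        from statsmodels.stats.power import TTestIndPower

        power = TTestIndPower().power(effect_size=0.40, nobs1=52, alpha=0.05,
                                      ratio=1.0, alternative="two-sided")
        print(f"power = {power:.2f}")   # comes out near 0.5, in line with the figure quoted above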

  9. Cognitive-behavioral high parental involvement treatments for pediatric obsessive-compulsive disorder: A meta-analysis.

    PubMed

    Iniesta-Sepúlveda, Marina; Rosa-Alcázar, Ana I; Sánchez-Meca, Julio; Parada-Navas, José L; Rosa-Alcázar, Ángel

    2017-06-01

    A meta-analysis of the efficacy of cognitive-behavior family treatment (CBFT) in children and adolescents with obsessive-compulsive disorder (OCD) was carried out. The purposes of the study were: (a) to estimate the effect magnitude of CBFT in ameliorating obsessive-compulsive symptoms and reducing family accommodation in pediatric OCD and (b) to identify potential moderator variables of the effect sizes. A literature search enabled us to identify 27 studies that fulfilled our selection criteria. The effect size index was the standardized pretest-posttest mean change index. For obsessive-compulsive symptoms, the adjusted mean effect size for CBFT was clinically relevant and statistically significant at posttest (d_adj = 1.464). For family accommodation the adjusted mean effect size was also positive and statistically significant, but to a lesser extent than for obsessive-compulsive symptoms (d_adj = 0.511). Publication bias was discarded as a threat to the validity of the meta-analytic results. Large heterogeneity among effect sizes was found. Better results were found when CBFT was applied individually rather than in groups (d+ = 2.429 and 1.409, respectively). CBFT is effective in reducing obsessive-compulsive symptoms, but offers a limited effect on family accommodation. Additional modules must be included in CBFT to improve its effectiveness on family accommodation. Copyright © 2017 Elsevier Ltd. All rights reserved.
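
    For readers unfamiliar with the index named above, one common definition of the standardized pretest-posttest mean change divides the mean change by the pretest standard deviation and applies a small-sample correction. The sketch below uses that common definition with invented scores; it is not a re-analysis of the 27 primary studies.

        # Sketch: standardized pretest-posttest mean change for one treated group (illustrative data).
        import numpy as np

        pre  = np.array([24, 28, 22, 30, 26, 27, 25, 29], dtype=float)  # symptom scores before treatment
        post = np.array([12, 15, 10, 18, 14, 16, 11, 17], dtype=float)  # symptom scores after treatment

        d = (pre.mean() - post.mean()) / pre.std(ddof=1)   # positive d = symptom reduction
        j = 1 - 3 / (4 * (len(pre) - 1) - 1)               # small-sample (Hedges-type) correction
        print(f"d = {d:.2f}, corrected d = {d * j:.2f}")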

  10. Sample sizes and model comparison metrics for species distribution models

    Treesearch

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  11. The Impact of APA and AERA Guidelines on Effect Size Reporting

    ERIC Educational Resources Information Center

    Peng, Chao-Ying Joanne; Chen, Li-Ting; Chiang, Hsu-Min; Chiang, Yi-Chen

    2013-01-01

    Given the long history of effect size (ES) indices (Olejnik and Algina, "Contemporary Educational Psychology," 25, 241-286 2000) and various attempts by APA and AERA to encourage the reporting and interpretation of ES to supplement findings from inferential statistical analyses, it is essential to document the impact of APA and AERA standards on…

  12. Accuracy assessment of percent canopy cover, cover type, and size class

    Treesearch

    H. T. Schreuder; S. Bain; R. C. Czaplewski

    2003-01-01

    Truth for vegetation cover percent and type is obtained from very large-scale photography (VLSP), stand structure as measured by size classes, and vegetation types from a combination of VLSP and ground sampling. We recommend using the Kappa statistic with bootstrap confidence intervals for overall accuracy, and similarly bootstrap confidence intervals for percent...

  13. The Hard but Necessary Task of Gathering Order-One Effect Size Indices in Meta-Analysis

    ERIC Educational Resources Information Center

    Ortego, Carmen; Botella, Juan

    2010-01-01

    Meta-analysis of studies with two groups and two measurement occasions must employ order-one effect size indices to represent study outcomes. Especially with non-random assignment, non-equivalent control group designs, a statistical analysis restricted to post-treatment scores can lead to severely biased conclusions. The 109 primary studies…

  14. Confidence Intervals for Effect Sizes: Compliance and Clinical Significance in the "Journal of Consulting and Clinical Psychology"

    ERIC Educational Resources Information Center

    Odgaard, Eric C.; Fowler, Robert L.

    2010-01-01

    Objective: In 2005, the "Journal of Consulting and Clinical Psychology" ("JCCP") became the first American Psychological Association (APA) journal to require statistical measures of clinical significance, plus effect sizes (ESs) and associated confidence intervals (CIs), for primary outcomes (La Greca, 2005). As this represents the single largest…

  15. Random Responding as a Threat to the Validity of Effect Size Estimates in Correlational Research

    ERIC Educational Resources Information Center

    Crede, Marcus

    2010-01-01

    Random responding to psychological inventories is a long-standing concern among clinical practitioners and researchers interested in interpreting idiographic data, but it is typically viewed as having only a minor impact on the statistical inferences drawn from nomothetic data. This article explores the impact of random responding on the size and…

  16. Aggregate and Individual Replication Probability within an Explicit Model of the Research Process

    ERIC Educational Resources Information Center

    Miller, Jeff; Schwarz, Wolf

    2011-01-01

    We study a model of the research process in which the true effect size, the replication jitter due to changes in experimental procedure, and the statistical error of effect size measurement are all normally distributed random variables. Within this model, we analyze the probability of successfully replicating an initial experimental result by…

  17. Effect Size Measures for Mediation Models: Quantitative Strategies for Communicating Indirect Effects

    ERIC Educational Resources Information Center

    Preacher, Kristopher J.; Kelley, Ken

    2011-01-01

    The statistical analysis of mediation effects has become an indispensable tool for helping scientists investigate processes thought to be causal. Yet, in spite of many recent advances in the estimation and testing of mediation effects, little attention has been given to methods for communicating effect size and the practical importance of those…

  18. A Response to Holster and Lake Regarding Guessing and the Rasch Model

    ERIC Educational Resources Information Center

    Stewart, Jeffrey; McLean, Stuart; Kramer, Brandon

    2017-01-01

    Stewart questioned vocabulary size estimation methods proposed by Beglar and Nation for the Vocabulary Size Test, further arguing Rasch mean square (MSQ) fit statistics cannot determine the proportion of random guesses contained in the average learner's raw score, because the average value will be near 1 by design. He illustrated this by…

  19. Determining sample size for tree utilization surveys

    Treesearch

    Stanley J. Zarnoch; James W. Bentley; Tony G. Johnson

    2004-01-01

    The U.S. Department of Agriculture Forest Service has conducted many studies to determine what proportion of the timber harvested in the South is actually utilized. This paper describes the statistical methods used to determine required sample sizes for estimating utilization ratios for a required level of precision. The data used are those for 515 hardwood and 1,557...

  20. Performing Contrast Analysis in Factorial Designs: From NHST to Confidence Intervals and Beyond

    ERIC Educational Resources Information Center

    Wiens, Stefan; Nilsson, Mats E.

    2017-01-01

    Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful…

  1. A New Sample Size Formula for Regression.

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Barcikowski, Robert S.

    The focus of this research was to determine the efficacy of a new method of selecting sample sizes for multiple linear regression. A Monte Carlo simulation was used to study both empirical predictive power rates and empirical statistical power rates of the new method and seven other methods: those of C. N. Park and A. L. Dudycha (1974); J. Cohen…

  2. Using Sieving and Unknown Sand Samples for a Sedimentation-Stratigraphy Class Project with Linkage to Introductory Courses

    ERIC Educational Resources Information Center

    Videtich, Patricia E.; Neal, William J.

    2012-01-01

    Using sieving and sample "unknowns" for instructional grain-size analysis and interpretation of sands in undergraduate sedimentology courses has advantages over other techniques. Students (1) learn to calculate and use statistics; (2) visually observe differences in the grain-size fractions, thereby developing a sense of specific size…

  3. Acute Respiratory Distress Syndrome Measurement Error. Potential Effect on Clinical Study Results

    PubMed Central

    Cooke, Colin R.; Iwashyna, Theodore J.; Hofer, Timothy P.

    2016-01-01

    Rationale: Identifying patients with acute respiratory distress syndrome (ARDS) is a recognized challenge. Experts often have only moderate agreement when applying the clinical definition of ARDS to patients. However, no study has fully examined the implications of low reliability measurement of ARDS on clinical studies. Objectives: To investigate how the degree of variability in ARDS measurement commonly reported in clinical studies affects study power, the accuracy of treatment effect estimates, and the measured strength of risk factor associations. Methods: We examined the effect of ARDS measurement error in randomized clinical trials (RCTs) of ARDS-specific treatments and cohort studies using simulations. We varied the reliability of ARDS diagnosis, quantified as the interobserver reliability (κ-statistic) between two reviewers. In RCT simulations, patients identified as having ARDS were enrolled, and when measurement error was present, patients without ARDS could be enrolled. In cohort studies, risk factors as potential predictors were analyzed using reviewer-identified ARDS as the outcome variable. Measurements and Main Results: Lower reliability measurement of ARDS during patient enrollment in RCTs seriously degraded study power. Holding effect size constant, the sample size necessary to attain adequate statistical power increased by more than 50% as reliability declined, although the result was sensitive to ARDS prevalence. In a 1,400-patient clinical trial, the sample size necessary to maintain similar statistical power increased to over 1,900 when reliability declined from perfect to substantial (κ = 0.72). Lower reliability measurement diminished the apparent effectiveness of an ARDS-specific treatment from a 15.2% (95% confidence interval, 9.4–20.9%) absolute risk reduction in mortality to 10.9% (95% confidence interval, 4.7–16.2%) when reliability declined to moderate (κ = 0.51). In cohort studies, the effect on risk factor associations was similar. Conclusions: ARDS measurement error can seriously degrade statistical power and effect size estimates of clinical studies. The reliability of ARDS measurement warrants careful attention in future ARDS clinical studies. PMID:27159648
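
    A toy Monte Carlo can illustrate the dilution mechanism described above: when patients without ARDS are enrolled because of imperfect measurement, the apparent absolute risk reduction shrinks. The 25% misclassification fraction and the mortality rates below are illustrative assumptions; the paper's mapping from the κ-statistic to misclassification rates is not reproduced.

        # Sketch: dilution of an absolute risk reduction (ARR) by enrolling misclassified patients.
        import numpy as np

        rng = np.random.default_rng(1)
        n_per_arm, n_trials = 700, 2000
        p_ctrl_ards, p_trt_ards = 0.40, 0.25   # mortality in true ARDS, control vs. treated
        p_non_ards = 0.20                      # mortality in misclassified patients (no treatment effect)
        frac_misclassified = 0.25

        arr = []
        for _ in range(n_trials):
            is_ards = rng.random((2, n_per_arm)) > frac_misclassified   # row 0: control, row 1: treated
            p_ctrl = np.where(is_ards[0], p_ctrl_ards, p_non_ards)
            p_trt = np.where(is_ards[1], p_trt_ards, p_non_ards)
            deaths_ctrl = rng.random(n_per_arm) < p_ctrl
            deaths_trt = rng.random(n_per_arm) < p_trt
            arr.append(deaths_ctrl.mean() - deaths_trt.mean())
        print(f"apparent ARR = {np.mean(arr):.3f} (true ARR among ARDS patients = 0.150)")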

  4. Education

    DTIC Science & Technology

    2005-01-01

    program) steadily declined from 15% in 1970 to 10.7% in 2001.16 Data from the National Center for Education Statistics show that the number of...academic institutions, and corporate education and training institutions. By size, it's defined in terms of distribution of funds, facilities, and...of students entering four-year colleges and universities require some remedial education."9 Given statistics such as these, concerns for the US

  5. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    ERIC Educational Resources Information Center

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  6. The Relationship between Visual Analysis and Five Statistical Analyses in a Simple AB Single-Case Research Design

    ERIC Educational Resources Information Center

    Brossart, Daniel F.; Parker, Richard I.; Olson, Elizabeth A.; Mahadevan, Lakshmi

    2006-01-01

    This study explored some practical issues for single-case researchers who rely on visual analysis of graphed data, but who also may consider supplemental use of promising statistical analysis techniques. The study sought to answer three major questions: (a) What is a typical range of effect sizes from these analytic techniques for data from…

  7. Statistics & Input-Output Measures for School Libraries in Colorado, 2002.

    ERIC Educational Resources Information Center

    Colorado State Library, Denver.

    This document presents statistics and input-output measures for K-12 school libraries in Colorado for 2002. Data are presented by type and size of school, i.e., high schools (six categories ranging from 2,000 and over to under 300), junior high/middle schools (five categories ranging from 1,000-1,999 to under 300), elementary schools (four…

  8. A General Framework for Power Analysis to Detect the Moderator Effects in Two- and Three-Level Cluster Randomized Trials

    ERIC Educational Resources Information Center

    Dong, Nianbo; Spybrook, Jessaca; Kelcey, Ben

    2016-01-01

    The purpose of this study is to propose a general framework for power analyses to detect the moderator effects in two- and three-level cluster randomized trials (CRTs). The study specifically aims to: (1) develop the statistical formulations for calculating statistical power, minimum detectable effect size (MDES) and its confidence interval to…

  9. Skin Conditions of Youths 12-17, United States. Vital and Health Statistics; Series 11, Number 157.

    ERIC Educational Resources Information Center

    Roberts, Jean; Ludford, Jacqueline

    This report of the National Center for Health Statistics presents national estimates of the prevalence of facial acne and other skin lesions among noninstitutionalized youths aged 12-17 years by age, race, sex, geographic region, population size of place of residence, family income, education of parent, overall health, indications of stress,…

  10. Size distribution spectrum of noninertial particles in turbulence

    NASA Astrophysics Data System (ADS)

    Saito, Izumi; Gotoh, Toshiyuki; Watanabe, Takeshi

    2018-05-01

    Collision-coalescence growth of noninertial particles in three-dimensional homogeneous isotropic turbulence is studied. Smoluchowski's coagulation equation describes the evolution of the size distribution of particles in this system. By applying a methodology based on turbulence theory, the equation is shown to have a steady-state solution, which corresponds to the Kolmogorov-type power-law spectrum. Direct numerical simulations of turbulence and Lagrangian particles are conducted. The result shows that the size distribution in a statistically steady state agrees accurately with the theoretical prediction.
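
    For reference, the size distribution n(v,t) referred to above evolves according to the general (continuous) form of Smoluchowski's coagulation equation, written here with a generic collision kernel K(v,u); the turbulence-specific kernel used in the study is not reproduced.

        \frac{\partial n(v,t)}{\partial t}
          = \frac{1}{2}\int_0^{v} K(v-u,\,u)\, n(v-u,t)\, n(u,t)\, \mathrm{d}u
            \;-\; n(v,t)\int_0^{\infty} K(v,u)\, n(u,t)\, \mathrm{d}u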

  11. High Impact = High Statistical Standards? Not Necessarily So

    PubMed Central

    Tressoldi, Patrizio E.; Giofré, David; Sella, Francesco; Cumming, Geoff

    2013-01-01

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors. PMID:23418533

  12. High impact = high statistical standards? Not necessarily so.

    PubMed

    Tressoldi, Patrizio E; Giofré, David; Sella, Francesco; Cumming, Geoff

    2013-01-01

    What are the statistical practices of articles published in journals with a high impact factor? Are there differences compared with articles published in journals with a somewhat lower impact factor that have adopted editorial policies to reduce the impact of limitations of Null Hypothesis Significance Testing? To investigate these questions, the current study analyzed all articles related to psychological, neuropsychological and medical issues, published in 2011 in four journals with high impact factors: Science, Nature, The New England Journal of Medicine and The Lancet, and three journals with relatively lower impact factors: Neuropsychology, Journal of Experimental Psychology-Applied and the American Journal of Public Health. Results show that Null Hypothesis Significance Testing without any use of confidence intervals, effect size, prospective power and model estimation, is the prevalent statistical practice used in articles published in Nature, 89%, followed by articles published in Science, 42%. By contrast, in all other journals, both with high and lower impact factors, most articles report confidence intervals and/or effect size measures. We interpreted these differences as consequences of the editorial policies adopted by the journal editors, which are probably the most effective means to improve the statistical practices in journals with high or low impact factors.

  13. Influence of CT contrast agent on dose calculation of intensity modulated radiation therapy plan for nasopharyngeal carcinoma.

    PubMed

    Lee, F K-H; Chan, C C-L; Law, C-K

    2009-02-01

    Contrast-enhanced computed tomography (CECT) has been used for delineation of the treatment target in radiotherapy. The altered Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. The statistical significance of the differences was evaluated. Correlations were also tested among the magnitude of the calculated dose difference, tumor size and level of contrast enhancement. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.

  14. [Analysis on difference of richness of traditional Chinese medicine resources in Chongqing based on grid technology].

    PubMed

    Zhang, Xiao-Bo; Qu, Xian-You; Li, Meng; Wang, Hui; Jing, Zhi-Xian; Liu, Xiang; Zhang, Zhi-Wei; Guo, Lan-Ping; Huang, Lu-Qi

    2017-11-01

    With the completion of the national and local censuses of traditional Chinese medicine resources, a large amount of data on Chinese medicine resources and their distribution will be compiled. Species richness between regions is a valid indicator for objectively reflecting inter-regional differences in Chinese medicine resources. Because county areas differ greatly in size, assessing resource richness with the county as the statistical unit biases regional richness statistics. Regular-grid statistical methods can reduce the differences caused by unequal statistical-unit sizes. Taking Chongqing as an example and based on existing survey data, the differences in the richness of traditional Chinese medicine resources at different grid scales were compared and analyzed. The results showed that a 30 km grid could be selected, at which the richness of Chinese medicine resources in Chongqing better reflects the actual inter-regional distribution of resource richness. Copyright© by the Chinese Pharmaceutical Association.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hagos, Samson M.; Feng, Zhe; Burleyson, Casey D.

    Regional cloud-permitting model simulations of cloud populations observed during the 2011 ARM Madden-Julian Oscillation Investigation Experiment/Dynamics of Madden-Julian Experiment (AMIE/DYNAMO) field campaign are evaluated against radar and ship-based measurements. The sensitivity of model-simulated surface rain rate statistics to parameters and parameterization of hydrometeor sizes in five commonly used WRF microphysics schemes is examined. It is shown that at 2 km grid spacing, the model generally overestimates rain rate from large and deep convective cores. Sensitivity runs involving variation of parameters that affect the rain drop or ice particle size distribution (a more aggressive break-up process, etc.) generally reduce the bias in rain-rate and boundary layer temperature statistics as the smaller particles become more vulnerable to evaporation. Furthermore, significant improvement in the convective rain-rate statistics is observed when the horizontal grid spacing is reduced to 1 km and 0.5 km, while it is worsened when run at 4 km grid spacing, as increased turbulence enhances evaporation. The results suggest that modulation of evaporation processes, through parameterization of turbulent mixing and break-up of hydrometeors, may provide a potential avenue for correcting cloud statistics and associated boundary layer temperature biases in regional and global cloud-permitting model simulations.

  16. Fast mean and variance computation of the diffuse sound transmission through finite-sized thick and layered wall and floor systems

    NASA Astrophysics Data System (ADS)

    Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.

    2018-05-01

    A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.

  17. Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.

    PubMed

    Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L

    2008-04-01

    The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
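
    A hedged sketch of the two ingredients described above, assuming Python with SciPy: a Gamma-pdf fit to the amplitudes of segmented regions, and a local moment-based estimate of the Gamma scale parameter (σ²/μ) computed in a sliding window. The synthetic data, window size and distribution parameters are illustrative, not the echocardiographic measurements.

        # Sketch: Gamma-pdf fits per region and a local moment-based scale-parameter image.
        import numpy as np
        from scipy import ndimage, stats

        rng = np.random.default_rng(2)
        blood = rng.gamma(shape=1.5, scale=8.0, size=5000)        # stand-in for segmented blood pixels
        myocardium = rng.gamma(shape=4.0, scale=12.0, size=5000)  # stand-in for myocardial pixels

        for name, region in [("blood", blood), ("myocardium", myocardium)]:
            shape, loc, scale = stats.gamma.fit(region, floc=0)   # maximum-likelihood fit
            print(f"{name}: shape = {shape:.2f}, scale = {scale:.2f}")

        image = rng.gamma(shape=3.0, scale=10.0, size=(128, 128)) # stand-in envelope image
        win = 9                                                   # window related to speckle size
        mu = ndimage.uniform_filter(image, size=win)              # local mean
        mu2 = ndimage.uniform_filter(image ** 2, size=win)        # local second moment
        scale_map = (mu2 - mu ** 2) / mu                          # moment-based Gamma scale, sigma^2/mu
        print("local scale parameter range:", float(scale_map.min()), float(scale_map.max()))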

  18. [Variation trend and significance of adult tonsil size and tongue position].

    PubMed

    Bin, X; Zhou, Y

    2016-08-05

    Objective: To explore the changing trend and significance of adult tonsil size and tongue position by observing adults in different age groups. Method: The oropharyngeal cavities of 1 060 adults who underwent health examination and had no history of tonsil surgery were observed. Friedman tongue position (FTP) and tonsil size (TS) were scored according to Friedman's criteria, and the results were statistically analyzed to evaluate their changing pattern and significance. Result: Mean FTP scores increased significantly with age (P < 0.01); the FTP score in males was lower than that in females (P < 0.01). TS scores decreased significantly with age (P < 0.05). The average TS score showed no statistically significant difference between genders. Although not statistically significant, the total FTP score showed an increasing trend with age (P > 0.05); total FTP scores differed between sexes (male 4.12 ± 0.67, female 4.23 ± 0.68, P < 0.05). BMI was not found to differ statistically as FTP scores, TS scores and total scores changed (P > 0.05), but it showed an increasing trend with age (P < 0.01). Conclusion: The width of the pharyngeal cavity in normal adults remains fairly stable, whereas it is narrower in obese people. TS and FTP scores, which show opposite trends with age, can be regarded as major factors in maintaining a stable width of the oropharyngeal cavity. Copyright© by the Editorial Department of Journal of Clinical Otorhinolaryngology Head and Neck Surgery.

  19. An application of principal component analysis to the clavicle and clavicle fixation devices.

    PubMed

    Daruwalla, Zubin J; Courtis, Patrick; Fitzpatrick, Clare; Fitzpatrick, David; Mullett, Hannan

    2010-03-26

    Principal component analysis (PCA) enables the building of statistical shape models of bones and joints. This has been used in conjunction with computer assisted surgery in the past. However, PCA of the clavicle has not been performed. Using PCA, we present a novel method that examines the major modes of size and three-dimensional shape variation in male and female clavicles and suggests a method of grouping the clavicle into size and shape categories. Twenty-one high-resolution computerized tomography scans of the clavicle were reconstructed and analyzed using a specifically developed statistical software package. After performing statistical shape analysis, PCA was applied to study the factors that account for anatomical variation. The first principal component, representing size, accounted for 70.5 percent of anatomical variation. The addition of a further three principal components accounted for almost 87 percent. Statistical shape analysis showed that clavicles in males have a greater lateral depth and are longer, wider and thicker than those in females. However, the sternal angle in females is larger than in males. PCA confirmed these differences between genders but also noted that men exhibit greater variance and classified clavicles into five morphological groups. This unique approach is the first to standardize clavicular orientation. It provides information that is useful to both the biomedical engineer and the clinician. Other applications include implant design with regard to modifying current or designing future clavicle fixation devices. Our findings support the need for further development of clavicle fixation devices and raise the question of whether gender-specific devices are necessary.
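
    A minimal sketch of how PCA yields such modes of variation, assuming corresponding (registered) surface landmarks for each specimen; the randomly generated coordinates below are stand-ins for the reconstructed clavicles, so the printed variance fractions have no anatomical meaning.

        # Sketch: PCA of flattened landmark coordinates from a set of bone reconstructions.
        import numpy as np
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        n_specimens, n_landmarks = 21, 200
        shapes = rng.normal(size=(n_specimens, n_landmarks * 3))  # each row: x, y, z of all landmarks

        pca = PCA(n_components=4)
        pca.fit(shapes)
        for i, ratio in enumerate(pca.explained_variance_ratio_, start=1):
            print(f"PC{i}: {100 * ratio:.1f}% of total variance")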

  20. Support Provided to the External Tank (ET) Project on the Use of Statistical Analysis for ET Certification Consultation Position Paper

    NASA Technical Reports Server (NTRS)

    Null, Cynthia H.

    2009-01-01

    In June 2004, the Space Flight Leadership Council (SFLC) assigned an action jointly to the NASA Engineering and Safety Center (NESC) and the External Tank (ET) project to characterize the available dataset [of defect sizes from dissections of foam], identify resultant limitations to statistical treatment of ET as-built foam as part of the overall thermal protection system (TPS) certification, and report to the Program Requirements Change Board (PRCB) and SFLC in September 2004. The NESC statistics team was formed to assist the ET statistics group in August 2004. The NESC's conclusions are presented in this report.
