Lee, F K-H; Chan, C C-L; Law, C-K
2009-02-01
Contrast-enhanced computed tomography (CECT) has been used for delineation of the treatment target in radiotherapy. The altered Hounsfield units due to the injected contrast agent may affect radiation dose calculation. We investigated this effect on intensity modulated radiotherapy (IMRT) of nasopharyngeal carcinoma (NPC). Dose distributions of 15 IMRT plans were recalculated on CECT. Dose statistics for organs at risk (OAR) and treatment targets were recorded for the plain CT-calculated and CECT-calculated plans. Statistical significance of the differences was evaluated. Correlations among the magnitude of the calculated dose difference, tumor size, and level of contrast enhancement were also tested. Differences in nodal mean/median dose were statistically significant, but small (approximately 0.15 Gy for a 66 Gy prescription). In the vicinity of the carotid arteries, the difference in calculated dose was also statistically significant, but only with a mean of approximately 0.2 Gy. We did not observe any significant correlation between the difference in the calculated dose and the tumor size or level of enhancement. The results implied that the calculated dose difference was clinically insignificant and may be acceptable for IMRT planning.
2012 Workplace and Gender Relations Survey of Reserve Component Members: Survey Note and Briefing
2013-05-08
...to be a statistically significant difference at the .05 level of significance. Overview: The ability to calculate annual prevalence rates is a...statistically significant differences for women or men in the overall rate between 2008 and 2012. Of the 2.8% of women who experienced UMAN...
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2010 CFR
2010-07-01
... introductory statistics texts. ... Student's t-test involves calculation of the value of a t-statistic for each comparison of the mean... parameter with its initial background concentration or value. The calculated value of the t-statistic must...
McMullan, Miriam; Jones, Ray; Lea, Susan
2010-04-01
This paper is a report of a correlational study of the relations of age, status, experience and drug calculation ability to the numerical ability of nursing students and Registered Nurses. Competent numerical and drug calculation skills are essential for nurses, as mistakes can put patients' lives at risk. A cross-sectional study was carried out in 2006 in one United Kingdom university. Validated numerical and drug calculation tests were given to 229 second-year nursing students and 44 Registered Nurses attending a non-medical prescribing programme. The numeracy test was failed by 55% of students and 45% of Registered Nurses, while 92% of students and 89% of nurses failed the drug calculation test. Independent of status or experience, older participants (≥35 years) were statistically significantly more able to perform numerical calculations. There was no statistically significant difference between nursing students and Registered Nurses in their overall drug calculation ability, but nurses were statistically significantly more able than students to perform basic numerical calculations and calculations for solids, oral liquids and injections. Both nursing students and Registered Nurses were statistically significantly more able to perform calculations for solids, oral liquids and injections than calculations for drug percentages, drip and infusion rates. To prevent deskilling, Registered Nurses should continue to practise and refresh all the different types of drug calculations as often as possible, with regular (self-)testing of their ability. Time should be set aside in curricula for nursing students to learn how to perform basic numerical and drug calculations. This learning should be reinforced through regular practice and assessment.
NASA Technical Reports Server (NTRS)
Staubert, R.
1985-01-01
Methods for calculating the statistical significance of excess events and the interpretation of the formally derived values are discussed. It is argued that a simple formula for a conservative estimate should generally be used in order to provide a common understanding of quoted values.
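The abstract does not reproduce the formula itself; as a hedged illustration only, a commonly used conservative estimate of the significance of an excess of counts over a Poisson background (function name and counts below are invented) looks like this:

```r
# Conservative significance estimate for an excess of counts over background,
# treating both on-source and off-source counts as Poisson:
#   S = (N_on - N_off) / sqrt(N_on + N_off)
excess_significance <- function(n_on, n_off) {
  (n_on - n_off) / sqrt(n_on + n_off)
}
excess_significance(n_on = 130, n_off = 100)  # ~1.98 sigma
```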
The choice of statistical methods for comparisons of dosimetric data in radiotherapy.
Chaikh, Abdulhamid; Giraud, Jean-Yves; Perrin, Emmanuel; Bresciani, Jean-Pierre; Balosso, Jacques
2014-09-18
Novel irradiation techniques are continuously introduced in radiotherapy to optimize the accuracy, the security and the clinical outcome of treatments. These changes could raise the question of discontinuity in dosimetric presentation and the subsequent need for practice adjustments in case of significant modifications. This study proposes a comprehensive approach to compare different techniques and tests whether their respective dose calculation algorithms give rise to statistically significant differences in the treatment doses for the patient. Statistical investigation principles are presented in the framework of a clinical example based on 62 fields of radiotherapy for lung cancer. The delivered doses in monitor units were calculated using three different dose calculation methods: the reference method calculates the dose without tissue density correction using the Pencil Beam Convolution (PBC) algorithm, whereas the newer methods calculate the dose with 1D and 3D tissue density corrections using the Modified Batho (MB) method and the Equivalent Tissue-Air Ratio (ETAR) method, respectively. The normality of the data and the homogeneity of variance between groups were tested using the Shapiro-Wilk and Levene tests, respectively; then non-parametric statistical tests were performed. Specifically, the dose means estimated by the different calculation methods were compared using Friedman's test and the Wilcoxon signed-rank test. In addition, the correlation between the doses calculated by the three methods was assessed using Spearman's and Kendall's rank tests. Friedman's test showed a significant effect of the calculation method on the delivered dose for lung cancer patients (p < 0.001). The density-correction methods yielded lower doses than PBC, by on average -5 ± 4.4 (SD) for MB and -4.7 ± 5 (SD) for ETAR. Post-hoc Wilcoxon signed-rank tests of paired comparisons indicated that the delivered dose was significantly reduced using density-corrected methods as compared to the reference method. Spearman's and Kendall's rank tests indicated a positive correlation between the doses calculated with the different methods. This paper illustrates and justifies the use of statistical tests and graphical representations for dosimetric comparisons in radiotherapy. The statistical analysis shows the significance of dose differences resulting from two or more techniques in radiotherapy.
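As a hedged sketch (not the authors' code), the test sequence described above can be reproduced in R on invented per-field doses; all variable names and numbers are illustrative, and Levene's test requires the third-party car package:

```r
# Illustrative per-field doses (monitor units) from three calculation methods
set.seed(1)
pbc  <- rnorm(62, 200, 20)           # reference method (no density correction)
mb   <- pbc * rnorm(62, 0.95, 0.02)  # 1D density correction
etar <- pbc * rnorm(62, 0.953, 0.02) # 3D density correction

shapiro.test(pbc)                                    # normality check
doses <- data.frame(dose   = c(pbc, mb, etar),
                    method = factor(rep(c("PBC", "MB", "ETAR"), each = 62)))
car::leveneTest(dose ~ method, data = doses)         # homogeneity of variance
friedman.test(cbind(pbc, mb, etar))                  # overall method effect
wilcox.test(pbc, mb, paired = TRUE)                  # post-hoc paired test
cor.test(pbc, mb, method = "spearman")               # rank correlations
cor.test(pbc, mb, method = "kendall")
```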
Dechartres, Agnes; Bond, Elizabeth G; Scheer, Jordan; Riveros, Carolina; Atal, Ignacio; Ravaud, Philippe
2016-11-30
Publication bias and other reporting bias have been well documented for journal articles, but no study has evaluated the nature of results posted at ClinicalTrials.gov. We aimed to assess how many randomized controlled trials (RCTs) with results posted at ClinicalTrials.gov report statistically significant results and whether the proportion of trials with significant results differs when no treatment effect estimate or p-value is posted. We searched ClinicalTrials.gov in June 2015 for all studies with results posted. We included completed RCTs with a superiority hypothesis and considered results for the first primary outcome with results posted. For each trial, we assessed whether a treatment effect estimate and/or p-value was reported at ClinicalTrials.gov and, if yes, whether results were statistically significant. If no treatment effect estimate or p-value was reported, we calculated the treatment effect and corresponding p-value using results per arm posted at ClinicalTrials.gov when sufficient data were reported. From the 17,536 studies with results posted at ClinicalTrials.gov, we identified 2823 completed phase 3 or 4 randomized trials with a superiority hypothesis. Of these, 1400 (50%) reported a treatment effect estimate and/or p-value. Results were statistically significant for 844 trials (60%), with a median p-value of 0.01 (Q1-Q3: 0.001-0.26). For the 1423 trials with no treatment effect estimate or p-value posted, we could calculate the treatment effect and corresponding p-value using results reported per arm for 929 (65%). For 494 trials (35%), p-values could not be calculated, mainly because of insufficient reporting, censored data, or repeated measurements over time. For the 929 trials for which we could calculate p-values, we found statistically significant results for 342 (37%), with a median p-value of 0.19 (Q1-Q3: 0.005-0.59). Half of the trials with results posted at ClinicalTrials.gov reported a treatment effect estimate and/or p-value, with significant results for 60% of these. p-values could be calculated from results reported per arm at ClinicalTrials.gov for only 65% of the other trials. The proportion of significant results was much lower for these trials, which suggests a selective posting of treatment effect estimates and/or p-values when results are statistically significant.
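As an illustration of how a p-value can be recomputed from per-arm results for a dichotomous outcome (a hedged sketch with invented counts; not necessarily the exact method the authors used):

```r
# Events and sample sizes per arm, as posted in a results table
# (45/200 in the treatment arm vs 30/198 in the control arm, invented)
prop.test(x = c(45, 30), n = c(200, 198), correct = FALSE)  # two-proportion test

# Equivalent 2x2 analysis with an odds ratio and exact p-value
fisher.test(matrix(c(45, 155,
                     30, 168), nrow = 2, byrow = TRUE))
```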
NASA Astrophysics Data System (ADS)
Lin, Shu; Wang, Rui; Xia, Ning; Li, Yongdong; Liu, Chunliang
2018-01-01
Statistical multipactor theories are critical prediction approaches for multipactor breakdown determination. However, these approaches still require a negotiation between the calculation efficiency and accuracy. This paper presents an improved stationary statistical theory for efficient threshold analysis of two-surface multipactor. A general integral equation over the distribution function of the electron emission phase with both the single-sided and double-sided impacts considered is formulated. The modeling results indicate that the improved stationary statistical theory can not only obtain equally good accuracy of multipactor threshold calculation as the nonstationary statistical theory, but also achieve high calculation efficiency concurrently. By using this improved stationary statistical theory, the total time consumption in calculating full multipactor susceptibility zones of parallel plates can be decreased by as much as a factor of four relative to the nonstationary statistical theory. It also shows that the effect of single-sided impacts is indispensable for accurate multipactor prediction of coaxial lines and also more significant for the high order multipactor. Finally, the influence of secondary emission yield (SEY) properties on the multipactor threshold is further investigated. It is observed that the first cross energy and the energy range between the first cross and the SEY maximum both play a significant role in determining the multipactor threshold, which agrees with the numerical simulation results in the literature.
A two-component rain model for the prediction of attenuation statistics
NASA Technical Reports Server (NTRS)
Crane, R. K.
1982-01-01
A two-component rain model has been developed for calculating attenuation statistics. In contrast to most other attenuation prediction models, the two-component model calculates the occurrence probability for volume cells or debris attenuation events. The model performed significantly better than the International Radio Consultative Committee model when used for predictions on earth-satellite paths. It is expected that the model will have applications in modeling the joint statistics required for space diversity system design, the statistics of interference due to rain scatter at attenuating frequencies, and the duration statistics for attenuation events.
[A Review on the Use of Effect Size in Nursing Research].
Kang, Hyuncheol; Yeon, Kyupil; Han, Sang Tae
2015-10-01
The purpose of this study was to introduce the main concepts of statistical testing and effect size, and to provide researchers in nursing science with guidance on how to calculate the effect size for the statistical analysis methods mainly used in nursing. For the t-test, analysis of variance, correlation analysis and regression analysis, which are used frequently in nursing research, the generally accepted definitions of the effect size are explained. Some formulae for calculating the effect size are described with several examples in nursing research. Furthermore, the authors present the required minimum sample size for each example utilizing the G*Power 3 software, the most widely used program for calculating sample size. It is noted that statistical significance testing and effect size measurement serve different purposes, and reliance on only one of them may be misleading. Some practical guidelines are recommended for combining statistical significance testing and effect size measures in order to make more balanced decisions in quantitative analyses.
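A minimal sketch of the workflow described above, using the third-party pwr package as a stand-in for G*Power (group summaries are invented):

```r
# Cohen's d from two group summaries, then the minimum n per group
# for 80% power at alpha = .05 in an independent-samples t-test.
m1 <- 75; m2 <- 70; sd_pooled <- 10
d <- (m1 - m2) / sd_pooled                      # Cohen's d = 0.5
pwr::pwr.t.test(d = d, sig.level = 0.05, power = 0.80,
                type = "two.sample")            # n ~ 64 per group
```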
Nursing students' mathematic calculation skills.
Rainboth, Lynde; DeMasi, Chris
2006-12-01
This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing students' learning outcomes. During a 4-week teaching period, a convenience sample of 54 sophomore-level nursing students were required to complete calculation assignments, were taught one calculation method, and were mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention student group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.
Statistics Using Just One Formula
ERIC Educational Resources Information Center
Rosenthal, Jeffrey S.
2018-01-01
This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc.) from that one formula. It is argued that this approach will…
Your Chi-Square Test Is Statistically Significant: Now What?
ERIC Educational Resources Information Center
Sharpe, Donald
2015-01-01
Applied researchers have employed chi-square tests for more than one hundred years. This paper addresses the question of how one should follow a statistically significant chi-square test result in order to determine the source of that result. Four approaches were evaluated: calculating residuals, comparing cells, ransacking, and partitioning. Data…
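A sketch of the first follow-up approach named above (calculating residuals), using base R's chisq.test on an invented 2x2 table:

```r
# Standardized residuals flag the cells driving a significant chi-square result
tbl <- matrix(c(30, 10,
                20, 40), nrow = 2, byrow = TRUE,
              dimnames = list(group = c("A", "B"), outcome = c("yes", "no")))
res <- chisq.test(tbl)
res$p.value   # overall test
res$stdres    # cells with |standardized residual| > 2 are the likely sources
```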
40 CFR Appendix IV to Part 264 - Cochran's Approximation to the Behrens-Fisher Students' t-test
Code of Federal Regulations, 2011 CFR
2011-07-01
... summary measures to calculate a t-statistic (t*) and a comparison t-statistic (tc). The t* value is compared to the tc value and a conclusion reached as to whether there has been a statistically significant... made in collecting the background data. The t-statistic (tc), against which t* will be compared...
40 CFR Appendix IV to Part 264 - Cochran's Approximation to the Behrens-Fisher Students' t-test
Code of Federal Regulations, 2010 CFR
2010-07-01
... summary measures to calculate a t-statistic (t*) and a comparison t-statistic (tc). The t* value is compared to the tc value and a conclusion reached as to whether there has been a statistically significant... made in collecting the background data. The t-statistic (tc), against which t* will be compared...
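The regulatory text above is truncated; as a hedged sketch based on the commonly described form of Cochran's approximation for the Behrens-Fisher problem (not copied from the CFR itself), t* uses unpooled variances and the comparison value tc is a weighted average of the per-sample critical t values, with weights w_i = s_i^2/n_i:

```r
# One-tailed Cochran approximation sketch (alpha = 0.05, as in a test for a
# significant increase over background); x, y are the two samples.
cochran_t <- function(x, y, alpha = 0.05) {
  wx <- var(x) / length(x)
  wy <- var(y) / length(y)
  t_star <- (mean(x) - mean(y)) / sqrt(wx + wy)
  tc <- (wx * qt(1 - alpha, length(x) - 1) +
         wy * qt(1 - alpha, length(y) - 1)) / (wx + wy)
  list(t_star = t_star, t_comparison = tc, significant = t_star > tc)
}
set.seed(5)
cochran_t(x = rnorm(20, 12, 3), y = rnorm(8, 10, 1))
```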
Conservative Tests under Satisficing Models of Publication Bias.
McCrary, Justin; Christensen, Garret; Fanelli, Daniele
2016-01-01
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%: rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
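Under a normal approximation (an illustration, not a calculation from the paper), the two cutoffs correspond to sharply different two-sided p-value thresholds:

```r
2 * pnorm(-2)   # ~0.046:  conventional cutoff |t| > 2
2 * pnorm(-3)   # ~0.0027: adjusted cutoff    |t| > 3
```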
Tools for Basic Statistical Analysis
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Statistical Analysis Toolset is a collection of eight Microsoft Excel spreadsheet programs, each of which performs calculations pertaining to an aspect of statistical analysis. These programs present input and output data in user-friendly, menu-driven formats, with automatic execution. The following types of calculations are performed: Descriptive statistics are computed for a set of data x(i) (i = 1, 2, 3 . . . ) entered by the user. Normal Distribution Estimates will calculate the statistical value that corresponds to cumulative probability values, given a sample mean and standard deviation of the normal distribution. Normal Distribution from Two Data Points will extend and generate a cumulative normal distribution for the user, given two data points and their associated probability values. Two programs perform two-way analysis of variance (ANOVA) with no replication or generalized ANOVA for two factors with four levels and three repetitions. Linear Regression-ANOVA will curve-fit data to the linear equation y = f(x) and will do an ANOVA to check its significance.
TSP Symposium 2012 Proceedings
2012-11-01
...[table-of-contents excerpt: 7.3 Analysis and Results; 7.4 Threats to Validity and Limitations; 7.5 Conclusions; 7.6 Acknowledgments; Table 12: Overall Statistics of the Experiment; Table 13: Results of Pairwise ANOVA Analysis, Highlighting Statistically Significant Differences]...we calculated the percentage of defects injected. The distribution statistics are shown in Table 2 (mean, lower and upper confidence interval).
An operational definition of a statistically meaningful trend.
Bryhn, Andreas C; Dimberg, Peter H
2011-04-28
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data points is large, a trend may be statistically significant even if the data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends, referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r² and p values are calculated from regressions concerning time and interval mean values. If r² ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
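A minimal R sketch of the criterion described above (the published tool is an Excel add-in; the function name and the choice of five intervals below are assumptions for illustration):

```r
# Split the series into intervals, regress interval means on interval mid-times,
# and call the trend statistically meaningful if r^2 >= 0.65 at p <= 0.05.
meaningful_trend <- function(t, y, n_intervals = 5) {
  bins   <- cut(t, breaks = n_intervals)
  t_mid  <- tapply(t, bins, mean)
  y_mean <- tapply(y, bins, mean)
  fit    <- summary(lm(y_mean ~ t_mid))
  r2 <- fit$r.squared
  p  <- fit$coefficients["t_mid", "Pr(>|t|)"]
  list(r2 = r2, p = p, meaningful = r2 >= 0.65 && p <= 0.05)
}
set.seed(2)
t <- 1:50; y <- 0.1 * t + rnorm(50, 0, 2)   # illustrative noisy upward trend
meaningful_trend(t, y)
```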
Heidel, R Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.
Statistical Signal Process in R Language in the Pharmacovigilance Programme of India.
Kumar, Aman; Ahuja, Jitin; Shrivastava, Tarani Prakash; Kumar, Vipin; Kalaiselvan, Vivekanandan
2018-05-01
The Ministry of Health & Family Welfare, Government of India, initiated the Pharmacovigilance Programme of India (PvPI) in July 2010. The purpose of the PvPI is to collect data on adverse reactions due to medications, analyze it, and use the reference to recommend informed regulatory intervention, besides communicating the risk to health care professionals and the public. The goal of the present study was to apply statistical tools to find the relationship between drugs and ADRs for signal detection by R programming. Four statistical parameters were proposed for quantitative signal detection: IC025, PRR and PRRlb, chi-square, and N11; we calculated these 4 values using R programming. We analyzed 78,983 drug-ADR combinations, and the total count of drug-ADR combinations was 420,060. During the calculation of the statistical parameters, we used 3 variables: (1) N11 (number of counts), (2) N1. (drug margin), and (3) N.1 (ADR margin). The structure and calculation of these 4 statistical parameters in R language are easily understandable. On the basis of the IC value (IC value > 0), out of the 78,983 drug-ADR combinations we found 8,667 combinations to be significantly associated. The calculation of statistical parameters in R language is time-saving and allows one to easily identify new signals in the Indian ICSR (Individual Case Safety Reports) database.
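A hedged sketch of two of the disproportionality statistics named above for one drug-ADR pair, computed from the three margins with standard formulas (counts are invented; the exact IC025 in BCPNN needs a credibility interval, so only the IC point estimate is shown):

```r
n11 <- 60; n_drug <- 1000; n_adr <- 500; n_total <- 420060  # illustrative counts
# Proportional reporting ratio: reporting rate of the ADR with the drug
# relative to its rate with all other drugs
prr <- (n11 / n_drug) / ((n_adr - n11) / (n_total - n_drug))
# Information component (point estimate): observed vs expected co-reporting
ic <- log2(n11 * n_total / (n_drug * n_adr))
# Chi-square on the implied 2x2 contingency table
tbl <- matrix(c(n11,         n_drug - n11,
                n_adr - n11, n_total - n_drug - n_adr + n11), nrow = 2)
unname(chisq.test(tbl)$statistic)
c(PRR = prr, IC = ic)
```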
The large sample size fallacy.
Lantz, Björn
2013-06-01
Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.
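A small simulated illustration of the fallacy described above (numbers invented): with 100,000 observations per group, a trivial mean difference is extremely "significant" while the effect size stays negligible.

```r
set.seed(42)
x <- rnorm(1e5, mean = 0.00, sd = 1)
y <- rnorm(1e5, mean = 0.02, sd = 1)
t.test(x, y)$p.value                                # highly "significant"
(mean(y) - mean(x)) / sqrt((var(x) + var(y)) / 2)   # Cohen's d ~ 0.02, trivial
```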
Gorobets, Yu I; Gorobets, O Yu
2015-01-01
A statistical model is proposed in this paper for describing the orientation of trajectories of unicellular diamagnetic organisms in a magnetic field. A statistical parameter, the effective energy, is calculated on the basis of this model. The resulting effective energy is a statistical characteristic of the trajectories of diamagnetic microorganisms in a magnetic field, connected with their metabolism. The statistical model is applicable when the energy of the thermal motion of bacteria is negligible in comparison with their energy in a magnetic field and the bacteria manifest significant "active random movement", i.e., randomizing motion of a non-thermal nature, for example movement by means of flagella. The energy of this randomizing active self-motion of bacteria is characterized by a new statistical parameter for biological objects, which replaces the energy of randomizing thermal motion in the calculation of the statistical distribution. Copyright © 2014 Elsevier Ltd. All rights reserved.
Transport Coefficients from Large Deviation Functions
NASA Astrophysics Data System (ADS)
Gao, Chloe; Limmer, David
2017-10-01
We describe a method for computing transport coefficients from the direct evaluation of large deviation functions. This method is general, relying on only equilibrium fluctuations, and is statistically efficient, employing trajectory-based importance sampling. Equilibrium fluctuations of molecular currents are characterized by their large deviation functions, which are scaled cumulant generating functions analogous to the free energy. A diffusion Monte Carlo algorithm is used to evaluate the large deviation functions, from which arbitrary transport coefficients are derivable. We find significant statistical improvement over traditional Green-Kubo based calculations. The systematic and statistical errors of this method are analyzed in the context of specific transport coefficient calculations, including the shear viscosity, interfacial friction coefficient, and thermal conductivity.
Koner, Debasish; Barrios, Lizandra; González-Lezana, Tomás; Panda, Aditya N
2014-09-21
A real wave packet based time-dependent method and a statistical quantum method have been used to study the He + NeH(+) (v, j) reaction with the reactant in various ro-vibrational states, on a recently calculated ab initio ground state potential energy surface. Both the wave packet and statistical quantum calculations were carried out within the centrifugal sudden approximation as well as using the exact Hamiltonian. Quantum reaction probabilities exhibit a dense oscillatory pattern for smaller total angular momentum values, which is a signature of resonances in a complex-forming mechanism for the title reaction. Significant differences, found between exact and approximate quantum reaction cross sections, highlight the importance of the inclusion of Coriolis coupling in the calculations. Statistical results are in fairly good agreement with the exact quantum results for ground ro-vibrational states of the reactant. Vibrational excitation greatly enhances the reaction cross sections, whereas rotational excitation has a relatively small effect on the reaction. The nature of the reaction cross section curves is dependent on the initial vibrational state of the reactant and is typical of a late-barrier type potential energy profile.
ppcor: An R Package for a Fast Calculation to Semi-partial Correlation Coefficients.
Kim, Seongho
2015-11-01
Lack of a general matrix formula hampers implementation of the semi-partial correlation, also known as part correlation, at higher orders. This is because the higher-order semi-partial correlation calculation using a recursive formula requires an enormous number of recursive calculations to obtain the correlation coefficients. To resolve this difficulty, we derive a general matrix formula of the semi-partial correlation for fast computation. The semi-partial correlations are then implemented in an R package, ppcor, along with the partial correlation. Owing to the general matrix formulas, users can readily calculate the coefficients of both partial and semi-partial correlations without computational burden. The package ppcor further provides users with the level of statistical significance along with the test statistic.
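A usage sketch for the package described above (pcor, spcor and spcor.test are the documented ppcor entry points; the data set below is just R's built-in mtcars, chosen for illustration):

```r
# install.packages("ppcor")   # if not already installed
library(ppcor)
dat <- mtcars[, c("mpg", "hp", "wt", "qsec")]
pcor(dat)$estimate    # matrix of partial correlations
spcor(dat)$estimate   # matrix of semi-partial (part) correlations
# A single semi-partial correlation with its p-value and test statistic:
spcor.test(dat$mpg, dat$hp, dat[, c("wt", "qsec")])
```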
To P or Not to P: Backing Bayesian Statistics.
Buchinsky, Farrel J; Chadha, Neil K
2017-12-01
In biomedical research, it is imperative to differentiate chance variation from truth before we generalize what we see in a sample of subjects to the wider population. For decades, we have relied on null hypothesis significance testing, where we calculate P values for our data to decide whether to reject a null hypothesis. This methodology is subject to substantial misinterpretation and errant conclusions. Instead of working backward by calculating the probability of our data if the null hypothesis were true, Bayesian statistics allow us instead to work forward, calculating the probability of our hypothesis given the available data. This methodology gives us a mathematical means of incorporating our "prior probabilities" from previous study data (if any) to produce new "posterior probabilities." Bayesian statistics tell us how confidently we should believe what we believe. It is time to embrace and encourage their use in our otolaryngology research.
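A minimal sketch of the forward-working logic described above, using a conjugate Beta-binomial update (all priors and counts invented for illustration):

```r
# Prior: earlier data suggested roughly a 20% response rate
a0 <- 4; b0 <- 16
# New data: 12 responders, 18 non-responders
successes <- 12; failures <- 18
a1 <- a0 + successes; b1 <- b0 + failures   # posterior Beta(a1, b1)
1 - pbeta(0.25, a1, b1)                     # posterior Pr(response rate > 0.25)
```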
Influence of nonlinear effects on statistical properties of the radiation from SASE FEL
NASA Astrophysics Data System (ADS)
Saldin, E. L.; Schneidmiller, E. A.; Yurkov, M. V.
1998-02-01
The paper presents an analysis of the statistical properties of the radiation from a self-amplified spontaneous emission (SASE) free-electron laser operating in the nonlinear mode. The present approach allows one to calculate the following statistical properties of the SASE FEL radiation: time and spectral field correlation functions, the distribution of the fluctuations of the instantaneous radiation power, the distribution of the energy in the electron bunch, the distribution of the radiation energy after a monochromator installed at the FEL amplifier exit, and the radiation spectrum. It has been observed that the statistics of the instantaneous radiation power from a SASE FEL operating in the nonlinear regime change significantly with respect to the linear regime. All numerical results presented in the paper have been calculated for the 70 nm SASE FEL at the TESLA Test Facility under construction at DESY.
Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.
Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan
2016-09-01
Craniofacial reconstruction (CFR) is used to recreate a likeness of the original facial appearance of an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) at anatomical landmarks, which is related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, where differing landmark definitions may affect the comparison of results. In the present study, a total of 90,198 one-to-one corresponding skull vertices were established on 171 head CT scans and the FSTT of each corresponding vertex was calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that 76.12% of all vertices indicate a greater FSTT in males than in females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. Statistically significant sex-related differences were found at 55.12% of all vertices, and statistically significant age-related differences between the three age groups at a majority of vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances were produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective on understanding the distribution of FSTT and the construction of a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.
Chaikh, Abdulhamid; Balosso, Jacques
2016-12-01
To apply bootstrap statistical analysis and dosimetric criteria to assess the change of prescribed dose (PD) for lung cancer needed to maintain the same clinical results when using new generations of dose calculation algorithms. Nine lung cancer cases were studied. For each patient, three treatment plans were generated using exactly the same beam arrangements. In plan 1, the dose was calculated using the pencil beam convolution (PBC) algorithm with heterogeneity correction turned on, using the modified Batho method (PBC-MB). In plan 2, the dose was calculated using the anisotropic analytical algorithm (AAA) and the same PD as plan 1. In plan 3, the dose was calculated using AAA with the monitor units (MUs) obtained from PBC-MB as input. The dosimetric criteria include the MUs, the delivered dose at the isocentre (Diso) and the calculated dose to 95% of the target volume (D95). The bootstrap method was used to assess the significance of the dose differences and to accurately estimate the 95% confidence interval (95% CI). Wilcoxon and Spearman's rank tests were used to calculate p values and the correlation coefficient (ρ). A statistically significant dose difference was found using the point kernel model. A good correlation was observed between both algorithm types, with ρ > 0.9. Using AAA instead of PBC-MB, an adjustment of the PD at the isocentre is suggested. For a given set of patients, we assessed the need to readjust the PD for lung cancer using dosimetric indices and the bootstrap statistical method. Thus, if the goal is to maintain the same clinical results, the PD for lung tumors has to be adjusted with AAA. According to our simulation, we suggest readjusting the PD by 5% and optimizing the beam arrangements to better protect the organs at risk (OARs).
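A minimal sketch of a percentile-bootstrap 95% CI for a paired mean dose difference, as a generic illustration of the bootstrap step described above (per-patient values are invented, not the study's data):

```r
set.seed(7)
d_pbc <- rnorm(9, 66, 2)           # PBC-MB doses for 9 patients (illustrative)
d_aaa <- d_pbc - rnorm(9, 3, 1)    # AAA doses (illustrative)
diff  <- d_aaa - d_pbc
boot_means <- replicate(10000, mean(sample(diff, replace = TRUE)))
quantile(boot_means, c(0.025, 0.975))    # percentile 95% CI for the mean diff
wilcox.test(d_aaa, d_pbc, paired = TRUE) # paired rank test, as in the study
```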
Alterations/corrections to the BRASS Program
NASA Technical Reports Server (NTRS)
Brand, S. N.
1985-01-01
Corrections applied to statistical programs contained in two subroutines of the Bed Rest Analysis Software System (BRASS) are summarized. Two subroutines independently calculate significant values within the BRASS program.
Finding P-Values for F Tests of Hypothesis on a Spreadsheet.
ERIC Educational Resources Information Center
Rochowicz, John A., Jr.
The calculation of the F statistic for a one-factor analysis of variance (ANOVA) and the construction of an ANOVA table are easily implemented on a spreadsheet. This paper describes how to compute the p-value (observed significance level) for a particular F statistic on a spreadsheet. Decision making on a spreadsheet and applications to the…
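The spreadsheet computation described above has a one-line equivalent (the F value and degrees of freedom below are invented; in Excel the corresponding built-in is =F.DIST.RT(F, df1, df2)):

```r
# Observed significance level for F = 4.26 with (2, 27) degrees of freedom
pf(4.26, df1 = 2, df2 = 27, lower.tail = FALSE)   # ~0.025
```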
The fragility of statistically significant findings from randomized trials in head and neck surgery.
Noel, Christopher W; McMullen, Caitlin; Yao, Christopher; Monteiro, Eric; Goldstein, David P; Eskander, Antoine; de Almeida, John R
2018-04-23
The Fragility Index (FI) is a novel tool for evaluating the robustness of statistically significant findings in a randomized control trial (RCT). It measures the number of events upon which statistical significance depends. We sought to calculate the FI scores for RCTs in the head and neck cancer literature where surgery was a primary intervention. Potential articles were identified in PubMed (MEDLINE), Embase, and Cochrane without publication date restrictions. Two reviewers independently screened eligible RCTs reporting at least one dichotomous and statistically significant outcome. The data from each trial were extracted and the FI scores were calculated. Associations between trial characteristics and FI were determined. In total, 27 articles were identified. The median sample size was 67.5 (interquartile range [IQR] = 42-143) and the median number of events per trial was 8 (IQR = 2.25-18.25). The median FI score was 1 (IQR = 0-2.5), meaning that changing one patient from a nonevent to an event in the treatment arm would change the result to a statistically nonsignificant result, or P > .05. The FI score was less than the number of patients lost to follow-up in 71% of cases. The FI score was found to be moderately correlated with P value (ρ = -0.52, P = .007) and with journal impact factor (ρ = 0.49, P = .009) on univariable analysis. On multivariable analysis, only the P value was found to be a predictor of FI score (P = .001). Randomized trials in the head and neck cancer literature where surgery is a primary modality are relatively nonrobust statistically with low FI scores. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
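A sketch of the FI computation as described above (a common implementation choice, using Fisher's exact test; event counts below are invented): move patients in the treatment arm from nonevent to event one at a time until the result loses significance.

```r
# e1/n1: events and sample size in the treatment arm; e2/n2: control arm
fragility_index <- function(e1, n1, e2, n2) {
  fi <- 0
  repeat {
    p <- fisher.test(matrix(c(e1, n1 - e1, e2, n2 - e2), nrow = 2))$p.value
    if (p > 0.05) return(fi)      # significance lost after fi flips
    e1 <- e1 + 1; fi <- fi + 1    # flip one nonevent to an event
    if (e1 > n1) return(NA)       # no more patients to flip
  }
}
fragility_index(e1 = 1, n1 = 40, e2 = 9, n2 = 40)  # small FI -> fragile result
```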
Kono, Miyuki; Miura, Naoto; Fujii, Takao; Ohmura, Koichiro; Yoshifuji, Hajime; Yukawa, Naoichiro; Imura, Yoshitaka; Nakashima, Ran; Ikeda, Takaharu; Umemura, Shin-ichiro; Miyatake, Takafumi; Mimori, Tsuneyo
2015-01-01
Objective To examine how connective tissue diseases affect finger-vein pattern authentication. Methods The finger-vein patterns of 68 patients with connective tissue diseases and 24 healthy volunteers were acquired. Captured as CCD (charge-coupled device) images by transmitting near-infrared light through the fingers, they were followed up once in each season for one year. The similarity of the follow-up patterns to the initial one was evaluated in terms of their normalized cross-correlation C. Results The mean C values calculated for patients tended to be lower than those calculated for healthy volunteers. In midwinter (February in Japan) they showed statistically significant reduction both as compared with patients in other seasons and as compared with season-matched healthy controls, whereas the values calculated for healthy controls showed no significant seasonal changes. Values calculated for patients with systemic sclerosis (SSc) or mixed connective tissue disease (MCTD) showed major reductions in November and, especially, February. Patients with rheumatoid arthritis (RA) and patients with dermatomyositis or polymyositis (DM/PM) did not show statistically significant seasonal changes in C values. Conclusions Finger-vein patterns can be used throughout the year to identify patients with connective tissue diseases, but some attention is needed for patients with advanced disease such as SSc. PMID:26701644
NASA Astrophysics Data System (ADS)
Gomo, M.; Vermeulen, D.
2015-03-01
An investigation was conducted to statistically compare the influence of non-purging and purging groundwater sampling methods on analysed inorganic chemistry parameters and calculated saturation indices. Groundwater samples were collected from 15 monitoring wells drilled in Karoo aquifers, before and after purging, for the comparative study. For the non-purging method, samples were collected from groundwater flow zones located in the wells using electrical conductivity (EC) profiling. The two data sets of non-purged and purged groundwater samples were analysed for inorganic chemistry parameters at the Institute of Groundwater Studies (IGS) laboratory of the Free University in South Africa. Saturation indices for mineral phases found in the database of the PHREEQC hydrogeochemical model were calculated for each data set. Four one-way ANOVA tests were conducted using Microsoft Excel 2007 to investigate whether there is any statistically significant difference between: (1) all inorganic chemistry parameters measured in the non-purged and purged groundwater samples for each specific well, (2) all mineral saturation indices calculated for the non-purged and purged groundwater samples for each specific well, (3) individual inorganic chemistry parameters measured in the non-purged and purged groundwater samples across all wells, and (4) individual mineral saturation indices calculated for non-purged and purged groundwater samples across all wells. For all the ANOVA tests conducted, the calculated p values are greater than 0.05 (significance level) and the test statistic (F) is less than the critical value (Fcrit) (F < Fcrit). The results imply that there was no statistically significant difference between the two data sets. With 95% confidence, it was therefore concluded that the variance between groups was due to random chance rather than to the influence of the sampling methods (the tested factor). It is therefore possible that in some hydrogeologic conditions, non-purged groundwater samples might be just as representative as the purged ones. The findings of this study can provide an important platform for future evidence-oriented research investigations to establish the necessity of purging prior to groundwater sampling in different aquifer systems.
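A minimal R sketch of one such comparison (the study itself used Excel's ANOVA tool; concentrations below are invented):

```r
# One-way ANOVA comparing non-purged vs purged values of one parameter,
# with the F statistic checked against Fcrit at the 0.05 level.
set.seed(11)
nonpurged <- rnorm(15, 120, 15)   # illustrative concentrations, 15 wells
purged    <- rnorm(15, 118, 15)
dat <- data.frame(value  = c(nonpurged, purged),
                  method = rep(c("non-purged", "purged"), each = 15))
summary(aov(value ~ method, data = dat))   # reports F and p
qf(0.95, df1 = 1, df2 = 28)                # Fcrit for (1, 28) df
```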
Residual stress in glass: indentation crack and fractography approaches
Anunmana, Chuchai; Anusavice, Kenneth J.; Mecholsky, John J.
2009-01-01
Objective To test the hypothesis that the indentation crack technique can determine surface residual stresses that are not statistically significantly different from those determined from the analytical procedure using surface cracks, the four-point flexure test, and fracture surface analysis. Methods Soda-lime-silica glass bar specimens (4 mm × 2.3 mm × 28 mm) were prepared and annealed at 650 °C for 30 min before testing. The fracture toughness values of the glass bars were determined from 12 specimens based on induced surface cracks, four-point flexure, and fractographic analysis. To determine the residual stress from the indentation technique, 18 specimens were indented under 19.6 N load using a Vickers microhardness indenter. Crack lengths were measured within 1 min and 24 h after indentation, and the measured crack lengths were compared with the mean crack lengths of annealed specimens. Residual stress was calculated from an equation developed for the indentation technique. All specimens were fractured in a four-point flexure fixture and the residual stress was calculated from the strength and measured crack sizes on the fracture surfaces. Results The results show that there was no significant difference between the residual stresses calculated from the two techniques. However, the differences in mean residual stresses calculated within 1 min compared with those calculated after 24 h were statistically significant (p=0.003). Significance This study compared the indentation technique with the fractographic analysis method for determining the residual stress in the surface of soda-lime silica glass. The indentation method may be useful for estimating residual stress in glass. PMID:19671475
Povlsen, Bo
2012-01-01
Objectives To investigate whether typing speed is proportional to the severity of pain in keyboard workers with work-related upper limb disorder (WRULD). Design Standardized functional typing test with participants scoring pain before and after typing; calculation of typing speed. Participants Fifty-nine patients and six controls. Setting Tertiary hospital centre for hand and upper limb pain. Main outcome measures Pain (VAS 0–10) and calculation of typing speed as words per minute. Results Three subgroups of patients were found based on their typing speed: fast, slow and intermediate. A two-tailed Student's t-test with a significance level of 0.05 was used for evaluation. The typing speeds were significantly different between all three patient groups (P < 0.05). The typing speed was significantly faster in the fastest patient group than in the control group (P = 0.04) and than in the slow and intermediate groups (P < 0.0001). The pain before typing was highest in the 'slow' group for both hands, but this difference was not statistically significant. Conclusion Typing speed is not proportional to the severity of pain in keyboard workers with WRULD. Patients with statistically significantly slower or faster typing speeds do not have statistically different levels of pain. PMID:22299070
Environmental flow allocation and statistics calculator
Konrad, Christopher P.
2011-01-01
The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files. The program writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft Visual Basic for Applications and implemented as a macro in Microsoft Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
Trends in incidence of lung cancer in Croatia from 2001 to 2013: gender and regional differences
Siroglavić, Katarina-Josipa; Polić Vižintin, Marina; Tripković, Ingrid; Šekerija, Mario; Kukulj, Suzana
2017-01-01
Aim To provide an overview of the lung cancer incidence trends in the City of Zagreb (Zagreb), Split-Dalmatia County (SDC), and Croatia in the period from 2001 to 2013. Method Incidence data were obtained from the Croatian National Cancer Registry. For calculating incidence rates per 100 000 population, we used population estimates for the period 2001-2013 from the Croatian Bureau of Statistics. Age-standardized rates of lung cancer incidence were calculated by the direct standardization method using the European Standard Population. To describe incidence trends, we used joinpoint regression analysis. Results Joinpoint analysis showed a statistically significant decrease in lung cancer incidence in men in all regions, with an annual percentage change (APC) of -2.2% for Croatia, -1.9% for Zagreb, and -2.0% for SDC. In women, joinpoint analysis showed a statistically significant increase in the incidence for Croatia, with an APC of 1.4%, a statistically significant increase of 1.0% for Zagreb, and no significant change in trend for SDC. In both genders combined, joinpoint analysis showed a significant decrease in age-standardized incidence rates of lung cancer, with APCs of -1.3% for Croatia, -1.1% for Zagreb, and -1.6% for SDC. Conclusion There was an increase in the female lung cancer incidence rate and a decrease in the male lung cancer incidence rate in Croatia in the 2001-2013 period, with similar patterns observed in all the investigated regions. These results highlight the importance of smoking prevention and cessation policies, especially among women and young people. PMID:29094814
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2014 CFR
2014-07-01
... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2011 CFR
2011-07-01
... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2012 CFR
2012-07-01
... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...
40 CFR Appendix IV to Part 265 - Tests for Significance
Code of Federal Regulations, 2013 CFR
2013-07-01
... changes in the concentration or value of an indicator parameter in periodic ground-water samples when... then be compared to the value of the t-statistic found in a table for t-test of significance at the specified level of significance. A calculated value of t which exceeds the value of t found in the table...
Teaching Statistics Online Using "Excel"
ERIC Educational Resources Information Center
Jerome, Lawrence
2011-01-01
As anyone who has taught or taken a statistics course knows, statistical calculations can be tedious and error-prone, with the details of a calculation sometimes distracting students from understanding the larger concepts. Traditional statistics courses typically use scientific calculators, which can relieve some of the tedium and errors but…
Interrelationships Between 3 Keratoconic Cone Parameters.
Tu, Kyaw L; Tourkmani, Abdo K; Srinivas, Singaram
2017-09-01
To find out the interrelationships between 3 parameters of the keratoconic cone. A total of 101 keratoconic eyes of 58 patients were included in this retrospective case series study. A complete eye examination was performed. Kmax (K) and pachymetry at the thinnest point (T) were obtained from the Pentacam tomographer. The vertex to thinnest pachymetry distance (D for decentration) was calculated using trigonometry. Pearson correlation coefficients between T and D, between T and K, and between D and K were calculated. There is a statistically significant positive correlation between thinnest point pachymetry and decentration (R = 0.366, P = 0.0002) and also statistically significant negative correlation between thinnest point pachymetry and Kmax (R = -0.719, P < 0.00001) and decentration and Kmax (R = -0.281, P = 0.0044). The interrelationships between the 3 keratoconic cone parameters suggest that the thinner cones are largely central, that is, decenter less, but show greater steepening.
Calculating stage duration statistics in multistage diseases.
Komarova, Natalia L; Thalhauser, Craig J
2011-01-01
Many human diseases are characterized by multiple stages of progression. While the typical sequence of disease progression can be identified, there may be large individual variations among patients. Identifying mean stage durations and their variations is critical for statistical hypothesis testing needed to determine if treatment is having a significant effect on the progression, or if a new therapy is showing a delay of progression through a multistage disease. In this paper we focus on two methods for extracting stage duration statistics from longitudinal datasets: an extension of the linear regression technique, and a counting algorithm. Both are non-iterative, non-parametric and computationally cheap methods, which makes them invaluable tools for studying the epidemiology of diseases, with a goal of identifying different patterns of progression by using bioinformatics methodologies. Here we show that the regression method performs well for calculating the mean stage durations under a wide variety of assumptions, however, its generalization to variance calculations fails under realistic assumptions about the data collection procedure. On the other hand, the counting method yields reliable estimations for both means and variances of stage durations. Applications to Alzheimer disease progression are discussed.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2006-01-01
A new method is proposed to compare statistical differences between summary histograms, which are the histograms summed over a large ensemble of individual histograms. It consists of choosing a distance statistic for measuring the difference between summary histograms and using a bootstrap procedure to calculate the statistical significance level. Bootstrapping is an approach to statistical inference that makes few assumptions about the underlying probability distribution that describes the data. Three distance statistics are compared in this study. They are the Euclidean distance, the Jeffries-Matusita distance and the Kuiper distance. The data used in testing the bootstrap method are satellite measurements of cloud systems called cloud objects. Each cloud object is defined as a contiguous region/patch composed of individual footprints or fields of view. A histogram of measured values over footprints is generated for each parameter of each cloud object and then summary histograms are accumulated over all individual histograms in a given cloud-object size category. The results of statistical hypothesis tests using all three distances as test statistics are generally similar, indicating the validity of the proposed method. The Euclidean distance is determined to be most suitable after comparing the statistical tests of several parameters with distinct probability distributions among three cloud-object size categories. Impacts on the statistical significance levels resulting from differences in the total lengths of satellite footprint data between two size categories are also discussed.
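A hedged sketch of the proposed procedure for two summary histograms, using the Euclidean distance as the test statistic and a resampling null from pooled values (a generic reconstruction; the data, bin choices, and resampling details are assumptions, not the authors' exact implementation):

```r
# Euclidean distance between two normalized histograms over common bins
hist_dist <- function(x, y, breaks) {
  hx <- hist(x, breaks = breaks, plot = FALSE)$density
  hy <- hist(y, breaks = breaks, plot = FALSE)$density
  sqrt(sum((hx - hy)^2))
}
set.seed(3)
x <- rnorm(500); y <- rnorm(400, 0.3)      # illustrative footprint values
breaks <- seq(-5, 5, by = 0.5)
d_obs  <- hist_dist(x, y, breaks)
pooled <- c(x, y)                           # bootstrap under the null
d_null <- replicate(2000,
  hist_dist(sample(pooled, length(x), replace = TRUE),
            sample(pooled, length(y), replace = TRUE), breaks))
mean(d_null >= d_obs)                       # bootstrap significance level
```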
Peters, Marloes J M; Wierts, Roel; Jutten, Elisabeth M C; Halders, Servé G E A; Willems, Paul C P H; Brans, Boudewijn
2015-11-01
A complication after spinal fusion surgery is pseudarthrosis, but its radiological diagnosis is of limited value. (18)F-fluoride PET, with its ability to assess bone metabolism activity, could be of value. The goal of this study was to assess the clinical feasibility of calculating the static standardized uptake value (SUV) from a short dynamic scan without the use of blood sampling, thereby obtaining all dynamic and static parameters in a scan of only 30 min. This approach was tested on a retrospective patient population with persisting pain after spinal fusion surgery. In 16 patients, SUVs (SUVmax, SUVmean) and kinetic parameters (K1, k2, k3, vb, Ki,NLR, K1/k2, k3/(k2+k3), Ki,patlak) were derived from static and dynamic PET/CT scans of operated and control regions of the spine, after intravenous administration of 156-214 MBq (18)F-fluoride. Parameter differences between control and operated regions, as well as between pseudarthrosis and fused segments, were evaluated. SUVmean at 30 and 60 min was calculated from kinetic parameters obtained from the dynamic data set (SUVmean,2TCM). Agreement between measured and calculated SUVs was evaluated through Bland-Altman plots. Overall, statistically significant differences between control and operated regions were observed for SUVmax, SUVmean, Ki,NLR, Ki,patlak, K1/k2 and k3/(k2+k3). Diagnostic CT showed pseudarthrosis in 6/16 patients, while in 10/16 patients segments were fused. Of all parameters, only those regarding the incorporation of bone [Ki,NLR, Ki,patlak, k3/(k2+k3)] differed statistically significantly in the intervertebral disc space between the pseudarthrosis and fused patient groups. The mean values of the patient-specific blood clearance rate [Formula: see text] differed statistically significantly between the pseudarthrosis and the fusion group, with a p value of 0.011. This may correspond with the lack of statistical significance of the SUV values between pseudarthrosis and fused patients. Bland-Altman plots show that calculated SUVmean,2TCM values corresponded well with the measured SUVmean values. This study shows the feasibility of a 30-min dynamic (18)F-fluoride PET/CT scan, which may provide dynamic parameters clinically relevant to the diagnosis of pseudarthrosis.
Results of Propellant Mixing Variable Study Using Precise Pressure-Based Burn Rate Calculations
NASA Technical Reports Server (NTRS)
Stefanski, Philip L.
2014-01-01
A designed experiment was conducted in which three mix processing variables (pre-curative addition mix temperature, pre-curative addition mixing time, and mixer speed) were varied to estimate their effects on within-mix propellant burn rate variability. The chosen discriminator for the experiment was the 2-inch diameter by 4-inch long (2x4) Center-Perforated (CP) ballistic evaluation motor. Motor nozzle throat diameters were sized to produce a common targeted chamber pressure. Initial data analysis did not show a statistically significant effect. Because propellant burn rate must be directly related to chamber pressure, a method was developed that showed statistically significant effects on chamber pressure (either maximum or average) from adjustments to the process settings. Burn rates were calculated from chamber pressures and then normalized to a common pressure for comparative purposes. The pressure-based method of burn rate determination showed a significant reduction in error when compared to results obtained from Brooks' modification of the propellant web-bisector burn rate determination method. Analysis of effects using burn rates calculated by the pressure-based method showed a significant correlation of within-mix burn rate dispersion to mixing duration and the quadratic of mixing duration. The findings were confirmed in a series of mixes that examined the effects of mixing time on burn rate variation, which yielded the same results.
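The abstract does not state the exact normalization used; a common convention is Vieille's (Saint-Robert's) law, r = a·P^n. Under that assumption, normalizing a measured burn rate to a common reference pressure can be sketched as:

```python
def normalize_burn_rate(r_meas, p_meas, p_ref, n):
    """Scale a burn rate measured at chamber pressure p_meas to a common
    reference pressure p_ref, assuming Vieille's law r = a * P**n.
    The pressure exponent n is an assumed propellant property; the paper
    does not state the form of normalization it used."""
    return r_meas * (p_ref / p_meas) ** n

# e.g. a hypothetical 0.40 in/s rate at 900 psi, normalized to 1000 psi, n = 0.35
r_ref = normalize_burn_rate(0.40, 900.0, 1000.0, 0.35)
```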
Zhao, Ren-Wu; Guo, Zhi-Qiang; Zhang, Ru-Xin
2016-06-01
A growing number of molecular epidemiological studies have been conducted to evaluate the association between human papillomavirus (HPV) infection and the malignancy of sinonasal inverted papilloma (SNIP). However, the results remain inconclusive. Here, a meta-analysis was conducted to quantitatively assess this association. Case-control studies investigating SNIP tissues for the presence of HPV DNA were identified. The odds ratios (ORs) and 95% confidence intervals (CIs) were calculated by the Mantel-Haenszel method. An assessment of publication bias and a sensitivity analysis were also performed. We calculated a pooled OR of 2.16 (95% CI: 1.46-3.21, P<0.001) without statistically significant heterogeneity or publication bias. Stratification by HPV type showed a stronger association for patients with high-risk HPV (hrHPV) types, HPV-16, HPV-18, and HPV-16/18 infection (OR=8.8 [95% CI: 4.73-16.38], 8.04 [95% CI: 3.34-19.39], 18.57 [95% CI: 4.56-75.70], and 26.24 [95% CI: 4.35-158.47], respectively). When only PCR studies were used, pooled ORs for patients with hrHPV, HPV-16, and HPV-18 infection still reached statistical significance. However, Egger's test reflected significant publication bias in the HPV-16 sub-analysis (P=0.06), and the adjusted OR was no longer statistically significant (OR=1.65, 95% CI: 0.58-4.63). These results suggest that HPV infection, especially hrHPV (HPV-18), is significantly associated with malignant SNIP. Copyright © 2016 Elsevier B.V. All rights reserved.
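As a sketch of the pooling step, the Mantel-Haenszel odds ratio over a set of 2×2 case-control tables can be computed as below. The counts are hypothetical and are not data from the meta-analysis.

```python
def mantel_haenszel_or(tables):
    """Pooled odds ratio across 2x2 tables ((a, b), (c, d)), where
    a = HPV+ cases, b = HPV- cases, c = HPV+ controls, d = HPV- controls."""
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a * d / n   # weight each study by its size
        den += b * c / n
    return num / den

# Two hypothetical studies:
pooled_or = mantel_haenszel_or([((12, 30), (5, 40)), ((8, 22), (3, 35))])
```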
Mathematical ability of first year undergraduate paramedic students-A before and after study.
Eastwood, Kathryn; Boyle, Malcolm; Kim, Visal; Stam, Nathan; Williams, Brett
2015-11-01
An ability to accurately perform drug calculations unassisted is an essential skill for all health professionals, with various occupation-specific stressors exacerbating mathematical deficiencies. The objective of this study was to determine the unaided mathematical ability of first year undergraduate paramedic students before and after mathematical and drug calculation tutorials. Students were administered a questionnaire containing demographic, drug calculation and arithmetic questions during week one of the semester, before the tutorials. During the semester, students participated in three 2-hour tutorials which included both mathematical and drug calculation questions, without the assistance of computational devices. At the end of the semester, students sat a summative drug calculation examination, from which five key questions were compared with similar questions from the first questionnaire. Descriptive statistics were used for the demographic data, with a paired t-test comparing the questionnaire and examination results. Drug calculation and mathematical ability was markedly improved following the tutorials: the mean score of correct answers was 1.74 (SD 1.4) before and 4.14 (SD 0.93) after, p<0.0001. When comparing the correct results for the same question type, there were statistically significant differences in four of five different drug calculations: volume of drug drawn up 10 v 57, p<0.0001; infusion rate 29 v 31, p=0.717; drip rate 16 v 54, p<0.0001; volume from a syringe 30 v 59, p<0.0001; and drug dose 42 v 62, p<0.0001. Total errors reduced from 188 to 45. First year undergraduate paramedic students initially demonstrated a poor ability to complete mathematical and drug calculations without the assistance of computational devices. This improved significantly following appropriate education and practice. Further research is required to determine the retention of this ability over time. Copyright © 2015 Elsevier Ltd. All rights reserved.
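The before/after comparison reported above is a paired design; a minimal version of the analysis (with hypothetical per-student scores, not the study's data) might look like:

```python
from scipy import stats

# Hypothetical number of correct answers (out of 5) per student.
before = [1, 2, 0, 3, 2, 1, 2, 3, 1, 2]
after = [4, 5, 3, 5, 4, 3, 4, 5, 4, 4]

t, p = stats.ttest_rel(before, after)  # paired t-test, as used in the study
```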
The challenge of identifying greenhouse gas-induced climatic change
NASA Technical Reports Server (NTRS)
Maccracken, Michael C.
1992-01-01
Meeting the challenge of identifying greenhouse gas-induced climatic change involves three steps. First, observations of critical variables must be assembled, evaluated, and analyzed to determine that there has been a statistically significant change. Second, reliable theoretical (model) calculations must be conducted to provide a definitive set of changes for which to search. Third, a quantitative and statistically significant association must be made between the projected and observed changes to exclude the possibility that the changes are due to natural variability or other factors. This paper provides a qualitative overview of scientific progress in successfully fulfilling these three steps.
Robustness of Multiple Objective Decision Analysis Preference Functions
2002-06-01
p, p′: the probability of some event. p_i, q_i: the probability of event i. Π: an aggregation of proportional data used in calculating a test … statistical tests of the significance of the term, and is also conducted in a multivariate framework rather than the ROSA univariate approach. A … residual error is ê = y − ŷ (45). The coefficient provides a ready indicator of the contribution for the associated variable, and statistical tests …
Plant selection for ethnobotanical uses on the Amalfi Coast (Southern Italy).
Savo, V; Joy, R; Caneva, G; McClatchey, W C
2015-07-15
Many ethnobotanical studies have investigated selection criteria for medicinal and non-medicinal plants. In this paper we test several statistical methods using different ethnobotanical datasets in order to 1) define to what extent the nature of the datasets can affect the interpretation of results; 2) determine whether the selection of plants for different uses is based on phylogeny or on other selection criteria. We considered three different ethnobotanical datasets: two datasets of medicinal plants and a dataset of non-medicinal plants (handicraft production, domestic and agro-pastoral practices), and two floras of the Amalfi Coast. We performed residual analysis from linear regression, the binomial test and the Bayesian approach for calculating under-used and over-used plant families within ethnobotanical datasets. Percentages of agreement were calculated to compare the results of the analyses. We also analyzed the relationship between plant selection and phylogeny, chorology, life form and habitat using the chi-square test. Pearson's residuals for each of the significant chi-square analyses were examined to investigate alternative hypotheses of plant selection criteria. Results of the three statistical methods differed within the same dataset, and between different datasets and floras, but with some similarities. In the two medicinal datasets, only Lamiaceae was identified in both floras as an over-used family by all three statistical methods. All statistical methods in one flora agreed that Malvaceae was over-used and Poaceae under-used, but this was not consistent with the results of the second flora, in which one statistical result was non-significant. All other families showed some discrepancy in significance across methods or floras. Significant over- or under-use was observed in only a minority of cases. The chi-square analyses were significant for phylogeny, life form and habitat. Pearson's residuals indicated a non-random selection of woody species for non-medicinal uses and an under-use of plants of temperate forests for medicinal uses. Our study showed that selection criteria for plant uses (including medicinal) are not always based on phylogeny. The comparison of different statistical methods (regression, binomial and Bayesian) under different conditions led to the conclusion that the most conservative results are obtained using regression analysis.
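The chi-square/Pearson-residual step described above can be sketched as follows; the counts and category labels are hypothetical, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of species used vs. not used, by life form.
observed = np.array([[40, 160],    # woody: used, not used
                     [25, 275]])   # herbaceous: used, not used

chi2, p, dof, expected = chi2_contingency(observed)
pearson_residuals = (observed - expected) / np.sqrt(expected)
# Residuals well above ~2 flag over-used cells; well below ~-2, under-used ones.
```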
Manzo, Karen; Tiesman, Hope; Stewart, Jera; Hobbs, Gerald R; Knox, Sarah S
2015-01-01
We examined racial/ethnic and gender-specific associations between suicide ideation/attempts and risky behaviors, sadness/hopelessness, and victimization in Montana American Indian and White youth using 1999-2011 Youth Risk Behavior Survey data. Logistic regression was used to calculate odds ratios and 95% confidence intervals in stratified racial/ethnic-gender groups. The primary results of this study show that although the American Indian youth reported suicidal thoughts and attempts at significantly higher rates than the White youth, they had fewer statistically significant predictors compared to the White youth. Sadness/hopelessness was the strongest, and the only statistically significant, predictor of suicide ideation/attempts common across all four groups. The unhealthy weight control cluster was a significant predictor for the White youth and the American Indian/Alaska Native girls; the alcohol/tobacco/marijuana cluster was a significant predictor for the American Indian boys only. Results show important differences across the groups and indicate directions for future research targeting prevention and intervention.
Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.
2000-01-01
A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
Heskes, Tom; Eisinga, Rob; Breitling, Rainer
2014-11-21
The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value, along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis of transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order-of-magnitude increase in throughput compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
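To make the computational burden concrete, a naive permutation calculation of rank product p-values (the kind of brute-force approach whose cost motivates the paper's bounds) is sketched below; the function name and the pooled-null shortcut are illustrative assumptions.

```python
import numpy as np

def rank_product_pvalues(data, n_perm=1000, seed=None):
    """Rank product per molecule (rows) over replicates (columns), with a
    permutation p-value from a pooled null distribution."""
    rng = np.random.default_rng(seed)
    n_mol, n_rep = data.shape
    ranks = np.argsort(np.argsort(data, axis=0), axis=0) + 1  # 1 = smallest
    rp = np.exp(np.log(ranks).mean(axis=1))  # geometric mean of ranks

    null = np.empty((n_perm, n_mol))
    for i in range(n_perm):
        perm = np.column_stack([rng.permutation(n_mol) + 1
                                for _ in range(n_rep)])
        null[i] = np.exp(np.log(perm).mean(axis=1))
    flat = np.sort(null.ravel())
    # P(null RP <= observed RP): small RP = consistently extreme ranks.
    return rp, np.searchsorted(flat, rp, side="right") / flat.size

# Hypothetical data: 100 molecules, 4 replicates.
rp, p = rank_product_pvalues(np.random.default_rng(0).normal(size=(100, 4)))
```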
Popov, I; Valašková, J; Štefaničková, J; Krásnik, V
2017-01-01
A substantial part of the population suffers from some kind of refractive error. It is envisaged that their prevalence may change with the development of society. The aim of this study is to determine the prevalence of refractive errors using calculations based on the Gullstrand schematic eye model. We used the Gullstrand schematic eye model to calculate refraction retrospectively. Refraction was presented as the need for glasses correction at a vertex distance of 12 mm. The necessary data were obtained using the Lenstar LS900 optical biometer. Data which could not be obtained due to the limitations of the device were substituted by theoretical data from the Gullstrand schematic eye model. Only analyses from the right eyes were presented. The data were interpreted using descriptive statistics, Pearson correlation and the t-test. The statistical tests were conducted at a significance level of 5%. Our sample included 1663 patients (665 male, 998 female) within the age range of 19 to 96 years. Average age was 70.8 ± 9.53 years. Average refraction of the eye was 2.73 ± 2.13 D (males 2.49 ± 2.34, females 2.90 ± 2.76). The mean absolute error from emmetropia was 3.01 ± 1.58 (males 2.83 ± 2.95, females 3.25 ± 3.35). 89.06% of the sample was hyperopic, 6.61% myopic and 4.33% emmetropic. We did not find any correlation between refraction and age. Females were more hyperopic than males. We did not find any statistically significant hypermetropic shift of refraction with age. According to our estimation, the calculations of refractive errors using the Gullstrand schematic eye model showed a significant hypermetropic shift of more than +2 D. Our results could be used in future for comparing the prevalence of refractive errors using the same methods we used. Key words: refractive errors, refraction, Gullstrand schematic eye model, population, emmetropia.
NASA Astrophysics Data System (ADS)
Allen, David
Some informal discussions among educators regarding student motivation and academic performance have included the topic of magnet schools. The premise is that a focused theme, such as an aspect of science, positively affects student motivation and academic achievement. However, there is limited research involving magnet schools and their influence on student motivation and academic performance. This study provides empirical data for the discussion about magnet schools' influence on motivation and academic ability. This study utilized path analysis in a structural equation modeling framework to simultaneously investigate the relationships between demographic exogenous independent variables, the independent variable of attending a science or technology magnet middle school, and the dependent variables of motivation to learn science and academic achievement in science. Due to the categorical nature of the variables, Bayesian statistical analysis was used to calculate the path coefficients and the standardized effects for each relationship in the model. The coefficients of determination were calculated to determine the amount of variance each path explained. Only five of 21 paths were statistically significant. Only one of the five statistically significant paths (Attended Magnet School to Motivation to Learn Science) explained a noteworthy amount (45.8%) of the variance.
Meteor trail footprint statistics
NASA Astrophysics Data System (ADS)
Mui, S. Y.; Ellicott, R. C.
Footprint statistics derived from field-test data are presented. The statistics are the probability that two receivers will lie in the same footprint. The dependence of the footprint statistics on the transmitter range, link orientation, and antenna polarization is examined. Empirical expressions for the footprint statistics are presented. The need to distinguish the instantaneous footprint, which is the area illuminated at a particular instant, from the composite footprint, which is the total area illuminated during the lifetime of the meteor trail, is explained. The statistics for the instantaneous and composite footprints have been found to be similar. The only significant difference lies in the parameter that represents the probability of two colocated receivers being in the same footprint. The composite footprint statistics can be used to calculate the space diversity gain of a multiple-receiver system. The instantaneous footprint statistics are useful in the evaluation of the interference probability in a network of meteor burst communication nodes.
Stepp, Cara E
2013-03-01
The relative fundamental frequency (RFF) surrounding production of a voiceless consonant has previously been shown to be lower in speakers with hypokinetic dysarthria and Parkinson's disease (PD) relative to age- and sex-matched controls. Here, RFF was calculated in 32 speakers with PD without overt hypokinetic dysarthria and 32 age- and sex-matched controls to better understand the relationships between RFF and PD progression, medication status, and sex. Results showed that RFF was statistically significantly lower in individuals with PD compared with healthy age-matched controls, and was statistically significantly lower in individuals diagnosed at least 5 yrs prior to experimentation relative to individuals recorded less than 5 yrs past diagnosis. Contrary to previous trends, no effect of medication was found. However, a statistically significant effect of sex on offset RFF was shown, with lower values in males relative to females. Future work examining the physiological bases of RFF is warranted.
Currens, J.C.
1999-01-01
Analytical data for nitrate and triazines from 566 samples collected over a 3-year period at Pleasant Grove Spring, Logan County, KY, were statistically analyzed to determine the minimum data set needed to calculate meaningful yearly averages for a conduit-flow karst spring. Results indicate that a biweekly sampling schedule augmented with bihourly samples from high-flow events will provide meaningful suspended-constituent and dissolved-constituent statistics. Unless collected over an extensive period of time, daily samples may not be representative and may also be autocorrelated. All high-flow events resulting in a significant deflection of a constituent from base-line concentrations should be sampled. Either the geometric mean or the flow-weighted average of the suspended constituents should be used. If automatic samplers are used, then they may be programmed to collect storm samples as frequently as every few minutes to provide details on the arrival time of constituents of interest. However, only samples collected bihourly should be used to calculate averages. By adopting a biweekly sampling schedule augmented with high-flow samples, the need to continuously monitor discharge, or to search for and analyze existing data to develop a statistically valid monitoring plan, is lessened.
Deformation effect on spectral statistics of nuclei
NASA Astrophysics Data System (ADS)
Sabri, H.; Jalili Majarshin, A.
2018-02-01
In this study, we investigated relations between the spectral statistics of atomic nuclei and their degrees of deformation. To this aim, the empirical energy levels of 109 even-even nuclei in the 22 ≤ A ≤ 196 mass region were classified according to their experimental and calculated quadrupole, octupole, hexadecapole and hexacontatetrapole deformation values and analyzed by random matrix theory. Our results show an obvious relation between the regularity of nuclei and strong quadrupole, hexadecapole and hexacontatetrapole deformations, but for nuclei whose octupole deformations are nonzero we observed GOE-like statistics.
Exercise reduces depressive symptoms in adults with arthritis: Evidential value.
Kelley, George A; Kelley, Kristi S
2016-07-12
To determine whether evidential value exists that exercise reduces depression in adults with arthritis and other rheumatic conditions. Utilizing data derived from a prior meta-analysis of 29 randomized controlled trials comprising 2449 participants (1470 exercise, 979 control) with fibromyalgia, osteoarthritis, rheumatoid arthritis or systemic lupus erythematosus, a new method, P-curve, was utilized to assess for evidentiary worth as well as dismiss the possibility of discriminating reporting of statistically significant results regarding exercise and depression in adults with arthritis and other rheumatic conditions. Using the method of Stouffer, Z-scores were calculated to examine selective-reporting bias. An alpha (P) value < 0.05 was deemed statistically significant. In addition, average power of the tests included in P-curve, adjusted for publication bias, was calculated. Fifteen of 29 studies (51.7%) with exercise and depression results were statistically significant (P < 0.05) while none of the results were statistically significant with respect to exercise increasing depression in adults with arthritis and other rheumatic conditions. Right-skew to dismiss selective reporting was identified (Z = -5.28, P < 0.0001). In addition, the included studies did not lack evidential value (Z = 2.39, P = 0.99), nor did they lack evidential value and were P-hacked (Z = 5.28, P > 0.99). The relative frequencies of P-values were 66.7% at 0.01, 6.7% each at 0.02 and 0.03, 13.3% at 0.04 and 6.7% at 0.05. The average power of the tests included in P-curve, corrected for publication bias, was 69%. Diagnostic plot results revealed that the observed power estimate was a better fit than the alternatives. Evidential value results provide additional support that exercise reduces depression in adults with arthritis and other rheumatic conditions.
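A simplified version of the Stouffer combination used above (ignoring the pp-value transformation that a full p-curve analysis applies) can be sketched as:

```python
import numpy as np
from scipy.stats import norm

def stouffer_z(pvalues):
    """Combine one-sided p-values: Z = sum(z_i) / sqrt(k), z_i = Phi^-1(1 - p_i)."""
    z = norm.isf(np.asarray(pvalues, float))
    return z.sum() / np.sqrt(len(z))

# Hypothetical significant p-values entering a p-curve:
Z = stouffer_z([0.003, 0.01, 0.02, 0.04])
p_combined = norm.sf(Z)
```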
Hayes, Lawrence D; Sculthorpe, Nicholas; Young, John D; Baker, Julien S; Grace, Fergal M
2014-12-01
Due to their noninvasive, convenient, and practical nature, salivary testosterone (sal-T) and cortisol (sal-C) are frequently used in clinical and applied settings. However, few studies report biological and analytical error, and even fewer report the 'critical difference', which is the change required before a true biological difference can be claimed. It was hypothesized that (a) exercise would result in a statistically significant change in sal-C and sal-T and (b) the exercise-induced change would be within the critical difference for both salivary hormones. In study 1, we calculated the critical difference of sal-T and sal-C in 18 healthy adult males aged 23.2 ± 3.0 years, sampled every 60 min in a seated position over a 12-h period (08:00-20:00 hours). As proof of concept, sal-C and sal-T were also obtained pre-exercise and at 5 and 60 min after a maximal exercise protocol in a separate group of 17 healthy males (aged 20.1 ± 2.8 years; study 2). The critical difference of sal-T was calculated as 90%. For sal-C, the critical difference was 148% (study 1). Maximal exercise was associated with statistically significant (p < 0.05) changes in sal-T and sal-C. However, these changes were all within the critical difference range. Results from this investigation indicate that a large magnitude of change for sal-C and sal-T is required before a biologically significant mean change can be claimed. Studies utilizing sal-T and sal-C should appreciate the critical difference of these measures and assess the biological significance of any statistical changes.
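The critical difference (often called the reference change value) is conventionally computed from the analytical and within-subject coefficients of variation; a sketch under that assumption, since the abstract does not spell out its formula:

```python
import math

def critical_difference(cv_analytical, cv_within):
    """Reference change value in percent: the change a measurement must exceed
    before a true biological difference can be claimed, assuming
    RCV = sqrt(2) * 1.96 * sqrt(CVa**2 + CVi**2), with CVs in percent."""
    return math.sqrt(2.0) * 1.96 * math.hypot(cv_analytical, cv_within)

# Hypothetical CVs: analytical 8%, within-subject 30% -> RCV of roughly 86%
cd = critical_difference(8.0, 30.0)
```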
Accurate coding in sepsis: clinical significance and financial implications.
Chin, Y T; Scattergood, N; Thornber, M; Thomas, S
2016-09-01
Sepsis is a major healthcare problem and a leading cause of death worldwide. UK hospital mortality statistics and payments for patient episodes of care are calculated from clinical coding data. The accuracy of these data depends on the quality of coding. This study aimed to investigate whether patients with significant bacteraemia are coded for sepsis and to estimate the financial costs of miscoding. Of 54 patients with a significant bacteraemia over a one-month period, only 19% had been coded for sepsis. This is likely to lead to falsely high calculated hospital mortality. Furthermore, this resulted in an underpayment of £21,000 for one month alone. Copyright © 2016 The Healthcare Infection Society. All rights reserved.
Bolog, N.; Oancea, I.; Andreisek, G.; Mangrau, Angelica; Caruntu, F.
2009-01-01
Background & Aims: The purpose of the study is to evaluate the accuracy of the C/RL, RPN, and EGF in diagnosing cirrhosis. Methods: The study population included 95 cirrhotic patients in the cirrhosis group (56 men, 39 women; age range 14-76; mean age 52.3) and 57 subjects in the control group (26 men, 31 women; age range 18-83; mean age 51). All MR examinations were performed using the same protocol. Two radiologists independently assessed data sets in two different reading sessions. The sensitivity, specificity, and accuracy and the relative risk of the signs in diagnosing cirrhosis were calculated. The diagnostic accuracy of the C/RL sign was calculated using the ROC curve. The statistical significance of any difference of each sign between different classes of cirrhosis was also calculated. Results: The interobserver agreement between the readers was excellent (κ ≥ 0.81; 95% CI: 0.92, 1.0). There was a statistically significant difference in the diagnostic value of C/RL, RPN, and EGF between cirrhotic patients and the control group (p<0.001). The sensitivity, specificity, and accuracy were 72%, 87%, and 78% for C/RL; 67%, 87%, and 75% for RPN; and 49%, 91%, and 65% for EGF. C/RL (OR=18.95) and RPN (OR=14.74) showed a higher risk for cirrhosis compared to EGF (OR=14.74). There was a statistically significant difference between C/RL and EGF (p=0.002) and between RPN and EGF for the Child A class of cirrhosis (p=0.037). Conclusion: The C/RL and RPN signs have similar performance in the diagnosis of cirrhosis, with higher diagnostic performance than EGF. PMID:24778811
Statistical analysis and trends of wet snow avalanches in the French Alps over the period 1959-2010
NASA Astrophysics Data System (ADS)
Naaim, Mohamed
2017-04-01
When an avalanche contains a significant proportion of wet snow, its characteristics and behavior change significantly (the flow becomes heterogeneous and polydisperse). Even though wet snow avalanches are slow on a given steep slope, they can flow over gentle slopes and reach the same extensions as dry avalanches. To highlight the link between climate warming and the proliferation of wet snow avalanches, we crossed two well-documented avalanche databases: the permanent avalanche chronicle (EPA) and meteorological re-analyses. For each avalanche referenced in EPA, a moisture index I is built. It represents the ratio of the thickness of the wet snow layer to the total snow thickness, at the date of the avalanche, on the concerned massif at 2400 m a.s.l. The daily and annual proportions of avalanches exceeding a given threshold of I are calculated for each massif of the French Alps. The statistical distribution of wet avalanches per massif is calculated over the period 1959-2009. The statistical quantities are also calculated over two successive periods of the same duration, 1959-1984 and 1984-2009, and the annual evolution of the proportion of wet avalanches is studied using time-series tools to detect potential ruptures or trends. This study showed that about 77% of avalanches on the French alpine massifs mobilize dry snow. The probability of having an avalanche with a moisture index greater than 10% in a given year is 0.2. This value varies from one massif to another. The analysis between the two successive periods showed a significant growth of wet avalanches on 20 massifs and a decrease on 3 massifs. The study of time-series confirmed these trends, which are at the level of the inter-annual variability.
Breast MRI background parenchymal enhancement (BPE) correlates with the risk of breast cancer.
Telegrafo, Michele; Rella, Leonarda; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe; Moschetta, Marco
2016-02-01
To investigate whether background parenchymal enhancement (BPE) correlates with breast cancer, searching for any significant difference in BPE pattern distribution between benign and malignant lesions. 386 patients, including 180 pre-menopausal (group 1) and 206 post-menopausal (group 2), underwent MR examination. Two radiologists evaluated MR images, classifying normal BPE as minimal, mild, moderate or marked. The two groups of patients were subdivided into 3 categories based on MRI findings (negative, benign and malignant lesions). The distribution of BPE patterns within the two groups and within the three MR categories was calculated. The χ2 test was used to evaluate BPE type distribution in the three patient categories, and any statistically significant correlation of BPE with lesion type was calculated. The Student t test was applied to search for any statistically significant difference between BPE type rates in groups 1 and 2. The χ2 test demonstrated a statistically significant difference in the distribution of BPE types in negative patients and benign lesions as compared with malignant ones (p<0.05). A significantly higher prevalence of moderate and marked BPE was found among malignant lesions (group 1: 32% and 42%, respectively; group 2: 31% and 46%, respectively), while a predominance of minimal and mild BPE was found among negative patients (group 1: 60% and 36%, respectively; group 2: 68% and 32%, respectively) and benign lesions (group 1: 54% and 38%, respectively; group 2: 75% and 17%, respectively). The Student t test did not show a statistically significant difference between BPE type rates in groups 1 and 2 (p>0.05). Normal BPE could correlate with the risk of breast cancer, with moderate and marked BPE patterns associated with malignant lesions in both pre- and post-menopausal women. Copyright © 2015 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uvarov, Vladimir, E-mail: vladimiru@savion.huji.ac.il; Popov, Inna
2013-11-15
Crystallite size values were determined by X-ray diffraction methods for 183 powder samples. The tested size range was from a few to several hundred nanometers. Crystallite size was calculated with direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld procedure via the application of a series of commercial and free software packages. The results were statistically treated to estimate the significance of the difference in size resulting from these methods. We also estimated the effect of acquisition conditions (Bragg–Brentano and parallel-beam geometry, step size, counting time) and data processing on the calculated crystallite size values. On the basis of the obtained results it is possible to conclude that direct use of the Scherrer equation, the Williamson–Hall method and the Rietveld refinement employed by a series of software packages (EVA, PCW and TOPAS, respectively) yield very close results for crystallite sizes less than 60 nm for parallel-beam geometry and less than 100 nm for Bragg–Brentano geometry. However, we found that despite the fact that the differences between the crystallite sizes calculated by the various methods are small in absolute value, they are statistically significant in some cases. The values of crystallite size determined from XRD were compared with those obtained by imaging in transmission (TEM) and scanning electron microscopes (SEM). It was found that there was good correlation in size only for crystallites smaller than 50-60 nm. Highlights: • The crystallite sizes for 183 nanopowders were calculated using different XRD methods. • Obtained results were subject to statistical treatment. • Results obtained with Bragg–Brentano and parallel-beam geometries were compared. • The influence of XRD pattern acquisition conditions on the results was estimated. • Crystallite sizes calculated by XRD were compared with those obtained by TEM and SEM.
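As an illustration of the simplest of the three methods, the Scherrer equation D = Kλ/(β·cosθ) can be applied per peak as sketched below; the Cu Kα wavelength and shape factor K = 0.9 are assumed defaults, and instrumental broadening is assumed to be already subtracted from the peak width.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from D = K * wavelength / (beta * cos(theta)),
    with fwhm_deg the peak FWHM in degrees 2-theta."""
    beta = math.radians(fwhm_deg)              # FWHM in radians
    theta = math.radians(two_theta_deg / 2.0)  # Bragg angle
    return K * wavelength_nm / (beta * math.cos(theta))

# A hypothetical 0.25-degree-wide peak at 2-theta = 38 degrees -> ~34 nm
d_nm = scherrer_size(0.25, 38.0)
```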
Earth Observing System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Hejduk, Matthew D.
2016-01-01
The purpose of covariance realism is to properly size a primary object's covariance in order to add validity to the calculation of the probability of collision. The covariance realism technique in this paper consists of three parts: collection/calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics. An empirical cumulative distribution function (ECDF) Goodness-of-Fit (GOF) method is employed to determine whether a covariance is properly sized by comparing the empirical distribution of Mahalanobis distance calculations to the hypothesized parent 3-DoF chi-squared distribution. To realistically size a covariance for collision probability calculations, this study uses a state noise compensation algorithm that adds process noise to the definitive epoch covariance to account for uncertainty in the force model. Process noise is added until the GOF tests pass a group significance level threshold. The results of this study indicate that when outliers attributed to persistently high or extreme levels of solar activity are removed, the aforementioned covariance realism compensation method produces a tuned covariance with up to 80 to 90% of the covariance propagation timespan passing the GOF tests (against a 60% minimum passing threshold), a quite satisfactory and useful result.
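A minimal sketch of the ECDF goodness-of-fit check described above, assuming position residuals and their propagated 3×3 covariances are available (a Kolmogorov-Smirnov test stands in here for whatever ECDF GOF test the study used):

```python
import numpy as np
from scipy.stats import chi2, kstest

def covariance_realism_gof(residuals, covariances):
    """Squared Mahalanobis distances of 3-D residuals should follow a
    3-DoF chi-squared distribution if the covariance is properly sized."""
    d2 = np.array([r @ np.linalg.solve(C, r)
                   for r, C in zip(residuals, covariances)])
    stat, p = kstest(d2, chi2(df=3).cdf)
    return d2, stat, p

# Self-consistent hypothetical data: unit covariances, Gaussian residuals.
rng = np.random.default_rng(0)
covs = [np.eye(3)] * 200
res = [rng.multivariate_normal(np.zeros(3), C) for C in covs]
d2, stat, p = covariance_realism_gof(res, covs)
```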
NASA Astrophysics Data System (ADS)
Bachiller, Alejandro; Poza, Jesús; Gómez, Carlos; Molina, Vicente; Suazo, Vanessa; Hornero, Roberto
2015-02-01
Objective. The aim of this research is to explore the coupling patterns of brain dynamics during an auditory oddball task in schizophrenia (SCH). Approach. Event-related electroencephalographic (EEG) activity was recorded from 20 SCH patients and 20 healthy controls. The coupling changes between auditory response and pre-stimulus baseline were calculated in conventional EEG frequency bands (theta, alpha, beta-1, beta-2 and gamma), using three coupling measures: coherence, the phase-locking value and the Euclidean distance. Main results. Our results showed a statistically significant increase from baseline to response in theta coupling and a statistically significant decrease in beta-2 coupling in controls. No statistically significant changes were observed in SCH patients. Significance. Our findings support the aberrant salience hypothesis, since SCH patients failed to change their coupling dynamics between stimulus response and baseline when performing an auditory cognitive task. This result may reflect impaired communication among neural areas, which may be related to abnormal cognitive functions.
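Of the three coupling measures, the phase-locking value has a particularly compact definition, PLV = |⟨exp(i(φx − φy))⟩|; a sketch (assuming the signals have already been band-pass filtered into the band of interest):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two narrow-band signals: 0 = no phase coupling, 1 = perfect."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.exp(1j * dphi).mean())

# Two hypothetical noisy theta-band traces with a shared component:
t = np.linspace(0, 2, 512)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 6 * t + 0.4) + 0.5 * rng.standard_normal(t.size)
plv = phase_locking_value(x, y)
```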
Low power and type II errors in recent ophthalmology research.
Khan, Zainab; Milko, Jordan; Iqbal, Munir; Masri, Moness; Almeida, David R P
2016-10-01
To investigate the power of unpaired t tests in prospective, randomized controlled trials when these tests failed to detect a statistically significant difference, and to determine the frequency of type II errors. Systematic review and meta-analysis. We examined all prospective, randomized controlled trials published between 2010 and 2012 in 4 major ophthalmology journals (Archives of Ophthalmology, British Journal of Ophthalmology, Ophthalmology, and American Journal of Ophthalmology). Studies that used unpaired t tests were included. Power was calculated using the number of subjects in each group, the standard deviations, and α = 0.05. The difference between control and experimental means was set to be (1) 20% and (2) 50% of the absolute value of the control's initial conditions. Power and Precision version 4.0 software was used to carry out the calculations. Finally, the proportion of articles with type II errors was calculated; β = 0.3 was set as the largest acceptable value for the probability of a type II error. In total, 280 articles were screened. The final analysis included 50 prospective, randomized controlled trials using unpaired t tests. The median power of tests to detect a 50% difference between means was 0.9 and was the same for all 4 journals regardless of the statistical significance of the test. The median power of tests to detect a 20% difference between means ranged from 0.26 to 0.9 for the 4 journals. The median power of these tests to detect a 50% and 20% difference between means was 0.9 and 0.5, respectively, for tests that did not achieve statistical significance. A total of 14% and 57% of articles with negative unpaired t tests contained results with β > 0.3 when power was calculated for differences between means of 50% and 20%, respectively. A large portion of studies demonstrate high probabilities of type II errors when detecting small differences between means. The power to detect small differences between means varies across journals. It is, therefore, worthwhile for authors to mention the minimum clinically important difference for individual studies, and journals can consider publishing statistical guidelines for authors to use. Day-to-day clinical decisions rely heavily on the evidence base formed by the plethora of studies available to clinicians. Prospective, randomized controlled clinical trials are highly regarded as a robust study design and are used to make important clinical decisions that directly affect patient care. The quality of study designs and statistical methods in major clinical journals is improving over time [1], and researchers and journals are paying more attention to the statistical methodologies incorporated by studies. The results of well-designed ophthalmic studies with robust methodologies, therefore, have the ability to modify the ways in which diseases are managed. Copyright © 2016 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.
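The post-hoc power computation described above can be reproduced in outline as follows; the numbers are hypothetical, and statsmodels stands in for the Power and Precision software the authors used.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical trial: 30 subjects per arm, SD = 10, control mean = 25.
# A 20% difference between means is delta = 5, i.e. effect size d = 0.5.
power = TTestIndPower().power(effect_size=5.0 / 10.0, nobs1=30,
                              alpha=0.05, ratio=1.0)
beta = 1 - power  # beta > 0.3 would count as an unacceptable type II risk here
```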
Nasrollah, Jabbari; Mikaeil, Molazadeh; Omid, Esnaashari; Mojtaba, Seyed Siahi; Ahad, Zeinali
2014-01-01
The impact of intravenous (IV) contrast media (CM) on radiation dose calculations must be taken into account in treatment planning. The aim of this study is to evaluate the effect of intravenous contrast media on dose calculations in three-dimensional conformal radiation therapy (3D-CRT) for lower esophageal and rectal cancers. Seventeen patients with lower esophageal tumors and 12 patients with rectal cancers were analyzed. At the outset, all patients were planned for 3D-CRT based on computed tomography (CT) scans with IV contrast media. Subsequently, all the plans were copied onto the scans without intravenous CM. The radiation doses calculated from the two sets of CTs were compared. The dose differences between the planning image set using intravenous contrast and the image set without contrast showed an average increase in Monitor Units (MUs) in the lower esophageal region of 1.28% and 0.75% for 6 and 15 MV photon beams, respectively. There was no statistically significant difference in the rectal region between the two sets of scans in the 3D-CRT plans. The results showed that the dose differences between the plans for the CT scans with and without CM were small and clinically tolerable. However, the differences in the lower esophageal region were statistically significant.
Quantitative EEG analysis of the maturational changes associated with childhood absence epilepsy
NASA Astrophysics Data System (ADS)
Rosso, O. A.; Hyslop, W.; Gerlach, R.; Smith, R. L. L.; Rostas, J. A. P.; Hunter, M.
2005-10-01
This study aimed to examine the background electroencephalography (EEG) in children with childhood absence epilepsy, a condition whose presentation has strong developmental links. EEG hallmarks of absence seizure activity are widely accepted and there is recognition that the bulk of inter-ictal EEG in this group is normal to the naked eye. This multidisciplinary study aimed to use the normalized total wavelet entropy (NTWS) (Signal Processing 83 (2003) 1275) to examine the background EEG of those patients demonstrating absence seizure activity, and compare it with children without absence epilepsy. This calculation can be used to define the degree of order in a system, with higher levels of entropy indicating a more disordered (chaotic) system. Results were subjected to further statistical analyses of significance. Entropy values were calculated for patients versus controls. For all channels combined, patients with absence epilepsy showed (statistically significant) lower entropy values than controls. The size of the difference in entropy values was not uniform, with certain EEG electrodes consistently showing greater differences than others.
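A sketch of the normalized total wavelet entropy computation (the wavelet family and decomposition level are illustrative choices, not necessarily those of the study):

```python
import numpy as np
import pywt

def normalized_total_wavelet_entropy(signal, wavelet="db4", level=5):
    """Shannon entropy of the relative wavelet energies p_j, divided by
    ln(N) so that 0 <= NTWS <= 1; higher = more disordered signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()   # relative wavelet energy per band
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(len(energies)))

# Hypothetical EEG epoch:
ntws = normalized_total_wavelet_entropy(
    np.random.default_rng(0).standard_normal(1024))
```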
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Statistics. 1065.602 Section 1065.602... PROCEDURES Calculations and Data Requirements § 1065.602 Statistics. (a) Overview. This section contains equations and example calculations for statistics that are specified in this part. In this section we use...
Calculation of streamflow statistics for Ontario and the Great Lakes states
Piggott, Andrew R.; Neff, Brian P.
2005-01-01
Basic, flow-duration, and n-day frequency statistics were calculated for 779 current and historical streamflow gages in Ontario and 3,157 streamflow gages in the Great Lakes states with length-of-record daily mean streamflow data ending on December 31, 2000 and September 30, 2001, respectively. The statistics were determined using the U.S. Geological Survey’s SWSTAT and IOWDM, ANNIE, and LIBANNE software and Linux shell and PERL programming that enabled the mass processing of the data and calculation of the statistics. Verification exercises were performed to assess the accuracy of the processing and calculations. The statistics and descriptions, longitudes and latitudes, and drainage areas for each of the streamflow gages are summarized in ASCII text files and ESRI shapefiles.
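As an illustration of one statistic family, flow-duration percentiles can be computed from a daily record as sketched below (Weibull plotting positions are an assumed convention; the USGS software cited above implements its own procedures).

```python
import numpy as np

def flow_duration(daily_q, percents=(10, 50, 90)):
    """Discharge equaled or exceeded p% of the time, from daily mean flows."""
    q = np.sort(np.asarray(daily_q, float))[::-1]       # descending flows
    exceed = np.arange(1, q.size + 1) / (q.size + 1.0)  # Weibull positions
    return {p: float(np.interp(p / 100.0, exceed, q)) for p in percents}

# Hypothetical record: Q10, Q50, Q90 of a log-normal daily flow series.
fd = flow_duration(np.random.default_rng(0).lognormal(3.0, 0.8, 3650))
```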
The potential of composite cognitive scores for tracking progression in Huntington's disease.
Jones, Rebecca; Stout, Julie C; Labuschagne, Izelle; Say, Miranda; Justo, Damian; Coleman, Allison; Dumas, Eve M; Hart, Ellen; Owen, Gail; Durr, Alexandra; Leavitt, Blair R; Roos, Raymund; O'Regan, Alison; Langbehn, Doug; Tabrizi, Sarah J; Frost, Chris
2014-01-01
Composite scores derived from joint statistical modelling of individual risk factors are widely used to identify individuals who are at increased risk of developing disease or of faster disease progression. We investigated the ability of composite measures developed using statistical models to differentiate progressive cognitive deterioration in Huntington's disease (HD) from natural decline in healthy controls. Using longitudinal data from TRACK-HD, the optimal combinations of quantitative cognitive measures to differentiate premanifest and early stage HD individuals respectively from controls was determined using logistic regression. Composite scores were calculated from the parameters of each statistical model. Linear regression models were used to calculate effect sizes (ES) quantifying the difference in longitudinal change over 24 months between premanifest and early stage HD groups respectively and controls. ES for the composites were compared with ES for individual cognitive outcomes and other measures used in HD research. The 0.632 bootstrap was used to eliminate biases which result from developing and testing models in the same sample. In early HD, the composite score from the HD change prediction model produced an ES for difference in rate of 24-month change relative to controls of 1.14 (95% CI: 0.90 to 1.39), larger than the ES for any individual cognitive outcome and UHDRS Total Motor Score and Total Functional Capacity. In addition, this composite gave a statistically significant difference in rate of change in premanifest HD compared to controls over 24-months (ES: 0.24; 95% CI: 0.04 to 0.44), even though none of the individual cognitive outcomes produced statistically significant ES over this period. Composite scores developed using appropriate statistical modelling techniques have the potential to materially reduce required sample sizes for randomised controlled trials.
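The 0.632 bootstrap mentioned above blends the optimistic apparent error with the pessimistic out-of-bag error; a sketch for a logistic-regression composite (scikit-learn and the error-rate outcome are illustrative stand-ins for the paper's exact setup):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bootstrap_632_error(X, y, n_boot=200, seed=None):
    """err_632 = 0.368 * apparent error + 0.632 * out-of-bag error."""
    rng = np.random.default_rng(seed)
    n = len(y)
    apparent = np.mean(LogisticRegression().fit(X, y).predict(X) != y)
    oob_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)            # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)  # left-out subjects
        if oob.size:
            m = LogisticRegression().fit(X[idx], y[idx])
            oob_errs.append(np.mean(m.predict(X[oob]) != y[oob]))
    return 0.368 * apparent + 0.632 * float(np.mean(oob_errs))

# Hypothetical cognitive scores (100 subjects, 4 measures) and group labels:
X = np.random.default_rng(0).normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
err632 = bootstrap_632_error(X, y)
```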
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
Mars, Mokhtar; Bouaziz, Mouna; Tbini, Zeineb; Ladeb, Fethi; Gharbi, Souha
2018-06-12
This study aims to determine how Magnetic Resonance Imaging (MRI) acquisition techniques and calculation methods affect T2 values of knee cartilage at 1.5 Tesla and to identify sequences that can be used for high-resolution T2 mapping in short scanning times. This study was performed on a phantom and twenty-nine patients who underwent MRI of the knee joint at 1.5 Tesla. The protocol includes T2 mapping sequences based on Single Echo Spin Echo (SESE), Multi-Echo Spin Echo (MESE), Fast Spin Echo (FSE) and Turbo Gradient Spin Echo (TGSE). The T2 relaxation times were quantified and evaluated using three calculation methods (MapIt, Syngo Offline and monoexponential fit). Signal to Noise Ratios (SNR) were measured for all sequences. All statistical analyses were performed using the t-test. The average T2 values in the phantom were 41.7 ± 13.8 ms for SESE, 43.2 ± 14.4 ms for MESE, 42.4 ± 14.1 ms for FSE and 44 ± 14.5 ms for TGSE. In the patient study, the mean differences were 6.5 ± 8.2 ms, 7.8 ± 7.6 ms and 8.4 ± 14.2 ms for MESE, FSE and TGSE compared to SESE, respectively; these differences were not statistically significant (p > 0.05). The comparison between the three calculation methods showed no significant difference (p > 0.05). The t-test showed no significant difference between SNR values for all sequences. T2 values depend not only on the sequence type but also on the calculation method. None of the sequences revealed significant differences compared to the SESE reference sequence. TGSE, with its short scanning time, can be used for high-resolution T2 mapping. © 2018 The Author(s). Published by S. Karger AG, Basel.
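The monoexponential calculation method referred to above fits S(TE) = S0·exp(−TE/T2) to the multi-echo signal; a sketch with hypothetical echo data:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_t2(te_ms, signal):
    """Monoexponential fit S(TE) = S0 * exp(-TE / T2); returns T2 in ms."""
    model = lambda te, s0, t2: s0 * np.exp(-te / t2)
    (s0, t2), _ = curve_fit(model, te_ms, signal, p0=(float(signal[0]), 40.0))
    return t2

# Hypothetical six-echo signal for cartilage with T2 ~ 42 ms:
te = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
sig = 1000.0 * np.exp(-te / 42.0)
t2 = fit_t2(te, sig)
```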
Applied statistics in ecology: common pitfalls and simple solutions
E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick
2013-01-01
The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...
Study of Left Ventricular Mass and Its Determinants on Echocardiography.
Guleri, Namrata; Rana, Susheela; Chauhan, Randhir S; Negi, Prakash Chand; Diwan, Yogesh; Diwan, Deepa
2017-09-01
Increased Left Ventricular Mass (LVM) is an independent risk factor for cardiovascular morbidity and mortality. This study was done to find the prevalence and determinants of LVM in the Northern Indian population. A prospective cross-sectional observational study was carried out in a tertiary care centre in Himachal Pradesh, India; the study population included all consecutive patients fulfilling the inclusion criteria who attended the cardiology OPD over a period of one year seeking medical attention for various symptoms, with dyslipidaemia or hypertension but not on medication. Focused history was taken; physical examination and investigations were done. Data collected were analysed using Epi-info software version 3.5.1. We calculated means of the LVM index for categorical variables, i.e., sex, tobacco consumption, alcohol consumption and sedentary lifestyle, and also calculated p-values as tests of significance for the mean difference across the exposure variable groups. The Pearson correlation coefficient was calculated, and two-tailed significance at p < 0.05 was taken as statistically significant. Mean age of the study population was 42.30 ± 9.8 years and 62.9% were males. The mean LVM index was significantly higher in men than in women, 77.7 ± 11.4 vs. 71.3 ± 15.7 (p-value < 0.01). A strong positive correlation was observed between increased waist-hip ratio and increased Left Ventricular Mass Index (LVMI). The Pearson correlation coefficient was 36.77 and it was statistically significant with p-value 0.04. We found a positive and independent correlation of increased LVMI with increased Waist Hip Ratio (WHR). A positive independent correlation was also observed with higher fasting blood sugar levels.
Comparison of Histograms for Use in Cloud Observation and Modeling
NASA Technical Reports Server (NTRS)
Green, Lisa; Xu, Kuan-Man
2005-01-01
Cloud observation and cloud modeling data can be presented in histograms for each characteristic to be measured. Combining information from single-cloud histograms yields a summary histogram. Summary histograms can be compared to each other to reach conclusions about the behavior of an ensemble of clouds in different places at different times or about the accuracy of a particular cloud model. As in any scientific comparison, it is necessary to decide whether any apparent differences are statistically significant. The usual methods of deciding statistical significance when comparing histograms do not apply in this case because they assume independent data. Thus, a new method is necessary. The proposed method uses the Euclidean distance metric and bootstrapping to calculate the significance level.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duggar, William Neil, E-mail: wduggar@umc.edu; Nguyen, Alex; Stanford, Jason
This study demonstrates the importance of, and a method for, properly modeling the treatment couch for dose calculation in patient treatment using arc therapy. The 2 treatment couch tops—Aktina AK550 and Elekta iBEAM evo—of Elekta LINACs were scanned using a Philips Brilliance Big Bore CT Simulator. Various parts of the couch tops were contoured, and their densities were measured and recorded on the Pinnacle treatment planning system (TPS) using the established computed tomography density table. These contours were saved as organ models to be placed beneath the patient during planning. Relative attenuation measurements were performed following procedures outlined by TG-176, as well as absolute dose comparisons of static 10 × 10 cm² fields delivered through the couch tops with those calculated in the TPS with the couch models. A total of 10 random arc therapy treatment plans (5 volumetric-modulated arc therapy [VMAT] and 5 stereotactic body radiation therapy [SBRT]), using 24 beams, were selected for this study. All selected plans were calculated with and without couch modeling. Each beam was evaluated using the Delta4 dosimetry system. The Student t-test was used to determine statistical significance. Independent reviews were exploited as per the Imaging and Radiation Oncology Core head and neck credentialing phantom. The selected plans were calculated on the actual patient anatomies with and without couch modeling to determine potential clinical effects. Large relative beam attenuations were noted depending on which part of the couch top beams were passing through. Substantial improvements were also noted for static fields, both calculated with the TPS and delivered physically, when the couch models were included in the calculation. A statistically significant increase in agreement was noted for dose difference, distance to agreement, and γ-analysis with the Delta4 on VMAT and SBRT plans. A credentialing review showed improvement in treatment delivery after couch modeling with both thermoluminescent dosimeter doses and film analysis. Furthermore, analysis of treatment plans with and without the couch model showed a statistically significant reduction in planning target volume coverage and an increase in skin dose. In conclusion, ignoring the treatment couch, a common practice when generating a patient treatment plan, can overestimate the dose delivered, especially for arc therapy. This work shows that explicitly modeling the couch during planning can meaningfully improve the agreement between calculated and measured dose distributions. As a result of this project, we have implemented the couch models clinically across all treatment plans.
Environmentally safe areas and routes in the Baltic proper using Eulerian tracers.
Höglund, A; Meier, H E M
2012-07-01
In recent years, the shipping of environmentally hazardous cargo has increased considerably in the Baltic proper. In this study, a large number of hypothetical oil spills with an idealized, passive tracer are simulated. From the tracer distributions, statistical measures are calculated to optimize the quantity of tracer from a spill that would stay at sea as long as possible. Increased time may permit action to be taken against the spill before the oil reaches environmentally vulnerable coastal zones. The statistical measures are used to calculate maritime routes with maximum probability that an oil spill will stay at sea as long as possible. Under these assumptions, ships should follow routes that are located south of Bornholm instead of the northern routes in use currently. Our results suggest that the location of the optimal maritime routes depends on the season, although interannual variability is too large to identify statistically significant changes. Copyright © 2012. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Appleby, Stephen; Chingangbam, Pravabati; Park, Changbom; Hong, Sungwook E.; Kim, Juhan; Ganesan, Vidhya
2018-05-01
We apply the Minkowski tensor statistics to two-dimensional slices of the three-dimensional matter density field. The Minkowski tensors are a set of functions that are sensitive to directionally dependent signals in the data and, furthermore, can be used to quantify the mean shape of density fields. We begin by reviewing the definition of Minkowski tensors and introducing a method of calculating them from a discretely sampled field. Focusing on the statistic W_2^{1,1} (a 2 × 2 matrix), we calculate its value for both the entire excursion set and individual connected regions and holes within the set. To study the morphology of structures within the excursion set, we calculate the eigenvalues λ_1, λ_2 of the matrix W_2^{1,1} for each distinct connected region and hole and measure their mean shape using the ratio β ≡ ⟨λ_2/λ_1⟩. We compare both W_2^{1,1} and β for a Gaussian field and a smoothed density field generated from the latest Horizon Run 4 cosmological simulation to study the effect of gravitational collapse on these functions. The global statistic W_2^{1,1} is essentially independent of gravitational collapse, as the process maintains statistical isotropy. However, β is modified significantly, with overdensities becoming relatively more circular compared to underdensities at low redshifts. When applying the statistics to a redshift-space distorted density field, the matrix W_2^{1,1} is no longer proportional to the identity matrix, and measurements of its diagonal elements can be used to probe the large-scale velocity field.
Peppa, V; Pappas, E P; Karaiskos, P; Major, T; Polgár, C; Papagiannis, P
2016-10-01
To investigate the clinical significance of introducing model based dose calculation algorithms (MBDCAs) as an alternative to TG-43 in ¹⁹²Ir interstitial breast brachytherapy. A 57 patient cohort was used in a retrospective comparison between TG-43 based dosimetry data exported from a treatment planning system and Monte Carlo (MC) dosimetry performed using MCNP v. 6.1 with plan and anatomy information in DICOM-RT format. Comparison was performed for the target, ipsilateral lung, heart, skin, breast and ribs, using dose distributions, dose-volume histograms (DVH) and plan quality indices clinically used for plan evaluation, as well as radiobiological parameters. TG-43 overestimation of target DVH parameters is statistically significant but small (less than 2% for the target coverage indices and 4% for homogeneity indices, on average). Significant dose differences (>5%) were observed close to the skin and at relatively large distances from the implant leading to a TG-43 dose overestimation for the organs at risk. These differences correspond to low dose regions (<50% of the prescribed dose), being less than 2% of the prescribed dose. Detected dosimetric differences did not induce clinically significant differences in calculated tumor control probabilities (mean absolute difference <0.2%) and normal tissue complication probabilities. While TG-43 shows a statistically significant overestimation of most indices used for plan evaluation, differences are small and therefore not clinically significant. Improved MBDCA dosimetry could be important for re-irradiation, technique inter-comparison and/or the assessment of secondary cancer induction risk, where accurate dosimetry in the whole patient anatomy is of the essence. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Image registration with uncertainty analysis
Simonson, Katherine M [Cedar Crest, NM
2011-03-22
In an image registration method, edges are detected in a first image and a second image. For each candidate translation, the percentage of edge pixels in a subset of the second image that are also edges in the translated first image is calculated. A best registration point is calculated based on the maximum percentage of edges matched. Within a predefined search region, all registration points other than the best registration point are then identified that are not significantly worse than the best registration point according to a predetermined statistical criterion.
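As a rough illustration of the matching step (not the patented method itself; the edge detector, search window, and the decision to ignore wraparound at image borders are all simplifying assumptions):

```python
import numpy as np
from scipy import ndimage

def edge_map(image, sigma=1.0, frac=0.1):
    """Binary edge map via gradient magnitude (a stand-in for any edge detector)."""
    g = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma=sigma)
    return g > frac * g.max()

def best_registration(img1, img2, search=10):
    """Exhaustive search over integer translations of img1's edges; returns the
    shift maximizing the percentage of img2 edge pixels matched, plus all scores
    (from which 'not significantly worse' candidates could then be selected)."""
    e1, e2 = edge_map(img1), edge_map(img2)
    n2 = max(int(e2.sum()), 1)
    scores = {}
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(e1, (dy, dx), axis=(0, 1))  # wraparound ignored for brevity
            scores[(dy, dx)] = np.logical_and(e2, shifted).sum() / n2
    best = max(scores, key=scores.get)
    return best, scores[best], scores
```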
DICOM organ dose does not accurately represent calculated dose in mammography
NASA Astrophysics Data System (ADS)
Suleiman, Moayyad E.; Brennan, Patrick C.; McEntee, Mark F.
2016-03-01
This study aims to analyze the agreement between the mean glandular dose estimated by the mammography unit (organ dose) and the mean glandular dose calculated using the published method of Dance et al. (calculated dose). Anonymised digital mammograms from 50 BreastScreen NSW centers were downloaded, and the exposure information required for the calculation of dose was extracted from the DICOM header along with the organ dose estimated by the system. Data from annual quality assurance tests for the included centers were collected and used to calculate the mean glandular dose for each mammogram. Bland-Altman analysis and a two-tailed paired t-test were used to study the agreement between calculated and organ dose and the significance of any differences. A total of 27,869 dose points from 40 centers were included in the study; mean calculated dose and mean organ dose (± standard deviation) were 1.47 (±0.66) and 1.38 (±0.56) mGy, respectively. Bland-Altman analysis showed a statistically significant 0.09 mGy bias (t = 69.25; p<0.0001) with 95% limits of agreement between calculated and organ doses ranging from −0.34 to 0.52 mGy, which indicates a small yet highly significant difference between the two means. The use of organ dose for dose audits risks over- or underestimating the calculated dose; hence, further work is needed to identify the causal agents of the differences between organ and calculated doses and to generate a correction factor for organ dose.
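The two analyses named here are easy to reproduce. A minimal sketch, assuming paired arrays of calculated and organ doses in mGy:

```python
import numpy as np
from scipy import stats

def bland_altman(calculated, organ):
    """Bias and 95% limits of agreement between paired dose estimates (mGy),
    plus the two-tailed paired t-test used in the study."""
    calculated = np.asarray(calculated, dtype=float)
    organ = np.asarray(organ, dtype=float)
    diff = calculated - organ
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 1.96 * sd, bias + 1.96 * sd)
    t, p = stats.ttest_rel(calculated, organ)
    return bias, limits, t, p
```

With the study's figures (bias 0.09 mGy, implying an SD of differences of about 0.22 mGy), the reported limits of −0.34 to 0.52 mGy follow directly from bias ± 1.96 SD.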
Kent, Peter; Boyle, Eleanor; Keating, Jennifer L; Albert, Hanne B; Hartvigsen, Jan
2017-02-01
To quantify variability in the results of statistical analyses based on contingency tables and discuss the implications for the choice of sample size for studies that derive clinical prediction rules. An analysis of three pre-existing sets of large cohort data (n = 4,062-8,674) was performed. In each data set, repeated random sampling of various sample sizes, from n = 100 up to n = 2,000, was performed 100 times at each sample size and the variability in estimates of sensitivity, specificity, positive and negative likelihood ratios, posttest probabilities, odds ratios, and risk/prevalence ratios for each sample size was calculated. There were very wide, and statistically significant, differences in estimates derived from contingency tables from the same data set when calculated in sample sizes below 400 people, and typically, this variability stabilized in samples of 400-600 people. Although estimates of prevalence also varied significantly in samples below 600 people, that relationship only explains a small component of the variability in these statistical parameters. To reduce sample-specific variability, contingency tables should consist of 400 participants or more when used to derive clinical prediction rules or test their performance. Copyright © 2016 Elsevier Inc. All rights reserved.
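A sketch of the quantities involved, assuming each subject is coded as an (outcome, test) pair of 0/1 values; the repeated-subsampling loop mirrors the study's design of drawing many samples at each fixed size and watching the estimates vary (no guards against zero cells, which is itself part of the small-sample problem):

```python
import numpy as np

def contingency_stats(tp, fp, fn, tn):
    """Statistics recomputed at each sample size in the study."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    prevalence = (tp + fn) / (tp + fp + fn + tn)
    pretest_odds = prevalence / (1 - prevalence)
    posttest_prob = pretest_odds * lr_pos / (1 + pretest_odds * lr_pos)
    return dict(sensitivity=sens, specificity=spec, lr_pos=lr_pos, lr_neg=lr_neg,
                odds_ratio=(tp * tn) / (fp * fn),
                risk_ratio=(tp / (tp + fp)) / (fn / (fn + tn)),
                posttest_probability=posttest_prob)

def sampling_variability(population, n, n_draws=100, seed=None):
    """Draw n_draws random samples of size n (rows are (outcome, test) pairs
    coded 0/1) and recompute the statistics each time."""
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_draws):
        s = population[rng.choice(len(population), size=n, replace=False)]
        tp = int(np.sum((s[:, 0] == 1) & (s[:, 1] == 1)))
        fp = int(np.sum((s[:, 0] == 0) & (s[:, 1] == 1)))
        fn = int(np.sum((s[:, 0] == 1) & (s[:, 1] == 0)))
        tn = int(np.sum((s[:, 0] == 0) & (s[:, 1] == 0)))
        out.append(contingency_stats(tp, fp, fn, tn))
    return out
```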
Bao, Ying; Wang, Dejun; Du, Zhenlan; Liu, Shuhen
2014-07-01
To determine the predictive significance of heart rate variability (HRV) and heart rate turbulence (HRT) for premature beats in patients with coal-worker's pneumoconiosis (CWP). 100 CWP patients with premature beats (44 cases of occasional ventricular premature contraction and 56 cases of frequent ventricular premature contraction) were chosen as the CWP group, and 50 healthy coal workers were chosen as the control group. 24-h dynamic electrocardiography (DCG) was used to monitor and analyze the premature beats and to calculate the HRV indexes (SDNN, SDANN, HF, LF) and HRT indexes (TO, TS); HRV was compared between the CWP and control groups, and HRT changes were compared between occasional and frequent ventricular premature contraction. Premature beats occurred more often at night (66.1%, 37 cases) than during the daytime (33.9%, 19 cases), a statistically significant difference (P < 0.05). The HRV indexes (SDNN, SDANN, HF, LF) of the CWP group were lower than those of the control group, a statistically significant difference (P < 0.05). The HRV indexes of the control group were significantly higher at night than during the daytime (P < 0.05), whereas the day-night difference within the CWP group was not statistically significant (P > 0.05). Compared with the control group, TO of the CWP group was higher while TS was lower, a statistically significant difference (P < 0.05). Compared with occasional ventricular premature contraction patients in the CWP group, TO of frequent ventricular premature contraction patients was higher while TS was lower, a statistically significant difference (P < 0.05). Patients with frequent ventricular premature contraction in the CWP group suffer severe impairment of autonomic nervous function, and abnormal HRV and HRT can be prognostic indicators of frequent ventricular premature contraction among coal-worker's pneumoconiosis patients.
Effect-Size Measures and Meta-Analytic Thinking in Counseling Psychology Research
ERIC Educational Resources Information Center
Henson, Robin K.
2006-01-01
Effect sizes are critical to result interpretation and synthesis across studies. Although statistical significance testing has historically dominated the determination of result importance, modern views emphasize the role of effect sizes and confidence intervals. This article accessibly discusses how to calculate and interpret the effect sizes…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oliver, J; Budzevich, M; Moros, E
Purpose: To investigate the relationship between quantitative image features (i.e., radiomics) and statistical fluctuations (i.e., electronic noise) in clinical computed tomography (CT), using the standardized American College of Radiology (ACR) CT accreditation phantom and patient images. Methods: Three levels of uncorrelated Gaussian noise were added to CT images of the phantom and of 20 patients, acquired in static mode and respiratory tracking mode. We calculated the noise-power spectrum (NPS) of the original CT images of the phantom, and of the phantom images with added Gaussian noise with means of 50, 80, and 120 HU. Concurrently, on the patient images (original and noise-added), image features were calculated: 14 shape, 19 intensity (1st-order statistics from intensity volume histograms), 18 GLCM features (2nd-order statistics from grey-level co-occurrence matrices) and 11 RLM features (2nd-order statistics from run-length matrices). These features capture the underlying structural information of the images. The GLCM (size 128 × 128) was calculated with a step size of 1 voxel in 13 directions and averaged. RLM feature calculation was performed in 13 directions with grey levels binned into 128 levels. Results: Adding the electronic noise to the images modified the quality of the NPS, shifting the noise from mostly correlated to mostly uncorrelated voxels. The dramatic increase in noise texture did not significantly affect image structure/contours for patient images. However, it did affect the image features and textures significantly, as demonstrated by GLCM differences. Conclusion: Image features are sensitive to acquisition factors (simulated by adding uncorrelated Gaussian noise). We speculate that image features will be more difficult to detect in the presence of electronic noise (an uncorrelated noise contributor) or, for that matter, any other highly correlated image noise. This work focuses on the effect of electronic, uncorrelated noise; future work shall examine the influence of changes in quantum noise on the features. J. Oliver was supported by NSF FGLSAMP BD award HRD #1139850 and the McKnight Doctoral Fellowship.
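As an illustration of the second-order texture calculation described here, a 2-D analogue using scikit-image (the study's 3-D, 13-direction version is analogous; the quantization scheme and noise levels below are assumptions):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(image_2d, levels=128):
    """Quantize to `levels` grey levels, build co-occurrence matrices at step
    size 1 in four directions (13 in 3-D), average over directions, and
    extract 2nd-order statistics."""
    lo, hi = image_2d.min(), image_2d.max()
    q = np.clip((image_2d - lo) / (hi - lo) * (levels - 1), 0, levels - 1)
    q = q.astype(np.uint8)
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(q, distances=[1], angles=angles,
                        levels=levels, symmetric=True, normed=True)
    glcm = glcm.mean(axis=3, keepdims=True)  # average over the directions
    return {prop: float(graycoprops(glcm, prop)[0, 0])
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

# Noise-sensitivity check, mirroring the study: add Gaussian noise and compare.
img = np.random.default_rng(0).normal(0, 20, (64, 64)) + 100
print(glcm_features(img))
print(glcm_features(img + np.random.default_rng(1).normal(0, 50, img.shape)))
```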
Crans, Gerald G; Shuster, Jonathan J
2008-08-15
The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, i.e. Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses; hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes n up to 125 per group at the two-sided significance level α = 0.05. Additionally, this numerical method is used to define new significance levels α* = α + ε, where ε is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified α (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example is presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using α* instead of α) in the two-sample comparative binomial trial. Copyright © 2008 John Wiley & Sons, Ltd.
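The size calculation itself is straightforward to sketch, if slow for large n: enumerate the rejection region of the two-sided test, then search over the common null success probability for the supremum of the rejection probability. A minimal Python version (the grid resolution and the use of SciPy's fisher_exact are my choices, not the paper's algorithm):

```python
import numpy as np
from scipy import stats

def fet_size(n, alpha=0.05, grid=np.linspace(0.001, 0.999, 199)):
    """Actual size (attained significance level) of the two-sided Fisher
    exact test for two groups of n subjects each. O(n^2) FET calls, so
    this is practical only for modest n."""
    # Rejection region: all (x, y) outcomes with FET p-value <= alpha.
    reject = np.zeros((n + 1, n + 1), dtype=bool)
    for x in range(n + 1):
        for y in range(n + 1):
            _, p = stats.fisher_exact([[x, n - x], [y, n - y]])
            reject[x, y] = p <= alpha
    # Size = sup over the common null success probability p of P(reject | p);
    # the grid search approximates the supremum.
    def rejection_prob(p):
        pmf = stats.binom.pmf(np.arange(n + 1), n, p)
        return float((pmf[:, None] * pmf[None, :])[reject].sum())
    return max(rejection_prob(p) for p in grid)

print(fet_size(20))  # well below 0.05, illustrating the conservativeness
```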
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhen, X; Chen, H; Liao, Y
Purpose: To study the feasibility of employing deformable registration methods for accurate calculation of rectum dose-volume parameters, and their potential in revealing rectum dose-toxicity relationships between complication and non-complication cervical cancer patients treated with brachytherapy. Method and Materials: Data from 60 patients treated with BT, including planning images, treatment plans, and follow-up clinical exams, were retrospectively collected. Among them, 12 patients who complained of hematochezia were further examined with colonoscopy and scored as Grade 1-3 complication (CP) cases. Meanwhile, another 12 non-complication (NCP) patients were selected as a reference group. To seek potential gains in rectum toxicity prediction when fractional anatomical deformations are accounted for, the rectum dose-volume parameters D0.1/1/2cc of the selected patients were retrospectively computed by three different approaches: the simple "worst-case scenario" (WS) addition method, an intensity-based deformable image registration (DIR) algorithm (Demons), and a more accurate, recently developed local-topology-preserving non-rigid point matching algorithm (TOP). Statistical significance of the differences between rectum doses of the CP group and the NCP group was tested by a two-tailed t-test, and results were considered statistically significant if p < 0.05. Results: For D0.1cc, no statistically significant differences were found between the CP and NCP groups with any of the three methods. For D1cc, no dose difference was detected by the WS method; however, statistically significant differences between the two groups were observed with both Demons and TOP, and more evidently with TOP. For D2cc, differences between the CP and NCP groups were statistically significant for all three methods, but more pronounced with TOP. Conclusion: In this study, we calculated the rectum D0.1/1/2cc by simple WS addition and two DIR methods and sought gains in rectum toxicity prediction. The results favor the claim that accurate dose deformation and summation tend to be more sensitive in unveiling the dose-toxicity relationship. This work is supported in part by a grant from Varian Medical Systems Inc., the National Natural Science Foundation of China (nos. 81428019 and 81301940), the Guangdong Natural Science Foundation (2015A030313302) and the 2015 Pearl River S&T Nova Program of Guangzhou (201506010096).
Atan, Doğan; İkincioğulları, Aykut; Köseoğlu, Sabri; Özcan, Kürşat Murat; Çetin, Mehmet Ali; Ensari, Serdar; Dere, Hüseyin
2015-01-01
Background: Bell's palsy is the most frequent cause of unilateral facial paralysis. Inflammation is thought to play an important role in its pathogenesis. Aims: The neutrophil-to-lymphocyte ratio (NLR) and the platelet-to-lymphocyte ratio (PLR) are simple and inexpensive tests that are indicative of inflammation and can be calculated by any physician. The aim of this study was to reveal correlations of Bell's palsy and the degree of paralysis with NLR and PLR. Study Design: Case-control study. Methods: This retrospective study was performed between January 2010 and December 2013. Ninety-nine patients diagnosed with Bell's palsy were included in the Bell's palsy group, and ninety-nine healthy individuals with the same demographic characteristics were included in the control group. NLR and PLR were calculated for all participants. Results: The mean NLR was 4.37 in the Bell's palsy group and 1.89 in the control group, a statistically significant difference (p<0.001). The mean PLR was 137.5 in the Bell's palsy group and 113.75 in the control group, a statistically significant difference (p=0.008). No statistically significant relation was detected between the degree of facial paralysis and NLR or PLR. Conclusion: The NLR and the PLR were significantly higher in patients with Bell's palsy. This is the first study to reveal a relation between Bell's palsy and PLR. NLR and PLR can be used as auxiliary parameters in the diagnosis of Bell's palsy. PMID:26167340
Liu, Zhangshun; Liu, Jie; Shi, Xiaohong; Wang, Lihong; Yang, Yan; Tao, Minfang; Fu, Qiang
2017-09-01
The aim of this study was to compare calculated free testosterone (cFT) and total testosterone (T) in predicting late-onset hypogonadism (LOH) in middle-aged and elderly males. We surveyed a random sample of 608 males between the ages of 45 and 87 years from Shanghai, China. The Aging Male Symptoms (AMS) questionnaire and the Androgen Deficiency in Aging Male (ADAM) questionnaire were completed by the subjects. Testosterone (T), sex hormone-binding globulin (SHBG), albumin, and other blood biochemical indexes were measured in 332 males. The corresponding cFT was obtained using the Vermeulen formula, and the correlations between T and cFT were analyzed with SPSS statistical software. Among the 332 males who underwent biochemical evaluation, 289 (87.0%) screened positive on the ADAM questionnaire and 232 (69.9%) on the AMS questionnaire. As suggested by linear regression, cFT exhibited a negative correlation with age in both the ADAM+ and AMS+ groups, whereas T did not show a significant correlation with age. In addition, there were statistically significant differences in cFT (P<.001) between AMS groups. Calculated free testosterone levels are more reliable than T levels for diagnosing LOH in middle-aged and elderly males. © 2016 Wiley Periodicals, Inc.
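The Vermeulen calculation solves a mass-action binding model for the free fraction, which reduces to a quadratic in the free concentration. A minimal sketch, assuming the commonly used association constants and an albumin molecular weight of about 69 kDa (verify against the original 1999 paper before any real use):

```python
import math

# Association constants as commonly cited from Vermeulen et al. (1999).
K_SHBG = 1.0e9   # L/mol, testosterone binding to SHBG
K_ALB = 3.6e4    # L/mol, testosterone binding to albumin

def calculated_free_testosterone(tt_nmol_l, shbg_nmol_l, albumin_g_l=43.0):
    """Calculated free testosterone (nmol/L) via the Vermeulen formula.

    Solves TT = FT*(1 + K_ALB*Alb) + SHBG*K_SHBG*FT/(1 + K_SHBG*FT)
    for FT, which rearranges to a*FT^2 + b*FT - TT = 0.
    """
    tt = tt_nmol_l * 1e-9        # mol/L
    shbg = shbg_nmol_l * 1e-9    # mol/L
    alb = albumin_g_l / 69000.0  # mol/L, assuming MW ~69 kDa
    n = 1.0 + K_ALB * alb
    a = K_SHBG * n
    b = n + K_SHBG * (shbg - tt)
    ft = (-b + math.sqrt(b * b + 4.0 * a * tt)) / (2.0 * a)
    return ft * 1e9              # back to nmol/L

print(calculated_free_testosterone(tt_nmol_l=15.0, shbg_nmol_l=40.0))
```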
DASS: efficient discovery and p-value calculation of substructures in unordered data.
Hollunder, Jens; Friedel, Maik; Beyer, Andreas; Workman, Christopher T; Wilhelm, Thomas
2007-01-01
Pattern identification in biological sequence data is one of the main objectives of bioinformatics research. However, few methods are available for detecting patterns (substructures) in unordered datasets. Data mining algorithms mainly developed outside the realm of bioinformatics have been adapted for that purpose, but typically do not determine the statistical significance of the identified patterns. Moreover, these algorithms do not exploit the often modular structure of biological data. We present the algorithm DASS (Discovery of All Significant Substructures), which first identifies all substructures in unordered data (DASS_Sub) in a manner that is especially efficient for modular data. In addition, DASS calculates the statistical significance of the identified substructures, either for sets with at most one element of each type (DASS_Pset) or for sets with multiple occurrences of elements (DASS_Pmset). The power and versatility of DASS is demonstrated by four examples: combinations of protein domains in multi-domain proteins, combinations of proteins in protein complexes (protein subcomplexes), combinations of transcription factor target sites in promoter regions, and evolutionarily conserved protein interaction subnetworks. The program code and additional data are available at http://www.fli-leibniz.de/tsb/DASS
Development of polytoxicomania in function of defence from psychoticism.
Nenadović, Milutin M; Sapić, Rosa
2011-01-01
The proportion of polytoxicomania in youth subpopulations has been growing steadily in recent decades, and this trend is pan-continental. Psychoticism is a psychological construct that assumes specific basic dimensions of personality disintegration and of cognitive functions. Psychoticism may, in general, underlie pathological functioning in youth and influence the patterns of thought, feeling and action that cause dysfunction. The aim of this study was to determine the distribution of the basic dimensions of psychoticism underlying young people's commitment to the abuse of psychoactive substances (PAS) as a means of reducing disturbing intrapsychic experiences or manifestations of psychotic symptoms. For the purpose of this study, two groups of respondents were formed, balanced by age, gender and family structure of origin (at least one parent alive). The study applied the DELTA-9 instrument for the assessment of cognitive disintegration, in order to establish and operationalize psychoticism. The obtained results were statistically analyzed. Among the parameters of descriptive statistics, the arithmetic mean was calculated along with measures of dispersion. A cross-tabular analysis of the tested variables was performed, and statistical significance was assessed with Pearson's χ²-test and analysis of variance. Age structure and gender were approximately equally represented in the polytoxicomaniac group and the control group; testing did not confirm a statistically significant difference (p > 0.5). Statistical analysis established that polytoxicomaniacs differed significantly from the control group of respondents in most variables of psychoticism. Testing confirmed a high statistical significance of these differences, at p < 0.001 to p < 0.01. A statistically significantly higher representation of the dimension of psychoticism was established in the polytoxicomaniac group. The presence of factors concerning common executive dysfunction was also emphasized.
Lamont, Scott; Brunero, Scott
2018-05-19
Workplace violence prevalence has attracted significant attention within the international nursing literature, but little attention has been paid to non-mental-health settings, and a lack of evaluation rigor has been identified within the review literature. This study examined the effects of a workplace violence training program on risk assessment and management practices, de-escalation skills, breakaway techniques, and confidence levels within an acute hospital setting: a quasi-experimental study of nurses using pretest-posttest measurements of educational objectives and confidence levels, with two-week follow-up, at a 440-bed metropolitan tertiary referral hospital in Sydney, Australia. Participants were nurses working in specialties identified as 'high risk' for violence, who attended a one-day workshop. The workshop evaluation comprised two validated questionnaires: the Continuing Professional Development Reaction questionnaire and the Confidence in Coping with Patient Aggression Instrument. Descriptive and inferential statistics were calculated. The paired t-test was used to assess the statistical significance of changes in clinical behaviour intention and confidence scores from pre- to post-intervention, and Cohen's d effect sizes were calculated to determine the magnitude of the significant results. Seventy-eight participants completed both pre- and post-workshop evaluation questionnaires. Statistically significant increases in behaviour-intention scores were found in fourteen of the fifteen constructs relating to the three broad workshop objectives, and in confidence ratings, with medium to large effect sizes observed for some constructs. A significant increase in overall confidence in coping with patient aggression was also found post-test, with a large effect size. Positive results were observed from the workplace violence training. Training needs to be complemented by a multi-faceted organisational approach which includes governance, quality and review processes. Copyright © 2018 Elsevier Ltd. All rights reserved.
Bowden, Peter; Beavis, Ron; Marshall, John
2009-11-02
A goodness-of-fit test may be used to assign tandem mass spectra of peptides to amino acid sequences and to directly calculate the expected probability of mis-identification. The product of the peptide expectation values directly yields the probability that the parent protein has been mis-identified. A relational database can capture the mass spectral data and the best-fit results, and permit subsequent calculations by a general statistical analysis system. The many files of the HUPO blood protein data, correlated by X!TANDEM against the proteins of ENSEMBL, were collected into a relational database. A redundant set of 247,077 proteins and peptides was correlated by X!TANDEM, and that was collapsed to a set of 34,956 peptides from 13,379 distinct proteins. About 6,875 distinct proteins were represented by only a single distinct peptide, 2,866 proteins showed 2 distinct peptides, and 3,454 proteins showed at least three distinct peptides by X!TANDEM. More than 99% of the peptides were associated with proteins that had cumulative expectation values, i.e. probability of false positive identification, of one in one hundred or less. The distribution of peptides per protein from X!TANDEM was significantly different from that expected from random assignment of peptides.
[Micro-simulation of firms' heterogeneity on pollution intensity and regional characteristics].
Zhao, Nan; Liu, Yi; Chen, Ji-Ning
2009-11-01
Within the same industrial sector, pollution intensity is heterogeneous across firms. Errors arise if a sector's average pollution intensity, calculated from the limited number of firms in the environmental statistics database, is used to represent the sector's regional economic-environmental status. Based on a production function that includes environmental depletion as an input, a micro-simulation model of firms' operational decision making is proposed, which mechanistically describes the heterogeneity of firms' pollution intensity. Taking the mechanical manufacturing sector of Deyang city in 2005 as the case, the model's parameters were estimated, and the actual COD emission intensities of the firms in the environmental statistics database were properly matched by the simulation. The model's results also show that the regional average COD emission intensity calculated from the environmental statistics firms (0.0026 t per 10,000 yuan of fixed assets; 0.0015 t per 10,000 yuan of production value) is lower than the regional average intensity calculated from all firms in the region (0.0030 t per 10,000 yuan of fixed assets; 0.0023 t per 10,000 yuan of production value). The differences among the average intensities of the six counties are significant as well. These regional characteristics of pollution intensity are attributable to the sector's inner structure (the distributions of firm scale and technology) and its spatial variation.
Peterson, Cynthia K; Saupe, Nadja; Buck, Florian; Pfirrmann, Christian W A; Zanetti, Marco; Hodler, Juerg
2010-12-01
The purpose of this study was to evaluate pain relief 20 to 30 minutes after diagnostic or therapeutic injections into the sternoclavicular joint and to compare patient outcomes based on the CT diagnosis. Informed consent was obtained from each patient; ethics approval was not required. Fifty patients who had CT-guided injections of corticosteroid and local anesthetic into their sternoclavicular joints were included in the study. Preinjection and 20- to 30-minute postinjection visual analog scale data were recorded and compared with the imaging findings agreed upon by consensus. Kappa statistics were calculated for the reliability of the imaging diagnosis. The percentage of patients improving after joint injection was calculated, and the risk ratio comparing the response of patients with osteoarthritis to those without osteoarthritis was computed. The correlation between the severity of each patient's osteoarthritis and the pain response was calculated using Spearman's correlation coefficient. Sixty-six percent of the patients reported clinically significant pain reduction between 20 and 30 minutes after injection. The proportion of patients with osteoarthritis who had a clinically significant response was 67%, compared with 64% for patients who did not have osteoarthritis; this difference was neither statistically nor clinically significant. There was no correlation between the severity of osteoarthritis and the amount of pain reduction (r = 0.03). The reliability of the imaging diagnosis was substantial. Two thirds of patients having sternoclavicular joint injections of corticosteroids and local anesthetics report clinically significant improvement regardless of the abnormalities detected on their CT images.
Kauweloa, Kevin I; Gutierrez, Alonso N; Stathakis, Sotirios; Papanikolaou, Niko; Mavroidis, Panayiotis
2016-07-01
A toolkit has been developed for calculating the 3-dimensional biological effective dose (BED) distributions in multi-phase, external beam radiotherapy treatments such as those applied in liver stereotactic body radiation therapy (SBRT) and in multi-prescription treatments. This toolkit also provides a wide range of statistical results related to dose and BED distributions. MATLAB 2010a, version 7.10 was used to create this GUI toolkit. The input data consist of the dose distribution matrices, organ contour coordinates, and treatment planning parameters from the treatment planning system (TPS). The toolkit has the capability of calculating the multi-phase BED distributions using different formulas (denoted as true and approximate). Following the calculations of the BED distributions, the dose and BED distributions can be viewed in different projections (e.g. coronal, sagittal and transverse). The different elements of this toolkit are presented and the important steps for the execution of its calculations are illustrated. The toolkit is applied on brain, head & neck and prostate cancer patients, who received primary and boost phases in order to demonstrate its capability in calculating BED distributions, as well as measuring the inaccuracy and imprecision of the approximate BED distributions. Finally, the clinical situations in which the use of the present toolkit would have a significant clinical impact are indicated. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
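The core BED arithmetic such a toolkit automates is compact. A sketch of the 'true' (per-phase) versus approximate multi-phase calculation (array shapes, fraction numbers and the alpha/beta value below are illustrative, not the toolkit's API):

```python
import numpy as np

def bed(total_dose, n_fractions, alpha_beta):
    """Voxelwise biologically effective dose for a uniformly fractionated phase:
    BED = n*d*(1 + d/(alpha/beta)) with d the per-voxel dose per fraction."""
    d = total_dose / n_fractions
    return total_dose * (1.0 + d / alpha_beta)

def multiphase_bed_true(phase_doses, phase_fractions, alpha_beta):
    """'True' multi-phase BED: per-phase BED summed voxelwise."""
    return sum(bed(D, n, alpha_beta) for D, n in zip(phase_doses, phase_fractions))

def multiphase_bed_approx(phase_doses, phase_fractions, alpha_beta):
    """Approximate BED: the summed physical dose treated as one course."""
    return bed(sum(phase_doses), sum(phase_fractions), alpha_beta)

# Primary 50 Gy/25 fx plus boost 20 Gy/5 fx, alpha/beta = 3 Gy (late effects)
D1, D2 = np.full((4, 4), 50.0), np.full((4, 4), 20.0)
print(multiphase_bed_true([D1, D2], [25, 5], 3.0)[0, 0])    # 130.0 Gy
print(multiphase_bed_approx([D1, D2], [25, 5], 3.0)[0, 0])  # ~124.4 Gy
```

The two formulas agree when all phases share the same dose per fraction and diverge otherwise, which is precisely the imprecision of the approximate distribution that the toolkit is designed to measure.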
Basic biostatistics for post-graduate students
Dakhale, Ganesh N.; Hiware, Sachin K.; Shinde, Abhijit T.; Mahatme, Mohini S.
2012-01-01
Statistical methods are important to draw valid conclusions from the obtained data. This article provides background information related to fundamental methods and techniques in biostatistics for the use of postgraduate students. The main focus is on types of data, measures of central tendency and variation, and basic tests that are useful for the analysis of different types of observations. Parameters such as the normal distribution, calculation of sample size, level of significance, the null hypothesis, indices of variability, and different tests are explained in detail with suitable examples. Using these guidelines, we are confident that postgraduate students will be able to classify the distribution of data and apply the proper test. Information is also given regarding various free software programs and websites useful for statistical calculations. Thus, postgraduate students will benefit whether they opt for academia or industry. PMID:23087501
Taljanovic, Mihra S; Graham, Anna R; Benjamin, James B; Gmitro, Arthur F; Krupinski, Elizabeth A; Schwartz, Stephanie A; Hunter, Tim B; Resnick, Donald L
2008-05-01
To correlate the amount of bone marrow edema (BME) calculated by magnetic resonance imaging(MRI) with clinical findings, histopathology, and radiographic findings, in patients with advanced hip osteoarthritis(OA). The study was approved by The Institutional Human Subject Protection Committee. Coronal MRI of hips was acquired in 19 patients who underwent hip replacement. A spin echo (SE) sequence with four echoes and separate fast spin echo (FSE) proton density (PD)-weighted SE sequences of fat (F) and water (W) were acquired with water and fat suppression, respectively. T2 and water:fat ratio calculations were made for the outlined regions of interest. The calculated MRI values were correlated with the clinical, radiographic, and histopathologic findings. Analyses of variance were done on the MRI data for W/(W + F) and for T2 values (total and focal values) for the symptomatic and contralateral hips. The values were significantly higher in the study group. Statistically significant correlations were found between pain and total W/(W + F), pain and focal T2 values, and the number of microfractures and calculated BME for the focal W/(W + F) in the proximal femora. Statistically significant correlations were found between the radiographic findings and MRI values for total W/(W + F), focal W/(W + F) and focal T2 and among the radiographic findings, pain, and hip movement. On histopathology, only a small amount of BME was seen in eight proximal femora. The amount of BME in the OA hip, as measured by MRI, correlates with the severity of pain, radiographic findings, and number of microfractures.
Liu, Yuewei; Chen, Weihong
2012-02-01
As a nonparametric method, the Kruskal-Wallis test is widely used to compare three or more independent groups when an ordinal or interval level of data is available, especially when the assumptions of analysis of variance (ANOVA) are not met. If the Kruskal-Wallis statistic is statistically significant, the Nemenyi test is an alternative method for further pairwise multiple comparisons to locate the source of significance. Unfortunately, most popular statistical packages do not include the Nemenyi test, which is not easy to calculate by hand. We describe the theory and applications of the Kruskal-Wallis and Nemenyi tests, and present a flexible SAS macro to implement the two tests. The SAS macro is demonstrated with two examples from our cohort study in occupational epidemiology. It provides a useful tool for SAS users to test the differences among three or more independent groups using a nonparametric method.
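For users outside SAS, the same two-step procedure is easy to reproduce. A Python sketch (the chi-square form of the Nemenyi critical difference is used here because it tolerates unequal group sizes; it is one of several published variants, so it may not match the macro exactly):

```python
import numpy as np
from scipy import stats

def kruskal_nemenyi(groups, alpha=0.05):
    """Kruskal-Wallis test followed by Nemenyi pairwise comparisons
    (chi-square approximation of the critical difference)."""
    h, p = stats.kruskal(*groups)
    data = np.concatenate(groups)
    ranks = stats.rankdata(data)          # ties get average ranks
    sizes = [len(g) for g in groups]
    n_total = len(data)
    mean_ranks, start = [], 0
    for n in sizes:                       # mean rank of each group, in order
        mean_ranks.append(ranks[start:start + n].mean())
        start += n
    k = len(groups)
    chi2_crit = stats.chi2.ppf(1 - alpha, k - 1)
    pairs = []
    for i in range(k):
        for j in range(i + 1, k):
            diff = abs(mean_ranks[i] - mean_ranks[j])
            crit = np.sqrt(chi2_crit * n_total * (n_total + 1) / 12.0
                           * (1.0 / sizes[i] + 1.0 / sizes[j]))
            pairs.append(((i, j), diff, crit, diff > crit))
    return h, p, pairs
```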
Chen, Shi-Yi; Deng, Feilong; Huang, Ying; Li, Cao; Liu, Linhai; Jia, Xianbo; Lai, Song-Jia
2016-01-01
Although various computer tools have been elaborately developed to calculate a series of statistics in molecular population genetics for both small- and large-scale DNA data, there is no efficient and easy-to-use toolkit available yet for exclusively focusing on the steps of mathematical calculation. Here, we present PopSc, a bioinformatic toolkit for calculating 45 basic statistics in molecular population genetics, which could be categorized into three classes, including (i) genetic diversity of DNA sequences, (ii) statistical tests for neutral evolution, and (iii) measures of genetic differentiation among populations. In contrast to the existing computer tools, PopSc was designed to directly accept the intermediate metadata, such as allele frequencies, rather than the raw DNA sequences or genotyping results. PopSc is first implemented as the web-based calculator with user-friendly interface, which greatly facilitates the teaching of population genetics in class and also promotes the convenient and straightforward calculation of statistics in research. Additionally, we also provide the Python library and R package of PopSc, which can be flexibly integrated into other advanced bioinformatic packages of population genetics analysis.
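PopSc's defining design choice, accepting allele frequencies rather than raw genotypes, can be mimicked in a few lines. A sketch of two of the simpler statistics in its three categories (these functions are illustrations of the approach, not PopSc's actual API):

```python
def expected_heterozygosity(freqs):
    """Nei's gene diversity at one locus, computed directly from allele frequencies."""
    assert abs(sum(freqs) - 1.0) < 1e-6, "allele frequencies must sum to 1"
    return 1.0 - sum(p * p for p in freqs)

def fst_two_populations(freqs_1, freqs_2):
    """Wright's Fst for two equally sized populations:
    Fst = (Ht - mean(Hs)) / Ht, with Ht from the pooled mean frequencies."""
    hs = 0.5 * (expected_heterozygosity(freqs_1) + expected_heterozygosity(freqs_2))
    ht = expected_heterozygosity([0.5 * (p + q) for p, q in zip(freqs_1, freqs_2)])
    return (ht - hs) / ht if ht > 0 else 0.0

print(expected_heterozygosity([0.5, 0.3, 0.2]))     # 0.62
print(fst_two_populations([0.8, 0.2], [0.4, 0.6]))  # ~0.167
```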
Breaker, Brian K.
2015-01-01
Equations for two regions were found to be statistically significant for developing regression equations for estimating harmonic mean flows at ungaged basins; thus, equations are applicable only to streams in those respective regions in Arkansas. Regression equations for dry season mean monthly flows are applicable only to streams located throughout Arkansas. All regression equations are applicable only to unaltered streams where flows were not significantly affected by regulation, diversion, or urbanization. The median number of years used for dry season mean monthly flow calculation was 43, and the median number of years used for harmonic mean flow calculations was 34 for region 1 and 43 for region 2.
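For reference, the harmonic mean flow regionalized here is computed as shown below (the zero-flow handling is a simplifying assumption; agency practice applies a documented zero-flow adjustment instead):

```python
def harmonic_mean_flow(daily_flows):
    """Harmonic mean flow: n divided by the sum of reciprocals. It weights
    low flows heavily, hence its use in water-quality applications.
    Zero-flow days are simply dropped in this sketch."""
    flows = [q for q in daily_flows if q > 0]
    return len(flows) / sum(1.0 / q for q in flows)

print(harmonic_mean_flow([10.0, 20.0, 40.0]))  # ~17.1, vs arithmetic mean ~23.3
```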
Scherpelz, Peter; Govoni, Marco; Hamada, Ikutaro; ...
2016-06-22
We present an implementation of G₀W₀ calculations including spin-orbit coupling (SOC), enabling investigations of large systems with thousands of electrons, and we discuss results for molecules, solids, and nanocrystals. Using a newly developed set of molecules with heavy elements (called GW-SOC81), we find that, when based upon hybrid density functional calculations, fully relativistic (FR) and scalar-relativistic (SR) G₀W₀ calculations of vertical ionization potentials both yield excellent performance compared to experiment, with errors below 1.9%. We demonstrate that while SR calculations have higher random errors, FR calculations systematically underestimate the VIP by 0.1 to 0.2 eV. We further verify that SOC effects may be well approximated at the FR density functional level and then added to SR G₀W₀ results for a broad class of systems. We also address the use of different root-finding algorithms for the G₀W₀ quasiparticle equation and the significant influence of including d electrons in the valence partition of the pseudopotential for G₀W₀ calculations. Lastly, we present statistical analyses of our data, highlighting the importance of separating definitive improvements from those that may occur by chance due to a limited number of samples. We suggest the statistical analyses used here will be useful in the assessment of the accuracy of a large variety of electronic structure methods.
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in the fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not get large enough effect sizes would use larger samples to obtain significant results.
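The two quantities tabulated in such a survey, the sample effect size and the power it implies, can be computed as follows (the noncentral-t power formula is standard; the example numbers are illustrative):

```python
import numpy as np
from scipy import stats

def cohens_d(x, y):
    """Sample effect size for a two-sample comparison, using the pooled SD."""
    nx, ny = len(x), len(y)
    sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                 / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / sp

def power_two_sample_t(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test for effect size d,
    via the noncentral t distribution."""
    df = 2 * n_per_group - 2
    nc = d * np.sqrt(n_per_group / 2.0)   # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    return (1 - stats.nct.cdf(t_crit, df, nc)) + stats.nct.cdf(-t_crit, df, nc)

# A "large" effect in a small sample can still be underpowered:
print(power_two_sample_t(d=0.8, n_per_group=15))   # ~0.56
```

The printed example reproduces the study's point: a large effect (d = 0.8) in a small sample (n = 15 per group) yields power of only about 0.56, so meaningful effects can go undetected.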
Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A
2016-05-08
The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning target volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing was evaluated. Dose differences (eMC-calculated minus measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.
Helmy, Tamer Abdallah; El-Reweny, Ehab Mahmoud; Ghazy, Farahat Gomaa
2017-09-01
The venous-to-arterial carbon dioxide partial pressure gradient (PCO₂ gap) is considered an alternative marker of tissue hypoperfusion and has been used to guide treatment for shock. The aim of this study was to investigate the prognostic value of the venous-to-arterial carbon dioxide difference during early resuscitation of patients with septic shock, compared with that of lactate clearance and the Acute Physiology and Chronic Health Evaluation II (APACHE-II) score. Forty patients admitted to one Intensive Care Unit were enrolled. The APACHE-II score was calculated on admission. Arterial blood gas, central venous, and lactate samples were obtained on admission and after 6 h, and lactate clearance was calculated. Patients were classified retrospectively into Group I (survivors) and Group II (nonsurvivors), and the Pv-aCO₂ difference in the two groups was evaluated. Data were analyzed using the IBM SPSS software package, version 20.0. At T0, Group II showed a higher PCO₂ gap (8.37 ± 1.36 mmHg) than Group I (7.55 ± 0.95 mmHg), a statistically significant difference (P = 0.030). At T6, Group II showed a higher PCO₂ gap (9.48 ± 1.47 mmHg; P < 0.001) and higher mean lactate values (62.71 ± 23.66 mg/dl; P < 0.001) than Group I, in which the PCO₂ gap and mean lactate values became much lower, 5.91 ± 1.12 mmHg and 33.61 ± 5.80 mg/dl, respectively. Group I showed higher lactate clearance (25.42 ± 6.79%) than Group II (−69.40 ± 15.46%), a statistically significant difference (P < 0.001). A high PCO₂ gap (>7.8 mmHg) after 6 h of resuscitation of septic shock patients is associated with high mortality.
The Influence of Ability Grouping on Math Achievement in a Rural Middle School
ERIC Educational Resources Information Center
Pritchard, Robert R.
2012-01-01
The researcher examined the academic performance of low-tracked students (n = 156) using standardized math test scores to determine whether there is a statistically significant difference in achievement depending on academic environment, tracked or nontracked. An analysis of variance (ANOVA) was calculated, using a paired samples t-test for a…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, C.; Potts, I.; Reeks, M. W., E-mail: mike.reeks@ncl.ac.uk
We present a simple stochastic quadrant model for calculating the transport and deposition of heavy particles in a fully developed turbulent boundary layer based on the statistics of wall-normal fluid velocity fluctuations obtained from a fully developed channel flow. Individual particles are tracked through the boundary layer via their interactions with a succession of random eddies found in each of the quadrants of the fluid Reynolds shear stress domain in a homogeneous Markov chain process. In this way, we are able to account directly for the influence of ejection and sweeping events as others have done, but without resorting to the use of adjustable parameters. Deposition rate predictions for a wide range of heavy particles predicted by the model compare well with benchmark experimental measurements. In addition, deposition rates are compared with those obtained from continuous random walk models and Langevin equation based ejection and sweep models, which give significantly lower deposition rates. Various statistics related to the particle near-wall behavior are also presented. Finally, we consider the model's limitations in calculating deposition in more complex flows where the near-wall turbulence may be significantly different.
Freezing temperature of finger skin.
Wilson, O; Goldman, R F; Molnar, G W
1976-10-01
In 45 subjects, 154 frostnips of the finger were induced by cooling in air at -15 degrees C with various wind speeds. The mean supercooled skin temperature at which frostnip appeared was -9.4 degrees C. The mean skin temperature rise due to heat of fusion at ice crystallization was 5.3 degrees C. The skin temperature rose to what was termed the apparent freezing point. The relation of this point to the supercooled skin temperature was analyzed for the three wind speeds used. An apparent freezing point for a condition of no supercooling was calculated, estimating the highest temperature at which skin freezes at a given wind speed. The validity of the obtained differences in apparent freezing point was tested by an analysis of covariance. Although not statistically significant, the data suggest that the apparent freezing point with no supercooling decreases with increasing wind velocity. The highest calculated apparent freezing point at -15 degrees C and 6.8 m/s was 1.2 degrees C lower than the true freezing point for skin previously determined in brine, which is a statistically significant difference.
A shift from significance test to hypothesis test through power analysis in medical research.
Singh, G
2006-01-01
Medical research literature, until recently, exhibited substantial dominance of Fisher's significance-test approach to statistical inference, which concentrates on the probability of a type I error, over the Neyman-Pearson hypothesis-test approach, which considers the probabilities of both type I and type II errors. Fisher's approach dichotomises results into significant or non-significant on the basis of a P value. The Neyman-Pearson approach speaks of acceptance or rejection of the null hypothesis. Based on the same theory, these two approaches address the same objective and conclude in their own ways. Advances in computing techniques and the availability of statistical software have resulted in the increasing application of power calculations in medical research, and thereby in the reporting of significance-test results in the light of the power of the test as well. The significance-test approach, when it incorporates power analysis, contains the essence of the hypothesis-test approach. It may safely be argued that the rising application of power analysis in medical research may have initiated a shift from Fisher's significance test to the Neyman-Pearson hypothesis-test procedure.
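The practical difference between the two schools shows up in where power enters the calculation. A sketch using statsmodels (the effect sizes and sample sizes are illustrative):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Neyman-Pearson style planning: fix alpha and the desired power, solve for n.
n_required = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                                  alternative='two-sided')
print(round(n_required))   # ~64 per group for a medium effect

# Retrospective check: what power did a completed small study actually have?
achieved = analysis.solve_power(effect_size=0.5, nobs1=20, alpha=0.05,
                                alternative='two-sided')
print(achieved)            # ~0.34: a non-significant result is uninformative here
```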
Calculating wave-generated bottom orbital velocities from surface-wave parameters
Wiberg, P.L.; Sherwood, C.R.
2008-01-01
Near-bed wave orbital velocities and shear stresses are important parameters in many sediment-transport and hydrodynamic models of the coastal ocean, estuaries, and lakes. Simple methods for estimating bottom orbital velocities from surface-wave statistics such as significant wave height and peak period often are inaccurate except in very shallow water. This paper briefly reviews approaches for estimating wave-generated bottom orbital velocities from near-bed velocity data, surface-wave spectra, and surface-wave parameters; MATLAB code for each approach is provided. Aspects of this problem have been discussed elsewhere. We add to this work by providing a method for using a general form of the parametric surface-wave spectrum to estimate bottom orbital velocity from significant wave height and peak period, investigating effects of spectral shape on bottom orbital velocity, comparing methods for calculating bottom orbital velocity against values determined from near-bed velocity measurements at two sites on the US east and west coasts, and considering the optimal representation of bottom orbital velocity for calculations of near-bed processes. Bottom orbital velocities calculated using near-bed velocity data, measured wave spectra, and parametric spectra for a site on the northern California shelf and one in the mid-Atlantic Bight compare quite well and are relatively insensitive to spectral shape except when bimodal waves are present with maximum energy at the higher-frequency peak. These conditions, which are most likely to occur at times when bottom orbital velocities are small, can be identified with our method as cases where the measured wave statistics are inconsistent with Donelan's modified form of the Joint North Sea Wave Project (JONSWAP) spectrum. We define the 'effective' forcing for wave-driven, near-bed processes as the product of the magnitude of forcing times its probability of occurrence, and conclude that different bottom orbital velocity statistics may be appropriate for different problems. Copyright © 2008 Elsevier Ltd.
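A minimal sketch of the two approaches compared in the paper, using linear wave theory (the variable names and the fixed-point dispersion solver are my choices; the authors' MATLAB code should be preferred for real use):

```python
import numpy as np

def wavenumber(T, h, g=9.81):
    """Solve the linear dispersion relation (2*pi/T)**2 = g*k*tanh(k*h)
    for wavenumber k by fixed-point iteration."""
    omega = 2.0 * np.pi / T
    k = omega**2 / g                       # deep-water first guess
    for _ in range(100):
        k = omega**2 / (g * np.tanh(k * h))
    return k

def ubot_monochromatic(Hs, T, h):
    """Bottom orbital velocity amplitude for a single (H, T) wave: the
    'simple method' that degrades outside very shallow water."""
    k = wavenumber(T, h)
    return np.pi * Hs / (T * np.sinh(k * h))

def ubot_from_spectrum(f, S, h):
    """Representative bottom orbital velocity from a surface elevation
    spectrum S(f) [m^2/Hz], f strictly positive: transfer each component
    to the bed and integrate the bottom-velocity variance."""
    omega = 2.0 * np.pi * f
    k = np.array([wavenumber(1.0 / fi, h) for fi in f])
    transfer = omega / np.sinh(k * h)      # surface amplitude -> bottom velocity
    var_ub = np.trapz((transfer**2) * S, f)
    return np.sqrt(2.0 * var_ub)           # representative amplitude
```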
Chi-Square Statistics, Tests of Hypothesis and Technology.
ERIC Educational Resources Information Center
Rochowicz, John A.
The use of technology such as computers and programmable calculators enables students to find p-values and conduct tests of hypotheses in many different ways. Comprehension and interpretation of a research problem become the focus for statistical analysis. This paper describes how to calculate chi-square statistics and p-values for statistical…
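For instance, a goodness-of-fit test over four categories takes only a few lines in Python (the counts are illustrative):

```python
from scipy.stats import chisquare

observed = [18, 22, 30, 30]   # counts in four categories
expected = [25, 25, 25, 25]   # uniform null hypothesis
stat, p = chisquare(observed, f_exp=expected)
print(stat, p)  # chi-square ~ 4.32, p ~ 0.23: fail to reject at alpha = .05
```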
2014-01-01
Background Thresholds for statistical significance are insufficiently demonstrated by 95% confidence intervals or P-values when assessing results from randomised clinical trials. First, a P-value only shows the probability of getting a result assuming that the null hypothesis is true and does not reflect the probability of getting a result assuming an alternative hypothesis to the null hypothesis is true. Second, a confidence interval or a P-value showing significance may be caused by multiplicity. Third, statistical significance does not necessarily result in clinical significance. Therefore, assessment of intervention effects in randomised clinical trials deserves more rigour in order to become more valid. Methods Several methodologies for assessing the statistical and clinical significance of intervention effects in randomised clinical trials were considered. Balancing simplicity and comprehensiveness, a simple five-step procedure was developed. Results For a more valid assessment of results from a randomised clinical trial we propose the following five steps: (1) report the confidence intervals and the exact P-values; (2) report the Bayes factor for the primary outcome, being the ratio of the probability that a given trial result is compatible with a ‘null’ effect (corresponding to the P-value) divided by the probability that the trial result is compatible with the intervention effect hypothesised in the sample size calculation; (3) adjust the confidence intervals and the statistical significance threshold if the trial is stopped early or if interim analyses have been conducted; (4) adjust the confidence intervals and the P-values for multiplicity due to the number of outcome comparisons; and (5) assess the clinical significance of the trial results. Conclusions If the proposed five-step procedure is followed, this may increase the validity of assessments of intervention effects in randomised clinical trials. PMID:24588900
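Step (2) can be sketched directly from its definition, using a normal approximation for the estimated intervention effect (the numbers are illustrative; consult the authors' paper for the thresholds they recommend):

```python
from scipy.stats import norm

def bayes_factor(observed_effect, se, hypothesised_effect):
    """Bayes factor as defined in step (2): the likelihood of the observed
    trial result under a 'null' effect divided by its likelihood under the
    intervention effect assumed in the sample size calculation."""
    like_null = norm.pdf(observed_effect, loc=0.0, scale=se)
    like_alt = norm.pdf(observed_effect, loc=hypothesised_effect, scale=se)
    return like_null / like_alt

# Example: observed mean difference 4.0 (SE 2.0), trial powered to detect 6.0
print(bayes_factor(4.0, 2.0, 6.0))  # ~0.22; values below 1 favour the
                                    # hypothesised effect over the null
```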
[Laser biostimulation in the healing of crural ulcerations].
Król, P; Franek, A; Huńka-Zurawińska, W; Bil, J; Swist, D; Polak, A; Bendkowski, W
2001-11-01
The objective of this paper was to evaluate the effect of laser biostimulation on the healing of crural ulcerations. Three comparative groups of patients with venous crural ulcerations, A, B and C, were formed at random. Group A consisted of 17 patients, group B of 15, and group C of 17. Patients in all groups were treated pharmacologically and received compression therapy. Ulcerations in group A were additionally irradiated with a biostimulation laser (810 nm) so that each session delivered an energy dose of 4 J/cm2. Patients in group B additionally received a blinded placebo treatment (sham laser therapy). The evaluated outcomes were the changes in ulcer area and in the volume of the tissue defect; the weekly rate of change in area and volume was calculated. After the treatment there was a statistically significant decrease in ulcer area in all groups, with no statistically significant difference between the groups. After the treatment there was a statistically significant decrease in ulcer volume only in groups A and C, again with no statistically significant difference between the groups.
Langan, Dean; Higgins, Julian P T; Gregory, Walter; Sutton, Alexander J
2012-05-01
We aim to illustrate the potential impact of a new study on a meta-analysis, which gives an indication of the robustness of the meta-analysis. A number of augmentations are proposed to one of the most widely used graphical displays, the funnel plot. Namely, 1) statistical significance contours, which define regions of the funnel plot in which a new study would have to be located to change the statistical significance of the meta-analysis; and 2) heterogeneity contours, which show how a new study would affect the extent of heterogeneity in a given meta-analysis. Several other features are also described, and the use of multiple features simultaneously is considered. The statistical significance contours suggest that one additional study, no matter how large, may have a very limited impact on the statistical significance of a meta-analysis. The heterogeneity contours illustrate that one outlying study can increase the level of heterogeneity dramatically. The additional features of the funnel plot have applications including 1) informing sample size calculations for the design of future studies eligible for inclusion in the meta-analysis; and 2) informing the prioritization of updates to a portfolio of meta-analyses such as those prepared by the Cochrane Collaboration. Copyright © 2012 Elsevier Inc. All rights reserved.
Analysis of the sleep quality of elderly people using biomedical signals.
Moreno-Alsasua, L; Garcia-Zapirain, B; Mendez-Zorrilla, A
2015-01-01
This paper presents a technical solution that analyses sleep signals captured by biomedical sensors to find possible disorders during rest. Specifically, the method evaluates electrooculogram (EOG) signals, skin conductance (GSR), air flow (AS), and body temperature. Next, a quantitative sleep quality analysis determines significant changes in the biological signals, and any similarities between them in a given time period. Filtering techniques such as the Fourier transform method and IIR filters process the signal and identify significant variations. Once these changes have been identified, all significant data are compared and a quantitative and statistical analysis is carried out to determine the level of a person's rest. To evaluate correlations and significant differences, a statistical analysis was performed, showing correlations between the EOG and AS signals (p=0.005), the EOG and GSR signals (p=0.037) and, finally, the EOG and body temperature (p=0.04). Doctors could use this information to monitor changes within a patient.
Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; ...
2015-01-07
The form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in lattice QCD+QED and QED. A non-perturbative treatment of QED is used and is checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed. Statistically significant signals are obtained. Initial results appear promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.
Meteorology Assessment of Historic Rainfall for Los Alamos During September 2013
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruggeman, David Alan; Dewart, Jean Marie
2016-02-12
DOE Order 420.1, Facility Safety, requires that site natural phenomena hazards be evaluated every 10 years to support the design of nuclear facilities. The evaluation requires calculating return period rainfall to determine roof loading requirements and flooding potential based on our on-site rainfall measurements. The return period rainfall calculations are done based on statistical techniques and not site-specific meteorology. This and future studies analyze the meteorological factors that produce the significant rainfall events. These studies provide the meteorology context of the return period rainfall events.
A statistical method for the conservative adjustment of false discovery rate (q-value).
Lai, Yinglei
2017-03-14
q-value is a widely used statistical method for estimating false discovery rate (FDR), which is a conventional significance measure in the analysis of genome-wide expression data. q-value is a random variable and it may underestimate FDR in practice. An underestimated FDR can lead to unexpected false discoveries in the follow-up validation experiments. This issue has not been well addressed in the literature, especially in the situation when the permutation procedure is necessary for p-value calculation. We proposed a statistical method for the conservative adjustment of q-value. In practice, it is usually necessary to calculate p-value by a permutation procedure; this was also considered in our adjustment method. We used simulation data as well as experimental microarray or sequencing data to illustrate the usefulness of our method. The conservativeness of our approach has been mathematically confirmed in this study. We have demonstrated the importance of conservative adjustment of q-value, particularly when the proportion of differentially expressed genes is small or the overall differential expression signal is weak.
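The paper's conservative adjustment itself is not reproduced here, but the baseline it modifies is the standard monotone q-value computed from a list of p-values. A Python sketch follows, with pi0 (the null proportion) fixed at 1, the conservative choice equivalent to Benjamini-Hochberg.

```python
import numpy as np

def q_values(p, pi0=1.0):
    """Monotone q-values from p-values (Storey-style, pi0 = 1 is conservative)."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    q = np.empty(m)
    running_min = 1.0
    for rank, idx in enumerate(order[::-1]):   # walk from the largest p-value down
        i = m - rank                           # 1-based rank of this p-value
        running_min = min(running_min, pi0 * m * p[idx] / i)
        q[idx] = running_min
    return q

print(q_values([0.001, 0.009, 0.04, 0.2, 0.7]))
# -> [0.005, 0.0225, 0.0667, 0.25, 0.7] (approximately)
```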
BCM: toolkit for Bayesian analysis of Computational Models using samplers.
Thijssen, Bram; Dijkstra, Tjeerd M H; Heskes, Tom; Wessels, Lodewyk F A
2016-10-21
Computational models in biology are characterized by a large degree of uncertainty. This uncertainty can be analyzed with Bayesian statistics; however, the sampling algorithms that are frequently used for calculating Bayesian statistical estimates are computationally demanding, and each algorithm has unique advantages and disadvantages. It is typically unclear, before starting an analysis, which algorithm will perform well on a given computational model. We present BCM, a toolkit for the Bayesian analysis of Computational Models using samplers. It provides efficient, multithreaded implementations of eleven algorithms for sampling from posterior probability distributions and for calculating marginal likelihoods. BCM includes tools to simplify the process of model specification and scripts for visualizing the results. The flexible architecture allows it to be used on diverse types of biological computational models. In an example inference task using a model of the cell cycle based on ordinary differential equations, BCM is significantly more efficient than existing software packages, allowing more challenging inference problems to be solved. BCM represents an efficient one-stop-shop for computational modelers wishing to use sampler-based Bayesian statistics.
Alsadhan, Salwa A; Alsayari, Najla F; Abuabat, Mashael F
2018-02-22
The main aim of this cross-sectional study was to assess knowledge concerning traumatic dental injuries and their management among primary schoolteachers in Riyadh, Saudi Arabia. The secondary objective was to evaluate the effect of gender, nationality, marital status, school type, geographical area, age group, level of education and years of experience on teachers' knowledge. Data were collected, through a self-administered questionnaire, from both male and female teachers employed in public and private primary schools in the five geographical areas of Riyadh City. The total sample size was 1,520 teachers. Data were entered into the Statistical Package for the Social Sciences. Frequencies and percentages were calculated. An independent t-test and a one-way analysis of variance (ANOVA) were used to calculate significance. The total score for the questions assessing knowledge was calculated out of 9, and the highest score was 7 with an average score of 2.85. Over half of the sampled participants stated that they did not know how to manage soft-tissue injuries. Regarding the management of fractured teeth, 38.8% believed that the fractured part is useless; and for the management of an avulsed permanent tooth, only 6.2% of the respondents selected the correct answer. For the question regarding suitable storage medium of an avulsed tooth, only 19.7% chose milk and 3.2% chose the injured person's saliva. Teachers between 41 and 50 years of age and those with longer years of experience had the highest level of knowledge. Teachers in the north area of Riyadh had a higher level of knowledge than teachers in other areas. There was a lack of knowledge among primary schoolteachers in Riyadh concerning traumatic dental injuries and their management. Statistically significant differences were found among geographical areas, age groups and years of experience; no statistically significant differences were found regarding gender, nationality, marital status, level of education and school type (public/private). © 2018 FDI World Dental Federation.
Physics-based statistical model and simulation method of RF propagation in urban environments
Pao, Hsueh-Yuan; Dvorak, Steven L.
2010-09-14
A physics-based statistical model and simulation/modeling method and system of electromagnetic wave propagation (wireless communication) in urban environments. In particular, the model is a computationally efficient closed-form parametric model of RF propagation in an urban environment which is extracted from a physics-based statistical wireless channel simulation method and system. The simulation divides the complex urban environment into a network of interconnected urban canyon waveguides which can be analyzed individually; calculates spectral coefficients of modal fields in the waveguides excited by the propagation using a database of statistical impedance boundary conditions which incorporates the complexity of building walls in the propagation model; determines statistical parameters of the calculated modal fields; and determines a parametric propagation model based on the statistical parameters of the calculated modal fields from which predictions of communications capability may be made.
Fisher statistics for analysis of diffusion tensor directional information.
Hutchinson, Elizabeth B; Rutecki, Paul A; Alexander, Andrew L; Sutula, Thomas P
2012-04-30
A statistical approach is presented for the quantitative analysis of diffusion tensor imaging (DTI) directional information using Fisher statistics, which were originally developed for the analysis of vectors in the field of paleomagnetism. In this framework, descriptive and inferential statistics have been formulated based on the Fisher probability density function, a spherical analogue of the normal distribution. The Fisher approach was evaluated for investigation of rat brain DTI maps to characterize tissue orientation in the corpus callosum, fornix, and hilus of the dorsal hippocampal dentate gyrus, and to compare directional properties in these regions following status epilepticus (SE) or traumatic brain injury (TBI) with values in healthy brains. Direction vectors were determined for each region of interest (ROI) for each brain sample and Fisher statistics were applied to calculate the mean direction vector and variance parameters in the corpus callosum, fornix, and dentate gyrus of normal rats and rats that experienced TBI or SE. Hypothesis testing was performed by calculation of Watson's F-statistic and associated p-value giving the likelihood that grouped observations were from the same directional distribution. In the fornix and midline corpus callosum, no directional differences were detected between groups; however, in the hilus, significant (p<0.0005) differences were found that robustly confirmed observations that were suggested by visual inspection of directionally encoded color DTI maps. The Fisher approach is a potentially useful analysis tool that may extend the current capabilities of DTI investigation by providing a means of statistical comparison of tissue structural orientation. Copyright © 2012 Elsevier B.V. All rights reserved.
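To illustrate the descriptive half of this framework, the sketch below computes the mean direction vector and the Fisher concentration parameter for a small set of hypothetical unit vectors; the inferential part (Watson's F-statistic) is omitted.

```python
import numpy as np

def fisher_stats(vectors):
    """Mean direction and Fisher concentration estimate for unit vectors on the sphere."""
    v = np.asarray(vectors, dtype=float)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)   # normalize to unit length
    resultant = v.sum(axis=0)
    R = np.linalg.norm(resultant)                      # resultant length
    n = len(v)
    mean_direction = resultant / R
    kappa = (n - 1) / (n - R)                          # large-concentration approximation
    return mean_direction, kappa

vecs = [[0.90, 0.10, 0.42], [0.85, 0.20, 0.48], [0.88, 0.05, 0.47]]
mean_dir, kappa = fisher_stats(vecs)
print(mean_dir, round(kappa, 1))   # tightly clustered vectors give a large kappa
```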
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maruyama, Mitsunari, E-mail: mitunari@med-shimane.u.ac.jp; Yoshizako, Takeshi, E-mail: yosizako@med.shimane-u.ac.jp; Nakamura, Tomonori, E-mail: t-naka@med.shimane-u.ac.jp
2016-03-15
Purpose: This study was performed to evaluate the accumulation of lipiodol emulsion (LE) and adverse events during our initial experience of balloon-occluded trans-catheter arterial chemoembolization (B-TACE) for hepatocellular carcinoma (HCC) compared with conventional TACE (C-TACE). Methods: The B-TACE group (50 cases) was compared with the C-TACE group (50 cases). The ratio of the LE concentration in the tumor to that in the surrounding embolized liver parenchyma (LE ratio) was calculated after each treatment. Adverse events were evaluated according to the Common Terminology Criteria for Adverse Events (CTCAE) version 4.0. Results: The LE ratio at the subsegmental level showed a statistically significant difference between the groups (t test: P < 0.05). Only elevation of alanine aminotransferase was more frequent in the B-TACE group, showing a statistically significant difference (Mann–Whitney test: P < 0.05). While B-TACE caused severe adverse events (liver abscess and infarction) in patients with bile duct dilatation, there was no statistically significant difference in incidence between the groups. Multivariate logistic regression analysis suggested that the significant risk factor for liver abscess/infarction was bile duct dilatation (P < 0.05). Conclusion: The LE ratio at the subsegmental level showed a statistically significant difference between the groups (t test: P < 0.05). B-TACE caused severe adverse events (liver abscess and infarction) in patients with bile duct dilatation.
Lee, O-Sung; Ahn, Soyeon; Lee, Yong Seuk
2017-07-01
The purpose of this systematic review and meta-analysis was to evaluate the effectiveness and safety of early weight-bearing by comparing clinical and radiological outcomes between early and traditional delayed weight-bearing after opening-wedge high tibial osteotomy (OWHTO). A rigorous and systematic approach was used, and methodological quality was assessed. Results that could be compared across two or more articles were presented as forest plots. A 95% confidence interval was calculated for each effect size, and we calculated the I² statistic, which represents the percentage of total variation attributable to heterogeneity among studies. The random-effects model was used to calculate the effect size. Six articles were included in the final analysis. All case groups comprised early full weight-bearing within 2 weeks; all control groups comprised late full weight-bearing between 6 weeks and 2 months. Pooled analysis was possible for the improvement in Lysholm score, but no statistically significant difference was shown between groups. Other clinical results were also similar between groups. Four studies reported the mechanical femorotibial angle (mFTA), and this result showed no statistically significant difference between groups in the pooled analysis. Furthermore, early weight-bearing showed more favorable results in some radiologic results (osseointegration and patellar height) and complications (thrombophlebitis and recurrence). Our analysis supports that early full weight-bearing after OWHTO using a locking plate leads to improvement in outcomes comparable to delayed weight-bearing in terms of clinical and radiological outcomes, while being more favorable with respect to some radiologic parameters and complications.
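As a reminder of how the heterogeneity figure in such pooled analyses is obtained, here is a minimal Python sketch computing Cochran's Q and the I² statistic from inverse-variance weights; the three studies are hypothetical.

```python
def i_squared(effects, variances):
    """Cochran's Q and the I^2 heterogeneity statistic (%)."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    Q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    i2 = 0.0 if Q == 0 else max(0.0, (Q - df) / Q) * 100.0
    return Q, i2

# Example: three studies with effect sizes and sampling variances
print(i_squared([0.30, 0.55, 0.10], [0.010, 0.020, 0.015]))   # I^2 ~ 65%
```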
Controlling excited-state contamination in nucleon matrix elements
Yoon, Boram; Gupta, Rajan; Bhattacharya, Tanmoy; ...
2016-06-08
We present a detailed analysis of methods to reduce statistical errors and excited-state contamination in the calculation of matrix elements of quark bilinear operators in nucleon states. All the calculations were done on a 2+1-flavor ensemble with lattices of size 32³ × 64 generated using the rational hybrid Monte Carlo algorithm at a = 0.081 fm and with Mπ = 312 MeV. The statistical precision of the data is improved using the all-mode-averaging method. We compare two methods for reducing excited-state contamination: a variational analysis and a 2-state fit to data at multiple values of the source-sink separation t_sep. We show that both methods can be tuned to significantly reduce excited-state contamination and discuss their relative advantages and cost effectiveness. As a result, a detailed analysis of the size of source smearing used in the calculation of quark propagators and the range of values of t_sep needed to demonstrate convergence of the isovector charges of the nucleon to the t_sep → ∞ estimates is presented.
Ueda, Fumiaki; Aburano, Hiroyuki; Ryu, Yasuji; Yoshie, Yuichi; Nakada, Mitsutoshi; Hayashi, Yutaka; Matsui, Osamu; Gabata, Toshifumi
2017-07-10
The purpose of this study was to discriminate supratentorial intraventricular subependymoma (SIS) from central neurocytoma (CNC) using magnetic resonance spectroscopy (MRS). Single-voxel proton MRS spectra from five SISs, five CNCs, and normal controls, acquired with a 1.5T or 3T MR scanner, were evaluated using a point-resolved spectroscopy sequence. Automatically calculated ratios comparing choline (Cho), N-acetylaspartate (NAA), myoinositol (MI), and/or glycine (Gly) to creatine (Cr) were determined. Evaluation of Cr to unsuppressed water (USW) was also performed. The Mann-Whitney U test was carried out to test the significance of differences in the metabolite ratios. Detectability of lactate (Lac) and alanine (Ala) was evaluated. Although a statistically significant difference (P < 0.0001) was observed in Cho/Cr among SIS, control spectra, and CNC, no statistical difference was noted between SIS and control spectra (P = 0.11). Statistically significant differences were observed in NAA/Cr between SIS and CNC (P = 0.04) or control spectra (P < 0.0001). A statistically significant difference was observed in MI and/or Gly to Cr between SIS and control spectra (P = 0.03), and between CNC and control spectra (P < 0.0006). There was no statistical difference between SIS and CNC for MI and/or Gly to Cr (P = 0.32). Statistically significant differences were found between SIS and control spectra (P < 0.0053), between control spectra and CNC (P < 0.0016), and between SIS and CNC (P < 0.0083) for Cr to USW. Lac inverted doublets were confirmed in two SISs. Triplets of Lac and Ala were detected in four spectra of CNC. The present study showed that MRS can be useful in discriminating SIS from CNC.
Evaluating fMRI methods for assessing hemispheric language dominance in healthy subjects.
Baciu, Monica; Juphard, Alexandra; Cousin, Emilie; Bas, Jean François Le
2005-08-01
We evaluated two methods for quantifying hemispheric language dominance in healthy subjects, using a rhyme detection task (deciding whether a pair of words rhyme) and a word fluency task (generating words starting with a given letter). One of the methods, called the "flip method" (FM), was based on the direct statistical comparison between the hemispheres' activity. The second, the classical lateralization indices method (LIM), was based on calculating lateralization indices from the number of activated pixels within hemispheres. The main difference between the methods is the statistical assessment of the inter-hemispheric difference: while FM shows whether the difference between the hemispheres' activity is statistically significant, LIM shows only whether there is a difference between hemispheres. The robustness of LIM and FM was assessed by calculating correlation coefficients between the LIs obtained with each of these methods and the manual lateralization indices (MLI) obtained with the Edinburgh inventory. Our results showed significant correlation between the LIs provided by each method and the MLI, suggesting that both methods are robust for quantifying hemispheric dominance for language in healthy subjects. In the present study we also evaluated the effect of spatial normalization, smoothing and "clustering" (NSC) on the intra-hemispheric location of activated regions and the inter-hemispheric asymmetry of the activation. Our results showed that NSC did not affect the hemispheric specialization but increased the value of the inter-hemispheric difference.
The Triangle Technique: a new evidence-based educational tool for pediatric medication calculations.
Sredl, Darlene
2006-01-01
Many nursing students verbalize an aversion to mathematical concepts and experience math anxiety whenever they confront a mathematical problem. Since nurses confront mathematical problems on a daily basis, they must learn to feel comfortable with their ability to perform these calculations correctly. The Triangle Technique, a new educational tool available to nurse educators, incorporates evidence-based concepts within a graphic model using visual, auditory, and kinesthetic learning styles to demonstrate pediatric medication calculations of normal therapeutic ranges. The theoretical framework for the technique is presented, as is a pilot study examining the efficacy of the educational tool. Statistically significant results obtained by Pearson's product-moment correlation indicate that students are better able to calculate accurate pediatric therapeutic dosage ranges after participation in the educational intervention of learning the Triangle Technique.
Multi-fidelity machine learning models for accurate bandgap predictions of solids
Pilania, Ghanshyam; Gubernatis, James E.; Lookman, Turab
2016-12-28
Here, we present a multi-fidelity co-kriging statistical learning framework that combines variable-fidelity quantum mechanical calculations of bandgaps to generate a machine-learned model that enables low-cost accurate predictions of the bandgaps at the highest fidelity level. Additionally, the adopted Gaussian process regression formulation allows us to predict the underlying uncertainties as a measure of our confidence in the predictions. Using a set of 600 elpasolite compounds as an example dataset and using semi-local and hybrid exchange correlation functionals within density functional theory as two levels of fidelity, we demonstrate the excellent learning performance of the method against actual high fidelity quantum mechanical calculations of the bandgaps. The presented statistical learning method is not restricted to bandgaps or electronic structure methods and extends the utility of high throughput property predictions in a significant way.
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
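The OPR idea can be sketched in a few lines of linear algebra: under linear theory the prediction standard deviation is proportional to sqrt(zᵀ(XᵀωX)⁻¹z), so omitting an observation means deleting its row from the weighted sensitivity matrix and recomputing. The Python below is a simplified illustration of that leverage calculation, not the OPR-PPR program itself; the matrices are hypothetical, and the common error-variance factor is held fixed so it cancels in the percent change.

```python
import numpy as np

def opr_percent(X, w, z, i):
    """Percent increase in prediction standard deviation when observation i is
    omitted from a weighted least-squares calibration.
    X: (nobs x npar) observation sensitivities; w: observation weights;
    z: prediction sensitivity vector."""
    def pred_sd(Xs, ws):
        cov = np.linalg.inv(Xs.T @ np.diag(ws) @ Xs)   # parameter covariance (up to s^2)
        return np.sqrt(z @ cov @ z)
    keep = np.ones(len(w), dtype=bool)
    keep[i] = False
    sd_full = pred_sd(X, np.asarray(w))
    sd_omit = pred_sd(X[keep], np.asarray(w)[keep])
    return 100.0 * (sd_omit - sd_full) / sd_full

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
w = [1.0, 1.0, 1.0, 1.0]
z = np.array([1.0, 2.5])                  # sensitivity of the prediction to each parameter
print(round(opr_percent(X, w, z, 3), 1))  # omitting the high-leverage row hurts most
```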
Farrell, Mary Beth
2018-06-01
This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the higher the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of a P value is beyond the scope of this review. However, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) explain the dispersion of data around a mean of a sample drawn from a population. SD is commonly reported in the literature. A small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution taking the shape of a bell curve. In a normal distribution, 68% of the data will fall within 1 SD, 95% will fall within 2 SDs, and 99.7% will fall within 3 SDs. Confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being measured. A wide confidence interval indicates that if the experiment were repeated multiple times on other samples, the measured statistic would lie within a wide range of possibilities. The confidence interval relies on the SE. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
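As a concrete companion to the interreader-reliability discussion, here is a minimal Python sketch of the Cohen κ-statistic for two readers; the 2 × 2 agreement table is hypothetical.

```python
def cohen_kappa(table):
    """Cohen's kappa from a square inter-reader agreement table
    (rows: reader 1 categories, columns: reader 2 categories)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_obs = sum(table[i][i] for i in range(k)) / n
    row_tot = [sum(row) for row in table]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]
    p_exp = sum(row_tot[i] * col_tot[i] for i in range(k)) / n ** 2
    return (p_obs - p_exp) / (1 - p_exp)

# Example: two readers rating 100 scans as positive/negative
table = [[40, 10],    # reader 1 positive
         [5, 45]]     # reader 1 negative
print(round(cohen_kappa(table), 2))   # observed 0.85 vs chance 0.50 -> kappa = 0.70
```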
On the stability of carbonic acid under conditions in the atmosphere of Venus
NASA Technical Reports Server (NTRS)
Khanna, R. K.; Tossell, J. A.; Fox, K.
1994-01-01
Results of quantum statistical mechanical calculations and a thermodynamic evaluation of the structure of H2CO3 and its stability against dissociation are reported. Under temperature and pressure conditions near the surface of Venus, carbonic acid would predominantly dissociate into H2O and CO2 and, hence, could not contribute to any significant absorption there.
Government Expenditures on Education as the Percentage of GDP in the EU
ERIC Educational Resources Information Center
Galetic, Fran
2015-01-01
This paper analyzes government expenditures on education as a percentage of gross domestic product across countries of the European Union. A statistical model based on the Z-score calculates how much each EU country deviates from the average value. The model shows that government expenditures on education vary significantly between…
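The Z-score deviation such a model reports is straightforward to reproduce; below is a minimal Python sketch with hypothetical spending figures.

```python
def z_scores(values):
    """Z-score of each observation relative to the group mean."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

# Hypothetical education spending as % of GDP for five countries
spending = [4.2, 5.1, 6.8, 3.9, 5.5]
print([round(z, 2) for z in z_scores(spending)])   # [-0.78, 0.0, 1.48, -1.04, 0.35]
```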
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The Chi-squared statistic is the value calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types, and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
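Concretely, the statistic in question is the squared Mahalanobis distance of the prediction error with respect to the propagated covariance; if the covariance is realistic, the statistic collected over many test points follows a chi-squared distribution with degrees of freedom equal to the state dimension. A Python sketch with hypothetical numbers:

```python
import numpy as np
from scipy.stats import chi2

def realism_statistic(pred_state, true_state, P):
    """Squared Mahalanobis distance of the prediction error w.r.t. covariance P."""
    eps = np.asarray(pred_state) - np.asarray(true_state)
    return float(eps @ np.linalg.inv(P) @ eps)

# Hypothetical 3-D position error (m) against its predicted covariance (m^2)
P = np.diag([100.0, 400.0, 225.0])
stat = realism_statistic([12.0, -25.0, 8.0], [0.0, 0.0, 0.0], P)
print(stat, chi2.sf(stat, df=3))   # one sample of the distribution being assessed
```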
Capillary fluctuations of surface steps: An atomistic simulation study for the model Cu(111) system
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-10-01
Molecular dynamics (MD) simulations are employed to investigate the capillary fluctuations of steps on the surface of a model metal system. The fluctuation spectrum, characterized by the wave number (k) dependence of the mean squared capillary-wave amplitudes and associated relaxation times, is calculated for 〈110〉 and 〈112〉 steps on the {111} surface of elemental copper near the melting temperature of the classical potential model considered. Step stiffnesses are derived from the MD results, yielding values from the largest system sizes of (37 ± 1) meV/Å for the different line orientations, implying that the stiffness is isotropic within the statistical precision of the calculations. The fluctuation lifetimes are found to vary by approximately four orders of magnitude over the range of wave numbers investigated, displaying a k dependence consistent with kinetics governed by step-edge mediated diffusion. The values for step stiffness derived from these simulations are compared to step free energies for the same system and temperature obtained in a recent MD-based thermodynamic-integration (TI) study [Freitas, Frolov, and Asta, Phys. Rev. B 95, 155444 (2017), 10.1103/PhysRevB.95.155444]. Results from the capillary-fluctuation analysis and TI calculations yield statistically significant differences that are discussed within the framework of statistical-mechanical theories for configurational contributions to step free energies.
Abraham, Aby; George, Jinu; Peter, Elbe; Philip, Koshi; Chankramath, Rajesh; Johns, Dexton Antony; Bhaskar, Anitha
2015-01-01
Objective: The present study is intended to add a new parameter that would be useful in orthodontic clinical evaluation, treatment planning, and determination of vertical dimension (at occlusion). Materials and Methods: Standardized videographic recordings of 79 subjects during posed smiles were captured. Each video was then cut into 30 photos using the Free Studio software. The widest commissure-to-commissure posed smile frame (posed smile width [SW]) was selected as one of 10 or more frames showing an identical smile. The lower third of the face was measured from subnasale to soft tissue menton using a digital vernier caliper. The two values were then compared, and the ratio between lower facial height and posed SW was calculated. Results: The correlation between smiling width and lower facial height was found to be statistically significant (P < 0.01). The ratio of lower facial height to smiling width was 1.0016 with a standard deviation (SD) of 0.04 in males and 1.0301 with an SD of 0.07 in females. The difference between the mean lower facial height in males and females was statistically significant (t = 10.231, P = 0.000), as was the difference between the mean smiling width in males and females (t = 5.653, P = 0.000). Conclusion: In class I subjects with pleasing appearance, normal facial proportions, normal overjet and overbite, and average Frankfort mandibular angle, the lower facial height (subnasale to soft tissue menton) is equal to the posed SW. PMID:26430369
Piñero, David P; Caballero, María T; Nicolás-Albujer, Juan M; de Fez, Dolores; Camps, Vicent J
2018-06-01
To evaluate a new method of calculation of total corneal astigmatism based on Gaussian optics and the power design of a spherocylindrical lens (C) in the healthy eye and to compare it with keratometric (K) and power vector (PV) methods. A total of 92 healthy eyes of 92 patients (age, 17-65 years) were enrolled. Corneal astigmatism was calculated in all cases using K, PV, and our new approach C that considers the contribution of corneal thickness. An evaluation of the interchangeability of our new approach with the other 2 methods was performed using Bland-Altman analysis. Statistically significant differences between methods were found in the magnitude of astigmatism (P < 0.001), with the highest values provided by K. These differences in the magnitude of astigmatism were clinically relevant when K and C were compared [limits of agreement (LoA), -0.40 to 0.62 D), but not for the comparison between PV and C (LoA, -0.03 to 0.01 D). Differences in the axis of astigmatism between methods did not reach statistical significance (P = 0.408). However, they were clinically relevant when comparing K and C (LoA, -5.48 to 15.68 degrees) but not for the comparison between PV and C (LoA, -1.68 to 1.42 degrees). The use of our new approach for the calculation of total corneal astigmatism provides astigmatic results comparable to the PV method, which suggests that the effect of pachymetry on total corneal astigmatism is minimal in healthy eyes.
Pediatric Academic Productivity: Pediatric Benchmarks for the h- and g-Indices.
Tschudy, Megan M; Rowe, Tashi L; Dover, George J; Cheng, Tina L
2016-02-01
To describe h- and g-indices benchmarks in pediatric subspecialties and general academic pediatrics. Academic productivity is measured increasingly through bibliometrics that derive a statistical enumeration of academic output and impact. The h- and g-indices incorporate the number of publications and citations. Benchmarks for pediatrics have not been reported. Thirty programs were selected randomly from pediatric residency programs accredited by the Accreditation Council for Graduate Medical Education. The h- and g-indices of department chairs were calculated. For general academic pediatrics, pediatric gastroenterology, and pediatric nephrology, a random sample of 30 programs with fellowships were selected. Within each program, an MD faculty member from each academic rank was selected randomly. Google Scholar via Harzing's Publish or Perish was used to calculate the h-index, g-index, and total manuscripts. Only peer-reviewed and English language publications were included. For Chairs, calculations from Google Scholar were compared with Scopus. For all specialties, the mean h- and g-indices significantly increased with academic rank (all P < .05) with the greatest h-indices among Chairs. The h- and g-indices were not statistically different between specialty groups of the same rank; however, mean rank h-indices had large SDs. The h-index calculation using different bibliographic databases only differed by ±1. Mean h-indices increased with academic rank and were not significantly different across the pediatric specialties. Benchmarks for h- and g-indices in pediatrics are provided and may be one measure of academic productivity and impact. Copyright © 2016 Elsevier Inc. All rights reserved.
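For readers unfamiliar with how these bibliometric indices are derived from a citation list, a minimal Python sketch follows; the citation counts are hypothetical.

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for i, c in enumerate(ranked, start=1) if c >= i)

def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations."""
    ranked = sorted(citations, reverse=True)
    total, g = 0, 0
    for i, c in enumerate(ranked, start=1):
        total += c
        if total >= i * i:
            g = i
    return g

cites = [42, 18, 12, 9, 7, 5, 3, 2, 1, 0]
print(h_index(cites), g_index(cites))   # -> 5 and 9
```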
Eliaçik, Mustafa; Bayramlar, Hüseyin; Erdur, Sevil K.; Karabela, Yunus; Demirci, Göktuğ; Gülkilik, İbrahim G.; Özsütçü, Mustafa
2015-01-01
Objectives: To compare epithelial healing time following laser epithelial keratomileusis (LASEK) and photorefractive keratectomy (PRK) with anterior segment optical coherence tomography (AS-OCT). Methods: This prospective interventional case series comprised 56 eyes of 28 patients who underwent laser refractive surgery in the Department of Ophthalmology, Medipol University Medical Faculty, Istanbul, Turkey, between March 2014 and May 2014. Each patient was randomized to have one eye operated on with PRK, and the other with LASEK. Patients were examined daily for 5 days, and epithelial healing time was assessed using AS-OCT without removing the therapeutic contact lens (TCL). Average discomfort scores were calculated from ratings obtained from questions regarding pain, photophobia, and lacrimation on a scale of 0 (none) to 5. Results: The mean re-epithelialization time assessed with AS-OCT was 3.07±0.64 days in the PRK group and 3.55±0.54 days in the LASEK group, a statistically significant difference (p=0.03). The mean subjective discomfort score was 4.42±0.50 in the PRK eyes and 2.85±0.44 in the LASEK eyes on the first exam day (p=0.001). The scores obtained on the second (p=0.024) and third (p=0.03) days were also statistically significantly different, whereas the fourth- (p=0.069) and fifth-day (p=0.1) scores showed no statistically significant difference between groups. Conclusion: PRK showed a statistically significantly shorter epithelial healing time, but had a statistically significantly higher discomfort score until the postoperative fourth day compared with LASEK. PMID:25630007
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epstein, D; Shekel, E; Levin, D
Purpose: The purpose of this work was to verify the accuracy of the dose distribution along the field junction in a half-beam irradiation technique for breast cancer patients receiving radiation to the breast or chest wall (CW) and the supraclavicular LN region, for both free-breathing and deep inspiration breath hold (DIBH) techniques. Methods: We performed in vivo measurements for nine breast cancer patients receiving radiation to the breast/CW and to the supraclavicular LN region. Six patients were treated to the left breast/CW using the DIBH technique and three patients were treated to the right breast/CW in free breathing. We used five microMOSFET dosimeters: three located along the field junction, one located 1 cm above the junction and the fifth located 1 cm below the junction. We performed consecutive measurements over several days for each patient and compared the measurements to the TPS calculation (Eclipse, Varian™). Results: The calculated and measured doses along the junction were 0.97±0.08 Gy and 1.02±0.14 Gy, respectively. Above the junction, calculated and measured doses were 0.91±0.08 Gy and 0.98±0.09 Gy, respectively, and below the junction, calculated and measured doses were 1.70±0.15 Gy and 1.61±0.09 Gy, respectively. None of the differences was statistically significant. When comparing calculated and measured doses for DIBH patients only, there was still no statistically significant difference between values for all dosimeter locations. Analysis was done using the Mann-Whitney Rank-Sum Test. Conclusion: We found excellent correlation between calculated doses from the TPS and measured skin doses at the junction of several half-beam fields. Even for the DIBH technique, where there is more potential for variance due to variation in breath-hold depth, there is no over- or underdose along the field junction. This correlation validates the TPS, as well as an accurate, reproducible patient setup.
Method for Real-Time Model Based Structural Anomaly Detection
NASA Technical Reports Server (NTRS)
Urnes, James M., Sr. (Inventor); Smith, Timothy A. (Inventor); Reichenbach, Eric Y. (Inventor)
2015-01-01
A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
Power, S; Mirza, M; Thakorlal, A; Ganai, B; Gavagan, L D; Given, M F; Lee, M J
2015-06-01
This prospective pilot study was undertaken to evaluate the feasibility and effectiveness of using a radiation absorbing shield to reduce operator dose from scatter during lower limb endovascular procedures. A commercially available bismuth shield system (RADPAD) was used. Sixty consecutive patients undergoing lower limb angioplasty were included. Thirty procedures were performed without the RADPAD (control group) and thirty with the RADPAD (study group). Two separate methods were used to measure dose to a single operator. Thermoluminescent dosimeter (TLD) badges were used to measure hand, eye, and unshielded body dose. A direct dosimeter with digital readout was also used to measure eye and unshielded body dose. To allow for variation between control and study groups, dose per unit time was calculated. TLD results demonstrated a significant reduction in median body dose per unit time for the study group compared with controls (p = 0.001), corresponding to a mean dose reduction rate of 65%. Median eye and hand doses per unit time were also reduced in the study group compared with the control group; however, this was not statistically significant (p = 0.081 for eye, p = 0.628 for hand). Direct dosimeter readings also showed a statistically significant reduction in median unshielded body dose rate for the study group compared with controls (p = 0.037). Eye dose rate was reduced for the study group, but this was not statistically significant (p = 0.142). Initial results are encouraging. Use of the shield resulted in a statistically significant reduction in unshielded dose to the operator's body. Measured doses to the eye and hand of the operator were also reduced but did not reach statistical significance in this pilot study.
Evaluation of salivary fluoride retention from a new high fluoride mouthrinse.
Mason, Stephen C; Shirodaria, Soha; Sufi, Farzana; Rees, Gareth D; Birkhed, Dowen
2010-11-01
To evaluate salivary fluoride retention from a new high fluoride daily use mouthrinse over a 120 min period. Sixteen subjects completed a randomised single-blind, four-treatment cross-over trial. Sensodyne® Pronamel® mouthrinse (A) contained 450 ppm fluoride; reference products were Colgate® Fluorigard® (B), Listerine® Total Care (C) and Listerine Softmint Sensation (D) containing 225, 100 and 0 ppm fluoride respectively. Salivary fluoride retention was monitored ex vivo after a single supervised use of test product (10 mL, 60 s). Samples were collected at 0, 1, 3, 5, 15, 30, 60 and 120 min post-rinse, generating fluoride clearance curves from which the area under the curve (AUC) was calculated. Differences in salivary fluoride concentrations for each product were analysed using ANCOVA at each time point using a 5% significance level, as well as lnAUC for the periods 0-120, 0-1, 1-15, 15-60 and 60-120 min. Pairwise comparisons between all treatment groups were performed. Salivary fluoride levels for A-C peaked immediately following use. Fluoride levels were statistically significantly higher for A versus B-D (p≤ 0.004), linear dose responses were apparent. AUC(0-120) was statistically significantly greater for A than for B (p = 0.035), C (p< 0.0001) and D (p< 0.0001). Post-hoc comparisons of lnAUC for the remaining time domains showed fluoride retention from A was statistically significantly greater versus B-D (p< 0.0001). Single-use treatment with the new mouthrinse containing 450 ppm fluoride resulted in statistically significantly higher salivary fluoride levels throughout the 120 min test period. Total fluoride retention (AUC(0-120)) was also statistically significantly greater versus comparator rinse treatments. Copyright © 2010 Elsevier Ltd. All rights reserved.
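The AUC values compared above come from applying the trapezoidal rule to the clearance curve. A minimal Python sketch, using the study's actual sampling times but hypothetical fluoride concentrations:

```python
def auc_trapezoid(times, conc):
    """Area under a concentration-time curve by the trapezoidal rule."""
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for t1, t2, c1, c2 in zip(times, times[1:], conc, conc[1:]))

t = [0, 1, 3, 5, 15, 30, 60, 120]                 # sampling times (min)
f = [60.0, 25.0, 9.0, 5.0, 1.2, 0.5, 0.2, 0.1]    # hypothetical salivary fluoride (ppm)
print(round(auc_trapezoid(t, f), 1))
```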
Notes on numerical reliability of several statistical analysis programs
Landwehr, J.M.; Tasker, Gary D.
1999-01-01
This report presents a benchmark analysis of several statistical analysis programs currently in use in the USGS. The benchmark consists of a comparison between the values provided by a statistical analysis program for variables in the reference data set ANASTY and their known or calculated theoretical values. The ANASTY data set is an amendment of the Wilkinson NASTY data set that has been used in the statistical literature to assess the reliability (computational correctness) of calculated analytical results.
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
No significant association between prenatal exposure to poliovirus epidemics and psychosis.
Cahill, Matthew; Chant, David; Welham, Joy; McGrath, John
2002-06-01
To examine the association between prenatal exposure to poliovirus infection and later development of schizophrenia or affective psychosis in a Southern Hemisphere psychiatric register, we calculated rates of poliomyelitis cases per 10 000 background population and rates for schizophrenia (n = 6078) and affective psychosis (n = 3707) per 10 000 births for the period 1930-1964. Empirically weighted regression was used to measure the association between a given psychosis birth rate and a poliomyelitis epidemic during gestation. There was no statistically significant association between exposure to a poliomyelitis epidemic during gestation and subsequent development of schizophrenia or affective psychosis. The lack of a consistent statistically significant association between poliovirus epidemics and schizophrenia suggests that poliovirus may have a small effect that is only detectable with large data sets, that the effect may be modified by location, or both. Further investigation of such inconsistencies may help elucidate candidate risk-modifying factors for schizophrenia.
Mundim, Fabrício M; Antunes, Pedro L; Sousa, Ana Beatriz S; Garcia, Lucas F R; Pires-de-Souza, Fernanda C P
2012-06-01
To evaluate the colour stability of paints used for iris painting of ocular prostheses subjected to accelerated artificial ageing (AAA). Forty specimens of acrylic resin for sclera (16 × 2 mm) were made and separated into eight groups (n = 10) according to the type of paint (gouache, GP; oil, OP; acrylic, AP; and composite resin for characterisation, CR) and the colours used (blue/brown). After drying (72 h), a new layer of colourless acrylic resin was applied and the initial colour readout was performed (Spectrophotometer PCB 6807). New colour readouts were performed after AAA, and ΔE was calculated. Statistical analysis (two-way ANOVA with Bonferroni, p < 0.05) demonstrated that the brown colour showed lower mean ΔE than the blue colour, with a statistically significant difference for AP only. The blue colour showed no statistically significant difference with regard to the type of paint used. Brown AP showed lower ΔE than the other groups, with a significant difference relative to OP and GP. GP showed the greatest alteration in ΔE for the brown colour, being statistically similar only to OP. Only the AP group for brown pigment shows clinically acceptable values for colour stability after AAA. © 2011 The Gerodontology Society and John Wiley & Sons A/S.
Bushmakin, A G; Cappelleri, J C; Symonds, T; Stecher, V J
2014-01-01
To apportion the direct effect and the indirect effect (through erections) that sildenafil (vs placebo) has on individual satisfaction and couple satisfaction over time, longitudinal mediation modeling was applied to outcomes on the Sexual Experience Questionnaire. The model included data from weeks 4 and 10 (double-blind phase) and week 16 (open-label phase) of a controlled study. Data from 167 patients with erectile dysfunction (ED) were available for analysis. Estimation of statistical significance was based on bootstrap simulations, which allowed inferences at and between time points. Percentages (and corresponding 95% confidence intervals) for direct and indirect effects of treatment were calculated using the model. For the individual satisfaction and couple satisfaction domains, direct treatment effects were negligible (not statistically significant) whereas indirect treatment effects via the erection domain represented >90% of the treatment effects (statistically significant). Week 4 vs week 10 percentages of direct and indirect effects were not statistically different, indicating that the mediation effects are longitudinally invariant. As there was no placebo arm in the open-label phase, mediation effects at week 16 were not estimable. In conclusion, erection has a crucial role as a mediator in restoring individual satisfaction and couple satisfaction in men with ED treated with sildenafil.
NASA Astrophysics Data System (ADS)
Çeven, E. K.; Günaydın, G. K.
2017-10-01
The aim of this study is to fill the gap in the literature by investigating the effect of yarn and fabric structural parameters on the burning behavior of polyester fabrics. According to the experimental design, three different fabric types, three different weft densities and two different weave types were selected, and a total of eighteen different polyester drapery fabrics were produced. All statistical procedures were conducted using the SPSS statistical software package. The results of the analysis of variance (ANOVA) tests indicated that there were statistically significant differences (at the 5% significance level) among the different fabrics in the mass loss ratios (%) in the weft direction and in the warp direction, calculated after the flammability test. The Student-Newman-Keuls (SNK) results for mass loss ratios (%) in both the weft and warp directions revealed that the mass loss ratios (%) of fabrics containing Trevira CS type polyester were lower than those of polyester fabrics subjected to washing treatment and flame retardancy treatment.
Changing world extreme temperature statistics
NASA Astrophysics Data System (ADS)
Finkel, J. M.; Katz, J. I.
2018-04-01
We use the Global Historical Climatology Network--daily database to calculate a nonparametric statistic that describes the rate at which all-time daily high and low temperature records have been set in nine geographic regions (continents or major portions of continents) during periods mostly from the mid-20th century to the present. This statistic was defined in our earlier work on temperature records in the 48 contiguous United States. In contrast to this earlier work, we find that in every region except North America all-time high records were set at a rate significantly (at least 3σ) higher than in the null hypothesis of a stationary climate. Except in Antarctica, all-time low records were set at a rate significantly lower than in the null hypothesis. In Europe, North Africa and North Asia the rate of setting new all-time highs increased suddenly in the 1990s, suggesting a change in regional climate regime; in most other regions there was a steadier increase.
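The stationary-climate null referenced here has a simple combinatorial core: in an exchangeable series, the n-th year sets a new record with probability 1/n, so the expected number of records is a harmonic sum. The sketch below (a toy version, not the authors' full nonparametric statistic) compares an observed record count with that expectation; the temperature series is hypothetical.

```python
def expected_records(n_years):
    """Expected record-setting years under a stationary climate: sum of 1/n."""
    return sum(1.0 / n for n in range(1, n_years + 1))

def count_records(series):
    """Count the years that set a new all-time high in an observed series."""
    best, count = float("-inf"), 0
    for x in series:
        if x > best:
            best, count = x, count + 1
    return count

temps = [30.1, 31.0, 30.5, 31.2, 30.8, 31.9, 31.4, 32.3]   # hypothetical annual maxima
print(count_records(temps), round(expected_records(len(temps)), 2))   # 5 observed vs 2.72 expected
```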
Particle-sampling statistics in laser anemometers: Sample-and-hold systems and saturable systems
NASA Technical Reports Server (NTRS)
Edwards, R. V.; Jensen, A. S.
1983-01-01
The effect of the data-processing system on the particle statistics obtained with laser anemometry of flows containing suspended particles is examined. Attention is given to the sample-and-hold processor, a pseudo-analog device which retains the last measurement until a new measurement is made, followed by time-averaging of the data. The second system considered features a dead time, i.e., a saturable system with a significant reset time and storage in a data buffer. It is noted that the saturable system operates independently of the particle arrival rate. The probabilities of a particle arrival in a given time period are calculated for both processing systems. It is shown that the system outputs depend on the mean particle flow rate, the flow correlation time, and the flow statistics, indicating that the particle density affects both systems. The results are significant for instances of good correlation between the particle density and velocity, such as occurs near the edge of a jet.
Factors Affecting Hemodialysis Adequacy in a Cohort of Iranian Patients with End-Stage Renal Disease.
Shahdadi, Hosein; Balouchi, Abbas; Sepehri, Zahra; Rafiemanesh, Hosein; Magbri, Awad; Keikhaie, Fereshteh; Shahakzehi, Ahmad; Sarjou, Azizullah Arbabi
2016-08-01
There are many factors that can affect dialysis adequacy, such as the type of vascular access, filter type, device used, and the dose and route of erythropoiesis-stimulating agents (ESA). The aim of this study was to investigate factors affecting hemodialysis adequacy in a cohort of Iranian patients with end-stage renal disease (ESRD). This is a cross-sectional study conducted on 133 hemodialysis patients referred to two dialysis units in Sistan-Baluchistan province in the cities of Zabol and Iranshahr, Iran. We examined the effects of the type of vascular access, the filter type, the device used, and the dose, route of delivery, and type of ESA used on hemodialysis adequacy. Dialysis adequacy was calculated using the Kt/V formula. A two-part questionnaire was used, covering demographic data as well as access type, filter type, device used for hemodialysis (HD), type of Eprex injection, route of administration, blood group and hemoglobin response to ESA. The data were analyzed using the SPSS v16 statistical software; descriptive statistical methods, the Mann-Whitney test, and multiple regression were used where applicable. The calculated dialysis adequacy ranged from 0.28 to 2.39. 76.7% of patients were dialyzed via an AVF and 23.3% via central venous catheters (CVC). There was no statistically significant difference in dialysis adequacy by vascular access type, device used for HD (Fresenius and B. Braun), or filter used for HD (p > 0.05). However, a significant difference was observed between dialysis adequacy and Eprex injection and patients' time of dialysis (p < 0.05). Subcutaneous ESA (Eprex) injection and dialysis shift (being dialyzed in the morning) can have a positive impact on dialysis adequacy. Patients should be educated that the type of device used for HD and the vascular access used have no significant effects on dialysis adequacy.
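The abstract does not state which Kt/V variant was used; a widely used choice is the second-generation Daugirdas single-pool formula, and the sketch below implements it under that assumption, with illustrative input values only.

```python
import math

def sp_ktv(bun_pre, bun_post, hours, uf_litres, weight_kg):
    """Single-pool Kt/V by the second-generation Daugirdas formula:
    spKt/V = -ln(R - 0.008*t) + (4 - 3.5*R) * UF / W,
    where R = post/pre BUN ratio, t = session length (h),
    UF = ultrafiltration volume (L), W = post-dialysis weight (kg)."""
    r = bun_post / bun_pre
    return -math.log(r - 0.008 * hours) + (4.0 - 3.5 * r) * uf_litres / weight_kg

# Hypothetical session: BUN 70 -> 25 mg/dL over 4 h, 2 L removed, 70 kg patient
print(round(sp_ktv(70.0, 25.0, 4.0, 2.0, 70.0), 2))   # ~1.2
```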
Kılıç, Salih; Çelik, Ahmet; Çakmak, Hüseyin Altuğ; Afşin, Abdülmecit; Tekkeşin, Ahmet İlker; Açıksarı, Gönül; Memetoğlu, Mehmet Erdem; Özpamuk Karadeniz, Fatma; Şahan, Ekrem; Alıcı, Mehmet Hayri; Dereli, Yüksel; Sinan, Ümit Yaşar; Zoghi, Mehdi
2017-08-04
Time in therapeutic range values may vary between different geographical regions of Turkey in patients receiving vitamin K antagonist therapy. To evaluate the time in therapeutic range percentages, efficacy, safety and awareness of warfarin according to the different geographical regions in patients who participated in the WARFARIN-TR study (The Awareness, Efficacy, Safety and Time in Therapeutic Range of Warfarin in the Turkish population) in Turkey. Cross-sectional study. The WARFARIN-TR study includes 4987 patients using warfarin and involved regular international normalized ratio monitoring between January 1, 2014 and December 31, 2014. Patients attended follow-ups for 12 months. The sample size was calculated according to the density of the regional population and Turkish Statistical Institute data. The time in therapeutic range was calculated according to F.R. Rosendaal's algorithm. Awareness was evaluated based on the patients' knowledge of the effect of warfarin and food-drug interactions, using simple questions developed from a literature review. The Turkey-wide time in therapeutic range was 49.5%±22.9 in the WARFARIN-TR study. There were statistically significant differences between regions in terms of time in therapeutic range (p < 0.001). The highest rate was reported in the Marmara region (54.99%±20.91) and the lowest in the South-eastern Anatolia region (41.95±24.15) (p < 0.001). Bleeding events were most frequently seen in Eastern Anatolia (41.6%), with major bleeding in the Aegean region (5.11%) and South-eastern Anatolia (5.36%). There were statistically significant differences between the regions in terms of awareness (p < 0.001). Statistically significant differences were observed in the efficacy, safety and awareness of warfarin therapy according to different geographical regions in Turkey.
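Rosendaal's method assigns each day between two INR measurements a linearly interpolated INR value and reports the percentage of person-time in the target range. A minimal sketch, assuming integer measurement days and a 2.0-3.0 target range (the INR series is hypothetical):

```python
import numpy as np

def rosendaal_ttr(days, inr, low=2.0, high=3.0):
    """Percentage of time in therapeutic range by Rosendaal linear
    interpolation: INR is assumed to change linearly between consecutive
    measurements, and each interpolated day is classified in/out of range."""
    in_range, total = 0, 0
    for (d0, i0), (d1, i1) in zip(zip(days, inr), zip(days[1:], inr[1:])):
        span = d1 - d0
        daily = np.linspace(i0, i1, span + 1)[:-1]   # one INR value per day
        in_range += int(np.sum((daily >= low) & (daily <= high)))
        total += span
    return 100.0 * in_range / total

# Hypothetical INR series measured on days 0, 14 and 28
print(round(rosendaal_ttr([0, 14, 28], [1.8, 2.6, 3.4]), 1))
```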
The dependence of stellar properties on initial cloud density
NASA Astrophysics Data System (ADS)
Jones, Michael O.; Bate, Matthew R.
2018-05-01
We investigate the dependence of stellar properties on the initial mean density of the molecular cloud in which stellar clusters form using radiation hydrodynamical simulations that resolve the opacity limit for fragmentation. We have simulated the formation of three star clusters from the gravitational collapse of molecular clouds whose densities vary by a factor of a hundred. As with previous calculations including radiative feedback, we find that the dependence of the characteristic stellar mass, $M_c$, on the initial mean density of the cloud, $\rho$, is weaker than the dependence of the thermal Jeans mass. However, unlike previous calculations, which found no statistically significant variation in the median mass with density, we find a weak dependence approximately of the form $M_c \propto \rho^{-1/5}$. The distributions of properties of multiple systems do not vary significantly between the calculations. We compare our results to the results of observational surveys of star-forming regions, and suggest that the similarities between the properties of our lowest-density calculation and the nearby Taurus-Auriga region indicate that the apparent excess of solar-type stars observed may be due to the region's low density.
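For reference, the steeper comparison scaling is the standard thermal Jeans mass at fixed temperature (a textbook relation, not taken from the paper):

```latex
M_J \propto \frac{c_s^{3}}{\sqrt{G^{3}\rho}} \propto T^{3/2}\,\rho^{-1/2}
\quad \text{(thermal Jeans mass)}, \qquad
M_c \propto \rho^{-1/5} \quad \text{(measured)}.
```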
PyEvolve: a toolkit for statistical modelling of molecular evolution.
Butterfield, Andrew; Vedagiri, Vivek; Lang, Edward; Lawrence, Cath; Wakefield, Matthew J; Isaev, Alexander; Huttley, Gavin A
2004-01-05
Examining the distribution of variation has proven an extremely profitable technique in the effort to identify sequences of biological significance. Most approaches in the field, however, evaluate only the conserved portions of sequences, ignoring the biological significance of sequence differences. A suite of sophisticated likelihood-based statistical models from the field of molecular evolution provides the basis for extracting the information from the full distribution of sequence variation. The number of different problems to which phylogeny-based maximum likelihood calculations can be applied is extensive. Available software packages that can perform likelihood calculations suffer from a lack of flexibility and scalability, or employ error-prone approaches to model parameterisation. Here we describe the implementation of PyEvolve, a toolkit for the application of existing, and development of new, statistical methods for molecular evolution. We present the object architecture and design schema of PyEvolve, which includes an adaptable multi-level parallelisation schema. The approach for defining new methods is illustrated by implementing a novel dinucleotide model of substitution that includes a parameter for mutation of methylated CpGs, which required 8 lines of standard Python code to define. Benchmarking was performed using either a dinucleotide or codon substitution model applied to an alignment of BRCA1 sequences from 20 mammals, or a 10-species subset. Up to five-fold parallel performance gains over serial were recorded. Compared to leading alternative software, PyEvolve exhibited significantly better real-world performance for parameter-rich models with a large data set, reducing the time required for optimisation from approximately 10 days to approximately 6 hours. PyEvolve provides flexible functionality that can be used either for statistical modelling of molecular evolution or for the development of new methods in the field. The toolkit can be used interactively or by writing and executing scripts. The toolkit uses efficient processes for specifying the parameterisation of statistical models, and implements numerous optimisations that make highly parameter-rich likelihood functions solvable within hours on multi-CPU hardware. PyEvolve can be readily adapted in response to changing computational demands and hardware configurations to maximise performance. PyEvolve is released under the GPL and can be downloaded from http://cbis.anu.edu.au/software.
Disparities in U.S. Air Force Preventive Health Assessments and Medical Deployability
2010-01-01
preventive appointments and nondeployable status were calculated by race/ethnicity, gender, and rank, and adjusted for age. Results: Permanent medical...appointments were higher for minorities. Statistically significant differences were identified by gender, but were clinically insignificant. Currency rates...Conclusions: Evidence of disparities in medical deployability rates for Asian/Pacific Islanders, non-Hispanic Blacks, and senior enlisted active duty
ERIC Educational Resources Information Center
Warne, Russell T.; Nagaishi, Chanel; Slade, Michael K.; Hermesmeyer, Paul; Peck, Elizabeth Kimberli
2014-01-01
While research has shown the statistical significance of high school grade point averages (HSGPAs) in predicting future academic outcomes, the systems with which HSGPAs are calculated vary drastically across schools. Some schools employ unweighted grades that carry the same point value regardless of the course in which they are earned; other…
An evaluation of shear bond strength of self-etch adhesive on pre-etched enamel: an in vitro study.
Rao, Bhadra; Reddy, Satti Narayana; Mujeeb, Abdul; Mehta, Kanchan; Saritha, G
2013-11-01
To determine the shear bond strength of the self-etch adhesive G-bond on pre-etched enamel. Thirty caries-free human mandibular premolars extracted for orthodontic purposes were used for the study. Occlusal surfaces of all the teeth were flattened with a diamond bur, and silicon carbide paper was used for surface smoothing. The thirty samples were randomly divided into three groups, and three different etch systems were used for the composite build-up: group 1 (G-bond self-etch adhesive system), group 2 (G-bond) and group 3 (Adper Single Bond). Light curing was applied for 10 seconds with an LED unit for the composite build-up on the occlusal surface of each tooth, 8 millimeters (mm) in diameter and 3 mm in thickness. The specimens in each group were tested in shear mode using a knife-edge testing apparatus in a universal testing machine at a crosshead speed of 1 mm/minute. Shear bond strength values in MPa were calculated from the peak load at failure divided by the specimen surface area. The mean shear bond strength was calculated for all groups, and statistical analysis was carried out using one-way analysis of variance (ANOVA). The mean bond strength of group 1 was 15.5 MPa, of group 2 was 19.5 MPa and of group 3 was 20.1 MPa. Group 1 showed statistically significantly lower bond strength when compared with groups 2 and 3; there was no statistically significant difference between groups 2 and 3 (p > 0.05). The self-etch adhesive G-bond showed increased shear bond strength on pre-etched enamel.
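The bond strength arithmetic is the peak failure load divided by the bonded cross-sectional area; for the 8 mm diameter build-ups used here, a sketch (the load value is hypothetical) is:

```python
import math

def shear_bond_strength_mpa(peak_load_n, diameter_mm):
    """Shear bond strength = peak load at failure / bonded area.
    For a circular build-up, area = pi * (d/2)^2 in mm^2, and
    N/mm^2 is numerically equal to MPa."""
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    return peak_load_n / area_mm2

# Hypothetical failure load of 980 N on an 8 mm diameter build-up
print(round(shear_bond_strength_mpa(980.0, 8.0), 1))   # ~19.5 MPa
```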
ZERODUR: deterministic approach for strength design
NASA Astrophysics Data System (ADS)
Hartmann, Peter
2012-12-01
There is increasing demand for zero-expansion glass ceramic ZERODUR substrates capable of enduring higher operational static loads or accelerations. The integrity of structures such as optical or mechanical elements for satellites surviving rocket launches, filigree lightweight mirrors, wobbling mirrors, and reticle and wafer stages in microlithography must be guaranteed with low failure probability. Their design requires statistically relevant strength data. The traditional approach using the statistical two-parameter Weibull distribution suffered from two problems: the data sets were too small to obtain distribution parameters with sufficient accuracy, and also too small to decide on the validity of the model. This holds especially for the low failure probability levels that are required for reliable applications. Extrapolation to 0.1% failure probability and below led to design strengths so low that higher-load applications seemed infeasible. New data have been collected with numbers per set large enough to enable tests of the applicability of the three-parameter Weibull distribution. This distribution proved to fit the data much better. Moreover, it delivers a lower threshold value, i.e., a minimum breakage stress, which allows statistical uncertainty to be removed by introducing a deterministic method to calculate design strength. Considerations from the theory of fracture mechanics, which have proven reliable in proof-test qualifications of delicate structures made from brittle materials, make it possible to include fatigue due to stress corrosion in a straightforward way. With the formulae derived, either lifetime can be calculated from a given stress, or allowable stress from a minimum required lifetime. The data, distributions, and design strength calculations for several practically relevant surface conditions of ZERODUR are given. The values obtained are significantly higher than those resulting from the two-parameter Weibull distribution approach and are no longer subject to statistical uncertainty.
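As an illustration of the deterministic element, the three-parameter Weibull CDF is zero below the threshold stress, so loads kept under the threshold have (in the model) zero failure probability. The parameters below are illustrative only, not ZERODUR design values:

```python
import numpy as np

def weibull3_failure_prob(stress_mpa, threshold, scale, shape):
    """Three-parameter Weibull CDF:
    F(s) = 1 - exp(-((s - t0)/eta)^m) for s > t0, else 0.
    The threshold t0 acts as a deterministic minimum breakage stress."""
    s = np.asarray(stress_mpa, dtype=float)
    z = np.clip((s - threshold) / scale, 0.0, None)
    return 1.0 - np.exp(-z ** shape)

# Illustrative parameters only: threshold 50 MPa, scale 40 MPa, shape 5
print(weibull3_failure_prob([45.0, 55.0, 70.0, 90.0], 50.0, 40.0, 5.0))
```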
Evaporation residue cross-section measurements for 48Ti-induced reactions
NASA Astrophysics Data System (ADS)
Sharma, Priya; Behera, B. R.; Mahajan, Ruchi; Thakur, Meenu; Kaur, Gurpreet; Kapoor, Kushal; Rani, Kavita; Madhavan, N.; Nath, S.; Gehlot, J.; Dubey, R.; Mazumdar, I.; Patel, S. M.; Dhibar, M.; Hosamani, M. M.; Khushboo, Kumar, Neeraj; Shamlath, A.; Mohanto, G.; Pal, Santanu
2017-09-01
Background: A significant research effort is currently aimed at understanding the synthesis of heavy elements. For this purpose, heavy-ion-induced fusion reactions are used, and various experimental observations have indicated the influence of shell and deformation effects on compound nucleus (CN) formation. There is a need to understand these two effects. Purpose: To investigate the effect of proton shell closure and deformation through the comparison of evaporation residue (ER) cross sections for systems involving heavy compound nuclei around the Z_CN = 82 region. Methods: A systematic study of ER cross-section measurements was carried out for the 48Ti + 142,150Nd and 48Ti + 144Sm systems in the energy range of 140-205 MeV. The measurement was performed using the gas-filled mode of the hybrid recoil mass analyzer at the Inter University Accelerator Centre (IUAC), New Delhi. Theoretical calculations based on a statistical model were carried out incorporating an adjustable barrier scaling factor to fit the experimental ER cross sections. Coupled-channel calculations were also performed using the ccfull code to obtain the spin distribution of the CN, which was used as an input in the calculations. Results: Experimental ER cross sections for 48Ti + 142,150Nd were found to be considerably smaller than the statistical model predictions, whereas experimental and statistical model predictions for 48Ti + 144Sm were of comparable magnitude. Conclusion: Though comparison of experimental ER cross sections with statistical model predictions indicates considerable non-compound-nuclear processes for the 48Ti + 142,150Nd reactions, no such evidence is found for the 48Ti + 144Sm system. Further investigations are required to understand the difference in fusion probabilities of the 48Ti + 142Nd and 48Ti + 144Sm systems.
Mittelstaedt, Daniel
2015-01-01
Objective A quantitative contrast-enhanced micro-computed tomography (qCECT) method was developed to investigate the depth dependency and heterogeneity of the glycosaminoglycan (GAG) concentration of ex vivo cartilage equilibrated with an anionic radiographic contrast agent, Hexabrix. Design Full-thickness fresh native (n = 19 in 3 subgroups) and trypsin-degraded (n = 6) articular cartilage blocks were imaged using micro-computed tomography (μCT) at high resolution (13.4 μm³) before and after equilibration with various Hexabrix bathing concentrations. The GAG concentration was calculated depth-dependently based on Gibbs-Donnan equilibrium theory. Analysis of variance with Tukey's post hoc test was used to test for statistical significance (P < 0.05) of the effect of Hexabrix bathing concentration, and for differences in bulk and zonal GAG concentrations individually and compared between native and trypsin-degraded cartilage. Results The bulk GAG concentration was calculated to be 74.44 ± 6.09 and 11.99 ± 4.24 mg/mL for native and degraded cartilage, respectively. A statistical difference was demonstrated for bulk and zonal GAG between native and degraded cartilage (P < 0.032). A statistical difference was not demonstrated for bulk GAG when comparing Hexabrix bathing concentrations (P > 0.3214) for either native or degraded cartilage. Depth-dependent GAG analysis of native cartilage revealed a statistical difference only in the radial zone between 30% and 50% Hexabrix bathing concentrations. Conclusions This nondestructive qCECT methodology calculated the depth-dependent GAG concentration for both native and trypsin-degraded cartilage at high spatial resolution. qCECT allows for a more detailed understanding of the topography and depth dependency, which could help diagnose health, degradation, and repair of native and contrived cartilage. PMID:26425259
Gordon, Derek; Londono, Douglas; Patel, Payal; Kim, Wonkuk; Finch, Stephen J; Heiman, Gary A
2016-01-01
Our motivation here is to calculate the power of 3 statistical tests used when there are genetic traits that operate under a pleiotropic mode of inheritance and when qualitative phenotypes are defined by use of thresholds for the multiple quantitative phenotypes. Specifically, we formulate a multivariate function that provides the probability that an individual has a vector of specific quantitative trait values conditional on having a risk locus genotype, and we apply thresholds to define qualitative phenotypes (affected, unaffected) and compute penetrances and conditional genotype frequencies based on the multivariate function. We extend the analytic power and minimum-sample-size-necessary (MSSN) formulas for 2 categorical data-based tests (genotype, linear trend test [LTT]) of genetic association to the pleiotropic model. We further compare the MSSN of the genotype test and the LTT with that of a multivariate ANOVA (Pillai). We approximate the MSSN for statistics by linear models using a factorial design and ANOVA. With ANOVA decomposition, we determine which factors most significantly change the power/MSSN for all statistics. Finally, we determine which test statistics have the smallest MSSN. In this work, MSSN calculations are for 2 traits (bivariate distributions) only (for illustrative purposes). We note that the calculations may be extended to address any number of traits. Our key findings are that the genotype test usually has lower MSSN requirements than the LTT. More inclusive thresholds (top/bottom 25% vs. top/bottom 10%) have higher sample size requirements. The Pillai test has a much larger MSSN than both the genotype test and the LTT, as a result of sample selection. With these formulas, researchers can specify how many subjects they must collect to localize genes for pleiotropic phenotypes.
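For a chi-square-distributed association test such as the LTT, the MSSN logic can be sketched generically: given a per-subject noncentrality contribution, find the smallest N whose noncentral chi-square power reaches the target. This is a simplified stand-in for the paper's analytic formulas, with an assumed per-subject noncentrality value:

```python
from scipy.stats import chi2, ncx2

def mssn(per_subject_ncp, df=1, alpha=0.05, power=0.80):
    """Smallest N such that a chi-square test whose noncentrality grows as
    N * per_subject_ncp reaches the target power at significance alpha."""
    crit = chi2.ppf(1.0 - alpha, df)
    n = 1
    while 1.0 - ncx2.cdf(crit, df, n * per_subject_ncp) < power:
        n += 1
    return n

# Assumed per-subject noncentrality of 0.01 (it depends on effect size,
# allele frequency and the phenotype thresholds in the pleiotropic model)
print(mssn(0.01))   # ~785 subjects for a 1-df test at 80% power
```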
Attitude towards Oral Health at Various Colleges of the University of Zagreb: A Pilot Study.
Ivica, Anja; Galić, Nada
2014-06-01
The aim of this study was to compare the oral status of three various groups of students: students of the School of Dental Medicine, students of technical sciences and students of humanities. The research included 58 students of the University of Zagreb. They answered 3 questions: how often they brush their teeth, how often they visit their dentist and how important dental health is to them. After a standard dental check-up we calculated the DMFT index. They were given an indicator for plaque, Mira-2-Ton® (Hager Werken, Duisburg, Germany), and we calculated the plaque index. For statistical analysis the ANOVA test was used. Students of the School of Dental Medicine had a lower plaque index than other students, and this was statistically significant (p=0.0018; F=7.14). They also had a lower DMFT index, but it was not statistically significant (p=0.1004; F=2.4). 83% of students said that they brushed their teeth 2-3 times a day. Only 17% of all students brush their teeth more than 3 times a day, and they are all students of the School of Dental Medicine (21% of them). Perception of oral health is at a high level, but perception of oral disease is not. The social approval of the answer was also an important factor. Students of the School of Dental Medicine are an illustrative example of improving habits through education.
Extra-articular manifestations of seronegative and seropositive rheumatoid arthritis.
Sahatciu-Meka, Vjollca; Rexhepi, Sylejman; Manxhuka-Kerliu, Suzana; Rexhepi, Mjellma
2010-02-01
Although considered a "joint disease," rheumatoid arthritis is associated with the involvement of extra-articular manifestations. The aim of the study was the investigation and comparison of frequency and type of extra-articular manifestations in a well-defined community-based cohort of patients with seropositive and seronegative rheumatoid arthritis. Using the ACR (1987) criteria for rheumatoid arthritis, patients were classified into the 2nd and 3rd functional class (ARA). The studied group consisted of 125 seronegative patients with titres lower than 1:64 as defined by the Rose-Waaler test, whereas the control group consisted of 125 seropositive patients with titres of 1:64 or higher. All patients were between 25 and 60 years of age (mean 49.96), with disease duration between 1 and 27 years (mean 6.41). To present the findings of the study, the structure, prevalence, arithmetic mean, standard deviation (SD), variation quotient (QV%) and variation interval (Rmax-Rmin) were used. The probability level was expressed by p<0.01 and p<0.05. Correlation between the number of extra-articular manifestations and duration of the disease was calculated by means of the Pearson linear correlation. A higher presence of diffuse lung fibrosis and of central and peripheral nervous system damage was confirmed in the seropositive group, and of osteoporosis in the seronegative group; however, no statistical difference was found. Among extra-articular manifestations, "rheumatoid core" in the seropositive subset (χ²=4.80, p<0.05) presented a statistically significant difference. Rheumatoid nodules were more frequent in the seropositive subset (12% vs. 16%), in both sexes; however, the difference was not statistically significant. Neuropathy and lung diseases were also more frequently present in the seropositive group, but the differences were not statistically significant. Longer duration of the disease resulted in an increase in the number of extra-articular manifestations. The Pearson linear correlation was positive and high in total (r=0.36, p<0.01) and for the groups (seronegative: r=0.52, p<0.01; seropositive: r=0.25, p<0.01); nevertheless, no significant statistical difference was found regarding sero-status. In conclusion, extra-articular manifestations are more frequent in seropositive patients, and the longer the duration of the disease, the larger the number of extra-articular manifestations. Differences with regard to sero-status and sex, with some exceptions, were not observed.
Statistical Properties of SEE Rate Calculation in the Limits of Large and Small Event Counts
NASA Technical Reports Server (NTRS)
Ladbury, Ray
2007-01-01
This viewgraph presentation reviews the statistical properties of single event effects (SEE) rate calculations. The goal of SEE rate calculation is to bound the SEE rate, though the question is by how much. The presentation covers: (1) understanding errors on SEE cross sections, (2) methodology: maximum likelihood and confidence contours, (3) tests with simulated data, and (4) applications.
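In the small-event-count limit, a standard way to bound a Poisson-distributed SEE rate is the chi-square form of the one-sided upper confidence limit, which stays finite even with zero observed events. A minimal sketch (generic statistics, not the presentation's specific methodology):

```python
from scipy.stats import chi2

def poisson_rate_upper_limit(n_events, exposure, cl=0.95):
    """One-sided upper confidence limit on a Poisson event rate:
    lambda_UL = chi2.ppf(cl, 2*(n+1)) / (2 * exposure).
    Remains finite even when zero events are observed."""
    return chi2.ppf(cl, 2 * (n_events + 1)) / (2.0 * exposure)

# e.g. no events observed over 1e7 device-seconds of exposure
print(poisson_rate_upper_limit(0, 1e7))   # ~3e-7 per device-second
```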
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saenz, D; Stathakis, S
Purpose: Deep inspiration breath-hold (DIBH) is used for left-sided breast radiotherapy to spare the heart and lung. The magnitude of sparing has been shown to be significant. Monte Carlo, furthermore, has the potential to calculate most accurately the dose in the heterogeneous lung medium at the interface with the lung wall. The lung dose was investigated in Monaco to determine the level of sparing relative to that calculated in Pinnacle³. Methods: Five patients undergoing DIBH radiotherapy on an Elekta Versa HD linear accelerator in conjunction with the Catalyst C-RAD surface imaging system were planned using Philips Pinnacle³. Free breathing plans were also created to clinically assure a benefit. Both plans were re-calculated in Monaco to determine whether there were any significant differences. The mean heart dose, mean left lung dose, and mean total lung dose were compared, in addition to the V20 for the left lung and both lungs. Dose was calculated as dose to medium as well as dose to water, with a statistical precision of 0.7%. Results: Mean lung dose was significantly different (p < 0.003) between the two calculations for both DIBH (11.6% higher in Monaco) and free breathing (14.2% higher in Monaco). V20 was also higher in Monaco (p < 0.05) for DIBH (5.7% higher) and free breathing (4.9% higher). The mean heart dose was not significantly different between the dose calculations for either DIBH or free breathing. Results were no more than 0.1% different when calculated as dose to water. Conclusion: The use of Monte Carlo can provide insight on the lung dose for both free breathing and DIBH techniques for whole breast irradiation. While the sparing (dose reduction with DIBH as compared to free breathing) is equivalent for either planning system, the lung doses themselves are higher when calculated with Monaco.
Leptin to adiponectin ratio in preeclampsia.
Khosrowbeygi, A; Ahmadvand, H
2013-04-01
The aim of the present study was to assess the leptin/adiponectin ratio in preeclamptic patients compared with normal pregnant women. A cross-sectional study was designed. The study population consisted of 30 preeclamptic patients and 30 healthy pregnant women. Serum levels of total leptin and adiponectin were assessed using commercially available enzyme-linked immunosorbent assay methods. The one-way ANOVA and Student's t tests and Pearson's correlation analysis were used for statistical calculations. Levels of leptin and adiponectin were also adjusted for BMI. A p-value < 0.05 was considered statistically significant. The leptin/adiponectin ratio was increased significantly in preeclamptic patients. The leptin/adiponectin ratio was significantly higher in severe preeclamptic patients than in mild preeclampsia. The adjusted leptin/adiponectin ratio was also significantly increased in preeclamptic patients compared with normal pregnant women. The findings of the present study suggest that the leptin/adiponectin ratio is increased in preeclampsia and that an imbalance between these adipocytokines could be involved in the pathogenesis of preeclampsia.
Close, Helen; Mason, James M; Wilson, Douglas; Hungin, A Pali S
2012-05-29
Oestrogen and progestogen have the potential to influence gastro-intestinal motility; both are key components of hormone replacement therapy (HRT). Results of observational studies in women taking HRT rely on self-reporting of gastro-oesophageal symptoms, and the aetiology of gastro-oesophageal reflux disease (GORD) remains unclear. This study investigated the association between HRT and GORD in menopausal women using validated general practice records. 51,182 menopausal women were identified using the UK General Practice Research Database between 1995 and 2004. Of these, 8,831 were matched with and without hormone use. Odds ratios (ORs) were calculated for GORD and proton-pump inhibitor (PPI) use in hormone and non-hormone users, adjusting for age, co-morbidities, and co-pharmacy. In unadjusted analysis, all forms of hormone use (oestrogen-only, tibolone, combined HRT and progestogen) were statistically significantly associated with GORD. In adjusted models, this association remained statistically significant for oestrogen-only treatment (OR 1.49; 1.18-1.89). Unadjusted analysis showed a statistically significant association between PPI use and oestrogen-only and combined HRT treatment. When adjusted for covariates, oestrogen-only treatment remained significant (OR 1.34; 95% CI 1.03-1.74). Findings from the adjusted model demonstrated greater use of PPIs by progestogen users (OR 1.50; 1.01-2.22). This first large cohort study of the association between GORD and HRT found a statistically significant association between oestrogen-only hormone use and GORD and PPI use. This should be further investigated using prospective follow-up to validate the strength of the association and describe its clinical significance.
Characterization of protein folding by a Φ-value calculation with a statistical-mechanical model.
Wako, Hiroshi; Abe, Haruo
2016-01-01
The Φ-value analysis approach provides information about transition-state structures along the folding pathway of a protein by measuring the effects of an amino acid mutation on folding kinetics. Here we compared the theoretically calculated Φ values of 27 proteins with their experimentally observed Φ values; the theoretical values were calculated using a simple statistical-mechanical model of protein folding. The theoretically calculated Φ values reflected the corresponding experimentally observed Φ values with reasonable accuracy for many of the proteins, but not for all. The correlation between the theoretically calculated and experimentally observed Φ values strongly depends on whether the protein-folding mechanism assumed in the model holds true in real proteins. In other words, the correlation coefficient can be expected to illuminate the folding mechanisms of proteins, providing the answer to the question of which model more accurately describes protein folding: the framework model or the nucleation-condensation model. In addition, we tried to characterize protein folding with respect to various properties of each protein apart from the size and fold class, such as the free-energy profile, contact-order profile, and sensitivity to the parameters used in the Φ-value calculation. The results showed that any one of these properties alone was not enough to explain protein folding, although each one played a significant role in it. We have confirmed the importance of characterizing protein folding from various perspectives. Our findings have also highlighted that protein folding is highly variable and unique across different proteins, and this should be considered while pursuing a unified theory of protein folding.
Additive scales in degenerative disease--calculation of effect sizes and clinical judgment.
Riepe, Matthias W; Wilkinson, David; Förstl, Hans; Brieden, Andreas
2011-12-16
The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast on the clinical relevance of statistically significant findings that rely on p values, with their potential to report chance findings. Hence, in an aim to overcome this, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding on efficacy. We simulate therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations, which are proven mathematically, of effect size calculations for additive scales. We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases, cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach that avoids the error imposed by over-analysis of cumulative global scores from additive scales.
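A toy simulation makes the distortion concrete: two subscale functions declining non-linearly are summed into a global score, and the same underlying treatment benefit yields different Cohen's d values at different disease severities. All functional forms and numbers below are illustrative assumptions, not the paper's simulation:

```python
import numpy as np

rng = np.random.default_rng(2)

def cohens_d(a, b):
    """Cohen's d using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def global_score(severity, noise):
    """Cumulative score of two subscales declining non-linearly."""
    memory = 10.0 / (1.0 + np.exp(4.0 * (severity - 0.5)))  # sigmoidal decline
    attention = 10.0 * (1.0 - severity) ** 2                # quadratic decline
    return memory + attention + noise

for s in (0.2, 0.5, 0.8):                  # mild, moderate, severe cohorts
    sev = np.full(200, s)
    treated = global_score(sev - 0.05, rng.normal(0, 1.5, 200))
    control = global_score(sev, rng.normal(0, 1.5, 200))
    print(s, round(cohens_d(treated, control), 2))   # same benefit, varying d
```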
Reflexion on linear regression trip production modelling method for ensuring good model quality
NASA Astrophysics Data System (ADS)
Suprayitno, Hitapriya; Ratnasari, Vita
2017-11-01
Transport modelling is important. For certain cases the conventional model still has to be used, in which having a good trip production model is crucial. A good model can only be obtained from a good sample. Two of the basic principles of good sampling are that the sample must be capable of representing the population characteristics and of producing an acceptable error at a certain confidence level. It seems that these principles are not yet well understood or applied in trip production modelling. Therefore, it is necessary to investigate trip production modelling practice in Indonesia and to formulate a better modelling method that ensures model quality. The results are presented as follows. Statistics provides a method to calculate the span of predicted values at a certain confidence level for linear regression, called the confidence interval of the predicted value. Common modelling practice uses R² as the principal quality measure, while sampling practice varies and does not always conform to sampling principles. An experiment indicates that a small sample can already give an excellent R² value and that sample composition can significantly change the model. Hence, a good R² value does not, in fact, always mean good model quality. These observations lead to three basic ideas for ensuring good model quality: reformulating the quality measure, the calculation procedure, and the sampling method. A quality measure is defined as having both a good R² value and a good confidence interval of the predicted value. The calculation procedure must incorporate statistical calculation methods and the appropriate statistical tests. A good sampling method must incorporate random, well-distributed, stratified sampling with a certain minimum number of samples. These three ideas need to be further developed and tested.
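In Python, the confidence interval of the predicted value for a linear regression trip production model can be obtained directly from statsmodels; the data below are hypothetical:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical trip production data: daily trips vs. household size
size = rng.uniform(1, 6, 40)
trips = 1.5 + 2.0 * size + rng.normal(0, 1.0, 40)

res = sm.OLS(trips, sm.add_constant(size)).fit()
print(res.rsquared)                        # a high R^2 alone is not enough

# Confidence interval of the predicted value at new observations
x_new = sm.add_constant(np.array([2.0, 4.0]))
pred = res.get_prediction(x_new)
print(pred.conf_int(alpha=0.05))           # 95% CI of the mean prediction
```

Reporting this interval alongside R² exposes models whose predictions are too imprecise to use, even when the fit statistic looks excellent.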
Solano, Rubén; Gómez-Barroso, Diana; Simón, Fernando; Lafuente, Sarah; Simón, Pere; Rius, Cristina; Gorrindo, Pilar; Toledo, Diana; Caylà, Joan A
2014-05-01
A retrospective, space-time study of whooping cough cases reported to the Public Health Agency of Barcelona, Spain between the years 2000 and 2011 is presented. It is based on 633 individual whooping cough cases and the 2006 population census from the Spanish National Statistics Institute, stratified by age and sex at the census tract level. Cluster identification was attempted using the space-time scan statistic, assuming a Poisson distribution and restricting the temporal extent to 7 days and the spatial distance to 500 m. Statistical calculations were performed with Stata 11 and SaTScan, and mapping was performed with ArcGIS 10.0. Only clusters showing statistical significance (P < 0.05) were mapped. The most likely cluster identified included five census tracts located in three neighbourhoods in central Barcelona during the week from 17 to 23 August 2011. This cluster included five cases compared with the expected 0.0021 (relative risk = 2436, P < 0.001). In addition, 11 secondary significant space-time clusters were detected, occurring at different times and localizations. Spatial statistics is felt to be useful in complementing epidemiological surveillance systems by visualizing excesses in the number of cases in space and time, thus increasing the possibility of identifying outbreaks not reported by the surveillance system.
Hitting Is Contagious in Baseball: Evidence from Long Hitting Streaks
Bock, Joel R.; Maewal, Akhilesh; Gough, David A.
2012-01-01
Data analysis is used to test the hypothesis that "hitting is contagious". A statistical model is described to study the effect of a hot hitter upon his teammates' batting during a consecutive-game hitting streak. Box score data for entire seasons comprising long hitting streaks were compiled. Treatment and control sample groups were constructed from core lineups of players on the streaking batter's team. The percentile-method bootstrap was used to calculate confidence intervals for statistics representing differences in the mean distributions of two batting statistics between groups. Batters in the treatment group (hot streak active) showed statistically significant improvements in hitting performance, as compared against the control: mean batting performance was higher during hot streaks, as was the batting heat index introduced here. For each performance statistic, the null hypothesis was rejected at the chosen significance level. We conclude that the evidence suggests the potential existence of a "statistical contagion effect". Psychological mechanisms essential to the empirical results are suggested, as several studies from the scientific literature lend credence to contagious phenomena in sports. Causal inference from these results is difficult, but we suggest and discuss several latent variables that may contribute to the observed results, and offer possible directions for future research. PMID:23251507
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
The chi-square test of independence.
McHugh, Mary L
2013-01-01
The Chi-square statistic is a non-parametric (distribution free) tool designed to analyze group differences when the dependent variable is measured at a nominal level. Like all non-parametric statistics, the Chi-square is robust with respect to the distribution of the data. Specifically, it does not require equality of variances among the study groups or homoscedasticity in the data. It permits evaluation of both dichotomous independent variables, and of multiple group studies. Unlike many other non-parametric and some parametric statistics, the calculations needed to compute the Chi-square provide considerable information about how each of the groups performed in the study. This richness of detail allows the researcher to understand the results and thus to derive more detailed information from this statistic than from many others. The Chi-square is a significance statistic, and should be followed with a strength statistic. The Cramer's V is the most common strength test used to test the data when a significant Chi-square result has been obtained. Advantages of the Chi-square include its robustness with respect to distribution of the data, its ease of computation, the detailed information that can be derived from the test, its use in studies for which parametric assumptions cannot be met, and its flexibility in handling data from both two group and multiple group studies. Limitations include its sample size requirements, difficulty of interpretation when there are large numbers of categories (20 or more) in the independent or dependent variables, and tendency of the Cramer's V to produce relative low correlation measures, even for highly significant results.
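A minimal worked example, computing the Chi-square test of independence and following it with the Cramer's V strength statistic as recommended above (the table values are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency

def chi_square_with_cramers_v(table):
    """Chi-square test of independence followed by Cramer's V:
    V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    table = np.asarray(table)
    chi2_stat, p_value, dof, expected = chi2_contingency(table)
    n = table.sum()
    v = np.sqrt(chi2_stat / (n * (min(table.shape) - 1)))
    return chi2_stat, p_value, v

# Hypothetical 2 x 3 table: two groups against a three-level nominal outcome
print(chi_square_with_cramers_v([[20, 15, 5], [10, 25, 25]]))
```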
Impact of operator on determining functional parameters of nuclear medicine procedures.
Mohammed, A M; Naddaf, S Y; Mahdi, F S; Al-Mutawa, Q I; Al-Dossary, H A; Elgazzar, A H
2006-01-01
The study was designed to assess the significance of the interoperator variability in the estimation of functional parameters for four nuclear medicine procedures. Three nuclear medicine technologists with varying years of experience processed the following randomly selected 20 cases with diverse functions of each study type: renography, renal cortical scans, myocardial perfusion gated single-photon emission computed tomography (MP-GSPECT) and gated blood pool ventriculography (GBPV). The technologists used the same standard processing routines and were blinded to the results of each other. The means of the values and the means of differences calculated case by case were statistically analyzed by one-way ANOVA. The values were further analyzed using Pearson correlation. The range of the mean values and standard deviation of relative renal function obtained by the three technologists were 50.65 +/- 3.9 to 50.92 +/- 4.4% for renography, 51.43 +/- 8.4 to 51.55 +/- 8.8% for renal cortical scans, 57.40 +/- 14.3 to 58.30 +/- 14.9% for left ventricular ejection fraction from MP-GSPECT and 54.80 +/- 12.8 to 55.10 +/- 13.1% for GBPV. The difference was not statistically significant, p > 0.9. The values showed a high correlation of more than 0.95. Calculated case by case, the mean of differences +/- SD was found to range from 0.42 +/- 0.36% in renal cortical scans to 1.35 +/- 0.87% in MP-GSPECT with a maximum difference of 4.00%. The difference was not statistically significant, p > 0.19. The estimated functional parameters were reproducible and operator independent as long as the standard processing instructions were followed.
Tang, Liang; Feng, Shiqing; Gao, Ruixiao; Han, Chenfu; Sun, Xiaochen; Bao, Yucheng; Zhang, Wenlong
2017-12-01
The aim of the present study was to compare the efficacy of the commercial Xpert Mycobacterium tuberculosis/rifampin (MTB/RIF) test for evaluating different types of spinal tuberculosis (TB) tissue specimens. Pus, granulation tissue, and caseous necrotic tissue specimens from 223 patients who were diagnosed with spinal TB and who underwent curettage were collected for bacterial culture and the Xpert MTB/RIF assay to calculate the positive rate. Bacterial culture and phenotypic drug sensitivity testing (pDST) were adopted as the gold standards to calculate the sensitivity and specificity of the Xpert bacterial detection and drug resistance (DR) test. The positive rate (68.61% ± 7.35%) from the Xpert MTB/RIF assays of spinal TB patients' tissue specimens was higher compared with bacterial culture (44.39% ± 6.51%, Z = 5.1642, p < 0.01), and the positive rates from Xpert MTB/RIF assays on the three types of specimens were all higher than those of bacterial culture, with statistically significant results for pus and granulation tissue specimens. The positive rates for pus using the two bacteriological tests were higher than those for granulation tissue but were not statistically significant. However, the positive rates obtained from granulation tissue were statistically significantly higher than those obtained from caseous necrotic tissue. With bacterial culture and pDST as the gold standards, the sensitivity of Xpert MTB/RIF assays for MTB was 96.97%, while the sensitivity and specificity of the DR test also remained relatively high. For efficient and accurate diagnosis of spinal TB and DR and timely provision of effective treatment, multiple specimens, especially the pus of spinal TB patients, should be collected for Xpert MTB/RIF assays.
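The reported Z statistic is consistent with a two-proportion z-test on the positive rates. In the sketch below, the counts are back-calculated from the reported percentages and are therefore approximate; for strictly paired specimens, a McNemar-type test would also be a natural choice:

```python
import math
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference of two proportions, using the
    pooled proportion under the null hypothesis of equal rates."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# ~153/223 positive by Xpert vs ~99/223 by culture (back-calculated counts)
print(two_proportion_z(153, 223, 99, 223))   # z ~ 5.16
```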
Geronikolou, Styliani; Zimeras, Stelios; Davos, Constantinos H.; Michalopoulos, Ioannis; Tsitomeneas, Stephanos
2014-01-01
Introduction: The impact of electromagnetic fields on health is of increasing scientific interest. The aim of this study was to examine how the Drosophila melanogaster animal model is affected when exposed to portable or mobile phone fields. Methods/Results: Two experiments were designed and performed under the same laboratory conditions. Insect cultures were exposed to the near field of a 2G mobile phone (GSM 2G networks operate in parallel with, and complement, 3G wideband networks; in other words, the transmission of voice signals is served by 2G technology in both generations of mobile phones) and a 1880 MHz cordless phone, both digitally modulated by human voice. Statistical comparison of the egg laying of the second-generation exposed and non-exposed cultures showed limited statistical significance for the cordless-phone-exposed culture and statistical significance for the 900 MHz-exposed insects. We calculated, simulated and illustrated in three-dimensional figures the near fields of radiation inside the experimental vials and their difference. Comparison of the power of the two fields showed that the difference between them becomes null when the experimental cylinder radius and the height of the antenna increase. Conclusions/Significance: Our results suggest a possible radiofrequency sensitivity difference in insects, which may be due to the distance from the antenna or to unexplored intimate factors. Comparing the near fields of the two frequency bands, we see similar but not identical geometry in length and height from the antenna, and that lower frequencies tend to produce increased radiofrequency effects. PMID:25402465
Paige, John T; Garbee, Deborah D; Kozmenko, Valeriy; Yu, Qingzhao; Kozmenko, Lyubov; Yang, Tong; Bonanno, Laura; Swartz, William
2014-01-01
Effective teamwork in the operating room (OR) is often undermined by the "silo mentality" of the differing professions. Such thinking is formed early in one's professional experience and is fostered by undergraduate medical and nursing curricula lacking interprofessional education. We investigated the immediate impact of conducting interprofessional student OR team training using high-fidelity simulation (HFS) on students' team-related attitudes and behaviors. Ten HFS OR interprofessional student team training sessions were conducted involving 2 standardized HFS scenarios, each of which was followed by a structured debriefing that targeted team-based competencies. Pre- and post-session mean scores were calculated and analyzed for 15 Likert-type items measuring self-efficacy in teamwork competencies using the t-test. Additionally, mean scores of observer ratings of team performance after each scenario and participant ratings after the second scenario for an 11-item Likert-type teamwork scale were calculated and analyzed using one-way ANOVA and t-test. Eighteen nursing students, 20 nurse anesthetist students, and 28 medical students participated in the training. Statistically significant gains from mean pre- to post-training scores occurred on 11 of the 15 self-efficacy items. Statistically significant gains in mean observer performance scores were present on all 3 subscales of the teamwork scale from the first scenario to the second. A statistically significant difference was found in comparisons of mean observer scores with mean participant scores for the team-based behaviors subscale. High-fidelity simulation OR interprofessional student team training improves students' team-based attitudes and behaviors. Students tend to overestimate their team-based behaviors.
Dong, Jing; Zhang, Yaqin; Zhang, Haining; Jia, Zhijie; Zhang, Suhua; Wang, Xiaogang
2018-01-01
To compare the axial length (AL), anterior chamber depth (ACD) and intraocular lens power (IOLP) of IOLMaster and Ultrasound in normal, long and short eyes. Seventy-four normal eyes (≥ 22 mm and ≤ 25 mm), 74 long eyes (> 25 mm) and 78 short eyes (< 22 mm) underwent AL and ACD measurements with both devices, in the order of IOLMaster followed by Ultrasound. The IOLP was calculated using a free online LADAS IOL formula calculator. The difference in AL and IOLP between IOLMaster and Ultrasound was statistically significant when all three groups were combined. The difference in ACD between IOLMaster and Ultrasound was statistically significant in the normal group (P < 0.001) and the short eye group (P < 0.001) but not the long eye group (P = 0.465). For the IOLP difference between IOLMaster and Ultrasound in the normal group, the percentages of absolute IOLP differences < 0.5 D, ≥ 0.5 and < 0.75 D, ≥ 0.75 and < 1.0 D, and ≥ 1.0 D were 90.5%, 8.1%, 1.4% and 0%, respectively. For the long eye group, they were 90.5%, 5.4%, 4.1% and 0%, respectively. For the short eye group, they were 61.5%, 23.1%, 10.3% and 5.1%, respectively. IOLMaster and Ultrasound show statistically significant differences in AL measurements and IOLP (using the LADAS formula) for normal, long and short eyes. The two instruments agree regarding ACD measurements for the long eye group, but differ for the normal and short eye groups. Moreover, the high percentage of IOLP differences greater than 0.5 D in absolute value in the short eye group is noteworthy.
Geochemistry of some rare earth elements in groundwater, Vierlingsbeek, The Netherlands.
Janssen, René P T; Verweij, Wilko
2003-03-01
Groundwater samples were taken from seven bore holes at depths ranging from 2 to 41 m near the drinking water pumping station Vierlingsbeek, The Netherlands, and analysed for Y, La, Ce, Pr, Nd, Sm and Eu. Shale-normalized patterns were generally flat and showed that the observed rare earth elements (REE) were probably of natural origin. In the shallow groundwaters the REEs were light-REE (LREE) enriched, probably caused by binding of LREEs to colloids. To improve understanding of the behaviour of the REEs, two approaches were used: speciation calculations and a statistical approach. For the speciation calculations, complexation and precipitation reactions, including inorganic and dissolved organic carbon (DOC) compounds, were taken into account. The REE speciation showed REE³⁺, REESO₄⁺, REECO₃⁺ and REE-DOC to be the major species. Dissolution of pure REE precipitates and REE-enriched solid phases did not account for the observed REEs in groundwater. Regulation of REE concentrations by adsorption-desorption processes on Fe(OH)₃ and Al(OH)₃ minerals, which were calculated to be present in nearly all groundwaters, is a probable explanation. The statistical approach (multiple linear regression) showed that pH is by far the most significant groundwater characteristic contributing to the variation in REE concentrations. DOC, SO₄, Fe and Al also contributed significantly, although to a much lesser extent. This is in line with the calculated REE species in solution and REE adsorption onto iron and aluminium (hydr)oxides. Regression equations including only pH were derived to predict REE concentrations in groundwater. External validation showed that these regression equations were reasonably successful in predicting REE concentrations in groundwater at another drinking water pumping station in a quite different region of The Netherlands.
Eisinga, Rob; Heskes, Tom; Pelzer, Ben; Te Grotenhuis, Manfred
2017-01-25
The Friedman rank sum test is a widely used nonparametric method in computational biology. In addition to examining the overall null hypothesis of no significant difference among any of the rank sums, it is typically of interest to conduct pairwise comparison tests. Current approaches to such tests rely on large-sample approximations, due to the numerical complexity of computing the exact distribution. These approximate methods lead to inaccurate estimates in the tail of the distribution, which is most relevant for p-value calculation. We propose an efficient, combinatorial exact approach for calculating the probability mass distribution of the rank sum difference statistic for pairwise comparison of Friedman rank sums, and compare exact results with recommended asymptotic approximations. Whereas the chi-squared approximation performs poorly compared with exact computation overall, others, particularly the normal approximation, perform well, except in the extreme tail. Hence exact calculation offers an improvement when small p-values occur following multiple testing correction. Exact inference also enhances the identification of significant differences whenever the observed values are close to the approximate critical value. We illustrate the proposed method in the context of biological machine learning, where Friedman rank sum difference tests are commonly used for the comparison of classifiers over multiple datasets. We provide a computationally fast method to determine the exact p-value of the absolute rank sum difference of a pair of Friedman rank sums, making asymptotic tests obsolete. Calculation of exact p-values is easy to implement in statistical software; an R implementation is provided in one of the Additional files and is also available at http://www.ru.nl/publish/pages/726696/friedmanrsd.zip .
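The combinatorial core of such an exact test can be sketched compactly: under the null hypothesis (and assuming no ties) the ranks within each block form a uniform random permutation, so the within-block rank difference d of two fixed treatments has pmf (k − |d|)/(k(k − 1)), and the rank sum difference over n independent blocks is the n-fold convolution of that pmf. The following is a minimal illustration of that idea, not the authors' optimized implementation:

```python
import numpy as np

def exact_rank_sum_diff_pvalue(d_obs, n, k):
    """Exact two-sided P(|D| >= d_obs) for the difference D of two Friedman
    rank sums over n blocks with k treatments, assuming no ties.

    Per block, the rank difference d of two fixed treatments has pmf
    (k - |d|) / (k * (k - 1)) for 1 <= |d| <= k - 1; blocks are independent,
    so the pmf of D is an n-fold convolution of the per-block pmf.
    """
    span = k - 1
    pmf = np.zeros(2 * span + 1)               # support -(k-1) .. +(k-1)
    for d in range(-span, span + 1):
        if d != 0:
            pmf[d + span] = (k - abs(d)) / (k * (k - 1))
    dist = np.array([1.0])                     # D = 0 with probability 1
    for _ in range(n):
        dist = np.convolve(dist, pmf)          # add one block's difference
    support = np.arange(-n * span, n * span + 1)
    return dist[np.abs(support) >= d_obs].sum()

# Example: observed rank sum difference of 12 over n = 10 blocks, k = 4 treatments.
print(exact_rank_sum_diff_pvalue(d_obs=12, n=10, k=4))
```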
Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson
2008-01-01
Purpose To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting London Vision Clinic, London, United Kingdom. Methods A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036) attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
A time to be born: Variation in the hour of birth in a rural population of Northern Argentina.
Chaney, Carlye; Goetz, Laura G; Valeggia, Claudia
2018-04-17
The present study aimed to investigate the timing of birth across the day in a rural population of indigenous and nonindigenous women in the province of Formosa, Argentina, in order to explore variation in birth-timing patterns in a non-Western setting. This study utilized birth record data transcribed from delivery room records at a rural hospital in the province of Formosa, northern Argentina. The sample included data for Criollo, Wichí, and Toba/Qom women (n = 2421). Statistical analysis was conducted using directional statistics to identify a mean sample direction. Chi-squared tests for homogeneity were also used to test for statistically significant differences between hours of the day. The mean sample direction was 81.04°, which equates to 5:24 AM when calculated as time on a 24-hr clock. Chi-squared analyses showed a statistically significant peak in births between 12:00 and 4:00 AM. Birth counts generally declined throughout the day until a statistically significant trough around 5:00 PM. This pattern may be associated with the circadian rhythms of hormone release, particularly melatonin, at a proximate level. At the ultimate level, giving birth in the early hours of the morning may have been selected to time births so that the mother could benefit from the predator protection and support provided by her social group, as well as increased mother-infant bonding in a more peaceful environment. © 2018 Wiley Periodicals, Inc.
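The directional statistic here is a circular mean of birth hours mapped onto angles; note that 81.04° × 24/360 ≈ 5.4 h, i.e., 5:24 AM. A minimal sketch with hypothetical hours, not the hospital data:

```python
import numpy as np

# Hypothetical birth hours on a 24-h clock (0-23).
hours = np.array([1, 2, 3, 3, 4, 5, 23, 0, 14, 2, 4, 5, 6, 1])

angles = hours / 24.0 * 2.0 * np.pi            # map the 24-h clock onto a circle
mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
mean_angle = mean_angle % (2.0 * np.pi)        # wrap into [0, 2*pi)
mean_hour = mean_angle / (2.0 * np.pi) * 24.0

print(f"mean direction: {np.degrees(mean_angle):.2f} deg "
      f"= {int(mean_hour):02d}:{int(round(mean_hour % 1 * 60)):02d}")
```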
Revisiting photon-statistics effects on multiphoton ionization
NASA Astrophysics Data System (ADS)
Mouloudakis, G.; Lambropoulos, P.
2018-05-01
We present a detailed analysis of the effects of photon statistics on multiphoton ionization. Through a detailed study of the role of intermediate states, we evaluate the conditions under which the premise of nonresonant processes is valid. The limitations of its validity are manifested in the dependence of the process on the stochastic properties of the radiation and are found to be quite sensitive to the intensity. The results are quantified through detailed calculations for coherent, chaotic, and squeezed vacuum radiation. Their significance in the context of recent developments in radiation sources, such as the short-wavelength free-electron laser and squeezed vacuum radiation, is also discussed.
Correlation between urodynamic function and 3D cat scan anatomy in neobladders: does it exist?
Crivellaro, S; Mami, E; Wald, C; Smith, J J; Kocjancic, E; Stoffel, J; Bresette, J; Libertino, J A
2009-01-01
We compared the functional and anatomical differences among three different orthotopic neobladders, utilizing video urodynamics and 3D CT, to determine which parameters, if any, correlate with function. Thirty-four patients were able to participate in the evaluation of their neobladder by 3D CT and video urodynamics. Three different orthotopic neobladders were identified (12 ileal, 7 ileocecal, 15 sigmoid). Multiple measurements, observations and functional data were obtained. Statistical analysis employed a linear regression test and an odds ratio calculation (using StatSoft V. 5.1). In comparing the three different neobladders, no significant differences were noted. Across the entire population, the following associations were statistically significant in linear correlation: maximal capacity with neobladder volume; pressure at maximal capacity with the distance from the symphysis; and pressure at maximal flow with both the distance from the symphysis and the thickness of the neobladder. The distance from the left femoral head was directly correlated with the post-void residual and inversely correlated with the maximal flow. The odds ratio calculation revealed (P < 0.05) that the further the center of the neobladder is from the right femoral head, the higher the risk of incontinence. The study seems to show no significant anatomical or functional difference among the three types of neobladders. A possible correlation between the position of the neobladder and urinary incontinence is suggested, recognizing that further study in a larger population is required.
McLawhorn, Alexander S; Levack, Ashley E; Fields, Kara G; Sheha, Evan D; DelPizzo, Kathryn R; Sink, Ernest L
2016-03-01
Periacetabular osteotomy (PAO) reorients the acetabular cartilage through a complex series of pelvic osteotomies, which risks significant blood loss, often necessitating blood transfusion. Therefore, it is important to identify effective strategies to manage blood loss and decrease morbidity after PAO. The purpose of this study was to determine the association of epsilon-aminocaproic acid (EACA), an antifibrinolytic agent, with blood loss from PAO. Ninety-three of 110 consecutive patients who underwent unilateral PAO for acetabular dysplasia met inclusion criteria. Fifty patients received EACA intraoperatively. Demographics, autologous blood predonation, anesthetic type, intraoperative estimated blood loss (EBL), cell-saver utilization, and transfusions were recorded. Total blood loss was calculated. Two-sample t-tests and chi-square or Fisher's exact tests were used as appropriate. The associations between EACA administration and calculated EBL, cell-saver utilization, intraoperative EBL, and maximum difference in postoperative hemoglobin were assessed via multiple regression, adjusting for confounders. Post hoc power analysis demonstrated sufficient power to detect a 250-mL difference in calculated EBL between groups. The alpha level was 0.05 for all tests. No demographic differences existed between groups. Mean blood loss and allogeneic transfusion rates did not differ significantly between groups (P = .093 and .170, respectively). There were no differences in cell-saver utilization, intraoperative EBL, or postoperative hemoglobin. There was a higher rate of autologous blood utilization in the group not receiving EACA because of a clinical practice change. EACA administration was not associated with a statistically significant reduction in blood loss or allogeneic transfusion in patients undergoing PAO. Copyright © 2016 Elsevier Inc. All rights reserved.
2013-01-01
Introduction Small-study effects refer to the fact that trials with limited sample sizes are more likely to report larger beneficial effects than large trials. However, this has never been investigated in critical care medicine. Thus, the present study aimed to examine the presence and extent of small-study effects in critical care medicine. Methods Critical care meta-analyses involving randomized controlled trials that reported mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) or small (<100 patients per arm) according to their sample sizes. A ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach. ROR <1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention to treat and sample size calculation. Results A total of 27 critical care meta-analyses involving 317 trials were included. Of them, five meta-analyses showed statistically significant RORs <1, while the others did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); the heterogeneity was moderate, with an I² of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention to treat, sample size calculation and incomplete follow-up data. Conclusions Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality of small trials. Caution should be exercised in the interpretation of meta-analyses involving small trials. PMID:23302257
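The pooling step can be sketched as inverse-variance random-effects combination (DerSimonian-Laird) of log RORs, where each ROR is the small-trial odds ratio divided by the large-trial odds ratio. The values below are hypothetical, not those of the 27 meta-analyses:

```python
import numpy as np

# Hypothetical per-meta-analysis log ratios of odds ratios and their variances.
log_ror = np.log([0.55, 0.70, 0.90, 0.50, 1.05])
var = np.array([0.04, 0.06, 0.09, 0.05, 0.12])

# DerSimonian-Laird random-effects pooling of the log RORs.
w = 1.0 / var
fixed = np.sum(w * log_ror) / w.sum()
q = np.sum(w * (log_ror - fixed) ** 2)                     # Cochran's Q
df = len(log_ror) - 1
tau2 = max(0.0, (q - df) / (w.sum() - np.sum(w**2) / w.sum()))
w_star = 1.0 / (var + tau2)
pooled = np.sum(w_star * log_ror) / w_star.sum()
se = np.sqrt(1.0 / w_star.sum())
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled ROR = {np.exp(pooled):.2f} (95% CI {np.exp(lo):.2f}-{np.exp(hi):.2f})")
```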
Oxidative status and lipid profile in metabolic syndrome: gender differences.
Kaya, Aysem; Uzunhasan, Isil; Baskurt, Murat; Ozkan, Alev; Ataoglu, Esra; Okcun, Baris; Yigit, Zerrin
2010-02-01
Metabolic syndrome is associated with cardiovascular disease and oxidative stress. The aim of this study was to investigate differences in novel oxidative stress parameters and lipid profiles between men and women with metabolic syndrome. The study population included 88 patients with metabolic syndrome, consisting of 48 postmenopausal women (group I) and 40 men (group II). Premenopausal women were excluded. Plasma levels of total antioxidant status (TAS) and total oxidative status (TOS) were determined using the Erel automated measurement method, and the oxidative stress index (OSI) was calculated. To perform the calculation, the resulting unit of TAS, mmol Trolox equivalent/L, was converted to micromol equivalent/L and the OSI value was calculated as: OSI = [(TOS, micromol/L)/(TAS, micromol Trolox equivalent/L)] x 100. The Student t-test, Mann-Whitney U test, and chi-squared test were used for statistical analysis; the Pearson correlation coefficient and Spearman rank test were used for correlation analysis. P ≤ 0.05 was considered statistically significant. Women and men had similar demographic characteristics and biochemical workup. Group II had significantly lower TAS levels and higher TOS and OSI levels compared with group I (P = 0.0001, P = 0.0035, and P = 0.0001, respectively). Apolipoprotein A (ApoA) levels were significantly higher in group I than in group II. Our findings indicate that women with metabolic syndrome have a better antioxidant status and higher ApoA levels compared with men, and suggest a higher oxidative stress index in men with metabolic syndrome. Considering the higher risk of atherosclerosis in men, these novel oxidative stress parameters may be valuable in the evaluation of patients with metabolic syndrome.
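The OSI arithmetic is simple enough to state as code; a minimal sketch following the unit conversion described above (the input values are hypothetical):

```python
def oxidative_stress_index(tos_umol_per_l: float, tas_mmol_trolox_per_l: float) -> float:
    """OSI as defined in the abstract: TAS is converted from mmol to micromol
    Trolox equivalent/L, and OSI = TOS / TAS x 100 (an arbitrary-unit ratio)."""
    tas_umol_per_l = tas_mmol_trolox_per_l * 1000.0
    return tos_umol_per_l / tas_umol_per_l * 100.0

# Hypothetical values: TOS = 14.2 micromol H2O2 equivalent/L, TAS = 1.1 mmol Trolox/L.
print(f"OSI = {oxidative_stress_index(14.2, 1.1):.2f}")
```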
Kamburoğlu, Kıvanç; Yılmaz, Funda; Yeta, Elif Naz; Özen, Doĝukan
2016-06-01
To investigate observer ability to diagnose ex vivo simulated endodontic furcal perforations in root-filled teeth on cone beam computed tomography (CBCT) images using different artifact reduction algorithms. Our study consisted of 135 maxillary first molar teeth. In 89 teeth, furcation perforations were created using dental burs. Forty-six teeth without artificial perforations served as controls. MTA Fillapex, Activ GP, and AH Plus were used with or without metal posts. All teeth were imaged using a Planmeca ProMax 3-D Max CBCT unit, and four image modes were obtained: without artifact reduction and with artifact reduction in low, medium, and high modes. Images were evaluated by three observers for the presence or absence of furcation perforation using a five-point scale. Weighted kappa coefficients were calculated to assess observer agreement. Receiver operating characteristic analysis was performed. Areas under the curve (AUCs) were calculated for each image mode, observer, treatment group, and reading, and were compared using chi-squared tests with a significance level of α = 0.05. Effects on diagnosis were assessed using analysis of variance (ANOVA). Intraobserver agreement for all observers ranged from 0.857 to 0.945. Kappa coefficients among different observers ranged from 0.673 to 0.763. AUC values ranged from 0.83 to 0.92, and there were no statistically significant differences (P > .05) between CBCT image modes. Ratings in Activ GP treatment groups with or without posts showed statistically significant differences (P < .001). All CBCT image modes performed similarly in detecting furcal perforations near different root canal sealers with or without posts. Copyright © 2016 Elsevier Inc. All rights reserved.
Lovvorn, Harold N.; Ayers, Dan; Zhao, Zhiguo; Hilmes, Melissa; Prasad, Pinki; Shinall, Myrick C.; Berch, Barry; Neblett, Wallace W.; O'Neill, James A.
2010-01-01
Purpose Hepatoblastoma is commonly unresectable at presentation, necessitating induction chemotherapy before definitive resection. To refine the paradigm for timing of resection, we questioned whether a plateau in hepatoblastoma responsiveness to neoadjuvant therapy could be detected by calculating tumor volume (TV) and serum α-fetoprotein (sAFP) kinetics. Methods To calculate TV and sAFP as measures of treatment responsiveness over time, infants having initially unresectable epithelial-type hepatoblastomas were identified at a single institution (1996-2008). Effects of therapy type, therapy duration, and lobe of liver involvement on TV, sAFP, margin status, and toxicity were analyzed. Results Of 24 infants treated for epithelial-type hepatoblastoma during this interval, 5 were resected primarily, and 15 had complete digital films for kinetics analysis. Both TV and sAFP decreased dramatically over time (p<0.0001). No statistically significant difference in mean TV or sAFP was detected after chemotherapy cycle 2. Left lobe tumors had higher presenting levels of sAFP and significantly slower sAFP decay compared with right lobe tumors (p=0.005), although no statistically significant differences in TV existed between liver lobes. Resection margins did not change with therapy duration. Conclusions Measuring TV and sAFP kinetics accurately reflects hepatoblastoma responsiveness to induction therapy. Treatment toxicities may be reduced by earlier resection and tailoring of chemotherapeutic regimens. PMID:20105591
Diurnal Variations in Global Joule Heating Morphology and Magnitude Due To Neutral Winds
NASA Astrophysics Data System (ADS)
Billett, D. D.; Grocott, A.; Wild, J. A.; Walach, M.-T.; Kosch, M. J.
2018-03-01
In the polar ionosphere, variations in Joule heating are significantly controlled by changes in plasma convection, such as those brought about by changes in the interplanetary magnetic field. However, another important consideration when calculating Joule heating is the velocity difference between this plasma and the neutral thermosphere colocated with the ionosphere. Neutral wind data are often difficult to obtain on a global scale; thus, Joule heating has often been calculated assuming that neutral velocities are small and can be neglected. Previous work has shown the effect of neutral winds on Joule heating estimates to be more significant than originally thought; however, the impact on Joule heating of the diurnal variations of the neutrals, driven by changes in solar pressure gradients and Coriolis forces, has yet to be assessed. We show that this universal time effect is significant, and that Joule heating calculated with neutral winds can differ considerably from that calculated by neglecting them. In this study, we use empirical models for the neutral wind, conductivities, and magnetic field to create Northern Hemisphere patterns of Joule heating for approximately 800,000 individual plasma convection patterns generated using data from the Super Dual Auroral Radar Network. From this, a statistical analysis of how Joule heating varies in morphology and magnitude with universal time is shown for differing seasons and levels of geomagnetic activity. We find that neutral winds play a significant role in the morphology and total energy output of Joule heating.
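The quantity at issue is the height-integrated Joule heating rate, which depends on the electric field in the frame of the neutral gas, q = Σ_P |E + u_n × B|². A minimal sketch with illustrative local values (not SuperDARN-derived data):

```python
import numpy as np

# Hypothetical local values in local Cartesian coordinates.
sigma_p = 8.0                                  # Pedersen conductance, S
E = np.array([30e-3, 10e-3, 0.0])              # convection electric field, V/m
u_n = np.array([200.0, -100.0, 0.0])           # neutral wind, m/s
B = np.array([0.0, 0.0, -50000e-9])            # magnetic field, T (downward)

# Height-integrated Joule heating rate, W/m^2: q = Sigma_P * |E + u_n x B|^2.
e_eff = E + np.cross(u_n, B)                   # field in the neutral-gas frame
q_with_wind = sigma_p * (e_eff @ e_eff)
q_no_wind = sigma_p * (E @ E)                  # same calculation, neutrals neglected
print(f"with wind: {q_with_wind*1e3:.3f} mW/m^2, without: {q_no_wind*1e3:.3f} mW/m^2")
```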
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-09
... statistically significant is not applied. HUD applies this test as a means to minimize fluctuations in rents due... the difference between the new and old rent estimates (EST1 − EST2) divided by the square root of the... FMRs is 2006-2008. That means that no 2006 survey data are included in this "three-year" recent mover...
NASA Astrophysics Data System (ADS)
Miswan, M. A.; Gopir, G.; Anas, M. M.
2016-11-01
Geometry optimization is one of the most widely used methods for studying carbon clusters Cn and understanding their structural properties. The total energy of each structure was calculated using the Octopus software with the conjugate gradient Broyden-Fletcher-Goldfarb-Shanno (CG-BFGS) method. Our calculations and other studies indicate that the linear forms are the most stable structures. However, the C3 isomers have an equal probability of forming, as the differences in our calculated total energies are statistically insignificant. Although the total energies fall into two cohorts, the calculations are acceptable because the energy ratios of C3 to C2 and of C2 to C1 are comparable to other work. Meanwhile, the bond properties of the C2 and C3 bonds also differ significantly between our work and previous studies.
Huynh, Lynn; Totev, Todor; Vekeman, Francis; Neary, Maureen P; Duh, Mei S; Benson, Al B
2017-09-01
To calculate the cost reduction associated with diarrhea/flushing symptom resolution/improvement following treatment with above-standard dose octreotide-LAR from the commercial payor's perspective. Diarrhea and flushing are two major carcinoid syndrome symptoms of neuroendocrine tumor (NET). Previously, a study of NET patients from three US tertiary oncology centers (NET 3-Center Study) demonstrated that dose escalation of octreotide LAR to above-standard dose resolved/improved diarrhea/flushing in 79% of the patients within 1 year. Time course of diarrhea/flushing symptom data were collected from the NET 3-Center Study. Daily healthcare costs were calculated from a commercial claims database analysis. For the patient cohort experiencing any diarrhea/flushing symptom resolution/improvement, their observation period was divided into days of symptom resolution/improvement or no improvement, which were then multiplied by the respective daily healthcare cost and summed over 1 year to yield the blended mean annual cost per patient. For patients who experienced no diarrhea/flushing symptom improvement, mean annual daily healthcare cost of diarrhea/flushing over a 1-year period was calculated. The economic model found that 108 NET patients who experienced diarrhea/flushing symptom resolution/improvement within 1 year had statistically significantly lower mean annual healthcare cost/patient than patients with no symptom improvement, by $14,766 (p = .03). For the sub-set of 85 patients experiencing resolution/improvement of diarrhea, their cost reduction was more pronounced, at $18,740 (p = .01), statistically significantly lower than those with no improvement; outpatient costs accounted for 56% of the cost reduction (p = .02); inpatient costs, emergency department costs, and pharmacy costs accounted for the remaining 44%. The economic model relied on two different sources of data, with some heterogeneity in the prior treatment and disease status of patients. Symptom resolution/improvement of diarrhea/flushing after treatment with an above-standard dose of octreotide-LAR in NET was associated with a statistically significant healthcare cost decrease compared to a scenario of no symptom improvement.
Busfield, Benjamin T; Kharrazi, F Daniel; Starkey, Chad; Lombardo, Stephen J; Seegmiller, Jeffrey
2009-08-01
The purpose of this study was to determine the rate of return to play and to quantify the effect of surgical reconstruction of the anterior cruciate ligament (ACL) on basketball players' performance. Surgically treated ACL injuries were queried for a 10-year period (1993-1994 season through 2004-2005 season) from the database maintained by the National Basketball Association (NBA). Standard statistical categories and the player efficiency rating (PER), a measure that accounts for positive and negative playing statistics, were calculated to determine the impact of the injury on player performance relative to a matched comparison group. Over the study period, 31 NBA players had 32 ACL reconstructions. Two patients were excluded because of multiple ACL injuries, one because he never participated in league play, and another because the injury resulted from nonathletic activity. Of the 27 players in the study group, 6 (22%) did not return to NBA competition. Of the 21 players (78%) who did return to play, 4 (15%) had an increase over the preinjury PER, 5 (19%) remained within 1 point of the preinjury PER, and 12 (44%) had a PER decrease of more than 1 point after return to play. Although decreases occurred in most of the statistical categories for players returning from ACL surgery, the number of games played, field goal percentage, and number of turnovers per game were the only categories with a statistically significant decrease. Players in the comparison group had a statistically significant increase in the PER over their careers, whereas the study group had a marked, though not statistically significant, increase in the PER in the season after reconstruction. After ACL reconstruction in 27 basketball players, 22% did not return to a sanctioned NBA game. For those returning to play, performance decreased by more than 1 PER point in 44% of the patients, although the changes were not statistically significant relative to the comparison group. Level IV, therapeutic case series.
Bundschuh, Mirco; Newman, Michael C; Zubrod, Jochen P; Seitz, Frank; Rosenfeldt, Ricki R; Schulz, Ralf
2015-03-01
We argued recently that the positive predictive value (PPV) and the negative predictive value (NPV) are valuable metrics to include during null hypothesis significance testing: they inform the researcher about the probability that statistically significant and non-significant test outcomes are actually true. Although commonly misunderstood, a reported p value estimates only the probability of obtaining the results, or more extreme results, if the null hypothesis of no effect were true. Calculations of the more informative PPV and NPV require an a priori estimate of the probability (R). The present document discusses challenges of estimating R.
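Standard Bayes-rule expressions for PPV and NPV can be written directly; here R is taken as the a priori probability that the tested effect is real (some authors define R as prior odds instead), so this is a sketch under that assumption:

```python
def ppv_npv(alpha, power, r):
    """Predictive values of a significance test, given the false-positive
    rate alpha, statistical power (1 - beta), and the a priori probability r
    that the tested effect is real."""
    beta = 1.0 - power
    ppv = power * r / (power * r + alpha * (1.0 - r))
    npv = (1.0 - alpha) * (1.0 - r) / ((1.0 - alpha) * (1.0 - r) + beta * r)
    return ppv, npv

# Example: alpha = 0.05, power = 0.8, and a 1-in-10 prior chance of a true effect.
ppv, npv = ppv_npv(0.05, 0.8, 0.10)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```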
Landsgesell, Jonas; Holm, Christian; Smiatek, Jens
2017-02-14
We present a novel method for the study of weak polyelectrolytes and general acid-base reactions in molecular dynamics and Monte Carlo simulations. The approach combines the advantages of the reaction ensemble and the Wang-Landau sampling method. Deprotonation and protonation reactions are simulated explicitly with the help of the reaction ensemble method, while accurate sampling of the corresponding phase space is achieved by the Wang-Landau approach. The combination of both techniques provides sufficient statistical accuracy that meaningful estimates of the density of states and the partition sum can be obtained. From these estimates, several thermodynamic observables, such as the heat capacity or reaction free energies, can be calculated. We demonstrate that the computation times for calculating titration curves with high statistical accuracy can be significantly decreased compared to the original reaction ensemble method. The applicability of our approach is validated by the study of weak polyelectrolytes and their thermodynamic properties.
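The Wang-Landau ingredient can be illustrated on a toy system whose density of states is known exactly. The sketch below estimates ln g(N) for the number N of protonated sites on a chain of independent sites (exactly the binomial coefficient, which allows a check); it deliberately omits the reaction ensemble moves of the full method:

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(1)

# Toy system: M independent titratable sites; the macrostate is the number N
# of protonated sites, whose exact density of states is binomial(M, N).
M = 10
state = np.zeros(M, dtype=int)
ln_g = np.zeros(M + 1)                  # running estimate of ln g(N)
hist = np.zeros(M + 1)
ln_f = 1.0                              # Wang-Landau modification factor

while ln_f > 1e-6:
    for _ in range(10000):
        i = rng.integers(M)             # propose (de)protonating one site
        n_old = state.sum()
        n_new = n_old + (1 - 2 * state[i])
        # Accept with min(1, g(old)/g(new)), i.e., penalize crowded macrostates.
        if np.log(rng.random()) < ln_g[n_old] - ln_g[n_new]:
            state[i] ^= 1
        n = state.sum()
        ln_g[n] += ln_f                 # raise the visited macrostate's ln g
        hist[n] += 1
    if hist.min() > 0.8 * hist.mean():  # flat-histogram check
        hist[:] = 0
        ln_f /= 2.0                     # refine the modification factor

k = np.arange(M + 1)
ln_binom = gammaln(M + 1) - gammaln(k + 1) - gammaln(M - k + 1)
# Deviation from the exact (binomial) answer, up to an additive constant: ~0.
print(np.round((ln_g - ln_g[0]) - (ln_binom - ln_binom[0]), 2))
```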
Rainfall Threshold Assessment Corresponding to the Maximum Allowable Turbidity for Source Water.
Fan, Shu-Kai S; Kuan, Wen-Hui; Fan, Chihhao; Chen, Chiu-Yang
2016-12-01
This study aims to assess the upstream rainfall thresholds corresponding to the maximum allowable turbidity of source water, using monitoring data and artificial neural network computation. The Taipei Water Source Domain was selected as the study area, and the upstream rainfall records were collected for statistical analysis. Using analysis of variance (ANOVA), the cumulative rainfall records of one-day Ping-lin, two-day Ping-lin, two-day Tong-hou, one-day Guie-shan, and one-day Tai-ping (rainfall in the previous 24 or 48 hours at the named weather stations) were found to be the five most significant parameters for downstream turbidity development. An artificial neural network model was constructed to predict the downstream turbidity in the area investigated. The observed and model-calculated turbidity data were applied to assess the rainfall thresholds in the studied area. By setting preselected turbidity criteria, the upstream rainfall thresholds for these statistically determined rain gauge stations were calculated.
Kirgiz, Ahmet; Atalay, Kurşat; Kaldirim, Havva; Cabuk, Kubra Serefoglu; Akdemir, Mehmet Orcun; Taskapili, Muhittin
2017-08-01
The purpose of this study was to compare the keratometry (K) values obtained by the Scheimpflug camera combined with placido-disk corneal topography (Sirius) and optical biometry (Lenstar) for intraocular lens (IOL) power calculation before the cataract surgery, and to evaluate the accuracy of postoperative refraction. 50 eyes of 40 patients were scheduled to have phacoemulsification with the implantation of a posterior chamber intraocular lens. The IOL power was calculated using the SRK/T formula with Lenstar K and K readings from Sirius. Simulated K (SimK), K at 3-, 5-, and 7-mm zones from Sirius were compared with Lenstar K readings. The accuracy of these parameters was determined by calculating the mean absolute error (MAE). The mean Lenstar K value was 44.05 diopters (D) ±1.93 (SD) and SimK, K at 3-, 5-, and 7-mm zones were 43.85 ± 1.91, 43.88 ± 1.9, 43.84 ± 1.9, 43.66 ± 1.85 D, respectively. There was no statistically significant difference between the K readings (P = 0.901). When Lenstar was used for the corneal power measurements, MAE was 0.42 ± 0.33 D, but when simK of Sirius was used, it was 0.37 ± 0.32 D (the lowest MAE (0.36 ± 0.32 D) was achieved as a result of 5 mm K measurement), but it was not statistically significant (P = 0.892). Of all the K readings of Sirius and Lenstar, Sirius 5-mm zone K readings were the best in predicting a more precise IOL power. The corneal power measurements with the Scheimpflug camera combined with placido-disk corneal topography can be safely used for IOL power calculation.
Clarifying changes in student empathy throughout medical school: a scoping review.
Ferreira-Valente, Alexandra; Monteiro, Joana S; Barbosa, Rita M; Salgueira, Ana; Costa, Patrício; Costa, Manuel J
2017-12-01
Despite the increasing awareness of the relevance of empathy in patient care, some findings suggest that medical schools may be contributing to the deterioration of students' empathy. Therefore, it is important to clarify the magnitude and direction of changes in empathy during medical school. We employed a scoping review to elucidate trends in changes/differences in students' empathy throughout medical school and to examine potential bias associated with research design. The literature published in English, Spanish, Portuguese and French from 2009 to 2016 was searched. Two hundred and nine potentially relevant citations were identified. Twenty articles met the inclusion criteria. Effect sizes of empathy score variations were calculated to assess the practical significance of results. Our results demonstrate that the scoped studies differed considerably in their design, measures used, sample sizes and results. Most studies (12 out of 20) reported either positive or non-statistically significant changes/differences in empathy regardless of the measure used. The predominant trend in cross-sectional studies (ten out of 13) was of significantly higher empathy scores in later years or of similar empathy scores across years, while most longitudinal studies presented either mixed results or empathy declines. There was no generalized international trend in changes in students' empathy throughout medical school. Although statistically significant changes/differences were detected in 13 out of 20 studies, the calculated effect sizes were small in all but two studies, suggesting little practical significance. At present, the literature does not offer clear conclusions about changes in student empathy throughout medical school.
Lobsien, D; Ettrich, B; Sotiriou, K; Classen, J; Then Bergh, F; Hoffmann, K-T
2014-01-01
Functional correlates of microstructural damage of the brain affected by MS are incompletely understood. The purpose of this study was to evaluate correlations of visual-evoked potentials with microstructural brain changes as determined by DTI in patients with demyelinating central nervous system disease. Sixty-one patients with clinically isolated syndrome or MS were prospectively recruited. The mean P100 visual-evoked potential latencies of the right and left eyes of each patient were calculated and used for the analysis. For DTI acquisition, a single-shot echo-planar imaging pulse sequence with 80 diffusion directions was performed at 3T. Fractional anisotropy, radial diffusivity, and axial diffusivity were calculated and correlated with mean P100 visual-evoked potential latencies by tract-based spatial statistics. Significant negative correlations between mean P100 latencies and fractional anisotropy, and significant positive correlations between mean P100 latencies and radial diffusivity, were found widespread over the whole brain. The highest significance was found in the optic radiation, frontoparietal white matter, and corpus callosum. Significant positive correlations between mean P100 latencies and axial diffusivity were less widespread, notably sparing the optic radiation. Microstructural changes of the whole brain correlated significantly with mean P100 visual-evoked potential latencies. The distribution of the correlations showed clear differences among axial diffusivity, fractional anisotropy, and radial diffusivity, notably in the optic radiation. This finding suggests a stronger correlation of mean P100 visual-evoked potentials with demyelination than with axonal damage. © 2014 by American Journal of Neuroradiology.
Makhov, Dmitry V.; Saita, Kenichiro; Martinez, Todd J.; ...
2014-12-11
In this study, we report a detailed computational simulation of the photodissociation of pyrrole using the ab initio Multiple Cloning (AIMC) method implemented within MOLPRO. The efficiency of the AIMC implementation, employing train basis sets, linear approximation for matrix elements, and Ehrenfest configuration cloning, allows us to accumulate significant statistics. We calculate and analyze the total kinetic energy release (TKER) spectrum and Velocity Map Imaging (VMI) of pyrrole and compare the results directly with experimental measurements. Both the TKER spectrum and the structure of the velocity map image (VMI) are well reproduced. Previously, it has been assumed that the isotropic component of the VMI arises from long time statistical dissociation. Instead, our simulations suggest that ultrafast dynamics contributes significantly to both low and high energy portions of the TKER spectrum.
When Null Hypothesis Significance Testing Is Unsuitable for Research: A Reassessment.
Szucs, Denes; Ioannidis, John P A
2017-01-01
Null hypothesis significance testing (NHST) has several shortcomings that are likely contributing factors behind the widely debated replication crisis of (cognitive) neuroscience, psychology, and biomedical science in general. We review these shortcomings and suggest that, after sustained negative experience, NHST should no longer be the default, dominant statistical practice of all biomedical and psychological research. If theoretical predictions are weak we should not rely on all or nothing hypothesis tests. Different inferential methods may be most suitable for different types of research questions. Whenever researchers use NHST they should justify its use, and publish pre-study power calculations and effect sizes, including negative findings. Hypothesis-testing studies should be pre-registered and optimally raw data published. The current statistics lite educational approach for students that has sustained the widespread, spurious use of NHST should be phased out.
Efficacy of a proactive health and safety risk management system in the fire service.
Poplin, Gerald S; Griffin, Stephanie; Pollack Porter, Keshia; Mallett, Joshua; Hu, Chengcheng; Day-Nash, Virginia; Burgess, Jefferey L
2018-04-16
This study evaluated the efficacy of a fire department proactive risk management program aimed at reducing firefighter injuries and their associated costs. Injury data were collected for the intervention fire department and a contemporary control department. Workers' compensation claim frequency and costs were analyzed for the intervention fire department only. Total, exercise, patient transport, and fireground operations injury rates were calculated for both fire departments. There was a post-intervention average annual reduction in injuries (13%), workers' compensation injury claims (30%) and claims costs (21%). Median monthly injury rates comparing the post-intervention to the pre-intervention period did not show statistically significant changes in either the intervention or control fire department. Reduced workers' compensation claims and costs were observed following the risk management intervention, but changes in injury rates were not statistically significant.
Dermatoglyphic features in patients with multiple sclerosis
Sabanciogullari, Vedat; Cevik, Seyda; Karacan, Kezban; Bolayir, Ertugrul; Cimen, Mehmet
2014-01-01
Objective: To examine dermatoglyphic features to clarify the implicated genetic predisposition in the etiology of multiple sclerosis (MS). Methods: The study was conducted between January and December 2013 in the Departments of Anatomy and Neurology, Cumhuriyet University School of Medicine, Sivas, Turkey. The dermatoglyphic data of 61 patients and of a control group consisting of 62 healthy adults, obtained with a digital scanner, were transferred to a computer environment. The ImageJ program was used, and the atd, dat and adt angles, the a-b ridge count, the pattern types of all fingers, and ridge counts were calculated. Results: In both hands of the patients with MS, the a-b ridge count and the ridge counts of all fingers were increased, and the differences in these values were statistically significant. There was also a statistically significant increase in the dat angle in both hands of the MS patients. In contrast, there was no statistically significant difference between the groups in terms of dermal ridge patterns, and the most frequent pattern in both groups was the ulnar loop. Conclusions: Aberrations in the distribution of dermatoglyphic patterns support a genetic predisposition in MS etiology. Individuals susceptible to multiple sclerosis may be identified by analyzing dermatoglyphic patterns. PMID:25274586
Kelley, George A.; Kelley, Kristi S.; Kohrt, Wendy M.
2013-01-01
Objective. Examine the effects of exercise on femoral neck (FN) and lumbar spine (LS) bone mineral density (BMD) in premenopausal women. Methods. Meta-analysis of randomized controlled exercise trials ≥24 weeks in premenopausal women. Standardized effect sizes (g) were calculated for each result and pooled using random-effects models, Z score alpha values, 95% confidence intervals (CIs), and number needed to treat (NNT). Heterogeneity was examined using Q and I². Moderator and predictor analyses using mixed-effects ANOVA and simple metaregression were conducted. Statistical significance was set at P ≤ 0.05. Results. Statistically significant improvements were found for both FN (7 g's, 466 participants, g = 0.342, 95% CI = 0.132, 0.553, P = 0.001, Q = 10.8, P = 0.22, I² = 25.7%, NNT = 5) and LS (6 g's, 402 participants, g = 0.201, 95% CI = 0.009, 0.394, P = 0.04, Q = 3.3, P = 0.65, I² = 0%, NNT = 9) BMD. A trend for greater benefits in FN BMD was observed for studies published in countries other than the United States and for those who participated in home versus facility-based exercise. Statistically significant, or a trend for statistically significant, associations were observed for 7 different moderators and predictors, 6 for FN BMD and 1 for LS BMD. Conclusions. Exercise benefits FN and LS BMD in premenopausal women. The observed moderators and predictors deserve further investigation in well-designed randomized controlled trials. PMID:23401684
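The standardized effect sizes (g) referred to here are Hedges-corrected standardized mean differences. One common way to compute a single g and an approximate variance from summary statistics (the trial values below are hypothetical):

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample correction and
    a common large-sample approximation for its variance."""
    df = n1 + n2 - 2
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / s_pooled                      # Cohen's d
    j = 1.0 - 3.0 / (4.0 * df - 1.0)              # small-sample correction
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2.0 * (n1 + n2))
    return j * d, j**2 * var_d

# Hypothetical exercise-vs-control change in femoral neck BMD (g/cm^2).
g, var_g = hedges_g(0.012, 0.030, 35, 0.002, 0.028, 33)
ci = 1.96 * np.sqrt(var_g)
print(f"g = {g:.3f}, 95% CI = {g - ci:.3f} to {g + ci:.3f}")
```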
Predictors of surgeons' efficiency in the operating rooms.
Nakata, Yoshinori; Watanabe, Yuichi; Narimatsu, Hiroto; Yoshimura, Tatsuya; Otake, Hiroshi; Sawa, Tomohiro
2017-02-01
The sustainability of the Japanese healthcare system is questionable because of a huge fiscal debt. One of the solutions is to improve the efficiency of healthcare. The purpose of this study is to determine what factors are predictive of surgeons' efficiency scores. The authors collected data on all surgical procedures performed at Teikyo University Hospital from April 1 through September 30 in 2013-2015. An output-oriented Charnes-Cooper-Rhodes (CCR) model of data envelopment analysis was employed to calculate each surgeon's efficiency score. Seven independent variables that may predict the efficiency scores were selected: experience, medical school, surgical volume, gender, academic rank, surgical specialty, and the surgical fee schedule. Multiple regression analysis using a random-effects Tobit model was applied to the panel data. Data from a total of 8722 surgical cases were obtained over the 18-month study period. The authors analyzed 134 surgeons. The only statistically significant coefficients were surgical specialty and surgical fee schedule (p = 0.000 and p = 0.016, respectively). Experience had some positive association with efficiency scores but did not reach statistical significance (p = 0.062). The other coefficients were not statistically significant. These results demonstrate that the surgical reimbursement system, not surgeons' personal characteristics, is a significant predictor of surgeons' efficiency.
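The output-oriented CCR score solves a small linear program per decision-making unit (DMU). A minimal envelopment-form sketch with one input and one output and made-up data (the study's DMUs were surgeons, with different inputs and outputs):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_efficiency(X, Y, o):
    """Output-oriented CCR efficiency of DMU o (envelopment form):
    max phi s.t. X @ lam <= x_o, Y @ lam >= phi * y_o, lam >= 0,
    where X is (m inputs x n DMUs) and Y is (s outputs x n DMUs).
    Returns the efficiency score 1/phi (1 = efficient)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(n + 1)
    c[0] = -1.0                                   # maximize phi
    A_in = np.hstack([np.zeros((m, 1)), X])       # sum_j lam_j x_ij <= x_io
    A_out = np.hstack([Y[:, [o]], -Y])            # phi*y_ro - sum_j lam_j y_rj <= 0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([X[:, o], np.zeros(s)]),
                  bounds=[(0, None)] * (n + 1))
    return 1.0 / res.x[0]

# Hypothetical data: one input (operating-room minutes), one output (surgical fee).
X = np.array([[120.0, 90.0, 150.0, 60.0]])
Y = np.array([[100.0, 90.0, 110.0, 70.0]])
for o in range(4):
    print(f"DMU {o}: efficiency = {ccr_output_efficiency(X, Y, o):.3f}")
```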
Clinical evaluation of selected Yogic procedures in individuals with low back pain
Pushpika Attanayake, A. M.; Somarathna, K. I. W. K.; Vyas, G. H.; Dash, S. C.
2010-01-01
The present study was conducted to evaluate selected yogic procedures in individuals with low back pain. Back pain is one of the commonest presentations encountered in clinical practice, which motivated the present study. It has been calculated that more than three-quarters of the world's population experience back pain at some time in their lives. Twelve patients were selected and randomly divided into two groups, viz., group A (yogic group) and group B (control group). Advice on lifestyle and diet was given to all patients. The effect of the therapy was assessed subjectively and objectively. The scores for the yogic group and the control group were individually analyzed before and after treatment, and the values were compared using standard statistical protocols. Yogic intervention revealed 79% relief in both subjective and objective parameters (i.e., 7 out of 14 parameters showed statistically highly significant results (P < 0.01), while 4 showed significant results (P < 0.05)). The comparison of the yogic group with the control group showed 79% relief in both subjective and objective parameters (i.e., a total of 6 out of 14 parameters showed statistically highly significant results (P < 0.01), while 5 showed significant results (P < 0.05)). PMID:22131719
WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.
Grech, Victor
2018-03-01
The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
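In any environment the computation is the same: SE = s/√n and the CI uses a t critical value (in Excel, functions such as STDEV.S and CONFIDENCE.T cover these steps). A minimal Python sketch with hypothetical data:

```python
import numpy as np
from scipy import stats

# Hypothetical sample; the paper builds the same quantities in Excel.
x = np.array([4.1, 5.0, 4.7, 5.3, 4.9, 5.6, 4.4, 5.1])

n = x.size
mean = x.mean()
se = x.std(ddof=1) / np.sqrt(n)                # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)          # two-sided 95% critical value
print(f"mean = {mean:.2f}, SE = {se:.3f}, "
      f"95% CI = ({mean - t_crit*se:.2f}, {mean + t_crit*se:.2f})")
```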
Maćków, Anna; Małachowska-Sobieska, Monika; Demczuk-Włodarczyk, Ewa; Sidorowska, Marta; Szklarska, Alicja; Lipowicz, Anna
2014-01-01
The aim of the study was to assess the influence of neurophysiological hippotherapy on the transference of the centre of gravity (COG) among children with cerebral palsy (CP). The study involved 19 children aged 4-13 years suffering from CP who demonstrated an asymmetric (A/P) model of compensation. Body balance was studied with the Cosmogamma Balance Platform. An examination on this platform was performed before and after a session of neurophysiological hippotherapy. In order to compare the correlations and differences between the examinations, the results were analysed using Student's t-test for dependent samples, with p ≤ 0.05 as the level of statistical significance, and descriptive statistics were calculated. The mean position of the body's centre of gravity in the frontal plane (COG X) was 18.33 mm during the first examination, changing by 21.84 mm after neurophysiological hippotherapy towards unloading of the antigravity lower limb (p ≤ 0.0001). The other stabilographic parameters increased; however, only the change in the average speed of antero-posterior COG oscillation was statistically significant (p = 0.0354). One session of neurophysiological hippotherapy induced statistically significant changes in the position of the centre of gravity in the frontal plane and in the average speed of COG oscillation in the sagittal plane among CP children demonstrating an asymmetric model of compensation (A/P).
Soil carbon inventories under a bioenergy crop (switchgrass): Measurement limitations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garten, C.T. Jr.; Wullschleger, S.D.
Approximately 5 yr after planting, coarse root carbon (C) and soil organic C (SOC) inventories were compared under different types of plant cover at four switchgrass (Panicum virgatum L.) production field trials in the southeastern USA. There was significantly more coarse root C under switchgrass (Alamo variety) and forest cover than tall fescue (Festuca arundinacea Schreb.), corn (Zea mays L.), or native pastures of mixed grasses. Inventories of SOC under switchgrass were not significantly greater than SOC inventories under other plant covers. At some locations the statistical power associated with ANOVA of SOC inventories was low, which raised questions about whether differences in SOC could be detected statistically. A minimum detectable difference (MDD) for SOC inventories was calculated. The MDD is the smallest detectable difference between treatment means once the variation, significance level, statistical power, and sample size are specified. The analysis indicated that a difference of ~50 mg SOC/cm² or 5 Mg SOC/ha, which is ~10 to 15% of existing SOC, could be detected with reasonable sample sizes and good statistical power. The smallest difference in SOC inventories that can be detected, and only with exceedingly large sample sizes, is ~2 to 3%. These measurement limitations have implications for monitoring and verification of proposals to ameliorate increasing global atmospheric CO₂ concentrations by sequestering C in soils.
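A Zar-style minimum detectable difference for comparing two treatment means in an ANOVA can be sketched as follows; the MSE, replicate number, and error degrees of freedom below are hypothetical, chosen only to land near the ~5 Mg SOC/ha scale discussed above:

```python
import numpy as np
from scipy import stats

def minimum_detectable_difference(mse, n, df_error, alpha=0.05, power=0.90):
    """Smallest difference between two treatment means detectable in an ANOVA
    with error mean square `mse`, `n` replicates per treatment, and `df_error`
    error degrees of freedom (Zar-style approximation)."""
    t_alpha = stats.t.ppf(1.0 - alpha / 2.0, df_error)   # two-sided alpha term
    t_beta = stats.t.ppf(power, df_error)                # one-sided power term
    return (t_alpha + t_beta) * np.sqrt(2.0 * mse / n)

# Hypothetical values: MSE in (Mg C/ha)^2, n plots per treatment.
print(f"MDD = {minimum_detectable_difference(mse=9.0, n=8, df_error=28):.2f} Mg C/ha")
```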
Riffel, Philipp; Michaely, Henrik J; Morelli, John N; Pfeuffer, Josef; Attenberger, Ulrike I; Schoenberg, Stefan O; Haneder, Stefan
2014-01-01
Implementation of DWI in the abdomen is challenging due to artifacts, particularly those arising from differences in tissue susceptibility. Two-dimensional, spatially-selective radiofrequency (RF) excitation pulses for single-shot echo-planar imaging (EPI), combined with a reduction of the FOV in the phase-encoding direction (i.e., zooming), decrease the number of k-space acquisition lines, significantly shortening the EPI echo train and potentially reducing susceptibility artifacts. The purpose was to assess the feasibility and image quality of a zoomed diffusion-weighted EPI (z-EPI) sequence in MR imaging of the pancreas, compared to conventional single-shot EPI (c-EPI). Twenty-three patients who had undergone an MRI study of the abdomen were included in this retrospective study. Examinations were performed on a 3T whole-body MR system (Magnetom Skyra, Siemens) equipped with a two-channel fully dynamic parallel transmit array (TimTX TrueShape, Siemens). The acquired sequences consisted of a conventional EPI DWI of the abdomen and a zoomed EPI DWI of the pancreas. For z-EPI, the standard sinc excitation was replaced with a two-dimensional spatially-selective RF pulse using an echo-planar transmit trajectory. Images were evaluated with regard to image blur, respiratory motion artifacts, diagnostic confidence, delineation of the pancreas, and overall scan preference. Additionally, ADC values of the pancreatic head, body, and tail were calculated and compared between sequences. The pancreas was better delineated in every case (23/23) with z-EPI than with c-EPI. In every case (23/23), both readers preferred z-EPI overall to c-EPI. With z-EPI there was statistically significantly less image blur (p<0.0001) and respiratory motion artifact (p<0.0001) compared to c-EPI. Diagnostic confidence was statistically significantly better with z-EPI (p<0.0001). No statistically significant differences in calculated ADC values were observed between the two sequences. Zoomed diffusion-weighted EPI leads to substantial image quality improvements, with reduction of susceptibility artifacts, in pancreatic DWI.
Verification of calculated skin doses in postmastectomy helical tomotherapy.
Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth
2011-10-01
To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
Futrakul, Sitthivuddhi; Deerojanawong, Jitladda; Prapphal, Nuanchan
2005-07-01
The objectives of this study were to identify possible risk factors for bronchial hyperresponsiveness (BHR) in children up to 5 years of age with wheezing-associated respiratory infection (WARI), and to study the prevalence of BHR. Children up to 5 years of age with WARI were enrolled in the study. The parents or caregivers of the children were asked about demographic data and clinical histories. Physical examination and clinical score assessment were performed. Pulmonary function tests, i.e., tidal breathing flow-volume (TBFV) analysis, were performed to measure tidal breathing parameters before and after salbutamol nebulization. If volume at peak tidal expiratory flow/expiratory tidal volume and time to peak expiratory flow/total expiratory time increased ≥20%, or tidal expiratory flow at 25% of tidal volume/peak tidal expiratory flow increased ≥20% after nebulization therapy, BHR was diagnosed. The number in the positive BHR group was used to calculate the prevalence of BHR, and clinical features were compared with those of the negative BHR group. Categorical data were analyzed for statistical significance (P < 0.05) by chi-square test, Fisher's exact test, or Student's t-test, as appropriate. Odds ratios (ORs) and 95% confidence intervals (CIs) were calculated for factors reaching statistical significance. One hundred and six wheezing children underwent pulmonary function tests before and after salbutamol nebulization. With the aforementioned criteria, 41 cases (38.7%) were diagnosed with BHR. History of reactive airway disease (OR, 6.31; 95% CI, 1.68-25), maternal history of asthma (OR, 3.45; 95% CI, 1.34-9), breastfeeding less than 3 months (OR, 3.18; 95% CI, 1.26-8.12), and passive smoking (OR, 3; 95% CI, 1.15-7.62) were significant risk factors for BHR. The eosinophil count was significantly higher in the BHR-positive group, particularly in children 1-5 years of age (P ≤ 0.01). Patchy infiltrates were more commonly found in patients with negative BHR, but the difference was not statistically significant. In conclusion, a history of reactive airway disease, maternal history of asthma, breastfeeding less than 3 months, and passive smoking were significant risk factors for BHR. Copyright 2005 Wiley-Liss, Inc.
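Odds ratios with Woolf (logit) confidence intervals come straight from the 2×2 counts; a minimal sketch with hypothetical counts, not the study's data:

```python
import numpy as np

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf logit method) from a 2x2 table:
    a = exposed cases, b = exposed non-cases, c = unexposed cases,
    d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se_log = np.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = np.exp(np.log(or_) - z * se_log)
    hi = np.exp(np.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: passive smoking among BHR-positive vs BHR-negative children.
print("OR = %.2f (95%% CI %.2f-%.2f)" % odds_ratio_ci(20, 21, 15, 50))
```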
Blood pressure and serum creatinine in obese female.
Asrin, M; Nessa, A; Hasan, M I; Das, R K
2015-01-01
Obesity is increasing in developed as well as in developing countries. This analytical cross-sectional study was carried out to document the relation between blood pressure, serum creatinine and body mass index in females, and to assess potential health differences between obese and normal-weight females. The study was done in the Department of Physiology, Mymensingh Medical College, Mymensingh, Bangladesh from July 2012 to June 2013. Seventy female volunteers served as subjects. Among them, 35 were of normal weight (BMI 18.5-24.9 kg/m²) and 35 were obese (BMI ≥30 kg/m²). A non-probability purposive sampling technique was used to select the subjects. Body mass index and blood pressure were measured following standard procedures. The serum creatinine level was estimated by an enzymatic colorimetric method. The results were calculated and analyzed using SPSS (Statistical Package for the Social Sciences, version 17.0), a scientific electronic calculator and Microsoft Excel. An unpaired t-test was applied to assess the significance of differences in serum creatinine and blood pressure levels in obese females; p values below 0.01 were considered highly significant and below 0.05 statistically significant. In the obese group, the mean±SE of systolic blood pressure, diastolic blood pressure and serum creatinine levels were 135.71±1.58 mmHg, 88.74±0.95 mmHg and 1.03±0.01 mg/dl, respectively, significant at the 1% level (p<0.0001). The examinations and biochemical investigations revealed that high BMI is significantly related to increased levels of serum creatinine and blood pressure in obese females, which indicates that obese subjects are prone to cardiovascular and metabolic risk.
A statistical model of operational impacts on the framework of the bridge crane
NASA Astrophysics Data System (ADS)
Antsev, V. Yu; Tolokonnikov, A. S.; Gorynin, A. D.; Reutov, A. A.
2017-02-01
The technical regulations of the Customs Union demand implementation of risk analysis of bridge crane operation at the design stage. A statistical model has been developed for performing random calculations of risks, allowing us to model possible operational influences on the bridge crane metal structure in their various combinations. The statistical model has been implemented in a software product for automated calculation of the risk of bridge crane failure.
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms, sometimes also referred to as "multistate" algorithms, model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
Crew, Page E; Rhodes, Nathaniel J; O'Donnell, J Nicholas; Miglis, Cristina; Gilbert, Elise M; Zembower, Teresa R; Qi, Chao; Silkaitis, Christina; Sutton, Sarah H; Scheetz, Marc H
2018-03-01
The purpose of this single-center, ecologic study is to characterize the relationship between facility-wide (FacWide) antibiotic consumption and incident health care facility-onset Clostridium difficile infection (HO-CDI). FacWide antibiotic consumption and incident HO-CDI were tallied on a monthly basis and standardized, from January 2013 through April 2015. Spearman rank-order correlation coefficients were calculated using matched-months analysis and a 1-month delay. Regression analyses were performed, with P < .05 considered statistically significant. FacWide analysis identified a matched-months correlation between ceftriaxone and HO-CDI (ρ = 0.44, P = .018). A unit of stem cell transplant recipients did not have a significant correlation between carbapenems and HO-CDI in matched months (ρ = 0.37, P = .098), but a significant correlation was observed when a 1-month lag was applied (ρ = 0.54, P = .014). Three statistically significant lag associations were observed between FacWide/unit-level antibiotic consumption and HO-CDI, and 1 statistically significant nonlagged association was observed facility-wide. Antibiotic consumption may convey extended ward-level risk for incident CDI, with both immediate and prolonged influence on incidence. Additional studies are needed to investigate the immediate and delayed associations between antibiotic consumption and C. difficile colonization, infection, and transmission at the hospital level. Published by Elsevier Inc.
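The matched-months and 1-month-lag correlations described above can be reproduced in a few lines. A minimal sketch with synthetic monthly series (illustrative values, not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic monthly series (illustration only): standardized antibiotic
# consumption and HO-CDI incidence over 12 consecutive months.
abx = np.array([42, 38, 51, 47, 55, 60, 58, 49, 53, 61, 57, 50], dtype=float)
cdi = np.array([3, 2, 4, 3, 5, 6, 5, 4, 4, 6, 5, 4], dtype=float)

rho, p = spearmanr(abx, cdi)                    # matched-months analysis
rho_lag, p_lag = spearmanr(abx[:-1], cdi[1:])   # consumption vs. next month's HO-CDI
print(f"matched: rho={rho:.2f}, P={p:.3f}; 1-month lag: rho={rho_lag:.2f}, P={p_lag:.3f}")
```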
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widesott, Lamberto, E-mail: widesott@yahoo.it; Pierelli, Alessio; Fiorino, Claudio
2011-08-01
Purpose: To compare intensity-modulated proton therapy (IMPT) and helical tomotherapy (HT) treatment plans for high-risk prostate cancer (HRPCa) patients. Methods and Materials: The plans of 8 patients with HRPCa treated with HT were compared with IMPT plans using a two quasi-lateral field setup (-100°; 100°) and optimized with the Hyperion treatment planning system. Both techniques were optimized to simultaneously deliver 74.2 Gy(RBE) (relative biologic effectiveness) in 28 fractions to planning target volumes (PTVs) 3-4 (prostate + proximal seminal vesicles), 65.5 Gy(RBE) to PTV2 (distal seminal vesicles and rectum/prostate overlap), and 51.8 Gy(RBE) to PTV1 (pelvic lymph nodes). Normal tissue complication probability (NTCP) calculations were performed for the rectum, and the generalized equivalent uniform dose (gEUD) was estimated for the bowel cavity, penile bulb and bladder. Results: Slightly better PTV coverage and homogeneity of the target dose distribution were found with IMPT: the percentage of PTV volume receiving ≥95% of the prescribed dose (V95%) was on average >97% in HT and >99% in IMPT. The conformity indexes were significantly lower for protons than for photons, and there was a statistically significant reduction of the IMPT dosimetric parameters, up to 50 Gy(RBE) for the rectum and bowel and 60 Gy(RBE) for the bladder. The NTCP values for the rectum were higher in HT for all the sets of parameters, but the gain was small and in only a few cases statistically significant. Conclusions: Comparable PTV coverage was observed. Based on the NTCP calculations, IMPT is expected to allow a small reduction in rectal toxicity, and a significant dosimetric gain with IMPT, both in the medium-dose and in the low-dose range in all OARs, was observed.
2012-01-01
Background Oestrogen and progestogen have the potential to influence gastro-intestinal motility; both are key components of hormone replacement therapy (HRT). Results of observational studies in women taking HRT rely on self-reporting of gastro-oesophageal symptoms, and the aetiology of gastro-oesophageal reflux disease (GORD) remains unclear. This study investigated the association between HRT and GORD in menopausal women using validated general practice records. Methods 51,182 menopausal women were identified using the UK General Practice Research Database between 1995 and 2004. Of these, 8,831 were matched with and without hormone use. Odds ratios (ORs) were calculated for GORD and proton-pump inhibitor (PPI) use in hormone and non-hormone users, adjusting for age, co-morbidities, and co-pharmacy. Results In unadjusted analysis, all forms of hormone use (oestrogen-only, tibolone, combined HRT and progestogen) were statistically significantly associated with GORD. In adjusted models, this association remained statistically significant for oestrogen-only treatment (OR 1.49; 95% CI 1.18–1.89). Unadjusted analysis showed a statistically significant association between PPI use and oestrogen-only and combined HRT treatment. When adjusted for covariates, oestrogen-only treatment remained significant (OR 1.34; 95% CI 1.03–1.74). Findings from the adjusted model also demonstrated greater use of PPIs by progestogen users (OR 1.50; 95% CI 1.01–2.22). Conclusions This first large cohort study of the association between GORD and HRT found a statistically significant association of oestrogen-only hormone treatment with GORD and PPI use. This should be further investigated using prospective follow-up to validate the strength of the association and describe its clinical significance. PMID:22642788
O'Leary, Neil; Chauhan, Balwantray C; Artes, Paul H
2012-10-01
To establish a method for estimating the overall statistical significance of visual field deterioration from an individual patient's data, and to compare its performance to pointwise linear regression. The Truncated Product Method was used to calculate a statistic S that combines evidence of deterioration from individual test locations in the visual field. The overall statistical significance (P value) of visual field deterioration was inferred by comparing S with its permutation distribution, derived from repeated reordering of the visual field series. Permutation of pointwise linear regression (PoPLR) and pointwise linear regression were evaluated in data from patients with glaucoma (944 eyes, median mean deviation -2.9 dB, interquartile range: -6.3, -1.2 dB) followed for more than 4 years (median 10 examinations over 8 years). False-positive rates were estimated from randomly reordered series of this dataset, and hit rates (proportion of eyes with significant deterioration) were estimated from the original series. The false-positive rates of PoPLR were indistinguishable from the corresponding nominal significance levels and were independent of baseline visual field damage and length of follow-up. At P < 0.05, the hit rates of PoPLR were 12, 29, and 42%, at the fifth, eighth, and final examinations, respectively, and at matching specificities they were consistently higher than those of pointwise linear regression. In contrast to population-based progression analyses, PoPLR provides a continuous estimate of statistical significance for visual field deterioration individualized to a particular patient's data. This allows close control over specificity, essential for monitoring patients in clinical practice and in clinical trials.
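A minimal sketch of the general approach described above (not the authors' exact implementation): combine one-sided pointwise regression p-values with a truncated product statistic, then derive an overall P value by permuting the order of visits.

```python
import numpy as np
from scipy.stats import linregress

def truncated_product_stat(series, tau=0.05):
    """S combines evidence of deterioration across test locations:
    sum of -log(p) over locations whose one-sided pointwise regression
    p-value (negative slope = deterioration) falls at or below tau."""
    t = np.arange(series.shape[1], dtype=float)
    S = 0.0
    for loc in series:                      # series: (n_locations, n_visits)
        fit = linregress(t, loc)
        p1 = fit.pvalue / 2 if fit.slope < 0 else 1 - fit.pvalue / 2
        if p1 <= tau:
            S -= np.log(p1)
    return S

def poplr_pvalue(series, n_perm=1000, seed=0):
    """Overall P value: compare observed S against S from randomly
    reordered visit sequences (permutation null distribution)."""
    rng = np.random.default_rng(seed)
    s_obs = truncated_product_stat(series)
    n_visits = series.shape[1]
    s_null = np.array([truncated_product_stat(series[:, rng.permutation(n_visits)])
                       for _ in range(n_perm)])
    return (1 + np.sum(s_null >= s_obs)) / (n_perm + 1)
```

Permuting whole visits (columns) rather than individual locations preserves the spatial correlation between test locations, which is what makes the permutation null valid for a single patient's series.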
Duerden, E G; Foong, J; Chau, V; Branson, H; Poskitt, K J; Grunau, R E; Synnes, A; Zwicker, J G; Miller, S P
2015-08-01
Adverse neurodevelopmental outcome is common in children born preterm. Early sensitive predictors of neurodevelopmental outcome, such as MR imaging, are needed. Tract-based spatial statistics, a diffusion MR imaging analysis method, performed at term-equivalent age (40 weeks) is a promising predictor of neurodevelopmental outcomes in children born very preterm. We sought to determine the association of tract-based spatial statistics findings before term-equivalent age with neurodevelopmental outcome at 18 months' corrected age. Of 180 neonates (born at 24-32 weeks' gestation) enrolled, 153 had DTI acquired early, at 32 weeks' postmenstrual age, and 105 had DTI acquired later, at 39.6 weeks' postmenstrual age. Voxelwise statistics were calculated by performing tract-based spatial statistics on DTI that was aligned to age-appropriate templates. At 18 months' corrected age, 166 neonates underwent neurodevelopmental assessment using the Bayley Scales of Infant Development, 3rd ed, and the Peabody Developmental Motor Scales, 2nd ed. Tract-based spatial statistics analysis applied to early-acquired scans (postmenstrual age of 30-33 weeks) indicated a limited significant positive association between motor skills and axial diffusivity and radial diffusivity values in the corpus callosum, internal and external/extreme capsules, and midbrain (P < .05, corrected). In contrast, for term scans (postmenstrual age of 37-41 weeks), tract-based spatial statistics analysis showed a significant relationship of both motor and cognitive scores with fractional anisotropy in the corpus callosum and corticospinal tracts (P < .05, corrected). Tract-based spatial statistics in a limited subset of neonates (n = 22) scanned at <30 weeks did not significantly predict neurodevelopmental outcomes. The strength of the association between fractional anisotropy values and neurodevelopmental outcome scores increased from early- to late-acquired scans in preterm-born neonates, consistent with brain dysmaturation in this population. © 2015 by American Journal of Neuroradiology.
Effective field theory of statistical anisotropies for primordial bispectrum and gravitational waves
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rostami, Tahereh; Karami, Asieh; Firouzjahi, Hassan, E-mail: t.rostami@ipm.ir, E-mail: karami@ipm.ir, E-mail: firouz@ipm.ir
2017-06-01
We present effective field theory (EFT) studies of primordial statistical anisotropies in models of anisotropic inflation. The general action in unitary gauge is presented to calculate the leading interactions among the gauge field fluctuations, the curvature perturbations and the tensor perturbations. The anisotropies in the scalar power spectrum and bispectrum are calculated, and the dependence of these anisotropies on the EFT couplings is presented. In addition, we calculate the statistical anisotropy in the tensor power spectrum and the scalar-tensor cross correlation. Our EFT approach incorporates anisotropies generated in models with a non-trivial speed for the gauge field fluctuations and a sound speed for scalar perturbations, such as in DBI inflation.
Raines, G.L.; Mihalasky, M.J.
2002-01-01
The U.S. Geological Survey (USGS) is proposing to conduct a global mineral-resource assessment using geologic maps, significant deposits, and exploration history as minimal data requirements. Using a geologic map and locations of significant pluton-related deposits, the pluton-related-deposit tract maps from the USGS national mineral-resource assessment have been reproduced with GIS-based analysis and modeling techniques. Agreement, kappa, and Jaccard's C correlation statistics of 87%, 40%, and 28%, respectively, between the expert USGS and calculated tract maps have been achieved using a combination of weights-of-evidence and weighted logistic regression methods. Between the experts' and calculated maps, the ranking of states measured by total permissive area correlates at 84%. The disagreement between the experts and calculated results can be explained primarily by tracts defined by geophysical evidence not considered in the calculations, generalization of tracts by the experts, differences in map scales, and the experts' inclusion of large tracts that are arguably not permissive. This analysis shows that tracts for regional mineral-resource assessment approximating those delineated by USGS experts can be calculated using weights of evidence and weighted logistic regression, a geologic map, and the location of significant deposits. Weights of evidence and weighted logistic regression applied to a global geologic map could quickly provide a useful reconnaissance definition of tracts for mineral assessment that is tied to the data and is reproducible. © 2002 International Association for Mathematical Geology.
[Tracking study to improve basic academic ability in chemistry for freshmen].
Sato, Atsuko; Morone, Mieko; Azuma, Yutaka
2010-08-01
The aims of this study were to assess the basic academic ability of freshmen with regard to chemistry and to implement suitable educational guidance measures. At Tohoku Pharmaceutical University, basic academic ability examinations in chemistry are conducted for freshmen immediately after entrance into the college. From 2003 to 2009, the examination was conducted using the same questions, and the secular changes in the mean percentage of correct responses were statistically analyzed. An experience survey was also conducted on the 2007 and 2009 freshmen regarding chemical experiments at senior high school. Analysis of the basic academic ability examinations revealed a significant decrease in the mean percentage of correct responses after 2007. With regard to the answers for each question, there was a significant decrease in the percentage of correct answers for approximately 80% of questions. In particular, a marked decrease was observed for calculation questions involving percentages. A significant decrease was also observed in the number of students who had experience with chemical experiments in high school. However, notable results have been achieved through the implementation of practice incorporating calculation problems in order to improve calculation ability. Reduced learning of chemistry and a lack of experimental experience in high school may be contributory factors in the decrease in chemistry academic ability. In consideration of the professional ability demanded of pharmacists, the decrease in calculation ability should be regarded as a serious issue, and suitable measures for improving calculation ability are urgently required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Power, S.; Mirza, M.; Thakorlal, A.
Purpose: This prospective pilot study was undertaken to evaluate the feasibility and effectiveness of using a radiation-absorbing shield to reduce operator dose from scatter during lower limb endovascular procedures. Materials and Methods: A commercially available bismuth shield system (RADPAD) was used. Sixty consecutive patients undergoing lower limb angioplasty were included. Thirty procedures were performed without the RADPAD (control group) and thirty with the RADPAD (study group). Two separate methods were used to measure dose to a single operator. Thermoluminescent dosimeter (TLD) badges were used to measure hand, eye, and unshielded body dose. A direct dosimeter with digital readout was also used to measure eye and unshielded body dose. To allow for variation between the control and study groups, dose per unit time was calculated. Results: TLD results demonstrated a significant reduction in median body dose per unit time for the study group compared with controls (p = 0.001), corresponding to a mean dose reduction rate of 65%. Median eye and hand dose per unit time were also reduced in the study group compared with the control group; however, this was not statistically significant (p = 0.081 for eye, p = 0.628 for hand). Direct dosimeter readings also showed a statistically significant reduction in median unshielded body dose rate for the study group compared with controls (p = 0.037). Eye dose rate was reduced for the study group, but this was not statistically significant (p = 0.142). Conclusion: Initial results are encouraging. Use of the shield resulted in a statistically significant reduction in unshielded dose to the operator's body. Measured doses to the eye and hand of the operator were also reduced but did not reach statistical significance in this pilot study.
Comparison of the flexural strength of six reinforced restorative materials.
Cohen, B I; Volovich, Y; Musikant, B L; Deutsch, A S
2001-01-01
This study calculated the flexural strength of six reinforced restorative materials and demonstrated that flexural strength values can be determined simply by using physical parameters (diametral tensile strength and Young's modulus values) that are easily determined experimentally. A one-way ANOVA demonstrated a statistically significant difference between the two reinforced glass ionomers and the four composite resin materials, with the composite resins being stronger than the glass ionomers.
NASA Astrophysics Data System (ADS)
Nam, Kyoung Won; Kim, In Young; Kang, Ho Chul; Yang, Hee Kyung; Yoon, Chang Ki; Hwang, Jeong Min; Kim, Young Jae; Kim, Tae Yun; Kim, Kwang Gi
2012-10-01
Accurate measurement of binocular misalignment between the two eyes is important for proper preoperative management, surgical planning, and postoperative evaluation of patients with strabismus. In this study, we propose a new computerized diagnostic algorithm that can calculate the angle of binocular eye misalignment photographically by using a dedicated three-dimensional eye model mimicking the structure of the natural human eye. To evaluate the performance of the proposed algorithm, eight healthy volunteers and eight individuals with strabismus were recruited; the horizontal deviation angle, vertical deviation angle, and angle of eye misalignment were calculated, and the angular differences between the healthy and strabismus groups were evaluated using the nonparametric Mann-Whitney test and the Pearson correlation test. The experimental results demonstrated a statistically significant difference between the healthy and strabismus groups (p = 0.015 < 0.05), but no statistically significant difference between the proposed method and the Krimsky test (p = 0.912 > 0.05). The measurements of the two methods were highly correlated (r = 0.969, p < 0.05). From the experimental results, we believe that the proposed diagnostic method has the potential to be a diagnostic tool that measures the physical disorder of the human eye to diagnose non-invasively the severity of strabismus.
Quality assessment of butter cookies applying multispectral imaging
Andresen, Mette S; Dissing, Bjørn S; Løje, Hanne
2013-01-01
A method for characterization of butter cookie quality by assessing surface browning and water content using multispectral images is presented. Based on evaluations of the browning of butter cookies, cookies were manually divided into groups. From this categorization, reference values were calculated for a statistical prediction model correlating multispectral images with a browning score. The browning score is calculated as a function of oven temperature and baking time and is presented as a quadratic response surface. The investigated process window comprised baking times of 4–16 min and oven temperatures of 160–200°C in a forced-convection electrically heated oven. In addition to the browning score, a model for predicting the average water content based on the same images is presented. This shows how multispectral images of butter cookies may be used for the assessment of different quality parameters. Statistical analysis showed that the wavelengths most significant for browning prediction were in the interval 400–700 nm, whereas the wavelengths significant for water prediction were primarily located in the near-infrared spectrum. The water prediction model was found to estimate the average water content with an absolute error of 0.22%. From the images it was also possible to follow the browning and drying propagation from the cookie edge toward the center. PMID:24804036
Cardoso, F C; Sears, W; LeBlanc, S J; Drackley, J K
2011-12-01
The objective of the study was to compare 3 methods for calculating the area under the curve (AUC) for plasma glucose and nonesterified fatty acids (NEFA) after an intravenous epinephrine (EPI) challenge in dairy cows. Cows were assigned to 1 of 6 dietary niacin treatments in a completely randomized 6 × 6 Latin square with an extra period to measure carryover effects. Periods consisted of a 7-d (d 1 to 7) adaptation period followed by a 7-d (d 8 to 14) measurement period. On d 12, cows received an i.v. infusion of EPI (1.4 μg/kg of BW). Blood was sampled at -45, -30, -20, -10, and -5 min before EPI infusion and at 2.5, 5, 10, 15, 20, 30, 45, 60, 90, and 120 min after. The AUC was calculated by incremental area, positive incremental area, and total area using the trapezoidal rule. The 3 methods resulted in different statistical inferences. When comparing the 3 methods for the NEFA and glucose responses, no significant differences among treatments and no interactions between treatment and AUC method were observed, but the effect of method itself was statistically significant for both glucose and NEFA. Our results suggest that the positive incremental method and the total area method gave similar results and interpretation but differed from the incremental area method. Furthermore, the 3 methods evaluated can lead to different results and statistical inferences for glucose and NEFA AUC after an EPI challenge. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
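The three AUC definitions compared above can be written down directly with the trapezoidal rule. A minimal sketch, assuming the baseline is the mean of the pre-infusion samples; the response values are illustrative, not the study's data.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal rule (local helper to avoid NumPy version differences)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def auc_three_ways(t, y):
    """Three AUC definitions for a post-challenge curve.
    Baseline = mean of the pre-infusion samples (t < 0)."""
    base = y[t < 0].mean()
    m = t >= 0
    total = trapz(y[m], t[m])                                # total area
    incremental = trapz(y[m] - base, t[m])                   # net area above/below baseline
    positive = trapz(np.clip(y[m] - base, 0.0, None), t[m])  # positive incremental area
    return total, incremental, positive

# Illustrative glucose-like response (mg/dL) at the abstract's sampling times (min)
t = np.array([-45, -30, -20, -10, -5, 2.5, 5, 10, 15, 20, 30, 45, 60, 90, 120.0])
y = np.array([60, 62, 61, 63, 62, 80, 95, 110, 105, 98, 85, 75, 68, 64, 62.0])
print(auc_three_ways(t, y))
```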
Bach, P M; McCarthy, D T; Deletic, A
2010-01-01
The management of stormwater pollution has placed particular emphasis on the first flush phenomenon. However, the definition and current methods of analysis of the phenomenon contain serious limitations, the most important being their inability to capture a possible impact of the event size (total event volume) on the first flush. This paper presents the development of a novel approach to defining and assessing the first flush that should overcome these problems. The phenomenon is present in a catchment if the decrease in pollutant concentration with the absolute cumulative volume of runoff from the catchment is statistically significant. Using data from seven diverse catchments around Melbourne, Australia, changes in pollutant concentrations for Total Suspended Solids (TSS) and Total Nitrogen (TN) were calculated over the absolute cumulative runoff and aggregated from a collection of different storm events. Due to the discrete nature of the water quality data, each concentration was calculated as a flow-weighted average at 2 mm runoff volume increments. The aggregated concentrations recorded in each increment (termed a 'slice' of runoff) were statistically compared to each other across the absolute cumulative runoff volume. A first flush is then defined as the volume at which concentrations reach the 'background concentration' (i.e., the statistically significant minimum). Initial results clearly highlight first flush and background concentrations in all but one catchment, supporting the validity of this new approach. Future work will need to address factors which will help assess the first flush's magnitude and volume. Sensitivity testing and correlation with catchment characteristics should also be undertaken.
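A minimal sketch of the slicing step described above, assuming discrete samples with paired concentration and flow measurements; the function name and pooling details are illustrative, not the authors' code.

```python
import numpy as np

def slice_concentrations(cum_runoff_mm, conc, flow, width=2.0):
    """Flow-weighted mean concentration per 'slice' of absolute cumulative
    runoff (default 2 mm increments), pooled across storm events."""
    edges = np.arange(0.0, cum_runoff_mm.max() + width, width)
    mids, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (cum_runoff_mm >= lo) & (cum_runoff_mm < hi)
        if m.any():
            mids.append((lo + hi) / 2.0)
            means.append(np.sum(conc[m] * flow[m]) / np.sum(flow[m]))
    return np.array(mids), np.array(means)

# A first flush would appear as significantly elevated concentrations in the
# early slices relative to the eventual 'background' minimum.
```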
Geronikolou, Styliani; Zimeras, Stelios; Davos, Constantinos H; Michalopoulos, Ioannis; Tsitomeneas, Stephanos
2014-01-01
The impact of electromagnetic fields on health is of increasing scientific interest. The aim of this study was to examine how the Drosophila melanogaster animal model is affected when exposed to portable or mobile phone fields. Two experiments were designed and performed under the same laboratory conditions. Insect cultures were exposed to the near field of a 2G mobile phone (in both mobile phone generations studied, the transmission of voice signals is served by 2G technology, since the GSM 2G networks support and complement the 3G wide band in parallel) and of a 1880 MHz cordless phone, both digitally modulated by human voice. Statistical comparison of the egg laying of the second-generation exposed and non-exposed cultures showed limited statistical significance for the cordless-phone-exposed culture and statistical significance for the 900 MHz-exposed insects. We calculated, simulated and illustrated in three-dimensional figures the near fields of radiation inside the experimental vials and their difference. Comparison of the power of the two fields showed that the difference between them becomes null when the experimental cylinder radius and the height of the antenna increase. Our results suggest a possible radiofrequency sensitivity difference in insects, which may be due to the distance from the antenna or to unexplored intimate factors. Comparing the near fields of the two frequency bands, we see similar but not identical geometry in length and height from the antenna, and that lower frequencies tend to lead to increased radiofrequency effects.
Glaister, Karen
2005-09-01
The ability of nurses to perform accurate drug dosage calculations has repercussions for patients' well-being. How best to assist nurses in developing competency in this area is paramount. This paper presents findings of a study conducted with undergraduate nurses to determine the effect of three instructional approaches on the learning of this skill. The quasi-experimental study exposed participants to one of three instructional approaches: integrative learning, computerised learning, and a combination of integrative and computerised learning. Quantitative and qualitative approaches were used to explore differences between the instructional approaches and gain further understanding of the learning process. There was no statistically significant difference between the three instructional approaches on knowledge acquisition and transfer measures, other than the measure of procedural knowledge, which was significant (F(2,47) = 3.33, p < .044). A least-significant-difference post hoc test (alpha = 0.10) indicated that computerised learning was significantly more effective in developing procedural knowledge. The provision of instructional strategies which facilitate the development of conditional knowledge and automaticity is necessary for competency development in dosage calculations. Furthermore, the curriculum must incorporate authentic tasks and permit time to support competency attainment.
The Relationship between Zinc Levels and Autism: A Systematic Review and Meta-analysis.
Babaknejad, Nasim; Sayehmiri, Fatemeh; Sayehmiri, Kourosh; Mohamadkhani, Ashraf; Bahrami, Somaye
2016-01-01
Autism is a complex, behaviorally defined disorder. A relationship between zinc (Zn) levels in autistic patients and the development of pathogenesis has been suggested, but the findings are inconclusive. The present study was conducted to estimate this association using meta-analysis. Using a fixed effect model, twelve articles published from 1978 to 2012 were selected by searching Google Scholar, PubMed, ISI Web of Science, and Scopus, and the information was analyzed. I² statistics were calculated to examine heterogeneity. The information was analyzed using R and STATA Ver. 12.2. There was no statistically significant difference in hair, nail, and teeth Zn levels between controls and autistic patients: -0.471 (95% confidence interval (95% CI): -1.172 to 0.231). There was a statistically significant difference in plasma Zn concentration between autistic patients and healthy controls: -0.253 (95% CI: -0.498 to -0.007). Using a random effect model, the overall integration of data from the two groups was -0.414 (95% CI: -0.878 to -0.051). Based on sensitivity analysis, zinc supplements may be useful as nutritional therapy for autistic patients.
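The fixed-effect pooling and I² heterogeneity statistic used above follow standard inverse-variance formulas. A minimal sketch with hypothetical study-level effects, not the meta-analysis's actual inputs:

```python
import numpy as np

def fixed_effect_pool(effects, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2."""
    d = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (d - pooled) ** 2)                          # Cochran's Q
    i2 = max(0.0, (q - (len(d) - 1)) / q) * 100.0 if q > 0 else 0.0  # I^2 (%)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical standardized mean differences and variances from 4 studies
smd = [-0.31, -0.12, -0.45, -0.20]
var = [0.020, 0.015, 0.030, 0.025]
pooled, ci, i2 = fixed_effect_pool(smd, var)
print(f"pooled SMD = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), I^2 = {i2:.0f}%")
```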
Tiano, Ana Valéria Pagliari; Moimaz, Suzely Adas Saliba; Saliba, Orlando; Saliba, Nemre Adas
2009-01-01
This study determined the prevalence of cavitated caries lesions (CCL) and early childhood caries (ECC), and the contribution of some variables, in children up to 36 months of age attending daycare centers in municipalities with different fluoride levels in the water supply: AFC (adequate fluoride content) and LFC (low fluoride content). After approval by the Ethics Committee, the parents were interviewed. The children were clinically examined using the codes and criteria established by the WHO (World Health Organization) and the ADA (American Dental Association). Fisher's exact test (p<0.05) was applied for statistical analysis of the data. The dmft indices calculated in the LFC and AFC municipalities were 0.57 and 0.68, respectively. Considering all children examined, 17.6% presented CCL and 33.8% ECC. Economic classification, mother's education level and duration of breastfeeding were statistically significant with regard to CCL prevalence. Age group, duration of the habit of drinking milk before bedtime and age at which oral hygiene started were statistically significant with regard to ECC prevalence.
Effect of Different Ceramic Crown Preparations on Tooth Structure Loss: An In Vitro Study
NASA Astrophysics Data System (ADS)
Ebrahimpour, Ashkan
Objective: To quantify and compare the amount of tooth-structure reduction following full-coverage preparations for crowns of porcelain-fused-to-metal, lithium disilicate glass-ceramic and yttria-stabilized tetragonal zirconia polycrystalline materials for three tooth morphologies. Methods: Groups of resin teeth of different morphologies were individually weighed to high precision, then prepared following the preparation guidelines. The teeth were re-weighed after preparation and the amount of structural reduction was calculated. Statistical analyses were performed to determine whether there was a significant difference among the groups. Results: The amount of tooth reduction for zirconia crown preparations was the lowest and statistically different from that of the other two materials. No statistically significant difference was found between the amounts of reduction for porcelain-fused-to-metal and lithium disilicate glass-ceramic crowns. Conclusion: Within the limitations of this study, more tooth structure can be saved when utilizing zirconia full-coverage restorations compared with lithium disilicate glass-ceramic and porcelain-fused-to-metal crowns in maxillary central incisors, first premolars and first molars.
[Statistical analysis of German radiologic periodicals: developmental trends in the last 10 years].
Golder, W
1999-09-01
To identify which statistical tests are applied in German radiological publications, to what extent their use has changed during the last decade, and which factors might be responsible for this development. The major articles published in "ROFO" and "DER RADIOLOGE" during 1988, 1993 and 1998 were reviewed for statistical content. The contributions were classified by principal focus and radiological subspecialty. The methods used were assigned to descriptive, basic and advanced statistics. Sample size, significance level and power were established. The use of experts' assistance was monitored. Finally, we calculated the so-called cumulative accessibility of the publications. 525 contributions were found to be eligible. In 1988, 87% used descriptive statistics only, 12.5% basic, and 0.5% advanced statistics. The corresponding figures in 1993 and 1998 are 62 and 49%, 32 and 41%, and 6 and 10%, respectively. Statistical techniques were most likely to be used in research on musculoskeletal imaging and in articles dedicated to MRI. Six basic categories of statistical methods account for the complete statistical analysis appearing in 90% of the articles. ROC analysis is the single most common advanced technique. Authors increasingly make use of statistical experts' assistance and programs. During the last decade, the use of statistical methods in German radiological journals has fundamentally improved, both quantitatively and qualitatively. Presently, advanced techniques account for 20% of the pertinent statistical tests. This development seems to be promoted by the increasing availability of statistical analysis software.
Human Genetic Variation and Yellow Fever Mortality during 19th Century U.S. Epidemics
2014-01-01
We calculated the incidence, mortality, and case fatality rates for Caucasians and non-Caucasians during 19th century yellow fever (YF) epidemics in the United States and determined statistical significance for differences in the rates in different populations. We evaluated nongenetic host factors, including socioeconomic, environmental, cultural, demographic, and acquired immunity status, that could have influenced these differences. While differences in incidence rates were not significant between Caucasians and non-Caucasians, differences in mortality and case fatality rates were statistically significant for all epidemics tested (P < 0.01). Caucasians diagnosed with YF were 6.8 times more likely to succumb than non-Caucasians with the disease. No other major causes of death during the 19th century demonstrated a similar mortality skew toward Caucasians. Nongenetic host factors were examined and could not explain these large differences. We propose that the remarkably lower case mortality rates for individuals of non-Caucasian ancestry are the result of human genetic variation in loci encoding innate immune mediators. PMID:24895309
The Physics and Operation of Ultra-Submicron Length Semiconductor Devices.
1994-05-01
...300 meV heterostructure diode at T=300K with Fermi statistics and flat band conditions. In all of the calculations with a heterostructure barrier, once... [Figure 8: Self-consistent T=300K calculation with Fermi statistics showing the density and donor...]
SU-E-T-29: A Web Application for GPU-Based Monte Carlo IMRT/VMAT QA with Delivered Dose Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkerts, M; University of California, San Diego, La Jolla, CA; Graves, Y
Purpose: To enable an existing web application for GPU-based Monte Carlo (MC) 3D dosimetry quality assurance (QA) to compute "delivered dose" from linac logfile data. Methods: We added significant features to an IMRT/VMAT QA web application which is based on existing technologies (HTML5, Python, and Django). This tool interfaces with Python, C-code libraries, and command-line-based GPU applications to perform MC-based IMRT/VMAT QA. The web app automates many complicated aspects of interfacing clinical DICOM and logfile data with cutting-edge GPU software to run a MC dose calculation. The resultant web app is powerful, easy to use, and is able to re-compute both plan dose (from DICOM data) and delivered dose (from logfile data). Both dynalog and trajectory log file formats are supported. Users upload zipped DICOM RP, CT, and RD data and set the expected statistical uncertainty for the MC dose calculation. A 3D gamma index map, 3D dose distribution, gamma histogram, dosimetric statistics, and DVH curves are displayed to the user. Additionally, the user may upload the delivery logfile data from the linac to compute a "delivered dose" calculation and corresponding gamma tests. A comprehensive PDF QA report summarizing the results can also be downloaded. Results: We successfully improved a web app for a GPU-based QA tool that consists of logfile parsing, fluence map generation, CT image processing, GPU-based MC dose calculation, gamma index calculation, and DVH calculation. The result is an IMRT and VMAT QA tool that conducts an independent dose calculation for a given treatment plan and delivery log file. The system takes both DICOM data and logfile data to compute plan dose and delivered dose, respectively. Conclusion: We successfully improved a GPU-based MC QA tool to allow for logfile dose calculation. The high efficiency and accessibility will greatly facilitate IMRT and VMAT QA.
Junger, Axel; Brenck, Florian; Hartmann, Bernd; Klasen, Joachim; Quinzio, Lorenzo; Benson, Matthias; Michel, Achim; Röhrig, Rainer; Hempelmann, Gunter
2004-07-01
The most recent approach to estimating nursing resource consumption has led to the generation of the Nine Equivalents of Nursing Manpower use Score (NEMS). The objective of this prospective study was to establish a completely automatically generated calculation of the NEMS using a patient data management system (PDMS) database and to validate this approach by comparing the results with those of the conventional manual method. Prospective study. Operative intensive care unit of a university hospital. Patients admitted to the ICU between 24 July 2002 and 22 August 2002. Patients under the age of 16 years, and patients undergoing cardiovascular surgery or with burn injuries, were excluded. None. The NEMS of all patients was calculated automatically with the PDMS and manually by a physician in parallel. The results of the two methods were compared using the Bland-Altman approach, the intraclass correlation coefficient (ICC), and the kappa statistic. On 20 consecutive working days, the NEMS was calculated in 204 cases. The Bland-Altman analysis did not show significant differences in NEMS scoring between the two methods. The ICC (95% confidence interval) of 0.87 (0.84-0.90) revealed high inter-rater agreement between the PDMS and the physician. The kappa statistic showed good results (kappa > 0.55) for all NEMS items apart from the item "supplementary ventilatory care". This study demonstrates that automatic calculation of the NEMS is possible with high accuracy by means of a PDMS. This may lead to a decrease in the consumption of nursing resources.
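The Bland-Altman comparison used above reduces to a bias and 95% limits of agreement. A minimal sketch with hypothetical paired NEMS scores, not the study's data:

```python
import numpy as np

def bland_altman(auto_scores, manual_scores):
    """Bias and 95% limits of agreement between automatic (PDMS) and
    manual NEMS scores, following the Bland-Altman approach."""
    diff = np.asarray(auto_scores, float) - np.asarray(manual_scores, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired NEMS scores for a few cases
bias, loa = bland_altman([27, 32, 18, 24, 39], [27, 30, 18, 27, 39])
print(f"bias = {bias:.2f} points, limits of agreement = {loa}")
```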
Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond
2015-01-01
Background Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Material/Methods Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. Results ProTaper rotary files left less untouched root canal surface compared with manual preparation in coronal, middle, and apical sector (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Conclusions Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal. PMID:26092929
Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J
2017-12-01
qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
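A minimal sketch of the efficiency-weighted Cq idea as described, for an unpaired two-group design; the efficiencies and Cq values are hypothetical, and the helper names are illustrative rather than the authors' own.

```python
import numpy as np
from scipy import stats

def ew_cq(e, cq):
    """Efficiency-weighted Cq: log10(E) * Cq, keeping the analysis in log scale.
    e is the per-reaction amplification efficiency (e.g., 1.95-fold per cycle)."""
    return np.log10(np.asarray(e, float)) * np.asarray(cq, float)

def log10_rel_expr(e_tgt, cq_tgt, e_ref, cq_ref):
    """Per-sample log10 expression of target relative to reference.
    Since log10(N0) = const - log10(E)*Cq at a common threshold,
    log10(N0_tgt / N0_ref) = log10(E_ref)*Cq_ref - log10(E_tgt)*Cq_tgt."""
    return ew_cq(e_ref, cq_ref) - ew_cq(e_tgt, cq_tgt)

# Hypothetical unpaired design: control vs. treated, analyzed in log scale
ctrl = log10_rel_expr([1.95, 1.93, 1.96], [24.1, 24.4, 23.9],
                      [1.98, 1.97, 1.98], [18.2, 18.0, 18.3])
trt = log10_rel_expr([1.94, 1.95, 1.93], [22.0, 22.3, 21.8],
                     [1.97, 1.98, 1.96], [18.1, 18.2, 18.0])
t_stat, p = stats.ttest_ind(ctrl, trt)
fold_change = 10 ** (trt.mean() - ctrl.mean())   # back-transform only for reporting
print(f"t = {t_stat:.2f}, p = {p:.4f}, fold change = {fold_change:.2f}")
```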
Murchie, Brent; Tandon, Kanwarpreet; Hakim, Seifeldin; Shah, Kinchit; O'Rourke, Colin; Castro, Fernando J
2017-04-01
Colorectal cancer (CRC) screening guidelines likely over-generalize CRC risk, 35% of Americans are not up to date with screening, and there is growing incidence of CRC in younger patients. We developed a practical prediction model for high-risk colon adenomas in an average-risk population, including an expanded definition of high-risk polyps (≥3 nonadvanced adenomas), exposing higher-than-average-risk patients. We also compared results with previously created calculators. Patients aged 40 to 59 years undergoing first-time average-risk screening or diagnostic colonoscopies were evaluated. Risk calculators for advanced adenomas and high-risk adenomas were created based on age, body mass index, sex, race, and smoking history. Previously established calculators with similar risk factors were selected for comparison of the concordance statistic (c-statistic) and external validation. A total of 5063 patients were included. Advanced adenomas and high-risk adenomas were seen in 5.7% and 7.4% of the patient population, respectively. The c-statistic for our calculator was 0.639 for the prediction of advanced adenomas and 0.650 for high-risk adenomas. When applied to our population, all previous models had lower c-statistic results, although one performed similarly. Our model compares favorably with previously established prediction models. Age and body mass index were used as continuous variables, likely improving the c-statistic. The model also reports absolute predictive probabilities of advanced and high-risk polyps, allowing for more individualized risk assessment of CRC.
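The c-statistic for such a risk calculator is simply the ROC AUC of a logistic model. A minimal sketch with synthetic data: the predictors only mirror the abstract's variable types, and the outcome is random here, so the AUC lands near 0.5.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
# Synthetic predictors: age and BMI kept continuous (as in the abstract),
# sex/race/smoking as binary indicators; all values are illustrative only.
X = np.column_stack([
    rng.uniform(40, 59, n),          # age (years)
    rng.normal(27, 5, n),            # body mass index
    rng.integers(0, 2, (n, 3)),      # sex, race, smoking
])
y = rng.integers(0, 2, n)            # high-risk adenoma outcome (random here)

model = LogisticRegression(max_iter=1000).fit(X, y)
c_stat = roc_auc_score(y, model.predict_proba(X)[:, 1])  # c-statistic
print(f"c-statistic = {c_stat:.3f}")   # ~0.5 with random labels, as expected
```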
Wong, Oi Lei; Lo, Gladys G.; Chan, Helen H. L.; Wong, Ting Ting; Cheung, Polly S. Y.
2016-01-01
Background The purpose of this study is to statistically assess whether the bi-exponential intravoxel incoherent motion (IVIM) model characterizes the diffusion-weighted imaging (DWI) signal of malignant breast tumors better than the mono-exponential Gaussian diffusion model. Methods 3 T DWI data of 29 malignant breast tumors were retrospectively included. Linear least-squares mono-exponential fitting and segmented least-squares bi-exponential fitting were used for apparent diffusion coefficient (ADC) and IVIM parameter quantification, respectively. The F-test and the Akaike Information Criterion (AIC) were used to statistically assess the preference for the mono-exponential or bi-exponential model using region-of-interest (ROI)-averaged and voxel-wise analysis. Results For ROI-averaged analysis, 15 tumors were significantly better fitted by the bi-exponential function and 14 tumors exhibited mono-exponential behavior. The calculated ADC, D (true diffusion coefficient) and f (pseudo-diffusion fraction) showed no significant differences between mono-exponential and bi-exponential preferable tumors. Voxel-wise analysis revealed that 27 tumors contained more voxels exhibiting mono-exponential DWI decay, while only 2 tumors presented more bi-exponential decay voxels. ADC was consistently and significantly larger than D for both ROI-averaged and voxel-wise analysis. Conclusions Although the presence of the IVIM effect in malignant breast tumors could be suggested, statistical assessment shows that bi-exponential fitting does not necessarily better represent the DWI signal decay in breast cancer under a clinically typical acquisition protocol and signal-to-noise ratio (SNR). Our study indicates the importance of statistically examining the characteristics of the breast cancer DWI signal in practice. PMID:27709078
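A minimal sketch of mono-exponential versus segmented bi-exponential IVIM fitting, under one common parameterization of the IVIM model and an assumed b-value protocol (neither is necessarily the study's):

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic signal from an assumed IVIM model S/S0 = (1-f)exp(-bD) + f exp(-b(D+D*))
b = np.array([0, 50, 100, 150, 200, 400, 600, 800], dtype=float)  # s/mm^2
D_true, Dstar_true, f_true = 1.2e-3, 2.0e-2, 0.10
s = (1 - f_true) * np.exp(-b * D_true) + f_true * np.exp(-b * (D_true + Dstar_true))

# Mono-exponential ADC: linear least squares on log(signal) over all b-values
slope, _ = np.polyfit(b, np.log(s), 1)
adc = -slope

# Segmented bi-exponential fit: D and f from the high-b regime, then D*
hi = b >= 200
slope_hi, icpt_hi = np.polyfit(b[hi], np.log(s[hi]), 1)
D = -slope_hi
f = 1.0 - np.exp(icpt_hi) / s[0]

def ivim(b, dstar):
    return (1 - f) * np.exp(-b * D) + f * np.exp(-b * (D + dstar))

(dstar,), _cov = curve_fit(ivim, b, s, p0=[0.01], bounds=(0.0, 1.0))
print(f"ADC={adc:.2e}, D={D:.2e}, f={f:.3f}, D*={dstar:.2e}")
```

Note that the pseudo-diffusion term steepens the low-b decay, so the mono-exponential ADC comes out larger than D, which is exactly the relation the abstract reports.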
Statistical methods for astronomical data with upper limits. II - Correlation and regression
NASA Technical Reports Server (NTRS)
Isobe, T.; Feigelson, E. D.; Nelson, P. I.
1986-01-01
Statistical methods for calculating correlations and regressions in bivariate censored data, where the dependent variable can have upper or lower limits, are presented. Cox's regression and the generalization of Kendall's rank correlation coefficient provide significance levels of correlations, while the EM algorithm, under the assumption of normally distributed errors, and its nonparametric analog using the Kaplan-Meier estimator give estimates for the slope of a regression line. Monte Carlo simulations demonstrate that survival analysis is reliable in determining correlations between luminosities at different bands. Survival analysis is applied to CO emission in infrared galaxies, X-ray emission in radio galaxies, H-alpha emission in cooling cluster cores, and radio emission in Seyfert galaxies.
Time Series Analysis Based on Running Mann Whitney Z Statistics
USDA-ARS?s Scientific Manuscript database
A sensitive and objective time series analysis method based on the calculation of Mann-Whitney U statistics is described. This method samples data rankings over moving time windows, converts those samples to Mann-Whitney U statistics, and then normalizes the U statistics to Z statistics using Monte-Carlo techniques.
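A minimal sketch of the running-window idea: the method as described normalizes U to Z via Monte-Carlo estimates, whereas this sketch substitutes the large-sample normal approximation, so it is an illustration of the windowing scheme rather than the published procedure.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def running_mw_z(x, window):
    """Slide two adjacent windows along the series and convert each
    Mann-Whitney U to Z (normal approximation used here in place of the
    Monte-Carlo normalization described in the abstract)."""
    n = m = window
    mu = n * m / 2.0
    sigma = np.sqrt(n * m * (n + m + 1) / 12.0)
    z = []
    for i in range(len(x) - 2 * window + 1):
        left = x[i:i + window]
        right = x[i + window:i + 2 * window]
        u = mannwhitneyu(left, right, alternative="two-sided").statistic
        z.append((u - mu) / sigma)
    return np.array(z)

# A step change in level shows up as a run of large |Z| values
series = np.concatenate([np.random.default_rng(2).normal(0, 1, 40),
                         np.random.default_rng(3).normal(2, 1, 40)])
print(running_mw_z(series, window=10).round(2))
```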
NASA Technical Reports Server (NTRS)
Butler, C. M.; Hogge, J. E.
1978-01-01
Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of the data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was developed for processing air quality data to include time series analysis and goodness-of-fit tests. Computer software was developed to (1) calculate a larger number of daily statistical measures of location, and a number of daily, monthly and yearly measures of location, dispersion, skewness and kurtosis, (2) decompose the extended time series model, and (3) perform some goodness-of-fit tests. The computer program is described, documented and illustrated by examples. Recommendations are made for continuation of the development of research on processing air quality data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halligan, Matthew
Radiated power calculation approaches for practical scenarios of incomplete high-density interface characterization information and incomplete incident power information are presented. The suggested approaches build upon a method that characterizes power losses through the definition of power loss constant matrices. Potential radiated power estimates include using total power loss information, partial radiated power loss information, worst-case analysis, and statistical bounding analysis. A method is also proposed to calculate radiated power when incident power information is not fully known for non-periodic signals at the interface. Incident data signals are modeled by a two-state Markov chain from which bit state probabilities are derived. The total spectrum for windowed signals is postulated as the superposition of spectra from individual pulses in a data sequence. Statistical bounding methods are proposed as a basis for the radiated power calculation because of the complexity of statistically characterizing a radiated power probability density function.
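Deriving bit-state probabilities from a two-state Markov chain, as described above, is a short balance calculation. A minimal sketch with assumed transition probabilities (illustrative values only):

```python
import numpy as np

# Two-state (0/1) Markov chain for the data signal; transition probabilities
# are assumed for illustration: p01 = P(next=1 | now=0), p10 = P(next=0 | now=1).
p01, p10 = 0.3, 0.4
P = np.array([[1 - p01, p01],
              [p10, 1 - p10]])

# Stationary bit-state probabilities from the balance condition pi0*p01 = pi1*p10
pi1 = p01 / (p01 + p10)
pi0 = 1.0 - pi1

# Cross-check: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()
print(pi0, pi1, pi)   # 0.571..., 0.428..., [0.571..., 0.428...]
```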
The slip resistance of common footwear materials measured with two slipmeters.
Chang, W R; Matz, S
2001-12-01
The slip resistance of 16 commonly used footwear materials was measured with the Brungraber Mark II and the English XL on 3 floor surfaces under dry, wet, oily and oily-wet surface conditions. Three samples were used for each material combination and surface condition. The results of a one-way ANOVA indicated that the differences among samples were statistically significant for a large number of material combinations and surface conditions. The results indicated that the ranking of materials based on their slip resistance values depends strongly on the slipmeter, floor surface and surface condition. For contaminated surfaces, including wet, oily and oily-wet surfaces, the slip resistance obtained with the English XL was usually higher than that measured with the Brungraber Mark II. The correlation coefficients between the slip resistance values obtained with the two slipmeters, calculated for the different surface conditions, indicated a strong correlation with statistical significance.
Fatal falls in the US construction industry, 1990 to 1999.
Derr, J; Forst, L; Chen, H Y; Conroy, L
2001-10-01
The Occupational Safety and Health Administration's (OSHA's) Integrated Management Information System (IMIS) database allows for the detailed analysis of risk factors surrounding fatal occupational events. This study used IMIS data to (1) perform a risk factor analysis of fatal construction falls, and (2) assess the impact of the February 1995 29 CFR Part 1926 Subpart M OSHA fall protection regulations for construction by calculating trends in fatal fall rates. In addition, IMIS data on fatal construction falls were compared with data from other occupational fatality surveillance systems. For falls in construction, the study identified several demographic factors that may indicate increased risk. A statistically significant downward trend in fatal falls was evident in all construction and within several construction categories during the decade. Although the study failed to show a statistically significant intervention effect from the new OSHA regulations, it may have lacked the power to do so.
[Digital radiography in young children. Considerations based on experiences in practice].
Berkhout, W E R; Mileman, P A; Weerheijm, K L
2004-10-01
In dentistry, digital radiography techniques, such as the charge-coupled device and the storage phosphor plate, are gaining popularity. The objective of this study was to assess the importance of the advantages and disadvantages of digital radiography techniques for bitewing radiography in young children, compared to conventional film. A group of dentists received a questionnaire regarding their experiences with digital radiography techniques or conventional film among young children. Using the Simple Multi-Attributive Rating Technique (SMART), a final weighted score was calculated for the charge-coupled device, the phosphor plate, and conventional film. The scores were 7.40, 7.38, and 6.98, respectively. The differences were not statistically significant (p > 0.47). It could be concluded that, on the basis of experiences in practice, there is no statistically significant preference for the use of digital radiography techniques for bitewing radiography in young children.
The ASC/SIL ratio for cytopathologists as a quality control measure: a follow-up study.
Nascimento, Alessandra F; Cibas, Edmund S
2007-10-01
Monitoring the relative frequency of the interpretations of atypical squamous cells (ASC) and squamous intraepithelial lesions (SIL) has been proposed as a quality control measure. To assess its value, an ASC/SIL ratio was calculated every 6 months for 3.5 years, and confidential feedback was provided to 10 cytopathologists (CPs). By using simple regression analysis, we analyzed the initial and final ASC/SIL ratios for individual CPs and for the entire group. The ratio was below the upper benchmark of 3:1 for all but 1 CP during every 6-month period. The ratio for all CPs combined showed a downward trend (from 2.05 to 1.73). The ratio for 6 CPs decreased, and for two of them the decrease was statistically significant. One CP showed a statistically significant increase in the ASC/SIL ratio. The decrease for some CPs likely reflects the salutary effect of confidential feedback and counseling.
Clinical Validation of the "Sedentary Lifestyle" Nursing Diagnosis in Secondary School Students.
de Oliveira, Marcos Renato; da Silva, Viviane Martins; Guedes, Nirla Gomes; de Oliveira Lopes, Marcos Venícios
2016-06-01
This study clinically validated the nursing diagnosis of "sedentary lifestyle" (SL) among 564 Brazilian adolescents. Measures of diagnostic accuracy were calculated for defining characteristics, and Mantel-Haenszel analysis was used to identify related factors. The measures of diagnostic accuracy showed that the following defining characteristics were statistically significant: "average daily physical activity less than recommended for gender and age," "preference for activity low in physical activity," "nonengagement in leisure time physical activities," and "diminished respiratory capacity." An SL showed statistically significant associations with the following related factors: insufficient motivation for physical activity; insufficient interest in physical activity; insufficient resources for physical activity; insufficient social support for physical activity; attitudes, beliefs, and health habits that hinder physical activity; and insufficient confidence for practicing physical exercises. The study highlighted the four defining characteristics and six related factors for making decisions related to SL among adolescents. © The Author(s) 2015.
Beck, H J; Birch, G F
2013-06-01
Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50 and ∼25 % relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
Impact of tamsulosin and nifedipine on contractility of pregnant rat ureters in vitro.
Haddad, Lisette; Corriveau, Stéphanie; Rousseau, Eric; Blouin, Simon; Pasquier, Jean-Charles; Ponsot, Yves; Roy-Lacroix, Marie-Ève
2018-01-01
To evaluate the in vitro effect of tamsulosin and nifedipine on the contractility of pregnant rat ureters and to perform quantitative analysis of the pharmacological effects. Medical expulsive therapy (MET) is commonly used to treat urolithiasis. However, this treatment is seldom used in pregnant women since no studies support this practice. This was an in vitro study on animal tissue derived from pregnant Sprague-Dawley rats. A total of 124 ureteral segments were mounted in an organ bath system and contractile response to methacholine (MCh) was assessed. Tamsulosin or nifedipine were added at cumulative concentrations (0.001-1 μM). The area under the curve (AUC) from isometric tension measurements was calculated. The effect of pharmacological agents and the respective controls were assessed by calculating the AUC for each 5-min interval. Statistical analyses were performed using the Mann-Whitney-Wilcoxon nonparametric test. Both drugs displayed statistically significant inhibitory activity at concentrations of 0.1 and 1 μM for tamsulosin and 1 μM for nifedipine when calculated as the AUC as compared to DMSO controls. Tamsulosin and nifedipine directly inhibit MCh-induced contractility of pregnant rat ureters. Further work is needed to determine the clinical efficacy of these medications for MET in pregnancy.
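A hedged sketch of the AUC-per-interval comparison described above, using simulated tension traces; the sampling rate, the 5-min window, and the gamma-distributed traces are assumptions for illustration only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fs = 1.0                        # samples per second (assumed)
interval = int(5 * 60 * fs)     # 5-min analysis window

def auc_per_interval(trace, width):
    """Trapezoidal AUC of an isometric-tension trace per fixed-width window."""
    n = trace.size // width
    return [np.trapz(trace[i * width:(i + 1) * width]) for i in range(n)]

drug_trace = rng.gamma(2.0, 0.4, size=interval * 6)      # treated segment
control_trace = rng.gamma(2.0, 0.5, size=interval * 6)   # DMSO control

auc_drug = auc_per_interval(drug_trace, interval)
auc_ctrl = auc_per_interval(control_trace, interval)
u, p = stats.mannwhitneyu(auc_drug, auc_ctrl, alternative="two-sided")
print(f"Mann-Whitney U={u:.0f}, p={p:.3f}")
```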
Climate change and dissolved organic carbon export to the Gulf of Maine
Huntington, Thomas G.; Balch, William M.; Aiken, George R.; Sheffield, Justin; Luo, Lifeng; Roesler, Collin S.; Camill, Philip
2016-01-01
Ongoing climate change is affecting the concentration, export (flux), and timing of dissolved organic carbon (DOC) exported to the Gulf of Maine (GoM) through changes in hydrologic regime. DOC export was calculated for water years 1950 through 2013 for 20 rivers and for water years 1930 through 2013 for 14 rivers draining to the GoM. DOC export was also estimated for the 21st century based on climate and hydrologic modeling in a previously published study. DOC export was calculated by using the regression model LOADEST to fit seasonally adjusted concentration-discharge (C-Q) relations. Our results are an analysis of the sensitivity of DOC export to changes in hydrologic conditions alone, since land cover and vegetation were held constant over time. Despite large interannual variability, all rivers had increasing DOC export during winter, and these trends were significant (p < 0.05) in 10 out of 20 rivers for 1950 to 2013 and in 13 out of 14 rivers for 1930 to 2013. All rivers also had increasing annual export of DOC, although fewer trends were statistically significant than for winter export. Projections for DOC export during the 21st century varied depending on the climate model and greenhouse gas emission scenario that affected future river discharge through effects on precipitation and evapotranspiration. The most consistent result was a significant increase in DOC export in winter in all model-by-emission scenarios. DOC export was projected to decrease during the summer in all model-by-emission scenarios, with statistically significant decreases in half of the scenarios.
A search for evidence of solar rotation in Super-Kamiokande solar neutrino dataset
NASA Astrophysics Data System (ADS)
Desai, Shantanu; Liu, Dawei W.
2016-09-01
We apply the generalized Lomb-Scargle (LS) periodogram, proposed by Zechmeister and Kürster, to the solar neutrino data from Super-Kamiokande (Super-K), using data from its first five years. For each peak in the LS periodogram, we evaluate the statistical significance in two different ways. The first method involves calculating the False Alarm Probability (FAP) using non-parametric bootstrap resampling, and the second is calculating the difference in the Bayesian Information Criterion (BIC) between the null hypothesis, viz. that the data contain only noise, and the hypothesis that the data contain a peak at a given frequency. Using these methods, we scan the frequency range between 7 and 14 cycles per year to look for any peaks caused by solar rotation, since this is the proposed explanation for the statistically significant peaks found by Sturrock and collaborators in the Super-K dataset. From our analysis, we confirm that, as found by Sturrock et al., the maximum peak occurs at a frequency of 9.42/year, corresponding to a period of 38.75 days. The FAP for this peak is about 1.5%, and the difference in BIC (between pure white noise and this peak) is about 4.8. We note that the significance depends on the frequency band used to search for peaks, and hence it is important to use a search band appropriate for solar rotation. However, the significance of this peak based on the BIC value is marginal, and more data are needed to confirm whether the peak persists and is real.
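The two significance checks described here can be sketched as follows, using astropy's floating-mean (generalized) Lomb-Scargle implementation, a shuffle bootstrap for the FAP, and a crude ΔBIC for a single added sinusoid (2 extra parameters). The time base, flux values, and bootstrap count are placeholders, not the Super-K data:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 5 * 365.25, 300))      # observation days, ~5 yr
y = 1.0 + 0.05 * rng.standard_normal(t.size)      # placeholder flux values
dy = np.full_like(y, 0.05)                        # assumed uncertainties

freq = np.linspace(7, 14, 2000) / 365.25          # 7-14 cycles/yr, in 1/day
ls = LombScargle(t, y, dy)                        # generalized (floating-mean) LS
power = ls.power(freq)
peak_power, peak_freq = power.max(), freq[power.argmax()]

# Bootstrap FAP: shuffle fluxes over the fixed times, track the maximum power.
n_boot, exceed = 200, 0
for _ in range(n_boot):
    exceed += LombScargle(t, rng.permutation(y), dy).power(freq).max() >= peak_power
fap = exceed / n_boot

# Crude delta-BIC: white noise vs noise + sinusoid at the peak frequency
# (the sinusoid adds 2 parameters: amplitude and phase).
n = y.size
rss_null = np.sum((y - y.mean()) ** 2)
rss_model = np.sum((y - ls.model(t, peak_freq)) ** 2)
delta_bic = n * np.log(rss_null / rss_model) - 2 * np.log(n)
print(f"peak {peak_freq * 365.25:.2f}/yr, FAP ~ {fap:.3f}, dBIC ~ {delta_bic:.1f}")
```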
Vetter, Thomas R
2017-11-01
Descriptive statistics are specific methods basically used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome), and generating a corresponding confidence interval are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
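As a compact companion to this tutorial, a sketch computing the named measures of center and spread plus a t-based 95% confidence interval for a mean, on an invented data vector:

```python
import numpy as np
from scipy import stats

x = np.array([12.1, 9.8, 11.4, 10.2, 13.0, 10.9, 11.7, 9.5, 12.4, 10.8])

mean, median = x.mean(), np.median(x)
mode = stats.mode(x, keepdims=False).mode   # of limited use for continuous data
data_range = x.max() - x.min()
sd = x.std(ddof=1)
iqr = stats.iqr(x)

# 95% CI for the mean: mean +/- t(0.975, n-1) * standard error.
se = sd / np.sqrt(x.size)
t_crit = stats.t.ppf(0.975, df=x.size - 1)
ci = (mean - t_crit * se, mean + t_crit * se)
print(f"mean={mean:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), "
      f"median={median:.2f}, SD={sd:.2f}, IQR={iqr:.2f}")
```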
Rand, Gabriel M; Kwon, Ji Won; Gore, Patrick K; McCartney, Mitchell D; Chuck, Roy S
2017-10-01
To quantify the consistency of endothelial cell density (ECD) measurements among technicians in a single US eye bank under typical operating conditions. In this retrospective analysis of 51 microscopy technicians using a semiautomated counting method on 35,067 eyes from July 2007 to May 2015, technician- and date-related marginal ECD effects were calculated using linear regression models. ECD variance was correlated with the number of specular microscopy technicians. Technician mean ECDs ranged from 2386 ± 431 to 3005 ± 560 cells/mm². Nine technicians had statistically and clinically significant marginal effects. Annual mean ECDs adjusted for changes in technicians ranged from 2422 ± 433 to 2644 ± 430 cells/mm². The period of 2007 to 2009 had statistically and clinically significant marginal effects. There was no statistically significant association between the number of technicians and ECD standard deviation. There was significant ECD variability associated with specular microscopy technicians and with the date of measurement. We recommend that eye banks collect data related to laboratory factors that have been shown to influence ECD variability.
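A sketch of the regression idea described above: ECD modeled on technician and date dummies, so each technician's coefficient is a marginal effect relative to a reference category. The column names and toy data frame are assumptions, not the eye bank's records:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 500
df = pd.DataFrame({
    "ecd": rng.normal(2700, 450, n),                    # cells/mm^2
    "technician": rng.choice(["t1", "t2", "t3"], n),
    "year": rng.choice(["2007", "2010", "2014"], n),
})

# OLS with categorical dummies; coefficients are marginal ECD effects.
fit = smf.ols("ecd ~ C(technician) + C(year)", data=df).fit()
print(fit.params)
print(fit.pvalues)
```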
The statistical average of optical properties for alumina particle cluster in aircraft plume
NASA Astrophysics Data System (ADS)
Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin
2018-04-01
We establish a model in which the monomer radius and monomer number of alumina particle clusters in a plume follow lognormal distributions. According to the Multi-Sphere T-Matrix (MSTM) theory, we provide a method for finding the statistical average of the optical properties of alumina particle clusters in the plume, analyze the effect of different distributions and different detection wavelengths on this statistical average, and compare the statistical average optical properties under the alumina particle cluster model established in this study with those under three simplified alumina particle models. The calculation results show that the monomer number of an alumina particle cluster and its size distribution have a considerable effect on its statistical average optical properties. The statistical average optical properties of alumina particle clusters at common detection wavelengths exhibit obvious differences, which have a great effect on modeling the IR and UV radiation properties of the plume. Compared with the three simplified models, the alumina particle cluster model herein features both higher extinction and scattering efficiencies. Therefore, an accurate description of the scattering properties of alumina particles in an aircraft plume is of great significance in the study of plume radiation properties.
Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.
2010-12-07
This work studies the impact of the incorporation of high-Z materials (embolization material) on the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations. A statistical analysis is done to establish the variables that may affect the dose calculation. To perform the comparison, pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used. The comparison between both dose calculations shows that PB overestimates the dose deposited. The statistical analysis, for the number of patients in the study (20), shows that the variable that may affect the dose calculation is the volume of the high-Z material in the arteriovenous malformation. Further studies have to be done to establish the clinical impact on the radiosurgery result.
Numerical investigation of freak waves
NASA Astrophysics Data System (ADS)
Chalikov, D.
2009-04-01
This paper describes the results of more than 4,000 long-term (up to thousands of peak-wave periods) numerical simulations of nonlinear gravity surface waves performed for investigation of the properties and estimation of the statistics of extreme ('freak') waves. The method of solution of the 2-D potential wave equations based on conformal mapping is applied to the simulation of wave behavior assigned by different initial conditions, defined by JONSWAP and Pierson-Moskowitz spectra. It is shown that nonlinear wave evolution sometimes results in the appearance of very big waves. The shape of freak waves varies within a wide range: some of them are sharp-crested, others are asymmetric, with a strong forward inclination. Some of them can be very big, but not steep enough to create dangerous conditions for vessels (though still dangerous for fixed objects). Initial generation of extreme waves can occur merely as a result of group effects, but in some cases the largest wave suddenly starts to grow. The growth is sometimes followed by a strong concentration of wave energy around a peak vertical, taking place in the course of a few peak-wave periods. The process starts with an individual wave in physical space, without significant exchange of energy with surrounding waves. Sometimes, a crest-to-trough wave height can be as large as nearly three significant wave heights. On average, only one third of all freak waves come to breaking, creating extreme conditions; however, if a wave height approaches the value of three significant wave heights, all of the freak waves break. The most surprising result was the discovery that the probability of non-dimensional freak waves (normalized by significant wave height) is actually independent of the density of wave energy. This does not mean that the statistics of extreme waves do not depend on wave energy; it proves that normalization of wave heights by significant wave height is so effective that the statistics of non-dimensional extreme waves tend to be independent of wave energy. It is naive to expect that high-order moments such as skewness and kurtosis can serve as predictors or even indicators of freak waves. Firstly, these characteristics cannot be calculated with the use of a spectrum usually determined with low accuracy; such calculations are definitely unstable to a slight perturbation of the spectrum. Secondly, even if the spectrum is determined with high accuracy (for example, calculated with the use of an exact model), the high-order moments cannot serve as predictors, since they change synchronously with variations of extreme wave heights. The appearance of freak waves occurs simultaneously with an increase of the local kurtosis; hence, kurtosis is simply a passive indicator of the same local geometrical properties of a wave field. This effect disappears completely if the spectrum is calculated over a very wide ensemble of waves; in that case the existence of a freak wave is simply disguised by other, non-freak waves. Thirdly, all high-order moments depend on the spectral representation: they increase with increasing spectral resolution and cut-off frequency. The statistics of non-dimensional waves, as well as the emergence of extreme waves, is an innate property of a nonlinear wave field. A probability function for steep waves has been constructed. Such a function can be used for development of an operational forecast of freak waves based on a standard forecast provided by a third-generation wave prediction model (WAVEWATCH or WAM).
Statistical tests for power-law cross-correlated processes
NASA Astrophysics Data System (ADS)
Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene
2011-12-01
For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), the Cauchy inequality -1 ≤ ρDCCA(T,n) ≤ 1 had been verified numerically; here we derive -1 ≤ ρDCCA(T,n) ≤ 1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine, and for nonoverlapping windows we derive, that the standard deviation of ρDCCA(T,n) tends to 1/T with increasing T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
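A compact sketch of ρDCCA(T,n) as described above: build the two profiles, linearly detrend each overlapping window of size n, and normalize the detrended covariance by the two detrended variances. This is a plain reading of the definition, not the authors' code:

```python
import numpy as np

def rho_dcca(x, y, n):
    """Detrended cross-correlation coefficient for window size n."""
    X = np.cumsum(x - np.mean(x))               # profiles of the two series
    Y = np.cumsum(y - np.mean(y))
    t = np.arange(n + 1)
    f2x = f2y = f2xy = 0.0
    for i in range(len(X) - n):                 # overlapping windows
        xw, yw = X[i:i + n + 1], Y[i:i + n + 1]
        rx = xw - np.polyval(np.polyfit(t, xw, 1), t)   # linear detrend
        ry = yw - np.polyval(np.polyfit(t, yw, 1), t)
        f2x += np.mean(rx * rx)
        f2y += np.mean(ry * ry)
        f2xy += np.mean(rx * ry)
    return f2xy / np.sqrt(f2x * f2y)

rng = np.random.default_rng(4)
z = rng.standard_normal(2000)
a = z + 0.3 * rng.standard_normal(2000)         # two series sharing a component
b = z + 0.3 * rng.standard_normal(2000)
print(rho_dcca(a, b, n=50))                     # close to +1 for these inputs
```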
78 FR 24336 - Rules of Practice and Procedure; Adjusting Civil Money Penalties for Inflation
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-25
... courts. \\4\\ The CPI is published by the Department of Labor, Bureau of Statistics, and is available at.... Mathematical Calculation In general, the adjustment calculation required by the Inflation Adjustment Act is... adjusted in 2009. According to the Bureau of Labor Statistics, the CPI for June 1996 and June 2009 was 156...
40 CFR 91.511 - Suspension and revocation of certificates of conformity.
Code of Federal Regulations, 2010 CFR
2010-07-01
... many engines as needed so that the CumSum statistic, as calculated in § 91.508(a), falls below the... family, if the manufacturer desires to continue introduction into commerce of a modified version of that... family so that the CumSum statistic, as calculated in § 91.508(a) using the newly assigned FEL if...
Influence of lead apron shielding on absorbed doses from cone-beam computed tomography.
Rottke, Dennis; Andersson, Jonas; Ejima, Ken-Ichiro; Sawada, Kunihiko; Schulze, Dirk
2017-06-01
The aim of the present work was to investigate absorbed doses and to calculate effective doses (EDs) in cone-beam computed tomography (CBCT). The study was conducted using examination protocols with and without lead apron shielding. A full-body male RANDO® phantom was loaded with 110 GR200A thermoluminescence dosemeter chips at 55 different sites and set up in two different CBCT systems (CS 9500®, ProMax® 3D). Two different protocols were performed: the phantom was set up (1) with and (2) without a lead apron. No statistically significant differences in organ and absorbed doses from regions outside the primary beam could be found when comparing results from exposures with and without lead apron shielding. Consequently, calculating the ED showed no significant differences between the examination protocols with and without lead apron shielding. For the ProMax® 3D with shielding, the ED was 149 µSv, and for the examination protocol without shielding 148 µSv (SD = 0.31 µSv). For the CS 9500®, the ED was 88 and 86 µSv (SD = 0.95 µSv), respectively, with and without lead apron shielding. The results revealed no statistically significant differences in the absorbed doses between examinations with and without lead apron shielding, especially in organs outside the primary beam. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Tourkmani, Abdo Karim; Sánchez-Huerta, Valeria; De Wit, Guillermo; Martínez, Jaime D; Mingo, David; Mahillo-Fernández, Ignacio; Jiménez-Alfaro, Ignacio
2017-01-01
To analyze the relationship between the score obtained in the Risk Score System (RSS) proposed by Hicks et al and penetrating keratoplasty (PKP) graft failure at 1y postoperatively, and between each factor in the RSS and the risk of PKP graft failure, using univariate and multivariate analysis. The retrospective cohort study had 152 PKPs from 152 patients. Eighteen cases were excluded from our study due to primary failure (10 cases), incomplete medical notes (5 cases) and follow-up less than 1y (3 cases). We included 134 PKPs from 134 patients stratified by preoperative risk score. The Spearman coefficient was calculated for the relationship between the score obtained and the risk of failure at 1y. Univariate and multivariate analyses were calculated for the impact of every single risk factor included in the RSS on graft failure at 1y. The Spearman coefficient showed a statistically significant correlation between the score in the RSS and graft failure (P<0.05). Multivariate logistic regression analysis showed no statistically significant relationship (P>0.05) between diagnosis or lens status and graft failure. The relationship between the other risk factors studied and graft failure was significant (P<0.05), although the results for previous grafts and graft failure were unreliable. None of our patients had previous blood transfusion; thus, it had no impact. After the application of multivariate analysis techniques, some risk factors do not show the expected impact on graft failure at 1y.
Effects of curcumin plus Soy oligosaccharides on intestinal flora of rats with ulcerative colitis.
Huang, G; Ye, L; Du, G; Huang, Y; Wu, Y; Ge, S; Yang, Z; Zhu, G
2017-08-15
To explore the therapeutic effect of curcumin (Cur) plus soybean oligosaccharides (SBOS) on ulcerative colitis (UC) by testing the intestinal flora. 80 male SD rats were selected and divided into four groups with 20 rats in each group: normal group, sulfasalazine (SASP) group, model group and curcumin plus soy oligosaccharide group. All animals were treated for 4 weeks. In the fifth week, rats were decapitated. Macroscopic damage scores of the colonic mucosa were calculated. A 4 mL blood sample was taken to detect the contents of serum tumor necrosis factor-α (TNF-α) and interleukin 8 (IL-8) by the double antibody sandwich ABC-ELISA method (enzyme-linked immunosorbent assay). Colonic tissues with the most obvious lesions were obtained using surgical scissors. A routine hematoxylin-eosin (HE) staining method was used to stain pathological specimens, and images of the staining results were obtained. Histological injury scores of the colonic mucosa were calculated. Ulcerative colitis model rats had the highest macroscopic damage scores and histological injury scores of the colonic mucosa. After treatment, the contents of TNF-α and IL-8 decreased significantly in the curcumin plus soy oligosaccharide group compared with the model group (P < 0.01), while the contents were close to those in the SASP group, with no statistically significant difference (P > 0.05). The treatment could decrease TNF-α and IL-8 expression and reduce colonic mucosa inflammation and tissue damage.
Muzyka-Woźniak, Maria; Oleszko, Adam
2018-04-26
To compare measurements of axial length (AL), corneal curvature (K), anterior chamber depth (ACD) and white-to-white (WTW) distance on a new device combining a Scheimpflug camera and partial coherence interferometry (Pentacam AXL) with a reference optical biometer (IOL Master 500), and to evaluate differences between IOL power calculations based on the two biometers. Ninety-seven eyes of 97 consecutive cataract or refractive lens exchange patients were examined preoperatively on the IOL Master 500 and Pentacam AXL units. Comparisons between the two devices were performed for AL, K, ACD and WTW. Intraocular lens (IOL) power targeting emmetropia was calculated with the SRK/T and Haigis formulas on both devices and compared. There were statistically significant differences between the two devices for all measured parameters (P < 0.05), except ACD (P = 0.36). Corneal curvature measured with the Pentacam AXL was significantly flatter than with the IOL Master. The mean difference in AL was clinically insignificant (0.01 mm; 95% LoA 0.16 mm). The Pentacam AXL yielded higher IOL power in 75% of eyes for the Haigis formula and in 62% of eyes for the SRK/T formula, with a mean difference within ± 0.5 D for 72 and 86% of eyes, respectively. There were statistically significant differences between the AL, K and WTW measurements obtained with the compared biometers. Flatter corneal curvature measurements on the Pentacam AXL necessitate formula optimisation for this device.
Levin, Iris I; Parker, Patricia G
2012-01-01
Seabirds are considered highly mobile, able to fly great distances with few apparent barriers to dispersal. However, it is often the case that seabird populations exhibit strong population genetic structure despite their potential vagility. Here we show that Galapagos Nazca booby (Sula granti) populations are substantially differentiated, even within the small geographic scale of this archipelago. On the other hand, Galapagos great frigatebird (Fregata minor) populations do not show any genetic structure. We characterized the genetic differentiation by sampling five colonies of both species in the Galapagos archipelago and analyzing eight microsatellite loci and three mitochondrial genes. Using an F-statistic approach on the multilocus data, we found significant differentiation between nearly all island pairs of Nazca booby populations and a Bayesian clustering analysis provided support for three distinct genetic clusters. Mitochondrial DNA showed less differentiation of Nazca booby colonies; only Nazca boobies from the island of Darwin were significantly differentiated from individuals throughout the rest of the archipelago. Great frigatebird populations showed little to no evidence for genetic differentiation at the same scale. Only two island pairs (Darwin – Wolf, N. Seymour – Wolf) were significantly differentiated using the multilocus data, and only two island pairs had statistically significant φST values (N. Seymour – Darwin, N. Seymour – Wolf) according to the mitochondrial data. There was no significant pattern of isolation by distance for either species calculated using both markers. Seven of the ten Nazca booby migration rates calculated between island pairs were in the south or southeast to north or northwest direction. The population differentiation found among Galapagos Nazca booby colonies, but not great frigatebird colonies, is most likely due to differences in natal and breeding philopatry. PMID:23170212
Quantitative assessment model for gastric cancer screening
Chen, Kun; Yu, Wei-Ping; Song, Liang; Zhu, Yi-Min
2005-01-01
AIM: To set up a mathematical model for gastric cancer screening and to evaluate its function in mass screening for gastric cancer. METHODS: A case-control study was carried out in 66 patients and 198 normal people, and the risk and protective factors of gastric cancer were determined, including heavy manual work, foods such as small yellow-fin tuna, dried small shrimps, squills, crabs, mothers suffering from gastric diseases, spouse alive, use of refrigerators and hot food, etc. According to principles and methods of probability and fuzzy mathematics, a quantitative assessment model was established as follows: first, we selected factors significant in statistics and calculated a weight coefficient for each one by two different methods; second, the population space was divided into a gastric cancer fuzzy subset and a non-gastric-cancer fuzzy subset, a mathematical model for each subset was established, and we obtained a mathematical expression for the attribute degree (AD). RESULTS: Based on the data of 63 patients and 693 normal people, the AD of each subject was calculated. Considering the sensitivity and specificity, the thresholds of the calculated AD values were set at 0.20 and 0.17, respectively. According to these thresholds, the sensitivity and specificity of the quantitative model were about 69% and 63%. Moreover, a statistical test showed that the identification outcomes of these two different calculation methods were identical (P>0.05). CONCLUSION: The validity of this method is satisfactory. It is convenient, feasible, economic and can be used to determine individual and population risks of gastric cancer. PMID:15655813
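An illustrative sketch of the threshold-based screening logic: a weighted score (standing in for the attribute degree) per subject is compared with a cutoff, and sensitivity/specificity follow from the known status. The weights, cutoff, and synthetic data are placeholders, not the fitted model:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 400
factors = rng.integers(0, 2, size=(n, 5))             # 5 binary risk factors
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])    # assumed weight coefficients
ad = factors @ weights                                # score playing the AD role
truth = rng.random(n) < 1 / (1 + np.exp(3 - 6 * ad))  # synthetic case status

cutoff = 0.20                                         # assumed threshold
predicted = ad >= cutoff
tp = np.sum(predicted & truth)
tn = np.sum(~predicted & ~truth)
sensitivity = tp / truth.sum()
specificity = tn / (~truth).sum()
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```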
Eljamel, M Sam; Mahboob, Syed Osama
2016-12-01
Surgical resection of high-grade gliomas (HGG) is standard therapy because it imparts significant progression-free survival (PFS) and overall survival (OS). However, HGG tumor margins are indistinguishable from normal brain during surgery. Hence, intraoperative technology such as fluorescence (ALA, fluorescein) and intraoperative ultrasound (IoUS) and MRI (IoMRI) has been deployed. This study compares the effectiveness and cost-effectiveness of these technologies. Critical literature review and meta-analyses, using the MEDLINE/PubMed service. The list of references in each article was double-checked for any missing references. We included all studies that reported the use of ALA, fluorescein (FLCN), IoUS or IoMRI to guide HGG surgery. The meta-analyses were conducted according to statistical heterogeneity between studies. If there was no heterogeneity, a fixed effects model was used; otherwise, a random effects model was used. Statistical heterogeneity was explored by χ² and inconsistency (I²) statistics. To assess cost-effectiveness, we calculated the incremental cost per quality-adjusted life-year (QALY). Gross total resection (GTR) after ALA, FLCN, IoUS and IoMRI was 69.1%, 84.4%, 73.4% and 70%, respectively. The differences were not statistically significant. All four techniques led to significant prolongation of PFS and tended to prolong OS. However, none of these technologies led to significant prolongation of OS compared to controls. The cost/QALY was $16,218, $3181, $6049 and $32,954 for ALA, FLCN, IoUS and IoMRI, respectively. ALA, FLCN, IoUS and IoMRI significantly improve GTR and PFS of HGG. Their incremental cost was below the threshold for cost-effectiveness of HGG therapy, denoting that each intraoperative technology was cost-effective on its own. Copyright © 2016 Elsevier B.V. All rights reserved.
Effects of illness and disability on job separation.
Magee, William
2004-03-01
Effects of illness and disability on job separation result from both voluntary and involuntary processes. Voluntary processes range from the reasoned actions of workers who weigh illness and disability in their decision-making, to reactive stress-avoidance responses. Involuntary processes include employer discrimination against ill or disabled workers. Analyses of the effects of illness and disability that differentiate reasons for job separation can illuminate the processes involved. This paper reports on an evaluation of effects of illness and disability on job separation predicted by theories of reasoned action, stress, and employer discrimination against ill and disabled workers. Effects of four illness/disability conditions on the rate of job separation for 12 reasons are estimated using data from a longitudinal study of a representative sample of the Canadian population-the Survey of Labour and Income Dynamics (SLID). Two of the four effects that are statistically significant (under conservative Bayesian criteria for statistical significance) are consistent with the idea that workers weigh illness and disability as costs, and calculate the costs and benefits of continuing to work with an illness or disability: (1) disabling illness increases the hazard of leaving a job in order to engage in caregiving, and (2) work-related disability increases the hazard of leaving a job due to poor pay. The other two significant effects indicate that: (3) disabling illness decreases the hazard of layoff, and (4) non-work disability increases the hazard of leaving one job to take a different job. This last effect is consistent with a stress-interruption process. Other effects are statistically significant under conventional criteria for statistical significance, and most of these effects are also consistent with cost-benefit and stress theories. Some effects of illness and disability are sex and age-specific, and reasons for the specificity of these effects are discussed.
A novel measure and significance testing in data analysis of cell image segmentation.
Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L
2017-03-14
Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
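A sketch of the size-weighted TER and a bootstrap standard error, following the description above; the per-cell MERs and sizes are simulated stand-ins for the 106 ground-truth cells:

```python
import numpy as np

rng = np.random.default_rng(6)
n_cells = 106
sizes = rng.integers(200, 4000, n_cells)      # cell areas in pixels (assumed)
mers = rng.beta(2, 20, n_cells)               # per-cell misclassification rates

def ter(mers, sizes):
    """Total error rate: MERs aggregated with cell sizes as weights."""
    return np.sum(mers * sizes) / np.sum(sizes)

point = ter(mers, sizes)

# Bootstrap SE: resample cells with replacement and recompute the TER.
boot = [ter(mers[idx], sizes[idx])
        for idx in (rng.integers(0, n_cells, n_cells) for _ in range(2000))]
se = np.std(boot, ddof=1)
ci = (point - 1.96 * se, point + 1.96 * se)
print(f"TER={point:.4f} +/- {se:.4f} (95% CI {ci[0]:.4f} to {ci[1]:.4f})")
```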
Stula, N
1992-01-01
This prospective clinical study shows the results of adjuvant cytostatic therapy (ACT) in breast cancer applied to patients of premenopausal age. Cyclophosphamide, methotrexate, 5-fluorouracil (CMF) group (70 patients): after operative and radiotherapeutic treatment, ACT was applied over a period of six months (six cycles). Control group (71 patients): only operative and radiotherapeutic treatment. Protocol of the ACT: cyclophosphamide, methotrexate, 5-fluorouracil (CMF) over 5 days with a 4-week break, six cycles in total. Control period: 10 years. Stratification of patients was made on the basis of the following risk factors: size of the tumour, number of positive lymph nodes of the ipsilateral axilla, grade of differentiation of the tumour, and hormonal dependence of the tumour. Statistical methods of analysis: the actuarial method and the chi-square test. The results show that the application of ACT is statistically significant (P < 0.05) with regard to the disease-free interval. However, concerning survival, the usefulness of its application is present but not statistically significant at the 5% significance level. The usefulness of ACT application as regards high risk factors (T3, T4, > or = 4 lymph nodes, grade of differentiation II, III, ER-PR-) is statistically significant (P < 0.05) both with regard to the DFI and survival. Regarding low risk factors, ACT application adversely influenced the results in the control group. This is probably the result of ACT toxicity. The patients in this subgroup have a favourable prognosis with regard to the staging and biological nature of the tumour. ACT in premenopausal patients with high risk factors gives significantly better results concerning the postponement of relapse and the length of the survival period.
NASA Technical Reports Server (NTRS)
Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.
1991-01-01
A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semiempirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produces predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis of fully-dense materials are in good agreement with those calculated from elastic properties.
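The velocity-porosity regression described here reduces to an ordinary least-squares line fit; a sketch with invented data points, where the intercept plays the role of the predicted fully-dense velocity:

```python
import numpy as np
from scipy import stats

porosity = np.array([0.00, 0.02, 0.05, 0.08, 0.12, 0.15, 0.20])  # fraction
velocity = np.array([10.8, 10.5, 10.1, 9.6, 9.0, 8.6, 7.9])      # km/s

res = stats.linregress(porosity, velocity)
# Intercept = predicted velocity at zero porosity (fully-dense material).
print(f"V = {res.intercept:.2f} {res.slope:+.2f}*P, "
      f"r = {res.rvalue:.3f}, p = {res.pvalue:.1e}")
```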
NASA Technical Reports Server (NTRS)
Labovitz, M. L.; Masuoka, E. J. (Principal Investigator)
1981-01-01
The presence of positive serial correlation (autocorrelation) in remotely sensed data results in an underestimate of the variance-covariance matrix when calculated using contiguous pixels. This underestimate produces an inflation in F statistics. For a set of Thematic Mapper Simulator data (TMS), used to test the ability to discriminate a known geobotanical anomaly from its background, the inflation in F statistics related to serial correlation is between 7 and 70 times. This means that significance tests of means of the spectral bands initially appear to suggest that the anomalous site is very different in spectral reflectance and emittance from its background sites. However, this difference often disappears and is always dramatically reduced when compared to frequency distributions of test statistics produced by the comparison of simulated training sets possessing equal means, but which are composed of autocorrelated observations.
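A small simulation of the effect described above: two classes with equal means but serially correlated observations yield inflated F statistics when contiguous pixels are treated as independent. The AR(1) coefficient and sample sizes are arbitrary choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def ar1(n, rho):
    """AR(1) series with lag-one autocorrelation rho (zero mean, unit variance)."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    for k in range(1, n):
        x[k] = rho * x[k - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
    return x

def f_stat(a, b):
    return stats.f_oneway(a, b).statistic

n, trials = 100, 2000
f_iid = np.median([f_stat(rng.standard_normal(n), rng.standard_normal(n))
                   for _ in range(trials)])
f_corr = np.median([f_stat(ar1(n, 0.9), ar1(n, 0.9)) for _ in range(trials)])
print(f"median F, independent: {f_iid:.2f}; autocorrelated: {f_corr:.2f}")
```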
NASA Technical Reports Server (NTRS)
Roth, D. J.; Swickard, S. M.; Stang, D. B.; Deguire, M. R.
1990-01-01
A review and statistical analysis of the ultrasonic velocity method for estimating the porosity fraction in polycrystalline materials is presented. Initially, a semi-empirical model is developed showing the origin of the linear relationship between ultrasonic velocity and porosity fraction. Then, from a compilation of data produced by many researchers, scatter plots of velocity versus percent porosity data are shown for Al2O3, MgO, porcelain-based ceramics, PZT, SiC, Si3N4, steel, tungsten, UO2,(U0.30Pu0.70)C, and YBa2Cu3O(7-x). Linear regression analysis produced predicted slope, intercept, correlation coefficient, level of significance, and confidence interval statistics for the data. Velocity values predicted from regression analysis for fully-dense materials are in good agreement with those calculated from elastic properties.
Accuracy and Landmark Error Calculation Using Cone-Beam Computed Tomography–Generated Cephalograms
Grauer, Dan; Cevidanes, Lucia S. H.; Styner, Martin A.; Heulfe, Inam; Harmon, Eric T.; Zhu, Hongtu; Proffit, William R.
2010-01-01
Objective: To evaluate systematic differences in landmark position between cone-beam computed tomography (CBCT)–generated cephalograms and conventional digital cephalograms, and to estimate how much variability should be taken into account when both modalities are used within the same longitudinal study. Materials and Methods: Landmarks on homologous CBCT-generated cephalograms and conventional digital cephalograms of 46 patients were digitized, registered, and compared via the Hotelling T² test. Results: There were no systematic differences between modalities in the position of most landmarks. Three landmarks showed statistically significant differences but did not reach clinical significance. A method for error calculation when combining both modalities in the same individual is presented. Conclusion: In a longitudinal follow-up for assessment of treatment outcomes and growth of one individual, the error due to the combination of the two modalities might be larger than previously estimated. PMID:19905853
Nie, Z Q; Ou, Y Q; Zhuang, J; Qu, Y J; Mai, J Z; Chen, J M; Liu, X Q
2016-05-01
Conditional and unconditional logistic regression analyses are commonly used in case-control studies, whereas the Cox proportional hazards model is often used in survival data analysis. Most of the literature refers only to main-effect models; however, generalized linear models differ from general linear models, and interaction comprises both multiplicative and additive interaction. The former has only statistical significance, while the latter has biological significance. In this paper, macros were written in SAS 9.4 to calculate the contrast ratio, the attributable proportion due to interaction and the synergy index alongside the interaction terms of logistic and Cox regression models, and Wald, delta and profile likelihood confidence intervals were used to evaluate additive interaction, for reference in big data analysis in clinical epidemiology and in the analysis of genetic multiplicative and additive interactions.
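For reference, the three additive-interaction measures named above can be computed directly from the effect estimates of a model with two exposures and their product term; here RERI corresponds to what the abstract calls the contrast ratio, and the odds ratios are invented inputs rather than fitted values:

```python
# Assumed odds ratios: or10 and or01 for each exposure alone, or11 for both.
or10, or01, or11 = 1.8, 1.5, 3.6

reri = or11 - or10 - or01 + 1                  # relative excess risk due to interaction
ap = reri / or11                               # attributable proportion due to interaction
s = (or11 - 1) / ((or10 - 1) + (or01 - 1))     # synergy index
print(f"RERI={reri:.2f}, AP={ap:.2f}, S={s:.2f}")
```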
Cortes, Aneg L; Montiel, Enrique R; Gimeno, Isabel M
2009-12-01
The use of Flinders Technology Associates (FTA) filter cards to quantify Marek's disease virus (MDV) DNA for the diagnosis of Marek's disease (MD) and to monitor MD vaccines was evaluated. Samples of blood (43), solid tumors (14), and feather pulp (FP; 36) collected fresh and in FTA cards were analyzed. MDV DNA load was quantified by real-time PCR. Threshold cycle (Ct) ratios were calculated for each sample by dividing the Ct value of the internal control gene (glyceraldehyde-3-phosphate dehydrogenase) by the Ct value of the MDV gene. A statistically significant correlation (P < 0.05) in Ct ratios between samples collected fresh and in FTA cards was detected using Pearson's correlation test. The load of serotype 1 MDV DNA was quantified in 24 FP, 14 solid tumor, and 43 blood samples. There was a statistically significant correlation between FP (r = 0.95), solid tumor (r = 0.94), and blood (r = 0.9) samples collected fresh and in FTA cards. The load of serotype 2 MDV DNA was quantified in 17 FP samples, and the correlation between samples collected fresh and in FTA cards was also statistically significant (Pearson's coefficient, r = 0.96); the load of serotype 3 MDV DNA was quantified in 36 FP samples, and the correlation between samples taken fresh and in FTA cards was also statistically significant (r = 0.84). MDV DNA samples extracted 3 days (t0) and 8 months (t1) after collection were used to evaluate the stability of MDV DNA in archived samples collected in FTA cards. A statistically significant correlation was found for serotype 1 (r = 0.96), serotype 2 (r = 1), and serotype 3 (r = 0.9). The results show that FTA cards are an excellent medium to collect, transport, and archive samples for MD diagnosis and to monitor MD vaccines. In addition, FTA cards are widely available, inexpensive, and adequate for the shipment of samples nationally and internationally.
Cross-modality PET/CT and contrast-enhanced CT imaging for pancreatic cancer
Zhang, Jian; Zuo, Chang-Jing; Jia, Ning-Yang; Wang, Jian-Hua; Hu, Sheng-Ping; Yu, Zhong-Fei; Zheng, Yuan; Zhang, An-Yu; Feng, Xiao-Yuan
2015-01-01
AIM: To explore the diagnostic value of the cross-modality fusion images provided by positron emission tomography/computed tomography (PET/CT) and contrast-enhanced CT (CECT) for pancreatic cancer (PC). METHODS: Data from 70 patients with pancreatic lesions who underwent CECT and PET/CT examinations at our hospital from August 2010 to October 2012 were analyzed. PET/CECT cross-modality image fusion was obtained using TrueD software. The diagnostic efficiencies of PET/CT, CECT and PET/CECT were calculated and compared with each other using a χ² test. P < 0.05 was considered to indicate statistical significance. RESULTS: Of the total 70 patients, 50 had PC and 20 had benign lesions. The differences in the sensitivity, negative predictive value (NPV), and accuracy between CECT and PET/CECT in detecting PC were statistically significant (P < 0.05 for each). In 15 of the 31 patients with PC who underwent a surgical operation, peripancreatic vessel invasion was verified. The differences in the sensitivity, positive predictive value, NPV, and accuracy of CECT vs PET/CT and PET/CECT vs PET/CT in diagnosing peripancreatic vessel invasion were statistically significant (P < 0.05 for each). In 19 of the 31 patients with PC who underwent a surgical operation, regional lymph node metastasis was verified by postsurgical histology. There was no statistically significant difference among the three methods in detecting regional lymph node metastasis (P > 0.05 for each). In 17 of the 50 patients with PC confirmed by histology or clinical follow-up, distant metastasis was confirmed. The differences in the sensitivity and NPV between CECT and PET/CECT in detecting distant metastasis were statistically significant (P < 0.05 for each). CONCLUSION: Cross-modality image fusion of PET/CT and CECT is a convenient and effective method that can be used to diagnose and stage PC, compensating for the defects of PET/CT and CECT when they are conducted individually. PMID:25780297
Bertolaccini, Luca; Viti, Andrea; Cavallo, Antonio; Terzi, Alberto
2014-04-01
The role of the electro-thermal bipolar tissue sealing system (LigaSure®, LS; Covidien, Inc., CO, USA) in thoracic surgery is still undefined, and reports of its use are still limited. The objective of the trial was to evaluate the costs and benefits of LS in major lung resection surgery. A randomized blinded study of a consecutive series of 100 patients undergoing lobectomy was undertaken. After muscle-sparing thoracotomy and classification of lung fissures according to Craig-Walker, patients with fissure Grade 2-4 were randomized to Stapler group or LS group fissure completion. Recorded parameters were analysed for differences in selected intraoperative and postoperative outcomes. Statistical analysis was performed with the bootstrap method. Pearson's χ² test and Fisher's exact test were used to calculate probability values for comparisons of dichotomous variables. Cost-benefit evaluation was performed using Pareto optimal analysis. There were no significant differences between groups regarding demographic and baseline characteristics. No patient was withdrawn from the study, and no adverse effect was recorded. There was no mortality and there were no major complications in either group. There were no statistically significant differences in operative time or morbidity between patients in the LS group and the Stapler group. In the LS group, there was a statistically nonsignificant increase in postoperative air leaks in the first 24 postoperative hours, while a statistically significant increase in drainage amount was observed in the LS group. No statistically significant difference in hospital length of stay was observed. Overall, the LS group had a favourable multi-criteria analysis of the cost/benefit ratio with a good 'Pareto optimum'. LS is a safe device for thoracic surgery and can be a valid alternative to staplers. In this setting, LS allows functional lung tissue preservation. As to costs, LS seems equivalent to staplers.
Kim, Sung-Min; Choi, Yosoon
2017-01-01
To develop appropriate measures to prevent soil contamination in abandoned mining areas, an understanding of the spatial variation of the potentially toxic trace elements (PTEs) in the soil is necessary. For the purpose of effective soil sampling, this study uses hot spot analysis, which calculates a z-score based on the Getis-Ord Gi* statistic to identify a statistically significant hot spot sample. To constitute a statistically significant hot spot, a feature with a high value should also be surrounded by other features with high values. Using relatively cost- and time-effective portable X-ray fluorescence (PXRF) analysis, sufficient input data are acquired from the Busan abandoned mine and used for hot spot analysis. To calibrate the PXRF data, which have a relatively low accuracy, the PXRF analysis data are transformed using the inductively coupled plasma atomic emission spectrometry (ICP-AES) data. The transformed PXRF data of the Busan abandoned mine are classified into four groups according to their normalized content and z-scores: high content with a high z-score (HH), high content with a low z-score (HL), low content with a high z-score (LH), and low content with a low z-score (LL). The HL and LH cases may be due to measurement errors. Additional or complementary surveys are required for the areas surrounding these suspect samples or for significant hot spot areas. The soil sampling is conducted according to a four-phase procedure in which the hot spot analysis and proposed group classification method are employed to support the development of a sampling plan for the following phase. Overall, 30, 50, 80, and 100 samples are investigated and analyzed in phases 1–4, respectively. The method implemented in this case study may be utilized in the field for the assessment of statistically significant soil contamination and the identification of areas for which an additional survey is required. PMID:28629168
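A minimal sketch of the Getis-Ord Gi* z-score underlying the hot spot analysis, with binary distance-band weights that include the point itself; the band radius, coordinates, and contents are assumptions:

```python
import numpy as np

def gi_star(coords, values, band):
    """Return a Gi* z-score per sample using a fixed distance band."""
    x = np.asarray(values, float)
    n = x.size
    xbar = x.mean()
    s = np.sqrt((x**2).mean() - xbar**2)        # global (population) std
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = (d <= band).astype(float)               # binary weights, self included
    sw = w.sum(axis=1)
    num = w @ x - xbar * sw
    den = s * np.sqrt((n * (w**2).sum(axis=1) - sw**2) / (n - 1))
    return num / den

rng = np.random.default_rng(8)
coords = rng.uniform(0, 100, size=(80, 2))      # sample locations (m)
values = rng.lognormal(3, 0.5, 80)              # e.g., PXRF metal contents
z = gi_star(coords, values, band=15.0)
hot = z > 1.96                                  # significant hot spots (95%)
print(hot.sum(), "hot-spot samples")
```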
A first principles calculation and statistical mechanics modeling of defects in Al-H system
NASA Astrophysics Data System (ADS)
Ji, Min; Wang, Cai-Zhuang; Ho, Kai-Ming
2007-03-01
The behavior of defects and hydrogen in Al was investigated by first-principles calculations and statistical mechanics modeling. The formation energies of different defects in the Al+H system, such as the Al vacancy, interstitial H, and multiple H atoms in an Al vacancy, were calculated by the first-principles method. Defect concentrations in thermodynamic equilibrium were studied by total free energy calculation, including configurational entropy and defect-defect interaction, from the low-concentration limit to the hydride limit. In our grand canonical ensemble model, the hydrogen chemical potential under different environments plays an important role in determining the defect concentrations and properties in the Al-H system.
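A toy illustration of the dilute-limit statistics sketched above: the equilibrium site fraction of a defect follows a Boltzmann factor of its formation energy, shifted by the hydrogen chemical potential for defects binding n_H hydrogen atoms. All energies are placeholders, not the first-principles results:

```python
import numpy as np

K_B = 8.617e-5                  # Boltzmann constant, eV/K

def site_fraction(e_form, mu_h, n_h, temperature):
    """Dilute-limit concentration of a defect binding n_h H atoms."""
    return np.exp(-(e_form - n_h * mu_h) / (K_B * temperature))

t = 600.0                                # K
mu_h = -0.1                              # eV, assumed H chemical potential
print(site_fraction(0.67, mu_h, 0, t))   # bare Al vacancy (assumed E_f)
print(site_fraction(0.90, mu_h, 2, t))   # vacancy binding 2 H (assumed E_f)
```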
Saps, M; Lavigne, J V
2015-06-01
The Food and Drug Administration (FDA) recommended that a ≥30% decrease on patient-reported outcomes for pain be considered clinically significant in clinical trials for adults with irritable bowel syndrome. This percent-change approach may not be appropriate for children. We compared three alternate approaches to determining clinically significant reductions in pain among children. Eighty children with functional abdominal pain participated in a study of the efficacy of amitriptyline. Endpoints included patient-reported estimates of feeling better and a pain Visual Analog Scale (VAS). The minimum clinically important difference in pain report was calculated as (i) the mean change in VAS score for children reporting being 'better'; (ii) percent changes in pain (≥30% and ≥50%) on the VAS; and (iii) statistically reliable changes on the VAS for 68% and 95% confidence intervals. There was poor agreement between the three approaches. 43.6% of the children who met the FDA ≥30% criterion for clinically significant change did not achieve a reliable level of improvement (95% confidence interval). Children's self-reported ratings of being better may not be statistically reliable. A combined approach in which children must report improvement as better and achieve a statistically significant change may be more appropriate for outcomes in clinical trials. © 2015 John Wiley & Sons Ltd.
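The "statistically reliable change" criterion referenced above is commonly computed as a Jacobson-Truax reliable change index; a minimal sketch under assumed baseline SD and reliability values (illustrative only, not the trial's data):

```python
import math

def reliable_change(pre, post, sd_baseline, reliability, z_crit=1.96):
    """Jacobson-Truax reliable change index: is (post - pre) larger than
    expected from measurement noise alone?  z_crit = 1.96 for a 95% CI,
    1.0 for a 68% CI.  sd_baseline and reliability (e.g. test-retest r)
    must be supplied by the analyst; the values below are illustrative."""
    sem = sd_baseline * math.sqrt(1.0 - reliability)  # standard error of measurement
    s_diff = math.sqrt(2.0) * sem                     # SE of a difference score
    rci = (post - pre) / s_diff
    return rci, abs(rci) > z_crit

print(reliable_change(pre=62.0, post=35.0, sd_baseline=18.0, reliability=0.8))
```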
Rather, Shagufta; Keen, Abid; Sajad, Peerzada
2018-01-01
To evaluate the relationship between vitamin D levels and chronic spontaneous urticaria (CSU) and compare with healthy age- and sex-matched controls. This was a hospital-based cross-sectional study conducted over a period of 1 year, in which 110 patients with CSU were recruited along with an equal number of sex- and age-matched healthy controls. For each patient, the urticaria activity score (UAS) was calculated and an autologous serum skin test (ASST) was performed. Plasma 25-hydroxyvitamin D [25-(OH)D] was analyzed by a chemiluminescence method. Vitamin D deficiency was defined as a serum 25-(OH)D concentration <30 ng/mL. The statistical analysis was carried out using appropriate statistical tests. The mean serum 25-(OH)D level of CSU patients was 19.6 ± 6.9 ng/mL, whereas in the control group the mean level was 38.5 ± 6.7 ng/mL, the difference being statistically significant (P < 0.001). A significant negative correlation was found between vitamin D levels and UAS (P < 0.001). The number of patients with ASST positivity was 44 (40%). The patients with CSU had reduced levels of vitamin D when compared to healthy controls. Furthermore, there was a significant negative correlation between the levels of serum vitamin D and the severity of CSU.
Radha, G; Swathi, V; Jha, Abhishek
2016-01-01
This study explores the association between disabilities and oral health. The aim of the study was to assess the salivary and plaque pH and oral health status of children with and without disabilities. A total of 100 schoolchildren (50 with disabilities and 50 without disabilities) aged 9 to 15 years were examined. Saliva and plaque pH analyses were performed for both groups. Clinical data were collected on periodontal status and dental caries using WHO criteria. Differences between the mean pH values of the groups were calculated using the independent t-test, and frequency distributions were analyzed using the Chi-square test. Statistical significance (P value) was set at 0.05. Mean plaque and salivary pH scores were lower (5.73 and 5.67) in children with intellectual disabilities (IDs) (P < 0.001). Subjects with disabilities also had statistically significantly higher CPI scores and decayed, missing, and filled scores than their healthy counterparts (P < 0.001). There is a statistically significant difference in plaque and salivary pH between children with and without ID, with lower plaque and salivary pH among children with ID. In addition, oral health was more compromised in children with ID, which confirms a need for preventive treatment for these children.
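For readers who want to reproduce this kind of two-group comparison, a minimal independent two-sample t-test in Python (with invented pH values, not the study's data) might look like:

```python
from scipy import stats

# illustrative pH samples for the two groups (made-up numbers, not study data)
ph_with_id = [5.6, 5.8, 5.7, 5.5, 5.9, 5.6, 5.8]
ph_without_id = [6.4, 6.6, 6.3, 6.7, 6.5, 6.4, 6.6]

t, p = stats.ttest_ind(ph_with_id, ph_without_id)  # independent two-sample t-test
print(f"t = {t:.2f}, p = {p:.4g}, significant at 0.05: {p < 0.05}")
```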
Koksal, Ayhan; Keskinkılıc, Cahit; Sozmen, Mehmet Vedat; Dirican, Ayten Ceyhan; Aysal, Fikret; Altunkaynak, Yavuz; Baybas, Sevim
2014-01-01
In this study, the cognitive functions of 9 patients who developed parkinsonism due to chronic manganese intoxication from intravenous methcathinone solution were investigated using detailed neuropsychometric tests. Attention, verbal and nonverbal memory, visuospatial function, constructive ability, language, and executive (frontal) functions of the 9 patients, who were admitted to our clinic with manifestations of chronic manganese intoxication, and of 9 control subjects were assessed using neuropsychometric tests. Two years later, detailed repeat neuropsychometric tests were performed in the patient group. The results were evaluated using the χ² test, Fisher's exact probability test, Student's t test and the Mann-Whitney U test. While there was no statistically significant difference between the two groups in language functions, visuospatial functions and constructive ability, a statistically significant difference was noted between the groups regarding attention (p = 0.032), calculation (p = 0.004), the recall and recognition domains of verbal memory, nonverbal memory (p = 0.021) and some domains of frontal functions (Stroop-5 and spontaneous recovery) (p = 0.022 and 0.012). Repeat neuropsychometric testing 2 years later showed no statistically significant change in the patients' results. It has been observed that the cognitive dysfunction seen in parkinsonism secondary to chronic manganese intoxication may be long-lasting and may not recover as observed for motor dysfunction. © 2014 S. Karger AG, Basel.
Better prognostic marker in ICU - APACHE II, SOFA or SAP II!
Naqvi, Iftikhar Haider; Mahmood, Khalid; Ziaullaha, Syed; Kashif, Syed Mohammad; Sharif, Asim
2016-01-01
This study was designed to determine the comparative efficacy of different scoring systems in assessing the prognosis of critically ill patients. This was a retrospective study conducted in the medical intensive care unit (MICU) and high dependency unit (HDU), Medical Unit III, Civil Hospital, from April 2012 to August 2012. All patients over 16 years of age who fulfilled the criteria for MICU admission were included. Predicted mortality for APACHE II, SAP II and SOFA was calculated. Calibration and discrimination were used to assess the validity of each scoring model. A total of 96 patients with an equal gender distribution were enrolled. The average APACHE II score in non-survivors (27.97 ± 8.53) was higher than in survivors (15.82 ± 8.79), a statistically significant difference (p < 0.001). The average SOFA score in non-survivors (9.68 ± 4.88) was higher than in survivors (5.63 ± 3.63), a statistically significant difference (p < 0.001). The average SAP II score in non-survivors (53.71 ± 19.05) was higher than in survivors (30.18 ± 16.24), a statistically significant difference (p < 0.001). All three tested scoring models (APACHE II, SAP II and SOFA) proved accurate enough for a general description of our ICU patients. APACHE II showed better calibration and discrimination power than SAP II and SOFA.
Vriesendorp, Pieter A; Schinkel, Arend F L; Liebregts, Max; Theuns, Dominic A M J; van Cleemput, Johan; Ten Cate, Folkert J; Willems, Rik; Michels, Michelle
2015-08-01
The recently released 2014 European Society of Cardiology guidelines of hypertrophic cardiomyopathy (HCM) use a new clinical risk prediction model for sudden cardiac death (SCD), based on the HCM Risk-SCD study. Our study is the first external and independent validation of this new risk prediction model. The study population consisted of a consecutive cohort of 706 patients with HCM without prior SCD event, from 2 tertiary referral centers. The primary end point was a composite of SCD and appropriate implantable cardioverter-defibrillator therapy, identical to the HCM Risk-SCD end point. The 5-year SCD risk was calculated using the HCM Risk-SCD formula. Receiver operating characteristic curves and C-statistics were calculated for the 2014 European Society of Cardiology guidelines, and risk stratification methods of the 2003 American College of Cardiology/European Society of Cardiology guidelines and 2011 American College of Cardiology Foundation/American Heart Association guidelines. During follow-up of 7.7±5.3 years, SCD occurred in 42 (5.9%) of 706 patients (ages 49±16 years; 34% women). The C-statistic of the new model was 0.69 (95% CI, 0.57-0.82; P=0.008), which performed significantly better than the conventional risk factor models based on the 2003 guidelines (C-statistic of 0.55: 95% CI, 0.47-0.63; P=0.3), and 2011 guidelines (C-statistic of 0.60: 95% CI, 0.50-0.70; P=0.07). The HCM Risk-SCD model improves the risk stratification of patients with HCM for primary prevention of SCD, and calculating an individual risk estimate contributes to the clinical decision-making process. Improved risk stratification is important for the decision making before implantable cardioverter-defibrillator implantation for the primary prevention of SCD. © 2015 American Heart Association, Inc.
Zygner, Wojciech; Gójska-Zygner, Olga; Wesołowska, Agnieszka; Wędrychowicz, Halina
2013-09-01
Urinary creatinine to serum creatinine (UCr/SCr) ratio and renal failure index (RFI) are useful indices of renal damage. Both UCr/SCr ratio and RFI are used in differentiation between prerenal azotaemia and acute tubular necrosis. In this work the authors calculated the UCr/SCr ratio and RFI in dogs infected with Babesia canis and the values of these indices in azotaemic dogs infected with the parasite. The results of this study showed significantly lower UCr/SCr ratio in dogs infected with B. canis than in healthy dogs. Moreover, in azotaemic dogs infected with B. canis the UCr/SCr ratio was significantly lower and the RFI was significantly higher than in non-azotaemic dogs infected with B. canis. The calculated correlation between RFI and duration of the disease before diagnosis and treatment was high, positive and statistically significant (r = 0.89, p < 0.001). The results of this study showed that during the course of canine babesiosis caused by B. canis in Poland acute tubular necrosis may develop.
Effects of abutment screw coating on implant preload.
Park, Jae-Kyoung; Choi, Jin-Uk; Jeon, Young-Chan; Choi, Kyung-Soo; Jeong, Chang-Mo
2010-08-01
The aim of the present study was to investigate the effects of tungsten carbide carbon (WC/CTa) screw surface coating on abutment screw preload in three implant connection systems, in comparison to noncoated titanium alloy (Ta) screws. The preload of WC/CTa abutment screws was compared to that of noncoated Ta screws in three implant connection systems. The differences in preload were measured via tightening rotational angle, compression force, initial screw removal torque, and postload screw removal torque after 1 million cyclic loads. The preload loss percentage was calculated to determine the efficacy of the two abutment screw types in maintaining preload in relation to the implant connection systems. WC/CTa screws provided a 10-degree higher tightening rotational angle than Ta screws in all three connection systems. This difference was statistically significant (p < 0.05). External-hex butt joint implant connections had a higher compression force than the two internal conical implant connections. WC/CTa screws provided a statistically significantly higher compression force than Ta screws in all three implant connections (p < 0.05). Ta screws required a statistically higher removal torque than WC/CTa screws in all three implant connections (p < 0.05); however, Ta screws needed a statistically lower postload removal torque than WC/CTa screws in all three implant connections (p < 0.05). Ta screws had a statistically higher preload loss percentage than WC/CTa screws in all three implant connections (p < 0.05), indicating that WC/CTa screws were superior to Ta screws in maintaining preload. Within the limits of the present study, the following conclusions were made: (1) WC/CTa screws provided higher preload than noncoated Ta screws in all three implant connection systems. (2) The initial removal torque for Ta screws required higher force than for WC/CTa screws, whereas the postload removal torque for Ta screws was lower than for WC/CTa screws. The calculated preload loss percentage for Ta screws was higher than for WC/CTa screws, suggesting that WC/CTa screws were more effective in maintaining preload than Ta screws. (3) Internal conical connections were more effective in maintaining screw preload under cyclic loads than external-hex butt joint connections.
Clare, Brian W; Supuran, Claudiu T
2005-03-15
A QSAR based almost entirely on quantum theoretically calculated descriptors has been developed for a large and heterogeneous group of aromatic and heteroaromatic carbonic anhydrase inhibitors, using orbital energies, nodal angles, atomic charges, and some other intuitively appealing descriptors. Most calculations have been done at the B3LYP/6-31G* level of theory. For the first time we have treated five-membered rings by the same means that we have used for benzene rings in the past. Our flip regression technique has been expanded to encompass automatic variable selection. The statistical quality of the results, while not equal to those we have had with benzene derivatives, is very good considering the noncongeneric nature of the compounds. The most significant correlation was with charge on the atoms of the sulfonamide group, followed by the nodal orientation and the solvation energy calculated by COSMO and the charge polarization of the molecule calculated as the mean absolute Mulliken charge over all atoms.
Irrigation water use in Kansas, 2013
Lanning-Rush, Jennifer L.
2016-03-22
This report, prepared by the U.S. Geological Survey in cooperation with the Kansas Department of Agriculture, Division of Water Resources, presents derivative statistics of 2013 irrigation water use in Kansas. The published regional and county-level statistics from the previous 4 years (2009–12) are shown with the 2013 statistics and are used to calculate a 5-year average. An overall Kansas average and regional averages also are calculated and presented. Total reported irrigation water use in 2013 was 3.3 million acre-feet of water applied to 3.0 million irrigated acres.
Calculating weighted estimates of peak streamflow statistics
Cohn, Timothy A.; Berenbrock, Charles; Kiang, Julie E.; Mason, Jr., Robert R.
2012-01-01
According to the Federal guidelines for flood-frequency estimation, the uncertainty of peak streamflow statistics, such as the 1-percent annual exceedance probability (AEP) flow at a streamgage, can be reduced by combining the at-site estimate with the regional regression estimate to obtain a weighted estimate of the flow statistic. The procedure assumes the estimates are independent, which is reasonable in most practical situations. The purpose of this publication is to describe and make available a method for calculating a weighted estimate from the uncertainty or variance of the two independent estimates.
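A minimal sketch of the inverse-variance weighting described here (illustrative numbers, not taken from the publication): if the at-site and regional estimates are independent, the weighted estimate uses weights proportional to the reciprocals of their variances.

```python
def weighted_estimate(x_site, var_site, x_reg, var_reg):
    """Variance-weighted combination of two independent estimates of the same
    flow statistic (e.g., the log of the 1-percent AEP flow).  The combined
    variance is always at most the smaller of the two input variances."""
    w_site, w_reg = 1.0 / var_site, 1.0 / var_reg
    x_w = (w_site * x_site + w_reg * x_reg) / (w_site + w_reg)
    var_w = 1.0 / (w_site + w_reg)
    return x_w, var_w

# illustrative log10-flow estimates and variances (invented values)
print(weighted_estimate(x_site=4.12, var_site=0.012, x_reg=4.30, var_reg=0.025))
```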
Jaikuna, Tanwiwat; Khadsiri, Phatchareewan; Chawapun, Nisa; Saekho, Suwit; Tharavichitkul, Ekkasit
2017-02-01
To develop an in-house software program able to calculate and generate biological dose distributions and biological dose volume histograms by physical dose conversion using the linear-quadratic-linear (LQL) model. The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distributions and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the difference between the dose volume histogram from CERR and that from the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated using the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared for EQD2 verification with paired t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc, and less than 1% in Pinnacle. The difference in EQD2 between the software calculation and the manual calculation was not statistically significant (0.00% difference; p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in the HR-CTV, bladder, and rectum, respectively). The Isobio software is a feasible tool for generating biological dose distributions and biological dose volume histograms for treatment plan evaluation in both EBRT and BT.
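For orientation, the conversion the abstract describes reduces, in the ordinary linear-quadratic regime (which the LQL model follows below its transition dose), to the standard relations for n fractions of dose d and a tissue-specific α/β ratio; the linear continuation that the LQL model applies above the transition dose is not shown here.

```latex
\mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right),
\qquad
\mathrm{EQD}_{2} = \frac{\mathrm{BED}}{1 + \dfrac{2\ \mathrm{Gy}}{\alpha/\beta}}
```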
Comparison of thermal signatures of a mine buried in mineral and organic soils
NASA Astrophysics Data System (ADS)
Lamorski, K.; Pregowski, Piotr; Swiderski, Waldemar; Usowicz, B.; Walczak, R. T.
2001-10-01
Thermal signatures of a mine buried in soils with different properties were compared using mathematical-statistical modeling. A model of transport phenomena in the soil was applied, which takes water and energy transfer into consideration. The energy transport is described using Fourier's equation. Liquid-phase transport of water is calculated using Richards' model of water flow in porous media. For the comparison, two soils were selected, one mineral and one organic, which differ significantly in thermal and hydrological properties. The heat capacity of the soil was estimated using the de Vries model. The thermal conductivity was calculated using a statistical model which incorporates fundamental soil physical properties. The soil thermal conductivity model was built on the basis of heat resistance, the two Kirchhoff laws, and a polynomial distribution. Soil hydrological properties were described using the Mualem-van Genuchten model. The impact of the thermal properties of the medium in which a mine had been placed on its thermal signature under heat-input conditions is presented. A dependence was established between the observed thermal signature of a mine and the thermal parameters of the medium.
NASA Technical Reports Server (NTRS)
Semler, T. T.
1973-01-01
The method of pseudo-resonance cross sections is used to analyze published temperature-dependent neutron transmission and self-indication measurements on tantalum in the unresolved region. In the energy region analyzed, 1825.0 to 2017.0 eV, a direct application of the pseudo-resonance approach using a customary average strength function will not provide effective cross sections which fit the measured cross section behavior. Rather, a local value of the strength function is required, and a set of resonances which model the measured behavior of the effective cross sections is derived. This derived set of resonance parameters adequately represents the observed resonance behavior in this local energy region. Similar analyses of the measurements in other unresolved energy regions are necessary to obtain local resonance parameters for improved reactor calculations. This study suggests that Doppler coefficients calculated by sampling from grand-average statistical distributions over the entire unresolved resonance region can be in error, since significant local variations in the statistical distributions are not taken into consideration.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Merrill, D.W.; Selvin, S.; Close, E.R.
In studying geographic disease distributions, one normally compares rates of arbitrarily defined geographic subareas (e.g., census tracts), thereby sacrificing the geographic detail of the original data. The sparser the data, the larger the subareas must be in order to calculate stable rates. This dilemma is avoided with the technique of Density Equalizing Map Projections (DEMP). Boundaries of geographic subregions are adjusted to equalize population density over the entire study area. Case locations plotted on the transformed map should have a uniform distribution if the underlying disease rates are constant. On the transformed map, the statistical analysis of the observed distribution is greatly simplified. Even for sparse distributions, the statistical significance of a supposed disease cluster can be reliably calculated. The present report describes the first successful application of the DEMP technique to a sizeable "real-world" data set of epidemiologic interest. An improved DEMP algorithm [GUSE93, CLOS94] was applied to a data set previously analyzed with conventional techniques [SATA90, REYN91]. The results from the DEMP analysis and a conventional analysis are compared.
Improving stochastic estimates with inference methods: calculating matrix diagonals.
Selig, Marco; Oppermann, Niels; Ensslin, Torsten A
2012-02-01
Estimating the diagonal entries of a matrix, that is not directly accessible but only available as a linear operator in the form of a computer routine, is a common necessity in many computational applications, especially in image reconstruction and statistical inference. Here, methods of statistical inference are used to improve the accuracy or the computational costs of matrix probing methods to estimate matrix diagonals. In particular, the generalized Wiener filter methodology, as developed within information field theory, is shown to significantly improve estimates based on only a few sampling probes, in cases in which some form of continuity of the solution can be assumed. The strength, length scale, and precise functional form of the exploited autocorrelation function of the matrix diagonal is determined from the probes themselves. The developed algorithm is successfully applied to mock and real world problems. These performance tests show that, in situations where a matrix diagonal has to be calculated from only a small number of computationally expensive probes, a speedup by a factor of 2 to 10 is possible with the proposed method. © 2012 American Physical Society
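The matrix-probing baseline that the paper improves on can be sketched in a few lines: a stochastic diagonal estimator using random ±1 probes. The information-field-theory Wiener-filter smoothing described above is not reproduced here; this is only the raw probing step.

```python
import numpy as np

def probe_diagonal(apply_a, n, n_probes=50, rng=None):
    """Stochastic diagonal estimate diag(A) ~ E[z * (A z)] over random
    +/-1 probe vectors z (Bekas-style probing).  `apply_a` is the matrix
    accessed only as a linear operator, matching the paper's setting."""
    rng = rng or np.random.default_rng(0)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=n)
        num += z * apply_a(z)   # accumulate elementwise z * (A z)
        den += z * z            # normalization (constant for +/-1 probes)
    return num / den

# sanity check against an explicit matrix
a = np.cov(np.random.default_rng(1).normal(size=(40, 200)))
est = probe_diagonal(lambda v: a @ v, n=40, n_probes=200)
print(np.max(np.abs(est - np.diag(a))))
```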
Knudsen, Anders Dahl; Bennike, Tue; Kjeldal, Henrik; Birkelund, Svend; Otzen, Daniel Erik; Stensballe, Allan
2014-05-30
We describe Condenser, a freely available, comprehensive open-source tool for merging multidimensional quantitative proteomics data from the Matrix Science Mascot Distiller Quantitation Toolbox into a common format ready for subsequent bioinformatic analysis. A number of different relative quantitation technologies, such as metabolic (15)N and amino acid stable isotope incorporation, label-free and chemical-label quantitation are supported. The program features multiple options for curative filtering of the quantified peptides, allowing the user to choose data quality thresholds appropriate for the current dataset, and ensure the quality of the calculated relative protein abundances. Condenser also features optional global normalization, peptide outlier removal, multiple testing and calculation of t-test statistics for highlighting and evaluating proteins with significantly altered relative protein abundances. Condenser provides an attractive addition to the gold-standard quantitative workflow of Mascot Distiller, allowing easy handling of larger multi-dimensional experiments. Source code, binaries, test data set and documentation are available at http://condenser.googlecode.com/. Copyright © 2014 Elsevier B.V. All rights reserved.
Slump, Jelena; Ferguson, Peter C; Wunder, Jay S; Griffin, Anthony; Hoekstra, Harald J; Bagher, Shaghayegh; Zhong, Toni; Hofer, Stefan O P; O'Neill, Anne C
2016-10-01
The ACS-NSQIP surgical risk calculator is an open-access online tool that estimates the risk of adverse post-operative outcomes for a wide range of surgical procedures. Wide surgical resection of soft tissue sarcoma (STS) often requires complex reconstructive procedures that can be associated with relatively high rates of complications. This study evaluates the ability of this calculator to identify patients with STS at risk for post-operative complications following flap reconstruction. Clinical details of 265 patients who underwent flap reconstruction following STS resection were entered into the online calculator. The predicted rates of complications were compared to the observed rates. The calculator model was validated using measures of prediction and discrimination. The mean predicted rate of any complication was 15.35 ± 5.6%, which differed significantly from the observed rate of 32.5% (P = 0.009). The c-statistic was relatively low at 0.626, indicating poor discrimination between patients who are at risk of complications and those who are not. The Brier score of 0.242 was significantly different from 0 (P < 0.001), indicating poor correlation between the predicted and actual probability of complications. The ACS-NSQIP universal risk calculator did not maintain its predictive value in patients undergoing flap reconstruction following STS resection. J. Surg. Oncol. 2016;114:570-575. © 2016 Wiley Periodicals, Inc.
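The two validation measures reported here are straightforward to reproduce; a minimal sketch with synthetic predicted risks and outcomes (not the study's data), using scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, brier_score_loss

# illustrative predicted complication risks and observed outcomes (invented)
rng = np.random.default_rng(2)
risk = rng.uniform(0.05, 0.45, 200)                    # predicted probabilities
outcome = rng.binomial(1, np.clip(risk * 1.4, 0, 1))   # observed complications

c_stat = roc_auc_score(outcome, risk)    # discrimination: 0.5 = chance, 1 = perfect
brier = brier_score_loss(outcome, risk)  # calibration + sharpness: 0 = perfect
print(f"c-statistic {c_stat:.3f}, Brier score {brier:.3f}")
```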
Holmes, Sean T; Iuliucci, Robbie J; Mueller, Karl T; Dybowski, Cecil
2015-11-10
Calculations of the principal components of magnetic-shielding tensors in crystalline solids require the inclusion of the effects of lattice structure on the local electronic environment to obtain significant agreement with experimental NMR measurements. We assess periodic (GIPAW) and GIAO/symmetry-adapted cluster (SAC) models for computing magnetic-shielding tensors by calculations on a test set containing 72 insulating molecular solids, with a total of 393 principal components of chemical-shift tensors from 13C, 15N, 19F, and 31P sites. When clusters are carefully designed to represent the local solid-state environment and when periodic calculations include sufficient variability, both methods predict magnetic-shielding tensors that agree well with experimental chemical-shift values, demonstrating the correspondence of the two computational techniques. At the basis-set limit, we find that the small differences in the computed values have no statistical significance for three of the four nuclides considered. Subsequently, we explore the effects of additional DFT methods available only with the GIAO/cluster approach, particularly the use of hybrid-GGA functionals, meta-GGA functionals, and hybrid meta-GGA functionals that demonstrate improved agreement in calculations on symmetry-adapted clusters. We demonstrate that meta-GGA functionals improve computed NMR parameters over those obtained by GGA functionals in all cases, and that hybrid functionals improve computed results over the respective pure DFT functional for all nuclides except 15N.
Nomogram for sample size calculation on a straightforward basis for the kappa statistic.
Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo
2014-09-01
Kappa is a widely used measure of agreement. However, it may not be straightforward to use in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation to consider the level of agreement under a certain marginal prevalence in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms that eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation based on a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
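The kappa paradox motivating these nomograms can be demonstrated directly from the relation kappa = (p_o - p_e)/(1 - p_e); a minimal sketch assuming two raters with equal marginal prevalence:

```python
def kappa_two_raters(p_agree, prevalence):
    """Cohen's kappa for two raters with equal marginal prevalence:
    kappa = (p_o - p_e) / (1 - p_e), with chance agreement p_e derived
    from the marginals."""
    p_e = prevalence**2 + (1 - prevalence)**2
    return (p_agree - p_e) / (1 - p_e)

# the 'kappa paradox': identical 90% raw agreement, very different kappas
print(kappa_two_raters(0.90, prevalence=0.50))  # ~0.80
print(kappa_two_raters(0.90, prevalence=0.95))  # near zero despite 90% agreement
```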
New estimates of asymmetric decomposition of racemic mixtures by natural beta-radiation sources
NASA Technical Reports Server (NTRS)
Hegstrom, R. A.; Rich, A.; Van House, J.
1985-01-01
Some recent calculations that appeared to invalidate the Vester-Ulbricht hypothesis, which suggests that the chirality of biological molecules originates from the beta-radiolysis of prebiotic racemic mixtures, are reexamined. These calculations apparently showed that the radiolysis-induced chiral polarization can never exceed the chiral polarization produced by statistical fluctuations. It is here shown that several overly restrictive conditions were imposed on these calculations which, when relaxed, allow the radiolysis-induced polarization to exceed that produced by statistical fluctuations, in accordance with the Vester-Ulbricht hypothesis.
Son, Seok Hyun; Kang, Young Nam; Ryu, Mi-Ryeong
2012-01-01
The aim of this study was to evaluate the effect of metallic implants on the dose calculation for radiation therapy in patients with metallic implants and to find a way to reduce the error of dose calculation. We made a phantom in which titanium implants were inserted into positions similar to the implant positions in spinal posterior/posterolateral fusion. We compared the calculated dose of the treatment planning systems with the measured dose in the treatment equipment. We used 3 kinds of computed tomography (CT) (kilovoltage CT, extended-scaled kilovoltage CT, and megavoltage CT) and 3 kinds of treatment equipment (ARTISTE, TomoTherapy Hi-Art, and Cyberknife). For measurement of doses, we used an ionization chamber and Gafchromic external beam therapy film. The absolute doses that were measured using an ionization chamber at the isocenter in the titanium phantom were on average 1.9% lower than those in the reference phantom (p = 0.002). There was no statistically significant difference according to the kinds of CT images, the treatment equipment, and the size of the targets. As the distance from the surface of the titanium implants became closer, the measured doses tended to decrease (p < 0.001), and this showed a statistically significant difference among the kinds of CT images: the effect of metallic implants was less in the megavoltage CT than in the kilovoltage CT or the extended-scaled kilovoltage CT. The error caused by the titanium implants was beyond a clinically acceptable range. To reduce the error of dose calculation, we suggest that the megavoltage CT be used for planning. In addition, it is necessary to consider the distance between the titanium implants and the targets or the organs at risk to prescribe the dose for the target and the dose constraint for the organs at risk. Copyright © 2012 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.
The Application of a Technique for Vector Correlation to Problems in Meteorology and Oceanography.
NASA Astrophysics Data System (ADS)
Breaker, L. C.; Gemmill, W. H.; Crosby, D. S.
1994-11-01
In a recent study, Crosby et al. proposed a definition for vector correlation that has not been commonly used in meteorology or oceanography. This definition has both a firm theoretical basis and a rather complete set of desirable statistical properties. In this study, the authors apply the definition to practical problems arising in meteorology and oceanography. In the first of two case studies, vector correlations were calculated between subsurface currents for five locations along the southeastern shore of Lake Erie. Vector correlations for one sample size were calculated for all current meter combinations, first including the seiche frequency and then with the seiche frequency removed. Removal of the seiche frequency, which was easily detected in the current spectra, had only a small effect on the vector correlations. Under reasonable assumptions, the vector correlations were in most cases statistically significant and revealed considerable fine structure in the vector correlation sequences. In some cases, major variations in vector correlation coincided with changes in surface wind. The vector correlations for the various current meter combinations decreased rapidly with increasing spatial separation. For one current meter combination, canonical correlations were also calculated; the first canonical correlation tended to retain the underlying trend, whereas the second canonical correlation retained the peaks in the vector correlations.In the second case study, vector correlations were calculated between marine surface winds derived from the National Meteorological Center's Global Data Assimilation System and observed winds acquired from the network of National Data Buoy Center buoys that are located off the continental United States and in the Gulf of Alaska. Results of this comparison indicated that 1) there was a significant decrease in correlation between the predicted and observed winds with increasing forecast interval out to 72 h, 2) the technique provides a sensitive indicator for detecting bad buoy reports, and 3) there was no obvious seasonal cycle in the monthly vector correlations for the period of observation.
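The vector correlation referred to above is commonly written as the trace of Σ₁₁⁻¹Σ₁₂Σ₂₂⁻¹Σ₂₁, i.e., the sum of the squared canonical correlations, ranging from 0 to 2 for two-component vectors; a sketch with synthetic wind and current series (my reading of the Crosby et al. definition, stated here as an assumption):

```python
import numpy as np

def vector_correlation(u, v):
    """Squared vector correlation tr(S11^-1 S12 S22^-1 S21): the sum of the
    squared canonical correlations between two 2-component vector series.

    u, v: (n, 2) arrays of vector components (e.g. eastward, northward)."""
    u = u - u.mean(axis=0)
    v = v - v.mean(axis=0)
    n = len(u)
    s11 = u.T @ u / n
    s22 = v.T @ v / n
    s12 = u.T @ v / n
    m = np.linalg.solve(s11, s12) @ np.linalg.solve(s22, s12.T)
    return np.trace(m)

rng = np.random.default_rng(3)
wind = rng.normal(size=(500, 2))
current = 0.7 * wind + 0.3 * rng.normal(size=(500, 2))  # partially wind-driven
print(vector_correlation(wind, current))  # near 2 would mean perfect coupling
```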
Quantitative Assessment of Knee Progression Angle During Gait in Children With Cerebral Palsy.
Davids, Jon R; Cung, Nina Q; Pomeroy, Robin; Schultz, Brooke; Torburn, Leslie; Kulkarni, Vedant A; Brown, Sean; Bagley, Anita M
2018-04-01
Abnormal hip rotation is a common gait deviation in children with cerebral palsy (CP). Clinicians typically assess hip rotation during gait by observing the direction that the patella points relative to the path of walking, referred to as the knee progression angle (KPA). Two kinematic methods for calculating the KPA are compared with each other. Video-based qualitative assessment of the KPA is compared with the quantitative methods to determine reliability and validity. The KPA was calculated by both direct and indirect methods for 32 typically developing (TD) children and a convenience cohort of 43 children with hemiplegic-type CP. An additional convenience cohort of 26 children with hemiplegic-type CP was selected for qualitative assessment of the KPA, performed by 3 experienced clinicians, using 3 categories (internal, >10 degrees; neutral, -10 to 10 degrees; and external, <-10 degrees). Root mean square (RMS) analysis comparing the direct and indirect KPAs gave 1.14 ± 0.43 degrees for TD children and 1.75 ± 1.54 degrees for the affected side of children with CP. The difference in RMS between the 2 groups was statistically, but not clinically, significant (P=0.019). The intraclass correlation coefficient revealed excellent agreement between the direct and indirect methods of KPA calculation for TD and CP children (0.996 and 0.992, respectively; P<0.001). For the qualitative assessment of the KPA there was complete agreement among all examiners for 17 of 26 cases (65%). Direct KPA matched for 49 of 78 observations (63%) and indirect KPA matched for 52 of 78 observations (67%). The RMS difference between the direct and indirect methods for the KPA was statistically but not clinically significant, which supports the use of either method based upon availability. Video-based qualitative assessment of the KPA showed moderate reliability and validity. The differences between observed and calculated KPA indicate the need for caution when relying on visual assessments for clinical interpretation, and demonstrate the value of adding KPA calculation to standard kinematic analysis. Level II-diagnostic test.
Kissling, Grace E; Haseman, Joseph K; Zeiger, Errol
2015-09-02
A recent article by Gaus (2014) demonstrates a serious misunderstanding of the NTP's statistical analysis and interpretation of rodent carcinogenicity data as reported in Technical Report 578 (Ginkgo biloba) (NTP, 2013), as well as a failure to acknowledge the abundant literature on false positive rates in rodent carcinogenicity studies. The NTP reported Ginkgo biloba extract to be carcinogenic in mice and rats. Gaus claims that, in this study, 4800 statistical comparisons were possible, and that 209 of them were statistically significant (p<0.05) compared with 240 (4800×0.05) expected by chance alone; thus, the carcinogenicity of Ginkgo biloba extract cannot be definitively established. However, his assumptions and calculations are flawed since he incorrectly assumes that the NTP uses no correction for multiple comparisons, and that significance tests for discrete data operate at exactly the nominal level. He also misrepresents the NTP's decision making process, overstates the number of statistical comparisons made, and ignores the fact that the mouse liver tumor effects were so striking (e.g., p<0.0000000000001) that it is virtually impossible that they could be false positive outcomes. Gaus' conclusion that such obvious responses merely "generate a hypothesis" rather than demonstrate a real carcinogenic effect has no scientific credibility. Moreover, his claims regarding the high frequency of false positive outcomes in carcinogenicity studies are misleading because of his methodological misconceptions and errors. Published by Elsevier Ireland Ltd.
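The expected-count arithmetic at issue is simple to reproduce; the sketch below computes Gaus's naive expectation of 240 chance-significant tests, an expectation the authors argue is invalid because the comparisons are neither independent nor conducted exactly at the nominal level.

```python
from scipy.stats import binom

n_tests, alpha = 4800, 0.05
expected = n_tests * alpha          # Gaus's naive expectation: 240 chance hits
observed = 209
# tail probability of observing <= 209 'significant' results if all 4800 tests
# were independent and operated exactly at the 5% level -- assumptions the
# authors argue do not hold for correlated tumor endpoints
print(expected, binom.cdf(observed, n_tests, alpha))
```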
Heads Up! a Calculation- & Jargon-Free Approach to Statistics
ERIC Educational Resources Information Center
Giese, Alan R.
2012-01-01
Evaluating the strength of evidence in noisy data is a critical step in scientific thinking that typically relies on statistics. Students without statistical training will benefit from heuristic models that highlight the logic of statistical analysis. The likelihood associated with various coin-tossing outcomes gives students such a model. There…
NASA Astrophysics Data System (ADS)
Monaghan, Kari L.
The problem addressed was the concern that aircraft safety rates may be related to the rate of maintenance outsourcing. Data gathered from 14 passenger airlines: AirTran, Alaska, America West, American, Continental, Delta, Frontier, Hawaiian, JetBlue, Midwest, Northwest, Southwest, United, and USAir covered the years 1996 through 2008. A quantitative correlational design, utilizing Pearson's correlation coefficient and the coefficient of determination, was used in the present study to measure the correlation between variables. Elements of passenger airline aircraft maintenance outsourcing and aircraft accidents, incidents, and pilot deviations within domestic passenger airline operations were analyzed, examined, and evaluated. Rates of maintenance outsourcing were analyzed to determine their association with accident, incident, and pilot deviation rates. The maintenance outsourcing rate used in the evaluation was the yearly dollar expenditure of passenger airlines for aircraft maintenance outsourcing relative to total airline aircraft maintenance expenditures. The aircraft accident, incident, and pilot deviation rates used in the evaluation were the yearly numbers of accidents, incidents, and pilot deviations per miles flown. Pearson r values were calculated to measure the strength of the linear relationship between the variables. There were no statistically significant correlation findings for accidents, r(174)=0.065, p=0.393, or incidents, r(174)=0.020, p=0.793. However, there was a statistically significant correlation between maintenance outsourcing rates and pilot deviation rates, r(174)=0.204, p=0.007. The calculated R-square value of 0.042 represents the variance in aircraft pilot deviation rates that can be accounted for by the variance in aircraft maintenance outsourcing rates; accordingly, 95.8% of the variance is unexplained. Suggestions for future research include replication of the present study with the inclusion of maintenance outsourcing rate data for all airlines, differentiated between domestic and foreign repair station utilization. Replication of the present study every five years is also encouraged to continue evaluating the impact of maintenance outsourcing practices on passenger airline safety.
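The reported link between r and explained variance follows directly from R² = r²; a minimal sketch with invented paired rates (not the study's data):

```python
from scipy import stats

# illustrative paired rates: outsourcing fraction vs. pilot-deviation rate
outsourcing = [0.31, 0.35, 0.40, 0.42, 0.47, 0.52, 0.55, 0.60]
deviations  = [0.80, 0.83, 0.79, 0.90, 0.88, 0.95, 0.91, 0.99]

r, p = stats.pearsonr(outsourcing, deviations)
print(f"r = {r:.3f}, p = {p:.3f}, R^2 = {r**2:.3f} "
      f"({100 * (1 - r**2):.1f}% of variance unexplained)")
```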
Taoka, Toshiaki; Kawai, Hisashi; Nakane, Toshiki; Hori, Saeka; Ochi, Tomoko; Miyasaka, Toshiteru; Sakamoto, Masahiko; Kichikawa, Kimihiko; Naganawa, Shinji
2016-09-01
The "K2" value is a factor that represents the vascular permeability of tumors and can be calculated from datasets obtained with the dynamic susceptibility contrast (DSC) method. The purpose of the current study was to correlate K2 with Ktrans, which is a well-established permeability parameter obtained with the dynamic contrast enhance (DCE) method, and determine the usefulness of K2 for glioma grading with histogram analysis. The subjects were 22 glioma patients (Grade II: 5, III: 6, IV: 11) who underwent DSC studies, including eight patients in which both DSC and DCE studies were performed on separate days within 10days. We performed histogram analysis of regions of interest of the tumors and acquired 20th percentile values for leakage-corrected cerebral blood volume (rCBV20%ile), K2 (K220%ile), and for patients who underwent a DCE study, Ktrans (Ktrans20%ile). We evaluated the correlation between K220%ile and Ktrans20%ile and the statistical difference between rCBV20%ile and K220%ile. We found a statistically significant correlation between K220%ile and Ktrans20%ile (r=0.717, p<0.05). rCBV20%ile showed a significant difference between Grades II and III and between Grades II and IV, whereas K220%ile showed a statistically significant (p<0.05) difference between Grades II and IV and between Grades III and IV. The K2 value calculated from the DSC dataset, which can be obtained with a short acquisition time, showed a correlation with Ktrans obtained with the DCE method and may be useful for glioma grading when analyzed with histogram analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Perser, Karen; Godfrey, David; Bisson, Leslie
2011-01-01
Context: Double-row rotator cuff repair methods have improved biomechanical performance when compared with single-row repairs. Objective: To review clinical outcomes of single-row versus double-row rotator cuff repair with the hypothesis that double-row rotator cuff repair will result in better clinical and radiographic outcomes. Data Sources: Published literature from January 1980 to April 2010. Key terms included rotator cuff, prospective studies, outcomes, and suture techniques. Study Selection: The literature was systematically searched, and 5 level I and II studies were found comparing clinical outcomes of single-row and double-row rotator cuff repair. Coleman methodology scores were calculated for each article. Data Extraction: Meta-analysis was performed, with treatment effect between single row and double row for clinical outcomes and with odds ratios for radiographic results. The sample size necessary to detect a given difference in clinical outcome between the 2 methods was calculated. Results: Three level I studies had Coleman scores of 80, 74, and 81, and two level II studies had scores of 78 and 73. There were 156 patients with single-row repairs and 147 patients with double-row repairs, both with an average follow-up of 23 months (range, 12-40 months). Double-row repairs resulted in a greater treatment effect for each validated outcome measure in 4 studies, but the differences were not clinically or statistically significant (range, 0.4-2.2 points; 95% confidence interval, –0.19, 4.68 points). Double-row repairs had better radiographic results, but the differences were also not statistically significant (P = 0.13). Two studies had adequate power to detect a 10-point difference between repair methods using the Constant score, and 1 study had power to detect a 5-point difference using the UCLA (University of California, Los Angeles) score. Conclusions: Double-row rotator cuff repair does not show a statistically significant improvement in clinical outcome or radiographic healing with short-term follow-up. PMID:23016017
Ohno, Yoshiharu; Nishio, Mizuho; Koyama, Hisanobu; Fujisawa, Yasuko; Yoshikawa, Takeshi; Matsumoto, Sumiaki; Sugimura, Kazuro
2013-06-01
The objective of our study was to prospectively compare the capability of dynamic area-detector CT, analyzed with different mathematical methods, and PET/CT in the management of pulmonary nodules. Fifty-two consecutive patients with 96 pulmonary nodules underwent dynamic area-detector CT, PET/CT, and microbacterial or pathologic examinations. All nodules were classified into the following groups: malignant nodules (n = 57), benign nodules with low biologic activity (n = 15), and benign nodules with high biologic activity (n = 24). On dynamic area-detector CT, the total, pulmonary arterial, and systemic arterial perfusions were calculated using the dual-input maximum slope method; perfusion was calculated using the single-input maximum slope method; and the extraction fraction and blood volume (BV) were calculated using the Patlak plot method. All indexes were statistically compared among the three nodule groups. Then, receiver operating characteristic analyses were used to compare the diagnostic capabilities of the maximum standardized uptake value (SUVmax) and each perfusion parameter showing a significant difference between malignant and benign nodules. Finally, the diagnostic performances of the indexes were compared by means of the McNemar test. No adverse effects were observed in this study. All indexes except the extraction fraction and BV, both of which were calculated using the Patlak plot method, showed significant differences among the three groups (p < 0.05). The areas under the curve for total perfusion calculated using the dual-input method, pulmonary arterial perfusion calculated using the dual-input method, and perfusion calculated using the single-input method were significantly larger than that for SUVmax (p < 0.05). The accuracy of total perfusion (83.3%) was significantly greater than the accuracy of the other indexes: pulmonary arterial perfusion (72.9%, p < 0.05), systemic arterial perfusion calculated using the dual-input method (69.8%, p < 0.05), perfusion (66.7%, p < 0.05), and SUVmax (60.4%, p < 0.05). Dynamic area-detector CT analyzed using the dual-input maximum slope method has better potential for the diagnosis of pulmonary nodules than dynamic area-detector CT analyzed using the other methods and than PET/CT.
Manterola, Carlos; Busquets, Juli; Pascual, Marta; Grande, Luis
2006-02-01
The aim of this study was to determine the methodological quality of articles on therapeutic procedures published in Cirugía Española and to study its association with publication year, center, and subject matter. A bibliometric study was performed that included all articles on therapeutic procedures published in Cirugía Española between 2001 and 2004. All kinds of clinical designs were considered, excluding editorials, review articles, letters to the editor, and experimental studies. The variables analyzed were: year of publication, center, design, and methodological quality. Methodological quality was determined by a valid and reliable scale. Descriptive statistics (calculation of means, standard deviations and medians) and analytical statistics (Pearson's chi-square, nonparametric, ANOVA and Bonferroni tests) were used. A total of 244 articles were studied (197 case series [81%], 28 cohort studies [12%], 17 clinical trials [7%], 1 cross-sectional study and 1 case-control study [0.8%]). The studies were performed mainly in Catalonia and Murcia (22% and 16%, respectively). The most frequent subject areas were soft tissue and hepatobiliopancreatic surgery (23% and 19%, respectively). The mean and median methodological quality scores calculated for the entire series were 10.2 ± 3.9 points and 9.5 points, respectively. Methodological quality significantly increased by publication year (p < 0.001). An association between methodological quality and subject area was observed, but no association was detected with the center performing the study. The methodological quality of articles on therapeutic procedures published in Cirugía Española between 2001 and 2004 is low. However, a statistically significant trend toward improvement was observed.
Factors leading to the computer vision syndrome: an issue at the contemporary workplace.
Izquierdo, Juan C; García, Maribel; Buxó, Carmen; Izquierdo, Natalio J
2007-01-01
Vision and eye related problems are common among computer users and have been collectively called the Computer Vision Syndrome (CVS). An observational study was done in order to identify the risk factors leading to the CVS. Twenty-eight participants answered a validated questionnaire and had their workstations examined. The questionnaire evaluated personal, environmental, and ergonomic factors, and the physiologic response of computer users. The distance from the eye to the computer monitor (A), the computer monitor height (B), and the visual axis height (C) were measured. The difference between B and C was calculated and labeled as D. Angles of gaze to the computer monitor were calculated using the formula: angle = tan⁻¹(D/A). Angles were divided into two groups: participants with angles of gaze ranging from 0 degrees to 13.9 degrees were included in Group 1, and participants gazing at angles larger than 14 degrees were included in Group 2. Statistical analysis of the evaluated variables was made. Computer users in both groups used more tear supplements (as part of the syndrome) than expected. This association was statistically significant (p < 0.10). Participants in Group 1 reported more pain than participants in Group 2. Associations between the CVS and other personal or ergonomic variables were not statistically significant. Our findings show that the most important factor leading to the syndrome is the angle of gaze at the computer monitor. Pain in computer users is diminished when gazing downwards at angles of 14 degrees or more. The CVS remains an underestimated and poorly understood issue at the workplace. The general public, health professionals, the government, and private industries need to be educated about the CVS.
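The study's gaze-angle formula is straightforward to apply; a minimal sketch follows, with invented measurements and an assumed sign convention (positive D meaning the monitor sits below eye level):

```python
import math

def gaze_angle_deg(eye_to_monitor, monitor_height, eye_height):
    """Angle of gaze below (positive) or above (negative) the horizontal:
    angle = atan(D / A), with D the height difference between the visual
    axis and the monitor, and A the eye-to-monitor distance."""
    d = eye_height - monitor_height  # sign convention assumed, not from the study
    return math.degrees(math.atan2(d, eye_to_monitor))

# a monitor 15 cm below eye level at 60 cm viewing distance: ~14 degrees down
print(gaze_angle_deg(eye_to_monitor=60.0, monitor_height=105.0, eye_height=120.0))
```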
Mandell, Jacob C; Rhodes, Jeffrey A; Shah, Nehal; Gaviola, Glenn C; Gomoll, Andreas H; Smith, Stacy E
2017-11-01
Accurate assessment of knee articular cartilage is clinically important. Although 3.0 Tesla (T) MRI is reported to offer improved diagnostic performance, literature regarding the clinical impact of MRI field strength is lacking. The purpose of this study is to compare the diagnostic performance of clinical MRI reports for assessment of cartilage at 1.5 and 3.0 T in comparison to arthroscopy. This IRB-approved retrospective study consisted of 300 consecutive knees in 297 patients who had routine clinical MRI and arthroscopy. Descriptions of cartilage from MRI reports of 165 knees at 1.5 T and 135 at 3.0 T were compared with arthroscopy. The sensitivity, specificity, percent of articular surfaces graded concordantly, and percent of articular surfaces graded within one grade of the arthroscopic grading were calculated for each articular surface at 1.5 and 3.0 T. Agreement between MRI and arthroscopy was calculated with the weighted-kappa statistic. Significance testing was performed utilizing the z-test after bootstrapping to obtain the standard error. The sensitivity, specificity, percent of articular surfaces graded concordantly, and percent of articular surfaces graded within one grade were 61.4%, 82.7%, 62.2%, and 77.5% at 1.5 T and 61.8%, 80.6%, 59.5%, and 75.6% at 3.0 T, respectively. The weighted kappa statistic was 0.56 at 1.5 T and 0.55 at 3.0 T. There was no statistically significant difference in any of these parameters between 1.5 and 3.0 T. Factors potentially contributing to the lack of diagnostic advantage of 3.0 T MRI are discussed.
Effects of Topical Simvastatin for the Treatment of Chronic Vascular Cutaneous Ulcers: A Pilot Study.
Raposio, Edoardo; Libondi, Guido; Bertozzi, Nicolò; Grignaffini, Eugenio; Grieco, Michele P
2015-12-01
Recent research suggests that statins might be useful in the process of wound healing, playing a positive immune-modulatory role, improving microvascular function and reducing oxidative stress. The aim of this pilot study was to evaluate the efficacy of topical application of a simvastatin-based cream in the treatment of chronic vascular cutaneous ulcers, comparing this type of treatment to a collagen-based dressing proven to be effective for ulcer treatment. A total of 20 ulcers were studied in 2 groups of randomly chosen patients for a period of one month. In the first group a 0.5% simvastatin-based cream was topically administered, while the second group (control) was treated with an absorbable type I bovine collagen-based medication. Each week, wound healing progress was observed in both groups, and the ulcers photographed. Wound healing rate was calculated by considering the absolute change in area and by the formula "healing ratio (%) = [(Area_0 - Area_t4)/Area_0] × 100", both sets of data being related to the days comprised in the study in order to calculate the healing rate per day. Statistical analysis was performed by Student's t-test, the study endpoint being the time-course change of ulcer areas. At the end of the study, when considering absolute change in area, the experimental group appeared to heal better and faster than the control group, although the differences between the groups were not statistically significant. Conversely, rates of wound healing in the experimental and control groups were 46.88% and 64% respectively, revealing a statistically significant difference (P < 0.05). In conclusion, topical application of a simvastatin-based cream proved to be well tolerated but not effective in the management of vascular leg ulcers over a 4-week period.
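A minimal sketch of the healing-rate arithmetic quoted in the abstract; the wound areas and the 28-day window below are illustrative assumptions, not study measurements.

```python
# healing ratio (%) = [(Area_0 - Area_t4) / Area_0] * 100, then scaled per day.
def healing_ratio_pct(area_0: float, area_t4: float) -> float:
    return (area_0 - area_t4) / area_0 * 100.0

def healing_rate_per_day(area_0: float, area_t4: float, days: int = 28) -> float:
    # the one-month study duration is assumed to be 28 days here
    return healing_ratio_pct(area_0, area_t4) / days

print(healing_ratio_pct(10.0, 5.3))      # 47.0%, near the experimental-group figure
print(healing_rate_per_day(10.0, 5.3))   # percent of the initial area healed per day
```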
Dall'Agnol, Cristina; Hartmann, Mateus Silveira Martins; Barletta, Fernando Branco
2008-01-01
This study evaluated the efficiency of different techniques for removal of filling material from root canals, using computed tomography (CT). Sixty mesial roots from extracted human mandibular molars were used. Root canals were filled and, after 6 months, the teeth were randomly assigned to 3 groups, according to the root-filling removal technique: Group A - hand instrumentation with K-type files; Group B - reciprocating instrumentation with engine-driven K-type files; and Group C - rotary instrumentation with the engine-driven ProTaper system. CT scans were used to assess the volume of filling material inside the root canals before and after the removal procedure. At both time points, the area of filling material was outlined by an experienced radiologist and the volume of filling material was automatically calculated by the CT software program. Based on the volume of initial and residual filling material of each specimen, the percentage of filling material removed from the root canals by the different techniques was calculated. Data were analyzed statistically by ANOVA and chi-square test for linear trend (α = 0.05). No statistically significant difference (p = 0.36) was found among the groups regarding the mean percentage of removed filling material. The analysis of the association between the percentage of filling material removal (high or low) and the proposed techniques by chi-square test showed a statistically significant difference (p = 0.015), as most cases in Group B (reciprocating technique) presented less than 50% of filling material removed (low percent removal). In conclusion, none of the techniques evaluated in this study was effective in providing complete removal of filling material from the root canals.
Developing chemical criteria for wildlife: The benchmark dose versus NOAEL approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linder, G.
1995-12-31
Wildlife may be exposed to a wide variety of chemicals in their environment, and various strategies for evaluating wildlife risk for these chemicals have been developed. One, a "no-observable-adverse-effects-level" or NOAEL approach, has increasingly been applied to develop chemical criteria for wildlife. In this approach, the NOAEL represents the highest experimental concentration at which there is no statistically significant change in some toxicity endpoint relative to a control. Another, the "benchmark dose" or BMD approach, relies on the lower confidence limit for a concentration that corresponds to a small, but statistically significant, change in effect over some reference condition. Rather than corresponding to a single experimental concentration as does the NOAEL, the BMD approach considers the full concentration-response curve for derivation of the BMD. Here, using a variety of vertebrates and an assortment of chemicals (including carbofuran, paraquat, methylmercury, cadmium, zinc, and copper), the NOAEL approach will be critically evaluated relative to the BMD approach. Statistical models used in the BMD approach suggest these methods are potentially available for eliminating safety factors in risk calculations. A reluctance to recommend this, however, stems from the uncertainty associated with the shape of concentration-response curves at low concentrations. Also, with existing data the derivation of BMDs has shortcomings when sample size is small (10 or fewer animals per treatment). The success of BMD models clearly depends upon the continued collection of wildlife data in the field and laboratory, the design of toxicity studies sufficient for BMD calculations, and complete reporting of these results in the literature. Overall, the BMD approach for developing chemical criteria for wildlife should be given further consideration, since it more fully evaluates concentration-response data.
Effectiveness of the training material in drug-dose calculation skills.
Basak, Tulay; Aslan, Ozlem; Unver, Vesile; Yildiz, Dilek
2016-07-01
The aim of this study was to evaluate the effectiveness of training material based on low-level environmental fidelity simulation in drug-dose calculation skills in senior nursing students. A quasi-experimental one-group design was used. The sample included 82 senior nursing students attending a nursing school in Turkey in the period December 2012-January 2013. Data were obtained using a data collection form developed by the researchers. A paired-sample t-test was used to compare the pretest and post-test scores. The difference between the mean pretest score and the mean post-test score was statistically significant (P < 0.05). This study revealed that the training material based on low-level environmental fidelity simulation positively impacted accurate drug-dose calculation skills in senior nursing students. © 2016 Japan Academy of Nursing Science.
Charge transfer of O3+ ions with atomic hydrogen
NASA Astrophysics Data System (ADS)
Wang, J. G.; Stancil, P. C.; Turner, A. R.; Cooper, D. L.
2003-01-01
Charge transfer processes due to collisions of ground state O^3+(2s^2 2p ^2P) ions with atomic hydrogen are investigated using the quantum-mechanical molecular-orbital close-coupling (MOCC) method. The MOCC calculations utilize ab initio adiabatic potentials and nonadiabatic radial and rotational coupling matrix elements obtained with the spin-coupled valence-bond approach. Total and state-selective cross sections and rate coefficients are presented. Comparison with existing experimental and theoretical data shows our results to be in better agreement with the measurements than the previous calculations, although problems with some of the state-selective measurements are noted. Our calculations demonstrate that rotational coupling is not important for the total cross section, but for state-selective cross sections its relevance increases with energy. For the ratios of triplet to singlet cross sections, significant departures from a statistical value are found, generally in harmony with experiment.
NASA Astrophysics Data System (ADS)
Garber, E. A.; Diligenskii, E. V.; Antonov, P. V.; Shalaevskii, D. L.; Dyatlov, I. A.
2017-09-01
The factors in the production of cold-rolled steel strips that promote or hinder the appearance of a coil lap welding defect upon annealing in bell-type furnaces are analyzed using statistical methods. Earlier works dealing with this problem are analytically reviewed to reveal the questions that remain to be studied and refined. The technological factors are ranked according to the significance of their influence on the probability of appearance of this defect, the ranking is supported by industrial data, and a regression equation is derived to calculate this probability. The production process is improved to minimize the rejection of strips caused by the welding of coil laps.
Association between the electromyographic fatigue threshold and ventilatory threshold.
Camata, T V; Lacerda, T R; Altimari, L R; Bortolloti, H; Fontes, E B; Dantas, J L; Nakamura, F Y; Abrão, T; Chacon-Mikahil, M P T; Moraes, A C
2009-01-01
The objective of this study is to verify the coincidence between the occurrence of the electromyographic fatigue threshold (EMGth) and the ventilatory threshold (Vth) in an incremental test in the cyclosimulator, as well as to compare the calculation of the RMS from the EMG signal using different time windows. Thirteen male cyclists (73.7 +/- 12.4 kg and 174.3 +/- 6.2 cm) performed a ramp incremental test (IT) in a cyclosimulator until voluntary exhaustion. Before the start of each IT, subjects had active bipolar electrodes placed over the superficial muscles of the quadriceps femoris (QF) of the right leg: rectus femoris (RF), vastus medialis (VM) and vastus lateralis (VL). The paired Student's t-test, Pearson's correlation coefficient and the analysis method described by Bland and Altman for the determination of the level of agreement were used for statistical analysis. The significance level adopted was P < 0.05. Although no significant differences were found between Vth and the EMGth calculated from windows of 2, 5, 10, 30 and 60 seconds in the studied muscles, the EMGth values determined from RMS curves calculated with windows of 5 and 10 seconds seem to be the most appropriate for the determination of EMGth by visual inspection.
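A sketch of RMS computation over non-overlapping time windows of the kind compared above; the 1 kHz sampling rate and the synthetic signal are assumptions for illustration, not the study's acquisition settings.

```python
# One RMS value per complete window; window lengths of 2-60 s as in the abstract.
import numpy as np

def windowed_rms(emg: np.ndarray, fs_hz: float, window_s: float) -> np.ndarray:
    n = int(fs_hz * window_s)                 # samples per window
    usable = len(emg) - len(emg) % n          # drop the trailing partial window
    windows = emg[:usable].reshape(-1, n)
    return np.sqrt(np.mean(windows ** 2, axis=1))

rng = np.random.default_rng(0)
signal = rng.normal(scale=0.1, size=60_000)   # 60 s of fake EMG at 1 kHz
for w in (2, 5, 10, 30, 60):
    print(w, windowed_rms(signal, 1_000, w).shape)
```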
Effect size and statistical power in the rodent fear conditioning literature - A systematic review.
Carneiro, Clarissa F D; Moulin, Thiago C; Macleod, Malcolm R; Amaral, Olavo B
2018-01-01
Proposals to increase research reproducibility frequently call for focusing on effect sizes instead of p values, as well as for increasing the statistical power of experiments. However, it is unclear to what extent these two concepts are indeed taken into account in basic biomedical science. To study this in a real-case scenario, we performed a systematic review of effect sizes and statistical power in studies on learning of rodent fear conditioning, a widely used behavioral task to evaluate memory. Our search criteria yielded 410 experiments comparing control and treated groups in 122 articles. Interventions had a mean effect size of 29.5%, and amnesia caused by memory-impairing interventions was nearly always partial. Mean statistical power to detect the average effect size observed in well-powered experiments with significant differences (37.2%) was 65%, and was lower among studies with non-significant results. Only one article reported a sample size calculation, and our estimated sample size to achieve 80% power considering typical effect sizes and variances (15 animals per group) was reached in only 12.2% of experiments. Actual effect sizes correlated with effect size inferences made by readers on the basis of textual descriptions of results only when findings were non-significant, and neither effect size nor power correlated with study quality indicators, number of citations or impact factor of the publishing journal. In summary, effect sizes and statistical power have a wide distribution in the rodent fear conditioning literature, but do not seem to have a large influence on how results are described or cited. Failure to take these concepts into consideration might limit attempts to improve reproducibility in this field of science.
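A back-of-envelope version of the sample-size calculation the review found almost universally missing might look like the following; the normal-approximation formula and the example effect size are assumptions for illustration, not the authors' exact procedure.

```python
# n per group for a two-sided two-sample t-test, normal approximation.
from scipy import stats

def n_per_group(effect_size_d: float, alpha: float = 0.05, power: float = 0.80) -> float:
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    return 2 * ((z_a + z_b) / effect_size_d) ** 2

# A standardized effect around d ~ 1.0-1.1 lands near the ~15 animals per group
# quoted in the abstract (exact values depend on the assumed variance).
print(round(n_per_group(1.05)))
```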
Reimold, Matthias; Slifstein, Mark; Heinz, Andreas; Mueller-Schauenburg, Wolfgang; Bares, Roland
2006-06-01
Voxelwise statistical analysis has become popular in explorative functional brain mapping with fMRI or PET. Usually, results are presented as voxelwise levels of significance (t-maps), and for clusters that survive correction for multiple testing the coordinates of the maximum t-value are reported. Before calculating a voxelwise statistical test, spatial smoothing is required to achieve a reasonable statistical power. Little attention is being given to the fact that smoothing has a nonlinear effect on the voxel variances and thus the local characteristics of a t-map, which becomes most evident after smoothing over different types of tissue. We investigated the related artifacts, for example, white matter peaks whose position depend on the relative variance (variance over contrast) of the surrounding regions, and suggest improving spatial precision with 'masked contrast images': color-codes are attributed to the voxelwise contrast, and significant clusters (e.g., detected with statistical parametric mapping, SPM) are enlarged by including contiguous pixels with a contrast above the mean contrast in the original cluster, provided they satisfy P < 0.05. The potential benefit is demonstrated with simulations and data from a [11C]Carfentanil PET study. We conclude that spatial smoothing may lead to critical, sometimes-counterintuitive artifacts in t-maps, especially in subcortical brain regions. If significant clusters are detected, for example, with SPM, the suggested method is one way to improve spatial precision and may give the investigator a more direct sense of the underlying data. Its simplicity and the fact that no further assumptions are needed make it a useful complement for standard methods of statistical mapping.
The Web as an educational tool for/in learning/teaching bioinformatics statistics.
Oliver, J; Pisano, M E; Alonso, T; Roca, P
2005-12-01
Statistics provides essential tools in bioinformatics for interpreting the results of a database search or for managing the enormous amounts of information provided by genomics, proteomics and metabolomics. The goal of this project was the development of a software tool that would be as simple as possible to demonstrate the use of statistics in bioinformatics. Computer Simulation Methods (CSMs) developed using Microsoft Excel were chosen for their broad range of applications, immediate and easy formula calculation, immediate testing and easy graphical representation, and for the general use and acceptance of Excel by the scientific community. The result of these endeavours is a set of utilities which can be accessed from the following URL: http://gmein.uib.es/bioinformatica/statistics. When tested on students with previous coursework under traditional statistical teaching methods, the general consensus was that Web-based instruction had numerous advantages, but that traditional methods with manual calculations were also needed for theory and practice. Once the basic statistical formulas had been mastered, Excel spreadsheets and graphics were shown to be very useful for trying many parameters in a rapid fashion without having to perform tedious calculations. CSMs should prove valuable for the training of students and professionals in the field of bioinformatics, and for upcoming applications in self-directed learning and continuing education.
Kovac, Christine M; Brown, Jennifer A; Apodaca, Christina C; Napolitano, Peter G; Pierce, Brian; Patience, Troy; Hume, Roderick F; Calhoun, Byron C
2002-07-01
To determine whether current methods for detecting Down syndrome based on fetal femur length calculations are influenced by ethnicity. The study population consisted of all fetuses scanned between 14 and 20 completed weeks' gestation from April 1, 1997, to January 1, 2000. The expected femur length was calculated from the biparietal diameter. The variance from the expected femur length, compared with the biparietal diameter, was calculated, and the mean variations were compared by maternal race. Ethnic-specific formulas for expected femur length were derived by simple regression. There was a statistically significant difference in femur length in the Asian group compared with all other groups, as well as the white group compared with the black and Asian groups (P < .05). However, there was no significant difference between the black and Hispanic groups or the white and Hispanic groups. The Asian group had the largest variation, with the measured femur length being less than the expected femur length. All groups studied had a mean expected femur length less than the mean measured femur length. On the basis of the ethnic-specific formulas for femur length, there was a significant decrease in patients that would undergo further evaluation for Down syndrome. There is a significant difference in the mean expected femur length by biparietal diameter among fetuses in the second trimester with regard to ethnicity. Using ethnic-specific formulas for expected femur length can have a considerable impact on the use of sonographic risk factors for Down syndrome screening. Further data are required for use of femur length as a screening tool in the genetic sonogram.
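A sketch of deriving an expected-femur-length formula from biparietal diameter by simple regression, as described above; the data points below are fabricated placeholders, not study values.

```python
# Fit FL ~ BPD by least squares, then compute each case's deviation from expectation.
import numpy as np

bpd_mm = np.array([30.0, 35.0, 40.0, 45.0, 50.0])   # hypothetical BPD values
fl_mm = np.array([17.0, 21.0, 25.5, 29.0, 33.5])    # hypothetical femur lengths

slope, intercept = np.polyfit(bpd_mm, fl_mm, 1)

def expected_fl(bpd):
    return slope * bpd + intercept

# "variance from the expected femur length" in the abstract's sense:
# measured minus expected, per case
deviation = fl_mm - expected_fl(bpd_mm)
print(slope, intercept, deviation.mean())
```

Ethnic-specific formulas would simply repeat this fit within each group, as the abstract describes.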
NASA Astrophysics Data System (ADS)
Pfeifle, Mark; Ma, Yong-Tao; Jasper, Ahren W.; Harding, Lawrence B.; Hase, William L.; Klippenstein, Stephen J.
2018-05-01
Ozonolysis produces chemically activated carbonyl oxides (Criegee intermediates, CIs) that are either stabilized or decompose directly. This branching has an important impact on atmospheric chemistry. Prior theoretical studies have employed statistical models for energy partitioning to the CI arising from dissociation of the initially formed primary ozonide (POZ). Here, we used direct dynamics simulations to explore this partitioning for decomposition of c-C2H4O3, the POZ in ethylene ozonolysis. A priori estimates for the overall stabilization probability were then obtained by coupling the direct dynamics results with master equation simulations. Trajectories were initiated at the concerted cycloreversion transition state, as well as the second transition state of a stepwise dissociation pathway, both leading to a CI (H2COO) and formaldehyde (H2CO). The resulting CI energy distributions were incorporated in master equation simulations of CI decomposition to obtain channel-specific stabilized CI (sCI) yields. Master equation simulations of POZ formation and decomposition, based on new high-level electronic structure calculations, were used to predict yields for the different POZ decomposition channels. A non-negligible contribution of stepwise POZ dissociation was found, and new mechanistic aspects of this pathway were elucidated. By combining the trajectory-based channel-specific sCI yields with the channel branching fractions, an overall sCI yield of (48 ± 5)% was obtained. Non-statistical energy release was shown to measurably affect sCI formation, with statistical models predicting significantly lower overall sCI yields (˜30%). Within the range of experimental literature values (35%-54%), our trajectory-based calculations favor those clustered at the upper end of the spectrum.
NASA Technical Reports Server (NTRS)
Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.
1990-01-01
This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running the PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of the shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants, are also described.
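A minimal sketch of two-parameter Weibull strength-parameter estimation by maximum likelihood, in the spirit of the estimation PC-CARES performs; the strength data are synthetic and scipy stands in for the original program.

```python
# Fit shape (Weibull modulus) and scale (characteristic strength) for a
# complete, uncensored sample; loc is pinned at zero for the two-parameter form.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
strengths = stats.weibull_min.rvs(c=10.0, scale=400.0, size=30, random_state=rng)

shape, loc, scale = stats.weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus m ~ {shape:.2f}, characteristic strength ~ {scale:.1f} MPa")
```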
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vesna, V. A.; Gledenov, Yu. M.; Nesvizhevsky, V. V., E-mail: nesvizhevsky@ill.eu
The paper presents results of preliminary measurements of the left–right asymmetry in integral spectra of γ-quanta emitted in the interaction of polarized thermal neutrons with nuclei. These results indicate that, for all cases of measured statistically significant P-odd asymmetry, the left–right asymmetry coefficient is much smaller than the P-odd asymmetry coefficient. This observation is not consistent with the predictions of theoretical calculations.
Lee, Tai-Sung; Hu, Yuan; Sherborne, Brad; Guo, Zhuyan; York, Darrin M
2017-07-11
We report the implementation of the thermodynamic integration method on the pmemd module of the AMBER 16 package on GPUs (pmemdGTI). The pmemdGTI code typically delivers over 2 orders of magnitude of speed-up relative to a single CPU core for the calculation of ligand-protein binding affinities with no statistically significant numerical differences and thus provides a powerful new tool for drug discovery applications.
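pmemdGTI itself is GPU production code; purely as a rough illustration of the thermodynamic-integration arithmetic it accelerates, the quadrature below integrates an assumed mean-gradient profile over the coupling parameter. The lambda schedule and gradient values are invented numbers, not pmemdGTI output.

```python
# Delta G ~ integral over lambda of <dU/dlambda>, here by the trapezoidal rule.
import numpy as np

lam = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])
dudl = np.array([12.4, 9.8, 5.1, 1.2, -2.9, -7.5, -9.8])  # kcal/mol, assumed

delta_g = float(np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam)))
print(f"Delta G ~ {delta_g:.2f} kcal/mol")
```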
Mandibular trabecular bone as fracture indicator in 80-year-old men and women.
Hassani-Nejad, Azar; Ahlqwist, Margareta; Hakeberg, Magnus; Jonasson, Grethe
2013-12-01
The objective of the present study was to compare assessments of the mandibular bone as fracture risk indicators in 277 men and women. The mandibular trabecular bone was evaluated in periapical radiographs, using a visual index, as dense, mixed dense and sparse, or sparse. Bone texture was analysed using a computer-based method in which the number of transitions from trabeculae to intertrabecular spaces was calculated. The sum of the sizes and intensities of the spaces between the trabeculae was calculated using Jaw-X software. Women had a statistically significantly greater number of fractures and a higher frequency of sparse mandibular bone. The OR for having suffered a fracture with visually sparse trabecular bone was higher for the male group (OR = 5.55) than for the female group (OR = 3.35). For bone texture as an indicator of previous fracture, the OR was significant for the female group (OR = 2.61) but not for the male group, whereas the Jaw-X calculations did not differentiate between fractured and non-fractured groups. In conclusion, all bone-quality assessments showed that women had a higher incidence of sparse trabecular bone than men. Only the methods of visual assessment and trabecular texture were significantly correlated with previous bone fractures. © 2013 Eur J Oral Sci.
Gunasekara, Chathura; Zhang, Kui; Deng, Wenping; Brown, Laura
2018-01-01
Despite their important roles, the regulators of most metabolic pathways and biological processes remain elusive, and methods for identifying them are intensively sought after. We developed a novel algorithm called triple-gene mutual interaction (TGMI) for identifying these regulators using high-throughput gene expression data. It first calculated the regulatory interactions among triple gene blocks (two pathway genes and one transcription factor (TF)) using conditional mutual information, and then identified significantly interacting triple genes using a novel mutual interaction measure (MIM), which was substantiated to reflect the strength of the regulatory interactions within each triple gene block. TGMI calculated the MIM for each triple gene block and then examined its statistical significance using the bootstrap. Finally, the frequencies of all TFs present in all significantly interacting triple gene blocks were calculated and ranked. We showed that the TFs with higher frequencies were usually genuine pathway regulators upon evaluating multiple pathways in plants, animals and yeast. Comparison of TGMI with several other algorithms demonstrated its higher accuracy. Therefore, TGMI will be a valuable tool that can help biologists to identify regulators of metabolic pathways and biological processes from the rapidly growing volume of high-throughput gene expression data in public repositories. PMID:29579312
Estimation of the brain stem volume by stereological method on magnetic resonance imaging.
Erbagci, Hulya; Keser, Munevver; Kervancioglu, Selim; Kizilkan, Nese
2012-11-01
Neuron loss that occurs in some neurodegenerative diseases can lead to volume alterations by causing atrophy in the brain stem. The aim of this study was to determine the brain stem volume and the ratio of brain stem volume to total brain volume in relation to gender and age, using the new Stereo Investigator system in normal subjects. For this purpose, MR images of 72 individuals with no pathologic condition were evaluated. The total brain volumes of females and males were calculated as 966.81 ± 77.44 and 1,074.06 ± 111.75 cm3, respectively. Brain stem volumes of females and males were determined as 18.99 ± 2.36 and 22.05 ± 4.01 cm3, respectively. The ratios of brain stem volume to total brain volume were 1.96 ± 0.17 in females and 2.05 ± 0.29 in males. The total brain and brain stem volumes were statistically significantly smaller in females. Among individuals aged between 20 and 40 years, changes in total brain and brain stem volumes with aging were not statistically significant. As a result, we believe that the measurement of brain stem volume with an objective and efficient calculation method will contribute to the early diagnosis of neurodegenerative diseases, as well as to determining the rate of disease progression and the outcomes of treatment.
[Range of Hip Joint Motion and Weight of Lower Limb Function under 3D Dynamic Marker].
Xia, Q; Zhang, M; Gao, D; Xia, W T
2017-12-01
To explore the range of the reasonable weight coefficient of the hip joint in lower limb function. With the hip joints of healthy volunteers under normal conditions or fixed in three different positions (functional, flexed and extension positions), the movements of the lower limbs were recorded by the LUKOtronic motion capture and analysis system. The degree of lower limb function loss was calculated using the Fugl-Meyer lower limb function assessment form when the hip joints were fixed in the aforementioned positions. One-way analysis of variance and Tamhane's T2 method were used to perform the statistical analysis and to calculate the range of the reasonable weight coefficient of the hip joint. There were significant differences between the degree of lower limb function loss with the hip joints fixed in the flexed and extension positions and that in the functional position, while the differences between the flexed and extension positions were not statistically significant. At the 95% confidence interval, the reasonable weight coefficient of the hip joint in lower limb function was between 61.05% and 73.34%. Besides confirming the reasonable weight coefficient, the effects of functional and non-functional positions on the degree of lower limb function loss should also be considered in the assessment of hip joint function loss. Copyright© by the Editorial Department of Journal of Forensic Medicine
Effect of long-term proton pump inhibitor administration on gastric mucosal atrophy: A meta-analysis
Li, Zhong; Wu, Cong; Li, Ling; Wang, Zhaoming; Xie, Haibin; He, Xiaozhou; Feng, Jin
2017-01-01
Background/Aims: Proton pump inhibitors (PPIs) are widely used for the treatment of acid-related gastrointestinal diseases. Recently, some studies have reported that PPIs can alter the gastric mucosal architecture; however, the relationship remains controversial. This meta-analysis was designed to quantify the association between long-term PPI administration and gastric atrophy. Materials and Methods: A PubMed search was conducted to identify studies using the keywords proton pump inhibitors or PPI and gastric atrophy or atrophic gastritis; the timeframe of publication searched was up to May 2016. Heterogeneity among studies was tested with the Q test and quantified with the I2 statistic; odds ratios (OR) and 95% confidence intervals (CI) were calculated, and P values were regarded as statistically significant when <0.05. Results: We identified 13 studies that included 1465 patients under long-term PPI therapy and 1603 controls, with a total gastric atrophy rate of 14.50%. There was a statistically significantly higher presence of gastric atrophy in the PPI group (15.84%) than in the control group (13.29%) (OR: 1.55, 95% CI: 1.00–2.41). Conclusions: The pooled data suggest that long-term PPI use is associated with increased rates of gastric atrophy. Large-scale multicenter studies should be conducted to further investigate the relationship between acid suppressants and precancerous diseases. PMID:28721975
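One common way to pool study odds ratios into a summary OR and CI of the kind reported above is inverse-variance (Woolf) weighting of the log-odds ratios; the 2x2 counts below are fabricated placeholders, and the paper does not state that this exact pooling model was used.

```python
# Fixed-effect inverse-variance pooling of per-study log-odds ratios.
import numpy as np

# each row: (atrophy_PPI, no_atrophy_PPI, atrophy_control, no_atrophy_control)
studies = np.array([[20, 80, 15, 95], [33, 117, 25, 125], [11, 59, 9, 71]])

log_or = np.log((studies[:, 0] * studies[:, 3]) / (studies[:, 1] * studies[:, 2]))
var = (1.0 / studies).sum(axis=1)          # Woolf variance of each log-OR
w = 1.0 / var
pooled = (w * log_or).sum() / w.sum()
se = np.sqrt(1.0 / w.sum())
print(np.exp(pooled), np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se))
```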
Calculation of the detection limit in radiation measurements with systematic uncertainties
NASA Astrophysics Data System (ADS)
Kirkpatrick, J. M.; Russ, W.; Venkataraman, R.; Young, B. M.
2015-06-01
The detection limit (LD) or Minimum Detectable Activity (MDA) is an a priori evaluation of assay sensitivity intended to quantify the suitability of an instrument or measurement arrangement for the needs of a given application. Traditional approaches as pioneered by Currie rely on Gaussian approximations to yield simple, closed-form solutions, and neglect the effects of systematic uncertainties in the instrument calibration. These approximations are applicable over a wide range of applications, but are of limited use in low-count applications, when high confidence values are required, or when systematic uncertainties are significant. One proposed modification to the Currie formulation attempts to account for systematic uncertainties within a Gaussian framework. We have previously shown that this approach results in an approximation formula that works best only for small values of the relative systematic uncertainty, for which the modification of Currie's method is the least necessary, and that it significantly overestimates the detection limit or gives infinite or otherwise non-physical results for larger systematic uncertainties, where such a correction would be the most useful. We have developed an alternative approach for calculating detection limits based on realistic statistical modeling of the counting distributions which accurately represents statistical and systematic uncertainties. Instead of a closed-form solution, numerical and iterative methods are used to evaluate the result. Accurate detection limits can be obtained by this method for the general case.
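For reference, the traditional Gaussian detection limit attributed to Currie, which the paper generalizes, can be sketched as follows; the paired-blank form of the formula and the counts, efficiency and live-time values are illustrative assumptions.

```python
# Currie's closed-form detection limit: L_D = k^2 + 2 k sqrt(2 B)
# for a paired blank, with k = 1.645 at ~95% confidence.
import math

def currie_ld_counts(background_counts: float, k: float = 1.645) -> float:
    return k * k + 2.0 * k * math.sqrt(2.0 * background_counts)

def mda_bq(background_counts: float, efficiency: float, yield_: float,
           live_time_s: float) -> float:
    # convert the count-domain limit into activity units
    return currie_ld_counts(background_counts) / (efficiency * yield_ * live_time_s)

print(mda_bq(background_counts=400.0, efficiency=0.25, yield_=0.85, live_time_s=3600.0))
```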
Park, Jong In; Park, Jong Min; Kim, Jung-In; Park, So-Yeon; Ye, Sung-Joon
2015-12-01
The aim of this study was to investigate the sensitivity of the gamma-index method according to various gamma criteria for volumetric modulated arc therapy (VMAT). Twenty head and neck (HN) and twenty prostate VMAT plans were retrospectively selected for this study. Both global and local 2D gamma evaluations were performed with criteria of 3%/3 mm, 2%/2 mm, 1%/2 mm and 2%/1 mm. In this study, the global and local gamma-index calculated the differences in doses relative to the maximum dose and the dose at the current measurement point, respectively. Using log files acquired during delivery, the differences in parameters at every control point between the VMAT plans and the log files were acquired. The differences in dose-volumetric parameters between VMAT plans reconstructed from the log files and the original VMAT plans were calculated. Spearman's rank correlation coefficients (rs) were calculated between the passing rates and those differences. Considerable, statistically significant correlations with the MLC position differences were observed for the global 1%/2 mm, local 1%/2 mm and local 2%/1 mm criteria (rs = -0.712, -0.628 and -0.581, respectively). The numbers of rs values with statistical significance between the passing rates and the changes in dose-volumetric parameters were largest for the global 2%/2 mm (n = 16), global 2%/1 mm (n = 15) and local 2%/1 mm (n = 13) criteria. The local gamma-index method with 2%/1 mm criteria generally showed higher sensitivity for detecting deviations between a VMAT plan and its delivery. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
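A simplified 1D global gamma-index sketch may help make the passing-rate metric concrete; real evaluations are 2D and restrict the search to a neighbourhood of each point, and the dose profiles below are synthetic.

```python
# gamma(i) = min over evaluated points of sqrt((dx/DTA)^2 + (dD/DD)^2);
# a point passes when gamma <= 1. Global normalization uses the max dose.
import numpy as np

def gamma_1d(ref, ev, spacing_mm, dd_pct=2.0, dta_mm=2.0):
    x = np.arange(len(ref)) * spacing_mm
    dd_norm = dd_pct / 100.0 * ref.max()
    g = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        g[i] = np.sqrt(((x - xi) / dta_mm) ** 2
                       + ((ev - di) / dd_norm) ** 2).min()
    return g

ref = np.exp(-((np.arange(100) - 50) / 15.0) ** 2)
ev = np.roll(ref, 1) * 1.01                    # shifted, slightly scaled copy
g = gamma_1d(ref, ev, spacing_mm=1.0)
print(f"passing rate: {100.0 * (g <= 1).mean():.1f}%")
```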
Heath, Anna; Manolopoulou, Ioanna; Baio, Gianluca
2016-10-15
The Expected Value of Perfect Partial Information (EVPPI) is a decision-theoretic measure of the 'cost' of parametric uncertainty in decision making, used principally in health economic decision making. Despite this decision-theoretic grounding, the uptake of EVPPI calculations in practice has been slow. This is in part due to the prohibitive computational time required to estimate the EVPPI via Monte Carlo simulations. However, recent developments have demonstrated that the EVPPI can be estimated by non-parametric regression methods, which have significantly decreased the computation time required to approximate the EVPPI. Under certain circumstances, high-dimensional Gaussian Process (GP) regression is suggested, but this can still be prohibitively expensive. Applying fast computation methods developed in spatial statistics using Integrated Nested Laplace Approximations (INLA) and projecting from a high-dimensional into a low-dimensional input space allows us to decrease the computation time for fitting these high-dimensional GPs, often substantially. We demonstrate that the EVPPI calculated using our method for GP regression is in line with the standard GP regression method and that, despite the apparent methodological complexity of this new method, R functions are available in the package BCEA to implement it simply and efficiently. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Feature selection from a facial image for distinction of sasang constitution.
Koo, Imhoi; Kim, Jong Yeol; Kim, Myoung Geun; Kim, Keun Ho
2009-09-01
Recently, oriental medicine has received attention for providing personalized medicine through consideration of the unique nature and constitution of individual patients. With the eventual goal of globalization, the current trend in oriental medicine research is standardization by adopting western scientific methods, which could represent a scientific revolution. The purpose of this study is to establish methods for finding statistically significant features in a facial image with respect to distinguishing constitution, and to show the meaning of those features. From facial photo images, facial elements are analyzed in terms of distances, angles and distance ratios, for which there are 1,225, 61,250 and 749,700 features, respectively. Due to the very large number of facial features, it is quite difficult to determine truly meaningful features. We suggest a process for the efficient analysis of facial features, including the removal of outliers, control for missing data to guarantee data confidence, and calculation of statistical significance by applying ANOVA. We show the statistical properties of the selected features according to different constitutions using the nine distance, 10 angle and 10 distance-ratio features that are finally established. Additionally, the Sasang constitutional meaning of the selected features is shown here.
Reynolds, Richard J; Fenster, Charles B
2008-05-01
Pollinator importance, the product of visitation rate and pollinator effectiveness, is a descriptive parameter of the ecology and evolution of plant-pollinator interactions. Naturally, sources of its variation should be investigated, but the SE of pollinator importance has never been properly reported. Here, a Monte Carlo simulation study and a result from mathematical statistics on the variance of the product of two random variables are used to estimate the mean and confidence limits of pollinator importance for three visitor species of the wildflower Silene caroliniana. Both methods provided similar estimates of mean pollinator importance and its interval when the sample sizes of the visitation and effectiveness datasets were comparatively large. These approaches allowed us to determine that bumblebee importance was significantly greater than that of the clearwing hawkmoth, which in turn was significantly greater than that of the beefly. The methods could be used to statistically quantify temporal and spatial variation in the pollinator importance of particular visitor species, and may be extended to estimate the variance of products of more than two random variables. However, unless the distribution function of the resulting statistic is known, the simulation approach is preferable for calculating the parameter's confidence limits.
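The mathematical-statistics result the abstract alludes to is, for independent X and Y, Var(XY) = Var(X)Var(Y) + Var(X)E[Y]^2 + Var(Y)E[X]^2. The sketch below checks that formula against simulation; the example moments are arbitrary placeholders, not the Silene caroliniana data.

```python
# Exact variance of a product of two independent random variables,
# verified against a Monte Carlo estimate.
import numpy as np

def var_product(mu_x, var_x, mu_y, var_y):
    return var_x * var_y + var_x * mu_y ** 2 + var_y * mu_x ** 2

rng = np.random.default_rng(2)
x = rng.normal(3.0, 0.5, 1_000_000)   # e.g., visitation rate
y = rng.normal(0.8, 0.2, 1_000_000)   # e.g., per-visit effectiveness
print(var_product(3.0, 0.25, 0.8, 0.04), np.var(x * y))  # ~0.53 both ways
```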
Tooth-size discrepancy: A comparison between manual and digital methods
Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge
2014-01-01
Introduction: Technological advances in dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective: This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and to compare these measurements with those obtained from plaster models. Material and Methods: To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models, acquired with a 3Shape R700T scanner, were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results: Data were statistically assessed using the Friedman test and no statistically significant differences were found between the methods (P > 0.05), except for the linear digital method, whose values showed a slight, statistically non-significant deviation. Conclusions: Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529
Three-Dimensional Eyeball and Orbit Volume Modification After LeFort III Midface Distraction.
Smektala, Tomasz; Nysjö, Johan; Thor, Andreas; Homik, Aleksandra; Sporniak-Tutak, Katarzyna; Safranow, Krzysztof; Dowgierd, Krzysztof; Olszewski, Raphael
2015-07-01
The aim of our study was to evaluate orbital volume modification with LeFort III midface distraction in patients with craniosynostosis and its influence on eyeball volume and axial diameter modification. Orbital volume was assessed by a semiautomatic segmentation method based on deformable surface models and on 3-dimensional (3D) interaction with haptics. The eyeball volumes and diameters were automatically calculated after manual segmentation of computed tomographic scans with 3D Slicer software. The mean, minimal, and maximal differences as well as the standard deviation and intraclass correlation coefficient (ICC) for intraobserver and interobserver measurement reliability were calculated. The Wilcoxon signed rank test was used to compare measured values before and after surgery. P < 0.05 was considered statistically significant. Intraobserver and interobserver ICCs for haptic-aided semiautomatic orbital volume measurements were 0.98 and 0.99, respectively. The intraobserver and interobserver ICC values for manual segmentation of the eyeball volume were 0.87 and 0.86, respectively. The orbital volume increased significantly after surgery: 30.32% (mean, 5.96 mL) for the left orbit and 31.04% (mean, 6.31 mL) for the right orbit. The mean increase in eyeball volume was 12.3%. The mean increases in the eyeball axial dimensions were 7.3%, 9.3%, and 4.4% for the X-, Y-, and Z-axes, respectively. The Wilcoxon signed rank test showed that the changes in eyeball volume, as well as in the diameters along the X- and Y-axes, were statistically significant. Midface distraction in patients with syndromic craniosynostosis results in a significant increase (P < 0.05) in the orbit and eyeball volumes. The 2 methods (haptic-aided semiautomatic segmentation and manual 3D Slicer segmentation) are reproducible techniques for orbit and eyeball volume measurements.
Dynamic evolution of nearby galaxy clusters
NASA Astrophysics Data System (ADS)
Biernacka, M.; Flin, P.
2011-06-01
A study of the evolution of 377 rich ACO clusters with redshift z<0.2 is presented. The data concerning galaxies in the investigated clusters were obtained using FOCAS packages applied to Digital Sky Survey I. The 377 galaxy clusters constitute a statistically uniform sample to which visual galaxy/star reclassifications were applied. Cluster shape within 2.0 h^-1 Mpc from the adopted cluster centre (the mean and the median of all galaxy coordinates, the position of the brightest and of the third brightest galaxy in the cluster) was determined through its ellipticity, calculated using two methods: the covariance ellipse method (hereafter CEM) and the method based on Minkowski functionals (hereafter MFM). We investigated the dependence of ellipticity on the radius of the circular annuli in which ellipticity was calculated. This was realized by varying the radius from 0.5 to 2 Mpc in steps of 0.25 Mpc. By performing Monte Carlo simulations, we generated clusters to which the two ellipticity methods were applied. We found that the covariance ellipse method works better than the method based on Minkowski functionals. We also found that the ellipticity distributions differ between the methods used. Using the ellipticity-redshift relation, we investigated the possibility of cluster evolution in the low-redshift Universe. A correlation of cluster ellipticities with redshifts is undoubtedly an indicator of structural evolution. Using Student's t statistic, we found a statistically significant correlation between ellipticity and redshift at the significance level of α = 0.95. In one of the two shape determination methods we found that ellipticity grew with redshift, while the other method gave the opposite result. Monte Carlo simulations showed that only ellipticities calculated at the distance of 1.5 Mpc from the cluster centre in the Minkowski functional method are robust enough to be taken into account, but for that radius we did not find any relation between e and z. Since CEM pointed towards the existence of the e(z) relation, we conclude that such an effect is real though rather weak. A detailed study of the e(z) relation showed that the observed relation is nonlinear, and the number of elongated structures grows rapidly for z>0.14.
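A sketch of the covariance ellipse method (CEM) idea: cluster ellipticity from the eigenvalues of the member-galaxy position covariance matrix. The galaxy coordinates are synthetic, and the e = 1 - b/a convention is an assumption, not necessarily the paper's exact definition.

```python
# Ellipticity from the principal axes of the 2D position covariance matrix.
import numpy as np

def cem_ellipticity(xy: np.ndarray) -> float:
    cov = np.cov(xy, rowvar=False)
    eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]   # semi-major^2, semi-minor^2
    return 1.0 - np.sqrt(eigvals[1] / eigvals[0])

rng = np.random.default_rng(3)
members = rng.multivariate_normal([0, 0], [[2.0, 0.6], [0.6, 0.8]], size=200)
print(f"e ~ {cem_ellipticity(members):.2f}")
```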
2000 Iowa crash facts : a summary of motor vehicle crash statistics on Iowa roadways
DOT National Transportation Integrated Search
2000-01-01
All statistics are gathered and calculated by the Iowa Department of Transportation's Office of Driver Services. National statistics are obtained from Traffic Safety Facts 2000, published by the U.S. Department of Transportation's National...
Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic
ERIC Educational Resources Information Center
Satorra, Albert; Bentler, Peter M.
2010-01-01
A scaled difference test statistic $\tilde{T}_d$ that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic $\tilde{T}_d$ is asymptotically equivalent to the scaled difference test statistic $\bar{T}_d$…
Descriptive and inferential statistical methods used in burns research.
Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars
2010-05-01
Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The six most common tests used (Student's t-test (53%), analysis of variance/covariance (33%), chi-square test (27%), Wilcoxon and Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals in the fields of biostatistics and epidemiology when using more advanced statistical techniques. Copyright 2009 Elsevier Ltd and ISBI. All rights reserved.
Riffel, Philipp; Michaely, Henrik J; Morelli, John N; Paul, Dominik; Kannengiesser, Stephan; Schoenberg, Stefan O; Haneder, Stefan
2015-04-01
The purpose of this study was to evaluate the feasibility and technical quality of a zoomed three-dimensional (3D) turbo spin-echo (TSE) sampling perfection with application optimized contrasts using different flip-angle evolutions (SPACE) sequence of the lumbar spine. In this prospective feasibility study, nine volunteers underwent a 3-T magnetic resonance examination of the lumbar spine including 1) a conventional 3D T2-weighted (T2w) SPACE sequence with generalized autocalibrating partially parallel acquisition technique acceleration factor 2 and 2) a zoomed 3D T2w SPACE sequence with a reduced field of view (reduction factor 2). Images were evaluated with regard to image sharpness, signal homogeneity, and the presence of artifacts by two experienced radiologists. For quantitative analysis, signal-to-noise ratio (SNR) values were calculated. Image sharpness of anatomic structures was statistically significantly greater with zoomed SPACE (P < .0001), whereas signal homogeneity was statistically significantly greater with conventional SPACE (P = .0003). There were no statistically significant differences in the extent of artifacts. Acquisition times were 8:20 minutes for conventional SPACE and 6:30 minutes for zoomed SPACE. Readers 1 and 2 selected zoomed SPACE as the preferred sequence in five of nine cases; in two of nine cases, both sequences were rated as equally preferred by both readers. SNR values were statistically significantly greater with conventional SPACE. In comparison to the conventional SPACE sequence, zoomed SPACE imaging of the lumbar spine provides sharper images in conjunction with a 25% reduction in acquisition time. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
Chaikh, Abdulhamid; Balosso, Jacques
2016-12-01
This study proposes a statistical process to compare different treatment plans derived from different irradiation techniques or different treatment phases. The approach aims to provide arguments for discussion about the impact on clinical results of any condition able to significantly alter dosimetric or ballistic data. The principles of the statistical investigation are presented in the framework of a clinical example based on 40 fields of radiotherapy for lung cancers. Two treatment plans were generated for each patient, introducing a change in dose distribution due to variation of the lung density correction. The data from the 2D gamma index (γ), including the pixels having γ ≤ 1, were used to determine the capability index (Cp) and the acceptability index (Cpk) of the process. To measure the strength of the relationship between the γ passing rates and the Cp and Cpk indices, Spearman's rank non-parametric test was used to calculate P values. The comparison between reference and tested plans showed that 95% of pixels had γ ≤ 1 with criteria (6%, 6 mm). The values of the Cp and Cpk indices were lower than one, showing a significant dose difference. The data showed a strong correlation between the γ passing rates and the indices, with correlation coefficients above 0.8. The statistical analysis using Cp and Cpk shows the significance of dose differences resulting from two plans in radiotherapy. These indices can be used in adaptive radiotherapy to measure the difference between the initial plan and the daily delivered plan. Significant changes in dose distribution could raise the question of whether to continue treating the patient with the initial plan or whether adjustments are needed.
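The standard process-capability arithmetic behind Cp and Cpk can be sketched as follows; the specification limits placed on the per-pixel gamma values are assumed for illustration, not taken from the paper.

```python
# Cp = (USL - LSL) / 6*sigma;  Cpk = min(USL - mu, mu - LSL) / 3*sigma.
import numpy as np

def cp_cpk(x: np.ndarray, lsl: float, usl: float):
    mu, sigma = x.mean(), x.std(ddof=1)
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3.0 * sigma)
    return cp, cpk

rng = np.random.default_rng(4)
gamma_vals = rng.normal(0.55, 0.25, 5_000).clip(0)   # fake per-pixel gamma values
print(cp_cpk(gamma_vals, lsl=0.0, usl=1.0))          # indices < 1 flag differences
```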
Tankevicius, Gediminas; Lankaite, Doanata; Krisciunas, Aleksandras
2013-08-01
The lack of knowledge about isometric ankle testing indicates the need for research in this area. The objectives were to assess test-retest reliability and to determine the optimal position for isometric ankle-eversion and -inversion testing. Test-retest reliability study. Isometric ankle eversion and inversion were assessed in 3 different dynamometer foot-plate positions: 0°, 7°, and 14° of inversion. Two maximal repetitions were performed at each angle. Both limbs were tested (40 ankles in total). The test was performed twice, with a period of 7 d between sessions. University hospital. The study was carried out on 20 healthy athletes with no history of ankle sprains. Reliability was assessed using the intraclass correlation coefficient (ICC2,1); minimal detectable change (MDC) was calculated using a 95% confidence interval. A paired t test was used to detect statistically significant changes, and P < .05 was considered statistically significant. Eversion and inversion peak torques showed high ICCs at all 3 angles (ICC values .87-.96, MDC values 3.09-6.81 Nm). Eversion peak torque was smallest when testing at the 0° angle and gradually increased, reaching maximum values at the 14° angle. The increase in eversion peak torque was statistically significant at 7° and 14° of inversion. Inversion peak torque showed the opposite pattern: it was smallest when measured at the 14° angle and increased at the other 2 angles; statistically significant changes were seen only between measures taken at 0° and 14°. Isometric eversion and inversion testing using the Biodex 4 Pro system is a reliable method. The authors suggest that the angle of 7° of inversion is the best for isometric eversion and inversion testing.
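For readers wanting to reproduce this kind of reliability analysis, here is a minimal Python sketch of ICC(2,1) (two-way random effects, absolute agreement, single measure) and the MDC at the 95% confidence level, MDC95 = 1.96·√2·SEM with SEM = SD·√(1 − ICC); the helper names are ours, and using the pooled SD for the SEM is a simplification:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.
    data: (n_subjects, k_sessions) array of peak torques."""
    n, k = data.shape
    grand = data.mean()
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((data - grand) ** 2).sum() \
        - (n - 1) * ms_rows - (k - 1) * ms_cols
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def mdc95(data, icc):
    """MDC95 = 1.96 * sqrt(2) * SEM, with SEM = SD * sqrt(1 - ICC)."""
    sem = data.std(ddof=1) * np.sqrt(1.0 - icc)
    return 1.96 * np.sqrt(2.0) * sem

torques = np.random.default_rng(0).normal(20, 4, (20, 2))  # toy test-retest data
icc = icc_2_1(torques)
print(round(icc, 3), round(mdc95(torques, icc), 2))
```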
An analysis of science versus pseudoscience
NASA Astrophysics Data System (ADS)
Hooten, James T.
2011-12-01
This quantitative study identified distinctive features in archival datasets commissioned by the National Science Foundation (NSF) for Science and Engineering Indicators reports. The dependent variables included education level and scores for science fact knowledge, science process knowledge, and pseudoscience beliefs. The dependent variables were aggregated into nine NSF-defined geographic regions and examined for the years 2004 and 2006. The variables were also examined over all years available in the dataset. Descriptive statistics were determined, and tests for normality and homogeneity of variances were performed using the Statistical Package for the Social Sciences. Analysis of Variance was used to test for statistically significant differences between the nine geographic regions for each of the four dependent variables, at a significance level of 0.05. Tukey post-hoc analysis was used to assess the practical significance of differences between regions. Post-hoc power analysis using G*Power was used to calculate the probability of Type II errors. Tests for correlations across all years of the dependent variables were also performed, with Pearson's r indicating the strength of the relationship between the dependent variables. Small to medium differences in science literacy and education level were observed between many of the nine U.S. geographic regions. The most significant differences occurred when the West South Central region was compared to the New England and Pacific regions. Belief in pseudoscience appeared to be distributed evenly across all U.S. geographic regions. Education level was a strong indicator of science literacy regardless of a respondent's region of residence. Recommendations for further study include more in-depth investigation to uncover the nature of the relationship between education level and belief in pseudoscience.
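A minimal sketch of the omnibus ANOVA plus Tukey post-hoc pipeline described above, using SciPy and statsmodels; the region names and score distributions below are placeholders, not NSF data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
regions = {                       # hypothetical science-literacy scores
    "New England": rng.normal(70, 8, 120),
    "Pacific": rng.normal(69, 8, 150),
    "W South Central": rng.normal(64, 8, 110),
}

f_stat, p_val = stats.f_oneway(*regions.values())       # omnibus ANOVA
scores = np.concatenate(list(regions.values()))
labels = np.repeat(list(regions), [len(v) for v in regions.values()])
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
print(pairwise_tukeyhsd(scores, labels, alpha=0.05))     # post-hoc pairs
```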
NASA Astrophysics Data System (ADS)
Yerlikaya, Emrah; Karageçili, Hasan; Aydin, Ruken Zeynep
2016-04-01
Obesity is a key risk factor for the development of hyperglycemia, hypertension, hyperlipidemia and insulin resistance, collectively referred to as metabolic disorders. Diabetes mellitus, a metabolic disorder, is associated with hyperglycemia and altered metabolism of lipids, carbohydrates and proteins. The minimum defining characteristic used to identify diabetes mellitus is chronic, substantiated elevation of the circulating glucose concentration. This study aimed to analyze the body composition of obese and obese+diabetic patients. We studied data taken from three independent groups with a body composition analyzer. The instrument calculates body parameters such as body fat ratio, body fat mass, fat-free mass, estimated muscle mass and basal metabolic rate using Bioelectrical Impedance Analysis, calibrated against Dual-Energy X-ray Absorptiometry reference data. All patients and healthy subjects were recruited at the Siirt University health center (Medico), where their data were collected. The Statistical Package for the Social Sciences version 21 was used for descriptive data analysis. Comparing the three groups, we found statistically significant differences between the obese, obese+diabetic and control group values. ANOVA and Tukey tests were used to analyze differences between groups and to make multiple comparisons, and t tests were used to analyze differences between genders. We observed statistically significant differences in age and mineral amount between the diabetic+obese and obese groups. When these patient groups were compared with the control group, most parameters differed significantly. Regarding education level, between illiterate participants and university graduates there were statistically significant differences (p < 0.05) in fat mass (kg), fat percentage, visceral fat, body mass index, water percentage, protein mass percentage and mineral percentage. This difference may in particular result from a sedentary lifestyle.
Quantitative impact of pediatric sinus surgery on facial growth.
Senior, B; Wirtschafter, A; Mai, C; Becker, C; Belenky, W
2000-11-01
To quantitatively evaluate the long-term impact of sinus surgery on paranasal sinus development in the pediatric patient. Longitudinal review of eight pediatric patients treated with unilateral sinus surgery for periorbital or orbital cellulitis with an average follow-up of 6.9 years. Control subjects consisted of two groups, 9 normal adult patients with no computed tomographic evidence of sinusitis and 10 adult patients with scans consistent with sinusitis and a history of sinus-related symptoms extending to childhood. Application of computed tomography (CT) volumetrics, a technique allowing for precise calculation of volumes using thinly cut CT images, to the study and control groups. Paired Student t test analyses of side-to-side volume comparisons in the normal patients, patients with sinusitis, and patients who had surgery revealed no statistically significant differences. Comparisons between the orbital volumes of patients who did and did not have surgery revealed a statistically significant increase in orbital volume in patients who had surgery. Only minimal changes in facial volume measurements have been found, confirming clinical impressions that sinus surgery in children is safe and without significant cosmetic sequelae.
On statistical analysis of factors affecting anthocyanin extraction from Ixora siamensis
NASA Astrophysics Data System (ADS)
Mat Nor, N. A.; Arof, A. K.
2016-10-01
This study focused on designing an experimental model to evaluate the influence of the operative extraction parameters employed for anthocyanin extraction from Ixora siamensis on CIE color measurements (a*, b* and color saturation). Extractions were conducted at temperatures of 30, 55 and 80°C and soaking times of 60, 120 and 180 min, using acidified methanol solvent with trifluoroacetic acid (TFA) contents of 0.5, 1.75 and 3% (v/v). The statistical evaluation was performed by running analysis of variance (ANOVA) and regression calculations to investigate the significance of the generated model. Results show that the generated regression models adequately explain the data variation and significantly represent the actual relationship between the independent variables and the responses. The analysis showed high coefficients of determination (R2) of 0.9687 for a*, 0.9621 for b* and 0.9758 for color saturation, ensuring a satisfactory fit of the developed models to the experimental data. The interaction between TFA content and extraction temperature exhibited the strongest significant influence on the CIE color parameters.
Modelling the Effects of Land-Use Changes on Climate: a Case Study on Yamula DAM
NASA Astrophysics Data System (ADS)
Köylü, Ü.; Geymen, A.
2016-10-01
Dams block the flow of rivers and create artificial water reservoirs, which affect the climate and land-use characteristics of the river basin. In this research, the effect of the large water body impounded by the Yamula Dam in the Kızılırmak Basin on surrounding land use and climate change is analysed. The Mann-Kendall non-parametric statistical test, the Theil-Sen slope method, Inverse Distance Weighting (IDW) and the Soil Conservation Service-Curve Number (SCS-CN) method were integrated for spatial and temporal analysis of the research area. Humidity, temperature, wind speed and precipitation observations collected at 16 weather stations near the Kızılırmak Basin were analyzed, and the resulting statistics were combined with GIS data over the years. An application for the GIS analysis was developed in the Python programming language and integrated with ArcGIS software; the statistical analysis was carried out in the R Project for Statistical Computing and integrated with the developed application. According to the statistical analysis of the extracted time series of meteorological parameters, statistically significant spatiotemporal trends were observed in climate and land-use characteristics. In this study, we demonstrated the effect of a large dam on the local climate of the semi-arid Yamula Dam region.
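Since the abstract names the Mann-Kendall test and the Theil-Sen slope explicitly, a compact Python sketch of both may be useful; the Mann-Kendall implementation below omits the tie correction for brevity, and the temperature series is synthetic:

```python
import numpy as np
from scipy import stats

def mann_kendall(y):
    """Mann-Kendall trend test (no tie correction, for brevity).
    Returns the S statistic, the z score and a two-sided p value."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s = sum(np.sign(y[j] - y[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, 2 * stats.norm.sf(abs(z))

years = np.arange(1990, 2016)                    # synthetic warming series
temps = 0.03 * (years - 1990) + np.random.default_rng(3).normal(0, 0.2, years.size)
slope, intercept, lo, hi = stats.theilslopes(temps, years)  # robust slope
s, z, p = mann_kendall(temps)
print(f"Theil-Sen slope = {slope:.3f}/yr, MK p = {p:.4f}")
```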
Not a Copernican observer: biased peculiar velocity statistics in the local Universe
NASA Astrophysics Data System (ADS)
Hellwing, Wojciech A.; Nusser, Adi; Feix, Martin; Bilicki, Maciej
2017-05-01
We assess the effect of the local large-scale structure on the estimation of two-point statistics of the observed radial peculiar velocities of galaxies. A large N-body simulation is used to examine these statistics from the perspective of random observers as well as 'Local Group-like' observers conditioned to reside in an environment resembling the observed Universe within 20 Mpc. The local environment systematically distorts the shape and amplitude of velocity statistics with respect to ensemble-averaged measurements made by a Copernican (random) observer. The Virgo cluster has the most significant impact, introducing large systematic deviations in all the statistics. For a simple 'top-hat' selection function, an idealized survey extending to ˜160 h-1 Mpc or deeper is needed to completely mitigate the effects of the local environment. Using shallower catalogues leads to systematic deviations of the order of 50-200 per cent depending on the scale considered. For a flat redshift distribution similar to the one of the CosmicFlows-3 survey, the deviations are even more prominent in both the shape and amplitude at all separations considered (≲100 h-1 Mpc). Conclusions based on statistics calculated without taking into account the impact of the local environment should be revisited.
CMB seen through random Swiss Cheese
NASA Astrophysics Data System (ADS)
Lavinto, Mikko; Räsänen, Syksy
2015-10-01
We consider a Swiss Cheese model with a random arrangement of Lemaȋtre-Tolman-Bondi holes in ΛCDM cheese. We study two kinds of holes with radius r_b = 50 h^-1 Mpc, with either an underdense or an overdense centre, called the open and closed case, respectively. We calculate the effect of the holes on the temperature, angular diameter distance and, for the first time in Swiss Cheese models, shear of the CMB. We quantify the systematic shift of the mean and the statistical scatter, and calculate the power spectra. In the open case, the temperature power spectrum is three orders of magnitude below the linear ISW spectrum. It is sensitive to the details of the hole; in the closed case the amplitude is two orders of magnitude smaller. In contrast, the power spectra of the distance and shear are more robust, and agree with perturbation theory and previous Swiss Cheese results. We do not find a statistically significant mean shift in the sky average of the angular diameter distance, and obtain the 95% limit |ΔD_A/D̄_A| ≲ 10^-4. We consider the argument that areas of spherical surfaces are nearly unaffected by perturbations, which is often invoked in light propagation calculations. The closed case is consistent with this at 1σ, whereas in the open case the probability is only 1.4%.
NASA Astrophysics Data System (ADS)
Maffucci, Irene; Hu, Xiao; Fumagalli, Valentina; Contini, Alessandro
2018-03-01
Nwat-MMGBSA is a variant of MM-PB/GBSA based on the inclusion of a number of explicit water molecules that are the closest to the ligand in each frame of a molecular dynamics trajectory. This method demonstrated improved correlations between calculated and experimental binding energies in both protein-protein interactions and ligand-receptor complexes, in comparison to standard MM-GBSA. A protocol optimization, aimed at maximizing efficacy and efficiency, is discussed here considering penicillopepsin, HIV1-protease, and BCL-XL as test cases. Calculations were performed in triplicate on both classic HPC environments and on standard workstations equipped with a GPU card, evidencing no statistical differences in the results. No relevant differences in correlation to experiment were observed when performing Nwat-MMGBSA calculations on 4 ns or 1 ns trajectories. A fully automatic workflow for structure-based virtual screening, performing everything from library set-up to docking and Nwat-MMGBSA rescoring, was then developed. The protocol was tested against no rescoring or standard MM-GBSA rescoring within a retrospective virtual screening of inhibitors of AmpC β-lactamase and of the Rac1-Tiam1 protein-protein interaction. In both cases, Nwat-MMGBSA rescoring provided a statistically significant increase of 20-30% in the ROC AUCs, compared to docking scoring or to standard MM-GBSA rescoring.
Rapid extraction of image texture by co-occurrence using a hybrid data structure
NASA Astrophysics Data System (ADS)
Clausi, David A.; Zhao, Yongping
2002-07-01
Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey-level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey-level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers since, to achieve preferred computational speeds, the list must be kept sorted. An improvement on the GLCLL is to utilize a grey-level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method was implemented in the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey-level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
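The core idea behind the GLCLL/GLCHS structures, storing and visiting only non-zero co-occurrence probabilities, can be sketched in Python with a hash map (dict); this illustrates the principle, not the authors' C implementation:

```python
import numpy as np
from collections import defaultdict

def cooccurrence_features(img, dx=1, dy=0, levels=64):
    """Texture features from co-occurring grey-level pairs stored in a dict,
    so only non-zero probabilities are kept and visited -- the principle
    behind the linked-list and hash-based structures."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantize
    counts = defaultdict(int)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            counts[(q[y, x], q[y + dy, x + dx])] += 1
    total = sum(counts.values())
    contrast = energy = entropy = 0.0
    for (i, j), c in counts.items():        # iterate non-zero entries only
        p = c / total
        contrast += (i - j) ** 2 * p
        energy += p * p
        entropy -= p * np.log2(p)
    return {"contrast": contrast, "energy": energy, "entropy": entropy}

img = np.random.default_rng(0).integers(0, 256, size=(64, 64))
print(cooccurrence_features(img))
```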
NASA Astrophysics Data System (ADS)
Lopez, Benjamin; Croiset, Nolwenn; Laurence, Gourcy
2014-05-01
Directive 2006/118/EC on the protection of groundwater against pollution and deterioration (a daughter directive of the Water Framework Directive, WFD) asks Member States to identify significant and sustained upward trends in all bodies or groups of bodies of groundwater that are characterised as being at risk in accordance with Annex II to Directive 2000/60/EC. The Directive indicates that the procedure for the identification of significant and sustained upward trends must be based on a statistical method. Moreover, for significant increases in the concentrations of pollutants, trend reversals must be identified. A specific tool, named HYPE, has been developed to help stakeholders working on groundwater trend assessment. The R-encoded tool HYPE provides statistical analysis of groundwater time series. It follows several studies on the relevance of statistical tests for groundwater data series (Lopez et al., 2011) and other case studies on the topic (Bourgine et al., 2012). It integrates the most powerful and robust statistical tests for hydrogeological applications. HYPE is linked to the French national database on groundwater data (ADES), so monitoring data gathered by the Water Agencies can be processed directly. HYPE has two main modules: a characterisation module, which allows users to visualize time series, calculates the main statistical characteristics and provides graphical representations; and a trend module, which identifies significant breaks, trends and trend reversals in time series, providing result tables and graphical representations. Additional modules are implemented to identify regional and seasonal trends and to sample time series in a relevant way. HYPE was used successfully in 2012 by the French Water Agencies to satisfy the requirements of the WFD concerning the characterization of groundwater bodies' qualitative status and the evaluation of the risk of non-achievement of good status. Bourgine, B. et al. 2012, Ninth International Geostatistics Congress, Oslo, Norway, June 11-15. Lopez, B. et al. 2011, Final Report BRGM/RP-59515-FR, 166 p.
Szyda, Joanna; Liu, Zengting; Zatoń-Dobrowolska, Magdalena; Wierzbicki, Heliodor; Rzasa, Anna
2008-01-01
We analysed data from a selective DNA pooling experiment with 130 individuals of the arctic fox (Alopex lagopus), originating from 2 types differing in body size. The association between alleles of 6 selected unlinked molecular markers and body size was tested using univariate and multinomial logistic regression models, applying odds ratios and test statistics from the power divergence family. Due to the small sample size and the resulting sparseness of the data table, hypothesis testing could not rely on the asymptotic distributions of the tests. Instead, we accounted for data sparseness by (i) modifying the confidence intervals of the odds ratio; (ii) using a normal approximation of the asymptotic distribution of the power divergence tests with different approaches for calculating moments of the statistics; and (iii) assessing P values empirically, based on bootstrap samples. As a result, a significant association was observed for 3 markers. Furthermore, we used simulations to assess the validity of the normal approximation of the asymptotic distribution of the test statistics under the conditions of small and sparse samples.
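The empirical P value strategy in point (iii) can be sketched as a resampling loop; the version below is a permutation-flavoured variant of the bootstrap idea, with an arbitrary test statistic supplied by the caller and invented toy data:

```python
import numpy as np

def empirical_pvalue(x, y, stat_fn, n_resamples=10_000, seed=0):
    """Empirical p value by resampling under H0: pool the two groups,
    reshuffle the labels, and count how often the resampled statistic
    reaches the observed one. Avoids relying on sparse-data asymptotics."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = stat_fn(x, y)
    hits = 0
    for _ in range(n_resamples):
        perm = rng.permutation(pooled)
        if stat_fn(perm[:len(x)], perm[len(x):]) >= observed:
            hits += 1
    return (hits + 1) / (n_resamples + 1)    # add-one to avoid p = 0

rng = np.random.default_rng(1)
x, y = rng.normal(0.5, 1, 40), rng.normal(0.0, 1, 45)
print(empirical_pvalue(x, y, lambda a, b: abs(a.mean() - b.mean())))
```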
NASA Astrophysics Data System (ADS)
Jalali, Mohammad; Ramazi, Hamidreza
2018-06-01
Earthquake catalogues are the main source of statistical seismology for long-term studies of earthquake occurrence. Studying spatiotemporal problems is therefore important to reduce the related uncertainties in statistical seismology studies. A statistical tool, the time normalization method, was applied to revise the time-frequency relationship in one of the most active regions of Asia, Eastern Iran and western Afghanistan (a and b were calculated as approximately 8.84 and 1.99 on the exponential scale, not the logarithmic scale). A geostatistical simulation method was further utilized to reduce the uncertainties in the spatial domain, producing a representative synthetic catalogue with 5361 events. The synthetic database was classified using a Geographical Information System (GIS), based on simulated magnitudes, to reveal the underlying seismicity patterns. Although some regions of high seismicity correspond to known faults, significantly, as far as seismic patterns are concerned, the new method highlights possible locations of interest that had not been previously identified. It also reveals some previously unrecognized lineations and clusters in likely future strain release.
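A minimal sketch of fitting the exponential-scale frequency-magnitude relation N(M) = a·exp(−b·M) by linear regression on log-binned counts; the binning choice and the synthetic catalogue are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

def fit_frequency_magnitude(mags, bin_width=0.25):
    """Fit N(M) = a * exp(-b * M) -- the exponential form of the
    frequency-magnitude relation -- by regressing log counts on magnitude."""
    bins = np.arange(mags.min(), mags.max() + bin_width, bin_width)
    counts, edges = np.histogram(mags, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    keep = counts > 0                         # log of zero counts is undefined
    slope, intercept = np.polyfit(centers[keep], np.log(counts[keep]), 1)
    return np.exp(intercept), -slope          # a, b

rng = np.random.default_rng(5)
mags = 3.0 + rng.exponential(scale=1.0 / 1.99, size=5000)  # synthetic catalogue
a_hat, b_hat = fit_frequency_magnitude(mags)
print(f"a = {a_hat:.1f}, b = {b_hat:.2f}")    # b should be near 1.99
```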
Experimental design, power and sample size for animal reproduction experiments.
Chapman, Phillip L; Seidel, George E
2008-01-01
The present paper concerns statistical issues in the design of animal reproduction experiments, with emphasis on the problems of sample size determination and power calculations. We include examples and non-technical discussions aimed at helping researchers avoid serious errors that may invalidate or seriously impair the validity of conclusions from experiments. Screen shots from interactive power calculation programs and basic SAS power calculation programs are presented to aid in understanding statistical power and computing power in some common experimental situations. Practical issues that are common to most statistical design problems are briefly discussed. These include one-sided hypothesis tests, power level criteria, equality of within-group variances, transformations of response variables to achieve variance equality, optimal specification of treatment group sizes, 'post hoc' power analysis and arguments for the increased use of confidence intervals in place of hypothesis tests.
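As a concrete example of the kind of sample-size and power calculation discussed, statsmodels offers a solver for the two-sample t test; the effect sizes and group sizes below are illustrative only:

```python
from statsmodels.stats.power import TTestIndPower

solver = TTestIndPower()
# animals per group needed to detect a one-SD difference at 80% power
n_per_group = solver.solve_power(effect_size=1.0, alpha=0.05, power=0.80)
# power achieved with 10 animals per group for a half-SD difference
achieved = solver.solve_power(effect_size=0.5, alpha=0.05, nobs1=10)
print(round(n_per_group, 1), round(achieved, 2))
```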
A Comparative Study of Shaping Ability of four Rotary Systems.
Rubio, Jorge; Zarzosa, José Ignacio; Pallarés, Antonio
2015-12-01
This study compared the cutting area, instrumentation time, root canal anatomy preservation and non-instrumented areas obtained by F360(®), Mtwo(®), RaCe(®) and Hyflex(®) files with ISO size 35. 120 teeth with a single straight root and root canal were divided into 4 groups. Working length was calculated by using X-rays. The teeth were sectioned with a handpiece and a diamond disc, and the sections were observed with Nikon SMZ-2T stereoscopic microscope and an Intralux 4000-1 light source. The groups were adjusted with a preoperative analysis with AutoCAD. The teeth were reconstructed by a #10 K-File and epoxy glue. Each group was instrumented with one of the four file systems. The instrumentation time was calculated with a 1/100 second chronometer. The area of the thirds and root canal anatomy preservation were analyzed with AutoCAD 2013 and the non-instrumented areas with AutoCAD 2013 and SMZ-2T stereoscopic microscope. The statistical analysis was made with Levene's Test, ANOVA, Bonferroni Test and Pearson's Chi-square. Equal variances were shown by Levene's Test (P > 0.05). ANOVA (P > 0.05) showed the absence of significant differences. There were significant differences in the instrumentation time (P < 0.05). For root canal anatomy preservation and non-instrumented areas, there were no significant differences between all systems (P > 0.05). The 4 different rotary systems produced similar cutting area, root canal anatomy preservation and non-instrumented areas. Regarding instrumentation time, F360(®) was the fastest system statistically.
Comparison of the optical depth of total ozone and atmospheric aerosols in Poprad-Gánovce, Slovakia
NASA Astrophysics Data System (ADS)
Hrabčák, Peter
2018-06-01
The amount of ultraviolet solar radiation reaching the Earth's surface is significantly affected by atmospheric ozone along with aerosols. The present paper is focused on a comparison of the total ozone and atmospheric aerosol optical depth in the area of Poprad-Gánovce, which is situated at the altitude of 706 m a. s. l. in the vicinity of the highest mountain in the Carpathian mountains. The direct solar ultraviolet radiation has been measured here continuously since August 1993 using a Brewer MKIV ozone spectrophotometer. These measurements have been used to calculate the total amount of atmospheric ozone and, subsequently, its optical depth. They have also been used to determine the atmospheric aerosol optical depth (AOD) using the Langley plot method. Results obtained by this method were verified by means of comparison with a method that is part of the Brewer operating software, as well as with measurements made by a Cimel sun photometer. Diffuse radiation, the stray-light effect and polarization corrections were applied to calculate the AOD using the Langley plot method. In this paper, two factors that substantially attenuate the flow of direct ultraviolet solar radiation to the Earth's surface are compared. The paper presents results for 23 years of measurements, namely from 1994 to 2016. Values of optical depth were determined for the wavelengths of 306.3, 310, 313.5, 316.8 and 320 nm. A statistically significant decrease in the total optical depth of the atmosphere was observed with all examined wavelengths. Its root cause is the statistically significant decline in the optical depth of aerosols.
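The Langley plot method reduces to a linear regression of log signal on air mass, ln I = ln I0 − τ·m, after which the Rayleigh and ozone contributions are subtracted from the total optical depth to leave the AOD. A minimal sketch with synthetic data; the correction values below are placeholders, not the paper's numbers:

```python
import numpy as np

def langley_aod(airmass, signal, tau_rayleigh, tau_ozone):
    """Slope of ln(signal) vs air mass gives the total optical depth;
    subtracting the Rayleigh and ozone parts leaves the aerosol part."""
    slope, _intercept = np.polyfit(airmass, np.log(signal), 1)
    return -slope - tau_rayleigh - tau_ozone

m = np.linspace(1.2, 3.5, 40)                       # relative optical air mass
rng = np.random.default_rng(7)
counts = np.exp(-0.8 * m) * rng.lognormal(0.0, 0.01, m.size)  # tau_total = 0.8
print(langley_aod(m, counts, tau_rayleigh=0.55, tau_ozone=0.15))  # ~0.10
```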
Space, race, and poverty: Spatial inequalities in walkable neighborhood amenities?
Aldstadt, Jared; Whalen, John; White, Kellee; Castro, Marcia C.; Williams, David R.
2017-01-01
BACKGROUND Multiple and varied benefits have been suggested for increased neighborhood walkability. However, spatial inequalities in neighborhood walkability likely exist and may be attributable, in part, to residential segregation. OBJECTIVE Utilizing a spatial demographic perspective, we evaluated potential spatial inequalities in walkable neighborhood amenities across census tracts in Boston, MA (US). METHODS The independent variables included minority racial/ethnic population percentages and percent of families in poverty. Walkable neighborhood amenities were assessed with a composite measure. Spatial autocorrelation in key study variables was first calculated with the Global Moran's I statistic. Then, Spearman correlations between neighborhood socio-demographic characteristics and walkable neighborhood amenities were calculated, as well as Spearman correlations accounting for spatial autocorrelation. We fit ordinary least squares (OLS) regression and spatial autoregressive models, when appropriate, as a final step. RESULTS Significant positive spatial autocorrelation was found in neighborhood socio-demographic characteristics (e.g. census tract percent Black), but not in walkable neighborhood amenities or in the OLS regression residuals. Spearman correlations between neighborhood socio-demographic characteristics and walkable neighborhood amenities were not statistically significant, nor were neighborhood socio-demographic characteristics significantly associated with walkable neighborhood amenities in OLS regression models. CONCLUSIONS Our results suggest that there is residential segregation in Boston and that spatial inequalities do not necessarily show up using a composite measure. COMMENTS Future research in other geographic areas (including international contexts) and using different definitions of neighborhoods (including small-area definitions) should evaluate if spatial inequalities are found using composite measures but also should use measures of specific neighborhood amenities. PMID:29046612
Quantum statistical mechanics of dense partially ionized hydrogen
NASA Technical Reports Server (NTRS)
Dewitt, H. E.; Rogers, F. J.
1972-01-01
The theory of dense hydrogen plasmas beginning with the two component quantum grand partition function is reviewed. It is shown that ionization equilibrium and molecular dissociation equilibrium can be treated in the same manner with proper consideration of all two-body states. A quantum perturbation expansion is used to give an accurate calculation of the equation of state of the gas for any degree of dissociation and ionization. The statistical mechanical calculation of the plasma equation of state is intended for stellar interiors. The general approach is extended to the calculation of the equation of state of the outer layers of large planets.
Fragility of Results in Ophthalmology Randomized Controlled Trials: A Systematic Review.
Shen, Carl; Shamsudeen, Isabel; Farrokhyar, Forough; Sabri, Kourosh
2018-05-01
Evidence-based medicine is guided by our interpretation of randomized controlled trials (RCTs) that address important clinical questions. Evaluation of the robustness of statistically significant outcomes adds a crucial element to the global assessment of trial findings. The purpose of this systematic review was to determine the robustness of ophthalmology RCTs through application of the Fragility Index (FI), a novel metric of the robustness of statistically significant outcomes. Systematic review. A literature search (MEDLINE) was performed for all RCTs published in top ophthalmology journals and ophthalmology-related RCTs published in high-impact journals in the past 10 years. Two reviewers independently screened 1811 identified articles for inclusion if they (1) were a human ophthalmology-related trial, (2) had a 1:1 prospective study design, and (3) reported a statistically significant dichotomous outcome in the abstract. All relevant data, including outcome, P value, number of patients in each group, number of events in each group, number of patients lost to follow-up, and trial characteristics, were extracted. The FI of each RCT was calculated and multivariate regression applied to determine predictive factors. The 156 trials had a median sample size of 91.5 (range, 13-2593) patients/eyes, and a median of 28 (range, 4-2217) events. The median FI of the included trials was 2 (range, 0-48), meaning that if 2 non-events were switched to events in the treatment group, the result would lose its statistical significance. A quarter of all trials had an FI of 1 or less, and 75% of trials had an FI of 6 or less. The FI was less than the number of missing data points in 52.6% of trials. Predictive factors for FI by multivariate regression included smaller P value (P < 0.001), larger sample size (P = 0.001), larger number of events (P = 0.011), and journal impact factor (P = 0.029). In ophthalmology trials, statistically significant dichotomous results are often fragile, meaning that a difference of only a couple of events can change the statistical significance. An application of the FI in RCTs may aid in the interpretation of results and assessment of quality of evidence. Copyright © 2017 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
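The Fragility Index itself is straightforward to compute: flip non-events to events, one at a time, in the group with fewer events and re-run Fisher's exact test until significance is lost. A sketch follows; the example counts are invented:

```python
from scipy.stats import fisher_exact

def fragility_index(e1, n1, e2, n2, alpha=0.05):
    """Fragility Index: how many non-events in the group with fewer events
    must be flipped to events before Fisher's exact test loses significance.
    Returns 0 if the comparison is not significant to begin with."""
    if e1 > e2:                       # work on the group with fewer events
        e1, n1, e2, n2 = e2, n2, e1, n1
    fi = 0
    _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
    while p < alpha and e1 < n1:
        e1 += 1                       # flip one non-event to an event
        fi += 1
        _, p = fisher_exact([[e1, n1 - e1], [e2, n2 - e2]])
    return fi

# example: 5/100 vs 15/100 events -> significant, but fragile
print(fragility_index(5, 100, 15, 100))
```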
NASA Technical Reports Server (NTRS)
Wallace, G. R.; Weathers, G. D.; Graf, E. R.
1973-01-01
The statistics of filtered pseudorandom digital sequences called hybrid-sum sequences, formed from the modulo-two sum of several maximum-length sequences, are analyzed. The results indicate that a relation exists between the statistics of the filtered sequence and the characteristic polynomials of the component maximum length sequences. An analysis procedure is developed for identifying a large group of sequences with good statistical properties for applications requiring the generation of analog pseudorandom noise. By use of the analysis approach, the filtering process is approximated by the convolution of the sequence with a sum of unit step functions. A parameter reflecting the overall statistical properties of filtered pseudorandom sequences is derived. This parameter is called the statistical quality factor. A computer algorithm to calculate the statistical quality factor for the filtered sequences is presented, and the results for two examples of sequence combinations are included. The analysis reveals that the statistics of the signals generated with the hybrid-sum generator are potentially superior to the statistics of signals generated with maximum-length generators. Furthermore, fewer calculations are required to evaluate the statistics of a large group of hybrid-sum generators than are required to evaluate the statistics of the same size group of approximately equivalent maximum-length sequences.
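A hybrid-sum sequence can be sketched as the XOR (modulo-two sum) of two maximum-length sequences generated by linear-feedback shift registers; the taps below correspond to the primitive polynomials x^4+x^3+1 and x^5+x^2+1, chosen for illustration rather than taken from the paper, and a moving average stands in for the analog filtering:

```python
import numpy as np

def lfsr_sequence(taps, seed, length):
    """Maximum-length sequence from a Fibonacci LFSR.
    taps: 1-indexed register stages XORed into the feedback bit."""
    state = list(seed)
    out = []
    for _ in range(length):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(out)

a = lfsr_sequence([3, 4], [1, 0, 0, 1], 512)       # x^4 + x^3 + 1
b = lfsr_sequence([2, 5], [1, 0, 1, 0, 0], 512)    # x^5 + x^2 + 1
hybrid = a ^ b                                     # modulo-two (hybrid) sum
# crude low-pass filter (moving average) of the bipolar sequence, standing
# in for the analog filtering whose output statistics the paper analyses
analog = np.convolve(2 * hybrid - 1, np.ones(8) / 8.0, mode="valid")
print(analog.mean(), analog.std())
```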
Miccoli, Gabriele; Gaimari, Gianfranco; Seracchiani, Marco; Morese, Antonio; Khrenova, Tatyana; Di Nardo, Dario
2017-01-01
The aim of the study was to evaluate the effectiveness of different heat treatments in improving the resistance to fracture of Ni-Ti endodontic rotary instruments. 24 new NiTi instruments similar in length and shape were tested in a 60° curved artificial root canal: 12 M3 instruments, tip size 25 and .06 taper (United Dental, Shanghai, China), and 12 M3 Pro Gold instruments, tip size 25 and .06 taper (United Dental, Shanghai, China). Each group received a different heat treatment. Cycles to fracture were calculated for each instrument. Differences among groups were evaluated with an analysis of variance test (significance level set at P < 0.05). Statistical analysis found significant differences between groups (p = 0.0213). The M3 Pro Gold instruments were significantly more resistant to fatigue (mean = 1012 cycles, SD ± 77) than the M3 instruments (mean = 748 cycles, SD ± 62). No statistically significant differences were found between fragment lengths (p > 0.05). The increased flexibility and the reduction of internal defects produced by heat treatments during or after manufacturing may be responsible for the improved resistance to cyclic fatigue and flexural stresses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Penner, J.E.
The magnitude of the chlorofluoromethane (CFM) induced depletion of the ozone layer is considered a key problem in atmospheric research. The historical rise in the atmospheric concentrations of CFCl3 and CF2Cl2, the major CFM species, is well documented. Atmospheric CO2 has also been increasing. Instead of depleting O3, the expected effect of CO2 is to increase its concentration. The simultaneous effects of these perturbations were studied. Results indicate that increases in CO2 can significantly alter the predicted ozone trend. This will complicate efforts to detect the trend in O3 caused by increases in CFMs. Since the calculated effects of these perturbations are largest at high altitudes, one might expect to detect changes in high-altitude O3 sooner than those in total O3. Therefore a comparison was made between the calculated change in O3 at high altitude and statistical detection limits for abnormal change as developed from Umkehr data from Arosa, Switzerland. Its significance for trend detection is discussed. Finally, since CO2 effects will be important in the next 50 to 100 years, the effects of temperature changes from CO2 increase on O3 loss rates from different families were examined. Significant changes in the NOx-catalyzed ozone loss rates that have not previously been discussed were found. It is concluded that the O3 decrease at steady state from the coupled CFM and CO2 perturbation is larger than the decrease calculated by summing the separate effects of these perturbations. The expected increase in CO2 can significantly affect predicted O3 trends in the next 50 to 100 years. O3 changes in Umkehr level 7 are more detectable, in a statistical sense, than those at higher levels. The temperature effect of CO2 on the NOx-catalyzed O3 destruction rate was found to be as large as or larger than the effect of temperature on the pure oxygen loss rate.
2013-01-01
Background The benefits of stroke unit care in terms of reducing death, dependency and institutional care were demonstrated in a 2009 Cochrane review carried out by the Stroke Unit Trialists’ Collaboration. Methods As requested by the Belgian health authorities, a systematic review and meta-analysis of the effect of acute stroke units was performed. Clinical trials mentioned in the original Cochrane review were included. In addition, an electronic database search on Medline, Embase, the Cochrane Central Register of Controlled Trials, and Physiotherapy Evidence Database (PEDro) was conducted to identify trials published since 2006. Trials investigating acute stroke units compared to alternative care were eligible for inclusion. Study quality was appraised according to the criteria recommended by Scottish Intercollegiate Guidelines Network (SIGN) and the GRADE system. In the meta-analysis, dichotomous outcomes were estimated by calculating odds ratios (OR) and continuous outcomes were estimated by calculating standardized mean differences. The weight of a study was calculated based on inverse variance. Results Evidence from eight trials comparing acute stroke unit and conventional care (general medical ward) were retained for the main synthesis and analysis. The findings from this study were broadly in line with the original Cochrane review: acute stroke units can improve survival and independency, as well as reduce the chance of hospitalization and the length of inpatient stay. The improvement with stroke unit care on mortality was less conclusive and only reached borderline level of significance (OR 0.84, 95% CI 0.70 to 1.00, P = 0.05). This improvement became statistically non-significant (OR 0.87, 95% CI 0.74 to 1.03, P = 0.12) when data from two unpublished trials (Goteborg-Ostra and Svendborg) were added to the analysis. After further also adding two additional trials (Beijing, Stockholm) with very short observation periods (until discharge), the difference between acute stroke units and general medical wards on death remained statistically non-significant (OR 0.86, 95% CI 0.74 to 1.01, P = 0.06). Furthermore, based on figures reported by the clinical trials included in this study, a slightly higher proportion of patients became dependent after receiving care in stroke units than those treated in general medical wards – although the difference was not statistically significant. This result could have an impact on the future demand for healthcare services for individuals that survive a stroke but became dependent on their care-givers. Conclusions These findings demonstrate that a well-conducted meta-analysis can produce results that can be of value to policymakers but the choice of inclusion/exclusion criteria and outcomes in this context needs careful consideration. The financing of interventions such as stroke units that increase independency and reduce inpatient stays are worthwhile in a context of an ageing population with increasing care needs. One limitation of this study was the selection of trials published in only four languages: English, French, Dutch and German. This choice was pragmatic in the context of this study, where the objective was to support health authorities in their decision processes. PMID:24164771
Imprints of magnetic power and helicity spectra on radio polarimetry statistics
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Enßlin, T. A.
2011-06-01
The statistical properties of turbulent magnetic fields in radio-synchrotron sources should be imprinted on the statistics of polarimetric observables. In search of these imprints, i.e. characteristic modifications of the polarimetry statistics caused by magnetic field properties, we calculate correlation and cross-correlation functions from a set of observables that contain the total intensity I, the polarized intensity P, and the Faraday depth φ. The correlation functions are evaluated for all combinations of observables up to fourth order in the magnetic field B. We derive these analytically as far as possible and from first principles, using only some basic assumptions, such as Gaussian statistics for the underlying magnetic field in the observed region and statistical homogeneity. We further assume some simplifications to reduce the complexity of the calculations, because for a start we were interested in a proof of concept. Using this statistical approach, we show that it is possible to gain information about the helical part of the magnetic power spectrum via the correlation functions ⟨P(k⊥) φ(k'⊥) φ(k''⊥)⟩_B and ⟨I(k⊥) φ(k'⊥) φ(k''⊥)⟩_B. Using this insight, we construct an easy-to-use test for helicity, called LITMUS (Local Inference Test for Magnetic fields which Uncovers heliceS), which gives a spectrally integrated measure of helicity. For now, all calculations are given in the Faraday-free case, but are set up so that Faraday rotation effects can be included later.
Human genetic variation and yellow fever mortality during 19th century U.S. epidemics.
Blake, Lauren E; Garcia-Blanco, Mariano A
2014-06-03
We calculated the incidence, mortality, and case fatality rates for Caucasians and non-Caucasians during 19th century yellow fever (YF) epidemics in the United States and determined statistical significance for differences in the rates in different populations. We evaluated nongenetic host factors, including socioeconomic, environmental, cultural, demographic, and acquired immunity status, that could have influenced these differences. While differences in incidence rates were not significant between Caucasians and non-Caucasians, differences in mortality and case fatality rates were statistically significant for all epidemics tested (P < 0.01). Caucasians diagnosed with YF were 6.8 times more likely to succumb than non-Caucasians with the disease. No other major causes of death during the 19th century demonstrated a similar mortality skew toward Caucasians. Nongenetic host factors were examined and could not explain these large differences. We propose that the remarkably lower case fatality rates for individuals of non-Caucasian ancestry are the result of human genetic variation in loci encoding innate immune mediators. Different degrees of severity of yellow fever have been observed across diverse populations, but this study is the first to demonstrate a statistically significant association between ancestry and the outcome of YF. With the global burden of mosquito-borne flaviviral infections, such as YF and dengue, on the rise, identifying and characterizing host factors could prove pivotal in the prevention of epidemics and the development of effective treatments. Copyright © 2014 Blake and Garcia-Blanco.
Guo, Hao; Cao, Xiaohua; Liu, Zhifen; Li, Haifang; Chen, Junjie; Zhang, Kerang
2012-12-05
Resting-state functional brain networks have been widely studied in brain disease research. However, it is currently unclear whether abnormal resting-state functional brain network metrics can be used with machine learning for the classification of brain diseases. Resting-state functional brain networks were constructed for 28 healthy controls and 38 major depressive disorder patients by thresholding partial correlation matrices of 90 regions. Three nodal metrics were calculated using graph theory-based approaches. Nonparametric permutation tests were then used for group comparisons of topological metrics, which were used as classification features in six different algorithms. We used statistical significance as the threshold for selecting features and measured the accuracies of the six classifiers with different numbers of features. A sensitivity analysis method was used to evaluate the importance of different features. The results indicated that some regions exhibited significantly abnormal nodal centralities, including the limbic system, basal ganglia, medial temporal, and prefrontal regions. The support vector machine with radial basis kernel function and the neural network algorithm exhibited the highest average accuracies (79.27 and 78.22%, respectively) with 28 features (P < 0.05). Correlation analysis between feature importance and the statistical significance of the metrics revealed a strong positive correlation between them. Overall, the current study demonstrated that major depressive disorder is associated with abnormal functional brain network topological metrics and that statistically significant nodal metrics can be successfully used for feature selection in classification algorithms.
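A sketch of the nodal-metric-plus-classifier pipeline using networkx and scikit-learn; the threshold, the particular three centralities, and the synthetic connectivity matrices are illustrative assumptions, not the study's exact metrics or data:

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def nodal_features(corr, thresh=0.3):
    """Binarize a region-by-region connectivity matrix and concatenate
    three nodal centralities per region into one feature vector."""
    adj = (np.abs(corr) > thresh).astype(int)
    np.fill_diagonal(adj, 0)
    g = nx.from_numpy_array(adj)
    deg = nx.degree_centrality(g)
    btw = nx.betweenness_centrality(g)
    clo = nx.closeness_centrality(g)
    return np.array([[deg[i], btw[i], clo[i]] for i in g.nodes]).ravel()

rng = np.random.default_rng(42)                 # synthetic stand-in data
mats = [np.corrcoef(rng.normal(size=(90, 120))) for _ in range(66)]
y = np.array([0] * 28 + [1] * 38)               # 28 controls, 38 patients
X = np.vstack([nodal_features(m) for m in mats])
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())
```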
Wishaupt, Jérôme O; Ploeg, Tjeerd van der; Smeets, Leo C; Groot, Ronald de; Versteegh, Florens G A; Hartwig, Nico G
2017-05-01
The relation between viral load and disease severity in childhood acute respiratory tract infections (ARI) is not fully understood. The aim was to assess the clinical relevance of the relation between viral load, determined by the cycle threshold (CT) value of real-time reverse transcription-polymerase chain reaction assays, and disease severity in children with single and multiple viral ARI. 582 children with ARI were prospectively followed and tested for 15 viruses. Correlations were calculated between CT values and clinical parameters. In single viral ARI, statistically significant correlations were found between viral loads of Respiratory Syncytial Virus (RSV) and hospitalization, and between viral loads of Human Coronavirus (HCoV) and a disease severity score. In multiple viral ARI, statistically significant correlations between viral load and clinical parameters were also found. In RSV-Rhinovirus (RV) multiple infections, a low viral load of RV was correlated with a longer hospital stay and a longer duration of supplemental oxygen use. The mean CT value for RV, HCoV and Parainfluenza virus was significantly lower in single versus multiple infections. Although correlations between CT values and clinical parameters were found in patients with single and multiple viral infections, the clinical importance of these findings is limited because individual differences in host, viral and laboratory factors complicate the interpretation of statistically significant findings. In multiple infections, viral load cannot be used to differentiate between the disease-causing virus and innocent bystanders. Copyright © 2017 Elsevier B.V. All rights reserved.
Got power? A systematic review of sample size adequacy in health professions education research.
Cook, David A; Hatala, Rose
2015-03-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies, the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among the 297 studies comparing alternate simulation approaches, the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
Argalji, Nina; Silva, Eduardo Moreira da; Cury-Saramago, Adriana; Mattos, Claudia Trindade
2017-08-21
The objective of this study was to compare the coating dimensions and surface characteristics of two different esthetic-coated nickel-titanium orthodontic rectangular archwires, as-received from the manufacturer and after oral exposure. The study was designed for comparative purposes. Both archwires, as-received from the manufacturer, were observed using a stereomicroscope to measure coating thickness and inner metallic dimensions. The wires were also exposed to the oral environment in 11 patients under active orthodontic treatment for 21 days. After removing the samples, stereomicroscopy images were captured, coating loss was measured and its percentage was calculated. Three segments of each wire (one as-received and two after oral exposure) were observed using scanning electron microscopy for a qualitative analysis of the labial surface of the wires. The Lilliefors test and independent t-test were applied to verify normality of the data and statistical differences between wires, respectively. The significance level adopted was 0.05. The results showed that the differences between the wires when comparing inner height and thickness were statistically significant (p < 0.0001). On average, the most recently launched wire presented a coating thickness twice that of the control wire, which was also a statistically significant difference. The coating loss percentage was also statistically different (p = 0.0346) when the latest launched wire (13.27%) was compared to the control (29.63%). In conclusion, the coating of the most recent wire was thicker and more uniform, whereas the control had a thinner coating on the edges. After oral exposure, both tested wires presented coating loss, but the most recently launched wire exhibited better results.
NASA Astrophysics Data System (ADS)
Sun, Taohua; Zhang, Xinhui; Miao, Ying; Zhou, Yang; Shi, Jie; Yan, Meixing; Chen, Anjin
2018-06-01
The antiviral activity in vitro and in vivo of two fucoidan fractions with low molecular weight and different sulfate contents from Laminaria japonica (LMW fucoidans), and their effect on the immune system, were investigated in order to examine the possible mechanism. In vitro, I-type influenza virus, adenovirus and Parainfluenza virus I were used to infect Hep-2, Hela and MDCK cells, respectively, and the 50% tissue culture infective dose (TCID50) was calculated to assess the antiviral activity of the two LMW fucoidans. The results indicated that, compared with the control group, both LMW fucoidans had remarkable antiviral activity in vitro at middle and high doses, while at low doses their antiviral activity was not statistically different from that of the blank control group; there was also no statistically significant difference in antiviral activity between the two LMW fucoidans. In vivo, LMW fucoidans prolonged the survival time of virus-infected mice and significantly improved their lung index, with statistically significant differences from the control group (p < 0.01). However, the difference in survival time between the two LMW fucoidans was not statistically significant (p > 0.05). Both LMW fucoidans (LF1, LF2) increased the thymus index, spleen index, phagocytic index, phagocytosis coefficient and half hemolysin value at middle and high doses, which suggested that LMW fucoidans could play an antiviral role by improving the quality of immune organs, immune cell phagocytosis and humoral immunity.
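The 50% tissue culture infective dose is classically obtained from serial-dilution well counts with the Reed-Muench method; a sketch follows, assuming dilutions ordered from most to least concentrated and well counts that bracket the 50% endpoint (the example counts are invented):

```python
import numpy as np

def tcid50_reed_muench(log10_dilutions, infected, total):
    """Reed-Muench 50% endpoint. log10_dilutions in descending
    concentration (e.g. [-1, -2, -3, -4, -5]); infected/total are
    well counts per dilution and must bracket the 50% endpoint."""
    infected = np.asarray(infected, dtype=float)
    total = np.asarray(total, dtype=float)
    cum_inf = np.cumsum(infected[::-1])[::-1]     # infected accumulate upward
    cum_uninf = np.cumsum(total - infected)       # uninfected accumulate downward
    pct = cum_inf / (cum_inf + cum_uninf)
    i = np.where(pct >= 0.5)[0][-1]               # last dilution with >= 50%
    pd = (pct[i] - 0.5) / (pct[i] - pct[i + 1])   # proportionate distance
    step = log10_dilutions[i] - log10_dilutions[i + 1]
    return log10_dilutions[i] - pd * step         # log10(TCID50 dilution)

# 8 wells per dilution; endpoint falls between 10^-3 (80%) and 10^-4 (20%)
print(tcid50_reed_muench([-1, -2, -3, -4, -5], [8, 8, 6, 2, 0], [8] * 5))
```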
NASA Astrophysics Data System (ADS)
Dennison, Andrew G.
Classification of the seafloor substrate can be done with a variety of methods, including visual (dives, drop cameras), mechanical (cores, grab samples) and acoustic (statistical analysis of echosounder returns) approaches. Acoustic methods offer a more powerful and efficient means of collecting useful information about the bottom type: larger areas can be sampled, and combining the collected data with visual and mechanical survey methods provides greater confidence in the classification of a mapped region. During a multibeam sonar survey, both bathymetric and backscatter data are collected. It is well documented that the statistical characteristics of a sonar backscatter mosaic depend on bottom type. While classifying the bottom type on the basis of backscatter alone can accurately predict and map bottom type, i.e. distinguish a muddy area from a rocky area, it lacks the ability to resolve and capture fine textural details, an important factor in many habitat mapping studies. Statistical processing of high-resolution multibeam data can capture the pertinent details about the bottom type that are rich in textural information, and further multivariate statistical processing can then isolate characteristic features and provide the basis for an accurate classification scheme. The development of a new classification method is described here, based upon the analysis of textural features in conjunction with ground-truth sampling. The processing and classification results for two geologically distinct nearshore areas of Lake Superior, off the Lester River, MN and the Amnicon River, WI, are presented, using the Minnesota Supercomputer Institute's Mesabi computing cluster for initial processing. Processed data were then calibrated using ground-truth samples to conduct an accuracy assessment of the surveyed areas. From the analysis of high-resolution bathymetry data collected at both survey sites it was possible to calculate a series of measures that describe textural information about the lake floor. Further processing suggests that the calculated features also capture a significant amount of statistical information about the lake floor terrain. Two sources of error, an anomalous heave and a refraction error, significantly deteriorated the quality of the processed data and the resulting validation results. Ground-truth samples used to validate the classification methods at both survey sites yielded accuracy values ranging from 5-30 percent at the Amnicon River and 60-70 percent at the Lester River. The final results suggest that this new processing methodology adequately captures textural information about the lake floor and provides an acceptable classification in the absence of significant data quality issues.
Mathias, Jane L; Wheaton, Patricia
2007-03-01
Deficits in attention are frequently reported following severe traumatic brain injury (TBI). However, methodological differences make it difficult to reconcile inconsistencies in the research findings in order to undertake an evidence-based assessment of attention. The current study therefore undertook a meta-analytic review of research examining attention following severe TBI. A search of the PsycINFO and PubMed databases spanning the years 1980 to 2005 was undertaken with 24 search terms. Detailed inclusion and exclusion criteria were used to screen all articles, leaving 41 studies that were included in the current meta-analysis. Weighted Cohen's d effect sizes, percentage overlap statistics, and confidence intervals were calculated for the different tests of attention. Fail-safe Ns were additionally calculated to address the bias introduced by the tendency to publish significant results. Large and significant deficits were found in specific measures of information-processing speed, attention span, focused/selective attention, sustained attention, and supervisory attentional control following severe TBI. Finally, age, education, and postinjury interval were not significantly related to these deficits in attention.
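Two of the quantities named above lend themselves to a compact sketch. The Python snippet below computes a Hedges-corrected Cohen's d and Rosenthal's fail-safe N; the input means, SDs and p values are hypothetical, and the formulas are the standard textbook forms rather than anything taken from the reviewed studies.

    import numpy as np
    from scipy.stats import norm

    def hedges_g(m1, m2, sd1, sd2, n1, n2):
        # Pooled-SD Cohen's d with the small-sample (Hedges) correction.
        sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        d = (m1 - m2) / sp
        return d * (1 - 3 / (4 * (n1 + n2) - 9))

    def fail_safe_n(p_values, alpha=0.05):
        # Rosenthal's method: how many unpublished null studies would be
        # needed to drag the combined z below significance.
        z = norm.isf(np.asarray(p_values))   # one-tailed z per study
        return (z.sum() / norm.isf(alpha)) ** 2 - len(p_values)

    g = hedges_g(24.0, 19.5, 6.0, 5.5, 30, 32)   # hypothetical TBI vs control
    print(round(g, 2), round(fail_safe_n([0.01, 0.003, 0.04]), 1))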
Estrogen receptor (ER) and progesterone receptor (PgR) in breast cancer of Indian women
Patil, Amit V; Bhamre, Rahul S; Singhai, Rajeev; Tayade, Mukund B; Patil, Vinayak W
2011-01-01
Objective To determine the expression of and relationship between estrogen receptors (ERs) and progesterone receptors (PgRs) in breast cancer in Indian women. Participants Surgically removed breast cancer tissues were collected at Grant Medical College and Sir JJ Group of Hospitals, Mumbai, India, from 300 cases of infiltrating duct carcinoma in Indian women after radical mastectomy or lumpectomy; the age- and menopause-related subgroups satisfied this requirement. Measurements Statistical significance was calculated by the likelihood ratio test, and relative risk served to check for significant differences. Relapse-free interval probabilities were calculated according to Kaplan and Meier, with the Cox–Mantel test used to compare survival functions and derive P values. Results We observed that only in middle-aged postmenopausal patients bearing pT2 tumors did ER and PgR status show prognostic significance, with the lowest tested cutoff value being 5 fmol/mg. Conclusion Immunohistochemistry has been shown to provide a prognostic factor for patients with breast cancer; the major aim of determining ER status is to assess the predictive response to hormonal therapy. PMID:24367174
A circular dichroism and structural study of the inclusion complex artemisinin-β-cyclodextrin
NASA Astrophysics Data System (ADS)
Marconi, Giancarlo; Monti, Sandra; Manoli, Francesco; Degli Esposti, Alessandra; Mayer, Bernd
2004-01-01
The inclusion complex between the powerful antimalarial agent Artemisinin and β-cyclodextrin has been studied by means of Circular Dichroism and elucidated by Density Functional Theory calculations on the isolated molecule combined with a statistical Monte Carlo search for the most stable geometry of the complex. The results evidence a host-guest structure in full agreement with the almost unaffected functionality of the drug, which is found to experience a significantly hydrophilic environment when complexed.
The effect of foot reflexology on pain in patients with metastatic cancer.
Stephenson, Nancy; Dalton, Jo Ann; Carlson, John
2003-11-01
Thirty-six oncology inpatients participated in this third pilot study investigating the effects of foot reflexology in which equianalgesic dosing was calculated. Foot reflexology was found to have a positive immediate effect for patients with metastatic cancer who report pain, although there was no statistically significant effect at 3 hours after intervention or at 24 hours after intervention. Further study is suggested for foot reflexology delivered by family in the homes for management of cancer pain.
A Model for Understanding the Relationship Between Transaction Costs and Acquisition Cost Breaches
2014-04-30
an assistant professor and received a BA in anthropology and a BA and MA in economics (2004) and a PhD in political economy and public policy (2008)... between transaction costs and cost overruns. Biggs (2013) showed that as the EAC SE/PM cost ratio rises there is a statistically significant corresponding... Estimate at Completion (EAC) is the sum of the ACWP and the estimate to completion (ETC) for the remaining work. The ETC can be calculated using the cost
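The fragment cites the standard earned-value relationship EAC = ACWP + ETC. A minimal sketch with invented numbers, using the common CPI-scaled form of the ETC (one of several variants, and not necessarily the one the excerpt goes on to describe):

    def eac(acwp, bcwp, bac):
        cpi = bcwp / acwp            # cost performance index
        etc = (bac - bcwp) / cpi     # estimate to completion for remaining work
        return acwp + etc            # estimate at completion

    print(eac(acwp=120.0, bcwp=100.0, bac=500.0))  # hypothetical $M values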
Counting your chickens before they're hatched: power analysis.
Jupiter, Daniel C
2014-01-01
How does an investigator know that he has enough subjects in his study design to have the predicted outcomes appear statistically significant? In this Investigators' Corner I discuss why such planning is necessary, give an intuitive introduction to the calculations needed to determine required sample sizes, and hint at some of the more technical difficulties inherent in this aspect of study planning. Copyright © 2014 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.
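A minimal example of the sample-size reasoning the column introduces, using statsmodels' power calculator for a two-sample t test; the effect size, alpha and power below are illustrative choices, not values from the article.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    n_per_group = analysis.solve_power(effect_size=0.5,   # Cohen's d
                                       alpha=0.05, power=0.80,
                                       alternative='two-sided')
    print(f"~{n_per_group:.0f} subjects per group")       # roughly 64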
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverses of a sequence of matrices be computed, so the problem basically reduces to minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other is a numerical analysis technique for reinverting, or improving, the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor that reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10⁻¹² is reasonable.
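A sketch of the two error-control devices described in the report, under assumed data: rounding tiny values to zero with the report's tolerance of 0.1 × 10⁻¹², and one Newton-Schulz refinement pass on a computed inverse (a standard reinversion technique; the report's exact procedure is not reproduced here).

    import numpy as np

    TOL = 0.1e-12  # tolerance reflecting ~18 significant digits, per the report

    def round_small(M, tol=TOL):
        M = M.copy()
        M[np.abs(M) < tol] = 0.0                 # suppress round-off debris
        return M

    def reinvert_once(A, X):
        # One step of X <- X(2I - AX) improves an approximate inverse X.
        return X @ (2 * np.eye(A.shape[0]) - A @ X)

    A = np.array([[4.0, 1.0], [2.0, 3.0]])
    X = np.linalg.inv(A) + 1e-8                  # approximate inverse with error
    X = round_small(reinvert_once(A, X))
    print(np.abs(A @ X - np.eye(2)).max())       # residual shrinks markedly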
The effect of neck dissection on quality of life after chemoradiation.
Donatelli-Lassig, Amy Anne; Duffy, Sonia A; Fowler, Karen E; Ronis, David L; Chepeha, Douglas B; Terrell, Jeffrey E
2008-10-01
To determine differences in quality of life (QOL) between patients with head and neck cancer who receive chemoradiation versus chemoradiation and neck dissection, a prospective cohort study was conducted at two tertiary otolaryngology clinics and a Veterans Administration hospital. The sample comprised 103 patients with Stage IV oropharyngeal squamous cell carcinoma treated via chemoradiation with or without neck dissection. A self-administered health survey was used to collect health, demographic, and QOL information pretreatment and 1 year later, with QOL measured via the SF-36 and HNQoL. Descriptive statistics were calculated for health/clinical characteristics, demographics, and QOL scores, and t tests evaluated changes in QOL over time. Sixty-five patients underwent chemoradiation and 38 patients underwent chemoradiation and neck dissection. Only the pain index of the SF-36 showed a significant difference between groups (P < 0.05), with the neck dissection group reporting greater pain. After post-treatment neck dissection, patients experience a statistically significant decrement in bodily pain domain scores, but other QOL scores are similar to those of patients who underwent chemoradiation alone.
The effect of neck dissection on quality of life after chemoradiation
Lassig, Amy Anne Donatelli; Duffy, Sonia A.; Fowler, Karen E.; Ronis, David L.; Chepeha, Douglas B.; Terrell, Jeffrey E.
2010-01-01
Objective To determine differences in QOL between head and neck cancer patients receiving chemoradiation versus chemoradiation and neck dissection. Methods A prospective cohort study was conducted at 2 tertiary otolaryngology clinics and a VA hospital. Sample: 103 patients with Stage IV oropharyngeal SCCA treated via chemoradiation +/- neck dissection. Intervention: self-administered health survey collecting health, demographic, and QOL information pretreatment and 1 year later. Main outcome measures: QOL via SF-36 and HNQoL. Descriptive statistics were calculated for health/clinical characteristics, demographics, and QOL scores. T-tests evaluated changes in QOL over time. Results 65 patients received chemoradiation and 38 chemoradiation + neck dissection. Only the pain index of the SF-36 showed a significant difference between groups (p < .05), with the neck dissection group reporting greater pain. Conclusions After post-treatment neck dissection, patients experience a statistically significant decrement in bodily pain domain scores, but other QOL scores are similar to those of patients undergoing chemoradiation alone. PMID:18922336
[Teaching practices and the position concerning medical education].
Medina-Figueroa, Alda María; Espinosa-Alarcón, Patricia Atzimba; Viniegra-Velázquez, Leonardo
2008-01-01
To estimate the degree of development of a position concerning medical education in a physician population, we carried out a cross-sectional study of 1580 physicians at an IMSS health facility; 395 participants were selected by non-proportional stratified sampling, of whom 244 (62%) were medical professors, including 15 physicians responsible for education. A previously validated instrument was applied to these participants, and three indicators were evaluated: overall agreement, most popular trend, and consequence. Group grading was done blindly. The Kuder-Richardson test was used to calculate internal instrument consistency, and nonparametric statistics were applied with significance set at < 0.05. Answering tendencies in agreement were similar among physicians; differences for heads or managers were statistically significant. The most popular trend was the participative one. In terms of consequence, some physicians showed no consequent sentence pairs. Although the participative trend was the most popular, it would appear that it has not been deeply pondered, judging by the consequence indicator. Teaching practices do not have any significant influence on the development of a position concerning medical education.
Rather, Shagufta; Keen, Abid; Sajad, Peerzada
2018-01-01
Aim: To evaluate the relationship between vitamin D levels and chronic spontaneous urticaria (CSU) and to compare patients with healthy age- and sex-matched controls. Material and Methods: This was a hospital-based cross-sectional study conducted over a period of 1 year, in which 110 patients with CSU were recruited along with an equal number of age- and sex-matched healthy controls. For each patient, the urticaria activity score (UAS) was calculated and an autologous serum skin test (ASST) was performed. Plasma 25-hydroxyvitamin D [25-(OH)D] was analyzed by a chemiluminescence method, with vitamin D deficiency defined as serum 25-(OH)D concentrations <30 ng/mL. The statistical analysis was carried out using appropriate statistical tests. Results: The mean serum 25-(OH)D level of CSU patients was 19.6 ± 6.9 ng/mL, whereas in the control group the mean level was 38.5 ± 6.7 ng/mL, the difference being statistically significant (P < 0.001). A significant negative correlation was found between vitamin D levels and UAS (P < 0.001). The number of patients with ASST positivity was 44 (40%). Conclusion: Patients with CSU had reduced levels of vitamin D when compared to healthy controls, and there was a significant negative correlation between serum vitamin D levels and the severity of CSU. PMID:29854636
Kleanthous, Kleanthis; Dermitzaki, Eleni; Papadimitriou, Dimitrios T; Papaevangelou, Vassiliki; Papadimitriou, Anastasios
2016-02-01
We examined the weight status of Greek schoolchildren from November 2009 to May 2012, shortly before, and during the early years of, the Greek economic crisis. This mixed longitudinal study formed part of the West Attica Growth Study and followed children aged 6-7, 9-10, 12-13 and 15-16 years every six months for 2.5 years. Each child's height and weight were measured and their body mass index calculated. We were able to determine the weight status of 1327 children (53% boys) based on their first and last measurements. Overweight, obesity and underweight were defined using the International Obesity Task Force criteria. During the 2.5-year study period, there was a decrease in the total prevalence of overweight and obesity that reached statistical significance for both sexes: from 43% to 37.3% (p = 0.02) in boys and from 33.4% to 26.9% (p = 0.0056) in girls. There was also a statistically significant increase in normal weight children and a slight but non-significant increase in underweight children of both sexes. During the initial years of the Greek economic crisis, there was a statistically significant reduction in overweight and obesity in children from six to 16 years of age. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
Krystkowiak, Izabella; Manguy, Jean; Davey, Norman E
2018-06-05
There is a pressing need for in silico tools that can aid in the identification of the complete repertoire of protein binding (SLiMs, MoRFs, miniMotifs) and modification (moiety attachment/removal, isomerization, cleavage) motifs. We have created PSSMSearch, an interactive web-based tool for rapid statistical modeling, visualization, discovery and annotation of protein motif specificity determinants to discover novel motifs in a proteome-wide manner. PSSMSearch analyses proteomes for regions with significant similarity to a motif specificity determinant model built from a set of aligned motif-containing peptides. Multiple scoring methods are available to build a position-specific scoring matrix (PSSM) describing the motif specificity determinant model. This model can then be modified by a user to add prior knowledge of specificity determinants through an interactive PSSM heatmap. PSSMSearch includes a statistical framework to calculate the significance of specificity determinant model matches against a proteome of interest. PSSMSearch also includes the SLiMSearch framework's annotation, motif functional analysis and filtering tools to highlight relevant discriminatory information. Additional tools to annotate statistically significant shared keywords and GO terms, or experimental evidence of interaction with a motif-recognizing protein have been added. Finally, PSSM-based conservation metrics have been created for taxonomic range analyses. The PSSMSearch web server is available at http://slim.ucd.ie/pssmsearch/.
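As a hedged sketch of the core idea behind such models (not PSSMSearch's actual code), the snippet below builds a log-odds position-specific scoring matrix from aligned motif-containing peptides and scores a candidate window; the toy peptides and the uniform background are assumptions.

    import numpy as np

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def build_pssm(peptides, pseudocount=1.0):
        L = len(peptides[0])
        counts = np.full((L, len(AA)), pseudocount)
        for pep in peptides:
            for i, aa in enumerate(pep):
                counts[i, AA.index(aa)] += 1
        freqs = counts / counts.sum(axis=1, keepdims=True)
        return np.log2(freqs * len(AA))          # log-odds vs uniform background

    def score(pssm, window):
        return sum(pssm[i, AA.index(aa)] for i, aa in enumerate(window))

    pssm = build_pssm(["RRASLG", "RRGSVA", "KRNSLG"])   # toy kinase-like motif
    print(round(score(pssm, "RRASLG"), 2))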
2013-01-01
Background The aim of this study was to investigate how perceived neighbourhood safety and area deprivation influenced the relationship between parklands and mental health. Methods Information about psychological distress, perceptions of safety, and demographic and socio-economic background at the individual level was extracted from the New South Wales Population Health Survey. The proportion of a postcode that was parkland was used as a proxy measure for access to parklands and was calculated for each individual. Generalized Estimating Equations logistic regression analyses were performed to account for correlation between participants within postcodes, with controls for socio-demographic characteristics and socio-economic status at the area level. Results In areas where residents perceived their neighbourhood to be “safe”, and controlling for area levels of socio-economic deprivation, there were no statistically significant associations between the proportion of parkland and high or very high psychological distress. In the most disadvantaged neighbourhoods that residents perceived as unsafe, greater proportions of parkland (over 20%) were associated with greater psychological distress, and this association was statistically significant (20-40% parkland: OR=2.27, 95% CI=1.45-3.55; >40% parkland: OR=2.53, 95% CI=1.53-4.19). Conclusion Our study indicates that perceptions of neighbourhood safety and area deprivation were statistically significant effect modifiers of the association between parkland and psychological distress. PMID:23635303
Chong, Shanley; Lobb, Elizabeth; Khan, Rabia; Abu-Rayya, Hisham; Byun, Roy; Jalaludin, Bin
2013-05-01
The aim of this study was to investigate how perceived neighbourhood safety and area deprivation influenced the relationship between parklands and mental health. Information about psychological distress, perceptions of safety, and demographic and socio-economic background at the individual level was extracted from the New South Wales Population Health Survey. The proportion of a postcode that was parkland was used as a proxy measure for access to parklands and was calculated for each individual. Generalized Estimating Equations logistic regression analyses were performed to account for correlation between participants within postcodes, with controls for socio-demographic characteristics and socio-economic status at the area level. In areas where residents perceived their neighbourhood to be "safe", and controlling for area levels of socio-economic deprivation, there were no statistically significant associations between the proportion of parkland and high or very high psychological distress. In the most disadvantaged neighbourhoods that residents perceived as unsafe, greater proportions of parkland (over 20%) were associated with greater psychological distress, and this association was statistically significant (20-40% parkland: OR=2.27, 95% CI=1.45-3.55; >40% parkland: OR=2.53, 95% CI=1.53-4.19). Our study indicates that perceptions of neighbourhood safety and area deprivation were statistically significant effect modifiers of the association between parkland and psychological distress.
Comparison of AL-Scan and IOLMaster 500 Partial Coherence Interferometry Optical Biometers.
Hoffer, Kenneth J; Savini, Giacomo
2016-10-01
To investigate agreement between the ocular biometry measurements provided by a newer optical biometer, the AL-Scan (Nidek Co, Ltd., Gamagori, Japan), and those provided by the IOLMaster 500 (Carl Zeiss Meditec, Jena, Germany), both of which are based on partial coherence interferometry. Axial length, corneal power, and anterior chamber depth (corneal epithelium to lens) were measured in 86 eyes of 86 patients scheduled for cataract surgery using both biometers. All values were analyzed using a paired t test, the Pearson product moment correlation coefficient (r), and Bland-Altman plots. The mean axial length values of the two instruments were exactly the same (23.46 ± 0.99 mm for both) and showed excellent agreement and correlation. On the contrary, the AL-Scan measured a steeper mean corneal power by 0.08 diopters (D) at the 2.4-mm zone but by only 0.03 D at the 3.3-mm zone, only the former difference being statistically significant. The AL-Scan also measured a deeper anterior chamber depth by 0.13 mm, which was statistically significant (P < .001). Agreement between the two units was good. However, the small but statistically significant differences in corneal power (at the IOLMaster-comparable 2.4-mm zone) and in anterior chamber depth make lens constant optimization necessary when calculating intraocular lens power by means of theoretical formulas. [J Refract Surg. 2016;32(10):694-698.]. Copyright 2016, SLACK Incorporated.
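A sketch of the Bland-Altman agreement analysis named above, on hypothetical paired axial-length readings (the study's raw data are not reproduced here):

    import numpy as np

    def bland_altman(a, b):
        diff = np.asarray(a) - np.asarray(b)
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)            # 95% limits of agreement
        return bias, bias - loa, bias + loa

    al_scan   = [23.41, 24.02, 22.87, 23.55, 23.90]   # hypothetical mm
    iolmaster = [23.40, 24.00, 22.88, 23.57, 23.88]
    bias, lo, hi = bland_altman(al_scan, iolmaster)
    print(f"bias={bias:+.3f} mm, LoA=({lo:+.3f}, {hi:+.3f})")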
Gong, Caixia; Yan, Miao; Jiang, Fei; Chen, Zehua; Long, Yuan; Chen, Lixian; Zheng, Qian; Shi, Bing
2014-06-01
This study aimed to observe the postoperative pain rate and degree of pain in preschool children with cleft lip and palate, and to investigate the effect of nursing intervention on pain relief. A total of 120 hospitalized three- to seven-year-old preschool children with cleft lip and palate were selected from May to October 2011 and randomly divided into a control group and experimental groups 1, 2, and 3. The control group received conventional nursing, experimental group 1 received analgesic drug treatment, experimental group 2 received psychological nursing intervention, and experimental group 3 received both psychological nursing intervention and analgesic drug treatment. At 6, 12, 24, and 48 h, pain self-assessments, parent assessments, and nurse assessments were calculated for the four groups using pain assessment forms, and the ratings were compared. The postoperative pain rates of the four groups ranged from 50.0% to 73.3%, and the difference among the four groups was statistically significant (P < 0.001). The differences among the control group and experimental groups 1 and 2 were not statistically significant (P = 0.871), whereas the differences between experimental group 3 and the other groups were statistically significant (P < 0.001). Postoperative pain in preschool children with cleft lip and palate is common, and psychological nursing intervention combined with analgesic treatment is effective in relieving it.
Lehl, G; Bansal, K; Sekhon, R
1999-12-01
A preliminary study was conducted on 50 children aged 4-12 years, who were divided into two groups on the basis of decayed, missing and filled teeth (DMFT): Group A (DMFT 1-3) and Group B (DMFT > 3). A 5-day diet diary was evaluated, and the Sweet Score, Total Sugar Exposures, At-Meal Sugar Exposures and Between-Meal Sugar Exposures were calculated. There was a statistically significant difference between the two groups in Sweet Score and Total Sugar Exposures; Between-Meal and At-Meal Sugar Exposures did not differ significantly.
The Role of Gender in Neuropsychological Assessment in Healthy Adolescents.
Mormile, Megan Elizabeth Evelyn; Langdon, Jody L; Hunt, Tamerah Nicole
2018-01-01
Research in college athletes has revealed significant gender differences in areas of verbal memory, visual memory, and reaction time. Additionally, research has focused on differences in neuropsychological components and gender in college populations; however, such differences have not been documented in healthy adolescent populations. The aim was to identify potential differences between males and females using different components of a common computerized neuropsychological test. A computerized neuropsychological test battery (ImPACT®) was administered to 662 high-school-age adolescent athletes (male: n = 451; female: n = 262). Differences between genders were assessed using a 1-way ANOVA. All statistical analyses were conducted using SPSS 23.0, with significance levels set a priori at P < .05. The 1-way ANOVA revealed statistically significant differences between males and females for composite reaction time (F(1,660) = 10.68, P = .001) and total symptom score (F(1,660) = 81.20, P < .001). However, no statistically significant differences were found between males and females in composite verbal memory, visual memory, visual motor, or impulse control scores (P > .05). Significant differences between males and females were found for composite reaction time and total symptom scores, with females reporting more symptoms and slower reaction times at baseline assessment. Increased symptom reporting by females may be attributed to both hormonal differences and increased honesty. Quicker reaction times in males may support theories that repetition of activities and quicker muscle contraction are gender dependent. However, additional research is necessary to understand gender differences in adolescent athletes during periods of cognitive and physical maturation.
Clinical calculators in hospital medicine: Availability, classification, and needs.
Dziadzko, Mikhail A; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly
2016-09-01
Clinical calculators are widely used in modern clinical practice, but are not generally integrated into electronic health record (EHR) systems. Important barriers to incorporating these clinical calculators into existing EHR systems include the need for real-time calculation, human-calculator interaction, and data source requirements. The objective of this study was to identify, classify, and evaluate the use of available clinical calculators for clinicians in the hospital setting. Dedicated online resources with medical calculators and providers of aggregated medical information were queried for readily available clinical calculators. Calculators were mapped by clinical category, mechanism of calculation, and the goal of calculation. Online statistics from selected Internet resources and clinician opinion were used to assess the use of clinical calculators. One hundred seventy-six readily available calculators in 4 categories, 6 primary specialties, and 40 subspecialties were identified. The goals of calculation included prediction, severity, risk estimation, diagnosis, and decision-making aid. A combination of summation logic with cutoffs or rules was the most frequent mechanism of computation. Combined results, online resource statistics, and clinician opinion identified the 13 most utilized calculators. Although not an exhaustive list, a total of 176 validated calculators were identified, classified, and evaluated for usefulness. Most of these calculators are used for adult patients in the critical care or internal medicine settings. Thirteen of the 176 clinical calculators were determined to be useful in our institution. All of these calculators have an interface for manual input. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Employing Deceptive Dynamic Network Topology Through Software-Defined Networking
2014-03-01
manage economies, banking, and businesses, to the way we gather intelligence and militaries wage war. With computer networks and the Internet, we have seen... space, along with other generated statistics, similar to that performed by the Ant Census project. As we have shown, there is an extensive and diverse... calculated RTT for each probe. In the ping statistics, we are presented the details of probes sent and responses received, and the calculated packet loss
Statistical analysis of QC data and estimation of fuel rod behaviour
NASA Astrophysics Data System (ADS)
Heins, L.; Groß, H.; Nissen, K.; Wunderlich, F.
1991-02-01
The in-reactor behaviour of fuel rods is influenced by many parameters; as far as fabrication is concerned, fuel pellet diameter and density and inner cladding diameter are important examples. Statistical analyses of quality control data show a scatter of these parameters within the specified tolerances. At present it is common practice to use a combination of superimposed unfavorable tolerance limits (a worst case dataset) in fuel rod design calculations, without considering the distributions. The results obtained in this way are very conservative, but the degree of conservatism is difficult to quantify. Probabilistic calculations based on distributions allow the worst case dataset to be replaced by a dataset leading to results with known, defined conservatism. This is achieved by response surface methods and Monte Carlo calculations on the basis of statistical distributions of the important input parameters. The procedure is illustrated by means of two examples.
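The probabilistic idea can be sketched in a few lines of Python. Everything below is invented for illustration - the tolerance bands, the distributions and, in particular, the placeholder response function standing in for a real fuel-rod response surface:

    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000

    # Hypothetical QC distributions clipped to specified tolerances.
    pellet_diam = rng.normal(8.05, 0.01, N).clip(8.00, 8.10)   # mm
    clad_id     = rng.normal(8.22, 0.01, N).clip(8.17, 8.27)   # mm
    density     = rng.normal(95.0, 0.5, N).clip(93.5, 96.5)    # % TD

    def response(d_pel, d_clad, rho):
        # Placeholder stand-in for a fitted response surface.
        return 100.0 * (d_clad - d_pel) + 0.5 * (rho - 94.0)

    out = response(pellet_diam, clad_id, density)
    worst = response(8.00, 8.27, 96.5)      # superimposed tolerance limits
    print(f"95th percentile {np.percentile(out, 95):.2f} vs worst case {worst:.2f}")

The gap between the Monte Carlo percentile and the worst-case value is exactly the "known, defined conservatism" the abstract refers to.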
Thieler, E. Robert; Himmelstoss, Emily A.; Zichichi, Jessica L.; Ergul, Ayhan
2009-01-01
The Digital Shoreline Analysis System (DSAS) version 4.0 is a software extension to ESRI ArcGIS v.9.2 and above that enables a user to calculate shoreline rate-of-change statistics from multiple historic shoreline positions. A user-friendly interface of simple buttons and menus guides the user through the major steps of shoreline change analysis. Components of the extension and user guide include (1) instruction on the proper way to define a reference baseline for measurements, (2) automated and manual generation of measurement transects and metadata based on user-specified parameters, and (3) output of calculated rates of shoreline change and other statistical information. DSAS computes shoreline rates of change using four different methods: (1) endpoint rate, (2) simple linear regression, (3) weighted linear regression, and (4) least median of squares. The standard error, correlation coefficient, and confidence interval are also computed for the simple and weighted linear-regression methods. The results of all rate calculations are output to a table that can be linked to the transect file by a common attribute field. DSAS is intended to facilitate the shoreline change-calculation process and to provide rate-of-change information and the statistical data necessary to establish the reliability of the calculated results. The software is also suitable for any generic application that calculates positional change over time, such as assessing rates of change of glacier limits in sequential aerial photos, river edge boundaries, land-cover changes, and so on.
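Two of DSAS's four rate methods are simple enough to sketch directly (this is an illustration of the definitions, not DSAS source code); the transect distances and years below are hypothetical:

    import numpy as np

    years    = np.array([1950.0, 1972.0, 1994.0, 2006.0])
    position = np.array([12.0, 7.5, 4.1, 1.0])   # m from baseline, one transect

    epr = (position[-1] - position[0]) / (years[-1] - years[0])   # end point rate
    lrr, intercept = np.polyfit(years, position, 1)               # regression rate

    print(f"EPR = {epr:.3f} m/yr, LRR = {lrr:.3f} m/yr")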
Statistics or How to Know Your Onions.
ERIC Educational Resources Information Center
Hawkins, Anne S.
1986-01-01
Using calculators (and computers) to develop an understanding and appreciation of statistical ideas is advocated. The notion that manual computation is a prerequisite for developing concepts is refuted through several examples. (MNS)
Conradi, Una; Joffe, Ari R
2017-07-07
To determine a direct measure of publication bias by determining subsequent full-paper publication (P) of studies reported in animal research abstracts presented at an international conference (A), we selected 100 A at random (using a random-number generator) from the 2008 Society of Critical Care Medicine Conference. Using a data collection form and study manual, we recorded methodology and result variables from A. We searched PubMed and EMBASE to June 2015, and DOAJ and Google Scholar to May 2017, to screen for subsequent P. Methodology and result variables were recorded from P to determine changes in reporting from A. Predictors of P were examined using Fisher's exact test. Sixty-two percent (95% CI 52-71%) of studies described in A were subsequently P, after a median of 19 [IQR 9-33.3] months from conference presentation. Reporting of studies in A was of low quality: randomized 27% (the method of randomization and allocation concealment not described), blinded 0%, sample-size calculation stated 0%, primary outcome specified 26%, numbers given with denominators 6%, and number of animals used stated 47%. Only being an orally presented (vs. poster-presented) A (14/16 vs. 48/84, p = 0.025) predicted P. Reporting of studies in P was also of poor quality: randomized 39% (the method of randomization and allocation concealment not described), likely blinded 6%, primary outcome specified 5%, sample-size calculation stated 0%, numbers given with denominators 34%, and number of animals used stated 56%. Changes in reporting from A to P occurred: from non-randomized to randomized 19%, from non-blinded to blinded 6%, from negative to positive outcomes 8%, from having to not having a stated primary outcome 16%, and from non-statistically to statistically significant findings 37%. Post hoc, using publication data, P was predicted by having positive outcomes (published 62/62, unpublished 33/38; p = 0.003) or statistically significant results (published 58/62, unpublished 20/38; p < 0.001). Only 62% (95% CI 52-71%) of animal research A are subsequently P; this was predicted by oral presentation of the A, finally having positive outcomes, and finally having statistically significant results. Publication bias is prevalent in critical care animal research.
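The oral-versus-poster comparison quoted above is a 2x2 Fisher's exact test that can be reproduced directly from the abstract's own counts:

    from scipy.stats import fisher_exact

    #          published  unpublished
    table = [[14,  2],    # oral abstracts   (14/16 published)
             [48, 36]]    # poster abstracts (48/84 published)
    odds_ratio, p = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")  # should match the reported p = 0.025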
Calculation of precise firing statistics in a neural network model
NASA Astrophysics Data System (ADS)
Cho, Myoung Won
2017-08-01
A precise prediction of neural firing dynamics is requisite to understand the function of, and the learning process in, a biological neural network that works depending on exact spike timings. Basically, the prediction of firing statistics is a delicate many-body problem because the firing probability of a neuron at a given time is determined by a summation over all effects from past firing states. A neural network model with the Feynman path integral formulation was recently introduced. In this paper, we present several methods to calculate firing statistics in the model. We apply the methods to some cases and compare the theoretical predictions with simulation results.
2012-01-01
Background Low bone mineral density (BMD) and subsequent fractures are a major public health problem in postmenopausal women. The purpose of this study was to use the aggregate data meta-analytic approach to examine the effects of ground (for example, walking) and/or joint reaction (for example, strength training) exercise on femoral neck (FN) and lumbar spine (LS) BMD in postmenopausal women. Methods The a priori inclusion criteria were: (1) randomized controlled trials, (2) exercise intervention ≥ 24 weeks, (3) comparative control group, (4) postmenopausal women, (5) participants not regularly active, i.e., less than 150 minutes of moderate intensity (3.0 to 5.9 metabolic equivalents) weight bearing endurance activity per week, less than 75 minutes of vigorous intensity (> 6.0 metabolic equivalents) weight bearing endurance activity per week, resistance training < 2 times per week, (6) published and unpublished studies in any language since January 1, 1989, (7) BMD data available at the FN and/or LS. Studies were located by searching six electronic databases, cross-referencing, hand searching and expert review. Dual selection of studies and data abstraction were performed. Hedges' standardized effect size (g) was calculated for each FN and LS BMD result and pooled using random-effects models. Z-score alpha values, 95% confidence intervals (CI) and number-needed-to-treat (NNT) were calculated for pooled results. Heterogeneity was examined using Q and I2. Mixed-effects ANOVA and simple meta-regression were used to examine changes in FN and LS BMD according to selected categorical and continuous variables. Statistical significance was set at an alpha value ≤ 0.05 and a trend at > 0.05 to ≤ 0.10. Results Small, statistically significant exercise minus control group improvements were found for both FN (28 g's, 1632 participants, g = 0.288, 95% CI = 0.102, 0.474, p = 0.002, Q = 90.5, p < 0.0001, I2 = 70.1%, NNT = 6) and LS (28 g's, 1504 participants, g = 0.179, 95% CI = −0.003, 0.361, p = 0.05, Q = 77.7, p < 0.0001, I2 = 65.3%, NNT = 6) BMD. Clinically, it was estimated that the overall changes in FN and LS would reduce the 20-year relative risk of osteoporotic fracture at any site by approximately 11% and 10%, respectively. None of the mixed-effects ANOVA analyses were statistically significant. Statistically significant, or a trend for statistically significant, associations were observed for changes in FN and LS BMD and 20 different predictors. Conclusions The overall findings suggest that exercise may result in clinically relevant benefits to FN and LS BMD in postmenopausal women. Several of the observed associations appear worthy of further investigation in well-designed randomized controlled trials. PMID:22992273
Kelley, George A; Kelley, Kristi S; Kohrt, Wendy M
2012-09-20
Low bone mineral density (BMD) and subsequent fractures are a major public health problem in postmenopausal women. The purpose of this study was to use the aggregate data meta-analytic approach to examine the effects of ground (for example, walking) and/or joint reaction (for example, strength training) exercise on femoral neck (FN) and lumbar spine (LS) BMD in postmenopausal women. The a priori inclusion criteria were: (1) randomized controlled trials, (2) exercise intervention ≥ 24 weeks, (3) comparative control group, (4) postmenopausal women, (5) participants not regularly active, i.e., less than 150 minutes of moderate intensity (3.0 to 5.9 metabolic equivalents) weight bearing endurance activity per week, less than 75 minutes of vigorous intensity (> 6.0 metabolic equivalents) weight bearing endurance activity per week, resistance training < 2 times per week, (6) published and unpublished studies in any language since January 1, 1989, (7) BMD data available at the FN and/or LS. Studies were located by searching six electronic databases, cross-referencing, hand searching and expert review. Dual selection of studies and data abstraction were performed. Hedges' standardized effect size (g) was calculated for each FN and LS BMD result and pooled using random-effects models. Z-score alpha values, 95% confidence intervals (CI) and number-needed-to-treat (NNT) were calculated for pooled results. Heterogeneity was examined using Q and I2. Mixed-effects ANOVA and simple meta-regression were used to examine changes in FN and LS BMD according to selected categorical and continuous variables. Statistical significance was set at an alpha value ≤ 0.05 and a trend at > 0.05 to ≤ 0.10. Small, statistically significant exercise minus control group improvements were found for both FN (28 g's, 1632 participants, g = 0.288, 95% CI = 0.102, 0.474, p = 0.002, Q = 90.5, p < 0.0001, I2 = 70.1%, NNT = 6) and LS (28 g's, 1504 participants, g = 0.179, 95% CI = -0.003, 0.361, p = 0.05, Q = 77.7, p < 0.0001, I2 = 65.3%, NNT = 6) BMD. Clinically, it was estimated that the overall changes in FN and LS would reduce the 20-year relative risk of osteoporotic fracture at any site by approximately 11% and 10%, respectively. None of the mixed-effects ANOVA analyses were statistically significant. Statistically significant, or a trend for statistically significant, associations were observed for changes in FN and LS BMD and 20 different predictors. The overall findings suggest that exercise may result in clinically relevant benefits to FN and LS BMD in postmenopausal women. Several of the observed associations appear worthy of further investigation in well-designed randomized controlled trials.
Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak
2013-01-01
The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumor images from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the calculation of the different parameters for different types of images was made automatic. The basic thresholding value was updated and adjusted automatically for different MR images, and this thresholding value was used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on some brain tumor images highlighted the efficiency and robustness of this method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
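A hedged sketch of an SPF-driven level set update in the style of the models this paper builds on (the SBGFRLS family); the parameters, the toy image and the fixed iteration count are simplifications, and the paper's local-statistics and automatic-threshold refinements are omitted:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def spf_level_set(img, phi, alpha=20.0, sigma=1.0, iters=50):
        for _ in range(iters):
            inside = phi > 0
            c1, c2 = img[inside].mean(), img[~inside].mean()
            spf = img - (c1 + c2) / 2.0
            spf = spf / (np.abs(spf).max() + 1e-12)     # signed pressure in [-1, 1]
            gy, gx = np.gradient(phi)
            phi = phi + alpha * spf * np.sqrt(gx**2 + gy**2)
            phi = gaussian_filter(np.sign(phi), sigma)  # binary fit + regularization
        return phi

    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0   # toy "tumor" region
    phi0 = -np.ones_like(img); phi0[4:60, 4:60] = 1.0   # initial contour
    print((spf_level_set(img, phi0) > 0).sum())         # pixels inside final contour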
Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; Izubuchi, Taku
2015-01-09
The most compelling possibility for a new law of nature beyond the four fundamental forces comprising the standard model of high-energy physics is the discrepancy between measurements and calculations of the muon anomalous magnetic moment. Until now a key part of the calculation, the hadronic light-by-light contribution, has only been accessible from models of QCD, the quantum description of the strong force, whose accuracy at the required level may be questioned. A first principles calculation with systematically improvable errors is needed, along with the upcoming experiments, to decisively settle the matter. For the first time, the form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in such a framework, lattice QCD+QED and QED. A nonperturbative treatment of QED is used and checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed for which statistically significant signals are obtained. Initial results are promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.
Educational audit on drug dose calculation learning in a Tanzanian school of nursing.
Savage, Angela Ruth
2015-06-01
Patient safety is a key concern for nurses, and the ability to calculate drug doses correctly is an essential skill to prevent and reduce medication errors. The literature suggests that nurses' drug calculation skills should be monitored. The aim of the study was to conduct an educational audit on drug dose calculation learning in a Tanzanian school of nursing; specific objectives were to assess learning from targeted teaching, to identify problem areas in performance, and to identify ways in which these problem areas might be addressed. A total of 268 registered nurses and nursing students in two year groups of a nursing degree programme were the subjects of the audit; they were given a pretest, then four hours of teaching, a post-test after two weeks and a second post-test after eight weeks. There was a statistically significant improvement in correct answers in the first post-test, but none between the first and second post-tests. Particular problems with drug calculations were identified by the nurses/students and by the teacher; the problems they identified were not congruent. Further studies in different settings using different teaching methods, planned continuing education for all qualified nurses, and appropriate pass marks for students in critical skills are recommended.
A Statistical Representation of Pyrotechnic Igniter Output
NASA Astrophysics Data System (ADS)
Guo, Shuyue; Cooper, Marcia
2017-06-01
The output of simplified pyrotechnic igniters for research investigations is statistically characterized by monitoring the post-ignition external flow field with Schlieren imaging. Unique to this work is a detailed quantification of all measurable manufacturing parameters (e.g., bridgewire length, charge cavity dimensions, powder bed density) and the associated shock-motion variability in the tested igniters. To demonstrate the experimental precision of the recorded Schlieren images and the developed image processing methodologies, commercial exploding bridgewires using wires of different parameters were tested. Finally, a statistically significant population of manufactured igniters was tested within the Schlieren arrangement, resulting in a characterization of the nominal output. Comparisons between the variances measured throughout the manufacturing processes and the calculated output variance provide insight into the critical device phenomena that dominate performance. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under contract DE-AC04-94AL85000.
Using conventional F-statistics to study unconventional sex-chromosome differentiation.
Rodrigues, Nicolas; Dufresnes, Christophe
2017-01-01
Species with undifferentiated sex chromosomes emerge as key organisms to understand the astonishing diversity of sex-determination systems. Whereas new genomic methods are widening opportunities to study these systems, the difficulty of separately characterizing their X and Y homologous chromosomes poses limitations. Here we demonstrate that two simple F-statistics calculated from sex-linked genotypes, namely the genetic distance (Fst) between sexes and the inbreeding coefficient (Fis) in the heterogametic sex, can be used as reliable proxies to compare sex-chromosome differentiation between populations. We correlated these metrics using published microsatellite data from two frog species (Hyla arborea and Rana temporaria), and show that they intimately relate to the overall amount of X-Y differentiation in populations. However, the fits for individual loci appear highly variable, suggesting that a dense genetic coverage will be needed for inferring fine-scale patterns of differentiation along sex chromosomes. The application of these F-statistics, which imply little sampling requirement, significantly facilitates population analyses of sex chromosomes.
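A hedged sketch of the two metrics, computed from biallelic genotypes coded 0/1/2 (copies of one allele); these are the simple heterozygosity-based estimators, not necessarily the variance-components forms implemented in standard population-genetics software, and the genotypes below are invented:

    import numpy as np

    def fst_between_sexes(g_m, g_f):
        p_m, p_f = g_m.mean() / 2, g_f.mean() / 2        # allele freq per sex
        p = (p_m + p_f) / 2
        hs = (2 * p_m * (1 - p_m) + 2 * p_f * (1 - p_f)) / 2
        ht = 2 * p * (1 - p)
        return (ht - hs) / ht if ht > 0 else 0.0

    def fis(g):
        p = g.mean() / 2
        h_obs = np.mean(g == 1)                          # observed heterozygosity
        h_exp = 2 * p * (1 - p)
        return 1 - h_obs / h_exp if h_exp > 0 else 0.0

    males   = np.array([1, 1, 1, 0, 1, 1, 2, 1])         # hypothetical sex-linked locus
    females = np.array([0, 0, 1, 0, 0, 0, 0, 1])
    print(round(fst_between_sexes(males, females), 3), round(fis(males), 3))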
Monte-Carlo Method Application for Precising Meteor Velocity from TV Observations
NASA Astrophysics Data System (ADS)
Kozak, P.
2014-12-01
The Monte-Carlo method (the method of statistical trials) as an application for meteor observation processing was developed in the author's Ph.D. thesis in 2005 and first used in his works in 2008. The idea of the method is that if we generate random values of the input data - the equatorial coordinates of the meteor head in a sequence of TV frames - in accordance with their statistical distributions, we can plot the probability density distributions for all of its kinematical parameters and obtain their mean values and dispersions. This also opens the theoretical possibility of refining the most important parameter - the geocentric velocity of the meteor - which has the strongest influence on the precision of the calculated heliocentric orbit elements. In the classical approach the velocity vector is calculated in two stages: first the vector direction is computed as the cross product of the pole vectors of the meteor trajectory great circles determined from the two observational points; then the absolute value of the velocity is calculated independently from each observational point, one of which is selected, for some reason, as the final value. In the given method we propose instead to obtain the statistical distribution of the velocity's absolute value as the intersection of the two distributions corresponding to the velocity values obtained from the different points. We expect such an approach to substantially increase the precision of meteor velocity calculation and remove subjective inaccuracies.
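The combination step can be sketched generically: per-trial velocity estimates from each station are binned into empirical densities whose product gives the joint (intersection) distribution. The measurement model below is a stand-in with invented noise levels, not the author's astrometric pipeline:

    import numpy as np

    rng = np.random.default_rng(0)
    trials = 20_000
    v_true = 35.0                                   # km/s, hypothetical meteor

    v_a = v_true + rng.normal(0.0, 1.2, trials)     # velocity trials, station A
    v_b = v_true + rng.normal(0.4, 0.8, trials)     # velocity trials, station B

    bins = np.linspace(30.0, 40.0, 201)
    centers = (bins[:-1] + bins[1:]) / 2
    dx = centers[1] - centers[0]
    pa, _ = np.histogram(v_a, bins=bins, density=True)
    pb, _ = np.histogram(v_b, bins=bins, density=True)

    joint = pa * pb                                 # intersection of the densities
    joint /= joint.sum() * dx                       # renormalize to a density
    v_hat = (centers * joint).sum() * dx            # combined mean velocity
    print(f"combined estimate: {v_hat:.2f} km/s")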
Understanding the Role of Electron-driven Processes in Atmospheric Behaviour
NASA Astrophysics Data System (ADS)
Brunger, M. J.; Campbell, L.; Jones, D. B.; Cartwright, D. C.
2004-12-01
Electron-impact excitation plays a major role in emission from aurora and a less significant but nonetheless crucial role in the dayglow and nightglow. For some molecules, such as N2, O2 and NO, electron-impact excitation can be followed by radiative cascade through many different sets of energy levels, producing emission with a large number of lines. We review the application of our statistical equilibrium program to predict this rich spectrum of radiation, and we compare results we have obtained against available independent measurements. In addition, we also review the calculation of energy transfer rates from electrons to N2, O2 and NO in the thermosphere. Energy transfer from electrons to neutral gases and ions is one of the dominant electron cooling processes in the ionosphere, and the role of vibrationally excited N2 and O2 in this is particularly significant. The importance of the energy dependence and magnitude of the electron-impact vibrational cross sections in the calculation of these rates is assessed.
Finite-Temperature Behavior of PdH x Elastic Constants Computed by Direct Molecular Dynamics
Zhou, X. W.; Heo, T. W.; Wood, B. C.; ...
2017-05-30
In this paper, robust time-averaged molecular dynamics has been developed to calculate finite-temperature elastic constants of a single crystal. We find that when the averaging time exceeds a certain threshold, the statistical errors in the calculated elastic constants become very small. We applied this method to compare the elastic constants of Pd and PdH0.6 at representative low (10 K) and high (500 K) temperatures. The values predicted for Pd match reasonably well with ultrasonic experimental data at both temperatures. In contrast, the predicted elastic constants for PdH0.6 only match well with ultrasonic data at 10 K; whereas at 500 K, the predicted values are significantly lower. We hypothesize that at 500 K, the facile hydrogen diffusion in PdH0.6 alters the speed of sound, resulting in significantly reduced values of predicted elastic constants as compared to the ultrasonic experimental data. Finally, literature mechanical testing experiments seem to support this hypothesis.
López-Sanromán, F J; de la Riva Andrés, S; Holmbak-Petersen, R; Pérez-Nogués, M; Forés Jackson, P; Santos González, M
2014-10-01
The locomotor pattern alterations produced after administration of a sublingual detomidine gel were measured in horses by an accelerometric method. Using a randomized two-way crossover design, all animals (n = 6) randomly received either detomidine gel or a placebo administered sublingually. A triaxial accelerometric device was used for gait assessment 15 minutes before (baseline) and every 10 minutes after each treatment for a period of 180 minutes. Eight different parameters were calculated, including speed, stride frequency, stride length, regularity, and dorsoventral, propulsion, mediolateral, and total power; the force of acceleration and the three components of power were also calculated. Statistically significant differences were observed between groups in all parameters but stride length. The majority of significant changes started between 30 and 70 minutes after drug administration and lasted for 160 minutes. This route of administration is useful in horses in which prolonged sedation is required and stability is a major concern. Copyright © 2014 Elsevier Ltd. All rights reserved.
Health Disparities Calculator (HD*Calc) - SEER Software
Statistical software that generates summary measures to evaluate and monitor health disparities. Users can import SEER data or other population-based health data to calculate 11 disparity measurements.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-20
... rent and the core-based statistical area (CBSA) rent as applied to the 40th percentile FMR for that..., calculated on the basis of the core-based statistical area (CBSA) or the metropolitan Statistical Area (MSA... will be ranked according to each of the statistics specified above, and then a weighted average ranking...
Lin, Yuning; Li, Hui; Chen, Ziqian; Ni, Ping; Zhong, Qun; Huang, Huijuan; Sandrasegaran, Kumar
2015-05-01
The purpose of this study was to investigate the application of histogram analysis of apparent diffusion coefficient (ADC) in characterizing pathologic features of cervical cancer and benign cervical lesions. This prospective study was approved by the institutional review board, and written informed consent was obtained. Seventy-three patients with cervical cancer (33-69 years old; 35 patients with International Federation of Gynecology and Obstetrics stage IB cervical cancer) and 38 patients (38-61 years old) with normal cervix or cervical benign lesions (control group) were enrolled. All patients underwent 3-T diffusion-weighted imaging (DWI) with b values of 0 and 800 s/mm(2). ADC values of the entire tumor in the patient group and the whole cervix volume in the control group were assessed. Mean ADC, median ADC, 25th and 75th percentiles of ADC, skewness, and kurtosis were calculated. Histogram parameters were compared between different pathologic features, as well as between stage IB cervical cancer and control groups. Mean ADC, median ADC, and 25th percentile of ADC were significantly higher for adenocarcinoma (p = 0.021, 0.006, and 0.004, respectively), and skewness was significantly higher for squamous cell carcinoma (p = 0.011). Median ADC was statistically significantly higher for well or moderately differentiated tumors (p = 0.044), and skewness was statistically significantly higher for poorly differentiated tumors (p = 0.004). No statistically significant difference of ADC histogram was observed between lymphovascular space invasion subgroups. All histogram parameters differed significantly between stage IB cervical cancer and control groups (p < 0.05). Distribution of ADCs characterized by histogram analysis may help to distinguish early-stage cervical cancer from normal cervix or cervical benign lesions and may be useful for evaluating the different pathologic features of cervical cancer.
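The whole-lesion histogram metrics listed above are straightforward to compute; the sketch below uses a synthetic array standing in for voxelwise ADC values (units of 10^-3 mm^2/s), since the study's images are obviously not reproduced here:

    import numpy as np
    from scipy.stats import skew, kurtosis

    rng = np.random.default_rng(7)
    adc = rng.gamma(shape=9.0, scale=0.12, size=5000)   # stand-in tumor voxels

    metrics = {
        "mean": adc.mean(),
        "median": np.median(adc),
        "p25": np.percentile(adc, 25),
        "p75": np.percentile(adc, 75),
        "skewness": skew(adc),
        "kurtosis": kurtosis(adc),   # excess kurtosis by default
    }
    print({k: round(float(v), 3) for k, v in metrics.items()})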
Bianchi, C; Botta, F; Conte, L; Vanoli, P; Cerizza, L
2008-10-01
This study was undertaken to compare the biological efficacy of different high-dose-rate (HDR) and low-dose-rate (LDR) treatments of gynaecological lesions, to identify the causes of possible nonuniformity and to optimise treatment through customised calculation. The study considered 110 patients treated between 2001 and 2006 with external beam radiation therapy and/or brachytherapy with either LDR (afterloader Selectron, (137)Cs) or HDR (afterloader microSelectron Classic, (192)Ir). The treatments were compared in terms of biologically effective dose (BED) to the tumour and to the rectum (linear-quadratic model) by using statistical tests for comparisons between independent samples. The difference between the two treatments was statistically significant in one case only. However, within each technique, we identified considerable nonuniformity in therapeutic efficacy due to differences in fractionation schemes and overall treatment time. To solve this problem, we created a Microsoft Excel spreadsheet allowing calculation of the optimal treatment for each patient: best efficacy (BED(tumour)) without exceeding toxicity threshold (BED(rectum)). The efficacy of a treatment may vary as a result of several factors. Customised radiobiological evaluation is a useful adjunct to clinical evaluation in planning equivalent treatments that satisfy all dosimetric constraints.
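A hedged sketch of the linear-quadratic comparison described above. BED = n·d·(1 + d/(α/β)), optionally minus a repopulation term for overall treatment time; the α/β ratios, fractionation and 70% rectal-dose assumption below are illustrative textbook choices, not the authors' data:

    import math

    def bed(n, d, alpha_beta, alpha=0.3, t_total=None, t_kick=28, t_pot=3.0):
        base = n * d * (1 + d / alpha_beta)
        if t_total is not None and t_total > t_kick:
            # Repopulation correction for overall treatment time T > Tk.
            base -= math.log(2) / (alpha * t_pot) * (t_total - t_kick)
        return base

    # Tumor (alpha/beta = 10 Gy) vs rectum (alpha/beta = 3 Gy), HDR 4 x 7 Gy.
    print(f"tumor BED  = {bed(4, 7.0, 10.0):.1f} Gy")
    print(f"rectum BED = {bed(4, 7.0 * 0.7, 3.0):.1f} Gy")   # assumed 70% rectal dose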
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abrecht, David G.; Schwantes, Jon M.; Kukkadapu, Ravi K.
2015-02-01
Spectrum-processing software that incorporates a Gaussian smoothing kernel within the statistics of first-order Kalman filtration has been developed to provide cross-channel spectral noise reduction for increased real-time signal-to-noise ratios in Mossbauer spectroscopy. The filter was optimized for the breadth of the Gaussian using the Mossbauer spectrum of natural iron foil, and comparisons between the peak broadening, signal-to-noise ratios, and shifts in the calculated hyperfine parameters are presented. The results of optimization give a maximum improvement in the signal-to-noise ratio of 51.1% over the unfiltered spectrum at a Gaussian breadth of 27 channels, or 2.5% of the total spectrum width. The full-width half-maximum of the spectrum peaks showed an increase of 19.6% at this optimum point, indicating a relatively weak increase in the peak broadening relative to the signal enhancement, leading to an overall increase in the observable signal. Calculations of the hyperfine parameters showed no statistically significant deviations were introduced from the application of the filter, confirming the utility of this filter for spectroscopy applications.
Jauhari, Nidhi; Chopra, Deepak; Chaurasia, Rajan Kumar; Agarwal, Ashutosh
2014-01-01
To determine the surgically induced astigmatism (SIA) of straight, frown and inverted-V-shaped (chevron) incisions in manual small incision cataract surgery (SICS), a prospective cross-sectional study was done on a total of 75 patients aged 40 years and above with senile cataract. The patients were randomly divided into three groups (25 each), with each group receiving a particular type of incision (straight, frown or inverted-V-shaped), and manual SICS with intraocular lens (IOL) implantation was performed. The groups were compared 4 weeks post-operatively for uncorrected visual acuity (UCVA), best corrected visual acuity (BCVA) and SIA. All calculations were performed using the SIA calculator version 2.1, a free software program, and the study was analyzed using SPSS version 15.0 statistical analysis software. The study found that 89.5% of patients in the straight incision group, 94.2% in the frown incision group and 95.7% in the inverted-V group attained post-operative BCVA in the range of 6/6 to 6/18. Mean SIA was lowest (-0.88±0.61 D × 90 degrees) with the inverted-V incision, which was statistically significant. The inverted-V (chevron) incision gives minimal SIA.
NASA Astrophysics Data System (ADS)
Blum, T.; Boyle, P. A.; Izubuchi, T.; Jin, L.; Jüttner, A.; Lehner, C.; Maltman, K.; Marinkovic, M.; Portelli, A.; Spraggs, M.; Rbc; Ukqcd Collaborations
2016-06-01
We report the first lattice QCD calculation of the hadronic vacuum polarization (HVP) disconnected contribution to the muon anomalous magnetic moment at physical pion mass. The calculation uses a refined noise-reduction technique that enables the control of statistical uncertainties at the desired level with modest computational effort. Measurements were performed on the 48^{3}×96 physical-pion-mass lattice generated by the RBC and UKQCD Collaborations. We find the leading-order hadronic vacuum polarization a_{μ}^{HVP(LO)disc} = -9.6(3.3)(2.3)×10^{-10}, where the first error is statistical and the second systematic.
1991-03-01
the A parameters; yhatf, to calculate the y-hat statistics; ssrf, to calculate the uncorrected SSR; sstof, to calculate the uncorrected SSTO; matmulmm...

    /* DEGREES OF FREEDOM */
    int sstocdf, ssrcdf, ssecdf;
    /* SUM OF SQUARES */
    float ssr, ssto, sse;
    float ssrc, sstoc, ssec;
    float insr, insto, inse;
    ...
    /* CALCULATE Y-HAT STATISTICS */
    yhatf(x, beta, stats, n, n);
    /* CALCULATE UNCORRECTED SSR */
    ssrf(beta, x, y, mn, n, ss);
    ssr = ss[1][1];
    /* CALCULATE UNCORRECTED SSTO */
Lukas, J M; Hawkins, D M; Kinsel, M L; Reneau, J K
2005-11-01
The objective of this study was to examine the relationship between monthly Dairy Herd Improvement (DHI) subclinical mastitis and new infection rate estimates and daily bulk tank somatic cell count (SCC) summarized by statistical process control tools. Dairy Herd Improvement Association test-day subclinical mastitis and new infection rate estimates along with daily or every-other-day bulk tank SCC data were collected for 12 mo of 2003 from 275 Upper Midwest dairy herds. Herds were divided into 5 herd production categories. A linear score [LNS = ln(BTSCC/100,000)/0.693147 + 3] was calculated for each individual bulk tank SCC. For both the raw SCC and the transformed data, the mean and sigma were calculated using the statistical quality control individual measurement and moving range chart procedure of the Statistical Analysis System. One hundred eighty-three of the 275 herds in the study data set were then randomly selected, and the raw (method 1) and transformed (method 2) bulk tank SCC mean and sigma were used to develop models for predicting subclinical mastitis and new infection rate estimates. Herd production category was also included in all models as 5 dummy variables. Models were validated by calculating estimates of subclinical mastitis and new infection rates for the remaining 92 herds and plotting them against observed values of each of the dependent variables. Only herd production category and bulk tank SCC mean were significant and remained in the final models. High R² values (0.83 and 0.81 for methods 1 and 2, respectively) indicated a strong correlation between the bulk tank SCC and the herd's subclinical mastitis prevalence. The standard errors of the estimate were 4.02 and 4.28% for methods 1 and 2, respectively, and decreased with increasing herd production. As a case study, Shewhart Individual Measurement Charts were plotted from the bulk tank SCC to identify shifts in mastitis incidence. Four of 5 charts examined signaled a change in bulk tank SCC before the DHI test day identified the change in subclinical mastitis prevalence. It can be concluded that statistical process control tools applied to daily bulk tank SCC can be used to estimate subclinical mastitis prevalence in the herd and to monitor changes in subclinical mastitis status. Single DHI test-day estimates of new infection rate were insufficient to accurately describe its dynamics.
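The linear score above is a base-2 logarithm in disguise (0.693147 = ln 2). A minimal sketch of that transformation together with Shewhart individual/moving-range (I-MR) chart limits, using invented SCC values rather than the study's data:

    import numpy as np

    btscc = np.array([180e3, 210e3, 190e3, 240e3, 220e3, 480e3, 200e3])  # cells/mL (invented)
    lns = np.log(btscc / 100e3) / 0.693147 + 3      # linear score from the abstract

    mr_bar = np.abs(np.diff(lns)).mean()            # average moving range
    center = lns.mean()
    sigma = mr_bar / 1.128                          # d2 constant for subgroups of size 2
    ucl, lcl = center + 3 * sigma, center - 3 * sigma
    print(f"center {center:.2f}, UCL {ucl:.2f}, LCL {lcl:.2f}")
    print("signals at days:", np.where((lns > ucl) | (lns < lcl))[0])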
Validation of Cross Sections for Monte Carlo Simulation of the Photoelectric Effect
NASA Astrophysics Data System (ADS)
Han, Min Cheol; Kim, Han Sung; Pia, Maria Grazia; Basaglia, Tullio; Batič, Matej; Hoff, Gabriela; Kim, Chan Hyeong; Saracco, Paolo
2016-04-01
Several total and partial photoionization cross section calculations, based on both theoretical and empirical approaches, are quantitatively evaluated with statistical analyses using a large collection of experimental data retrieved from the literature to identify the state of the art for modeling the photoelectric effect in Monte Carlo particle transport. Some of the examined cross section models are available in general purpose Monte Carlo systems, while others have been implemented and subjected to validation tests for the first time to estimate whether they could improve the accuracy of particle transport codes. The validation process identifies Scofield's 1973 non-relativistic calculations, tabulated in the Evaluated Photon Data Library (EPDL), as the one best reproducing experimental measurements of total cross sections. Specialized total cross section models, some of which derive from more recent calculations, do not provide significant improvements. Scofield's non-relativistic calculations are not surpassed regarding the compatibility with experiment of K and L shell photoionization cross sections either, although in a few test cases Ebel's parameterization produces more accurate results close to absorption edges. Modifications to Biggs and Lighthill's parameterization implemented in Geant4 significantly reduce the accuracy of total cross sections at low energies with respect to its original formulation. The scarcity of suitable experimental data hinders a similar extensive analysis for the simulation of the photoelectron angular distribution, which is limited to a qualitative appraisal.
Ruangsetakit, Varee
2015-11-01
To re-examine the relative accuracy of intraocular lens (IOL) power calculation with immersion ultrasound biometry (IUB) and partial coherence interferometry (PCI), based on a new approach that limits attention to the cases in which the IUB and PCI IOL assignments disagree. Prospective observational study of 108 eyes that underwent cataract surgery at Taksin Hospital. Two halves of the randomly chosen sample eyes were implanted with the IUB- and PCI-assigned lenses. Postoperative refractive errors were measured in the fifth week. More accurate calculation was based on significantly smaller mean absolute errors (MAEs) and root mean squared errors (RMSEs) away from emmetropia. The distributions of the errors were examined to ensure that the higher accuracy was significant clinically as well. The (MAEs, RMSEs) were smaller for PCI, at (0.5106 diopter (D), 0.6037 D), than for IUB, at (0.7000 D, 0.8062 D). The higher accuracy came principally from negative errors, i.e., myopia: the MAEs and RMSEs for (IUB, PCI) negative errors were (0.7955 D, 0.5185 D) and (0.8562 D, 0.5853 D), and their differences were significant. Of the PCI errors, 72.34% fell within a clinically accepted range of ±0.50 D, whereas 50% of the IUB errors did. PCI's higher accuracy was significant statistically and clinically, meaning that lens implantation based on PCI's assignments could improve postoperative outcomes over those based on IUB's assignments.
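The accuracy metrics used above are straightforward to reproduce; a minimal sketch with invented refractive errors (in diopters), not the study's measurements:

    import numpy as np

    errors = np.array([-0.25, 0.50, -0.75, 0.00, 0.25])  # postoperative refractive errors, D (invented)
    mae = np.mean(np.abs(errors))                        # mean absolute error
    rmse = np.sqrt(np.mean(errors ** 2))                 # root mean squared error
    within = np.mean(np.abs(errors) <= 0.50)             # share within the +/-0.50 D range
    print(f"MAE {mae:.4f} D, RMSE {rmse:.4f} D, within ±0.50 D: {within:.0%}")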
The Utility of Robust Means in Statistics
ERIC Educational Resources Information Center
Goodwyn, Fara
2012-01-01
Location estimates calculated from heuristic data were examined using traditional and robust statistical methods. The current paper demonstrates the impact outliers have on the sample mean and proposes robust methods to control for outliers in sample data. Traditional methods fail because they rely on the statistical assumptions of normality and…
An Examination of Statistical Power in Multigroup Dynamic Structural Equation Models
ERIC Educational Resources Information Center
Prindle, John J.; McArdle, John J.
2012-01-01
This study used statistical simulation to calculate differential statistical power in dynamic structural equation models with groups (as in McArdle & Prindle, 2008). Patterns of between-group differences were simulated to provide insight into how model parameters influence power approximations. Chi-square and root mean square error of…
An indirect approach to the extensive calculation of relationship coefficients
Colleau, Jean-Jacques
2002-01-01
A method was described for calculating population statistics on relationship coefficients without using corresponding individual data. It relied on the structure of the inverse of the numerator relationship matrix between individuals under investigation and ancestors. Computation times were observed on simulated populations and were compared to those incurred with a conventional direct approach. The indirect approach turned out to be very efficient for multiplying the relationship matrix corresponding to planned matings (full design) by any vector. Efficiency was generally still good or very good for calculating statistics on these simulated populations. An extreme implementation of the method is the calculation of inbreeding coefficients themselves. Relative performances of the indirect method were good except when many full-sibs during many generations existed in the population. PMID:12270102
Cross sections for the γp→K*+Λ and γp→K*+Σ0 reactions measured at CLAS
NASA Astrophysics Data System (ADS)
Tang, W.; Hicks, K.; Keller, D.; Kim, S. H.; Kim, H. C.; Adhikari, K. P.; Aghasyan, M.; Amaryan, M. J.; Anderson, M. D.; Anefalos Pereira, S.; Baltzell, N. A.; Battaglieri, M.; Bedlinskiy, I.; Biselli, A. S.; Bono, J.; Boiarinov, S.; Briscoe, W. J.; Burkert, V. D.; Carman, D. S.; Celentano, A.; Chandavar, S.; Charles, G.; Cole, P. L.; Collins, P.; Contalbrigo, M.; Cortes, O.; Crede, V.; D'Angelo, A.; Dashyan, N.; De Vita, R.; De Sanctis, E.; Deur, A.; Djalali, C.; Doughty, D.; Dupre, R.; Alaoui, A. El; Fassi, L. El; Eugenio, P.; Fedotov, G.; Fegan, S.; Fleming, J. A.; Gabrielyan, M. Y.; Gevorgyan, N.; Gilfoyle, G. P.; Giovanetti, K. L.; Girod, F. X.; Gohn, W.; Golovatch, E.; Gothe, R. W.; Griffioen, K. A.; Guidal, M.; Guo, L.; Hafidi, K.; Hakobyan, H.; Hanretty, C.; Harrison, N.; Heddle, D.; Ho, D.; Holtrop, M.; Hyde, C. E.; Ilieva, Y.; Ireland, D. G.; Ishkhanov, B. S.; Isupov, E. L.; Jo, H. S.; Joo, K.; Khandaker, M.; Khetarpal, P.; Kim, A.; Kim, W.; Klein, F. J.; Koirala, S.; Kubarovsky, A.; Kubarovsky, V.; Kuleshov, S. V.; Livingston, K.; Lu, H. Y.; MacGregor, I. J. D.; Mao, Y.; Markov, N.; Martinez, D.; Mayer, M.; McKinnon, B.; Meyer, C. A.; Mokeev, V.; Moutarde, H.; Munevar, E.; Munoz Camacho, C.; Nadel-Turonski, P.; Nepali, C. S.; Niccolai, S.; Niculescu, G.; Niculescu, I.; Osipenko, M.; Ostrovidov, A. I.; Pappalardo, L. L.; Paremuzyan, R.; Park, K.; Park, S.; Pasyuk, E.; Phelps, E.; Phillips, J. J.; Pisano, S.; Pogorelko, O.; Pozdniakov, S.; Price, J. W.; Procureur, S.; Prok, Y.; Protopopescu, D.; Puckett, A. J. R.; Raue, B. A.; Ripani, M.; Rimal, D.; Ritchie, B. G.; Rosner, G.; Rossi, P.; Sabatié, F.; Saini, M. S.; Salgado, C.; Schott, D.; Schumacher, R. A.; Seraydaryan, H.; Sharabian, Y. G.; Smith, G. D.; Sober, D. I.; Sokhan, D.; Stepanyan, S. S.; Stepanyan, S.; Stoler, P.; Strakovsky, I. I.; Strauch, S.; Taylor, C. E.; Tian, Ye; Tkachenko, S.; Torayev, B.; Ungaro, M.; Vernarsky, B.; Vlassov, A. V.; Voskanyan, H.; Voutier, E.; Walford, N. K.; Watts, D. P.; Weinstein, L. B.; Weygand, D. P.; Wood, M. H.; Zachariou, N.; Zana, L.; Zhang, J.; Zhao, Z. W.; Zonta, I.
2013-06-01
The first high-statistics cross sections for the reactions γp→K*+Λ and γp→K*+Σ0 were measured using the CLAS detector at photon energies between threshold and 3.9 GeV at the Thomas Jefferson National Accelerator Facility. Differential cross sections are presented over the full range of the center-of-mass angles, and then fitted to Legendre polynomials to extract the total cross section. Results for the K*+Λ final state are compared with two different calculations in an isobar and a Regge model, respectively. Theoretical calculations significantly underestimate the K*+Λ total cross sections between 2.1 and 2.6 GeV, but are in better agreement with present data at higher photon energies.
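As a sketch of the fit-and-integrate step described above (not the CLAS analysis itself): fit dσ/dΩ as a Legendre series in cosθ and use the fact that only the l = 0 term survives the solid-angle integral, since ∫P_l d(cosθ) = 2δ_l0. The data points, units and polynomial degree are invented.

    import numpy as np
    from numpy.polynomial import legendre

    cos_theta = np.linspace(-0.9, 0.9, 10)
    dsigma = 0.10 + 0.05 * cos_theta + 0.02 * cos_theta**2  # dσ/dΩ in μb/sr (assumed shape)

    coeffs = legendre.legfit(cos_theta, dsigma, deg=2)      # Legendre coefficients a_l
    sigma_tot = 4 * np.pi * coeffs[0]                       # σ_tot = 2π * 2 * a_0
    print(f"total cross section ≈ {sigma_tot:.3f} μb")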
A Priori Estimation of Organic Reaction Yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Fateme S.; Vahid, Amir; Wylie, Elizabeth K.
2015-07-21
A thermodynamically guided calculation of free energies of substrate and product molecules allows for the estimation of the yields of organic reactions. The non-ideality of the system and the solvent effects are taken into account through the activity coefficients calculated at the molecular level by perturbed-chain statistical associating fluid theory (PC-SAFT). The model is iteratively trained using a diverse set of reactions with yields that have been reported previously. This trained model can then estimate a priori the yields of reactions not included in the training set with an accuracy of ca. ±15%. This ability has the potential to translate into significant economic savings through the selection and then execution of only those reactions that can proceed in good yields.
Atomic rate coefficients in a degenerate plasma
NASA Astrophysics Data System (ADS)
Aslanyan, Valentin; Tallents, Greg
2015-11-01
The electrons in a dense, degenerate plasma follow Fermi-Dirac statistics, which deviate significantly in this regime from the usual Maxwell-Boltzmann approach used by many models. We present methods to calculate the atomic rate coefficients for the Fermi-Dirac distribution and present a comparison of the ionization fraction of carbon calculated using both models. We have found that for densities close to solid, although the discrepancy is small for LTE conditions, there is a large divergence from the ionization fraction by using classical rate coefficients in the presence of strong photoionizing radiation. We have found that using these modified rates and the degenerate heat capacity may affect the time evolution of a plasma subject to extreme ultraviolet and x-ray radiation such as produced in free electron laser irradiation of solid targets.
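To make the contrast concrete, a minimal sketch comparing Fermi-Dirac occupation numbers with the Maxwell-Boltzmann limit used by classical rate models; the chemical potential and temperature below are arbitrary illustrative values, not results from the paper.

    import numpy as np

    def fermi_dirac(E, mu, kT):
        """Occupation number for fermions."""
        return 1.0 / (np.exp((E - mu) / kT) + 1.0)

    def maxwell_boltzmann(E, mu, kT):
        """Nondegenerate (classical) limit of the same occupation."""
        return np.exp(-(E - mu) / kT)

    mu, kT = 10.0, 5.0  # eV; assumed values for a dense, partially degenerate plasma
    for E in (0.0, 5.0, 10.0, 20.0, 40.0):
        print(f"E={E:5.1f} eV  FD={fermi_dirac(E, mu, kT):.3f}  MB={maxwell_boltzmann(E, mu, kT):.3f}")

Below the chemical potential the classical expression exceeds unity, an unphysical occupation; that is precisely the regime in which classical rate coefficients diverge from the degenerate result.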
The effect of Gonioscopy on keratometry and corneal surface topography.
George, Mathew K; Kuriakose, Thomas; DeBroff, Brian M; Emerson, John W
2006-06-17
Biometric procedures such as keratometry performed shortly after contact procedures like gonioscopy and applanation tonometry could affect the validity of the measurement. This study was conducted to understand the short-term effect of gonioscopy on corneal curvature measurements and surface topography based Simulated Keratometry and whether this would alter the power of an intraocular lens implant calculated using post-gonioscopy measurements. We further compared the effect of the 2-mirror (Goldmann) and the 4-mirror (Sussman) Gonioscopes. A prospective clinic-based self-controlled comparative study. 198 eyes of 99 patients, above 50 years of age, were studied. Exclusion criteria included documented dry eye, history of ocular surgery or trauma, diabetes mellitus and connective tissue disorders. Auto-Keratometry and corneal topography measurements were obtained at baseline and at three follow-up times - within the first 5 minutes, between the 10th-15th minute and between the 20th-25th minute after intervention. One eye was randomized for intervention with the 2-mirror gonioscope and the other underwent the 4-mirror after baseline measurements. t-tests were used to examine differences between interventions and between the measurement methods. The sample size was calculated using an estimate of clinically significant lens implant power changes based on the SRK-II formula. Clinically and statistically significant steepening was observed in the first 5 minutes and in the 10-15 minute interval using topography-based Sim K. These changes were not present with the Auto-Keratometer measurements. Although changes from baseline were noted between 20 and 25 minutes topographically, these were not clinically or statistically significant. There was no significant difference between the two types of gonioscopes. There was greater variability in the changes from baseline using the topography-based Sim K readings. Reversible steepening of the central corneal surface is produced by the act of gonioscopy as measured by Sim K, whereas no significant differences were present with Auto-K measurements. The type of Gonioscope used does not appear to influence these results. If topographically derived Sim K is used to calculate the power of the intraocular lens implant, we recommend waiting a minimum of 20 minutes before measuring the corneal curvature after gonioscopy with either Goldmann or Sussman contact lenses.
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
NASA Astrophysics Data System (ADS)
Jasper, Ahren W.; Dawes, Richard
2013-10-01
The lowest-energy singlet (1^1A′) and two lowest-energy triplet (1^3A′ and 1^3A″) electronic states of CO2 are characterized using dynamically weighted multireference configuration interaction (dw-MRCI+Q) electronic structure theory calculations extrapolated to the complete basis set (CBS) limit. Global analytic representations of the dw-MRCI+Q/CBS singlet and triplet surfaces and of their CASSCF/aug-cc-pVQZ spin-orbit coupling surfaces are obtained via the interpolated moving least squares (IMLS) semiautomated surface fitting method. The spin-forbidden kinetics of the title reaction is calculated using the coupled IMLS surfaces and coherent switches with decay of mixing non-Born-Oppenheimer molecular dynamics. The calculated spin-forbidden association rate coefficient (corresponding to the high pressure limit of the rate coefficient) is 7-35 times larger at 1000-5000 K than the rate coefficient used in many detailed chemical models of combustion. A dynamical analysis of the multistate trajectories is presented. The trajectory calculations reveal direct (nonstatistical) and indirect (statistical) spin-forbidden reaction mechanisms and may be used to test the suitability of transition-state-theory-like statistical methods for spin-forbidden kinetics. Specifically, we consider the appropriateness of the "double passage" approximation, of assuming statistical distributions of seam crossings, and of applications of the unified statistical model for spin-forbidden reactions.
Diabetes mellitus affects biomechanical properties of the optic nerve head in the rat.
Terai, Naim; Spoerl, Eberhard; Haustein, Michael; Hornykewycz, Karin; Haentzschel, Janek; Pillunat, Lutz E
2012-01-01
To investigate the effect of diabetes on the biomechanical behavior of the optic nerve head (ONH) and the peripapillary sclera (ppSc) in streptozocin-induced diabetic rats. Diabetes mellitus was induced in 20 Wistar rats using streptozocin. Twenty-five nondiabetic rats served as controls. Eyes were enucleated after 12 weeks and 2 strips of one eye were prepared containing ONH or ppSc. The stress-strain relation was measured in the stress range of 0.05-10 MPa using a biomaterial tester. At 5% strain the stress of the ONH in diabetic rats was 897±295 kPa and in the control group it was 671±246 kPa, a significant difference between the groups (p=0.011). The stress of the diabetic ppSc (574±185 kPa) increased compared to that of the nondiabetic ppSc (477±171 kPa), but this did not reach statistical significance (p=0.174). The calculated tangent modulus at 5% strain was 11.79 MPa in the diabetic ONH and 8.77 MPa in the nondiabetic ONH, a significant difference between the groups (p=0.006). The calculated tangent modulus at 5% strain was 7.17 MPa in the diabetic ppSc and 6.12 MPa in the nondiabetic ppSc, without a statistically significant difference (p=0.09). In contrast to the ppSc, the ONH of diabetic rats showed a significant increase in stiffness compared to nondiabetic rats, which might be explained by nonenzymatic collagen cross-linking mediated by advanced glycation end products due to high blood glucose levels in diabetes. Further studies are needed to investigate whether these biomechanical changes represent a detrimental risk factor for intraocular pressure regulation in diabetic glaucoma patients. Copyright © 2011 S. Karger AG, Basel.
Conroy, M.J.; Samuel, M.D.; White, Joanne C.
1995-01-01
Statistical power (and conversely, Type II error) is often ignored by biologists. Power is important to consider in the design of studies, to ensure that sufficient resources are allocated to address a hypothesis under examination. Determining appropriate sample size when designing experiments or calculating power for a statistical test requires an investigator to consider the importance of making incorrect conclusions about the experimental hypothesis and the biological importance of the alternative hypothesis (or the biological effect size researchers are attempting to measure). Poorly designed studies frequently provide results that are at best equivocal, and do little to advance science or assist in decision making. Completed studies that fail to reject H0 should consider power and the related probability of a Type II error in the interpretation of results, particularly when implicit or explicit acceptance of H0 is used to support a biological hypothesis or management decision. Investigators must consider the biological question they wish to answer (Tacha et al. 1982) and assess power on the basis of biologically significant differences (Taylor and Gerrodette 1993). Power calculations are somewhat subjective, because the author must specify either f or the minimum difference that is biologically important. Biologists may have different ideas about what values are appropriate. While determining biological significance is of central importance in power analysis, it is also an issue of importance in wildlife science. Procedures, references, and computer software to compute power are accessible; therefore, authors should consider power. We welcome comments or suggestions on this subject.
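For a concrete version of the sample-size reasoning above, a minimal sketch using statsmodels; the effect size, alpha and target power are assumptions chosen for illustration, not values from the commentary.

    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    # Power of a two-sample t-test with 30 subjects per group at Cohen's d = 0.5 (assumed).
    power = analysis.power(effect_size=0.5, nobs1=30, alpha=0.05, ratio=1.0)
    # Sample size per group needed for 80% power at the same effect size.
    n_needed = analysis.solve_power(effect_size=0.5, power=0.80, alpha=0.05, ratio=1.0)
    print(f"power with n=30/group: {power:.2f}; n/group for 80% power: {n_needed:.0f}")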
SU-F-R-20: Image Texture Features Correlate with Time to Local Failure in Lung SBRT Patients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, M; Abazeed, M; Woody, N
Purpose: To explore possible correlation between CT image-based texture and histogram features and time-to-local-failure in early stage non-small cell lung cancer (NSCLC) patients treated with stereotactic body radiotherapy (SBRT). Methods and Materials: From an IRB-approved lung SBRT registry for patients treated between 2009–2013 we selected 48 (20 male, 28 female) patients with local failure. Median patient age was 72.3±10.3 years. Mean time to local failure was 15 ± 7.1 months. Physician-contoured gross tumor volumes (GTV) on the planning CT images were processed and 3D gray-level co-occurrence matrix (GLCM) based texture and histogram features were calculated in Matlab. Data were exported to R and a multiple linear regression model was used to examine the relationship between texture features and time-to-local-failure. Results: Multiple linear regression revealed that entropy (p=0.0233, multiple R²=0.60) from GLCM-based texture analysis and the standard deviation (p=0.0194, multiple R²=0.60) from the histogram-based features were statistically significantly correlated with the time-to-local-failure. Conclusion: Image-based texture analysis can be used to predict certain aspects of treatment outcomes of NSCLC patients treated with SBRT. We found that entropy and standard deviation calculated for the GTV on the CT images displayed a statistically significant correlation with time-to-local-failure in lung SBRT patients.
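A sketch of the two features the abstract singles out, GLCM entropy and the histogram standard deviation, on a toy quantized ROI; scikit-image's graycomatrix (the post-0.19 spelling) is assumed available, and the ROI values are random placeholders rather than CT data.

    import numpy as np
    from skimage.feature import graycomatrix

    rng = np.random.default_rng(1)
    roi = rng.integers(0, 32, size=(20, 20)).astype(np.uint8)  # quantized "GTV" ROI (invented)

    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=32,
                        symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # GLCM (texture) entropy
    std_dev = roi.std()                              # histogram feature from the abstract
    print(f"entropy {entropy:.2f} bits, standard deviation {std_dev:.2f}")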
Estimation of true height: a study in population-specific methods among young South African adults.
Lahner, Christen Renée; Kassier, Susanna Maria; Veldman, Frederick Johannes
2017-02-01
To investigate the accuracy of arm-associated height estimation methods in the calculation of true height compared with stretch stature in a sample of young South African adults. A cross-sectional descriptive design was employed. Pietermaritzburg, Westville and Durban, KwaZulu-Natal, South Africa, 2015. Convenience sample (N = 900) aged 18-24 years, which included an equal number of participants from both genders (150 per gender) stratified across race (Caucasian, Black African and Indian). Continuous variables that were investigated included: (i) stretch stature; (ii) total armspan; (iii) half-armspan; (iv) half-armspan ×2; (v) demi-span; (vi) demi-span gender-specific equation; (vii) WHO equation; and (viii) WHO-adjusted equations; as well as categorization according to gender and race. Statistical analysis was conducted using IBM SPSS Statistics Version 21.0. Significant correlations were identified between gender and height estimation measurements, with males being anatomically larger than females (P<0·001). Significant differences were documented when study participants were stratified according to race and gender (P<0·001). Anatomical similarities were noted between Indians and Black Africans, whereas Caucasians were anatomically different from the other race groups. Arm-associated height estimation methods were able to estimate true height; however, each method was specific to each gender and race group. Height can be calculated by using arm-associated measurements. Although universal equations for estimating true height exist, the use of equations that are race-, gender- and population-specific should be considered to enhance accuracy.
Herek, Duygu; Karabulut, Nevzat; Kocyıgıt, Ali; Yagcı, Ahmet Baki
2016-01-01
Our aim was to compare the apparent diffusion coefficient (ADC) values of normal abdominal parenchymal organs and signal-to-noise ratio (SNR) measurements in the same patients with breath hold (BH) and free breathing (FB) diffusion weighted imaging (DWI). Forty-eight patients underwent both BH and FB DWI. A spherical region of interest (ROI) was placed on the right hepatic lobe, spleen, pancreas, and renal cortices. ADC values were calculated for each organ on each sequence using automated software. Image noise, defined as the standard deviation (SD) of the signal intensities in the most artifact-free area of the image background, was measured by placing the largest possible ROI on either the left or the right side of the body outside the object in the recorded field of view. SNR was calculated using the formula SNR = signal intensity (SI) of the organ / standard deviation (SD) of the noise. There were no statistically significant differences in ADC values of the abdominal organs between BH and FB DWI sequences (p > 0.05). There were statistically significant differences between SNR values of organs on BH and FB DWI: SNRs were better on FB DWI than on BH DWI (p < 0.001). The free breathing DWI technique reduces image noise and increases SNR for abdominal examinations. Free breathing is therefore preferable to BH DWI in the evaluation of abdominal organs by DWI.
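The SNR formula above in executable form, with invented ROI statistics in place of the study's measurements:

    import numpy as np

    organ_roi = np.array([412.0, 398.0, 405.0, 420.0, 401.0])  # SI samples in an organ ROI (invented)
    noise_roi = np.array([3.1, -2.4, 1.8, -0.9, 2.2])          # background ROI samples (invented)

    snr = organ_roi.mean() / noise_roi.std()                   # SNR = SI(organ) / SD(noise)
    print(f"SNR = {snr:.1f}")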
Significance of stress transfer in time-dependent earthquake probability calculations
Parsons, T.
2005-01-01
A sudden change in stress is seen to modify earthquake rates, but should it also revise earthquake probability? Data used to derive input parameters permit an array of forecasts; so how large a static stress change is required to cause a statistically significant earthquake probability change? To answer that question, effects of parameter and philosophical choices are examined through all phases of sample calculations. Drawing at random from distributions of recurrence-aperiodicity pairs identifies many that recreate long paleoseismic and historic earthquake catalogs. Probability density functions built from the recurrence-aperiodicity pairs give the range of possible earthquake forecasts under a point process renewal model. Consequences of choices made in stress transfer calculations, such as different slip models, fault rake, dip, and friction, are tracked. For interactions among large faults, calculated peak stress changes may be localized, with most of the receiving fault area changed less than the mean. Thus, to avoid overstating probability change on segments, stress change values should be drawn from a distribution reflecting the spatial pattern rather than using the segment mean. Disparity resulting from interaction probability methodology is also examined. For a fault with a well-understood earthquake history, a minimum stress change to stressing rate ratio of 10:1 to 20:1 is required to significantly skew probabilities with >80-85% confidence. That ratio must be closer to 50:1 to exceed 90-95% confidence levels. Thus revision to earthquake probability is achievable when a perturbing event is very close to the fault in question or the tectonic stressing rate is low.
Dingus, Cheryl A; Teuschler, Linda K; Rice, Glenn E; Simmons, Jane Ellen; Narotsky, Michael G
2011-10-01
In complex mixture toxicology, there is growing emphasis on testing environmentally representative doses that improve the relevance of results for health risk assessment, but are typically much lower than those used in traditional toxicology studies. Traditional experimental designs with typical sample sizes may have insufficient statistical power to detect effects caused by environmentally relevant doses. Proper study design, with adequate statistical power, is critical to ensuring that experimental results are useful for environmental health risk assessment. Studies with environmentally realistic complex mixtures have practical constraints on sample concentration factor and sample volume as well as the number of animals that can be accommodated. This article describes methodology for calculation of statistical power for non-independent observations for a multigenerational rodent reproductive/developmental bioassay. The use of the methodology is illustrated using the U.S. EPA's Four Lab study in which rodents were exposed to chlorinated water concentrates containing complex mixtures of drinking water disinfection by-products. Possible experimental designs included two single-block designs and a two-block design. Considering the possible study designs and constraints, a design of two blocks of 100 females with a 40:60 ratio of control:treated animals and a significance level of 0.05 yielded maximum prospective power (~90%) to detect pup weight decreases, while providing the most power to detect increased prenatal loss.
[Pharmacokinetic interaction of pioglitazone hydrochloride and atorvastatin calcium in Beagle dogs].
Chen, He-Li; Zhang, Wen-Ping; Yang, Fu-Ying; Wang, Xin-Yu; Yang, Wen-Cheng; Dang, Hong-Wan
2013-05-01
The objective of this study was to investigate the pharmacokinetic interaction of pioglitazone hydrochloride and atorvastatin calcium in healthy adult Beagle dogs following single and multiple oral dose administration. A randomized, cross-over study was conducted with nine healthy adult Beagle dogs assigned to three groups. The groups took atorvastatin calcium (A), pioglitazone hydrochloride (B), or atorvastatin calcium plus pioglitazone hydrochloride (C) orally in the first period; B, C, A in the second period; and C, A, B in the third period, for 6 days in each period. Blood samples were collected on the first and sixth day after administration, and plasma drug concentrations were determined by LC-MS/MS; a one-week wash-out separated the periods. The pharmacokinetic parameters of the drug-combination and drug-alone regimens were calculated by the statistical moment method, and C(max) and AUC(0-t) were evaluated with the 90% confidence interval method of the bioequivalence and bioavailability module of the DAS 3.2.1 statistical software. Compared with separate administration, the main pharmacokinetic parameters (C(max) and AUC(0-t)) for the joint use of pioglitazone hydrochloride and atorvastatin calcium failed the 90% confidence interval criteria for bioequivalence, while a paired Wilcoxon test on mean t(max) gave P > 0.05. There were no significant differences in t1/2, CL(int), MRT, or V/F. Pioglitazone hydrochloride and atorvastatin calcium thus showed a pharmacokinetic interaction in healthy adult Beagle dogs.
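A sketch of the two noncompartmental parameters compared above, Cmax and AUC(0-t), with the latter computed by the linear trapezoidal rule; the concentration-time points are invented, not the study's assay data.

    import numpy as np

    t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0, 24.0])         # time after dose, h
    c = np.array([0.0, 120.0, 180.0, 150.0, 90.0, 40.0, 15.0, 3.0])  # plasma conc., ng/mL (invented)

    cmax, tmax = c.max(), t[c.argmax()]
    auc_0_t = np.sum(np.diff(t) * (c[:-1] + c[1:]) / 2)  # linear trapezoidal rule
    print(f"Cmax {cmax:.0f} ng/mL at tmax {tmax:.1f} h; AUC(0-t) {auc_0_t:.0f} ng*h/mL")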
Shah, Neomi; Hanna, David B; Teng, Yanping; Sotres-Alvarez, Daniela; Hall, Martica; Loredo, Jose S; Zee, Phyllis; Kim, Mimi; Yaggi, H Klar; Redline, Susan; Kaplan, Robert C
2016-06-01
We developed and validated the first-ever sleep apnea (SA) risk calculator in a large population-based cohort of Hispanic/Latino subjects. Cross-sectional data on adults from the Hispanic Community Health Study/Study of Latinos (2008-2011) were analyzed. Subjective and objective sleep measurements were obtained. Clinically significant SA was defined as an apnea-hypopnea index ≥ 15 events per hour. Using logistic regression, four prediction models were created: three sex-specific models (female-only, male-only, and a sex × covariate interaction model to allow differential predictor effects), and one overall model with sex included as a main effect only. Models underwent 10-fold cross-validation and were assessed by using the C statistic. SA and its predictive variables; a total of 17 variables were considered. A total of 12,158 participants had complete sleep data available; 7,363 (61%) were women. The population-weighted prevalence of SA (apnea-hypopnea index ≥ 15 events per hour) was 6.1% in female subjects and 13.5% in male subjects. Male-only (C statistic, 0.808) and female-only (C statistic, 0.836) prediction models had the same predictor variables (ie, age, BMI, self-reported snoring). The sex-interaction model (C statistic, 0.836) contained sex, age, age × sex, BMI, BMI × sex, and self-reported snoring. The final overall model (C statistic, 0.832) contained age, BMI, snoring, and sex. We developed two websites for our SA risk calculator: one in English (https://www.montefiore.org/sleepapneariskcalc.html) and another in Spanish (http://www.montefiore.org/sleepapneariskcalc-es.html). We created an internally validated, highly discriminating, well-calibrated, and parsimonious prediction model for SA. Contrary to the study hypothesis, the variables did not have different predictive magnitudes in male and female subjects. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
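A schematic version of the modelling pipeline described above: logistic regression on age, BMI and self-reported snoring with a cross-validated C statistic. The simulated cohort and coefficients are invented stand-ins, not the HCHS/SOL data or the published model.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    n = 500
    age = rng.normal(50, 12, n)
    bmi = rng.normal(29, 5, n)
    snore = rng.integers(0, 2, n)
    X = np.column_stack([age, bmi, snore])

    # Simulated SA status from an assumed logistic relationship.
    p = 1 / (1 + np.exp(-(-11 + 0.05 * age + 0.25 * bmi + 1.0 * snore)))
    y = (rng.random(n) < p).astype(int)

    c_stat = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                             cv=10, scoring="roc_auc").mean()
    print(f"10-fold cross-validated C statistic: {c_stat:.3f}")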
CALIPSO Observations of Near-Cloud Aerosol Properties as a Function of Cloud Fraction
NASA Technical Reports Server (NTRS)
Yang, Weidong; Marshak, Alexander; Varnai, Tamas; Wood, Robert
2015-01-01
This paper uses spaceborne lidar data to study how near-cloud aerosol statistics of attenuated backscatter depend on cloud fraction. The results for a large region around the Azores show that: (1) far-from-cloud aerosol statistics are dominated by samples from scenes with lower cloud fractions, while near-cloud aerosol statistics are dominated by samples from scenes with higher cloud fractions; (2) near-cloud enhancements of attenuated backscatter occur for any cloud fraction but are most pronounced for higher cloud fractions; (3) the difference in the enhancements for different cloud fractions is most significant within 5 km from clouds; (4) near-cloud enhancements can be well approximated by logarithmic functions of cloud fraction and distance to clouds. These findings demonstrate that if variability in cloud fraction across the scenes used to composite aerosol statistics is not considered, a sampling artifact will affect these statistics calculated as a function of distance to clouds. For the Azores-region dataset examined here, this artifact occurs mostly within 5 km from clouds, and exaggerates the near-cloud enhancements of lidar backscatter and color ratio by about 30%. This shows that for accurate characterization of the changes in aerosol properties with distance to clouds, it is important to account for the impact of changes in cloud fraction.
Erda, F G; Bloemen, J; Steppe, K
2014-01-01
In studies on internal CO2 transport, average xylem sap pH (pH(x)) is one of the factors used for calculation of the concentration of dissolved inorganic carbon in the xylem sap ([CO2*]). Lack of detailed pH(x) measurements at high temporal resolution could be a potential source of error when evaluating [CO2*] dynamics. In this experiment, we performed continuous measurements of CO2 concentration ([CO2]) and stem temperature (T(stem)), complemented with pH(x) measurements at 30-min intervals during the day at various stages of the growing season (Day of the Year (DOY): 86 (late winter), 128 (mid-spring) and 155 (early summer)) on a plum tree (Prunus domestica L. cv. Reine Claude d'Oullins). We used the recorded pH(x) to calculate [CO2*] based on T(stem) and the corresponding measured [CO2]. No statistically significant difference was found between mean [CO2*] calculated with instantaneous pH(x) and daily average pH(x). However, using an average pH(x) value from a different part of the growing season than the measurements of [CO2] and T(stem) to estimate [CO2*] led to a statistically significant error. The error varied between 3.25 ± 0.01% under-estimation and 3.97 ± 0.01% over-estimation, relative to the true [CO2*] data. Measured pH(x) did not show a significant daily variation, unlike [CO2], which increased during the day and declined at night. As the growing season progressed, daily average [CO2] (3.4%, 5.3%, 7.4%) increased and average pH(x) (5.43, 5.29, 5.20) decreased. Increase in [CO2] will increase its solubility in xylem sap according to Henry's law, and the dissociation of [CO2*] will negatively affect pH(x). Our results are the first quantifying the error in [CO2*] due to the interaction between [CO2] and pH(x) on a seasonal time scale. We found significant changes in pH(x) across the growing season, but overall the effect on the calculation of [CO2*] remained within an error range of 4%. However, it is possible that the error could be more substantial for other tree species, particularly if pH(x) is in the more sensitive range (pH(x) > 6.5). © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands.
The best motivator priorities parents choose via analytical hierarchy process
NASA Astrophysics Data System (ADS)
Farah, R. N.; Latha, P.
2015-05-01
Motivation is probably the most important factor that educators can target in order to improve learning. Numerous cross-disciplinary theories have been postulated to explain motivation. While each of these theories has some truth, no single theory seems to adequately explain all human motivation. The fact is that human beings in general and pupils in particular are complex creatures with complex needs and desires. In this paper, the Analytic Hierarchy Process (AHP) is proposed as an emerging solution to large, dynamic and complex real-world multi-criteria decision-making problems, applied here to selecting the most suitable motivator when parents choose a school for their children. Data were analyzed using SPSS 17.0 ("Statistical Package for Social Science") software, with both descriptive and inferential statistics. Descriptive statistics were used to profile the demographic factors of the pupil and parent respondents. Inferential testing was used to determine the pupils' and parents' highest motivator priorities, and AHP was used to rank the criteria considered by parents: school principals, teachers, pupils and parents. The moderating factor was the set of schools in Ampang selected on the basis of "Standard Kualiti Pendidikan Malaysia" (SKPM). One-way ANOVA was used to test significance, and the data were used to calculate the AHP weightings. School principals were found to be the best motivator for parents in choosing a school for their children, followed by teachers, parents and pupils.
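For readers unfamiliar with the mechanics, a minimal AHP computation: criterion priorities from the principal eigenvector of a pairwise comparison matrix, plus Saaty's consistency ratio. The comparison judgments below are invented, not the study's survey data, and the criterion ordering is an assumption.

    import numpy as np

    # Criteria order (assumed): principals, teachers, parents, pupils.
    A = np.array([[1.0, 3.0, 5.0, 7.0],
                  [1/3, 1.0, 3.0, 5.0],
                  [1/5, 1/3, 1.0, 3.0],
                  [1/7, 1/5, 1/3, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # Perron (principal) eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                    # normalized priority vector

    lambda_max = eigvals.real[k]
    ci = (lambda_max - 4) / (4 - 1)             # consistency index for n = 4
    cr = ci / 0.90                              # Saaty's random index RI = 0.90 for n = 4
    print("priorities:", np.round(weights, 3), f" CR = {cr:.3f}")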
Spatial Accessibility and Availability Measures and Statistical Properties in the Food Environment
Van Meter, E.; Lawson, A.B.; Colabianchi, N.; Nichols, M.; Hibbert, J.; Porter, D.; Liese, A.D.
2010-01-01
Spatial accessibility is of increasing interest in the health sciences. This paper addresses the statistical use of spatial accessibility and availability indices. These measures are evaluated via an extensive simulation based on cluster models for local food outlet density. We derived Monte Carlo critical values for several statistical tests based on the indices. In particular we are interested in the ability to make inferential comparisons between different study areas where indices of accessibility and availability are to be calculated. We derive tests of mean difference as well as tests for differences in Moran's I for spatial correlation for each of the accessibility and availability indices. We also apply these new statistical tests to a data example based on two counties in South Carolina for various accessibility and availability measures calculated for food outlets, stores, and restaurants. PMID:21499528
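A direct implementation of Moran's I, the spatial-correlation statistic tested above, for a vector of index values and a symmetric binary weight matrix; the data and neighbour structure are randomly generated placeholders, not the South Carolina food-environment data.

    import numpy as np

    def morans_i(x, W):
        """Moran's I = (n / S0) * (z' W z) / (z' z), with z the centered values."""
        n = x.size
        z = x - x.mean()
        return (n / W.sum()) * (z @ W @ z) / (z @ z)

    rng = np.random.default_rng(3)
    x = rng.normal(size=25)                 # accessibility index at 25 locations (invented)
    W = rng.random((25, 25)) < 0.2          # arbitrary neighbour structure
    W = np.triu(W, 1)
    W = (W | W.T).astype(float)             # symmetric binary weights, zero diagonal
    print(f"Moran's I = {morans_i(x, W):.3f}")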
Assessing exclusionary power of a paternity test involving a pair of alleged grandparents.
Scarpetta, Marco A; Staub, Rick W; Einum, David D
2007-02-01
The power of a genetic test battery to exclude a pair of individuals as grandparents is an important consideration for parentage testing laboratories. However, a reliable method to calculate such a statistic with short-tandem repeat (STR) genetic markers has not been presented. Two formulae describing the random grandparents not excluded (RGPNE) statistic at a single genetic locus were derived: RGPNE = a(4 - 6a + 4a^2 - a^3) when the paternal obligate allele (POA) is defined, and RGPNE = 2[(a + b)(2 - a - b)][1 - (a + b)(2 - a - b)] + [(a + b)(2 - a - b)]^2 when the POA is ambiguous. A minimum number of genetic markers required to yield cumulative RGPNE values of not greater than 0.01 was calculated with weighted average allele frequencies of the CODIS STR loci. RGPNE data for actual grandparentage cases are also presented to empirically examine the exclusionary power of routine casework. A comparison of RGPNE and random man not excluded (RMNE) values demonstrates the increased difficulty involved in excluding two individuals as grandparents compared to excluding a single alleged parent. A minimum of 12 STR markers is necessary to achieve RGPNE values of not greater than 0.01 when the mother is tested; more than 25 markers are required without the mother. Cumulative RGPNE values for each of 22 nonexclusionary grandparentage cases were not more than 0.01 but were significantly weaker when calculated without data from the mother. Calculation of the RGPNE provides a simple means to help minimize the potential of false inclusions in grandparentage analyses. This study also underscores the importance of testing the mother when examining the parents of an unavailable alleged father (AF).
Methodological quality of behavioural weight loss studies: a systematic review
Lemon, S. C.; Wang, M. L.; Haughton, C. F.; Estabrook, D. P.; Frisard, C. F.; Pagoto, S. L.
2018-01-01
Summary This systematic review assessed the methodological quality of behavioural weight loss intervention studies conducted among adults and associations between quality and statistically significant weight loss outcome, strength of intervention effectiveness and sample size. Searches for trials published between January 2009 and December 2014 were conducted using PUBMED, MEDLINE and PSYCINFO and identified ninety studies. Methodological quality indicators included study design, anthropometric measurement approach, sample size calculations, intent-to-treat (ITT) analysis, loss to follow-up rate, missing data strategy, sampling strategy, report of treatment receipt and report of intervention fidelity (mean number of indicators met = 6.3). Indicators most commonly utilized included randomized design (100%), objectively measured anthropometrics (96.7%), ITT analysis (86.7%) and reporting treatment adherence (76.7%). Most studies (62.2%) had a follow-up rate >75% and reported a loss to follow-up analytic strategy or minimal missing data (69.9%). Describing intervention fidelity (34.4%) and sampling from a known population (41.1%) were least common. Methodological quality was not associated with reporting a statistically significant result, effect size or sample size. This review found the published literature of behavioural weight loss trials to be of high quality for specific indicators, including study design and measurement. Areas identified for improvement include the use of more rigorous statistical approaches to loss to follow-up and better fidelity reporting. PMID:27071775
Statistical tests to compare motif count exceptionalities
Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent
2007-01-01
Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
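A rough sketch of the exact binomial comparison: conditional on the total number of occurrences of a motif in the two sequences, the count in sequence 1 can be tested against its expected share. The counts and expected values below are invented, and this simplified conditioning is an assumption for illustration rather than the paper's full overlap-aware Poisson model (scipy >= 1.7 is assumed for binomtest).

    from scipy.stats import binomtest

    n1, n2 = 47, 21        # observed motif counts in sequences 1 and 2 (invented)
    e1, e2 = 30.0, 25.0    # expected counts under each sequence's composition model (invented)

    # Under equal exceptionality, sequence 1's share of the total is e1 / (e1 + e2).
    p_null = e1 / (e1 + e2)
    result = binomtest(n1, n1 + n2, p_null, alternative="two-sided")
    print(f"p-value = {result.pvalue:.4f}")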
Effect of Low-Dose MDCT and Iterative Reconstruction on Trabecular Bone Microstructure Assessment.
Kopp, Felix K; Holzapfel, Konstantin; Baum, Thomas; Nasirudin, Radin A; Mei, Kai; Garcia, Eduardo G; Burgkart, Rainer; Rummeny, Ernst J; Kirschke, Jan S; Noël, Peter B
2016-01-01
We investigated the effects of low-dose multi detector computed tomography (MDCT) in combination with statistical iterative reconstruction algorithms on trabecular bone microstructure parameters. Twelve donated vertebrae were scanned with the routine radiation exposure used in our department (standard-dose) and a low-dose protocol. Reconstructions were performed with filtered backprojection (FBP) and maximum-likelihood based statistical iterative reconstruction (SIR). Trabecular bone microstructure parameters were assessed and statistically compared for each reconstruction. Moreover, fracture loads of the vertebrae were biomechanically determined and correlated to the assessed microstructure parameters. Trabecular bone microstructure parameters based on low-dose MDCT and SIR significantly correlated with vertebral bone strength. There was no significant difference between microstructure parameters calculated on low-dose SIR and standard-dose FBP images. However, the results revealed a strong dependency on the regularization strength applied during SIR. It was observed that stronger regularization might corrupt the microstructure analysis, because the trabecular structure is a very small detail that might get lost during the regularization process. As a consequence, the introduction of SIR for trabecular bone microstructure analysis requires a specific optimization of the regularization parameters. Moreover, in comparison to other approaches, superior noise-resolution trade-offs can be found with the proposed methods.
Hydrostatic paradox: experimental verification of pressure equilibrium
NASA Astrophysics Data System (ADS)
Kodejška, Č.; Ganci, S.; Říha, J.; Sedláčková, H.
2017-11-01
This work is focused on the experimental verification of the balance between the atmospheric pressure acting on a sheet of paper that closes, from below, a cylinder completely or partially filled with water, and the hydrostatic pressure of the water column acting against it. The paper first presents a theoretical analysis of the problem, based on the equation for an isothermal process and on the equality of pressures inside and outside the cylinder. The measured values confirm the theoretically predicted quadratic dependence of the air pressure inside the cylinder on the liquid level in the cylinder; the maximum change in the volume of air within the cylinder occurs when the height of the water column L equals one half of the total height of the vessel H. The measurements were made for different diameters of the cylinder and with plates made of different materials located at the bottom of the cylinder to prevent liquid from flowing out of the cylinder. The measured values were subjected to statistical analysis, which confirmed the null hypothesis, i.e. that the measured values are not statistically significantly different from the theoretically calculated ones at the statistical significance level α = 0.05.
Comparison of beam position calculation methods for application in digital acquisition systems
NASA Astrophysics Data System (ADS)
Reiter, A.; Singh, R.
2018-05-01
Different approaches to the data analysis of beam position monitors in hadron accelerators are compared adopting the perspective of an analog-to-digital converter in a sampling acquisition system. Special emphasis is given to position uncertainty and robustness against bias and interference that may be encountered in an accelerator environment. In a time-domain analysis of data in the presence of statistical noise, the position calculation based on the difference-over-sum method with algorithms like signal integral or power can be interpreted as a least-squares analysis of a corresponding fit function. This link to the least-squares method is exploited in the evaluation of analysis properties and in the calculation of position uncertainty. In an analytical model and experimental evaluations the positions derived from a straight line fit or equivalently the standard deviation are found to be the most robust and to offer the least variance. The measured position uncertainty is consistent with the model prediction in our experiment, and the results of tune measurements improve significantly.
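To fix notation, a sketch of the difference-over-sum position estimate with two of the algorithms named above (signal integral and signal power). The waveform samples and the sensitivity constant kappa are invented, and the "power" variant shown is one plausible reading of that algorithm, not necessarily the authors' exact definition.

    import numpy as np

    a = np.array([0.0, 1.1, 2.4, 2.0, 0.9, 0.1])  # digitized electrode-A samples (invented)
    b = np.array([0.0, 0.9, 2.0, 1.6, 0.7, 0.1])  # digitized electrode-B samples (invented)
    kappa = 10.0                                  # position sensitivity in mm (assumed)

    # Difference-over-sum with the signal-integral algorithm.
    pos_integral = kappa * (a.sum() - b.sum()) / (a.sum() + b.sum())
    # The same construction using signal power (sum of squared samples).
    pos_power = kappa * ((a**2).sum() - (b**2).sum()) / ((a**2).sum() + (b**2).sum())
    print(f"position: integral {pos_integral:.3f} mm, power {pos_power:.3f} mm")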
Exotic and excited-state radiative transitions in charmonium from lattice QCD
Dudek, Jozef J.; Edwards, Robert G.; Thomas, Christopher E.
2009-05-01
We compute, for the first time using lattice QCD methods, radiative transition rates involving excited charmonium states, states of high spin and exotics. Utilizing a large basis of interpolating fields we are able to project out various excited state contributions to three-point correlators computed on quenched anisotropic lattices. In the first lattice QCD calculation of the exotic 1^{-+} η_{c1} radiative decay, we find a large partial width Γ(η_{c1} → J/ψ γ) ~ 100 keV. We find clear signals for electric dipole and magnetic quadrupole transition form factors in χ_{c2} → J/ψ γ, calculated for the first time in this framework, and study transitions involving excited ψ and χ_{c1,2} states. We calculate hindered magnetic dipole transition widths without the sensitivity to assumptions made in model studies and find statistically significant signals, including a non-exotic vector hybrid candidate Y_{hyb?} → η...
Calculating p-values and their significances with the Energy Test for large datasets
NASA Astrophysics Data System (ADS)
Barter, W.; Burr, C.; Parkes, C.
2018-04-01
The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
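For reference, a small direct computation of the T-value for two one-dimensional samples, using the Gaussian distance weighting common in energy-test applications; the normalization convention, sigma and simulated samples are all illustrative assumptions, not the paper's setup.

    import numpy as np
    from scipy.spatial.distance import cdist

    def t_value(s1, s2, sigma=0.2):
        """Energy-test statistic with psi(d) = exp(-d^2 / (2 sigma^2))."""
        psi = lambda d: np.exp(-d**2 / (2 * sigma**2))
        d11, d22, d12 = psi(cdist(s1, s1)), psi(cdist(s2, s2)), psi(cdist(s1, s2))
        n1, n2 = len(s1), len(s2)
        return (d11[np.triu_indices(n1, 1)].sum() / (n1 * (n1 - 1))
                + d22[np.triu_indices(n2, 1)].sum() / (n2 * (n2 - 1))
                - d12.sum() / (n1 * n2))

    rng = np.random.default_rng(4)
    s1 = rng.normal(0.0, 1.0, size=(200, 1))
    s2 = rng.normal(0.1, 1.0, size=(200, 1))  # slightly shifted population
    print(f"T = {t_value(s1, s2):.4f}")

Repeating the calculation on permuted pairs of samples drawn from one population gives the null T-distribution whose large-sample behaviour the scaling method above addresses.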
[Can the local energy minimization refine the PDB structures of different resolution universally?].
Godzi, M G; Gromova, A P; Oferkin, I V; Mironov, P V
2009-01-01
The local energy minimization was statistically validated as a refinement strategy for PDB structure pairs of different resolution. Thirteen pairs of structures whose only difference was resolution were extracted from the PDB, representing 11 distinct proteins solved by different X-ray diffraction techniques. The distribution of the RMSD value was calculated for these pairs before and after local energy minimization of each structure. The MMFF94 force field was used for energy calculations, and the quasi-Newton method was used for local energy minimization. By comparison of these two RMSD distributions, local energy minimization was shown to statistically increase the structural differences within pairs, so that it cannot be used for refinement purposes. To explore the prospects of complex refinement strategies based on energy minimization, randomized structures were obtained by moving the initial PDB structures as far as the minimized structures had been moved in the multidimensional space of atomic coordinates. For these randomized structures, the RMSD distribution was calculated and compared with that for the minimized structures. The significant difference in their mean values showed that the energy surface of the protein has only a few minima near the conformations of different resolution obtained by X-ray diffraction for the PDB. Some other results obtained by exploring the energy surface near these conformations are also presented. These results are expected to be very useful for the development of new protein refinement strategies based on energy minimization.