ERIC Educational Resources Information Center
Schneider, William R.
2011-01-01
The purpose of this study was to determine the relationship between statistics self-efficacy, statistics anxiety, and performance in introductory graduate statistics courses. The study design compared two statistics self-efficacy measures developed by Finney and Schraw (2003), a statistics anxiety measure developed by Cruise and Wilkins (1980),…
Ma, Yue; Yin, Fei; Zhang, Tao; Zhou, Xiaohua Andrew; Li, Xiaosong
2016-01-01
Spatial scan statistics are widely used in various fields. The performance of these statistics is influenced by parameters, such as maximum spatial cluster size, and can be improved by parameter selection using performance measures. Current performance measures are based on the presence of clusters and are thus inapplicable to data sets without known clusters. In this work, we propose a novel overall performance measure called maximum clustering set–proportion (MCS-P), which is based on the likelihood of the union of detected clusters and the applied dataset. MCS-P was compared with existing performance measures in a simulation study to select the maximum spatial cluster size. Results of other performance measures, such as sensitivity and misclassification, suggest that the spatial scan statistic achieves accurate results in most scenarios with the maximum spatial cluster sizes selected using MCS-P. Given that previously known clusters are not required in the proposed strategy, selection of the optimal maximum cluster size with MCS-P can improve the performance of the scan statistic in applications without identified clusters. PMID:26820646
NASA Technical Reports Server (NTRS)
1989-01-01
An assessment is made of quantitative methods and measures for evaluating launch commit criteria (LCC) performance trends. A statistical performance trending analysis pilot study was conducted and its results compared to STS-26 mission data. This study used four selected shuttle measurement types (solid rocket booster, external tank, space shuttle main engine, and range safety switch safe and arm device) from the five missions prior to mission 51-L. After obtaining raw data coordinates, each set of measurements was processed to obtain statistical confidence bounds and mean data profiles for each of the selected measurement types. STS-26 measurements were compared to the statistical database profiles to verify the capability of statistically detecting data trend anomalies and abnormal time-varying operational conditions associated with data amplitude and phase shifts.
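As a rough illustration of this kind of trending analysis, the sketch below builds a mean profile and a ±3σ envelope from prior-mission traces resampled onto a common time base, then flags where a new mission's trace leaves the envelope. It is a minimal sketch, not the report's actual procedure; the function names, the common time base, and the 3σ choice are all assumptions.

```python
import numpy as np

def trend_bounds(history, z=3.0):
    """Mean profile and z-sigma bounds from prior-mission traces.

    history: array of shape (n_missions, n_samples), one row per mission,
    all resampled onto a common time base.
    """
    mean = history.mean(axis=0)
    sd = history.std(axis=0, ddof=1)
    return mean, mean - z * sd, mean + z * sd

def flag_anomalies(trace, lower, upper):
    """Indices where a new mission's trace leaves the historical envelope."""
    return np.where((trace < lower) | (trace > upper))[0]

# Toy usage: five prior missions, one new trace with an injected shift.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
history = np.sin(t) + 0.05 * rng.standard_normal((5, t.size))
mean, lo, hi = trend_bounds(history)
new = np.sin(t) + 0.05 * rng.standard_normal(t.size)
new[120:140] += 0.5                      # simulated amplitude anomaly
print(flag_anomalies(new, lo, hi))       # indices near 120-139
```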
The Statistical Loop Analyzer (SLA)
NASA Technical Reports Server (NTRS)
Lindsey, W. C.
1985-01-01
The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements, the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal dropout tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.
Dymova, Natalya; Hanumara, R. Choudary; Enander, Richard T.; Gagnon, Ronald N.
2009-10-01
Performance measurement is increasingly viewed as an essential component of environmental and public health protection programs. In characterizing program performance over time, investigators often observe multiple changes resulting from a single intervention across a range of categories. Although a variety of statistical tools allow evaluation of data one variable at a time, the global test statistic is uniquely suited for analyses of categories or groups of interrelated variables. Here we demonstrate how the global test statistic can be applied to environmental and occupational health data for the purpose of making overall statements on the success of targeted intervention strategies. PMID:19696393
ERIC Educational Resources Information Center
Shim, Wonsik "Jeff"; McClure, Charles R.; Fraser, Bruce T.; Bertot, John Carlo
This manual provides a beginning approach for research libraries to better describe the use and users of their networked services. The manual also aims to increase the visibility and importance of developing such statistics and measures. Specific objectives are: to identify selected key statistics and measures that can describe use and users of…
ERIC Educational Resources Information Center
Nelson, Frank, Comp.
This report is a compilation of input and output measures and other statistics in reference to Idaho's public libraries, covering the period from October 1997 through September 1998. The introductory sections include notes on the statistics, definitions of performance measures, Idaho public library rankings for fiscal year 1996, and a state map…
Zamanzad Ghavidel, Fatemeh; Claesen, Jürgen; Burzykowski, Tomasz; Valkenborg, Dirk
2014-02-01
To extract a genuine peptide signal from a mass spectrum, an observed series of peaks at a particular mass can be compared with the isotope distribution expected for a peptide of that mass. To decide whether the observed series of peaks is similar to the isotope distribution, a similarity measure is needed. In this short communication, we investigate whether the Mahalanobis distance could be an alternative measure for the commonly employed Pearson's χ² statistic. We evaluate the performance of the two measures by using a controlled MALDI-TOF experiment. The results indicate that Pearson's χ² statistic has better discriminatory performance than the Mahalanobis distance and is a more robust measure.
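A minimal sketch of the two similarity measures being compared, assuming normalised peak intensities and an invented diagonal noise covariance for the Mahalanobis case (the experiment's actual covariance estimate is not reproduced here):

```python
import numpy as np

def pearson_chi2(observed, expected):
    """Pearson chi-square distance between an observed peak series and the
    expected isotope distribution (both normalised to sum to 1)."""
    obs = observed / observed.sum()
    exp = expected / expected.sum()
    return float(np.sum((obs - exp) ** 2 / exp))

def mahalanobis(observed, expected, cov):
    """Mahalanobis distance given a covariance estimate for the peak ratios."""
    d = observed / observed.sum() - expected / expected.sum()
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Hypothetical 4-peak isotope pattern for a ~1 kDa peptide.
expected = np.array([0.55, 0.30, 0.11, 0.04])
observed = np.array([560.0, 310.0, 100.0, 30.0])   # raw intensities
cov = np.diag(expected * 1e-3)                     # assumed noise model
print(pearson_chi2(observed, expected))
print(mahalanobis(observed, expected, cov))
```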
Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio
2013-03-01
To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Performance of Between-Study Heterogeneity Measures in the Cochrane Library.
Ma, Xiaoyue; Lin, Lifeng; Qu, Zhiyong; Zhu, Motao; Chu, Haitao
2018-05-29
The growth in comparative effectiveness research and evidence-based medicine has increased attention to systematic reviews and meta-analyses. Meta-analysis synthesizes and contrasts evidence from multiple independent studies to improve statistical efficiency and reduce bias. Assessing heterogeneity is critical for performing a meta-analysis and interpreting results. As a widely used heterogeneity measure, the I² statistic quantifies the proportion of total variation across studies that is due to real differences in effect size. The presence of outlying studies can seriously exaggerate the I² statistic. Two alternative heterogeneity measures, Ir² and Im², have been recently proposed to reduce the impact of outlying studies. To evaluate these measures' performance empirically, we applied them to 20,599 meta-analyses in the Cochrane Library. We found that Ir² and Im² have strong agreement with I², while they are more robust than I² when outlying studies appear.
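The classical I² referred to above can be computed from Cochran's Q as I² = max(0, (Q − df)/Q). The alternative measures' definitions are not reproduced here, but the following sketch shows the standard I² and how a single outlying study inflates it (the data are invented for illustration):

```python
import numpy as np

def i_squared(effects, variances):
    """Classical I^2 (in percent) from Cochran's Q."""
    w = 1.0 / np.asarray(variances)
    theta = np.asarray(effects)
    pooled = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - pooled) ** 2)
    df = len(theta) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

effects = np.array([0.10, 0.12, 0.08, 0.11, 0.60])   # last study is an outlier
variances = np.full(5, 0.01)
print(i_squared(effects, variances))                  # inflated by the outlier
print(i_squared(effects[:-1], variances[:-1]))        # much smaller without it
```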
Variability-aware compact modeling and statistical circuit validation on SRAM test array
NASA Astrophysics Data System (ADS)
Qiao, Ying; Spanos, Costas J.
2016-03-01
Variability modeling at the compact transistor model level can enable statistically optimized designs in view of limitations imposed by the fabrication technology. In this work we propose a variability-aware compact model characterization methodology based on stepwise parameter selection. Transistor I-V measurements are obtained from a bit-transistor-accessible SRAM test array fabricated using a collaborating foundry's 28 nm FDSOI technology. Our in-house customized Monte Carlo simulation bench can incorporate these statistical compact models, and the simulated distribution of SRAM writability performance closely matches measurements. Our proposed statistical compact model parameter extraction methodology also has the potential of predicting non-Gaussian behavior in statistical circuit performances through mixtures of Gaussian distributions.
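As a sketch of the kind of mixture-of-Gaussians modeling mentioned for non-Gaussian circuit performance, the example below fits a two-component Gaussian mixture to a hypothetical Monte Carlo sample of an SRAM write margin; the data, component parameters, and the use of scikit-learn are assumptions, not the authors' toolchain:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical Monte Carlo sample of an SRAM write margin (arbitrary units):
# a main Gaussian population plus a small shifted sub-population, as can
# arise from discrete process effects.
rng = np.random.default_rng(1)
margin = np.concatenate([
    rng.normal(0.30, 0.02, 9000),
    rng.normal(0.22, 0.03, 1000),
]).reshape(-1, 1)

# A two-component mixture captures the skewed tail that a single Gaussian
# fit would miss.
gmm = GaussianMixture(n_components=2, random_state=0).fit(margin)
print(gmm.weights_, gmm.means_.ravel(), np.sqrt(gmm.covariances_).ravel())
```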
DOT National Transportation Integrated Search
1965-07-01
A statistical study of training- and job-performance measures of several hundred Air Traffic Control Specialists (ATCS) representing Enroute, Terminal, and Flight Service Station specialties revealed that training-performance measures reflected: 1....
Validating Future Force Performance Measures (Army Class): Concluding Analyses
2016-06-01
Surviving fragments of the report's front matter list Table 3.10, Descriptive Statistics and Intercorrelations for LV Final Predictor Factor Scores, and Table 4.7, Descriptive Statistics for Analysis Criteria. The predictor constructs for Soldier attrition and performance are Dependability (Non-Delinquency), Adjustment, Physical Conditioning, Leadership, Work Orientation, and Agreeableness.
Probability and Statistics in Sensor Performance Modeling
2010-12-01
The software program is called Environmental Awareness for Sensor and Emitter Employment (EASEE). Surviving fragments of the report cover numerical issues in the implementation, statistical analysis for measuring sensor performance, and an abbreviation list including the cumulative distribution function (cdf), the complementary cumulative distribution function, and the decision-support tool (DST).
42 CFR 421.122 - Performance standards.
Code of Federal Regulations, 2010 CFR
2010-10-01
... performance, application of acceptable statistical measures of variation to nationwide intermediary experience... or criterion. (b) Factors beyond intermediary's control. To identify measurable factors that significantly affect an intermediary's performance, but that are not within the intermediary's control, CMS will...
Statistical learning and auditory processing in children with music training: An ERP study.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Ibrahim, Ronny; Arciuli, Joanne
2017-07-01
The question whether musical training is associated with enhanced auditory and cognitive abilities in children is of considerable interest. In the present study, we compared children with music training versus those without music training across a range of auditory and cognitive measures, including the ability to implicitly detect statistical regularities in input (statistical learning). Statistical learning of regularities embedded in auditory and visual stimuli was measured in musically trained and age-matched untrained children between the ages of 9 and 11 years. In addition to collecting behavioural measures, we recorded electrophysiological measures to obtain an online measure of segmentation during the statistical learning tasks. Musically trained children showed better performance on melody discrimination, rhythm discrimination, frequency discrimination, and auditory statistical learning. Furthermore, grand-averaged ERPs showed that triplet onset (initial stimulus) elicited larger responses in the musically trained children during both auditory and visual statistical learning tasks. In addition, children's music skills were associated with performance on auditory and visual behavioural statistical learning tasks. Our data suggest that individual differences in musical skills are associated with children's ability to detect regularities. The ERP data suggest that musical training is associated with better encoding of both auditory and visual stimuli. Although causality must be explored in further research, these results may have implications for developing music-based remediation strategies for children with learning impairments.
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning.
Mandikal Vasuki, Pragati Rao; Sharma, Mridula; Demuth, Katherine; Arciuli, Joanne
2016-12-01
It has been hypothesized that musical expertise is associated with enhanced auditory processing and cognitive abilities. Recent research has examined the relationship between musicians' advantage and implicit statistical learning skills. In the present study, we assessed a variety of auditory processing skills, cognitive processing skills, and statistical learning (auditory and visual forms) in age-matched musicians (N = 17) and non-musicians (N = 18). Musicians had significantly better performance than non-musicians on frequency discrimination and backward digit span. A key finding was that musicians had better auditory, but not visual, statistical learning than non-musicians. Performance on the statistical learning tasks was not correlated with performance on auditory and cognitive measures. Musicians' superior performance on auditory (but not visual) statistical learning suggests that musical expertise is associated with an enhanced ability to detect statistical regularities in auditory stimuli.
The Roles of Experience, Gender, and Individual Differences in Statistical Reasoning
ERIC Educational Resources Information Center
Martin, Nadia; Hughes, Jeffrey; Fugelsang, Jonathan
2017-01-01
We examine the joint effects of gender and experience on statistical reasoning. Participants with various levels of experience in statistics completed the Statistical Reasoning Assessment (Garfield, 2003), along with individual difference measures assessing cognitive ability and thinking dispositions. Although the performance of both genders…
Aero-Optics Measurement System for the AEDC Aero-Optics Test Facility
1991-02-01
Surviving fragments of the report (AEDC-TR-90-20) list a table of pulse energy statistics (150 pulses) and an appendix on the optical performance of heated windows. The aero-optics (AO) measurements are made in a hypersonic wind tunnel, where the requisite extensive statistical database can be developed in a cost- and time-effective manner. At the present time at AEDC, measured AO parameter statistics are derived from sets of image-spot recordings, with a set containing as many as 150 pulses.
ERIC Educational Resources Information Center
Nolan, Meaghan M.; Beran, Tanya; Hecker, Kent G.
2012-01-01
Students with positive attitudes toward statistics are likely to show strong academic performance in statistics courses. Multiple surveys measuring students' attitudes toward statistics exist; however, a comparison of the validity and reliability of interpretations based on their scores is needed. A systematic review of relevant electronic…
ERIC Educational Resources Information Center
Noser, Thomas C.; Tanner, John R.; Shah, Situl
2008-01-01
The purpose of this study was to measure the comprehension of basic mathematical skills of students enrolled in statistics classes at a large regional university, and to determine if the scores earned on a basic math skills test are useful in forecasting student performance in these statistics classes, and to determine if students' basic math…
Performance index for virtual reality phacoemulsification surgery
NASA Astrophysics Data System (ADS)
Söderberg, Per; Laurell, Carl-Gustaf; Simawi, Wamidh; Skarman, Eva; Nordqvist, Per; Nordh, Leif
2007-02-01
We have developed a virtual reality (VR) simulator for phacoemulsification (phaco) surgery. The current work aimed at developing a performance index that characterizes the performance of an individual trainee. We recorded measurements of 28 response variables during three iterated surgical sessions in 9 subjects naive to cataract surgery and 6 experienced cataract surgeons, separately for the sculpting phase and the evacuation phase of phacoemulsification surgery. We further defined a specific performance index for each measurement variable and a total performance index for a specific trainee. The distribution of the total performance index was relatively even for both the sculpting and the evacuation phase, indicating that parametric statistics can be used for future comparisons of average total performance indices between groups. The current total performance index for an individual considers all measurement variables with the same weight. It is possible that a future development of the system will indicate that a better characterization of a trainee can be obtained if the various measurement variables are given specific weights. The total performance index developed here is, statistically, an independent observation of that particular trainee.
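The paper's index definitions are not reproduced here, but a toy sketch of the general idea (one index per response variable referenced to an expert distribution, averaged with equal weights into a total index) might look as follows; the squashing function, variable names, and reference values are all hypothetical:

```python
import numpy as np

def variable_index(value, expert_mean, expert_sd):
    """Index for one response variable: 1.0 at the expert mean, decreasing
    with distance from it (hypothetical squashing of a z-score)."""
    z = (value - expert_mean) / expert_sd
    return 1.0 / (1.0 + z ** 2)

def total_index(values, expert_means, expert_sds):
    """Unweighted mean of the per-variable indices, mirroring the
    equal-weight total index described above."""
    idx = [variable_index(v, m, s)
           for v, m, s in zip(values, expert_means, expert_sds)]
    return float(np.mean(idx))

trainee = np.array([42.0, 3.1, 0.8])   # e.g. time, tool path, energy used
means = np.array([30.0, 2.5, 0.6])     # expert reference means
sds = np.array([5.0, 0.4, 0.1])        # expert reference SDs
print(total_index(trainee, means, sds))
```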
Comparison of 2- and 10-micron coherent Doppler lidar performance
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1995-01-01
The performance of 2- and 10-micron coherent Doppler lidar is presented in terms of the statistical distribution of the maximum-likelihood velocity estimator from simulations for fixed range resolution and fixed velocity search space as a function of the number of coherent photoelectrons per estimate. The wavelength dependence of the aerosol backscatter coefficient, the detector quantum efficiency, and the atmospheric extinction produce a simple shift of the performance curves. Results are presented for a typical boundary layer measurement and a space-based measurement for two regimes: the pulse-dominated regime where the signal statistics are determined by the transmitted pulse, and the atmospheric-dominated regime where the signal statistics are determined by the velocity fluctuations over the range gate. The optimal choice of wavelength depends on the problem under consideration.
Designing Intervention Studies: Selected Populations, Range Restrictions, and Statistical Power
Miciak, Jeremy; Taylor, W. Pat; Stuebing, Karla K.; Fletcher, Jack M.; Vaughn, Sharon
2016-01-01
An appropriate estimate of statistical power is critical for the design of intervention studies. Although the inclusion of a pretest covariate in the test of the primary outcome can increase statistical power, samples selected on the basis of pretest performance may demonstrate range restriction on the selection measure and other correlated measures. This can result in attenuated pretest-posttest correlations, reducing the variance explained by the pretest covariate. We investigated the implications of two potential range restriction scenarios: direct truncation on a selection measure and indirect range restriction on correlated measures. Empirical and simulated data indicated direct range restriction on the pretest covariate greatly reduced statistical power and necessitated sample size increases of 82%–155% (dependent on selection criteria) to achieve equivalent statistical power to parameters with unrestricted samples. However, measures demonstrating indirect range restriction required much smaller sample size increases (32%–71%) under equivalent scenarios. Additional analyses manipulated the correlations between measures and pretest-posttest correlations to guide planning experiments. Results highlight the need to differentiate between selection measures and potential covariates and to investigate range restriction as a factor impacting statistical power. PMID:28479943
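A small simulation makes the direct-truncation effect concrete: selecting the lowest-scoring quarter of a sample on the pretest attenuates the pretest-posttest correlation, and with it the variance the covariate can explain. The numbers below are illustrative, not the study's:

```python
import numpy as np

rng = np.random.default_rng(42)
rho, n = 0.7, 100_000
pre = rng.standard_normal(n)
post = rho * pre + np.sqrt(1 - rho**2) * rng.standard_normal(n)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

print(corr(pre, post))               # ~0.70 in the unrestricted sample

# Direct truncation: keep only the lowest-scoring 25% on the pretest.
cut = np.quantile(pre, 0.25)
sel = pre < cut
print(corr(pre[sel], post[sel]))     # attenuated to roughly 0.43
```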
Paetkau, D.; Waits, L. P.; Clarkson, P. L.; Craighead, L.; Strobeck, C.
1997-12-01
A large microsatellite data set from three species of bear (Ursidae) was used to empirically test the performance of six genetic distance measures in resolving relationships at a variety of scales ranging from adjacent areas in a continuous distribution to species that diverged several million years ago. At the finest scale, while some distance measures performed extremely well, statistics developed specifically to accommodate the mutational processes of microsatellites performed relatively poorly, presumably because of the relatively higher variance of these statistics. At the other extreme, no statistic was able to resolve the close sister relationship of polar bears and brown bears from more distantly related pairs of species. This failure is most likely due to constraints on allele distributions at microsatellite loci. At intermediate scales, both within continuous distributions and in comparisons to insular populations of late Pleistocene origin, it was not possible to define the point where linearity was lost for each of the statistics, except that it is clearly lost after relatively short periods of independent evolution. All of the statistics were affected by the amount of genetic diversity within the populations being compared, significantly complicating the interpretation of genetic distance data. PMID:9409849
40 CFR 1065.12 - Approval of alternate procedures.
Code of Federal Regulations, 2010 CFR
2010-07-01
... engine meets all applicable emission standards according to specified procedures. (iii) Use statistical.... (e) We may give you specific directions regarding methods for statistical analysis, or we may approve... statistical tests. Perform the tests as follows: (1) Repeat measurements for all applicable duty cycles at...
Nicolopoulou, E P; Ztoupis, I N; Karabetsos, E; Gonos, I F; Stathopulos, I A
2015-04-01
The second round of an interlaboratory comparison scheme on radio frequency electromagnetic field measurements has been conducted in order to evaluate the overall performance of laboratories that perform measurements in the vicinity of mobile phone base stations and broadcast antenna facilities. The participants recorded the electric field strength produced by two high frequency signal generators inside an anechoic chamber in three measurement scenarios, with the antennas each time transmitting different signals at the FM, VHF, UHF and GSM frequency bands. In each measurement scenario, the participants also used their measurements to calculate the relative exposure ratios. The results were evaluated at each test level by calculating performance statistics (z-scores and En numbers). Subsequently, possible sources of error for each participating laboratory were discussed, and the overall evaluation of their performance was determined using an aggregated performance statistic. A comparison between the two rounds demonstrates the necessity of the scheme.
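The z-score and En number are standard proficiency-testing statistics (in the spirit of ISO 13528/17043); a minimal sketch with invented field-strength values:

```python
import numpy as np

def z_score(x_lab, x_ref, sigma):
    """z = (x_lab - x_ref) / sigma; |z| <= 2 is conventionally satisfactory."""
    return (x_lab - x_ref) / sigma

def e_n(x_lab, x_ref, U_lab, U_ref):
    """En number with expanded (k=2) uncertainties; |En| <= 1 is satisfactory."""
    return (x_lab - x_ref) / np.sqrt(U_lab**2 + U_ref**2)

# Hypothetical field-strength result (V/m) against the reference value.
print(z_score(3.9, 3.5, 0.3))      # ~1.33 -> satisfactory
print(e_n(3.9, 3.5, 0.4, 0.2))     # ~0.89 -> satisfactory
```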
Linking Performance Measures to Resource Allocation: Exploring Unmapped Terrain.
ERIC Educational Resources Information Center
Ewell, Peter T.
1999-01-01
Examination of how (and whether) particular types of institutional performance measures can be beneficially used in making resource allocation decisions finds that only easily verifiable "hard" statistics should be used in classic performance funding approaches, although surveys and the use of good practices by institutions may…
The effects of estimation of censoring, truncation, transformation and partial data vectors
NASA Technical Reports Server (NTRS)
Hartley, H. O.; Smith, W. B.
1972-01-01
The purpose of this research was to address statistical problems concerning the estimation of distributions for purposes of predicting and measuring assembly performance as it appears in biological and physical situations. Various statistical procedures were proposed for producing the statistical distributions of the outcomes of biological and physical situations that employ characteristics measured on constituent parts. The techniques are described.
Statistical analysis of oil percolation through pressboard measured by optical recording
NASA Astrophysics Data System (ADS)
Rogalski, Przemysław; Kozak, Czesław
2017-08-01
The paper presents a measuring station used to measure the percolation of transformer oil through electrotechnical pressboard. The percolation rate of Nytro Taurus insulating oil, manufactured by Nynas, through pressboard made by Pucaro was investigated. Approximately 60 samples of Pucaro pressboard, widely used for the insulation of power transformers, were measured. Statistical analysis of the oil percolation times was performed. The measurements made it possible to determine the distribution of capillary diameters occurring in the pressboard.
Measuring Student and School Progress with the California API. CSE Technical Report.
ERIC Educational Resources Information Center
Thum, Yeow Meng
This paper focuses on interpreting the major conceptual features of California's Academic Performance Index (API) as a coherent set of statistical procedures. To facilitate a characterization of its statistical properties, the paper casts the index as a simple weighted average of the subjective worth of students' normative performance and presents…
Basic Math Skills and Performance in an Introductory Statistics Course
ERIC Educational Resources Information Center
Johnson, Marianne; Kuennen, Eric
2006-01-01
We identify the student characteristics most associated with success in an introductory business statistics class, placing special focus on the relationship between student math skills and course performance, as measured by student grade in the course. To determine which math skills are important for student success, we examine (1) whether the…
Snell, Kym Ie; Ensor, Joie; Debray, Thomas Pa; Moons, Karel Gm; Riley, Richard D
2017-01-01
If individual participant data are available from multiple studies or clusters, then a prediction model can be externally validated multiple times. This allows the model's discrimination and calibration performance to be examined across different settings. Random-effects meta-analysis can then be used to quantify overall (average) performance and heterogeneity in performance. This typically assumes a normal distribution of 'true' performance across studies. We conducted a simulation study to examine this normality assumption for various performance measures relating to a logistic regression prediction model. We simulated data across multiple studies with varying degrees of variability in baseline risk or predictor effects and then evaluated the shape of the between-study distribution in the C-statistic, calibration slope, calibration-in-the-large, and E/O statistic, and possible transformations thereof. We found that a normal between-study distribution was usually reasonable for the calibration slope and calibration-in-the-large; however, the distributions of the C-statistic and E/O were often skewed across studies, particularly in settings with large variability in the predictor effects. Normality was vastly improved when using the logit transformation for the C-statistic and the log transformation for E/O, and therefore we recommend these scales to be used for meta-analysis. An illustrated example is given using a random-effects meta-analysis of the performance of QRISK2 across 25 general practices.
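A sketch of the recommended approach, pooling hypothetical cluster-specific C-statistics on the logit scale with a DerSimonian-Laird random-effects model; the delta-method variance and the invented data are assumptions, not the paper's:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of per-study estimates y with variances v."""
    y, v = np.asarray(y), np.asarray(v)
    w = 1.0 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    return np.sum(w_re * y) / np.sum(w_re), tau2

# Hypothetical C-statistics and standard errors from 5 validation clusters.
c = np.array([0.72, 0.78, 0.69, 0.81, 0.75])
se = np.array([0.020, 0.030, 0.025, 0.040, 0.020])

# Pool on the logit scale, as recommended above; the delta method gives
# var(logit C) ~= se^2 / (C * (1 - C))^2.
logit_c = np.log(c / (1 - c))
var_logit = (se / (c * (1 - c))) ** 2
mu, tau2 = dersimonian_laird(logit_c, var_logit)
print(1.0 / (1.0 + np.exp(-mu)), tau2)   # pooled C back on the original scale
```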
Obuchowski, Nancy A; Barnhart, Huiman X; Buckler, Andrew J; Pennello, Gene; Wang, Xiao-Feng; Kalpathy-Cramer, Jayashree; Kim, Hyun J Grace; Reeves, Anthony P
2015-02-01
Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients' disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms' bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms' performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies. PMID:24919828
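Two of the standard metrics discussed, the repeatability coefficient from replicate measurements and Bland-Altman limits of agreement between algorithms, can be sketched as follows with invented nodule volumes (this is generic methodology, not the paper's code):

```python
import numpy as np

def repeatability(m1, m2):
    """Within-subject SD and repeatability coefficient (RC = 2.77 * wSD)
    from two replicate measurements per case."""
    d = np.asarray(m1) - np.asarray(m2)
    wsd = np.sqrt(np.mean(d ** 2) / 2.0)
    return wsd, 2.77 * wsd

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two algorithms."""
    d = np.asarray(a) - np.asarray(b)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical repeat volume measurements (mm^3) by one algorithm ...
scan1 = np.array([512.0, 730.0, 401.0, 655.0])
scan2 = np.array([498.0, 741.0, 396.0, 668.0])
print(repeatability(scan1, scan2))
# ... and a comparison of two algorithms on the same nodules.
alg_a = np.array([512.0, 730.0, 401.0, 655.0])
alg_b = np.array([530.0, 725.0, 415.0, 660.0])
print(bland_altman(alg_a, alg_b))
```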
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two clustering examples with ordinal variables, a more challenging variable type to analyze, using the Wisconsin Breast Cancer Data (WBCD). In an ordinal-to-interval scale conversion example, a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively; their specificity was comparable. Conclusion: using a clustering algorithm appropriate to the measurement scale of the variables in a study grants high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
The Impact of Measurement Noise in GPA Diagnostic Analysis of a Gas Turbine Engine
NASA Astrophysics Data System (ADS)
Ntantis, Efstratios L.; Li, Y. G.
2013-12-01
The performance diagnostic analysis of a gas turbine is accomplished by estimating a set of internal engine health parameters from available sensor measurements. No physical measuring instrument, however, can ever completely eliminate measurement uncertainties. Sensor measurements are often distorted by noise and bias, leading to inaccurate estimation results. This paper explores the impact of measurement noise on gas turbine Gas Path Analysis (GPA). The analysis is demonstrated with a test case in which the gas turbine performance simulation and diagnostics code TURBOMATCH is used to build a performance model of an engine similar to the Rolls-Royce Trent 500 turbofan and to carry out the diagnostic analysis in the presence of different levels of measurement noise. Finally, to improve the reliability of the diagnostic results, a statistical analysis of the data scattering caused by sensor uncertainties is made. The diagnostic tool used for this statistical analysis is a model-based method utilizing non-linear GPA.
A novel measure and significance testing in data analysis of cell image segmentation.
Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L
2017-03-14
Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing the CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient is not described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights. The MERs are for segmenting each single cell in the population. The TER is fully supported by the pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of TER are computed based on the SE of MER that is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct the hypothesis testing, while the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure TER of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting the significance testing.
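A sketch of the size-weighted aggregation and bootstrap standard error described above, with invented per-cell MERs and cell sizes; the exact aggregation and resampling scheme in the paper may differ:

```python
import numpy as np

def ter(mers, sizes):
    """Total error rate: size-weighted aggregate of per-cell MERs."""
    mers, sizes = np.asarray(mers), np.asarray(sizes)
    return float(np.sum(sizes * mers) / np.sum(sizes))

def ter_se(mers, sizes, n_boot=2000, seed=0):
    """Bootstrap SE of TER by resampling cells with replacement."""
    rng = np.random.default_rng(seed)
    mers, sizes = np.asarray(mers), np.asarray(sizes)
    idx = rng.integers(0, len(mers), size=(n_boot, len(mers)))
    stats = [ter(mers[i], sizes[i]) for i in idx]
    return float(np.std(stats, ddof=1))

mers = [0.02, 0.10, 0.05, 0.01, 0.08]   # hypothetical per-cell error rates
sizes = [1500, 400, 900, 2000, 600]     # hypothetical cell sizes in pixels
t, se = ter(mers, sizes), ter_se(mers, sizes)
print(t, se, (t - 1.96 * se, t + 1.96 * se))   # TER with a 95% CI
```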
Children's Services Statistical Neighbour Benchmarking Tool. Practitioner User Guide
ERIC Educational Resources Information Center
National Foundation for Educational Research, 2007
2007-01-01
Statistical neighbour models provide one method for benchmarking progress. For each local authority (LA), these models designate a number of other LAs deemed to have similar characteristics. These designated LAs are known as statistical neighbours. Any LA may compare its performance (as measured by various indicators) against its statistical…
Measuring the Success of an Academic Development Programme: A Statistical Analysis
ERIC Educational Resources Information Center
Smith, L. C.
2009-01-01
This study uses statistical analysis to estimate the impact of first-year academic development courses in microeconomics, statistics, accountancy, and information systems, offered by the University of Cape Town's Commerce Academic Development Programme, on students' graduation performance relative to that achieved by mainstream students. The data…
Benchmarking can add up for healthcare accounting.
Czarnecki, M T
1994-09-01
In 1993, a healthcare accounting and finance benchmarking survey of hospital and nonhospital organizations gathered statistics about key common performance areas. A low response did not allow for statistically significant findings, but the survey identified performance measures that can be used in healthcare financial management settings. This article explains the benchmarking process and examines some of the 1993 study's findings.
ERIC Educational Resources Information Center
Tabor, Josh
2010-01-01
On the 2009 AP® Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
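Two candidate skewness statistics of the kind students might propose, the standardised third moment and Pearson's median skewness, can be compared on a right-skewed sample as follows (illustrative only; the exam responses themselves are not reproduced):

```python
import numpy as np
from scipy import stats

def moment_skewness(x):
    """Standardised third moment, g1."""
    x = np.asarray(x)
    z = (x - x.mean()) / x.std()
    return float(np.mean(z ** 3))

def pearson_median_skewness(x):
    """3 * (mean - median) / sd, a simple robust alternative."""
    x = np.asarray(x)
    return float(3 * (x.mean() - np.median(x)) / x.std(ddof=1))

rng = np.random.default_rng(7)
sample = rng.exponential(scale=2.0, size=500)   # right-skewed population
print(moment_skewness(sample))
print(pearson_median_skewness(sample))
print(stats.skew(sample))                        # library version of g1
```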
Measuring primary care practice performance within an integrated delivery system: a case study.
Stewart, Louis J; Greisler, David
2002-01-01
This article examines the use of an integrated performance measurement system to plan and control primary care service delivery within an integrated delivery system. We review a growing body of literature that focuses on the development and implementation of management reporting systems among healthcare providers. Our study extends the existing literature by examining the use of performance information generated by an integrated performance measurement system within a healthcare organization. We conduct our examination through a case study of the WMG Primary Care Medicine Group, the primary care medical group practice of WellSpan Health System. WellSpan Health System is an integrated delivery system that serves south central Pennsylvania and northern Maryland. Our study examines the linkage between WellSpan Health's strategic objectives and its primary care medicine group's integrated performance measurement system. The conceptual design of this integrated performance measurement system combines financial metrics with practice management and clinical operating metrics to provide a more complete picture of medical group performance. Our findings demonstrate that WellSpan Health was able to achieve superior financial results despite a weak linkage between its integrated performance measurement system and its strategic objectives. WellSpan Health achieved this objective for its primary care medicine group by linking clinical performance information to physician compensation and reporting practice management performance through the use of statistical process charts. They found that the combined mechanisms of integrated performance measurement and statistical process control charts improved organizational learning and communications between organizational stakeholders.
Markovic, Gabriela; Schult, Marie-Louise; Bartfai, Aniko; Elg, Mattias
2017-01-31
Progress in early cognitive recovery after acquired brain injury is uneven and unpredictable, making the evaluation of rehabilitation complex, and time-series measurements are susceptible to statistical change due to process variation. The aim was to evaluate the feasibility of using a time-series method, statistical process control, in early cognitive rehabilitation. Participants were 27 patients with acquired brain injury undergoing interdisciplinary rehabilitation of attention within 4 months post-injury. The outcome measure, the Paced Auditory Serial Addition Test, was analysed using statistical process control. Statistical process control identifies if and when change occurs in the process, according to 3 patterns: rapid, steady or stationary performers. The statistical process control method was adjusted, in terms of constructing the baseline and the total number of measurement points, in order to measure a process in change. Statistical process control methodology is feasible for use in early cognitive rehabilitation, since it provides information about change in a process, thus enabling adjustment of the individual treatment response. Together with the results indicating discernible subgroups that respond differently to rehabilitation, statistical process control could be a valid tool in clinical decision-making. This study is a starting-point in understanding the rehabilitation process using a real-time-measurements approach.
Villani, N; Gérard, K; Marchesi, V; Huger, S; François, P; Noël, A
2010-06-01
The first purpose of this study was to illustrate the contribution of statistical process control to improving the security of intensity-modulated radiotherapy (IMRT) treatments. This improvement is possible by controlling the dose delivery process, characterized by pretreatment quality control results; it therefore requires putting portal dosimetry measurements under control (the ionisation chamber measurements were already monitored with statistical process control tools). The second objective was to state whether it is possible to substitute portal dosimetry for the ionisation chamber in order to optimize the time devoted to pretreatment quality control. At the Alexis-Vautrin center, pretreatment quality controls in IMRT for prostate and head-and-neck treatments were performed for each beam of each patient. These controls were made with an ionisation chamber, which is the reference detector for absolute dose measurement, and with portal dosimetry for the verification of the dose distribution. Statistical process control is a statistical analysis method, originating in industry, used to control and improve the quality of the studied process. It uses graphical tools, such as control charts, to follow up the process and warn the operator in case of failure, and quantitative tools to evaluate the ability of the process to respect guidelines: the capability study. The study was performed on 450 head-and-neck beams and 100 prostate beams. Control charts of the mean and standard deviation were established; they revealed drifts, both slow and weak and strong and fast, and detected a special cause that had been introduced (a manual shift of the leaf gap of the multileaf collimator). The correlation between the dose measured at one point with the EPID and with the ionisation chamber was evaluated at more than 97%, and cases of disagreement between the two measurements were identified. The study demonstrated the feasibility of reducing the time devoted to pretreatment controls by substituting EPID measurements for those of the ionisation chamber, and showed that statistical process control monitoring of the data provides a security guarantee.
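A minimal sketch of the control-chart and capability machinery referred to above, using a Shewhart individuals chart with moving-range sigma and a Cpk-style capability index against an assumed ±3% dose-deviation tolerance (the centre's actual charts, limits, and tolerances are not reproduced):

```python
import numpy as np

def individuals_limits(x):
    """Shewhart individuals chart: centre line and 3-sigma limits, with
    sigma estimated from the average moving range (d2 = 1.128 for n=2)."""
    x = np.asarray(x, dtype=float)
    sigma = np.abs(np.diff(x)).mean() / 1.128
    centre = x.mean()
    return centre, centre - 3 * sigma, centre + 3 * sigma

def capability(x, lsl, usl):
    """Cpk against the tolerance interval [lsl, usl]."""
    mu, sd = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sd)

# Hypothetical per-beam dose deviations (%) between EPID and planned dose.
dev = np.array([0.4, -0.2, 0.1, 0.6, -0.5, 0.2, 0.0, 0.3, -0.1, 0.5])
print(individuals_limits(dev))
print(capability(dev, lsl=-3.0, usl=3.0))   # tolerance of +/-3% assumed
```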
Performance Measurement and Investment Objectives for Educational Endowment Funds.
ERIC Educational Resources Information Center
Williamson, J. Peter
This book is one of a series of projects developed to increase understanding of the management of educational endowment funds. Specifics of performance measurement and the setting of objectives are emphasized. Part one deals with measurement of the rate of return, or profitability, of an endowment fund. Part two reviews some statistical measures…
Siegelman, Noam; Bogaerts, Louisa; Kronenfeld, Ofer; Frost, Ram
2017-10-07
From a theoretical perspective, most discussions of statistical learning (SL) have focused on the possible "statistical" properties that are the object of learning. Much less attention has been given to defining what "learning" is in the context of "statistical learning." One major difficulty is that SL research has been monitoring participants' performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization stream. Is that all there is to characterizing SL abilities? Here we adopt a novel perspective for investigating the processing of regularities in the visual modality. By tracking online performance in a self-paced SL paradigm, we focus on the trajectory of learning. In a set of three experiments we show that this paradigm provides a reliable and valid signature of SL performance, and it offers important insights for understanding how statistical regularities are perceived and assimilated in the visual modality. This demonstrates the promise of integrating different operational measures to our theory of SL.
Predicting driving performance in older adults: we are not there yet!
Bédard, Michel; Weaver, Bruce; Darzins, Peteris; Porter, Michelle M
2008-08-01
We set up this study to determine the predictive value of approaches for which a statistical association with driving performance has been documented. We determined the statistical association (magnitude of association and probability of occurrence by chance alone) between four different predictors (the Mini-Mental State Examination, Trails A test, Useful Field of View [UFOV], and a composite measure of past driving incidents) and driving performance. We then explored the predictive value of these measures with receiver operating characteristic (ROC) curves and various cutoff values. We identified associations between the predictors and driving performance well beyond the play of chance (p < .01). Nonetheless, the predictors had limited predictive value with areas under the curve ranging from .51 to .82. Statistical associations are not sufficient to infer adequate predictive value, especially when crucial decisions such as whether one can continue driving are at stake. The predictors we examined have limited predictive value if used as stand-alone screening tests.
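A sketch of the ROC analysis described, with invented screening scores: even a predictor with a genuine statistical association can yield only a modest area under the curve and unattractive sensitivity/specificity trade-offs at every cutoff.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(3)
# Hypothetical screening scores; higher = worse expected driving.
fail = rng.normal(6.0, 2.0, 40)      # drivers who failed the on-road test
passed = rng.normal(4.5, 2.0, 160)   # drivers who passed
y = np.r_[np.ones(40), np.zeros(160)]
score = np.r_[fail, passed]

print(roc_auc_score(y, score))       # modest AUC despite a 'real' effect
fpr, tpr, thr = roc_curve(y, score)
# Sensitivity/specificity trade-off at a subsample of candidate cutoffs:
for f, t, c in list(zip(fpr, tpr, thr))[::20]:
    print(f"cutoff={c:.2f}  sens={t:.2f}  spec={1 - f:.2f}")
```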
Engineering Students Designing a Statistical Procedure for Quantifying Variability
ERIC Educational Resources Information Center
Hjalmarson, Margret A.
2007-01-01
The study examined first-year engineering students' responses to a statistics task that asked them to generate a procedure for quantifying variability in a data set from an engineering context. Teams used technological tools to perform computations, and their final product was a ranking procedure. The students could use any statistical measures,…
Statistical Model of Dynamic Markers of the Alzheimer's Pathological Cascade.
Balsis, Steve; Geraci, Lisa; Benge, Jared; Lowe, Deborah A; Choudhury, Tabina K; Tirso, Robert; Doody, Rachelle S
2018-05-05
Alzheimer's disease (AD) is a progressive disease reflected in markers across assessment modalities, including neuroimaging, cognitive testing, and evaluation of adaptive function. Identifying a single continuum of decline across assessment modalities in a single sample is statistically challenging because of the multivariate nature of the data. To address this challenge, we implemented advanced statistical analyses designed specifically to model complex data across a single continuum. We analyzed data from the Alzheimer's Disease Neuroimaging Initiative (ADNI; N = 1,056), focusing on indicators from the assessments of magnetic resonance imaging (MRI) volume, fluorodeoxyglucose positron emission tomography (FDG-PET) metabolic activity, cognitive performance, and adaptive function. Item response theory was used to identify the continuum of decline. Then, through a process of statistical scaling, indicators across all modalities were linked to that continuum and analyzed. Findings revealed that measures of MRI volume, FDG-PET metabolic activity, and adaptive function added measurement precision beyond that provided by cognitive measures, particularly in the relatively mild range of disease severity. More specifically, MRI volume, and FDG-PET metabolic activity become compromised in the very mild range of severity, followed by cognitive performance and finally adaptive function. Our statistically derived models of the AD pathological cascade are consistent with existing theoretical models.
Blankenship, Tashauna L.; O'Neill, Meagan; Deater-Deckard, Kirby; Diana, Rachel A.; Bell, Martha Ann
2016-01-01
The contributions of hemispheric-specific electrophysiology (electroencephalogram or EEG) and independent executive functions (inhibitory control, working memory, cognitive flexibility) to episodic memory performance were examined using abstract paintings. Right hemisphere frontotemporal functional connectivity during encoding and retrieval, measured via EEG alpha coherence, statistically predicted performance on recency but not recognition judgments for the abstract paintings. Theta coherence, however, did not predict performance. Likewise, cognitive flexibility statistically predicted performance on recency judgments, but not recognition. These findings suggest that recognition and recency operate via separate electrophysiological and executive mechanisms. PMID:27388478
Evaluation of different models to estimate the global solar radiation on inclined surface
NASA Astrophysics Data System (ADS)
Demain, C.; Journée, M.; Bertrand, C.
2012-04-01
Global and diffuse solar radiation intensities are, in general, measured on horizontal surfaces, whereas stationary solar conversion systems (both flat-plate solar collectors and solar photovoltaics) are mounted on inclined surfaces to maximize the amount of solar radiation incident on the collector surface. Consequently, the solar radiation incident on a tilted surface has to be determined by converting solar radiation measured on a horizontal surface to the tilted surface of interest. This study evaluates the performance of 14 models transposing 10-min, hourly, and daily diffuse solar irradiation from horizontal to inclined surfaces. Solar radiation data from 8 months (April to November 2011), which include diverse atmospheric conditions and solar altitudes, measured on the roof of the radiation tower of the Royal Meteorological Institute of Belgium in Uccle (longitude 4.35°, latitude 50.79°), were used for validation purposes. The individual model performance is assessed by an intercomparison between the calculated and measured global solar radiation on the south-oriented surface tilted at 50.79°, using statistical methods. The relative performance of the different models under different sky conditions has been studied. Comparison of the statistical errors of the different radiation models as a function of the clearness index shows that some models perform better under one type of sky condition. Combining different models acting under different sky conditions can reduce the statistical error between the measured and estimated global solar radiation. As the models described in this paper were developed for hourly data inputs, the statistical error indices are lowest for hourly data and increase for 10-min and daily data.
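The simplest member of the transposition family evaluated above is the classical isotropic-sky model, which splits the tilted-plane irradiance into beam, sky-diffuse, and ground-reflected terms. The following Python sketch illustrates that idea only; it is not a reimplementation of any of the 14 models tested, and the function name, argument conventions, and default albedo are assumptions.

```python
import numpy as np

def isotropic_tilted_irradiance(ghi, dhi, sun_zenith_deg, sun_azimuth_deg,
                                tilt_deg, surface_azimuth_deg, albedo=0.2):
    """Transpose horizontal irradiance (W/m^2) to a tilted plane,
    assuming an isotropic sky. All angles are in degrees."""
    z = np.radians(sun_zenith_deg)
    beta = np.radians(tilt_deg)
    # Cosine of the angle of incidence on the tilted plane
    cos_aoi = (np.cos(z) * np.cos(beta)
               + np.sin(z) * np.sin(beta)
               * np.cos(np.radians(sun_azimuth_deg - surface_azimuth_deg)))
    bhi = ghi - dhi                                  # beam horizontal irradiance
    rb = np.maximum(cos_aoi, 0.0) / np.maximum(np.cos(z), 1e-6)
    beam = bhi * rb                                  # beam on the tilted plane
    sky_diffuse = dhi * (1 + np.cos(beta)) / 2       # isotropic sky diffuse
    ground = ghi * albedo * (1 - np.cos(beta)) / 2   # ground-reflected
    return beam + sky_diffuse + ground
```

For the set-up described above, the call would use tilt_deg=50.79 and a south-facing surface azimuth; the guard on cos(z) simply avoids division blow-ups near sunrise and sunset.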
Silich, Bert A; Yang, James J
2012-05-01
Measuring workplace performance is important to emergency department management. If an unreliable model is used, the results will be inaccurate. Use of inaccurate results to make decisions, such as how to distribute the incentive pay, will lead to rewarding the wrong people and will potentially demoralize top performers. This article demonstrates a statistical model to reliably measure the work accomplished, which can then be used as a performance measurement.
Statistical significance of trace evidence matches using independent physicochemical measurements
NASA Astrophysics Data System (ADS)
Almirall, Jose R.; Cole, Michael; Furton, Kenneth G.; Gettinby, George
1997-02-01
A statistical approach to the significance of glass evidence is proposed using independent physicochemical measurements and chemometrics. Traditional interpretation of the significance of trace evidence matches or exclusions relies on qualitative descriptors such as 'indistinguishable from,' 'consistent with,' 'similar to,' etc. By performing physical and chemical measurements which are independent of one another, the significance of object exclusions or matches can be evaluated statistically. One of the problems with this approach is that the human brain is excellent at recognizing and classifying patterns and shapes but performs less well when an object is represented by a numerical list of attributes. Chemometrics can be employed to group similar objects using clustering algorithms and to provide statistical significance in a quantitative manner. This approach is enhanced when population databases exist or can be created, so that the data in question can be evaluated against these databases. Since the selection of the variables used and their pre-processing can greatly influence the outcome, several different methods could be employed in order to obtain a more complete picture of the information contained in the data. Presently, we report on the analysis of glass samples using refractive index measurements and the quantitative analysis of the concentrations of the metals Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti and Zr. The extension of this general approach to fiber and paint comparisons is also discussed. This statistical approach should not replace the current interpretative approaches to trace evidence matches or exclusions but rather yields an additional quantitative measure. The lack of sufficient general population databases containing the needed physicochemical measurements and the potential for confusion arising from statistical analysis currently hamper this approach, and ways of overcoming these obstacles are presented.
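As an illustration of the chemometric grouping step, the sketch below standardizes the refractive index and elemental concentrations and then clusters fragments hierarchically. The data are synthetic and Ward linkage is just one reasonable choice; the paper does not prescribe this particular algorithm.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Rows: glass fragments; columns: refractive index plus the nine element
# concentrations (Mg, Al, Ca, Fe, Mn, Ba, Sr, Ti, Zr). Two synthetic sources.
rng = np.random.default_rng(0)
samples = np.vstack([rng.normal(0.0, 1.0, (6, 10)),
                     rng.normal(3.0, 1.0, (6, 10))])

# Standardize each variable so RI and trace elements carry equal weight.
z = (samples - samples.mean(axis=0)) / samples.std(axis=0, ddof=1)

# Agglomerative clustering on Euclidean distances between fragments.
tree = linkage(pdist(z), method="ward")
groups = fcluster(tree, t=2, criterion="maxclust")
print(groups)  # fragments sharing a label are candidates for a common source
```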
Jeffrey J. Barry; John M. Buffington; Peter Goodwin; John .G. King; William W. Emmett
2008-01-01
Previous studies assessing the accuracy of bed-load transport equations have considered equation performance statistically based on paired observations of measured and predicted bed-load transport rates. However, transport measurements were typically taken during low flows, biasing the assessment of equation performance toward low discharges, and because equation...
Zhou, Xiangrong; Xu, Rui; Hara, Takeshi; Hirano, Yasushi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Kido, Shoji; Fujita, Hiroshi
2014-07-01
The shapes of the inner organs are important information for medical image analysis. Statistical shape modeling provides a way of quantifying and measuring shape variations of the inner organs in different patients. In this study, we developed a universal scheme that can be used for building the statistical shape models for different inner organs efficiently. This scheme combines the traditional point distribution modeling with a group-wise optimization method based on a measure called minimum description length to provide a practical means for 3D organ shape modeling. In experiments, the proposed scheme was applied to the building of five statistical shape models for hearts, livers, spleens, and right and left kidneys by use of 50 cases of 3D torso CT images. The performance of these models was evaluated by three measures: model compactness, model generalization, and model specificity. The experimental results showed that the constructed shape models have good "compactness" and satisfactory "generalization" performance for different organ shape representations; however, the "specificity" of these models should be improved in the future.
Dai, Qi; Yang, Yanchun; Wang, Tianming
2008-10-15
Many proposed statistical measures can efficiently compare biological sequences to further infer their structures, functions and evolutionary information. They are related in spirit because all of them try to use information on the k-word distributions, a Markov model, or both. Motivated by adding k-word distributions to a Markov model directly, we investigated two novel statistical measures for sequence comparison, called wre.k.r and S2.k.r. The proposed measures were tested by similarity search, evaluation on functionally related regulatory sequences, and phylogenetic analysis. This offers a systematic and quantitative experimental assessment of our measures. Moreover, we compared our results with those of alignment-based and alignment-free methods. We grouped our experiments into two sets. The first one, performed via ROC (receiver operating characteristic) analysis, aims at assessing the intrinsic ability of our statistical measures to search for similar sequences in a database and to discriminate functionally related regulatory sequences from unrelated sequences. The second one aims at assessing how well our statistical measures can be used for phylogenetic analysis. The experimental assessment demonstrates that our similarity measures, which incorporate k-word distributions into a Markov model, are more efficient.
NASA Astrophysics Data System (ADS)
Azila Che Musa, Nor; Mahmud, Zamalia; Baharun, Norhayati
2017-09-01
One of the important skills required of any student learning statistics is knowing how to solve statistical problems correctly using appropriate statistical methods. This will enable them to arrive at a conclusion and make a significant contribution and decision for the society. In this study, a group of 22 students majoring in statistics at UiTM Shah Alam were given problems relating to topics on testing of hypothesis which required them to solve the problems using the confidence interval, traditional and p-value approaches. Hypothesis testing is one of the techniques used in solving real problems and it is listed as one of the concepts students find difficult to grasp. The objectives of this study are to explore students' perceived and actual ability in solving statistical problems and to determine which items in statistical problem solving students find difficult to grasp. Students' perceived and actual ability were measured based on the instruments developed from the respective topics. Rasch measurement tools such as the Wright map and item measures for fit statistics were used to accomplish the objectives. Data were collected and analysed using the Winsteps 3.90 software, which is developed based on the Rasch measurement model. The results showed that students perceived themselves as moderately competent in solving the statistical problems using the confidence interval and p-value approaches even though their actual performance showed otherwise. Item measures for fit statistics also showed that the maximum estimated measures were found on two problems. These measures indicate that none of the students attempted these problems correctly, due to reasons which include their lack of understanding of confidence intervals and probability values.
Furlan, Leonardo; Sterr, Annette
2018-01-01
Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed to by learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
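The two statistics described above reduce to a pair of short formulas. A minimal sketch, assuming the usual conventions (SEM = baseline SD x sqrt(1 - reliability); MDC at ~95% confidence = 1.96 x sqrt(2) x SEM) and invented numbers:

```python
import math

def sem(baseline_sd, reliability):
    """Standard error of measurement from the baseline standard deviation
    and a test-retest reliability index (e.g., an ICC)."""
    return baseline_sd * math.sqrt(1.0 - reliability)

def mdc(baseline_sd, reliability, z=1.96):
    """Minimal detectable change; the sqrt(2) accounts for measurement
    error in both the test and the retest scores."""
    return z * math.sqrt(2.0) * sem(baseline_sd, reliability)

# Example: baseline SD of 120 ms on a motor task with ICC = 0.85
print(round(mdc(120.0, 0.85), 1))  # observed changes below this may be noise
```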
[The research protocol VI: How to choose the appropriate statistical test. Inferential statistics].
Flores-Ruiz, Eric; Miranda-Novales, María Guadalupe; Villasís-Keever, Miguel Ángel
2017-01-01
The statistical analysis can be divided into two main components: descriptive analysis and inferential analysis. Inference consists of drawing conclusions from tests performed on data obtained from a sample of a population. Statistical tests are used in order to establish the probability that a conclusion obtained from a sample is applicable to the population from which it was obtained. However, choosing the appropriate statistical test generally poses a challenge for novice researchers. To choose the statistical test it is necessary to take into account three aspects: the research design, the number of measurements and the scale of measurement of the variables. Statistical tests are divided into two sets, parametric and nonparametric. Parametric tests can only be used if the data show a normal distribution. Choosing the right statistical test will make it easier for readers to understand and apply the results.
Weather related continuity and completeness on Deep Space Ka-band links: statistics and forecasting
NASA Technical Reports Server (NTRS)
Shambayati, Shervin
2006-01-01
This paper introduces the concept of link 'stability' as a means of measuring the continuity of the link. Through this concept, along with the distributions of 'good' and 'bad' periods, the performance of the proposed Ka-band link design method using both forecasting and long-term statistics is analyzed. The results indicate that the proposed link design method has relatively good continuity and completeness characteristics even when only long-term statistics are used, and that the continuity performance further improves when forecasting is employed.
Empirical performance of interpolation techniques in risk-neutral density (RND) estimation
NASA Astrophysics Data System (ADS)
Bahaludin, H.; Abdullah, M. H.
2017-03-01
The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. Firstly, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Secondly, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The results of the LOOCV pricing error show that interpolation using a fourth-order polynomial provides the best fit to option prices, as it has the lowest pricing error.
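The LOOCV pricing error lends itself to a compact implementation. The sketch below applies the idea to polynomial interpolation across strikes with invented quotes; whether the fit is done in price or implied-volatility space, and the choice of an RMS error norm, are assumptions here.

```python
import numpy as np

def loocv_error(strikes, quotes, degree):
    """Leave-one-out error of polynomial interpolation: for each option,
    fit the polynomial to the remaining quotes and measure the squared
    error at the held-out strike."""
    errors = []
    for i in range(len(strikes)):
        mask = np.arange(len(strikes)) != i
        coeffs = np.polyfit(strikes[mask], quotes[mask], degree)
        errors.append((np.polyval(coeffs, strikes[i]) - quotes[i]) ** 2)
    return float(np.sqrt(np.mean(errors)))  # RMS pricing error

strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0, 115.0])
quotes = np.array([12.1, 8.4, 5.3, 3.1, 1.7, 0.9])
for degree in (2, 4):
    print(degree, loocv_error(strikes, quotes, degree))
```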
Early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Ditto, William; Carney, Paul R.
2007-11-01
The performance of five seizure detection schemes (nonlinear embedding delay, Hurst scaling, wavelet scale, autocorrelation, and gradient of accumulated energy) in their ability to detect EEG seizures close to the seizure onset time was evaluated to determine the feasibility of their application in the development of a real-time closed-loop seizure intervention program (RCLSIP). The criteria chosen for the performance evaluation were high statistical robustness, as determined through the predictability index, the sensitivity and the specificity of a given measure in detecting an EEG seizure; the lag in seizure detection with respect to the EEG seizure onset time, as determined through visual inspection; and the computational efficiency of each detection measure. An optimality function was designed to evaluate the overall performance of each measure depending on the criteria chosen. While each of the above measures performed very well in terms of the statistical parameters, the nonlinear embedding delay measure was found to have the highest optimality index due to its ability to detect seizures very close to the EEG seizure onset time, thereby making it the most suitable dynamical measure in the development of an RCLSIP in a rat model with chronic limbic epilepsy.
Humans make efficient use of natural image statistics when performing spatial interpolation.
D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S
2013-12-16
Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.
2009-01-01
In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212
Le Strat, Yann
2017-01-01
The objective of this paper is to evaluate a panel of statistical algorithms for temporal outbreak detection. Based on a large dataset of simulated weekly surveillance time series, we performed a systematic assessment of 21 statistical algorithms, 19 of which are implemented in the R package surveillance, plus two other methods. We estimated the false positive rate (FPR), probability of detection (POD), probability of detection during the first week, sensitivity, specificity, negative and positive predictive values and F1-measure for each detection method. Then, to identify the factors associated with these performance measures, we ran multivariate Poisson regression models adjusted for the characteristics of the simulated time series (trend, seasonality, dispersion, outbreak sizes, etc.). The FPR ranged from 0.7% to 59.9% and the POD from 43.3% to 88.7%. Some methods had a very high specificity, up to 99.4%, but a low sensitivity. Methods with a high sensitivity (up to 79.5%) had a low specificity. All methods had a high negative predictive value, over 94%, while positive predictive values ranged from 6.5% to 68.4%. Multivariate Poisson regression models showed that performance measures were strongly influenced by the characteristics of the time series. Past or current outbreak size and duration strongly influenced detection performance. PMID:28715489
The effect of various factors on the masticatory performance of removable denture wearer
NASA Astrophysics Data System (ADS)
Pratama, S.; Koesmaningati, H.; Kusdhany, L. S.
2017-08-01
An individual’s masticatory performance concerns his/her ability to break down food in order to facilitate digestion, and it therefore plays an important role in nutrition. Removable dentures are used to rehabilitate a loss of teeth, which could jeopardize masticatory performance. Further, there exist various other factors that can affect masticatory performance. The objective of this research is to analyze the relationship between various factors and masticatory performance. Thirty-four removable denture wearers (full dentures, single complete dentures, or partial dentures) participated in a cross-sectional study of masticatory performance using color-changeable chewing gum (Masticatory Performance Evaluating Gum Xylitol®). The volume of saliva was evaluated using measuring cups, while the residual ridge heights were measured using a modified mouth mirror no. 3 with metric measurements. The residual ridge height and removable-denture-wearing experience exhibited a significant relationship with masticatory performance. However, age, gender, saliva volume, denture type, and the number and location of the missing teeth did not have a statistically significant association with masticatory performance. The residual ridge height influences the masticatory performance of removable denture wearers, since the greater the ridge height, the better the performance. The experience of using dentures also has a statistically significant influence on masticatory performance.
Bruner, L H; Carr, G J; Harbell, J W; Curren, R D
2002-06-01
An approach commonly used to measure new toxicity test method (NTM) performance in validation studies is to divide toxicity results into positive and negative classifications, and then identify true positive (TP), true negative (TN), false positive (FP) and false negative (FN) results. After this step is completed, the contingent probability statistics (CPS) are calculated: sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Although these statistics are widely used and often the only statistics used to assess the performance of toxicity test methods, there is little specific guidance in the validation literature on what values for these statistics indicate adequate performance. The purpose of this study was to begin developing data-based answers to this question by characterizing the CPS obtained from an NTM whose data have a completely random association with a reference test method (RTM). Determining the CPS of this worst-case scenario is useful because it provides a lower baseline from which the performance of an NTM can be judged in future validation studies. It also provides an indication of relationships in the CPS that help identify random or near-random relationships in the data. The results from this study of randomly associated tests show that the values obtained for the statistics vary significantly depending on the cut-offs chosen, that high values can be obtained for individual statistics, and that the different measures cannot be considered independently when evaluating the performance of an NTM. When the association between the results of an NTM and an RTM is random, the sum of each complementary pair of statistics (sensitivity + specificity, NPV + PPV) is approximately 1, and the prevalence (i.e., the proportion of toxic chemicals in the population of chemicals) and the PPV are equal. Given that combinations of high sensitivity-low specificity or high specificity-low sensitivity (i.e., the sum of the sensitivity and specificity equal to approximately 1) indicate a lack of predictive capacity, an NTM having these performance characteristics should be considered no better at predicting toxicity than chance alone.
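The random-association baseline described above is easy to verify numerically. A minimal sketch with a hypothetical 2x2 table built from independent margins (30% prevalence and a 40% positive-call rate on 1000 chemicals):

```python
def contingent_probability_stats(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 classification table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sens, spec, ppv, npv

# Counts are products of independent margins: e.g. tp = 0.4 * 0.3 * 1000.
sens, spec, ppv, npv = contingent_probability_stats(tp=120, fn=180, fp=280, tn=420)
print(sens + spec)  # 1.0, the signature of a random association
print(npv + ppv)    # 1.0, likewise
print(ppv)          # 0.30, equal to the prevalence
```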
Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka
2015-01-01
The problem of establishing noninferiority between a new treatment and a standard (control) treatment with ordinal categorical data is discussed. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed, where the estimation of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of the existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
Statistical analysis of the determinations of the Sun's Galactocentric distance
NASA Astrophysics Data System (ADS)
Malkin, Zinovy
2013-02-01
Based on several tens of R0 measurements made during the past two decades, several studies have been performed to derive the best estimate of R0. Some used just simple averaging to derive a result, whereas others provided comprehensive analyses of possible errors in published results. In either case, detailed statistical analyses of data used were not performed. However, a computation of the best estimates of the Galactic rotation constants is not only an astronomical but also a metrological task. Here we perform an analysis of 53 R0 measurements (published in the past 20 years) to assess the consistency of the data. Our analysis shows that they are internally consistent. It is also shown that any trend in the R0 estimates from the last 20 years is statistically negligible, which renders the presence of a bandwagon effect doubtful. On the other hand, the formal errors in the published R0 estimates improve significantly with time.
Mangione, Francesca; Meleo, Deborah; Talocco, Marco; Pecci, Raffaella; Pacifici, Luciano; Bedini, Rossella
2013-01-01
The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with a microCT system as the standard reference. Ten bovine bone cylindrical samples, each containing one implant able to provide both points of reference and image quality degradation, were scanned by the CBCT and microCT systems. Using the software of the two systems, two diameters, taken at different levels using different points of the implants as references, were measured for each cylindrical sample. The results were analyzed by ANOVA and a statistically significant difference was found. Based on these results, the measurements made with the two different instruments are not yet statistically comparable, although similar performances, with differences that were not statistically significant, were obtained in some samples. With the improvement of the hardware and software of CBCT systems, in the near future the two instruments may be able to provide similar performances.
Non-parametric early seizure detection in an animal model of temporal lobe epilepsy
NASA Astrophysics Data System (ADS)
Talathi, Sachin S.; Hwang, Dong-Uk; Spano, Mark L.; Simonotto, Jennifer; Furman, Michael D.; Myers, Stephen M.; Winters, Jason T.; Ditto, William L.; Carney, Paul R.
2008-03-01
The performance of five non-parametric, univariate seizure detection schemes (embedding delay, Hurst scale, wavelet scale, nonlinear autocorrelation and variance energy) was evaluated as a function of the sampling rate of the EEG recordings, the electrode types used for EEG acquisition, and the spatial location of the EEG electrodes, in order to determine the applicability of the measures in real-time closed-loop seizure intervention. The criteria chosen for evaluating the performance were high statistical robustness (as determined through the sensitivity and the specificity of a given measure in detecting a seizure) and the lag in seizure detection with respect to the seizure onset time (as determined by visual inspection of the EEG signal by a trained epileptologist). An optimality index was designed to evaluate the overall performance of each measure. For the EEG data recorded with a microwire electrode array at a sampling rate of 12 kHz, the wavelet scale measure exhibited the best overall performance in terms of its ability to detect a seizure with a high optimality index value and high statistics in terms of sensitivity and specificity.
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
PPM/NAR 8.4-GHz noise temperature statistics for DSN 64-meter antennas, 1982-1984
NASA Technical Reports Server (NTRS)
Slobin, S. D.; Andres, E. M.
1986-01-01
From August 1982 through November 1984, X-band downlink (8.4-GHz) system noise temperature measurements were made on the DSN 64-m antennas during tracking periods. Statistics of these noise temperature values are needed by the DSN and by spacecraft mission planners to assess antenna, receiving, and telemetry system needs, present performance, and future performance. These measurements were made using the DSN Mark III precision power monitor noise-adding radiometers located at each station. It is found that for DSS 43 and DSS 63, at the 90% cumulative distribution level, equivalent zenith noise temperature values fall between those presented in the earlier (1977) and present (1983) versions of DSN/Flight Project design documents. Noise temperatures measured for DSS 14 (Goldstone) are higher than those given in existing design documents and this disagreement will be investigated as a diagnostic of possible PPM or receiving system performance problems.
The Real World Significance of Performance Prediction
ERIC Educational Resources Information Center
Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu
2012-01-01
In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler B.
2017-01-01
This paper examines the use of the Chi-squared statistic as a means of evaluating filter performance. The goal of the process is to characterize the filter performance in the metric of covariance realism. The Chi-squared statistic is the value calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight on the accuracy of the covariance. The process of tuning an Extended Kalman Filter (EKF) for Aqua and Aura support is described, including examination of the measurement errors of available observation types and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-squared statistic, calculated from EKF solutions, are assessed.
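One standard formulation of this statistic (sometimes called the normalized estimation error squared) weights the prediction error by the inverse of the filter covariance; if the covariance is realistic, the statistic follows a chi-squared distribution with as many degrees of freedom as the error has components. A minimal sketch, with synthetic errors standing in for EKF prediction residuals; the 3D position example and the KS comparison are illustrative choices, not the paper's procedure.

```python
import numpy as np
from scipy import stats

def chi_squared_statistic(error, covariance):
    """error' * P^-1 * error; chi-squared with dim(error) degrees of
    freedom when the covariance P is realistic."""
    return float(error @ np.linalg.solve(covariance, error))

# Collect the statistic over many prediction points, then compare its
# empirical distribution with the reference chi-squared distribution.
rng = np.random.default_rng(1)
P = np.diag([4.0, 4.0, 4.0])          # claimed 3D position covariance
errors = rng.multivariate_normal(np.zeros(3), P, size=500)
values = [chi_squared_statistic(e, P) for e in errors]
print(stats.kstest(values, "chi2", args=(3,)).pvalue)  # large p => realistic
```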
Model Performance Evaluation and Scenario Analysis (MPESA) Tutorial
The model performance evaluation consists of metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude and sequence errors.
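One illustrative way to separate those error components, which is only a sketch and not necessarily MPESA's own definitions, is to compute a combined error on the raw series, a magnitude-only error on the sorted series (timing ignored), and a sequence-only agreement through rank correlation:

```python
import numpy as np
from scipy.stats import spearmanr

def fit_metrics(obs, sim):
    """Illustrative split of model error into magnitude and sequence parts.

    rmse        -- combined magnitude-and-sequence error
    rmse_sorted -- magnitude-only error (both series sorted, timing ignored)
    rank_corr   -- sequence-only agreement (timing of highs and lows)
    """
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))
    rmse_sorted = float(np.sqrt(np.mean((np.sort(sim) - np.sort(obs)) ** 2)))
    rank_corr = spearmanr(obs, sim).correlation
    return rmse, rmse_sorted, rank_corr

print(fit_metrics([1, 4, 2, 8, 5], [2, 3, 3, 7, 6]))
```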
Roggemann, M C; Welsh, B M; Montera, D; Rhoadarmer, T A
1995-07-10
Simulating the effects of atmospheric turbulence on optical imaging systems is an important aspect of understanding the performance of these systems. Simulations are particularly important for understanding the statistics of some adaptive-optics system performance measures, such as the mean and variance of the compensated optical transfer function, and for understanding the statistics of estimators used to reconstruct intensity distributions from turbulence-corrupted image measurements. Current methods of simulating the performance of these systems typically make use of random phase screens placed in the system pupil. Methods exist for making random draws of phase screens that have the correct spatial statistics. However, simulating temporal effects and anisoplanatism requires one or more phase screens at different distances from the aperture, possibly moving with different velocities. We describe and demonstrate a method for creating random draws of phase screens with the correct space-time statistics for arbitrary turbulence and wind-velocity profiles, which can be placed in the telescope pupil in simulations. Results are provided for both the von Kármán and the Kolmogorov turbulence spectra. We also show how to simulate anisoplanatic effects with this technique.
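A common, purely spatial baseline for such simulations draws a single Kolmogorov screen by filtering white noise with the square root of the phase power spectrum. The sketch below follows one normalization convention; constants of this kind vary across references, the lowest spatial frequencies are under-sampled (often patched with subharmonics), and a draw should be validated against the phase structure function before serious use.

```python
import numpy as np

def kolmogorov_phase_screen(n, dx, r0, seed=0):
    """One random draw of an n x n Kolmogorov phase screen (radians).

    n  -- grid size (pixels); dx -- pixel size (m); r0 -- Fried parameter (m).
    """
    rng = np.random.default_rng(seed)
    df = 1.0 / (n * dx)                  # frequency grid spacing (1/m)
    fx = np.fft.fftfreq(n, d=dx)
    f = np.hypot(*np.meshgrid(fx, fx))   # radial spatial frequency
    f[0, 0] = np.inf                     # suppress the undefined DC term
    psd = 0.023 * r0 ** (-5.0 / 3.0) * f ** (-11.0 / 3.0)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    screen = np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n
    return np.real(screen)

screen = kolmogorov_phase_screen(n=256, dx=0.02, r0=0.1)
print(screen.std())  # rough sanity check on the phase fluctuation level
```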
Zeng, Qing T; Kogan, Sandra; Ngo, Long; Greenes, Robert A
2004-01-01
Millions of consumers perform health information retrieval (HIR) online. To better understand the consumers' perspective on HIR performance, we conducted an observation and interview study of 97 health information consumers. Consumers were asked to perform HIR tasks, and we recorded their views regarding performance using several different subjective measurements: finding the desired information, usefulness of the information found, satisfaction with the information, and intention to continue searching. Statistical analysis was applied to verify whether the multiple subjective measurements were redundant. The measurements ranged from slight agreement to no agreement among them. A number of reasons were identified for this lack of agreement. Although related, the four subjective measurements of HIR performance are distinct from each other and carry different useful information.
Metz, Anneke M
2008-01-01
There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math field majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and opportunities for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study.
Measurement of the relationship between perceived and computed color differences
NASA Astrophysics Data System (ADS)
García, Pedro A.; Huertas, Rafael; Melgosa, Manuel; Cui, Guihua
2007-07-01
Using simulated data sets, we have analyzed some mathematical properties of different statistical measurements that have been employed in previous literature to test the performance of different color-difference formulas. Specifically, the properties of the combined index PF/3 (performance factor obtained as average of three terms), widely employed in current literature, have been considered. A new index named standardized residual sum of squares (STRESS), employed in multidimensional scaling techniques, is recommended. The main difference between PF/3 and STRESS is that the latter is simpler and allows inferences on the statistical significance of two color-difference formulas with respect to a given set of visual data.
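For reference, the STRESS index has a simple closed form: an optimal scaling factor F1 = sum(dE*dV)/sum(dV^2) aligns the visual differences dV with the computed color differences dE, and the residual is expressed as a percentage. A minimal sketch under that reading:

```python
import numpy as np

def stress(dE, dV):
    """STRESS between computed color differences dE and visual
    differences dV: 0 means perfect agreement, larger is worse."""
    dE, dV = np.asarray(dE, float), np.asarray(dV, float)
    f1 = np.sum(dE * dV) / np.sum(dV ** 2)   # optimal scaling factor
    return 100.0 * np.sqrt(np.sum((dE - f1 * dV) ** 2)
                           / np.sum(f1 ** 2 * dV ** 2))

print(stress([1.0, 2.1, 3.2], [1.1, 2.0, 3.0]))  # small: good agreement
```

The statistical inferences mentioned above rest on the fact that, in the multidimensional scaling literature, the ratio of two squared STRESS values computed on the same visual data can be referred to an F distribution.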
Assessment of the beryllium lymphocyte proliferation test using statistical process control.
Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M
2006-10-01
Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that were beyond what would be expected due to chance alone. Patterns of test results suggested that variations were systematic. We conclude that laboratories performing the BeBLPT or other similar biological assays of immunological response could benefit from a statistical approach such as SPC to improve quality management.
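Statistical process control of an assay like this typically starts with an individuals control chart on the quantity of interest (here, the stimulation index). A minimal sketch, assuming the usual moving-range estimate of sigma and 3-sigma limits; the data are invented:

```python
import numpy as np

def shewhart_limits(values):
    """Center line and 3-sigma limits for an individuals control chart.

    Sigma is estimated from the average moving range (the standard
    individuals-chart convention), not from the overall SD."""
    values = np.asarray(values, float)
    center = values.mean()
    mr = np.abs(np.diff(values)).mean()   # average moving range
    sigma = mr / 1.128                    # d2 constant for subgroups of 2
    return center, center - 3 * sigma, center + 3 * sigma

# Flag stimulation-index results that fall outside the control limits.
si = np.array([2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 3.9, 2.1, 2.0])
center, lo, hi = shewhart_limits(si)
print([x for x in si if x < lo or x > hi])  # potential special-cause points
```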
[Clinical research=design*measurements*statistical analyses].
Furukawa, Toshiaki
2012-06-01
A clinical study must address true endpoints that matter for the patients and the doctors. A good clinical study starts with a good clinical question. Formulating a clinical question in the form of PECO can sharpen one's original question. In order to perform a good clinical study one must have knowledge of study design, measurements and statistical analyses: the first is taught by epidemiology, the second by psychometrics and the third by biostatistics.
A survey and evaluations of histogram-based statistics in alignment-free sequence comparison.
Luczak, Brian B; James, Benjamin T; Girgis, Hani Z
2017-12-06
Since the dawn of the bioinformatics field, sequence alignment scores have been the main method for comparing sequences. However, alignment algorithms are quadratic, requiring long execution times. As alternatives, scientists have developed tens of alignment-free statistics for measuring the similarity between two sequences. We surveyed tens of alignment-free k-mer statistics. Additionally, we evaluated 33 statistics and multiplicative combinations between the statistics and/or their squares. These statistics are calculated on two k-mer histograms representing two sequences. Our evaluations using global alignment scores revealed that the majority of the statistics are sensitive and capable of finding sequences similar to a query sequence. Therefore, any of these statistics can filter out dissimilar sequences quickly. Further, we observed that multiplicative combinations of the statistics are highly correlated with the identity score. Furthermore, combinations involving sequence length difference or Earth Mover's distance, which takes the length difference into account, are always among the paired statistics most highly correlated with identity scores. Similarly, paired statistics including length difference or Earth Mover's distance are among the best performers in finding the K closest sequences. Interestingly, similar performance can be obtained using histograms of shorter words, resulting in a reduced memory requirement and remarkably increased speed. Moreover, we found that simple single statistics are sufficient for processing next-generation sequencing reads and for applications relying on local alignment. Finally, we measured the time requirement of each statistic. The survey and the evaluations will help scientists identify efficient alternatives to the costly alignment algorithm, saving thousands of computational hours. The source code of the benchmarking tool is available as Supplementary Materials.
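The underlying data structure is simple: each sequence becomes a fixed-length histogram of k-mer counts, and any vector statistic can then be applied. A minimal sketch with one illustrative statistic (Euclidean distance) and a multiplicative pairing with length difference, in the spirit of the combinations evaluated above:

```python
import numpy as np
from itertools import product

def kmer_histogram(seq, k=3):
    """Count k-mer occurrences over the DNA alphabet."""
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {w: i for i, w in enumerate(kmers)}
    h = np.zeros(len(kmers))
    for i in range(len(seq) - k + 1):
        word = seq[i:i + k]
        if word in index:          # skip words with ambiguous bases
            h[index[word]] += 1
    return h

a, b = "ACGTACGTTGCA", "ACGTACGTAGCA"
dist = float(np.linalg.norm(kmer_histogram(a) - kmer_histogram(b)))
paired = dist * (1 + abs(len(a) - len(b)))   # multiplicative pairing
print(dist, paired)  # small values suggest similar sequences
```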
Thombs, Dennis L.; Olds, R. Scott; Bondy, Susan J.; Winchell, Janice; Baliunas, Dolly; Rehm, Jürgen
2009-01-01
Objective: Findings from previous prospective research suggest the association between alcohol use and undergraduate academic performance is negligible. This study was designed to address weaknesses of the past research by relying on objective measures of both drinking and academic performance. Method: A prospective study was conducted with repeated measures of exposure to alcohol linked to institutional academic records. Alcohol data were collected in residence halls at a nonselective, midwestern, public university in the United States. A total of 659 first- and second-year undergraduate students were tracked over the course of 15-week semesters. Results: A statistically significant negative association with semester academic performance was found for different alcohol indicators: frequency of breath alcohol concentration (BrAC) above .08, mean BrAC, standard deviation, and maximum BrAC recorded. These associations remained statistically significant when controlled for sociodemographic variables and individual level confounders, but the effect sizes were relatively small with a contribution to explained variance of less than 1%. When additionally adjusted for residence hall building, all alcohol indicators no longer reached statistical significance (p ≥ .05). Conclusions: Consistent with past prospective research, the magnitude of the association between undergraduate alcohol use and academic performance is small when the effects of high school academic aptitude and performance are accounted for in multivariable analyses. This is the first study to find that living environment may have a robust effect on the academic achievement of undergraduates. Future research should examine more closely the relation between residence and academic performance and the role that alcohol use may play in creating residential environments. PMID:19737503
NPS National Transit Inventory and Performance Report, 2014
DOT National Transportation Integrated Search
2015-09-09
This document summarizes key highlights and performance measures from the National Park Service (NPS) 2014 National Transit Inventory, and presents data for NPS transit systems system-wide. The document discusses statistics related to ridership, busi...
NASA Astrophysics Data System (ADS)
Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.
2013-03-01
The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false-positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology, in which all readers score all cases in all modalities. ROC curves from the studies were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as big for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparisons across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
NASA Astrophysics Data System (ADS)
Roch, Nicolas
2015-03-01
Measurement can be harnessed to probabilistically generate entanglement in the absence of local interactions, for example between spatially separated quantum objects. Continuous weak measurement allows us to observe the dynamics associated with this process. In particular, we perform joint dispersive readout of two superconducting transmon qubits separated by one meter of coaxial cable. We track the evolution of a joint quantum state under the influence of measurement, both as an ensemble and as a set of individual quantum trajectories. Analyzing the statistics of such quantum trajectories can shed new light on the underlying entangling mechanism.
Aalizadeh, Bahman; Mohammadzadeh, Hassan; Khazani, Ali; Dadras, Ali
2016-01-01
Background: Physical exercises can influence some anthropometric and fitness components differently. The aim of the present study was to evaluate how a relatively long-term training program in 11-14-year-old male Iranian students affects their anthropometric and motor performance measures. Methods: Measurements were conducted on the anthropometric and fitness components of participants (n = 28) prior to and following the program. They trained for 20 weeks, with four trampoline training sessions per week of 1.5 h each, including 10 min of rest. Motor performance of all participants was assessed using the standing long jump and vertical jump based on the Eurofit Test Battery. Results: The repeated-measures ANOVA showed a statistically significant main effect of time in calf girth P = 0.001, fat% P = 0.01, vertical jump P = 0.001, and long jump P = 0.001, and a statistically significant main effect of group in fat% P = 0.001. Post hoc paired t-tests indicated statistically significant differences in the trampoline group between the two measurements in calf girth (t = −4.35, P = 0.001), fat% (t = 5.87, P = 0.001), vertical jump (t = −5.53, P = 0.001), and long jump (t = −10.00, P = 0.001). Conclusions: We can conclude that 20-week trampoline training with four physical activity sessions/week in 11-14-year-old students seems to have a significant effect on body fat% reduction and effective results in terms of anaerobic physical fitness. Therefore, it is suggested that a different training model approach such as trampoline exercise can help students to promote their level of health and motor performance. PMID:27512557
Saywell, R M; Bean, J A; Ludke, R L; Redman, R W; McHugh, G J
1981-01-01
To examine the relationships between measures of attending physician teams' clinical and utilization performance, inpatient hospital audits were conducted in 22 Maryland and western Pennsylvania nonfederal short-term hospitals. A total of 6,980 medical records were abstracted from eight diagnostic categories using the Payne and JCAH PEP medical audit procedures. The results indicate weak statistical associations between the two medical care evaluation audits; between clinical performance and utilization performance, as measured by appropriateness of admissions and length of stay; and between three utilization measures. Based on these findings, it does not appear valid to use performance in one area to evaluate performance in the other in order to measure, evaluate, and ultimately improve physicians' clinical or utilization performance. PMID:6946048
Statistics of natural movements are reflected in motor errors.
Howard, Ian S; Ingram, James N; Körding, Konrad P; Wolpert, Daniel M
2009-09-01
Humans use their arms to engage in a wide variety of motor tasks during everyday life. However, little is known about the statistics of these natural arm movements. Studies of the sensory system have shown that the statistics of sensory inputs are key to determining sensory processing. We hypothesized that the statistics of natural everyday movements may, in a similar way, influence motor performance as measured in laboratory-based tasks. We developed a portable motion-tracking system that could be worn by subjects as they went about their daily routine outside of a laboratory setting. We found that the well-documented symmetry bias is reflected in the relative incidence of movements made during everyday tasks. Specifically, symmetric and antisymmetric movements are predominant at low frequencies, whereas only symmetric movements are predominant at high frequencies. Moreover, the statistics of natural movements, that is, their relative incidence, correlated with subjects' performance on a laboratory-based phase-tracking task. These results provide a link between natural movement statistics and motor performance and confirm that the symmetry bias documented in laboratory studies is a natural feature of human movement.
Intercomparison between ozone profiles measured above Spitsbergen by lidar and sonde techniques
NASA Technical Reports Server (NTRS)
Fabian, Rolf; Vondergathen, Peter; Ehlers, J.; Krueger, Bernd C.; Neuber, Roland; Beyerle, Georg
1994-01-01
This paper compares coincident ozone profile measurements by electrochemical sondes and lidar performed at Ny-Alesund/Spitsbergen. A detailed height-dependent statistical analysis of the differences between these complementary methods was performed for the overlapping altitude region (13-35 km). The data set comprises ozone profile measurements conducted between Jan. 1989 and Jan. 1991. Differences of up to 25 percent were found above 30 km altitude.
Educational Indicators: A Guide for Policymakers. CPRE Occasional Paper Series.
ERIC Educational Resources Information Center
Oakes, Jeannie
An educational indicator is a statistic revealing something about the education system's health or performance. Indicators must meet certain substantive and technical standards that define the kind of information they should provide and the features they should measure. There are two types of statistical indicators. Whereas single statistics…
Environmental Health Practice: Statistically Based Performance Measurement
Enander, Richard T.; Gagnon, Ronald N.; Hanumara, R. Choudary; Park, Eugene; Armstrong, Thomas; Gute, David M.
2007-01-01
Objectives. State environmental and health protection agencies have traditionally relied on a facility-by-facility inspection-enforcement paradigm to achieve compliance with government regulations. We evaluated the effectiveness of a new approach that uses a self-certification random sampling design. Methods. Comprehensive environmental and occupational health data from a 3-year statewide industry self-certification initiative were collected from representative automotive refinishing facilities located in Rhode Island. Statistical comparisons between baseline and postintervention data facilitated a quantitative evaluation of statewide performance. Results. The analysis of field data collected from 82 randomly selected automotive refinishing facilities showed statistically significant improvements (P<.05, Fisher exact test) in 4 major performance categories: occupational health and safety, air pollution control, hazardous waste management, and wastewater discharge. Statistical significance was also shown when a modified Bonferroni adjustment for multiple comparisons was performed. Conclusions. Our findings suggest that the new self-certification approach to environmental and worker protection is effective and can be used as an adjunct to further enhance state and federal enforcement programs. PMID:17267709
Measurements in quantitative research: how to select and report on research instruments.
Hagan, Teresa L
2014-07-01
Measures exist to numerically represent degrees of attributes. Quantitative research is based on measurement and is conducted in a systematic, controlled manner. These measures enable researchers to perform statistical tests, analyze differences between groups, and determine the effectiveness of treatments. If something is not measurable, it cannot be tested.
Bayesian statistics in radionuclide metrology: measurement of a decaying source
NASA Astrophysics Data System (ADS)
Bochud, François O.; Bailat, Claude J.; Laedermann, Jean-Pascal
2007-08-01
The most intuitive way of defining a probability is perhaps through the frequency at which it appears when a large number of trials are realized in identical conditions. The probability derived from the obtained histogram characterizes the so-called frequentist or conventional statistical approach. In this sense, probability is defined as a physical property of the observed system. By contrast, in Bayesian statistics, a probability is not a physical property or a directly observable quantity, but a degree of belief or an element of inference. The goal of this paper is to show how Bayesian statistics can be used in radionuclide metrology and what its advantages and disadvantages are compared with conventional statistics. This is performed through the example of an yttrium-90 source typically encountered in environmental surveillance measurement. Because of the very low activity of this kind of source and the short half-life of the radionuclide, this measurement takes several days, during which the source decays significantly. Several methods are proposed to compute simultaneously the number of unstable nuclei at a given reference time, the decay constant and the background. Asymptotically, all approaches give the same result. However, Bayesian statistics produces coherent estimates and confidence intervals from a much smaller number of measurements. Apart from the conceptual understanding of statistics, the main difficulty that could deter radionuclide metrologists from using Bayesian statistics is the complexity of the computation.
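To make the contrast concrete, the following is a minimal sketch of a Bayesian grid posterior for a decaying source measured in counting bins, in the spirit of the paper's yttrium-90 example. The known decay constant, unit detection efficiency, flat priors, and synthetic Poisson data are all illustrative assumptions, not the authors' computation.

```python
# A minimal sketch: Bayesian grid posterior for the initial decay rate A0
# and a constant background b of a decaying source, with the decay constant
# assumed known. All numbers are illustrative, not the paper's.
import numpy as np

lam = np.log(2) / 64.1                 # yttrium-90 decay constant (1/h)
edges = np.arange(0.0, 97.0, 1.0)      # hourly counting bins over 4 days
t0, t1 = edges[:-1], edges[1:]

rng = np.random.default_rng(0)
A0_true, b_true = 5.0, 2.0             # true initial rate and background (counts/h)
decay_term = (np.exp(-lam * t0) - np.exp(-lam * t1)) / lam
counts = rng.poisson(A0_true * decay_term + b_true * (t1 - t0))

# Flat-prior posterior over an (A0, b) grid via the Poisson log-likelihood
A0_grid = np.linspace(0.1, 15.0, 200)
b_grid = np.linspace(0.1, 6.0, 200)
logpost = np.empty((A0_grid.size, b_grid.size))
for i, A0 in enumerate(A0_grid):
    for j, b in enumerate(b_grid):
        mu = A0 * decay_term + b * (t1 - t0)       # expected counts per bin
        logpost[i, j] = np.sum(counts * np.log(mu) - mu)

post = np.exp(logpost - logpost.max())
post /= post.sum()
print("posterior mean A0:", np.sum(A0_grid * post.sum(axis=1)))
```

Because the source and background are estimated jointly from the full posterior, the estimates stay coherent even when the total number of counts is small, which is the behavior the abstract attributes to the Bayesian treatment.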
Abstracts of ARI Research Publications, FY 1978
1980-09-01
initial item pool, 49 items were identified as having significant item-to-total-score correlations and were statistically determined to address a...failing. Differences among the three groups on main gun performance measures and the previous experience of gunners were not statistically significant...forms of the noncognitive coding speed test; and (d) a second field administration to derive norms and other statistical characteristics of the new
The Shock and Vibration Digest. Volume 14, Number 12
1982-12-01
to evaluate the uses of statistical energy analysis for determining sound transmission performance. Coupling loss factors were measured and compared...measurements for the artificial cracks in mild-steel test pieces. ...Improvement of the Method of Statistical Energy Analysis for...eters, using a large number of free-response time histories simultaneously in one analysis in the application of the statistical energy analysis theory
Diagnosis of students' ability in a statistical course based on Rasch probabilistic outcome
NASA Astrophysics Data System (ADS)
Mahmud, Zamalia; Ramli, Wan Syahira Wan; Sapri, Shamsiah; Ahmad, Sanizah
2017-06-01
Measuring students' ability and performance is important in assessing how well students have learned and mastered statistical courses. Improvement in learning depends on students' approaches to learning, which relate to factors such as the assessment tasks: quizzes, tests, assignments, and the final examination. This study attempted an alternative approach to measuring students' ability in an undergraduate statistical course based on the Rasch probabilistic model. Firstly, this study aims to explore the learning outcome patterns of students in a statistics course (Applied Probability and Statistics) based on an Entrance-Exit survey. This is followed by investigating students' perceived learning ability based on four Course Learning Outcomes (CLOs) and students' actual learning ability based on their final examination scores. Rasch analysis revealed that students perceived themselves as lacking the ability to understand about 95% of the statistics concepts at the beginning of the class, but eventually had a good understanding at the end of the 14-week class. In terms of students' performance in the final examination, their ability to understand the topics varies at different probability values, given the ability of the students and the difficulty of the questions. The majority found the probability and counting rules topic the most difficult to learn.
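The dichotomous Rasch model underlying such an analysis relates the probability of a correct response to the difference between a student's ability and an item's difficulty on a common logit scale. The sketch below illustrates the model itself; the ability and difficulty values are hypothetical, not the study's estimates.

```python
# Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))),
# where theta = student ability and b = item difficulty (same logit scale).
import numpy as np

def rasch_p(theta, b):
    """Probability that a student of ability theta answers an item of
    difficulty b correctly."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

difficulty = 0.8                          # hypothetical "counting rules" item
for theta in (-1.0, 0.0, 1.5):            # hypothetical student abilities
    print(f"theta = {theta:+.1f} -> P(correct) = {rasch_p(theta, difficulty):.2f}")
```

A harder item (larger b) shifts the whole curve downward for every student, which is how the model expresses the finding that success varies jointly with student ability and question difficulty.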
Papantoniou, Panagiotis
2018-04-03
The present research pursues two main objectives. The first is to investigate whether latent model analysis through a structural equation model can be implemented on driving simulator data in order to define an unobserved driving performance variable. The second is to investigate and quantify the effect of several risk factors, including distraction sources, driver characteristics, and road and traffic environment, on overall driving performance rather than on independent driving performance measures. For the scope of the present research, 95 participants from all age groups were asked to drive under different types of distraction (conversation with passenger, cell phone use) in urban and rural road environments with low and high traffic volume in a driving simulator experiment. In the framework of the statistical analysis, a correlation table investigating the statistical relationships between driving simulator measures is presented, and a structural equation model is developed in which overall driving performance is estimated as a latent variable based on several individual driving simulator measures. Results confirm the suitability of the structural equation model and indicate that the selection of the specific performance measures that define overall performance should be guided by a rule of representativeness between the selected variables. Moreover, results indicate that conversation with a passenger did not have a statistically significant effect, indicating that drivers do not change their performance while conversing with a passenger compared to undistracted driving. On the other hand, results support the hypothesis that cell phone use has a negative effect on driving performance. Furthermore, regarding driver characteristics, age, gender, and experience all have a significant effect on driving performance, indicating that driver-related characteristics play the most crucial role in overall driving performance. The findings of this study allow a new approach to the investigation of driving behavior in driving simulator experiments and in general. Through the successful implementation of the structural equation model, driving behavior can be assessed in terms of overall performance rather than through individual performance measures, an important scientific step forward from piecemeal analyses to a sound combined analysis of the interrelationships between several risk factors and overall driving performance.
20 CFR 661.205 - What is the role of the State Board?
Code of Federal Regulations, 2010 CFR
2010-04-01
... performance measures, including State adjusted levels of performance, to assess the effectiveness of the... employment statistics system described in section 15(e) of the Wagner-Peyser Act; and (i) Development of an...
A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca
2014-05-01
The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping-bucket rain gauges (TBR) into the most common statistics of rainfall extremes, e.g. in the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year-long precipitation data set obtained from a virtual weighing-type gauge (WG). The artificial WG time series was obtained based on real precipitation data measured at the meteo-station of the University of Genova, modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world, time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both instruments demonstrated very good performance in previous constant-flow-rate calibration efforts. A laboratory dynamic rainfall generation system has been arranged and validated in order to simulate a number of precipitation events with variable reference intensities. The artificial events were generated from real-world rainfall intensity (RI) records obtained from the meteo-station of the University of Genova, so that the statistical structure of the time series is preserved. The influence of the WG RI measurement accuracy on the associated extreme events statistics is analyzed by comparing the original intensity-duration-frequency (IDF) curves with those obtained from the measurements of the simulated rain events. References: Colli, M., L.G. Lanza, and P. La Barbera (2012). Weighing gauges measurement errors and the design rainfall for urban scale applications, 9th International Workshop on Precipitation in Urban Areas, 6-9 December 2012, St. Moritz, Switzerland. Lanza, L.G., M. Colli, and P. La Barbera (2012). On the influence of rain gauge performance on extreme events statistics: the case of weighing gauges, EGU General Assembly 2012, 22 April, Wien, Austria. La Barbera, P., L.G. Lanza, and L. Stagi (2002). Influence of systematic mechanical errors of tipping-bucket rain gauges on the statistics of rainfall extremes. Water Sci. Techn., 45(2), 1-9.
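As background to how gauge errors propagate into extreme-value statistics, the sketch below estimates empirical return periods from annual maximum intensities using Weibull plotting positions; the synthetic series stands in for a real record such as the Genova one. Any systematic measurement bias shifts these maxima and hence the resulting IDF curve.

```python
# A minimal sketch: empirical return periods from annual maximum rainfall
# intensities (Weibull plotting positions). Data are synthetic, not the
# Genova record.
import numpy as np

rng = np.random.default_rng(1)
annual_max = rng.gumbel(loc=40.0, scale=12.0, size=22)   # 22 years, mm/h

x = np.sort(annual_max)[::-1]             # descending: rank 1 = largest event
ranks = np.arange(1, x.size + 1)
T = (x.size + 1) / ranks                  # return period in years

for xi, Ti in zip(x[:3], T[:3]):
    print(f"intensity {xi:5.1f} mm/h -> return period ~{Ti:4.1f} yr")
```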
Nishiyama, Yoshihiro
2002-12-01
It has been considered that the effective bending rigidity of fluid membranes should be reduced by thermal undulations. However, a recent thorough investigation by Pinnow and Helfrich revealed the significance of measure factors for the partition sum. Accepting the local curvature as a statistical measure, they found that fluid membranes are stiffened macroscopically. In order to examine this remarkable idea, we performed extensive ab initio simulations for a fluid membrane. We set up a transfer matrix that is diagonalized by means of the density-matrix renormalization group. Our method has the advantage that it allows us to survey various statistical measures. As a consequence, we found that the effective bending rigidity flows toward strong coupling under the choice of local curvature as a statistical measure. On the contrary, for other measures such as normal displacement and tilt angle, we found a clear tendency toward softening.
Cardinal rules: Visual orientation perception reflects knowledge of environmental statistics
Girshick, Ahna R.; Landy, Michael S.; Simoncelli, Eero P.
2011-01-01
Humans are remarkably good at performing visual tasks, but experimental measurements reveal substantial biases in the perception of basic visual attributes. An appealing hypothesis is that these biases arise through a process of statistical inference, in which information from noisy measurements is fused with a probabilistic model of the environment. But such inference is optimal only if the observer’s internal model matches the environment. Here, we provide evidence that this is the case. We measured performance in an orientation-estimation task, demonstrating the well-known fact that orientation judgements are more accurate at cardinal (horizontal and vertical) orientations, along with a new observation that judgements made under conditions of uncertainty are strongly biased toward cardinal orientations. We estimate observers’ internal models for orientation and find that they match the local orientation distribution measured in photographs. We also show how a neural population could embed probabilistic information responsible for such biases. PMID:21642976
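The inference the paper describes can be illustrated with a one-dimensional Gaussian sketch: a noisy sensory measurement is fused with a prior peaked at a cardinal orientation, and the estimate is pulled toward the prior as sensory noise grows. The means and widths below are illustrative, not the fitted observer models.

```python
# Gaussian prior x Gaussian likelihood: the posterior mean is a
# precision-weighted average, so more sensory noise means more bias
# toward the (cardinal) prior. Numbers are illustrative.
import numpy as np

def fuse(measured, sigma_sensory, prior_mean=0.0, sigma_prior=10.0):
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)  # likelihood weight
    mean = w * measured + (1.0 - w) * prior_mean
    sd = np.sqrt(1.0 / (1.0 / sigma_prior**2 + 1.0 / sigma_sensory**2))
    return mean, sd

for noise in (2.0, 8.0, 20.0):            # increasing sensory uncertainty
    m, s = fuse(measured=15.0, sigma_sensory=noise)
    print(f"sensory sd = {noise:4.1f} deg -> estimate {m:5.1f} deg (posterior sd {s:4.1f})")
```

As the sensory standard deviation grows, the estimate slides from the measured 15 degrees toward the cardinal prior at 0 degrees, reproducing the bias-under-uncertainty pattern reported in the abstract.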
NASA Astrophysics Data System (ADS)
Posselt, D.; L'Ecuyer, T.; Matsui, T.
2009-05-01
Cloud resolving models are typically used to examine the characteristics of clouds and precipitation and their relationship to radiation and the large-scale circulation. As such, they are not required to reproduce the exact location of each observed convective system, much less each individual cloud. Some of the most relevant information about clouds and precipitation is provided by instruments located on polar-orbiting satellite platforms, but these observations are intermittent "snapshots" in time, making assessment of model performance challenging. In contrast to direct comparison, model results can be evaluated statistically. This avoids the requirement for the model to reproduce the observed systems, while returning valuable information on the performance of the model in a climate-relevant sense. The focus of this talk is a model evaluation study, in which updates to the microphysics scheme used in a three-dimensional version of the Goddard Cumulus Ensemble (GCE) model are evaluated using statistics of observed clouds, precipitation, and radiation. We present the results of multiday (non-equilibrium) simulations of organized deep convection using single- and double-moment versions of the model's cloud microphysical scheme. Statistics of TRMM multi-sensor derived clouds, precipitation, and radiative fluxes are used to evaluate the GCE results, as are simulated TRMM measurements obtained using a sophisticated instrument simulator suite. We present advantages and disadvantages of performing model comparisons in retrieval and measurement space and conclude by motivating the use of data assimilation techniques for analyzing and improving model parameterizations.
2008-01-01
There is an increasing need for students in the biological sciences to build a strong foundation in quantitative approaches to data analyses. Although most science, engineering, and math majors are required to take at least one statistics course, statistical analysis is poorly integrated into undergraduate biology course work, particularly at the lower-division level. Elements of statistics were incorporated into an introductory biology course, including a review of statistics concepts and an opportunity for students to perform statistical analysis in a biological context. Learning gains were measured with an 11-item statistics learning survey instrument developed for the course. Students showed a statistically significant 25% (p < 0.005) increase in statistics knowledge after completing introductory biology. Students improved their scores on the survey after completing introductory biology, even if they had previously completed an introductory statistics course (9% improvement, p < 0.005). Students retested 1 yr after completing introductory biology showed no loss of their statistics knowledge as measured by this instrument, suggesting that the use of statistics in biology course work may aid long-term retention of statistics knowledge. No statistically significant differences in learning were detected between male and female students in the study. PMID:18765754
ERIC Educational Resources Information Center
Council of the Great City Schools, 2008
2008-01-01
This report describes statistical indicators developed by the Council of the Great City Schools and its member districts to measure big-city school performance on a range of operational functions in business, finance, human resources and technology. The report also presents data city-by-city on those indicators. This is the second time that…
Statistical analysis of global horizontal solar irradiation GHI in Fez city, Morocco
NASA Astrophysics Data System (ADS)
Bounoua, Z.; Mechaqrane, A.
2018-05-01
An accurate knowledge of the solar energy reaching the ground is necessary for sizing solar installations and optimizing their performance. This paper describes a statistical analysis of the global horizontal solar irradiation (GHI) at Fez city, Morocco. For better reliability, we first applied a set of quality-check procedures to the hourly GHI measurements and then eliminated erroneous values, which are generally due to measurement errors or cosine-effect errors. The statistical analysis shows that the annual mean daily value of GHI is approximately 5 kWh/m²/day. Monthly mean daily values and other parameters are also calculated.
A Data Warehouse Architecture for DoD Healthcare Performance Measurements.
1999-09-01
design, develop, implement, and apply statistical analysis and data mining tools to a Data Warehouse of healthcare metrics. With the DoD healthcare...framework, this thesis defines a methodology to design, develop, implement, and apply statistical analysis and data mining tools to a Data Warehouse...
Performance of Reclassification Statistics in Comparing Risk Prediction Models
Paynter, Nina P.
2012-01-01
Concerns have been raised about the use of traditional measures of model fit in evaluating risk prediction models for clinical use, and reclassification tables have been suggested as an alternative means of assessing the clinical utility of a model. Several measures based on the table have been proposed, including the reclassification calibration (RC) statistic, the net reclassification improvement (NRI), and the integrated discrimination improvement (IDI), but the performance of these in practical settings has not been fully examined. We used simulations to estimate the type I error and power for these statistics in a number of scenarios, as well as the impact of the number and type of categories, when adding a new marker to an established or reference model. The type I error was found to be reasonable in most settings, and power was highest for the IDI, which was similar to the test of association. The relative power of the RC statistic, a test of calibration, and the NRI, a test of discrimination, varied depending on the model assumptions. These tools provide unique but complementary information. PMID:21294152
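For readers unfamiliar with the reclassification measures, the sketch below computes the category-based NRI from paired risk-category assignments and outcomes; the categories and toy data are illustrative, not the paper's simulation design.

```python
# Net reclassification improvement:
# NRI = [P(up|event) - P(down|event)] + [P(down|nonevent) - P(up|nonevent)]
import numpy as np

def nri(old_cat, new_cat, event):
    old_cat, new_cat, event = map(np.asarray, (old_cat, new_cat, event))
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, ne = event == 1, event == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

# toy example with three ordered risk categories (0 = low, 2 = high)
old = [0, 1, 1, 2, 0, 1, 2, 0]   # reference-model categories
new = [1, 2, 1, 2, 0, 0, 1, 0]   # categories after adding the new marker
y   = [1, 1, 1, 1, 0, 0, 0, 0]   # observed events
print(f"NRI = {nri(old, new, y):+.3f}")
```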
Effects of exercise on fatigue, sleep, and performance: a randomized trial.
Coleman, Elizabeth Ann; Goodwin, Julia A; Kennedy, Robert; Coon, Sharon K; Richards, Kathy; Enderlin, Carol; Stewart, Carol B; McNatt, Paula; Lockhart, Kim; Anaissie, Elias J
2012-09-01
To compare usual care with a home-based individualized exercise program (HBIEP) in patients receiving intensive treatment for multiple myeloma (MM) and epoetin alfa therapy. Randomized trial with repeated measures of two groups (one experimental and one control) and an approximate 15-week experimental period. Outpatient setting of the Myeloma Institute for Research and Therapy at the Rockefeller Cancer Center at the University of Arkansas for Medical Sciences. 187 patients with newly diagnosed MM enrolled in a separate study evaluating effectiveness of the Total Therapy regimen, with or without thalidomide. Measurements included the Profile of Mood States fatigue scale, Functional Assessment of Cancer Therapy-Fatigue, ActiGraph® recordings, 6-Minute Walk Test, and hemoglobin levels at baseline and before and after stem cell collection. Descriptive statistics were used to compare demographics and treatment effects, and repeated measures analysis of variance was used to determine effects of HBIEP. The dependent or outcome measures were fatigue, nighttime sleep, and performance (aerobic capacity); the independent variable was the HBIEP, combining strength building and aerobic exercise. Both groups were equivalent for age, gender, race, receipt of thalidomide, hemoglobin levels, and type of treatment regimen for MM. No statistically significant differences existed between the experimental and control groups for fatigue, sleep, or performance (aerobic capacity). Statistically significant differences (p < 0.05) were found in each of the study outcomes for all patients as treatment progressed and patients experienced more fatigue and poorer nighttime sleep and performance (aerobic capacity). The effect of exercise seemed to be minimal on decreasing fatigue, improving sleep, and improving performance (aerobic capacity). Exercise is safe and has physiologic benefits for patients undergoing MM treatment; exercise combined with epoetin alfa helped alleviate anemia.
Popovic, Gordana; Harhara, Thana; Pope, Ashley; Al-Awamer, Ahmed; Banerjee, Subrata; Bryson, John; Mak, Ernie; Lau, Jenny; Hannon, Breffni; Swami, Nadia; Le, Lisa W; Zimmermann, Camilla
2018-06-01
Performance status measures are increasingly completed by patients in outpatient cancer settings, but are not well validated for this use. We assessed performance of a patient-reported functional status measure (PRFS, based on the Eastern Cooperative Oncology Group [ECOG]), compared with the physician-completed ECOG, in terms of agreement in ratings and prediction of survival. Patients and physicians independently completed five-point PRFS (lay version of ECOG) and ECOG measures on first consultation at an oncology palliative care clinic. We assessed agreement between PRFS and ECOG using weighted Kappa statistics, and used linear regression to determine factors associated with the difference between PRFS and ECOG ratings. We used the Kaplan-Meier method to estimate the patients' median survival, categorized by PRFS and ECOG, and assessed predictive accuracy of these measures using the C-statistic. For the 949 patients, there was moderate agreement between PRFS and ECOG (weighted Kappa 0.32; 95% CI: 0.28-0.36). On average, patients' ratings of performance status were worse by 0.31 points (95% CI: 0.25-0.37, P < 0.0001); this tendency was greater for younger patients (P = 0.002) and those with worse symptoms (P < 0.0001). Both PRFS and ECOG scores correlated well with overall survival; the C-statistic was higher for the average of PRFS and ECOG scores (0.619) than when reported individually (0.596 and 0.604, respectively). Patients tend to rate their performance status worse than physicians, particularly if they are younger or have greater symptom burden. Prognostic ability of performance status could be improved by using the average of patients and physician scores. Copyright © 2018 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
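As an illustration of the agreement statistic used here, the sketch below computes a weighted kappa for two ordinal five-point ratings. Linear weights are assumed, since the abstract does not state the weighting scheme, and the ratings are toy data.

```python
# Weighted kappa for two ordinal ratings on a 0-4 scale (linear weights).
import numpy as np

def weighted_kappa(a, b, n_cat=5):
    a, b = np.asarray(a), np.asarray(b)
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(a, b):                       # joint rating frequencies
        obs[i, j] += 1
    obs /= obs.sum()
    expected = np.outer(obs.sum(axis=1), obs.sum(axis=0))  # chance agreement
    idx = np.arange(n_cat)
    w = 1.0 - np.abs(idx[:, None] - idx[None, :]) / (n_cat - 1)  # agreement weights
    return (np.sum(w * obs) - np.sum(w * expected)) / (1.0 - np.sum(w * expected))

patient_prfs   = [0, 1, 2, 2, 3, 1, 4, 2]   # toy patient-reported ratings
physician_ecog = [0, 1, 1, 2, 2, 1, 3, 1]   # toy physician ratings
print(f"weighted kappa = {weighted_kappa(patient_prfs, physician_ecog):.2f}")
```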
Implementing the Routine Computation and Use of Roadway Performance Measures within WSDOT
DOT National Transportation Integrated Search
2017-08-01
The Washington State Department of Transportation (WSDOT) is one of the nation's leaders in calculating and using reliability statistics for urban freeways. The Department currently uses reliability measures for decision making for urban freeways-whe...
Elangovan, Satheesh; Brogden, Kim A; Dawson, Deborah V; Blanchette, Derek; Pagan-Rivera, Keyla; Stanford, Clark M; Johnson, Georgia K; Recker, Erica; Bowers, Rob; Haynes, William G; Avila-Ortiz, Gustavo
2014-01-01
To examine the relationships between three measures of body fat (body mass index [BMI], waist circumference [WC], and total body fat percent) and markers of inflammation around dental implants in stable periodontal maintenance patients. Seventy-three subjects were enrolled in this cross-sectional assessment. The study visit consisted of a physical examination that included anthropometric measurements of body composition (BMI, WC, body fat %); intraoral assessments were performed (full-mouth plaque index, periodontal and peri-implant comprehensive examinations) and peri-implant sulcular fluid (PISF) was collected on the study implants. Levels of interleukin (IL)-1α, IL-1β, IL-6, IL-8, IL-10, IL-12, IL-17, tumor necrosis factor-α, C-reactive protein, osteoprotegerin, leptin, and adiponectin in the PISF were measured using multiplex proteomic immunoassays. Correlation analysis with body fat measures was then performed using appropriate statistical methods. After adjustments for covariates, regression analyses revealed a statistically significant correlation between IL-1β in PISF and WC (R = 0.33; P = .0047). In this study of stable periodontal maintenance patients, a modest but statistically significant positive correlation was observed between the levels of IL-1β, a major proinflammatory cytokine, in PISF and WC, a reliable measure of central obesity.
Gene coexpression measures in large heterogeneous samples using count statistics.
Wang, Y X Rachel; Waterman, Michael S; Huang, Haiyan
2014-11-18
With the advent of high-throughput technologies making large-scale gene expression data readily available, developing appropriate computational tools to process these data and distill insights into systems biology has been an important part of the "big data" challenge. Gene coexpression is one of the earliest techniques developed that is still widely in use for functional annotation, pathway analysis, and, most importantly, the reconstruction of gene regulatory networks, based on gene expression data. However, most coexpression measures do not specifically account for local features in expression profiles. For example, it is very likely that the patterns of gene association may change or only exist in a subset of the samples, especially when the samples are pooled from a range of experiments. We propose two new gene coexpression statistics based on counting local patterns of gene expression ranks to take into account the potentially diverse nature of gene interactions. In particular, one of our statistics is designed for time-course data with local dependence structures, such as time series coupled over a subregion of the time domain. We provide asymptotic analysis of their distributions and power, and evaluate their performance against a wide range of existing coexpression measures on simulated and real data. Our new statistics are fast to compute, robust against outliers, and show comparable and often better general performance.
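In the spirit of such count statistics, the sketch below slides a short window along two expression profiles and counts windows whose within-window rank orderings agree (or are exactly reversed), a crude proxy for local coexpression. This illustrates the counting idea only; it is not the paper's exact statistics.

```python
# Count local rank patterns shared by two expression profiles.
import numpy as np

def local_pattern_count(x, y, k=3):
    x, y = np.asarray(x), np.asarray(y)
    same = opposite = 0
    for s in range(x.size - k + 1):
        rx = np.argsort(np.argsort(x[s:s + k]))    # within-window ranks, gene 1
        ry = np.argsort(np.argsort(y[s:s + k]))    # within-window ranks, gene 2
        same += np.array_equal(rx, ry)             # identical local pattern
        opposite += np.array_equal(rx, k - 1 - ry) # exactly reversed pattern
    return same, opposite

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 40)
g1 = np.sin(t) + 0.3 * rng.normal(size=t.size)     # two noisily co-expressed genes
g2 = np.sin(t) + 0.3 * rng.normal(size=t.size)
print("(same, opposite) windows:", local_pattern_count(g1, g2))
```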
Summary statistics in auditory perception.
McDermott, Josh H; Schemitsch, Michael; Simoncelli, Eero P
2013-04-01
Sensory signals are transduced at high resolution, but their structure must be stored in a more compact format. Here we provide evidence that the auditory system summarizes the temporal details of sounds using time-averaged statistics. We measured discrimination of 'sound textures' that were characterized by particular statistical properties, as normally result from the superposition of many acoustic features in auditory scenes. When listeners discriminated examples of different textures, performance improved with excerpt duration. In contrast, when listeners discriminated different examples of the same texture, performance declined with duration, a paradoxical result given that the information available for discrimination grows with duration. These results indicate that once these sounds are of moderate length, the brain's representation is limited to time-averaged statistics, which, for different examples of the same texture, converge to the same values with increasing duration. Such statistical representations produce good categorical discrimination, but limit the ability to discern temporal detail.
A flexibly shaped space-time scan statistic for disease outbreak detection and monitoring.
Takahashi, Kunihiko; Kulldorff, Martin; Tango, Toshiro; Yih, Katherine
2008-04-11
Early detection of disease outbreaks enables public health officials to implement disease control and prevention measures at the earliest possible time. A time periodic geographical disease surveillance system based on a cylindrical space-time scan statistic has been used extensively for disease surveillance along with the SaTScan software. In the purely spatial setting, many different methods have been proposed to detect spatial disease clusters. In particular, some spatial scan statistics are aimed at detecting irregularly shaped clusters which may not be detected by the circular spatial scan statistic. Based on the flexible purely spatial scan statistic, we propose a flexibly shaped space-time scan statistic for early detection of disease outbreaks. The performance of the proposed space-time scan statistic is compared with that of the cylindrical scan statistic using benchmark data. In order to compare their performances, we have developed a space-time power distribution by extending the purely spatial bivariate power distribution. Daily syndromic surveillance data in Massachusetts, USA, are used to illustrate the proposed test statistic. The flexible space-time scan statistic is well suited for detecting and monitoring disease outbreaks in irregularly shaped areas.
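The scan statistics discussed here maximize a Poisson likelihood ratio over candidate zones; a minimal sketch of that core computation (in Kulldorff's form) follows, with illustrative zone counts. Significance in practice comes from Monte Carlo replication of the maximum.

```python
# Poisson likelihood-ratio scan core: evaluate candidate zones, keep the maximum.
import numpy as np

def poisson_llr(c, e, C):
    """LLR for a zone with c observed and e expected cases out of C total."""
    if c <= e:                      # only elevated-rate zones are of interest
        return 0.0
    return c * np.log(c / e) + (C - c) * np.log((C - c) / (C - e))

C_total = 500
zones = {"zone A": (40, 20.0), "zone B": (12, 10.0), "zone C": (75, 60.0)}
for name, (c, e) in zones.items():
    print(f"{name}: LLR = {poisson_llr(c, e, C_total):.2f}")
best = max(zones, key=lambda z: poisson_llr(*zones[z], C_total))
print("most likely cluster:", best)
```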
Brain tissues volume measurements from 2D MRI using parametric approach
NASA Astrophysics Data System (ADS)
L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.
2018-04-01
The purpose of the paper is to propose a fully automated method for the volume assessment of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process for measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of the tissue distribution in the human brain and applies partial data restoration to improve precision. The approach is computationally efficient and independent of the segmentation algorithm used in the application.
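The maximum normalized residual test the abstract cites is Grubbs' test; a minimal sketch follows, run on illustrative volume measurements containing one discordant value.

```python
# Grubbs' (maximum normalized residual) test for a single outlier.
import numpy as np
from scipy import stats

def grubbs_outlier(x, alpha=0.05):
    x = np.asarray(x, dtype=float)
    n = x.size
    resid = np.abs(x - x.mean())
    g = resid.max() / x.std(ddof=1)                    # max normalized residual
    t2 = stats.t.ppf(1 - alpha / (2 * n), n - 2) ** 2  # two-sided critical point
    g_crit = (n - 1) / np.sqrt(n) * np.sqrt(t2 / (n - 2 + t2))
    return g > g_crit, int(np.argmax(resid)), g, g_crit

volumes = [1432, 1458, 1441, 1449, 1702, 1436, 1447]   # one discordant value
is_outlier, idx, g, g_crit = grubbs_outlier(volumes)
print(f"outlier: {is_outlier} at index {idx} (G = {g:.2f}, critical = {g_crit:.2f})")
```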
2017-01-01
Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers (QIBs) to measure changes in these features. Critical to the performance of a QIB in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a QIB for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design new studies. The Radiological Society of North America (RSNA) and the Quantitative Imaging Biomarker Alliance (QIBA), together with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of QIB performance studies so that results from multiple studies can be compared, contrasted, or combined. PMID:24919831
ERIC Educational Resources Information Center
Larbi-Apau, Josephine A.; Guerra-Lopez, Ingrid; Moseley, James L.; Spannaus, Timothy; Yaprak, Attila
2017-01-01
The study examined teaching faculty's educational technology-related performances (ETRP) as a measure for predicting eLearning management in Ghana. Valid data (n = 164) were collected and analyzed against the applied ISTE-NETS-T Performance Standards using descriptive statistics and ANOVA. Results showed an overall moderate performance with the…
Litwin, A S; Avgar, A C; Pronovost, P J
2012-01-01
Just as researchers and clinicians struggle to pin down the benefits attendant to health information technology (IT), management scholars have long labored to identify the performance effects arising from new technologies and from other organizational innovations, namely the reorganization of work and the devolution of decision-making authority. This paper applies lessons from that literature to theorize the likely sources of measurement error that yield the weak statistical relationship between measures of health IT and various performance outcomes. In so doing, it complements the evaluation literature's more conceptual examination of health IT's limited performance impact. The paper focuses on seven issues, in particular, that likely bias downward the estimated performance effects of health IT. They are (1) negative self-selection, (2) omitted or unobserved variables, (3) mismeasured contextual variables, (4) mismeasured health IT variables, (5) lack of attention to the specific stage of the adoption-to-use continuum being examined, (6) too short a time horizon, and (7) inappropriate units of analysis. The authors offer ways to counter these challenges. Looking forward more broadly, they suggest that researchers take an organizationally grounded approach that privileges internal validity over generalizability. This focus on statistical and empirical issues in health IT-performance studies should be complemented by a focus on theoretical issues, in particular, the ways that health IT creates value and apportions it to various stakeholders.
Relationships Between Potentiation Effects After Ballistic Half-Squats and Bilateral Symmetry.
Suchomel, Timothy J; Sato, Kimitake; DeWeese, Brad H; Ebben, William P; Stone, Michael H
2016-05-01
The purposes of this study were to examine the effect of ballistic concentric-only half-squats (COHS) on subsequent squat-jump (SJ) performances at various rest intervals and to examine the relationships between changes in SJ performance and bilateral symmetry at peak performance. Thirteen resistance-trained men performed an SJ immediately and every minute up to 10 min on dual force plates after 2 ballistic COHS repetitions at 90% of their 1-repetition-maximum COHS. SJ peak force, peak power, net impulse, and rate of force development (RFD) were compared using a series of 1-way repeated-measures ANOVAs. The percent change in performance at which peak performance occurred for each variable was correlated with the symmetry index scores at the corresponding time point using Pearson correlation coefficients. Statistical differences in peak power (P = .031) existed between rest intervals; however, no statistically significant pairwise comparisons were present (P > .05). No statistical differences in peak force (P = .201), net impulse (P = .064), and RFD (P = .477) were present between rest intervals. The relationships between changes in SJ performance and bilateral symmetry after the rest interval that produced the greatest performance for peak force (r = .300, P = .319), peak power (r = -.041, P = .894), net impulse (r = -.028, P = .927), and RFD (r = -.434, P = .138) were not statistically significant. Ballistic COHS may enhance SJ performance; however, the changes in performance were not related to bilateral symmetry.
Modeling, implementation, and validation of arterial travel time reliability : [summary].
DOT National Transportation Integrated Search
2013-11-01
Travel time reliability (TTR) has been proposed as a better measure of a facility's performance than a statistical measure like peak hour demand. TTR is based on more information about average traffic flows and longer time periods, thus inc...
Statistical Learning Is Not Affected by a Prior Bout of Physical Exercise.
Stevens, David J; Arciuli, Joanne; Anderson, David I
2016-05-01
This study examined the effect of a prior bout of exercise on implicit cognition. Specifically, we examined whether a prior bout of moderate-intensity exercise affected performance on a statistical learning task in healthy adults. A total of 42 participants were allocated to one of three conditions: a control group, a group that exercised for 15 min prior to the statistical learning task, and a group that exercised for 30 min prior to the statistical learning task. The participants in the exercise groups cycled at 60% of their respective VO2max. Each group demonstrated significant statistical learning, with similar levels of learning among the three groups. Contrary to previous research that has shown that a prior bout of exercise can affect performance on explicit cognitive tasks, the results of the current study suggest that the physiological stress induced by moderate-intensity exercise does not affect implicit cognition as measured by statistical learning. Copyright © 2015 Cognitive Science Society, Inc.
Statistical and Machine Learning forecasting methods: Concerns and ways forward
Makridakis, Spyros; Assimakopoulos, Vassilios
2018-01-01
Machine Learning (ML) methods have been proposed in the academic literature as alternatives to statistical ones for time series forecasting. Yet, scant evidence is available about their relative performance in terms of accuracy and computational requirements. The purpose of this paper is to evaluate such performance across multiple forecasting horizons using a large subset of 1045 monthly time series used in the M3 Competition. After comparing the post-sample accuracy of popular ML methods with that of eight traditional statistical ones, we found that the former are dominated across both accuracy measures used and for all forecasting horizons examined. Moreover, we observed that their computational requirements are considerably greater than those of statistical methods. The paper discusses the results, explains why the accuracy of ML models is below that of statistical ones and proposes some possible ways forward. The empirical results found in our research stress the need for objective and unbiased ways to test the performance of forecasting methods that can be achieved through sizable and open competitions allowing meaningful comparisons and definite conclusions. PMID:29584784
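One of the accuracy measures conventionally used with the M3 series is the symmetric MAPE; the sketch below shows its computation on placeholder forecasts, which stand in for the paper's actual statistical and ML methods.

```python
# Symmetric mean absolute percentage error (sMAPE), a standard M3 metric.
import numpy as np

def smape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return 100.0 * np.mean(2.0 * np.abs(forecast - actual) /
                           (np.abs(actual) + np.abs(forecast)))

y_true     = [112, 118, 132, 129, 121, 135]
forecast_a = [110, 112, 118, 132, 129, 121]   # placeholder "method A"
forecast_b = [111, 115, 124, 130, 125, 128]   # placeholder "method B"
print(f"method A sMAPE: {smape(y_true, forecast_a):6.2f}%")
print(f"method B sMAPE: {smape(y_true, forecast_b):6.2f}%")
```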
Effects of special composite stretching on the swing of amateur golf players
Lee, Joong-chul; Lee, Sung-wan; Yeo, Yun-ghi; Park, Gi Duck
2015-01-01
[Purpose] The study investigated stretching for a safer golf swing, compared to present stretching methods for proper swings, in order to examine the effects of stretching exercises on golf swings. [Subjects] The subjects were 20 amateur golf club members who were divided into two groups: an experimental group which performed stretching, and a control group which did not. The subjects had no bone deformity, muscle weakness, muscle soreness, or neurological problems. [Methods] A swing analyzer and a ROM measuring instrument were used as the measuring tools. The swing analyzer was a GS400-golf hit ball analyzer (Korea) and the ROM measuring instrument was a goniometer (Korea). [Results] The experimental group showed a statistically significant improvement in driving distance. After the special stretching training for golf, a statistically significant difference in hit-ball direction deviation after swings was found between the groups. The experimental group showed statistically significant decreases in hit-ball direction deviation. After the special stretching training for golf, statistically significant differences in hit-ball speed were found between the groups. The experimental group showed significant increases in hit-ball speed. [Conclusion] To examine the effects of a special stretching program for golf on golf swing-related factors, 20 male amateur golf club members performed a 12-week stretching training program. After the golf stretching training, statistically significant differences were found between the groups in hit-ball driving distance, direction deviation, deflection distance, and speed. PMID:25995553
Fundamentals of Sports Analytics.
Wasserman, Erin B; Herzog, Mackenzie M; Collins, Christy L; Morris, Sarah N; Marshall, Stephen W
2018-07-01
Recently, the importance of statistics and analytics in sports has increased. This review describes measures of sports injury and fundamentals of sports injury research with a brief overview of some of the emerging measures of sports performance. We describe research study designs that can be used to identify risk factors for injury, injury surveillance programs, and common measures of injury risk and association. Finally, we describe measures of physical performance and training and considerations for using these measures. This review provides sports medicine clinicians with an understanding of current research measures and considerations for designing sports injury research studies. Copyright © 2018 Elsevier Inc. All rights reserved.
Does daily nurse staffing match ward workload variability? Three hospitals' experiences.
Gabbay, Uri; Bukchin, Michael
2009-01-01
Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nurse staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with a predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures (dividing observed by expected). Two charts, for acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent, respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study did not differentiate crude nurse attendance, and it did not take patient severity into account since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility. Wards that did not meet appropriate staffing indicators prove the importance of, and the need for, process evaluation and monitoring. The methods presented for monitoring daily staffing appropriateness are simple to implement, either for intra-ward day-to-day variation using nurse workload capacity statistical process control charts or for inter-ward evaluation using a standardized measure of nurse workload intensity. The real challenge will be to develop planning systems and implement corrective interventions such as dynamic and flexible daily staffing, which will face difficulties and barriers. The paper fulfils the need for workforce utilization evaluation. A simple method using available data for evaluating daily staffing appropriateness, which is easy to implement and operate, is presented. The statistical process control method enables intra-ward evaluation, while standardization by converting crude into relative measures enables inter-ward analysis. The staffing indicator definitions enable performance evaluation. This original study uses statistical process control to develop simple standardization methods and applies straightforward statistical tools. The method is not limited to crude measures; rather, it can use weighted workload measures such as nursing acuity or weighted nurse level (i.e. grade/band).
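A minimal sketch of the charting logic follows: daily workload measures are standardized as observed/expected ratios and compared against control limits. Treating the acceptable and tolerable bands as plus or minus 2 and 3 standard deviations is our assumption for illustration; the paper defines its own limits.

```python
# Observed/expected standardization with SPC-style control limits.
import numpy as np

rng = np.random.default_rng(3)
expected = 14.0                                  # expected daily workload capacity
observed = rng.normal(expected, 1.5, size=60)    # 60 days of crude measurements

ratio = observed / expected                      # ward-specific relative measure
center, sd = ratio.mean(), ratio.std(ddof=1)
acc_lo, acc_hi = center - 2 * sd, center + 2 * sd  # "acceptable" band (assumed)
tol_lo, tol_hi = center - 3 * sd, center + 3 * sd  # "tolerable" band (assumed)

in_acc = np.mean((ratio > acc_lo) & (ratio < acc_hi))
in_tol = np.mean((ratio > tol_lo) & (ratio < tol_hi))
print(f"days within acceptable limits: {in_acc:.0%} (indicator threshold: 50%)")
print(f"days within tolerable limits:  {in_tol:.0%} (indicator threshold: 80%)")
```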
Summary of Key Operating Statistics: Data Collected from the 2009 Annual Institutional Report
ERIC Educational Resources Information Center
Accrediting Council for Independent Colleges and Schools, 2010
2010-01-01
The Accrediting Council for Independent Colleges and Schools (ACICS) provides the Summary of Key Operating Statistics (KOS) as an annual review of the performance and key measurements of the more than 800 private post-secondary institutions we accredit. This edition of the KOS contains information based on the 2009 Annual Institutional Reports…
An Empirical Investigation of Methods for Assessing Item Fit for Mixed Format Tests
ERIC Educational Resources Information Center
Chon, Kyong Hee; Lee, Won-Chan; Ansley, Timothy N.
2013-01-01
Empirical information regarding the performance of model-fit procedures has been a persistent need in measurement practice. Statistical procedures for evaluating item fit were applied to real test examples that consist of both dichotomously and polytomously scored items. The item fit statistics used in this study included PARSCALE's G²,…
ERIC Educational Resources Information Center
Shinaberger, Lee
2017-01-01
An instructor transformed an undergraduate business statistics course over 10 semesters from a traditional lecture course to a flipped classroom course. The researcher used a linear mixed model to explore the effectiveness of the evolution on student success as measured by exam performance. The results provide guidance to successfully implement a…
Filter Tuning Using the Chi-Squared Statistic
NASA Technical Reports Server (NTRS)
Lilly-Salkowski, Tyler
2017-01-01
The Goddard Space Flight Center (GSFC) Flight Dynamics Facility (FDF) performs orbit determination (OD) for the Aqua and Aura satellites. Both satellites are located in low Earth orbit (LEO) and are part of what is considered the A-Train satellite constellation. Both spacecraft are currently in the science phase of their respective missions. The FDF has recently been tasked with delivering definitive covariance for each satellite. The main source of orbit determination used for these missions is the Orbit Determination Toolkit (ODTK) developed by Analytical Graphics Inc. (AGI). This software uses an Extended Kalman Filter (EKF) to estimate the states of both spacecraft. The filter incorporates force modelling and ground station and space network measurements to determine spacecraft states. It also generates a covariance at each measurement. This covariance can be useful for evaluating the overall performance of the tracking data measurements and the filter itself. An accurate covariance is also useful for covariance propagation, which is utilized in collision avoidance operations, and when attempting to determine whether the current orbital solution will meet mission requirements in the future. This paper examines the use of the Chi-square statistic as a means of evaluating filter performance. The Chi-square statistic is calculated to determine the realism of a covariance based on the prediction accuracy and the covariance values at a given point in time. Once calculated, it is the distribution of this statistic that provides insight into the accuracy of the covariance. For the EKF to correctly calculate the covariance, error models associated with tracking data measurements must be accurately tuned. Overestimating or underestimating these error values can have detrimental effects on the overall filter performance. The filter incorporates ground station measurements, which can be tuned based on the accuracy of the individual ground stations. It also includes measurements from the NASA Space Network (SN), which can be affected by the assumed accuracy of the TDRS satellite state at the time of the measurement. The force modelling in the EKF is also an important factor that affects the propagation accuracy and covariance sizing. The dominant force in the LEO orbit regime is atmospheric drag, so accurate accounting of the drag force is especially important for the accuracy of the propagated state. The implementation of a box-and-wing model to improve drag estimation accuracy, and its overall effect on the covariance state, is explored. The process of tuning the EKF for Aqua and Aura support is described, including examination of the measurement errors of available observation types (Doppler and range) and methods of dealing with potentially volatile atmospheric drag modeling. Predictive accuracy and the distribution of the Chi-square statistic, calculated based on the ODTK EKF solutions, are assessed against accepted norms for the orbit regime.
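The covariance-realism idea behind such tuning can be sketched as follows: if the filter's reported covariance is realistic, the normalized squared state error follows a chi-square distribution with as many degrees of freedom as the state dimension. The synthetic errors below stand in for actual ODTK filter output.

```python
# Chi-square covariance-realism check (normalized estimation error squared).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_dim, n_epochs = 3, 500                      # e.g., position error over many epochs
P = np.diag([25.0, 16.0, 9.0])                # filter-reported covariance (m^2)

# draw errors from the *claimed* covariance, so the statistic should be chi-square
err = rng.multivariate_normal(np.zeros(n_dim), P, size=n_epochs)
Pinv = np.linalg.inv(P)
chi2_vals = np.einsum('ij,jk,ik->i', err, Pinv, err)  # e^T P^-1 e per epoch

print("mean statistic:", chi2_vals.mean(), f"(expect ~{n_dim})")
ks = stats.kstest(chi2_vals, 'chi2', args=(n_dim,))
print("KS p-value:", ks.pvalue)               # small p => covariance not realistic
```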
ERIC Educational Resources Information Center
Cui, Ying; Gierl, Mark; Guo, Qi
2016-01-01
The purpose of the current investigation was to describe how the artificial neural networks (ANNs) can be used to interpret student performance on cognitive diagnostic assessments (CDAs) and evaluate the performances of ANNs using simulation results. CDAs are designed to measure student performance on problem-solving tasks and provide useful…
The effect of warm-ups with stretching on the isokinetic moments of collegiate men.
Park, Hyoung-Kil; Jung, Min-Kyung; Park, Eunkyung; Lee, Chang-Young; Jee, Yong-Seok; Eun, Denny; Cha, Jun-Youl; Yoo, Jaehyun
2018-02-01
Performing warm-ups increases muscle temperature and blood flow, which contributes to improved exercise performance and reduced risk of injuries to muscles and tendons. Stretching increases the range of motion of the joints and is effective for the maintenance and enhancement of exercise performance and flexibility, as well as for injury prevention. However, stretching as a warm-up activity may temporarily decrease muscle strength, muscle power, and exercise performance. This study aimed to clarify the effect of stretching during warm-ups on muscle strength, muscle power, and muscle endurance in a nonathletic population. The subjects of this study consisted of 13 physically active male collegiate students with no medical conditions. A self-assessment questionnaire regarding how well the subjects felt about their physical abilities was administered to measure psychological readiness before and after the warm-up. Subjects performed a non-warm-up, warm-up, or warm-up regimen with stretching prior to the assessment of the isokinetic moments of knee joints. After the measurements, the respective variables were analyzed using nonparametric tests. First, no statistically significant intergroup differences were found in the flexor and extensor peak torques of the knee joints at 60°/sec, which were assessed to measure muscle strength. Second, no statistically significant intergroup differences were found in the flexor and extensor peak torques of the knee joints at 180°/sec, which were assessed to measure muscle power. Third, the total work of the knee joints at 240°/sec, intended to measure muscle endurance, was highest in the aerobic-stretch-warm-ups (ASW) group, but no statistically significant differences were found among the groups. Finally, the psychological readiness for physical activity according to the type of warm-up was significantly higher in ASW. Simple stretching during warm-ups appears to have no effect on variables of exercise physiology in nonathletes who participate in routine recreational sport activities. However, they seem to have a meaningful effect on exercise performance by affording psychological stability, preparation, and confidence in exercise performance.
Toplak, Maggie E; Sorge, Geoff B; Benoit, André; West, Richard F; Stanovich, Keith E
2010-07-01
The Iowa Gambling Task (IGT) has been used to study decision-making differences in many different clinical and developmental samples. It has been suggested that IGT performance captures abilities that are separable from cognitive abilities, including executive functions and intelligence. The purpose of the current review was to examine studies that have explicitly examined the relationship between IGT performance and these cognitive abilities. We included 43 studies that reported correlational analyses with IGT performance, including measures of inhibition, working memory, and set-shifting as indices of executive functions, as well as measures of verbal, nonverbal, and full-scale IQ as indices of intelligence. Overall, only a small proportion of the studies reported a statistically significant relationship between IGT performance and these cognitive abilities. The majority of studies reported a non-significant relationship. Of the minority of studies that reported statistically significant effects, effect sizes were, at best, small to modest, and confidence intervals were large, indicating that considerable variability in performance on the IGT is not captured by current measures of executive function and intelligence. These findings highlight the separability between decision-making on the IGT and cognitive abilities, which is consistent with recent conceptualizations that differentiate rationality from intelligence. 2010 Elsevier Ltd. All rights reserved.
Ion induced electron emission statistics under Agm- cluster bombardment of Ag
NASA Astrophysics Data System (ADS)
Breuers, A.; Penning, R.; Wucher, A.
2018-05-01
The electron emission from a polycrystalline silver surface under bombardment with Agm- cluster ions (m = 1, 2, 3) is investigated in terms of ion-induced kinetic excitation. The electron yield γ is determined directly by a current measurement method on the one hand and implicitly by the analysis of the electron emission statistics on the other. Successful measurements of the electron emission spectra ensure a deeper understanding of the ion-induced kinetic electron emission process, with particular emphasis on the effect of the projectile cluster size on the yield as well as on the emission statistics. The results allow a quantitative comparison with computer simulations performed for silver atoms and clusters impinging onto a silver surface.
A simple rain attenuation model for earth-space radio links operating at 10-35 GHz
NASA Technical Reports Server (NTRS)
Stutzman, W. L.; Yon, K. M.
1986-01-01
The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.
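The core of such models is a power law for specific attenuation scaled by an effective path length, as sketched below; the coefficients and path-reduction factor are placeholders, not the paper's fitted values.

```python
# Power-law rain attenuation: gamma = k * R**alpha (dB/km) over an
# effective slant path. Coefficients are illustrative placeholders.
def rain_attenuation_db(rate_mm_h, k, alpha, slant_km, reduction=0.7):
    gamma = k * rate_mm_h ** alpha       # specific attenuation, dB/km
    return gamma * slant_km * reduction  # effective-path attenuation, dB

for R in (5, 25, 50, 100):               # mm/h, from drizzle to intense rain
    A = rain_attenuation_db(R, k=0.09, alpha=1.02, slant_km=5.0)
    print(f"R = {R:3d} mm/h -> A = {A:5.1f} dB")
```

Feeding a measured or modeled rain-rate exceedance curve through a mapping of this kind yields the annual attenuation statistics that the paper compares against its 62-experiment data base.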
da Silva, R C V; de Sá, C C; Pascual-Vaca, Á O; de Souza Fontes, L H; Herbella Fernandes, F A M; Dib, R A; Blanco, C R; Queiroz, R A; Navarro-Rodriguez, T
2013-07-01
The treatment of gastroesophageal reflux disease may be clinical or surgical. The clinical treatment consists basically of the use of drugs; however, there are new techniques to complement this treatment, and osteopathic intervention in the diaphragmatic muscle is one of these. The objective of the study was to compare pressure values in the esophageal manometry examination of the lower esophageal sphincter (LES) before and immediately after osteopathic intervention in the diaphragm muscle. Thirty-eight patients with gastroesophageal reflux disease - 16 submitted to a sham technique and 22 submitted to the osteopathic technique - were randomly selected. The average respiratory pressure (ARP) and the maximum expiratory pressure (MEP) of the LES were measured by manometry before and after the technique at the point of highest pressure. Statistical analysis was performed using Student's t-test and the Mann-Whitney test, and the magnitude of the effect of the proposed technique was measured using Cohen's index. A statistically significant difference in favor of the osteopathic technique, relative to the group of patients who underwent the sham technique, was found in three out of four comparisons of LES pressure measures, including the ARP (P = 0.027). The MEP showed no statistically significant difference (P = 0.146). The values of Cohen's d for the same measures were: ARP, d = 0.80; MEP, d = 0.52. The osteopathic manipulative technique produces a positive increment in LES pressure soon after its performance. © 2012 Copyright the Authors. Journal compilation © 2012, Wiley Periodicals, Inc. and the International Society for Diseases of the Esophagus.
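The abstract reports effect sizes as Cohen's d. A minimal sketch of that calculation for two independent groups, using the pooled standard deviation, follows; the sample values are invented for illustration and are not the study's data.

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d for two independent samples, using the pooled SD."""
    a, b = np.asarray(group_a, dtype=float), np.asarray(group_b, dtype=float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Illustrative ARP changes (mmHg) for treatment vs. sham groups.
rng = np.random.default_rng(0)
treated = rng.normal(4.0, 3.0, 22)   # 22 patients, osteopathic technique
sham = rng.normal(1.5, 3.0, 16)      # 16 patients, sham technique
print(f"Cohen's d = {cohens_d(treated, sham):.2f}")
```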
Hospital performance measures and 30-day readmission rates.
Stefan, Mihaela S; Pekow, Penelope S; Nsa, Wato; Priya, Aruna; Miller, Lauren E; Bratzler, Dale W; Rothberg, Michael B; Goldberg, Robert J; Baus, Kristie; Lindenauer, Peter K
2013-03-01
Lowering hospital readmission rates has become a primary target for the Centers for Medicare & Medicaid Services, but studies of the relationship between adherence to the recommended hospital care processes and readmission rates have provided inconsistent and inconclusive results. To examine the association between hospital performance on Medicare's Hospital Compare process quality measures and 30-day readmission rates for patients with acute myocardial infarction (AMI), heart failure, and pneumonia, and for those undergoing major surgery. We assessed hospital performance on process measures using the 2007 Hospital Inpatient Quality Reporting Program. The process measures for each condition were aggregated into two separate measures: Overall Measure (OM) and Appropriate Care Measure (ACM) scores. Readmission rates were calculated using Medicare claims. The risk-standardized 30-day all-cause readmission rate was calculated as the ratio of the predicted to the expected rate, scaled by the overall mean readmission rate. We calculated the predicted readmission rate using hierarchical generalized linear models, adjusting for patient-level factors. Among patients aged ≥ 66 years, the median OM score ranged from 79.4% for abdominal surgery to 95.7% for AMI, and the median ACM scores ranged from 45.8% for abdominal surgery to 87.9% for AMI. We observed a statistically significant, but weak, correlation between performance scores and readmission rates for pneumonia (correlation coefficient R = 0.07), AMI (R = 0.10), and orthopedic surgery (R = 0.06). The difference in the mean readmission rate between hospitals in the 1st and 4th quartiles of process measure performance was statistically significant only for AMI (0.25 percentage points) and pneumonia (0.31 percentage points). Performance on process measures explained less than 1% of hospital-level variation in readmission rates. Hospitals with greater adherence to recommended care processes did not achieve meaningfully better 30-day hospital readmission rates compared to those with lower levels of performance.
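The risk-standardization step lends itself to a brief worked example: the predicted-to-expected ratio scaled by the overall mean rate. In the sketch below, the per-patient probabilities are invented, and the hierarchical GLM that would actually produce them is omitted.

```python
import numpy as np

# Illustrative per-patient 30-day readmission probabilities for one hospital:
# "predicted" includes the hospital-specific effect, "expected" uses only
# patient-level risk factors (the hierarchical GLM fit itself is omitted).
predicted = np.array([0.18, 0.22, 0.15, 0.30, 0.12])
expected = np.array([0.20, 0.20, 0.14, 0.25, 0.15])
overall_mean = 0.19  # overall mean readmission rate (illustrative)

rsrr = predicted.sum() / expected.sum() * overall_mean
print(f"Risk-standardized 30-day readmission rate: {rsrr:.3f}")
```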
Coming up short on nonfinancial performance measurement.
Ittner, Christopher D; Larcker, David F
2003-11-01
Companies in increasing numbers are measuring customer loyalty, employee satisfaction, and other nonfinancial areas of performance that they believe affect profitability. But they've failed to relate these measures to their strategic goals or establish a connection between activities undertaken and financial outcomes achieved. Failure to make such connections has led many companies to misdirect their investments and reward ineffective managers. Extensive field research now shows that businesses make some common mistakes when choosing, analyzing, and acting on their nonfinancial measures. Among these mistakes: They set the wrong performance targets because they focus too much on short-term financial results, and they use metrics that lack strong statistical validity and reliability. As a result, the companies can't demonstrate that improvements in nonfinancial measures actually affect their financial results. The authors lay out a series of steps that will allow companies to realize the genuine promise of nonfinancial performance measures. First, develop a model that proposes a causal relationship between the chosen nonfinancial drivers of strategic success and specific outcomes. Next, take careful inventory of all the data within your company. Then use established statistical methods for validating the assumed relationships and continue to test the model as market conditions evolve. Finally, base action plans on analysis of your findings, and determine whether those plans and their investments actually produce the desired results. Nonfinancial measures will offer little guidance unless you use a process for choosing and analyzing them that relies on sophisticated quantitative and qualitative inquiries into the factors actually contributing to economic results.
Quality of blood pressure measurement in community health centres.
Sandoya-Olivera, Edgardo; Ferreira-Umpiérrez, Augusto; Machado-González, Federico
To determine the quality of the blood pressure measurements performed during routine care in community health centres. An observational, cross-sectional study was conducted in 5 private and public health centres in Maldonado, Uruguay, in July-August 2015. The observations were made during measurements performed by health personnel, using the requirements established by the American Heart Association. An analysis was made of 36 variables, grouped into categories related to environment, equipment, interrogation, patient, and observer. Statistical analysis was performed using the chi-squared test or Fisher's exact test, with statistical significance set at 5% (p<.05). The measurements were made by a registered nurse or nurse in 71% of cases, by a physician in 20%, and by a student nurse in 9%. An aneroid sphygmomanometer was used in 89% of measurements, and a mercury one in 11%. Satisfactory results were found in the variables related to environment (93%), equipment (99%), and patient attitude (82%); results were intermediate for operator attitudes (64%) and poor for the interrogation (18%), with a mean of 69% of variables correct per measurement. The main flaws in the procedure were attributable to the operator. The measurement of blood pressure is a manoeuvre that healthcare professionals perform thousands of times a year. If the measurement is used for the diagnosis and/or chronic management of arterial hypertension, failure to apply the established recommendations systematically leads to inappropriate care for a very significant number of patients. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Time series, periodograms, and significance
NASA Astrophysics Data System (ADS)
Hernandez, G.
1999-05-01
The geophysical literature shows a wide and conflicting usage of methods employed to extract meaningful information on coherent oscillations from measurements. This makes it difficult, if not impossible, to relate the findings reported by different authors. Therefore, we have undertaken a critical investigation of the tests and methodology used for determining the presence of statistically significant coherent oscillations in periodograms derived from time series. Statistical significance tests are only valid when performed on the independent frequencies present in a measurement. Both the number of possible independent frequencies in a periodogram and the significance tests are determined by the number of degrees of freedom, which is the number of true independent measurements, present in the time series, rather than the number of sample points in the measurement. The number of degrees of freedom is an intrinsic property of the data, and it must be determined from the serial coherence of the time series. As part of this investigation, a detailed study has been performed which clearly illustrates the deleterious effects that the apparently innocent and commonly used processes of filtering, de-trending, and tapering of data have on periodogram analysis and the consequent difficulties in the interpretation of the statistical significance thus derived. For the sake of clarity, a specific example of actual field measurements containing unevenly-spaced measurements, gaps, etc., as well as synthetic examples, have been used to illustrate the periodogram approach, and pitfalls, leading to the (statistical) significance tests for the presence of coherent oscillations. Among the insights of this investigation are: (1) the concept of a time series being (statistically) band limited by its own serial coherence and thus having a critical sampling rate which defines one of the necessary requirements for the proper statistical design of an experiment; (2) the design of a critical test for the maximum number of significant frequencies which can be used to describe a time series, while retaining intact the variance of the test sample; (3) a demonstration of the unnecessary difficulties that manipulation of the data brings into the statistical significance interpretation of said data; and (4) the resolution and correction of the apparent discrepancy in significance results obtained by the use of the conventional Lomb-Scargle significance test, when compared with the long-standing Schuster-Walker and Fisher tests.
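As a concrete illustration of periodogram analysis on unevenly spaced data, here is a minimal Lomb-Scargle sketch in Python (not the author's code). Note, per the abstract, that any significance threshold must rest on the number of independent frequencies implied by the data's serial coherence, not on how finely the frequency grid below is sampled.

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled synthetic series: a coherent oscillation plus noise.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 100.0, 200))        # irregular sampling times
y = np.sin(2 * np.pi * 0.2 * t) + rng.normal(0.0, 0.5, t.size)

# Angular test frequencies; the grid density does NOT set the number of
# independent frequencies available for significance testing.
omega = 2 * np.pi * np.linspace(0.01, 0.5, 1000)
power = lombscargle(t, y - y.mean(), omega, normalize=True)

best = omega[np.argmax(power)] / (2 * np.pi)
print(f"Peak at ~{best:.3f} cycles per unit time")
```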
Kim, Youngshin
2008-01-01
The purpose of this study was to investigate the effects of two music therapy approaches, improvisation-assisted desensitization and music-assisted progressive muscle relaxation and imagery, on ameliorating the symptoms of music performance anxiety (MPA) among student pianists. Thirty female college pianists (N = 30) were randomly assigned to one of two conditions: (a) improvised music-assisted desensitization group (n = 15), or (b) music-assisted progressive muscle relaxation (PMR) and imagery group (n = 15). All participants received 6 weekly music therapy sessions according to their assigned group. Two lab performances, one before and one after the 6 music therapy sessions, were provided as the performance stimuli for MPA. All participants completed pretest and posttest measures that included four types of visual analogue scales (MPA, stress, tension, and comfort), the state portion of Spielberger's State-Trait Anxiety Inventory (STAI), and the Music Performance Anxiety Questionnaire (MPAQ) developed by Lehrer, Goldman, and Strommen (1990). Participants' finger temperatures were also measured. When results of the music-assisted PMR and imagery condition were compared from pretest to posttest, statistically significant differences occurred in 6 out of the 7 measures: MPA, tension, comfort, STAI, MPAQ, and finger temperature, indicating that the music-assisted PMR and imagery treatment was very successful in reducing MPA. For the improvisation-assisted desensitization condition, the statistically significant decreases in tension and STAI scores, together with increases in finger temperature, indicated that this approach was effective in managing MPA to some extent. When the difference scores for the two approaches were compared, there was no statistically significant difference between the two approaches for any of the seven measures. Therefore, no one treatment condition appeared more effective than the other. Although statistically significant differences were not found between the two groups, a visual analysis of mean difference scores revealed that the music-assisted PMR and imagery condition resulted in greater mean differences from pretest to posttest than the improvisation-assisted desensitization condition across all seven measures. This result may be due to the fact that all participants in the music-assisted PMR and imagery condition followed the procedure easily, while two of the 15 participants in the improvisation-assisted desensitization group had difficulty improvising.
Santori, G; Andorno, E; Morelli, N; Casaccia, M; Bottino, G; Di Domenico, S; Valente, U
2009-05-01
In many Western countries a "minimum volume rule" policy has been adopted as a quality measure for complex surgical procedures. In Italy, the National Transplant Centre set the minimum number of orthotopic liver transplantation (OLT) procedures/y at 25/center. OLT procedures performed in a single center over a reasonably large period may be treated as a time series to evaluate trend, seasonal cycles, and nonsystematic fluctuations. Between January 1, 1987 and December 31, 2006, we performed 563 cadaveric donor OLTs in adult recipients. During 2007, there were another 28 procedures. The greatest numbers of OLTs/y were performed in 2001 (n = 51), 2005 (n = 50), and 2004 (n = 49). A time series analysis performed using R (R Foundation for Statistical Computing, Vienna, Austria), a free software environment for statistical computing and graphics, showed an incremental trend after exponential smoothing as well as after seasonal decomposition. The predicted OLTs/mo for 2007, calculated with Holt-Winters exponential smoothing applied to the preceding 1987-2006 period, helped to identify the months in which there was a major difference between predicted and performed procedures. The time series approach may be helpful to establish a minimum volume/y at a single-center level.
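The forecasting step can be sketched briefly. The example below fits additive Holt-Winters smoothing to a synthetic monthly series and forecasts 12 months ahead, assuming the statsmodels library is available; the counts are invented, not the centre's data, and the original analysis was done in R rather than Python.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Illustrative monthly procedure counts, 1987-2006 (240 months): synthetic,
# with a mild upward trend and a seasonal cycle.
rng = np.random.default_rng(2)
months = pd.date_range("1987-01", periods=240, freq="MS")
counts = (1.5 + 0.01 * np.arange(240)
          + 0.8 * np.sin(2 * np.pi * np.arange(240) / 12)
          + rng.poisson(1.0, 240))
series = pd.Series(counts, index=months)

# Additive Holt-Winters smoothing, then a 12-month-ahead forecast for 2007;
# large gaps between forecast and observed months flag atypical activity.
fit = ExponentialSmoothing(series, trend="add", seasonal="add",
                           seasonal_periods=12).fit()
print(fit.forecast(12).round(1))
```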
NASA Astrophysics Data System (ADS)
Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.
We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.
Noman, Abu Hanifa Md; Gee, Chan Sok; Isa, Che Ruhana
2017-01-01
This study examines the influence of competition on the financial stability of the commercial banks of the Association of Southeast Asian Nations (ASEAN) over the 1990 to 2014 period. Panzar-Rosse H-statistic, Lerner index and Herfindahl-Hirschman Index (HHI) are used as measures of competition, while Z-score, non-performing loan (NPL) ratio and equity ratio are used as measures of financial stability. Two-step system Generalized Method of Moments (GMM) estimates demonstrate that competition measured by H-statistic is positively related to Z-score and equity ratio, and negatively related to non-performing loan ratio. Conversely, market power measured by Lerner index is negatively related to Z-score and equity ratio and positively related to NPL ratio. These results strongly support the competition-stability view for ASEAN banks. We also capture the non-linear relationship between competition and financial stability by incorporating a quadratic term of competition in our models. The results show that the coefficient of the quadratic term of H-statistic is negative for the Z-score model given a positive coefficient of the linear term in the same model. These results support the non-linear relationship between competition and financial stability of the banking sector. The study contains significant policy implications for improving the financial stability of the commercial banks.
Gee, Chan Sok; Isa, Che Ruhana
2017-01-01
This study examines the influence of competition on the financial stability of the commercial banks of the Association of Southeast Asian Nations (ASEAN) over the 1990 to 2014 period. Panzar-Rosse H-statistic, Lerner index and Herfindahl-Hirschman Index (HHI) are used as measures of competition, while Z-score, non-performing loan (NPL) ratio and equity ratio are used as measures of financial stability. Two-step system Generalized Method of Moments (GMM) estimates demonstrate that competition measured by H-statistic is positively related to Z-score and equity ratio, and negatively related to non-performing loan ratio. Conversely, market power measured by Lerner index is negatively related to Z-score and equity ratio and positively related to NPL ratio. These results strongly support the competition-stability view for ASEAN banks. We also capture the non-linear relationship between competition and financial stability by incorporating a quadratic term of competition in our models. The results show that the coefficient of the quadratic term of H-statistic is negative for the Z-score model given a positive coefficient of the linear term in the same model. These results support the non-linear relationship between competition and financial stability of the banking sector. The study contains significant policy implications for improving the financial stability of the commercial banks. PMID:28486548
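The bank Z-score used here as a stability measure has a standard construction; the sketch below shows one common variant (mean ROA plus the equity-to-assets ratio, divided by the standard deviation of ROA), with invented inputs. The abstract does not specify the exact variant used, so this is illustrative only.

```python
import numpy as np

def bank_z_score(roa_series, equity_to_assets):
    """Z-score as (mean ROA + equity/assets) / sd(ROA): the number of ROA
    standard deviations by which returns can fall before equity is
    exhausted, so higher values indicate greater stability."""
    roa = np.asarray(roa_series, dtype=float)
    return (roa.mean() + equity_to_assets) / roa.std(ddof=1)

# Illustrative annual ROA history and capital ratio for one bank.
print(f"Z-score: {bank_z_score([0.012, 0.015, 0.009, 0.014, 0.011], 0.10):.1f}")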
NASA Astrophysics Data System (ADS)
Hincks, Ian; Granade, Christopher; Cory, David G.
2018-01-01
The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
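To make the inference problem concrete, here is a heavily simplified sketch: a two-outcome "coin" whose faces produce Poisson-distributed photon counts with different means, estimated by maximum likelihood. The rates are invented, and dark counts, finite visibility, and imperfect state preparation from the paper's full model are omitted.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Simplified model: each shot yields a Poisson photon count whose mean
# depends on the spin state ("bright" vs. "dark"); rates are illustrative.
alpha0, alpha1 = 1.3, 0.9
rng = np.random.default_rng(3)
p_true = 0.7                     # underlying "coin" bias to be estimated
bright = rng.random(5000) < p_true
counts = rng.poisson(np.where(bright, alpha0, alpha1))

def neg_log_likelihood(p):
    # Mixture of two Poisson distributions weighted by the coin bias p.
    like = p * poisson.pmf(counts, alpha0) + (1 - p) * poisson.pmf(counts, alpha1)
    return -np.sum(np.log(like))

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 1 - 1e-6),
                         method="bounded")
print(f"maximum-likelihood estimate of p: {result.x:.3f} (true value {p_true})")
```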
Statistical analysis of CCSN/SS7 traffic data from working CCS subnetworks
NASA Astrophysics Data System (ADS)
Duffy, Diane E.; McIntosh, Allen A.; Rosenstein, Mark; Willinger, Walter
1994-04-01
In this paper, we report on an ongoing statistical analysis of actual CCSN traffic data. The data consist of approximately 170 million signaling messages collected from a variety of different working CCS subnetworks. The key findings from our analysis concern: (1) the characteristics of both the telephone call arrival process and the signaling message arrival process; (2) the tail behavior of the call holding time distribution; and (3) the observed performance of the CCSN with respect to a variety of performance and reliability measurements.
NASA Technical Reports Server (NTRS)
Huang, N. E.; Long, S. R.
1980-01-01
Laboratory experiments were performed to measure the surface elevation probability density function and associated statistical properties for a wind-generated wave field. The laboratory data were compared with some limited field data. The statistical properties of the surface elevation were processed for comparison with the results derived from the Longuet-Higgins (1963) theory. It is found that, even for the highly non-Gaussian cases, the distribution function proposed by Longuet-Higgins still gives good approximations.
Microdose acquisition in adolescent leg length discrepancy using a low-dose biplane imaging system.
Jensen, Janni; Mussmann, Bo R; Hjarbæk, John; Al-Aubaidi, Zaid; Pedersen, Niels W; Gerke, Oke; Torfing, Trine
2017-09-01
Background Children with leg length discrepancy often undergo repeat imaging. Therefore, every effort to reduce radiation dose is important. Using low dose preview images and noise reduction software rather than diagnostic images for length measurements might contribute to reducing dose. Purpose To compare leg length measurements performed on diagnostic images and low dose preview images both acquired using a low-dose bi-planar imaging system. Material and Methods Preview and diagnostic images from 22 patients were retrospectively collected (14 girls, 8 boys; mean age, 12.8 years; age range, 10-15 years). All images were anonymized and measured independently by two musculoskeletal radiologists. Three sets of measurements were performed on all images; the mechanical axis lines of the femur and the tibia as well as the anatomical line of the entire extremity. Statistical significance was tested with a paired t-test. Results No statistically significant difference was found between measurements performed on the preview and on the diagnostic image. The mean tibial length difference between the observers was -0.06 cm (95% confidence interval [CI], -0.12 to 0.01) and -0.08 cm (95% CI, -0.21 to 0.05), respectively; 0.10 cm (95% CI, 0.02-0.17) and 0.06 cm (95% CI, -0.02 to 0.14) for the femoral measurements and 0.12 cm (95% CI, -0.05 to 0.26) and 0.08 cm (95% CI, -0.02 to 0.19) for total leg length discrepancy. ICCs were >0.99 indicating excellent inter- and intra-rater reliability. Conclusion The data strongly imply that leg length measurements performed on preview images from a low-dose bi-planar imaging system are comparable to measurements performed on diagnostic images.
Does sensitivity measured from screening test-sets predict clinical performance?
NASA Astrophysics Data System (ADS)
Soh, BaoLin P.; Lee, Warwick B.; Mello-Thoms, Claudia R.; Tapia, Kriscia A.; Ryan, John; Hung, Wai Tak; Thompson, Graham J.; Heard, Rob; Brennan, Patrick C.
2014-03-01
Aim: To examine the relationship between sensitivity measured from the BREAST test-set and clinical performance. Background: Although the UK and Australia national breast screening programs have regarded PERFORMS and BREAST test-set strategies as possible methods of estimating readers' clinical efficacy, the relationship between test-set and real life performance results has never been satisfactorily understood. Methods: Forty-one radiologists from BreastScreen New South Wales participated in this study. Each reader interpreted a BREAST test-set which comprised sixty de-identified mammographic examinations sourced from the BreastScreen Digital Imaging Library. Spearman's rank correlation coefficient was used to compare the sensitivity measured from the BREAST test-set with screen readers' clinical audit data. Results: Statistically significant positive moderate correlations were found between test-set sensitivity and each of the following metrics: rate of invasive cancer per 10 000 reads (r=0.495; p < 0.01); rate of small invasive cancer per 10 000 reads (r=0.546; p < 0.001); detection rate of all invasive cancers and DCIS per 10 000 reads (r=0.444; p < 0.01). Conclusion: Comparison between sensitivity measured from the BREAST test-set and real life detection rate demonstrated statistically significant positive moderate correlations, indicating that such test-set strategies can reflect readers' clinical performance and be used as a quality assurance tool. The strength of correlation demonstrated in this study was higher than previously found by others.
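A minimal sketch of the correlation analysis described, using Spearman's rank coefficient on synthetic reader data; the values are illustrative, not the study's measurements.

```python
import numpy as np
from scipy.stats import spearmanr

# Illustrative paired data: test-set sensitivity vs. clinical detection
# rate per reader (synthetic values for 41 readers).
rng = np.random.default_rng(4)
sensitivity = rng.uniform(0.6, 0.95, 41)
detection_rate = 30 + 40 * sensitivity + rng.normal(0, 5, 41)  # per 10 000 reads

r, p = spearmanr(sensitivity, detection_rate)
print(f"Spearman r = {r:.2f}, p = {p:.4f}")
```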
Česaitienė, Gabrielė; Česaitis, Kęstutis; Junevičius, Jonas; Venskutonis, Tadas
2017-07-04
BACKGROUND The aim of this study was to compare the reliability of panoramic radiography (PR) and cone beam computed tomography (CBCT) in the evaluation of the distance of the roots of lateral teeth to the inferior alveolar nerve canal (IANC). MATERIAL AND METHODS 100 PR and 100 CBCT images that met the selection criteria were selected from the database. In PR images, the distances were measured using an electronic caliper with 0.01 mm accuracy and a white-light x-ray film reviewer. Actual values of the measurements were calculated taking into consideration the magnification used in PR images (130%). Measurements on CBCT images were performed using i-CAT Vision software. Statistical data analysis was performed using R software and applying Welch's t-test and the Wilcoxon test. RESULTS There was no statistically significant difference in the mean distance from the root of the second premolar and the mesial and distal roots of the first molar to the IANC between PR and CBCT images. The difference in the mean distance from the mesial and distal roots of the second and the third molars to the IANC measured in PR and CBCT images was statistically significant. CONCLUSIONS PR may be uninformative or misleading when measuring the distance from the mesial and distal roots of the second and the third molars to the IANC.
Česaitienė, Gabrielė; Česaitis, Kęstutis; Junevičius, Jonas; Venskutonis, Tadas
2017-01-01
Background The aim of this study was to compare the reliability of panoramic radiography (PR) and cone beam computed tomography (CBCT) in the evaluation of the distance of the roots of lateral teeth to the inferior alveolar nerve canal (IANC). Material/Methods 100 PR and 100 CBCT images that met the selection criteria were selected from the database. In PR images, the distances were measured using an electronic caliper with 0.01 mm accuracy and a white-light x-ray film reviewer. Actual values of the measurements were calculated taking into consideration the magnification used in PR images (130%). Measurements on CBCT images were performed using i-CAT Vision software. Statistical data analysis was performed using R software and applying Welch’s t-test and the Wilcoxon test. Results There was no statistically significant difference in the mean distance from the root of the second premolar and the mesial and distal roots of the first molar to the IANC between PR and CBCT images. The difference in the mean distance from the mesial and distal roots of the second and the third molars to the IANC measured in PR and CBCT images was statistically significant. Conclusions PR may be uninformative or misleading when measuring the distance from the mesial and distal roots of the second and the third molars to the IANC. PMID:28674379
Systems and methods for detection of blowout precursors in combustors
Lieuwen, Tim C.; Nair, Suraj
2006-08-15
The present invention comprises systems and methods for detecting flame blowout precursors in combustors. The blowout precursor detection system comprises a combustor, a pressure measuring device, and blowout precursor detection unit. A combustion controller may also be used to control combustor parameters. The methods of the present invention comprise receiving pressure data measured by an acoustic pressure measuring device, performing one or a combination of spectral analysis, statistical analysis, and wavelet analysis on received pressure data, and determining the existence of a blowout precursor based on such analyses. The spectral analysis, statistical analysis, and wavelet analysis further comprise their respective sub-methods to determine the existence of blowout precursors.
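As a rough illustration of the kinds of analyses named in the claims, the sketch below computes a few spectral and statistical features from a synthetic pressure trace. The band limits and features are assumptions for illustration, not the patented detection logic.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import kurtosis

def precursor_features(pressure, fs):
    """A few spectral and statistical features of a combustor pressure trace.
    A detector could compare these against thresholds learned from stable
    operation (band edges here are illustrative)."""
    freqs, psd = welch(pressure, fs=fs, nperseg=1024)
    band = (freqs > 10) & (freqs < 500)       # assumed band of interest
    df = freqs[1] - freqs[0]
    return {
        "band_power": float(psd[band].sum() * df),  # low-frequency energy
        "kurtosis": float(kurtosis(pressure)),      # intermittency near blowout
        "rms": float(np.sqrt(np.mean(pressure**2))),
    }

# Synthetic pressure trace sampled at 5 kHz: broadband noise plus a 210 Hz tone.
rng = np.random.default_rng(5)
t = np.arange(50000) / 5000.0
signal = rng.normal(0.0, 1.0, t.size) + 0.3 * np.sin(2 * np.pi * 210 * t)
print(precursor_features(signal, fs=5000))
```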
ERIC Educational Resources Information Center
Awang-Hashim, Rosa; O'Neil, Harold F., Jr.; Hocevar, Dennis
2002-01-01
The relations between motivational constructs, effort, self-efficacy and worry, and statistics achievement were investigated in a sample of 360 undergraduates in Malaysia. Both trait (cross-situational) and state (task-specific) measures of each construct were used to test a mediational trait → state → performance (TSP) model. As hypothesized,…
Air Combat Training: Good Stick Index Validation. Final Report for Period 3 April 1978-1 April 1979.
ERIC Educational Resources Information Center
Moore, Samuel B.; And Others
A study was conducted to investigate and statistically validate a performance measuring system (the Good Stick Index) in the Tactical Air Command Combat Engagement Simulator I (TAC ACES I) Air Combat Maneuvering (ACM) training program. The study utilized a twelve-week sample of eighty-nine student pilots to statistically validate the Good Stick…
45 CFR 305.65 - State cooperation in audit.
Code of Federal Regulations, 2010 CFR
2010-10-01
... PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.65 State cooperation in... submitted on the Federal statistical and financial reports that will be used to calculate the State's performance. The State shall also make available personnel associated with the State's IV-D program to provide...
USDA-ARS?s Scientific Manuscript database
An analytical and statistical method has been developed to measure the ultrasound-enhanced bioscouring performance of milligram quantities of endo- and exo-polygalacturonase enzymes obtained from Rhizopus oryzae fungi. UV-Vis spectrophotometric data and a general linear mixed models procedure indic...
Tooth-size discrepancy: A comparison between manual and digital methods
Correia, Gabriele Dória Cabral; Habib, Fernando Antonio Lima; Vogel, Carlos Jorge
2014-01-01
Introduction Technological advances in Dentistry have emerged primarily in the area of diagnostic tools. One example is the 3D scanner, which can transform plaster models into three-dimensional digital models. Objective This study aimed to assess the reliability of tooth size-arch length discrepancy analysis measurements performed on three-dimensional digital models, and compare these measurements with those obtained from plaster models. Material and Methods To this end, plaster models of lower dental arches and their corresponding three-dimensional digital models acquired with a 3Shape R700T scanner were used. All of them had lower permanent dentition. Four different tooth size-arch length discrepancy calculations were performed on each model, two of which were performed by manual methods using calipers and brass wire, and two by digital methods using linear measurements and parabolas. Results Data were statistically assessed using the Friedman test, and no statistically significant differences were found among the methods (P > 0.05); only the linear digital method showed a slight deviation, which did not reach statistical significance. Conclusions Based on the results, it is reasonable to assert that any of these resources used by orthodontists to clinically assess tooth size-arch length discrepancy can be considered reliable. PMID:25279529
Nonlinearity analysis of measurement model for vision-based optical navigation system
NASA Astrophysics Data System (ADS)
Li, Jianguo; Cui, Hutao; Tian, Yang
2015-02-01
In an autonomous optical navigation system based on line-of-sight vector observations, the nonlinearity of the measurement model is highly correlated with navigation performance. By quantitatively calculating the degree of nonlinearity of the focal plane model and the unit vector model, this paper focuses on determining which optical measurement model performs better. Firstly, measurement equations and measurement noise statistics of these two line-of-sight measurement models are established based on the perspective projection co-linearity equation. Then the nonlinear effects of the measurement model on filter performance are analyzed within the framework of the extended Kalman filter, and the degrees of nonlinearity of the two measurement models are compared using curvature measures from differential geometry. Finally, a simulation of star-tracker-based attitude determination is presented to confirm the superiority of the unit vector measurement model. Simulation results show that the magnitude of the curvature nonlinearity measure is consistent with filter performance, and that the unit vector measurement model yields higher estimation precision and faster convergence.
[Evaluation of using statistical methods in selected national medical journals].
Sych, Z
1996-01-01
This paper evaluates the frequency with which statistical methods were applied in articles published in six selected national medical journals in the years 1988-1992. The following journals were chosen for analysis: Klinika Oczna, Medycyna Pracy, Pediatria Polska, Polski Tygodnik Lekarski, Roczniki Państwowego Zakładu Higieny, and Zdrowie Publiczne. From the respective volumes of Pol. Tyg. Lek., a number of articles corresponding to the average in the remaining journals was randomly selected. The analysis did not include articles, whether national or international, in which no statistical analysis was implemented; also excluded were review papers, case reports, reviews of books, handbooks, and monographs, reports from scientific congresses, and papers on historical topics. The number of articles was determined in each volume. Next, the mode of selecting the sample in each study was analyzed, differentiating two categories: random and purposive selection. Attention was also paid to the presence of a control sample in the individual studies, and to the completeness of the sample characteristics, with three categories: complete, partial, and lacking. An effort was made to present the results of the analysis in tables and figures (Tab. 1, 3). The rate of use of statistical methods was analyzed in the relevant volumes of the six journals for 1988-1992, simultaneously determining the number of articles in which no statistical methods were used, as well as the frequency with which the individual statistical methods were applied. Prominence was given to fundamental methods of descriptive statistics (measures of position, measures of dispersion) and to the most important methods of mathematical statistics, such as parametric tests of significance, analysis of variance (in single and dual classifications), non-parametric tests of significance, correlation, and regression. Articles using multiple correlation, multiple regression, or more complex methods of studying the relationship between two or more variables were grouped with those using correlation and regression, together with other methods, e.g., statistical methods used in epidemiology (coefficients of incidence and morbidity, standardization of coefficients, survival tables), factor analysis by the Jacobi-Hotelling method, taxonomic methods, and others. On the basis of the performed studies, it was established that the frequency of statistical methods used in the six selected national medical journals in 1988-1992 was 61.1-66.0% of the analyzed articles (Tab. 3), generally similar to the frequency reported for English-language medical journals. On the whole, no significant differences were disclosed in the frequency of the applied statistical methods (Tab. 4) or in the frequency of random sampling (Tab. 3) across the respective years 1988-1992.
The most frequently used statistical methods in the analyzed articles for 1988-1992 were measures of position (44.2-55.6%), measures of dispersion (32.5-38.5%), and parametric tests of significance (26.3-33.1% of the analyzed articles) (Tab. 4). To increase the frequency and reliability of the statistical methods used, the teaching of biostatistics should be expanded in medical studies and in postgraduate training for physicians and scientific-didactic staff.
CTS/Comstar communications link characterization experiment
NASA Technical Reports Server (NTRS)
Hodge, D. B.; Taylor, R. C.
1980-01-01
Measurements of angle of arrival and amplitude fluctuations on millimeter wavelength Earth-space communication links are described. Measurement of rainfall attenuation and radiometric temperature statistics and the assessment of the performance of a self-phased array as a receive antenna on an Earth-space link are also included.
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
2004-05-11
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
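One way to picture the "probabilistic representations of load data" the patent describes: fit a parametric distribution to measured background load and read off the percentiles a delay model might use for calibration. The lognormal choice and the numbers below are illustrative assumptions, not the patented method.

```python
import numpy as np
from scipy import stats

# Illustrative background-load samples (e.g., link utilization fractions).
rng = np.random.default_rng(6)
load = rng.lognormal(mean=-1.2, sigma=0.4, size=2000)

# Fit a parametric distribution to the measured load, then read off the
# statistics a network delay model might be calibrated against.
shape, loc, scale = stats.lognorm.fit(load, floc=0)
dist = stats.lognorm(shape, loc, scale)
print("median load:", round(dist.median(), 3))
print("95th percentile load:", round(dist.ppf(0.95), 3))
```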
Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances
NASA Astrophysics Data System (ADS)
Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng
2016-04-01
Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.
Experimental Test of Heisenberg's Measurement Uncertainty Relation Based on Statistical Distances.
Ma, Wenchao; Ma, Zhihao; Wang, Hengyan; Chen, Zhihua; Liu, Ying; Kong, Fei; Li, Zhaokai; Peng, Xinhua; Shi, Mingjun; Shi, Fazhan; Fei, Shao-Ming; Du, Jiangfeng
2016-04-22
Incompatible observables can be approximated by compatible observables in joint measurement or measured sequentially, with constrained accuracy as implied by Heisenberg's original formulation of the uncertainty principle. Recently, Busch, Lahti, and Werner proposed inaccuracy trade-off relations based on statistical distances between probability distributions of measurement outcomes [P. Busch et al., Phys. Rev. Lett. 111, 160405 (2013); P. Busch et al., Phys. Rev. A 89, 012129 (2014)]. Here we reformulate their theoretical framework, derive an improved relation for qubit measurement, and perform an experimental test on a spin system. The relation reveals that the worst-case inaccuracy is tightly bounded from below by the incompatibility of target observables, and is verified by the experiment employing joint measurement in which two compatible observables designed to approximate two incompatible observables on one qubit are measured simultaneously.
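The statistical distances in question are distances between outcome distributions. A toy sketch: for a two-outcome qubit observable, compare the ideal statistics with those of a compatible approximation using scipy's Wasserstein distance. The probabilities are invented for illustration, and this is only the distance computation, not the trade-off relation itself.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Outcome distributions (on {-1, +1}) for an ideal qubit observable and a
# compatible approximation of it; probabilities are illustrative.
outcomes = np.array([-1.0, 1.0])
p_ideal = np.array([0.15, 0.85])    # target observable's statistics
p_approx = np.array([0.25, 0.75])   # jointly measurable approximation

# A distance between the two outcome distributions quantifies the
# approximation error; the trade-off relations bound how small such
# errors can simultaneously be for two incompatible targets.
d = wasserstein_distance(outcomes, outcomes,
                         u_weights=p_ideal, v_weights=p_approx)
print(f"Wasserstein distance: {d:.2f}")
```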
Probing the Statistical Properties of Unknown Texts: Application to the Voynich Manuscript
Amancio, Diego R.; Altmann, Eduardo G.; Rybski, Diego; Oliveira, Osvaldo N.; Costa, Luciano da F.
2013-01-01
While the use of statistical physics methods to analyze large corpora has been useful to unveil many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., written in an unknown alphabet) is compatible with a natural language and to which language it could belong. The approach is based on three types of statistical measurements, i.e. obtained from first-order statistics of word properties in a text, from the topology of complex networks representing texts, and from intermittency concepts where text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependency of the different measurements on the language and on the story being told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidates for keywords of the Voynich Manuscript which could be helpful in the effort of deciphering it. Because we were able to identify statistical measurements that are more dependent on the syntax than on the semantics, the framework may also serve for text analysis in language-dependent applications. PMID:23844002
Probing the statistical properties of unknown texts: application to the Voynich Manuscript.
Amancio, Diego R; Altmann, Eduardo G; Rybski, Diego; Oliveira, Osvaldo N; Costa, Luciano da F
2013-01-01
While the use of statistical physics methods to analyze large corpora has been useful to unveil many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., written in an unknown alphabet) is compatible with a natural language and to which language it could belong. The approach is based on three types of statistical measurements, i.e. obtained from first-order statistics of word properties in a text, from the topology of complex networks representing texts, and from intermittency concepts where text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependency of the different measurements on the language and on the story being told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include assortativity, degree and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidates for keywords of the Voynich Manuscript which could be helpful in the effort of deciphering it. Because we were able to identify statistical measurements that are more dependent on the syntax than on the semantics, the framework may also serve for text analysis in language-dependent applications.
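A toy sketch of the network-based measurements in this framework: build a word-adjacency graph from a short text with networkx and compute word degree and assortativity, two of the metrics the abstract names. The text is a toy input and the paper's actual pipeline is more elaborate.

```python
import networkx as nx

# Word-adjacency network: an edge links each pair of consecutive words.
text = ("in the beginning was the word and the word was with god "
        "and the word was god").split()

g = nx.Graph()
g.add_edges_from(zip(text, text[1:]))

degrees = dict(g.degree())
print("highest-degree words:", sorted(degrees, key=degrees.get, reverse=True)[:3])
print("degree assortativity:", round(nx.degree_assortativity_coefficient(g), 3))
```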
Visualizations of Travel Time Performance Based on Vehicle Reidentification Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Stanley Ernest; Sharifi, Elham; Day, Christopher M.
This paper provides a visual reference of the breadth of arterial performance phenomena based on travel time measures obtained from reidentification technology that has proliferated in the past 5 years. These graphical performance measures take the form of overlay charts and of statistical distributions presented as cumulative frequency diagrams (CFDs). With overlays of vehicle travel times from multiple days, dominant traffic patterns over a 24-h period are reinforced and reveal the traffic behavior induced primarily by the operation of traffic control at signalized intersections. A cumulative distribution function, as defined in the statistical literature, provides a method for comparing traffic patterns from various time frames or locations in a compact visual format that provides intuitive feedback on arterial performance. The CFD may be accumulated hourly, by peak periods, or by time periods specific to signal timing plans that are in effect. Combined, overlay charts and CFDs provide visual tools with which to assess the quality and consistency of traffic movement for various periods throughout the day efficiently, without sacrificing detail, which is a typical byproduct of numeric-based performance measures. These methods are particularly effective for comparing before-and-after median travel times, as well as changes in interquartile range, to assess travel time reliability.
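A minimal sketch of a cumulative frequency diagram for a before/after comparison, with synthetic travel times (matplotlib assumed for plotting):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative travel-time samples (seconds) for the same arterial segment
# before and after a signal-timing change.
rng = np.random.default_rng(7)
before = rng.gamma(shape=9.0, scale=30.0, size=400)
after = rng.gamma(shape=9.0, scale=26.0, size=400)

def cfd(samples):
    """Cumulative frequency diagram: sorted values vs. cumulative proportion."""
    x = np.sort(samples)
    return x, np.arange(1, x.size + 1) / x.size

for label, data in [("before", before), ("after", after)]:
    x, f = cfd(data)
    plt.plot(x, f, label=label)

plt.xlabel("travel time (s)")
plt.ylabel("cumulative frequency")
plt.legend()
plt.show()
```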
Raunig, David L; McShane, Lisa M; Pennello, Gene; Gatsonis, Constantine; Carson, Paul L; Voyvodic, James T; Wahl, Richard L; Kurland, Brenda F; Schwarz, Adam J; Gönen, Mithat; Zahlmann, Gudrun; Kondratovich, Marina V; O'Donnell, Kevin; Petrick, Nicholas; Cole, Patricia E; Garra, Brian; Sullivan, Daniel C
2015-02-01
Technological developments and greater rigor in the quantitative measurement of biological features in medical images have given rise to an increased interest in using quantitative imaging biomarkers to measure changes in these features. Critical to the performance of a quantitative imaging biomarker in preclinical or clinical settings are three primary metrology areas of interest: measurement linearity and bias, repeatability, and the ability to consistently reproduce equivalent results when conditions change, as would be expected in any clinical trial. Unfortunately, performance studies to date differ greatly in design, analysis methods, and the metrics used to assess a quantitative imaging biomarker for clinical use. It is therefore difficult or impossible to integrate results from different studies or to use reported results to design studies. The Radiological Society of North America and the Quantitative Imaging Biomarker Alliance, with technical, radiological, and statistical experts, developed a set of technical performance analysis methods, metrics, and study designs that provide terminology, metrics, and methods consistent with widely accepted metrological standards. This document provides a consistent framework for the conduct and evaluation of quantitative imaging biomarker performance studies so that results from multiple studies can be compared, contrasted, or combined. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
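Among the repeatability metrics such frameworks standardize, the repeatability coefficient is easy to illustrate: RC = 1.96·√2·wSD ≈ 2.77·wSD, where wSD is the within-subject standard deviation from paired test-retest measurements. The sketch below uses synthetic data; it is an illustration of the metric, not the document's study designs.

```python
import numpy as np

def repeatability_metrics(test, retest):
    """Within-subject SD, repeatability coefficient (RC ~= 2.77 * wSD), and
    within-subject CV from paired test-retest biomarker measurements."""
    test, retest = np.asarray(test, float), np.asarray(retest, float)
    diffs = test - retest
    wsd = np.sqrt(np.mean(diffs**2) / 2.0)       # within-subject SD from pairs
    rc = 1.96 * np.sqrt(2.0) * wsd               # 95% repeatability coefficient
    wcv = wsd / np.mean((test + retest) / 2.0)   # within-subject CV
    return wsd, rc, wcv

# Illustrative test-retest tumor volumes (mL) for ten subjects.
rng = np.random.default_rng(8)
truth = rng.uniform(5, 50, 10)
wsd, rc, wcv = repeatability_metrics(truth + rng.normal(0, 1.5, 10),
                                     truth + rng.normal(0, 1.5, 10))
print(f"wSD={wsd:.2f} mL, RC={rc:.2f} mL, wCV={wcv:.1%}")
```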
Mulvaney, Sean W; Lynch, James H; de Leeuw, Jason; Schroeder, Matthew; Kane, Shawn
2015-05-01
To measure key neurocognitive performance effects following stellate ganglion block (SGB) administered to treat post-traumatic stress disorder (PTSD) symptoms. Eleven patients diagnosed, screened, and scheduled for SGB to treat their PTSD symptoms were administered a panel of eight cognitive measures before and 1 to 3 weeks after undergoing this procedure. PTSD symptoms were evaluated using the Posttraumatic Stress Disorder Checklist-Military. One to three weeks post-SGB, none of the patients showed any statistically significant decline in neurocognitive performance. Rather, there was a clear trend in improvement, with four out of eight measures reaching statistical significance following SGB. All patients improved in PTSD symptoms with a mean improvement on Posttraumatic Stress Disorder Checklist-Military of 29. In this case series of 11 patients, SGB effectively treated PTSD symptoms and did not impair reaction time, memory, or concentration. Therefore, SGB should be considered as a viable treatment option for personnel with PTSD symptoms who will be placed in demanding conditions such as combat. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Waites, Anthony B; Mannfolk, Peter; Shaw, Marnie E; Olsrud, Johan; Jackson, Graeme D
2007-02-01
Clinical functional magnetic resonance imaging (fMRI) occasionally fails to detect significant activation, often due to variability in task performance. The present study seeks to test whether a more flexible statistical analysis can better detect activation, by accounting for variance associated with variable compliance to the task over time. Experimental results and simulated data both confirm that even at 80% compliance to the task, such a flexible model outperforms standard statistical analysis when assessed using the extent of activation (experimental data), goodness of fit (experimental data), and area under the operator characteristic curve (simulated data). Furthermore, retrospective examination of 14 clinical fMRI examinations reveals that in patients where the standard statistical approach yields activation, there is a measurable gain in model performance in adopting the flexible statistical model, with little or no penalty in lost sensitivity. This indicates that a flexible model should be considered, particularly for clinical patients who may have difficulty complying fully with the study task.
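A toy sketch of the idea: augment a standard block-design regression with a regressor that lets task amplitude vary over time, and compare model fit when compliance decays. Everything below (design, signal, noise) is simulated for illustration and is far simpler than a real fMRI analysis.

```python
import numpy as np

# Simulated block design with drifting compliance: the "flexible" design
# adds a time-modulated task regressor, which can recover signal that a
# fixed-amplitude model misses when compliance decays.
rng = np.random.default_rng(11)
n = 200
task = np.tile([1.0] * 10 + [0.0] * 10, n // 20)        # block design
compliance = np.linspace(1.0, 0.6, n)                   # waning compliance
bold = 2.0 * task * compliance + rng.normal(0, 1.0, n)  # simulated signal

def r_squared(design, y):
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    resid = y - design @ beta
    return 1 - resid.var() / y.var()

standard = np.column_stack([np.ones(n), task])
flexible = np.column_stack([np.ones(n), task, task * np.linspace(0, 1, n)])
print(f"standard R^2: {r_squared(standard, bold):.3f}")
print(f"flexible R^2: {r_squared(flexible, bold):.3f}")
```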
A Divergence Statistics Extension to VTK for Performance Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pebay, Philippe Pierre; Bennett, Janine Camille
This report follows the series of previous documents [PT08, BPRT09b, PT09, BPT09, PT10, PB13], where we presented the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, order and auto-correlative statistics engines which we developed within the Visualization Tool Kit (VTK) as a scalable, parallel and versatile statistics package. We now report on a new engine which we developed for the calculation of divergence statistics, a concept which we hereafter explain and whose main goal is to quantify the discrepancy, in a statistical manner akin to measuring a distance, between an observed empirical distribution and a theoretical, "ideal" one. The ease of use of the new divergence statistics engine is illustrated by means of C++ code snippets. Although this new engine does not yet have a parallel implementation, it has already been applied to HPC performance analysis, of which we provide an example.
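The report's engine is implemented in C++; as a language-agnostic illustration of a divergence statistic, the Python sketch below computes the Kullback-Leibler divergence between a binned empirical distribution and a theoretical one. The choice of KL divergence and the binning are assumptions for illustration, not necessarily the engine's own divergence measure.

```python
import numpy as np
from scipy.stats import entropy

# Observed samples vs. a theoretical "ideal" distribution, both binned on
# the same grid (the normal example and bin count are illustrative).
rng = np.random.default_rng(10)
samples = rng.normal(0.1, 1.05, 10000)           # empirical data, slightly off-nominal
edges = np.linspace(-4.0, 4.0, 41)
empirical, _ = np.histogram(samples, bins=edges)
empirical = empirical / empirical.sum()

centers = 0.5 * (edges[:-1] + edges[1:])
theoretical = np.exp(-0.5 * centers**2)          # standard normal, binned
theoretical /= theoretical.sum()

# scipy's entropy(pk, qk) computes sum(pk * log(pk / qk)), i.e. the KL
# divergence quantifying the observed-vs-ideal discrepancy.
print(f"KL divergence: {entropy(empirical, theoretical):.4f}")
```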
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski Functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey with a sky coverage of ∼1400 deg² will constrain the dark energy equation-of-state parameter with an error of Δw_0 ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond the two-point statistics only if well-calibrated measurement of both the redshifts and the shapes of source galaxies is performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_m0 = 0.256 (+0.054, −0.046).
ERIC Educational Resources Information Center
Martin, Tammy Faith
2012-01-01
The purpose of this study was to examine principal leadership styles and their influence on school performance as measured by adequate yearly progress at selected Title I schools in South Carolina. The main focus of the research study was to complete descriptive statistics on principal leadership styles in schools that met or did not meet adequate…
ERIC Educational Resources Information Center
Eugene, Michael; Carlson, Robert; Hrowal, Heidi; Fahey, John; Ronnei, Jean; Young, Steve; Gomez, Joseph; Thomas, Michael
2007-01-01
This report describes 50 initial statistical indicators developed by the Council of the Great City Schools and its member districts to measure big-city school performance on a range of operational and business functions, and presents data city-by-city on those indicators. The analysis marks the first time that such indicators have been developed…
Multiple performance measures are needed to evaluate triage systems in the emergency department.
Zachariasse, Joany M; Nieboer, Daan; Oostenbrink, Rianne; Moll, Henriëtte A; Steyerberg, Ewout W
2018-02-01
Emergency department triage systems can be considered prediction rules with an ordinal outcome, where different directions of misclassification have different clinical consequences. We evaluated strategies to compare the performance of triage systems and aimed to propose a set of performance measures that should be used in future studies. We identified performance measures based on literature review and expert knowledge. Their properties are illustrated in a case study evaluating two triage modifications in a cohort of 14,485 pediatric emergency department visits. Strengths and weaknesses of the performance measures were systematically appraised. Commonly reported performance measures are measures of statistical association (34/60 studies) and diagnostic accuracy (17/60 studies). The case study illustrates that none of the performance measures fulfills all criteria for triage evaluation. Decision curves are the performance measures with the most attractive features but require dichotomization. In addition, paired diagnostic accuracy measures can be recommended for dichotomized analysis, and the triage-weighted kappa and Nagelkerke's R² for ordinal analyses. Other performance measures provide limited additional information. When comparing modifications of triage systems, decision curves and diagnostic accuracy measures should be used in a dichotomized analysis, and the triage-weighted kappa and Nagelkerke's R² in an ordinal approach. Copyright © 2017 Elsevier Inc. All rights reserved.
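Of the recommended ordinal measures, a weighted kappa is simple to demonstrate. The sketch below uses scikit-learn's quadratic-weighted kappa on synthetic 5-level triage data; the paper's triage-weighted kappa uses clinically chosen weights, so quadratic weighting is only a stand-in.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Illustrative 5-level triage assignments (1 = most urgent) for the same
# visits under a reference standard and a modified triage system.
rng = np.random.default_rng(9)
reference = rng.integers(1, 6, 500)
noise = rng.integers(-1, 2, 500)                 # occasional one-level shifts
modified = np.clip(reference + noise, 1, 5)

# Quadratic weights penalize large misclassifications more than small ones.
print(round(cohen_kappa_score(reference, modified, weights="quadratic"), 3))
```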
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the...
Cluster Detection Tests in Spatial Epidemiology: A Global Indicator for Performance Assessment
Guttmann, Aline; Li, Xinran; Feschet, Fabien; Gaudart, Jean; Demongeot, Jacques; Boire, Jean-Yves; Ouchchane, Lemlih
2015-01-01
In cluster detection of disease, the use of local cluster detection tests (CDTs) is common practice. These methods aim both at locating likely clusters and testing for their statistical significance. New or improved CDTs are regularly proposed to epidemiologists and must be subjected to performance assessment. Because location accuracy has to be considered, performance assessment goes beyond the raw estimation of type I or II errors. As no consensus exists for performance evaluations, heterogeneous methods are used, and therefore studies are rarely comparable. A global indicator of performance, which assesses both spatial accuracy and usual power, would facilitate the exploration of CDTs' behaviour and help between-study comparisons. The Tanimoto coefficient (TC) is a well-known measure of similarity that can assess location accuracy, but only for one detected cluster. In a simulation study, performance is measured over many tests. From the TC, we here propose two statistics, the averaged TC and the cumulated TC, as indicators able to provide a global overview of CDT performance for both usual power and location accuracy. We demonstrate the properties of these two indicators and the superiority of the cumulated TC in assessing performance. We tested these indicators to conduct a systematic spatial assessment displayed through performance maps. PMID:26086911
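For readers who want to experiment with the indicator underlying this proposal: the Tanimoto coefficient reduces to the ratio of the intersection to the union of the true and detected cluster zones. A minimal sketch follows; the set representation of clusters and the rule that non-rejecting runs contribute TC = 0 are our own illustrative assumptions, not the authors' code.

```python
# Tanimoto coefficient between a true cluster and a detected cluster,
# each represented as a set of spatial-unit identifiers.
def tanimoto(true_cluster: set, detected_cluster: set) -> float:
    union = true_cluster | detected_cluster
    if not union:
        return 1.0  # both empty: degenerate perfect agreement
    return len(true_cluster & detected_cluster) / len(union)

# Averaged TC over many simulated datasets: runs where the test did not
# reject contribute an empty detection (TC = 0), so the average blends
# location accuracy with usual power.
def averaged_tc(true_cluster, detections):
    return sum(tanimoto(true_cluster, d) for d in detections) / len(detections)

print(averaged_tc({1, 2, 3}, [{1, 2}, {2, 3, 4}, set()]))  # (0.5 + 0.5 + 0) / 3
```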
Ion Channel Conductance Measurements on a Silicon-Based Platform
2006-01-01
…calculated using the molecular dynamics code GROMACS. Reasonable agreement is obtained in the simulated versus measured conductance over the range of… Measurements of the lipid giga-seal characteristics have been performed, including AC conductance measurements and statistical analysis in order to… Dynamics kernel self-consistently coupled to Poisson equations using a P3M force-field scheme and the GROMACS description of protein structure and…
Haładaj, Robert; Pingot, Mariusz; Polguj, Michał; Wysiadecki, Grzegorz; Topol, Mirosław
2015-01-01
Background: The aim of this study was to determine relationships between the piriformis muscle (PM) and the sciatic nerve (SN) with reference to sex and anatomical variations. Material/Methods: Deep dissection of the gluteal region was performed on 30 randomized, formalin-fixed human lower limbs of adults of both sexes of the Polish population. Anthropometric measurements were taken and then statistically analyzed. Results: The research revealed that, apart from the typical structure of the piriformis muscle, the most common variation was division of the piriformis muscle into two heads, with the common peroneal nerve running between them (20%). The group with anatomical variations of the sciatic nerve course displayed greater diversity of morphometric measurement results. There was a statistically significant correlation between lower limb length and the distance from the sciatic nerve to the greater trochanter in the male specimens. In the female specimens, on the other hand, a statistically significant correlation was observed between lower limb length and the distance from the sciatic nerve to the ischial tuberosity. The shortest distance from the sciatic nerve to the greater trochanter, measured at the level of the inferior edge of the piriformis, was 21 mm, while the shortest distance to the ischial tuberosity was 63 mm. Such correlations should be taken into account during invasive medical procedures performed in the gluteal region. Conclusions: It is possible to distinguish several anatomical variations of the sciatic nerve course within the deep gluteal region. The statistically significant correlations between some anthropometric measurements were only present within particular groups of male and female limbs. PMID:26629744
Measuring X-Ray Polarization in the Presence of Systematic Effects: Known Background
NASA Technical Reports Server (NTRS)
Elsner, Ronald F.; O'Dell, Stephen L.; Weisskopf, Martin C.
2012-01-01
The prospects for accomplishing x-ray polarization measurements of astronomical sources have grown in recent years, after a hiatus of more than 37 years. Unfortunately, accompanying this long hiatus has been some confusion over the statistical uncertainties associated with x-ray polarization measurements of these sources. We have initiated a program to perform the detailed calculations that will offer insights into the uncertainties associated with x-ray polarization measurements. Here we describe a mathematical formalism for determining the 1- and 2-parameter errors in the magnitude and position angle of x-ray (linear) polarization in the presence of a (polarized or unpolarized) background. We further review relevant statistics including clearly distinguishing between the Minimum Detectable Polarization (MDP) and the accuracy of a polarization measurement.
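For context on the distinction the abstract draws, the Minimum Detectable Polarization is conventionally written in the x-ray polarimetry literature at 99% confidence as MDP₉₉ = 4.29/(μ·R_S) · √((R_S + R_B)/T), with μ the modulation factor, R_S and R_B the source and background count rates, and T the exposure time. The sketch below encodes that textbook formula; the numbers in the example call are invented, and this is not the authors' formalism for measurement accuracy.

```python
import math

def mdp99(mu: float, rate_src: float, rate_bkg: float, t_exposure: float) -> float:
    """99%-confidence Minimum Detectable Polarization for a modulation-based
    polarimeter (conventional formula; a sketch, not the paper's derivation)."""
    return (4.29 / (mu * rate_src)) * math.sqrt((rate_src + rate_bkg) / t_exposure)

# Example: modulation factor 0.5, 1 ct/s source, 0.1 ct/s background, 100 ks.
print(f"MDP99 = {mdp99(0.5, 1.0, 0.1, 1e5):.4f}")  # ~0.028, i.e. ~2.8% polarization
```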
An automated system for chromosome analysis. Volume 1: Goals, system design, and performance
NASA Technical Reports Server (NTRS)
Castleman, K. R.; Melnyk, J. H.
1975-01-01
The design, construction, and testing of a complete system to produce karyotypes and chromosome measurement data from human blood samples are described, along with a basis for statistical analysis of the quantitative measurement data. The prototype was assembled, tested, and evaluated on clinical material and thoroughly documented.
A Measurement of Alienation in College Student Marihuana Users and Non-Users.
ERIC Educational Resources Information Center
Harris, Eileen M.
A three part questionnaire was administered to 1380 Southern Illinois University students to: (1) elicit demographic data; (2) determine the extent of experience with marihuana; and (3) measure alienation utilizing Dean's scale. In addition, the Minnesota Multiphasic Personality Lie Inventory was given. Statistical analyses were performed to…
California: The State of Our Children 1993. Data Supplement.
ERIC Educational Resources Information Center
Children Now, Oakland, CA.
This report informs the public about the welfare of California's children. Measuring the effectiveness of the efforts of all Californians, not just those of government agencies or other organizations responsible for children, the report measures California's performance on 27 benchmarks that, taken together, provide a statistical portrait of the…
The Asymmetry Parameter and Branching Ratio of Sigma Plus Radiative Decay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foucher, Maurice Emile
1992-05-01
We have measured the asymmetry parameter and branching ratio of the Σ⁺ radiative decay. This high-statistics experiment (FNAL 761) was performed in the Proton Center charged hyperon beam at Fermi National Accelerator Laboratory in Batavia, Illinois. We find for the asymmetry parameter −0.720 ± 0.086 ± 0.045, where the first error is statistical and the second is systematic. This result is based on a sample of 34754 ± 212 events. We find a preliminary value for the branching ratio Br(Σ⁺ → pγ)/Br(Σ⁺ → pπ⁰) = (2.14 ± 0.07 ± 0.11) × 10⁻³, where the first error is statistical and the second is systematic. This result is based on a sample of 31040 ± 650 events. Both results are in agreement with previous low-statistics measurements.
NASA Technical Reports Server (NTRS)
Butler, C. M.; Hogge, J. E.
1978-01-01
Air quality sampling was conducted. Data for air quality parameters, recorded on written forms, punched cards, or magnetic tape, are available for 1972 through 1975. Computer software was developed to (1) calculate several daily statistical measures of location, (2) plot time histories of the data or the calculated daily statistics, (3) calculate simple correlation coefficients, and (4) plot scatter diagrams. Computer software was also developed for processing air quality data, including time series analysis and goodness-of-fit tests, to (1) calculate a larger number of daily statistical measures of location and a number of daily, monthly, and yearly measures of location, dispersion, skewness, and kurtosis, (2) decompose the extended time series model, and (3) perform some goodness-of-fit tests. The computer program is described, documented, and illustrated by examples. Recommendations are made for continued development of research on processing air quality data.
[The metrology of uncertainty: a study of vital statistics from Chile and Brazil].
Carvajal, Yuri; Kottow, Miguel
2012-11-01
This paper addresses the issue of uncertainty in the measurements used in public health analysis and decision-making. The Shannon-Wiener entropy measure was adapted to express the uncertainty contained in counting causes of death in official vital statistics from Chile. Based on the findings, the authors conclude that metrological requirements in public health are as important as the measurements themselves. The study also considers and argues for the existence of uncertainty associated with the statistics' performative properties, both by the way the data are structured as a sort of syntax of reality and by exclusion of what remains beyond the quantitative modeling used in each case. Following the legacy of pragmatic thinking and using conceptual tools from the sociology of translation, the authors emphasize that by taking uncertainty into account, public health can contribute to a discussion on the relationship between technology, democracy, and formation of a participatory public.
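To make the entropy adaptation concrete: applied to vital statistics, the Shannon-Wiener measure treats the proportions of deaths assigned to each cause as a probability distribution and summarizes the uncertainty of the classification in a single number. A minimal illustrative sketch, with invented cause-of-death counts:

```python
import math

def shannon_entropy(counts):
    """Shannon-Wiener entropy (in bits) of a vector of category counts,
    e.g. deaths tallied by registered cause."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical cause-of-death tallies; a flatter distribution means more
# uncertainty in how deaths are being classified.
print(shannon_entropy([500, 300, 150, 50]))   # concentrated -> lower entropy
print(shannon_entropy([250, 250, 250, 250]))  # uniform -> maximal (2 bits)
```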
Smith, Laura M; Anderson, Wayne L; Lines, Lisa M; Pronier, Cristalle; Thornburg, Vanessa; Butler, Janelle P; Teichman, Lori; Dean-Whittaker, Debra; Goldstein, Elizabeth
2017-01-01
We examined the effects of provider characteristics on home health agency performance on patient experience of care (Home Health CAHPS) and process (OASIS) measures. Descriptive, multivariate, and factor analyses were used. While agencies score high on both domains, factor analyses showed that the underlying items represent separate constructs. Freestanding and Visiting Nurse Association agencies, higher number of home health aides per 100 episodes, and urban location were statistically significant predictors of lower performance. Lack of variation in composite measures potentially led to counterintuitive results for effects of organizational characteristics. This exploratory study showed the value of having separate quality domains.
Molshatzki, Noa; Drory, Yaacov; Myers, Vicki; Goldbourt, Uri; Benyamini, Yael; Steinberg, David M; Gerber, Yariv
2011-07-01
The relationship of risk factors to outcomes has traditionally been assessed by measures of association such as the odds ratio or hazard ratio and their statistical significance from an adjusted model. However, a strong, highly significant association does not guarantee a gain in stratification capacity. Using recently developed model performance indices, we evaluated the incremental discriminatory power of individual and neighborhood socioeconomic status (SES) measures after myocardial infarction (MI). Consecutive patients aged ≤65 years (N=1178) discharged from 8 hospitals in central Israel after incident MI in 1992 to 1993 were followed up through 2005. A basic model (demographic variables, traditional cardiovascular risk factors, and disease severity indicators) was compared with an extended model including SES measures (education, income, employment, living with a steady partner, and neighborhood SES) in terms of the Harrell c statistic, integrated discrimination improvement (IDI), and net reclassification improvement (NRI). During the 13-year follow-up, 326 (28%) patients died. Cox proportional hazards models showed that all SES measures were significantly and independently associated with mortality. Furthermore, compared with the basic model, the extended model yielded substantial gains (all P<0.001) in c statistic (0.723 to 0.757), NRI (15.2%), IDI (5.9%), and relative IDI (32%). Improvement was observed both for sensitivity (classification of events) and specificity (classification of nonevents). This study illustrates the additional insights that can be gained from considering the IDI and NRI measures of model performance and suggests that, among community patients with incident MI, incorporating SES measures into a clinical-based model substantially improves long-term mortality risk prediction.
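As a reference for the two reclassification indices named above: the IDI is the gain in the gap between mean predicted risk in events and nonevents, and the NRI counts correct directional moves in predicted risk. The sketch below uses the standard definitions with the category-free (continuous) NRI variant for brevity; the study itself reports a categorical NRI, and all variable names and data here are ours.

```python
import numpy as np

def idi(p_old, p_new, y):
    """Integrated discrimination improvement for a binary outcome y (0/1)."""
    p_old, p_new, y = map(np.asarray, (p_old, p_new, y))
    slope_new = p_new[y == 1].mean() - p_new[y == 0].mean()
    slope_old = p_old[y == 1].mean() - p_old[y == 0].mean()
    return slope_new - slope_old

def nri_continuous(p_old, p_new, y):
    """Category-free NRI: any upward/downward move in predicted risk counts."""
    p_old, p_new, y = map(np.asarray, (p_old, p_new, y))
    up, down = p_new > p_old, p_new < p_old
    events, nonevents = y == 1, y == 0
    return ((up[events].mean() - down[events].mean())
            + (down[nonevents].mean() - up[nonevents].mean()))
```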
A Total Quality-Control Plan with Right-Sized Statistical Quality-Control.
Westgard, James O
2017-03-01
A new Clinical Laboratory Improvement Amendments option for risk-based quality-control (QC) plans became effective in January 2016. Called an Individualized QC Plan, this option requires the laboratory to perform a risk assessment, develop a QC plan, and implement a QC program to monitor ongoing performance of the QC plan. Difficulties in performing a risk assessment may limit the validity of an Individualized QC Plan. A better alternative is to develop a Total QC Plan including a right-sized statistical QC procedure to detect medically important errors. Westgard Sigma Rules provide a simple way to select the right control rules and the right number of control measurements.
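For readers unfamiliar with "right-sizing" QC by sigma metrics: the Westgard approach grades an assay by Sigma = (TEa − |bias|) / CV, with all terms in percent, and assigns simpler rules and fewer control measurements as sigma rises. A schematic sketch of that selection logic; the rule cutoffs follow the commonly tabulated Westgard Sigma Rules as we understand them and should be treated as an assumption, not a transcription of this article.

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma quality of an assay: allowable total error minus bias, in CVs."""
    return (tea_pct - abs(bias_pct)) / cv_pct

def suggested_qc(sigma: float) -> str:
    # Cutoffs as commonly tabulated for Westgard Sigma Rules (assumption).
    if sigma >= 6:
        return "1:3s rule, N=2 controls"
    if sigma >= 5:
        return "1:3s/2:2s/R:4s, N=2"
    if sigma >= 4:
        return "1:3s/2:2s/R:4s/4:1s, N=4"
    return "full multirule with N=6+ and run-length limits"

s = sigma_metric(tea_pct=10.0, bias_pct=1.5, cv_pct=1.7)
print(round(s, 2), "->", suggested_qc(s))  # 5.0 -> 1:3s/2:2s/R:4s, N=2
```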
ERIC Educational Resources Information Center
Zayat, Maya; Kalb, Luther; Wodka, Ericka L.
2011-01-01
Performance patterns on verbal subtests from the WISC-IV were compared between a clinically-referred sample of children with either autism spectrum disorders (ASD) or attention deficit/hyperactivity disorder (ADHD). Children with ASD demonstrated a statistically significant stepwise pattern where performance on Similarities was best, followed by…
ERIC Educational Resources Information Center
Airola, Denise Tobin
2011-01-01
Changes to state tests impact the ability of State Education Agencies (SEAs) to monitor change in performance over time. The purpose of this study was to evaluate the Standardized Performance Growth Index (PGIz), a proposed statistical model for measuring change in student and school performance, across transitions in tests. The PGIz is a…
Validation of the PVSyst Performance Model for the Concentrix CPV Technology
NASA Astrophysics Data System (ADS)
Gerstmaier, Tobias; Gomez, María; Gombert, Andreas; Mermoud, André; Lejeune, Thibault
2011-12-01
The accuracy of the two-stage PVSyst model for the Concentrix CPV Technology is determined by comparing modeled to measured values. For both stages, i) the module model and ii) the power plant model, the underlying approaches are explained and methods for obtaining the model parameters are presented. The performance of both models is quantified using 19 months of outdoor measurements for the module model and 9 months of measurements at four different sites for the power plant model. Results are presented by giving statistical quantities for the model accuracy.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowder, Stephen V.
This document outlines a statistical framework for establishing a shelf-life program for components whose performance is measured by the value of a continuous variable such as voltage or function time. The approach applies to both single measurement devices and repeated measurement devices, although additional process control charts may be useful in the case of repeated measurements. The approach is to choose a sample size that protects the margin associated with a particular variable over the life of the component. Deviations from expected performance of the measured variable are detected prior to the complete loss of margin. This ensures the reliability of the component over its lifetime.
Statistical methods for convergence detection of multi-objective evolutionary algorithms.
Trautmann, H; Wagner, T; Naujoks, B; Preuss, M; Mehnen, J
2009-01-01
In this paper, two approaches for estimating the generation in which a multi-objective evolutionary algorithm (MOEA) shows statistically significant signs of convergence are introduced. A set-based perspective is taken where convergence is measured by performance indicators. The proposed techniques fulfill the requirements of proper statistical assessment on the one hand and efficient optimisation for real-world problems on the other. The first approach accounts for the stochastic nature of the MOEA by repeating the optimisation runs for increasing generation numbers and analysing the performance indicators using statistical tools. This technique results in a very robust offline procedure. Moreover, an online convergence detection method is introduced as well. This method automatically stops the MOEA when either the variance of the performance indicators falls below a specified threshold or a stagnation of their overall trend is detected. Both methods are analysed and compared for two MOEAs and on different classes of benchmark functions. It is shown that the methods successfully operate on all stated problems, requiring fewer function evaluations while preserving good approximation quality.
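To illustrate the online variant in code: after each generation, the indicator value (e.g., hypervolume) is appended to a window, and the run stops once the windowed variance drops below a threshold or a fitted trend shows stagnation. This is a minimal sketch under our own simplifying assumptions (a single indicator, a fixed window, a least-squares slope as the trend test); it is not the authors' implementation.

```python
import statistics

def should_stop(history, window=20, var_tol=1e-6, slope_tol=1e-5):
    """Online convergence detection on a performance-indicator series."""
    if len(history) < window:
        return False
    recent = history[-window:]
    if statistics.variance(recent) < var_tol:          # indicator has settled
        return True
    xs = range(window)
    mean_x, mean_y = sum(xs) / window, sum(recent) / window
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, recent))
             / sum((x - mean_x) ** 2 for x in xs))
    return abs(slope) < slope_tol                      # overall trend stagnates

# Inside an MOEA loop one might write:
#   hv_history.append(hypervolume(population))
#   if should_stop(hv_history): break
```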
Nonparametric predictive inference for combining diagnostic tests with parametric copula
NASA Astrophysics Data System (ADS)
Muhammad, Noryanti; Coolen, F. P. A.; Coolen-Maturi, T.
2017-09-01
Measuring the accuracy of diagnostic tests is crucial in many application areas, including medicine and health care. The Receiver Operating Characteristic (ROC) curve is a popular statistical tool for describing the performance of diagnostic tests, and the area under the ROC curve (AUC) is often used as a measure of the overall performance of the diagnostic test. In this paper, we are interested in developing strategies for combining test results in order to increase diagnostic accuracy. We introduce nonparametric predictive inference (NPI) for combining two diagnostic test results while modelling the dependence structure with a parametric copula. NPI is a frequentist statistical framework for inference on a future observation based on past data observations; it uses lower and upper probabilities to quantify uncertainty and is based on only a few modelling assumptions. A copula is a joint distribution function whose marginals are all uniformly distributed, and it can be used to model the dependence separately from the marginal distributions. In this research, we estimate the copula density using a parametric method, namely the maximum likelihood estimator (MLE). We investigate the performance of the proposed method via data sets from the literature and discuss the results to show how our method performs for different families of copulas. Finally, we briefly outline related challenges and opportunities for future research.
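Since the AUC is the headline performance measure here, it is worth recalling that the empirical AUC equals the Mann-Whitney probability that a randomly chosen diseased case scores above a randomly chosen healthy one. A small self-contained sketch with invented scores (this is the standard empirical estimator, not the NPI lower/upper-probability machinery of the paper):

```python
def empirical_auc(healthy, diseased):
    """P(diseased score > healthy score), counting ties as half."""
    pairs = len(healthy) * len(diseased)
    wins = sum((d > h) + 0.5 * (d == h) for d in diseased for h in healthy)
    return wins / pairs

print(empirical_auc([0.10, 0.40, 0.35], [0.80, 0.70, 0.45]))  # 1.0: perfect separation
```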
Generalization of Entropy Based Divergence Measures for Symbolic Sequence Analysis
Ré, Miguel A.; Azad, Rajeev K.
2014-01-01
Entropy based measures have been frequently used in symbolic sequence analysis. A symmetrized and smoothed form of Kullback-Leibler divergence or relative entropy, the Jensen-Shannon divergence (JSD), is of particular interest because of its sharing properties with families of other divergence measures and its interpretability in different domains including statistical physics, information theory and mathematical statistics. The uniqueness and versatility of this measure arise because of a number of attributes including generalization to any number of probability distributions and association of weights to the distributions. Furthermore, its entropic formulation allows its generalization in different statistical frameworks, such as, non-extensive Tsallis statistics and higher order Markovian statistics. We revisit these generalizations and propose a new generalization of JSD in the integrated Tsallis and Markovian statistical framework. We show that this generalization can be interpreted in terms of mutual information. We also investigate the performance of different JSD generalizations in deconstructing chimeric DNA sequences assembled from bacterial genomes including that of E. coli, S. enterica typhi, Y. pestis and H. influenzae. Our results show that the JSD generalizations bring in more pronounced improvements when the sequences being compared are from phylogenetically proximal organisms, which are often difficult to distinguish because of their compositional similarity. While small but noticeable improvements were observed with the Tsallis statistical JSD generalization, relatively large improvements were observed with the Markovian generalization. In contrast, the proposed Tsallis-Markovian generalization yielded more pronounced improvements relative to the Tsallis and Markovian generalizations, specifically when the sequences being compared arose from phylogenetically proximal organisms. PMID:24728338
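For concreteness, the weighted Jensen-Shannon divergence between distributions P and Q with weights π₁ and π₂ is JSD = H(π₁P + π₂Q) − π₁H(P) − π₂H(Q), where H is the Shannon entropy. The sketch below implements only this base form, not the Tsallis or Markovian generalizations the abstract proposes; the example distributions are invented.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def jsd(p, q, w1=0.5, w2=0.5):
    """Weighted Jensen-Shannon divergence between two distributions."""
    mix = [w1 * a + w2 * b for a, b in zip(p, q)]
    return entropy(mix) - w1 * entropy(p) - w2 * entropy(q)

# Symbol (e.g., nucleotide) frequencies of two sequence segments:
print(jsd([0.25, 0.25, 0.25, 0.25], [0.4, 0.3, 0.2, 0.1]))  # ~0.04 bits
```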
The most dangerous hospital or the most dangerous equation?
Tu, Yu-Kang; Gilthorpe, Mark S
2007-11-15
Hospital mortality rates are one of the most frequently selected indicators for measuring the performance of NHS Trusts. A recent article in a national newspaper named the hospital with the highest or lowest mortality in the 2005/6 financial year; a report by the organization Dr Foster Intelligence provided information on the performance of all NHS Trusts in England. Basic statistical theory and computer simulations were used to explore the relationship between the variation in the performance of NHS Trusts and the sizes of the Trusts. Data on the hospital standardised mortality ratio (HSMR) of 152 English NHS Trusts for 2005/6 were re-analysed. A close examination of the information reveals a pattern consistent with a statistical phenomenon discovered by the French mathematician de Moivre nearly 300 years ago and described in every introductory statistics textbook: variation in performance indicators is expected to be greater in small Trusts and smaller in large Trusts. From a statistical viewpoint, the variability in the number of deaths in a hospital is not proportional to the size of the hospital but to the square root of its size. Therefore, it is not surprising that small hospitals are more likely to appear at both the top and the bottom of league tables, even when mortality rates are independent of hospital size. This statistical phenomenon needs to be taken into account when comparing the performance of hospital Trusts, especially with regard to policy decisions.
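De Moivre's equation is SE = σ/√n: the standard error of an observed rate shrinks with the square root of the sample size, which is why small Trusts dominate both tails of a league table even when every hospital shares the same underlying risk. A quick simulation sketch with invented risk and sizes:

```python
import random

random.seed(1)
p = 0.05  # identical true mortality risk in every simulated hospital
for n in (100, 1000, 10000):  # admissions per hospital
    rates = [sum(random.random() < p for _ in range(n)) / n for _ in range(2000)]
    mean = sum(rates) / len(rates)
    sd = (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5
    print(n, round(sd, 4))  # SD of the observed rate falls roughly as 1/sqrt(n)
```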
Zaki, Rafdzah; Bulgiba, Awang; Nordin, Noorhaire; Azina Ismail, Noor
2013-06-01
Reliability measures precision, or the extent to which test results can be replicated. This is the first systematic review to identify the statistical methods used to measure the reliability of equipment measuring continuous variables. This study also aims to highlight inappropriate statistical methods used in reliability analyses and their implications for medical practice. In 2010, five electronic databases were searched for reliability studies published between 2007 and 2009. A total of 5,795 titles were initially identified. Only 282 titles were potentially related, and 42 ultimately fitted the inclusion criteria. The Intra-class Correlation Coefficient (ICC) was the most popular method, used in 25 (60%) studies, followed by the comparison of means (8, or 19%). Of the 25 studies using the ICC, only 7 (28%) reported the confidence intervals and the type of ICC used. Most studies (71%) also tested the agreement of instruments. This review finds that the Intra-class Correlation Coefficient is the most popular method used to assess the reliability of medical instruments measuring continuous outcomes. There are also inappropriate applications and interpretations of statistical methods in some studies. It is important for medical researchers to be aware of this issue and to be able to correctly perform analyses in reliability studies.
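As a concrete reference for the method the review found most popular: a one-way random-effects ICC(1,1) can be computed from the between-subject and within-subject mean squares of repeated measurements. A minimal sketch using the standard ANOVA estimator; the data are invented, and the choice of ICC form in a real study should match the design (the review's point about reporting the ICC type).

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects ICC(1,1); x is subjects x repeated trials."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)                # between subjects
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))   # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Two trials of the same instrument on five subjects (hypothetical):
print(round(icc_1_1([[10, 11], [20, 19], [15, 16], [30, 31], [25, 24]]), 3))  # ~0.992
```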
PET image reconstruction: a robust state space approach.
Liu, Huafeng; Tian, Yi; Shi, Pengcheng
2005-01-01
Statistical iterative reconstruction algorithms have shown improved image quality over conventional nonstatistical methods in PET by using accurate system response models and measurement noise models. Strictly speaking, however, PET measurements, pre-corrected for accidental coincidences, are neither Poisson nor Gaussian distributed and thus do not meet the basic assumptions of these algorithms. In addition, the difficulty of determining the proper system response model also greatly affects the quality of the reconstructed images. In this paper, we explore the use of state space principles for the estimation of the activity map in tomographic PET imaging. The proposed strategy formulates the organ activity distribution through tracer kinetics models and the photon-counting measurements through observation equations, thus making it possible to unify the dynamic and static reconstruction problems in a general framework. Further, it coherently treats the uncertainties of the statistical model of the imaging system and the noisy nature of the measurement data. Since the H(infinity) filter seeks minimum maximum-error (minimax) estimates without any assumptions on the system and data noise statistics, it is particularly suited for PET image reconstruction, where the statistical properties of the measurement data and the system model are very complicated. The performance of the proposed framework is evaluated using Shepp-Logan simulated phantom data and real phantom data, with favorable results.
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
Angular Baryon Acoustic Oscillation measure at z=2.225 from the SDSS quasar survey
NASA Astrophysics Data System (ADS)
de Carvalho, E.; Bernui, A.; Carvalho, G. C.; Novaes, C. P.; Xavier, H. S.
2018-04-01
Following a quasi model-independent approach we measure the transversal BAO mode at high redshift using the two-point angular correlation function (2PACF). The analyses done here are only possible now with the quasar catalogue from the twelfth data release (DR12Q) from the Sloan Digital Sky Survey, because it is spatially dense enough to allow the measurement of the angular BAO signature with moderate statistical significance and acceptable precision. Our analyses with quasars in the redshift interval z in [2.20,2.25] produce the angular BAO scale θBAO = 1.77° ± 0.31° with a statistical significance of 2.12 σ (i.e., 97% confidence level), calculated through a likelihood analysis performed using the theoretical covariance matrix sourced by the analytical power spectra expected in the ΛCDM concordance model. Additionally, we show that the BAO signal is robust—although with less statistical significance—under diverse bin-size choices and under small displacements of the quasars' angular coordinates. Finally, we also performed cosmological parameter analyses comparing the θBAO predictions for wCDM and w(a)CDM models with angular BAO data available in the literature, including the measurement obtained here, jointly with CMB data. The constraints on the parameters ΩM, w0 and wa are in excellent agreement with the ΛCDM concordance model.
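For orientation, the two-point angular correlation function used in such analyses is typically estimated from pair counts between the data catalogue and a random catalogue; the Landy-Szalay estimator w(θ) = (DD − 2DR + RR)/RR is the common choice. Whether this exact estimator was used here is our assumption. A schematic sketch with normalized pair counts:

```python
def landy_szalay(dd, dr, rr):
    """Angular correlation function from normalized pair counts in one theta bin."""
    return (dd - 2.0 * dr + rr) / rr

# Normalized pair counts for a single angular bin (hypothetical numbers):
print(landy_szalay(dd=1.05e-3, dr=1.00e-3, rr=1.00e-3))  # w(theta) = 0.05
```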
Statistical methodologies for the control of dynamic remapping
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Nicol, D. M.
1986-01-01
Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually; the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
Tolerancing aspheres based on manufacturing statistics
NASA Astrophysics Data System (ADS)
Wickenhagen, S.; Möhl, A.; Fuchs, U.
2017-11-01
A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions are assumed, these analyses all rely on statistics, which usually means several hundreds or thousands of systems for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The extensive database of asphericon was used to investigate the correlation between the given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed, aiming for a robust optical tolerancing process.
Functional constraints on tooth morphology in carnivorous mammals
2012-01-01
Background The range of potential morphologies resulting from evolution is limited by complex interacting processes, ranging from development to function. Quantifying these interactions is important for understanding adaptation and convergent evolution. Using three-dimensional reconstructions of carnivoran and dasyuromorph tooth rows, we compared statistical models of the relationship between tooth row shape and the opposing tooth row, a static feature, as well as measures of mandibular motion during chewing (occlusion), which are kinetic features. This is a new approach to quantifying functional integration because we use measures of movement and displacement, such as the amount the mandible translates laterally during occlusion, as opposed to conventional morphological measures, such as mandible length and geometric landmarks. By sampling two distantly related groups of ecologically similar mammals, we study carnivorous mammals in general rather than a specific group of mammals. Results Statistical model comparisons demonstrate that the best performing models always include some measure of mandibular motion, indicating that functional and statistical models of tooth shape as purely a function of the opposing tooth row are too simple and that increased model complexity provides a better understanding of tooth form. The predictors of the best performing models always included the opposing tooth row shape and a relative linear measure of mandibular motion. Conclusions Our results provide quantitative support of long-standing hypotheses of tooth row shape as being influenced by mandibular motion in addition to the opposing tooth row. Additionally, this study illustrates the utility and necessity of including kinetic features in analyses of morphological integration. PMID:22899809
High order statistical signatures from source-driven measurements of subcritical fissile systems
NASA Astrophysics Data System (ADS)
Mattingly, John Kelly
1998-11-01
This research focuses on the development and application of high order statistical analyses applied to measurements performed with subcritical fissile systems driven by an introduced neutron source. The signatures presented are derived from counting statistics of the introduced source and radiation detectors that observe the response of the fissile system. It is demonstrated that successively higher order counting statistics possess progressively higher sensitivity to reactivity. Consequently, these signatures are more sensitive to changes in the composition, fissile mass, and configuration of the fissile assembly. Furthermore, it is shown that these techniques are capable of distinguishing the response of the fissile system to the introduced source from its response to any internal or inherent sources. This ability combined with the enhanced sensitivity of higher order signatures indicates that these techniques will be of significant utility in a variety of applications. Potential applications include enhanced radiation signature identification of weapons components for nuclear disarmament and safeguards applications and augmented nondestructive analysis of spent nuclear fuel. In general, these techniques expand present capabilities in the analysis of subcritical measurements.
NASA Astrophysics Data System (ADS)
Szyjka, Sebastian P.
The purpose of this study was to determine the extent to which six cognitive and attitudinal variables predicted pre-service elementary teachers' performance on line graphing. Predictors included Illinois teacher education basic skills sub-component scores in reading comprehension and mathematics, logical thinking performance scores, as well as measures of attitudes toward science, mathematics and graphing. This study also determined the strength of the relationship between each prospective predictor variable and the line graphing performance variable, as well as the extent to which measures of attitude towards science, mathematics and graphing mediated relationships between scores on mathematics, reading, logical thinking and line graphing. Ninety-four pre-service elementary education teachers enrolled in two different elementary science methods courses during the spring 2009 semester at Southern Illinois University Carbondale participated in this study. Each subject completed five different instruments designed to assess science, mathematics and graphing attitudes as well as logical thinking and graphing ability. Sixty subjects provided copies of primary basic skills score reports that listed subset scores for both reading comprehension and mathematics. The remaining scores were supplied by a faculty member who had access to a database from which the scores were drawn. Seven subjects, whose scores could not be found, were eliminated from final data analysis. Confirmatory factor analysis (CFA) was conducted in order to establish validity and reliability of the Questionnaire of Attitude Toward Line Graphs in Science (QALGS) instrument. CFA tested the statistical hypothesis that the five main factor structures within the Questionnaire of Attitude Toward Statistical Graphs (QASG) would be maintained in the revised QALGS. Stepwise Regression Analysis with backward elimination was conducted in order to generate a parsimonious and precise predictive model. This procedure allowed the researcher to explore the relationships among the affective and cognitive variables that were included in the regression analysis. The results for CFA indicated that the revised QALGS measure was sound in its psychometric properties when tested against the QASG. Reliability statistics indicated that the overall reliability for the 32 items in the QALGS was .90. The learning preferences construct had the lowest reliability (.67), while enjoyment (.89), confidence (.86) and usefulness (.77) constructs had moderate to high reliabilities. The first four measurement models fit the data well as indicated by the appropriate descriptive and statistical indices. However, the fifth measurement model did not fit the data well statistically, and only fit well with two descriptive indices. The results addressing the research question indicated that mathematical and logical thinking ability were significant predictors of line graph performance among the remaining group of variables. These predictors accounted for 41% of the total variability on the line graph performance variable. Partial correlation coefficients indicated that mathematics ability accounted for 20.5% of the variance on the line graphing performance variable when removing the effect of logical thinking. The logical thinking variable accounted for 4.7% of the variance on the line graphing performance variable when removing the effect of mathematics ability.
Identifying customer-focused performance measures : final report 655.
DOT National Transportation Integrated Search
2010-10-01
The Arizona Department of Transportation (ADOT) completed a comprehensive customer satisfaction : assessment in July 2009. ADOT commissioned the assessment to acquire statistically valid data from residents : and community leaders to help it identify...
Anthropometric and performance measures to study talent detection in youth volleyball.
Melchiorri, Giovanni; Viero, Valerio; Triossi, Tamara; Annino, Giuseppe; Padua, Elvira; Tancredi, Virginia
2017-12-01
The aim of this work was to study anthropometric and performance measurements in 60 young male volleyball players (YV) and 60 youths not active in the sport (YNA) to assess which of these would be more useful for studying the characteristics of potential performers. Eight measures of anthropometric characteristics, six performance measures, and two tests of joint mobility were used. Relative age and level of maturation were also assessed. The anthropometric variables, relative age, and level of maturation did not show statistically significant differences between groups. The YV and YNA groups did differ on the performance measures: the YV group was characterized by better performance in the ability to repeat short sprints and in the upper limbs, abdominal muscles, and lower limbs, with medium effect sizes (Shuttle Running Test: 0.6; Push-Up: 0.5; Sit-Up: 0.4; countermovement jump: 0.4). These performance variables were very sensitive and specific: the SRT measurement had the best positive likelihood ratio, indicating the utility of the test in distinguishing the two types of players (YV and YNA). In talent detection in youth volleyball, in the 11-13 age range, performance variables should therefore be preferred to anthropometric ones.
Discriminative Random Field Models for Subsurface Contamination Uncertainty Quantification
NASA Astrophysics Data System (ADS)
Arshadi, M.; Abriola, L. M.; Miller, E. L.; De Paolis Kaluza, C.
2017-12-01
Application of flow and transport simulators for prediction of the release, entrapment, and persistence of dense non-aqueous phase liquids (DNAPLs) and associated contaminant plumes is a computationally intensive process that requires specification of a large number of material properties and hydrologic/chemical parameters. Given its computational burden, this direct simulation approach is particularly ill-suited for quantifying both the expected performance and uncertainty associated with candidate remediation strategies under real field conditions. Prediction uncertainties primarily arise from limited information about contaminant mass distributions, as well as the spatial distribution of subsurface hydrologic properties. Application of direct simulation to quantify uncertainty would, thus, typically require simulating multiphase flow and transport for a large number of permeability and release scenarios to collect statistics associated with remedial effectiveness, a computationally prohibitive process. The primary objective of this work is to develop and demonstrate a methodology that employs measured field data to produce equi-probable stochastic representations of a subsurface source zone that capture the spatial distribution and uncertainty associated with key features that control remediation performance (i.e., permeability and contamination mass). Here we employ probabilistic models known as discriminative random fields (DRFs) to synthesize stochastic realizations of initial mass distributions consistent with known, and typically limited, site characterization data. Using a limited number of full scale simulations as training data, a statistical model is developed for predicting the distribution of contaminant mass (e.g., DNAPL saturation and aqueous concentration) across a heterogeneous domain. Monte-Carlo sampling methods are then employed, in conjunction with the trained statistical model, to generate realizations conditioned on measured borehole data. Performance of the statistical model is illustrated through comparisons of generated realizations with the `true' numerical simulations. Finally, we demonstrate how these realizations can be used to determine statistically optimal locations for further interrogation of the subsurface.
NASA Astrophysics Data System (ADS)
Zhou, Weimin; Anastasio, Mark A.
2018-03-01
It has been advocated that task-based measures of image quality (IQ) should be employed to evaluate and optimize imaging systems. Task-based measures of IQ quantify the performance of an observer on a medically relevant task. The Bayesian Ideal Observer (IO), which employs complete statistical information of the object and noise, achieves the upper limit of the performance for a binary signal classification task. However, computing the IO performance is generally analytically intractable and can be computationally burdensome when Markov-chain Monte Carlo (MCMC) techniques are employed. In this paper, supervised learning with convolutional neural networks (CNNs) is employed to approximate the IO test statistics for a signal-known-exactly and background-known-exactly (SKE/BKE) binary detection task. The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) are compared to those produced by the analytically computed IO. The advantages of the proposed supervised learning approach for approximating the IO are demonstrated.
On the use of attractor dimension as a feature in structural health monitoring
Nichols, J.M.; Virgin, L.N.; Todd, M.D.; Nichols, J.D.
2003-01-01
Recent works in the vibration-based structural health monitoring community have emphasised the use of correlation dimension as a discriminating statistic in separating a damaged from an undamaged response. This paper explores the utility of attractor dimension as a 'feature' and offers some comparisons between different metrics reflecting dimension. The focus is on evaluating the performance of two different measures of dimension as damage indicators in a structural health monitoring context. Results indicate that the correlation dimension is probably a poor choice of statistic for the purpose of signal discrimination. Other measures of dimension may be used for the same purposes with a higher degree of statistical reliability. The question of competing methodologies is placed in a hypothesis testing framework and answered with experimental data taken from a cantilevered beam.
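For readers new to the statistic being critiqued: the correlation dimension is usually estimated with the Grassberger-Procaccia correlation sum C(r), the fraction of state-space point pairs closer than r, whose slope in log-log coordinates over a scaling region approximates the dimension. A compact sketch on a delay-embedded signal; the embedding parameters, test signal, and scaling region are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def correlation_sum(signal, r, dim=3, tau=5):
    """Fraction of delay-vector pairs within distance r (Grassberger-Procaccia)."""
    n = len(signal) - (dim - 1) * tau
    emb = np.column_stack([signal[i * tau:i * tau + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return (dists[iu] < r).mean()

rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(1000)) + 0.01 * rng.standard_normal(1000)  # noisy limit cycle
rs = np.logspace(-1.2, -0.2, 6)
cs = [correlation_sum(x, r) for r in rs]
slope = np.polyfit(np.log(rs), np.log(cs), 1)[0]
print(round(slope, 2))  # dimension estimate; expected near 1 for a closed curve
```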
RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics
Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo
2007-01-01
Background: The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results: Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the collected score statistics. The t-tests may be used to measure the reliability of the reported statistics. When combined with the reported P-value for a peptide hit using a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253
NASA Astrophysics Data System (ADS)
Goyal, Sandeep K.; Singh, Rajeev; Ghosh, Sibasish
2016-01-01
Mixed states of a quantum system, represented by density operators, can be decomposed as a statistical mixture of pure states in a number of ways where each decomposition can be viewed as a different preparation recipe. However the fact that the density matrix contains full information about the ensemble makes it impossible to estimate the preparation basis for the quantum system. Here we present a measurement scheme to (seemingly) improve the performance of unsharp measurements. We argue that in some situations this scheme is capable of providing statistics from a single copy of the quantum system, thus making it possible to perform state tomography from a single copy. One of the by-products of the scheme is a way to distinguish between different preparation methods used to prepare the state of the quantum system. However, our numerical simulations disagree with our intuitive predictions. We show that a counterintuitive property of a biased classical random walk is responsible for the proposed mechanism not working.
Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus
2015-10-01
In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMMs). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension of this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single case and a control sample.
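The modified t-test that this extension builds on compares a single case score x to a control sample mean x̄ and standard deviation s via t = (x − x̄) / (s·√((n+1)/n)) with n − 1 degrees of freedom (Crawford and Howell's formula). A minimal sketch with invented control scores; the LMM generalization described in the abstract is not reproduced here.

```python
import statistics
from scipy import stats

def crawford_howell_t(case: float, controls: list) -> tuple:
    """Modified t-test comparing one case to a small control sample."""
    n = len(controls)
    mean = statistics.mean(controls)
    sd = statistics.stdev(controls)              # sample SD (ddof=1)
    t = (case - mean) / (sd * ((n + 1) / n) ** 0.5)
    p = 2 * stats.t.sf(abs(t), df=n - 1)         # two-sided p-value
    return t, p

t, p = crawford_howell_t(42.0, [55, 58, 52, 60, 57, 54, 56, 59, 53, 58])
print(round(t, 2), round(p, 4))  # clearly below the control sample
```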
Using Performance Methods to Enhance Students' Reading Fluency
ERIC Educational Resources Information Center
Young, Chase; Valadez, Corinne; Gandara, Cori
2016-01-01
The quasi-experimental study examined the effects of pairing Rock and Read with Readers Theater and only Rock and Read on second grade students' reading fluency scores. The 51 subjects were pre- and post-tested on five different reading fluency measures. A series of 3 × 2 repeated measures ANOVAs revealed statistically significant interaction…
Calibrated Noise Measurements with Induced Receiver Gain Fluctuations
NASA Technical Reports Server (NTRS)
Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly
2011-01-01
The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of non-stationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of non-stationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's non-stationary properties. Data sets comprised of calibrated noise measurements have previously been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics, and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporally dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (hundreds of seconds); however, the receiver exhibits local non-stationarity over the interval during which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess their performance with the gain-fluctuation signals. This presentation describes the RGMC, the experiment design, and a comparative analysis of calibration algorithms.
Evaluation on the use of cerium in the NBL Titrimetric Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zebrowski, J.P.; Orlowicz, G.J.; Johnson, K.D.
An alternative to potassium dichromate as the titrant in the New Brunswick Laboratory Titrimetric Method for uranium analysis was sought, since chromium in the waste makes disposal difficult. Substitution of a ceric-based titrant was statistically evaluated. Analysis of the data indicated statistically equivalent precisions for the two methods, but a significant overall bias of +0.035% for the ceric titrant procedure. The cause of the bias was investigated, alterations to the procedure were made, and a second statistical study was performed. This second study revealed no statistically significant bias, nor any analyst-to-analyst variation in the ceric titration procedure. A statistically significant day-to-day variation was detected, but this was physically small (0.015%) and was only detected because of the within-day precision of the method. The standard deviation of the %RD for a single measurement was found to be 0.031%. A comparison with quality-control blind dichromate titration data again indicated similar overall precision. The effect of ten elements on the ceric titration's performance was determined: Co, Ti, Cu, Ni, Na, Mg, Gd, Zn, Cd, and Cr; in previous work at NBL, these impurities did not interfere with the potassium dichromate titrant. This study indicated similar results for the ceric titrant, with the exception of Ti. All the elements (excluding Ti and Cr) caused no statistically significant bias in uranium measurements at levels of 10 mg impurity per 20-40 mg uranium. The presence of Ti was found to cause a bias of −0.05%; this is attributed to the presence of sulfate ions, resulting in precipitation of titanium sulfate and occlusion of uranium.
Model Performance Evaluation and Scenario Analysis ...
This tool consists of two parts: model performance evaluation and scenario analysis (MPESA). The model performance evaluation consists of two components: model performance evaluation metrics and model diagnostics. These metrics provide modelers with statistical goodness-of-fit measures that capture magnitude-only, sequence-only, and combined magnitude-and-sequence errors. The performance measures include error analysis, the coefficient of determination, Nash-Sutcliffe efficiency, and a new weighted rank method. These performance metrics provide useful information only about overall model performance. Note that MPESA is based on the separation of observed and simulated time series into magnitude and sequence components. The separation of time series into magnitude and sequence components, and the reconstruction back to time series, provides diagnostic insights to modelers. For example, traditional approaches lack the capability to identify whether the source of uncertainty in the simulated data is the quality of the input data or the way the analyst adjusted the model parameters. This report presents a suite of model diagnostics that identify whether mismatches between observed and simulated data result from magnitude- or sequence-related errors. MPESA offers graphical and statistical options that allow HSPF users to compare observed and simulated time series and identify the parameter values to adjust or the input data to modify. The scenario analysis part of the tool…
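Among the goodness-of-fit measures listed, the Nash-Sutcliffe efficiency is NSE = 1 − Σ(O_t − S_t)² / Σ(O_t − Ō)²: a value of 1 is a perfect fit, and values at or below 0 mean the model predicts no better than the observed mean. A small sketch with invented series (not MPESA's implementation):

```python
def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency of a simulated vs. observed time series."""
    mean_obs = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))   # model error
    sst = sum((o - mean_obs) ** 2 for o in obs)         # variance about the mean
    return 1.0 - sse / sst

obs = [3.0, 4.5, 6.0, 5.0, 3.5]
sim = [2.8, 4.9, 5.6, 5.2, 3.6]
print(round(nash_sutcliffe(obs, sim), 3))  # ~0.928
```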
Assessing the performance of sewer rehabilitation on the reduction of infiltration and inflow.
Staufer, P; Scheidegger, A; Rieckermann, J
2012-10-15
Inflow and Infiltration (I/I) into sewer systems is generally unwanted because, among other things, it decreases the performance of wastewater treatment plants and increases combined sewage overflows. As sewer rehabilitation to reduce I/I is very expensive, water managers not only need methods to accurately measure I/I, but also sound approaches to assess the actual performance of implemented rehabilitation measures. However, such performance assessment is rarely performed. On the one hand, it is challenging to adequately take into account the variability of influential factors, such as hydro-meteorological conditions. On the other hand, it is currently not clear how experimental data can indeed provide robust evidence for reduced I/I. In this paper, we therefore statistically assess the performance of rehabilitation measures to reduce I/I. This is possible by using observations in a suitable reference catchment as a control group and assessing the significance of the observed effect by regression analysis, which is well established in other disciplines. We successfully demonstrate the usefulness of the approach in a case study, where rehabilitation reduced groundwater infiltration by 23.9%. A reduction of stormwater inflow of 35.7%, however, was not statistically significant. Investigations into the experimental design of monitoring campaigns confirmed that the variability of the data, as well as the number of observations collected before the rehabilitation, impact the detection limit of the effect. This implies that it is difficult to improve the data quality after the rehabilitation has been implemented. Therefore, future practical applications should consider a careful experimental design. Further developments could employ more sophisticated monitoring methods, such as stable environmental isotopes, to directly observe the individual infiltration components. In addition, water managers should develop strategies to effectively communicate statistically nonsignificant I/I reduction ratios to decision makers.
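The control-catchment regression can be sketched as a before/after model with an interaction term, in the spirit of a difference-in-differences analysis; this is an illustration on invented data, not the paper's actual model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200  # events per catchment, split before/after rehabilitation

# Hypothetical I/I volumes (m^3) in a rehabilitated and a reference catchment.
period = np.repeat([0, 1], n // 2)          # 0 = before, 1 = after
rehab = rng.normal(100 - 25 * period, 10)   # rehabilitated: drops after
control = rng.normal(100, 10, n)            # reference: unchanged

y = np.concatenate([rehab, control])
treated = np.repeat([1, 0], n)
post = np.concatenate([period, period])
X = sm.add_constant(np.column_stack([treated, post, treated * post]))

# The interaction coefficient estimates the rehabilitation effect, while
# the control catchment absorbs shared hydro-meteorological variability.
fit = sm.OLS(y, X).fit()
print(fit.params[3], fit.pvalues[3])
```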
Variability in reaction time performance of younger and older adults.
Hultsch, David F; MacDonald, Stuart W S; Dixon, Roger A
2002-03-01
Age differences in three basic types of variability were examined: variability between persons (diversity), variability within persons across tasks (dispersion), and variability within persons across time (inconsistency). Measures of variability were based on latency performance from four measures of reaction time (RT) performed by a total of 99 younger adults (ages 17-36 years) and 763 older adults (ages 54-94 years). Results indicated that all three types of variability were greater in older compared with younger participants even when group differences in speed were statistically controlled. Quantile-quantile plots showed age and task differences in the shape of the inconsistency distributions. Measures of within-person variability (dispersion and inconsistency) were positively correlated. Individual differences in RT inconsistency correlated negatively with level of performance on measures of perceptual speed, working memory, episodic memory, and crystallized abilities. Partial set correlation analyses indicated that inconsistency predicted cognitive performance independent of level of performance. The results indicate that variability of performance is an important indicator of cognitive functioning and aging.
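A minimal operationalization of the three variability types (simplified relative to the study's controlled analyses, with invented latencies) might look like this:

```python
import numpy as np

rng = np.random.default_rng(9)
# Toy latencies: persons x tasks x occasions (ms), with person-level offsets.
lat = rng.normal(600, 60, size=(20, 4, 10)) + rng.normal(0, 30, (20, 1, 1))

diversity = lat.mean(axis=(1, 2)).std(ddof=1)              # between persons
dispersion = lat.mean(axis=2).std(axis=1, ddof=1).mean()   # within person, across tasks
inconsistency = lat.std(axis=2, ddof=1).mean()             # within person, across time
print(diversity, dispersion, inconsistency)
```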
Mentoring perception and academic performance: an Academic Health Science Centre survey.
Athanasiou, Thanos; Patel, Vanash; Garas, George; Ashrafian, Hutan; Shetty, Kunal; Sevdalis, Nick; Panzarasa, Pietro; Darzi, Ara; Paroutis, Sotirios
2016-10-01
To determine the association between professors' self-perception of mentoring skills and their academic performance. Two hundred and fifteen professors from Imperial College London, the first Academic Health Science Centre (AHSC) in the UK, were surveyed. The instrument adopted was the Mentorship Skills Self-Assessment Survey. Statement scores were aggregated to provide a score for each shared core, mentor-specific and mentee-specific skill. Univariate and multivariate regression analyses were used to evaluate their relationship with quantitative measures of academic performance (publications, citations and h-index). In total, 104 professors responded (response rate 48%). There were no statistically significant negative correlations between any mentoring statement and any performance measure. In contrast, several mentoring survey items were positively correlated with academic performance. The total survey score for frequency of application of mentoring skills had a statistically significant positive association with number of publications (B=0.012, SE=0.004, p=0.006), as did the frequency of acquiring mentors with number of citations (B=1.572, SE=0.702, p=0.030). Building trust and managing risks had statistically significant positive associations with h-index (B=0.941, SE=0.460, p=0.047 and B=0.613, SE=0.287, p=0.038, respectively). This study supports the view that mentoring is associated with high academic performance. Importantly, it suggests that frequent use of mentoring skills and quality of mentoring have positive effects on academic performance. Formal mentoring programmes should be considered a fundamental part of all AHSCs' configuration.
Factors contributing to academic achievement: a Bayesian structure equation modelling study
NASA Astrophysics Data System (ADS)
Payandeh Najafabadi, Amir T.; Omidi Najafabadi, Maryam; Farid-Rohani, Mohammad Reza
2013-06-01
In Iran, high school graduates enter university after taking a very difficult entrance exam called the Konkoor. Therefore, only the top-performing students are admitted by universities to continue their bachelor's education in statistics. Surprisingly, most such students fall into the following categories: (1) they do not succeed in their education despite their excellent performance on the Konkoor and in high school; (2) they graduate with a grade point average (GPA) that is considerably lower than their high school GPA; (3) they continue their master's education in majors other than statistics; and (4) they try to find jobs unrelated to statistics. This article employs the well-known and powerful statistical technique of Bayesian structural equation modelling (SEM) to study the academic success of recent graduates who studied statistics at Shahid Beheshti University in Iran. This research: (i) considered academic success as a latent variable, measured by GPA and other indicators of academic success in the target population; (ii) employed Bayesian SEM, which works properly for small sample sizes and ordinal variables; (iii) developed, from the literature, five main factors affecting academic success; and (iv) considered several standard psychological tests measuring characteristics such as 'self-esteem' and 'anxiety'. We then studied the impact of such factors on the academic success of the target population. Six factors that positively impact student academic success were identified, in the following order of relative impact (from greatest to least): 'Teaching-Evaluation', 'Learner', 'Environment', 'Family', 'Curriculum' and 'Teaching Knowledge'. Influential variables within each factor have also been noted.
Statistical similarity measures for link prediction in heterogeneous complex networks
NASA Astrophysics Data System (ADS)
Shakibian, Hadi; Charkari, Nasrollah Moghadam
2018-07-01
The majority of link prediction measures in heterogeneous complex networks rely on node connectivities, while less attention has been paid to the importance of the nodes and paths. In this paper, we propose new meta-path-based statistical similarity measures to properly perform the link prediction task. The main idea in the proposed measures is to derive co-occurrence events, collected in a set of co-occurrence matrices, between the nodes visited along a meta-path. The extracted co-occurrence matrices are analyzed in terms of energy, inertia, local homogeneity, correlation, and the information measure of correlation to determine various information-theoretic measures. We evaluate the proposed measures, denoted link energy, link inertia, link local homogeneity, link correlation, and link information measure of correlation, using a standard DBLP network data set. The results for AUC score and Precision rate indicate the validity and accuracy of the proposed measures in comparison to popular meta-path-based similarity measures.
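The matrix statistics named above have standard definitions in the texture-analysis literature; the sketch below computes them for a toy co-occurrence matrix (the paper's link-level measures aggregate such quantities along meta-paths and may differ in detail).

```python
import numpy as np

def cooccurrence_measures(P):
    """Standard statistics of a normalized co-occurrence matrix P:
    energy, inertia, local homogeneity, and correlation."""
    P = np.asarray(P, float)
    P = P / P.sum()
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()
    s_i = np.sqrt(((i - mu_i) ** 2 * P).sum())
    s_j = np.sqrt(((j - mu_j) ** 2 * P).sum())
    return {
        "energy": (P ** 2).sum(),
        "inertia": ((i - j) ** 2 * P).sum(),
        "local_homogeneity": (P / (1.0 + (i - j) ** 2)).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * P).sum() / (s_i * s_j),
    }

# Toy co-occurrence counts between nodes visited along a meta-path.
C = np.array([[4, 1, 0], [1, 6, 2], [0, 2, 3]])
print(cooccurrence_measures(C))
```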
Statistical learning in social action contexts.
Monroy, Claire; Meyer, Marlene; Gerson, Sarah; Hunnius, Sabine
2017-01-01
Sensitivity to the regularities and structure contained within sequential, goal-directed actions is an important building block for generating expectations about the actions we observe. Until now, research on statistical learning for actions has solely focused on individual action sequences, but many actions in daily life involve multiple actors in various interaction contexts. The current study is the first to investigate the role of statistical learning in tracking regularities between actions performed by different actors, and whether the social context characterizing their interaction influences learning. That is, are observers more likely to track regularities across actors if they are perceived as acting jointly as opposed to in parallel? We tested adults and toddlers to explore whether social context guides statistical learning and, if so, whether it does so from early in development. In a between-subjects eye-tracking experiment, participants were primed with a social context cue between two actors who either shared a goal of playing together ('Joint' condition) or stated the intention to act alone ('Parallel' condition). In subsequent videos, the actors performed sequential actions in which, for certain action pairs, the first actor's action reliably predicted the second actor's action. We analyzed predictive eye movements to upcoming actions as a measure of learning, and found that both adults and toddlers learned the statistical regularities across actors when their actions caused an effect. Further, adults with high statistical learning performance were sensitive to social context: those who observed actors with a shared goal were more likely to correctly predict upcoming actions. In contrast, there was no effect of social context in the toddler group, regardless of learning performance. These findings shed light on how adults and toddlers perceive statistical regularities across actors depending on the nature of the observed social situation and the resulting effects.
Test anxiety and academic performance in chiropractic students.
Zhang, Niu; Henderson, Charles N R
2014-01-01
Objective: We assessed the level of students' test anxiety, and the relationship between test anxiety and academic performance. Methods: We recruited 166 third-quarter students. The Test Anxiety Inventory (TAI) was administered to all participants. Total scores from written examinations and objective structured clinical examinations (OSCEs) were used as response variables. Results: Multiple regression analysis shows that there was a modest, but statistically significant negative correlation between TAI scores and written exam scores, but not OSCE scores. Worry and emotionality were the best predictive models for written exam scores. Mean total anxiety and emotionality scores for females were significantly higher than those for males, but not worry scores. Conclusion: Moderate-to-high test anxiety was observed in 85% of the chiropractic students examined. However, total test anxiety, as measured by the TAI score, was a very weak predictive model for written exam performance. Multiple regression analysis demonstrated that replacing total anxiety (TAI) with worry and emotionality (TAI subscales) produces a much more effective predictive model of written exam performance. Sex, age, highest current academic degree, and ethnicity contributed little additional predictive power in either regression model. Moreover, TAI scores were not found to be statistically significant predictors of physical exam skill performance, as measured by OSCEs.
Radioactivity measurement of radioactive contaminated soil by using a fiber-optic radiation sensor
NASA Astrophysics Data System (ADS)
Joo, Hanyoung; Kim, Rinah; Moon, Joo Hyun
2016-06-01
A fiber-optic radiation sensor (FORS) was developed to measure the gamma radiation from radioactive contaminated soil. The FORS was fabricated using an inorganic scintillator, (Lu,Y)2SiO5:Ce (LYSO:Ce), a mixture of epoxy resin and hardener, aluminum foil, and a plastic optical fiber. Before its field application, the FORS was tested to determine whether it performed adequately. The test results showed that the measurements by the FORS adequately followed the theoretically estimated values. The FORS was then applied to measure the gamma radiation from radioactive contaminated soil. For comparison, a commercial radiation detector was also applied to the same soil samples. The measurement data were analyzed using a statistical parameter, the critical level, to determine whether net radioactivity statistically different from background was present in the soil sample. The analysis showed that the soil sample had radioactivity distinguishable from background.
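One standard form of such a decision statistic is Currie's critical level; assuming paired background subtraction of Poisson counts (the paper's exact formulation may differ), a sketch is:

```python
import numpy as np
from scipy import stats

def critical_level(background_counts, alpha=0.05):
    """Currie-style critical level for paired background subtraction:
    L_C = k_alpha * sqrt(2 * B), with B the expected background count
    and k_alpha the standard normal quantile (~1.645 at alpha = 0.05)."""
    k = stats.norm.ppf(1 - alpha)
    return k * np.sqrt(2 * background_counts)

B = 400          # hypothetical background counts in the counting window
gross = 460      # gross counts from the soil sample
net = gross - B
Lc = critical_level(B)
print(f"net = {net}, L_C = {Lc:.1f} ->",
      "activity detected" if net > Lc else "not distinguishable")
```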
A computational study of whole-brain connectivity in resting state and task fMRI
Goparaju, Balaji; Rana, Kunjan D.; Calabro, Finnegan J.; Vaina, Lucia Maria
2014-01-01
Background: We compared the functional brain connectivity produced during resting state, in which subjects were not actively engaged in a task, with that produced while they actively performed a visual motion task (task state). Material/Methods: In this paper we employed graph-theoretical measures and network statistics in novel ways to compare, in the same group of human subjects, functional brain connectivity during resting-state fMRI with brain connectivity during performance of a high-level visual task. We performed a whole-brain connectivity analysis to compare network statistics in resting and task states among anatomically defined Brodmann areas to investigate how brain networks spanning the cortex changed when subjects were engaged in task performance. Results: In the resting state, we found strong connectivity among the posterior cingulate cortex (PCC), precuneus, medial prefrontal cortex (MPFC), lateral parietal cortex, and hippocampal formation, consistent with previous reports of the default mode network (DMN). The connections among these areas were strengthened while subjects actively performed an event-related visual motion task, indicating a continued and strong engagement of the DMN during task processing. Regional measures such as degree (number of connections) and betweenness centrality (number of shortest paths) showed that task performance induces stronger inter-regional connections, leading to a denser processing network, but that this does not imply a more efficient system, as shown by integration measures such as path length and global efficiency and by global measures such as small-worldness. Conclusions: In spite of the maintenance of connectivity and the “hub-like” behavior of areas, our results suggest that the network paths may be rerouted when performing the task condition. PMID:24947491
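The regional and global graph statistics named above are standard network measures; a small networkx illustration on a toy graph follows (the node names merely echo the DMN regions; this is not the study's data).

```python
import networkx as nx

# Toy connectivity graph standing in for thresholded inter-regional
# correlations; in the study, nodes would be Brodmann areas.
G = nx.Graph([("PCC", "Precuneus"), ("PCC", "MPFC"),
              ("MPFC", "LatParietal"), ("PCC", "Hippocampus"),
              ("LatParietal", "Hippocampus")])

degree = dict(G.degree())                      # number of connections
betweenness = nx.betweenness_centrality(G)     # share of shortest paths through a node
path_len = nx.average_shortest_path_length(G)  # integration measure
efficiency = nx.global_efficiency(G)           # mean inverse shortest path length

print(degree, betweenness, path_len, efficiency, sep="\n")
```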
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xi; School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332; Thadesar, Paragkumar A.
2014-09-15
In-situ microscale thermomechanical strain measurements have been performed in combination with synchrotron x-ray microdiffraction to understand the fundamental cause of failures in microelectronic devices with through-silicon vias. The physics behind the raster scan and the data analysis of the measured strain distribution maps is explored, utilizing the energies of indexed reflections from the measured data and applying them to beam intensity analysis and effective penetration depth determination. Moreover, a statistical analysis is performed on the beam intensity and strain distributions along the beam penetration path to account for the factors affecting the peak-search and strain-refinement procedures.
Impaired Statistical Learning in Developmental Dyslexia
Thiessen, Erik D.; Holt, Lori L.
2015-01-01
Purpose: Developmental dyslexia (DD) is commonly thought to arise from phonological impairments. However, an emerging perspective is that a more general procedural learning deficit, not specific to phonological processing, may underlie DD. The current study examined whether individuals with DD are capable of extracting statistical regularities across sequences of passively experienced speech and nonspeech sounds. Such statistical learning is believed to be domain-general, to draw upon procedural learning systems, and to relate to language outcomes. Method: DD and control groups were familiarized with a continuous stream of syllables or sine-wave tones, the ordering of which was defined by high or low transitional probabilities across adjacent stimulus pairs. Participants subsequently judged two 3-stimulus test items with either high or low statistical coherence as being the most similar to the sounds heard during familiarization. Results: As with control participants, the DD group was sensitive to the transitional probability structure of the familiarization materials, as evidenced by above-chance performance. However, the performance of participants with DD was significantly poorer than that of controls across linguistic and nonlinguistic stimuli. In addition, reading-related measures were significantly correlated with statistical learning performance for both speech and nonspeech material. Conclusion: Results are discussed in light of procedural learning impairments among participants with DD. PMID:25860795
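Transitional probabilities of the kind manipulated in such familiarization streams are straightforward to compute; a toy sketch with an invented syllable stream:

```python
from collections import Counter

def transitional_probabilities(stream):
    """P(B | A) for adjacent pairs: count(A, B) / count(A as first item)."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# Toy familiarization stream built from "words" with high internal TPs.
stream = list("badigukupadobadigutupirobadigu")
tps = transitional_probabilities(stream)
print(tps[("b", "a")], tps[("a", "d")])  # high within-word transitions
```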
Tankevicius, Gediminas; Lankaite, Doanata; Krisciunas, Aleksandras
2013-08-01
The lack of knowledge about isometric ankle testing indicates the need for research in this area. To assess test-retest reliability and to determine the optimal position for isometric ankle-eversion and -inversion testing. Test-retest reliability study. Isometric ankle eversion and inversion were assessed in 3 different dynamometer foot-plate positions: 0°, 7°, and 14° of inversion. Two maximal repetitions were performed at each angle. Both limbs were tested (40 ankles in total). The test was performed 2 times with a period of 7 d between the tests. University hospital. The study was carried out on 20 healthy athletes with no history of ankle sprains. Reliability was assessed using the intraclass correlation coefficient (ICC2,1); minimal detectable change (MDC) was calculated using a 95% confidence interval. A paired t-test was used to assess statistically significant changes, and P < .05 was considered statistically significant. Eversion and inversion peak torques showed high ICCs at all 3 angles (ICC values .87-.96, MDC values 3.09-6.81 Nm). Eversion peak torque was smallest when testing at the 0° angle and gradually increased, reaching maximum values at the 14° angle. The increase in eversion peak torque was statistically significant at 7° and 14° of inversion. Inversion peak torque showed the opposite pattern: it was smallest when measured at the 14° angle and increased at the other 2 angles; statistically significant changes were seen only between measures taken at 0° and 14°. Isometric eversion and inversion testing using the Biodex 4 Pro system is a reliable method. The authors suggest that the angle of 7° of inversion is best for isometric eversion and inversion testing.
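ICC(2,1) and the MDC can be computed directly from a subjects-by-sessions score matrix; a minimal sketch using the Shrout-Fleiss two-way random-effects formula, with invented torque values:

```python
import numpy as np

def icc_2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single
    measure (Shrout & Fleiss). Y is (n targets x k sessions/raters)."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    grand = Y.mean()
    ms_r = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # targets
    ms_c = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # sessions
    sse = ((Y - Y.mean(1, keepdims=True) - Y.mean(0) + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Hypothetical peak torques (Nm) for 5 ankles over 2 test sessions.
Y = np.array([[20.1, 21.0], [25.3, 24.8], [18.9, 19.5],
              [30.2, 29.6], [22.4, 23.0]])
icc = icc_2_1(Y)
sem = Y.std(ddof=1) * np.sqrt(1 - icc)  # standard error of measurement
mdc95 = 1.96 * sem * np.sqrt(2)         # minimal detectable change (95% CI)
print(icc, mdc95)
```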
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
Statistical approach for selection of biologically informative genes.
Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N
2018-05-20
Selection of informative genes from high-dimensional gene expression data has emerged as an important research area in genomics. Many gene selection techniques proposed so far are based on either a relevancy or a redundancy measure. Further, the performance of these techniques has been adjudged through post-selection classification accuracy computed with a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e., Gene Set Enrichment with QTL (GSEQ) and a biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique against 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g., subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple-criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that, under the multiple-criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes from high-dimensional expression data for breeding and systems biology studies.
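The relevance-minus-redundancy idea behind mRMR-type selection can be sketched with correlations standing in for the mutual-information terms; this is a simplified illustration, not the Boot-MRMR statistic itself.

```python
import numpy as np

def mrmr_rank(X, y, n_select):
    """Greedy mRMR-style selection: relevance is |correlation with the
    class label|, redundancy is mean |correlation| with already-selected
    genes; score = relevance - redundancy."""
    n_genes = X.shape[1]
    rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_genes)])
    selected = [int(np.argmax(rel))]
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in range(n_genes):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                           for s in selected])
            score = rel[j] - red  # maximum relevance, minimum redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 30))  # 60 samples x 30 genes
y = (X[:, 3] + 0.5 * X[:, 7] > 0).astype(float)
print(mrmr_rank(X, y, 5))      # genes 3 and 7 should rank highly
```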
Specialized data analysis of SSME and advanced propulsion system vibration measurements
NASA Technical Reports Server (NTRS)
Coffin, Thomas; Swanson, Wayne L.; Jong, Yen-Yi
1993-01-01
The basic objectives of this contract were to perform detailed analysis and evaluation of dynamic data obtained during Space Shuttle Main Engine (SSME) test and flight operations, including analytical/statistical assessment of component dynamic performance, and to continue the development and implementation of analytical/statistical models to effectively define nominal component dynamic characteristics, detect anomalous behavior, and assess machinery operational conditions. The study was intended to provide timely assessment of engine component operational status, identify probable causes of malfunction, and define feasible engineering solutions. The work was performed under three broad tasks: (1) Analysis, Evaluation, and Documentation of SSME Dynamic Test Results; (2) Data Base and Analytical Model Development and Application; and (3) Development and Application of Vibration Signature Analysis Techniques.
Estimation of the POD function and the LOD of a qualitative microbiological measurement method.
Wilrich, Cordula; Wilrich, Peter-Theodor
2009-01-01
Qualitative microbiological measurement methods, in which the measurement results are either 0 (microorganism not detected) or 1 (microorganism detected), are discussed. The performance of such a measurement method is described by its probability of detection (POD) as a function of the contamination (CFU/g or CFU/mL) of the test material, or by the LOD(p), i.e., the contamination that is detected (measurement result 1) with a specified probability p. A complementary log-log model was used to statistically estimate these performance characteristics. An intralaboratory experiment for the detection of Listeria monocytogenes in various food matrixes illustrates the method. The estimate of LOD50% is compared with the Spearman-Kärber method.
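Under a complementary log-log model in log contamination, the POD curve and LOD(p) have closed forms; a sketch with invented coefficients:

```python
import numpy as np

def pod(c, a, b):
    """Probability of detection under a complementary log-log model in
    log contamination: cloglog(POD) = a + b * log(c)."""
    return 1.0 - np.exp(-np.exp(a + b * np.log(c)))

def lod(p, a, b):
    """Contamination detected with probability p: invert the model,
    c = exp((log(-log(1 - p)) - a) / b)."""
    return np.exp((np.log(-np.log(1.0 - p)) - a) / b)

a, b = -0.8, 1.2          # hypothetical fitted coefficients
print(pod(np.array([0.5, 1.0, 2.0, 5.0]), a, b))
print(lod(0.50, a, b))    # LOD50%
print(lod(0.95, a, b))    # LOD95%
```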
NASA Astrophysics Data System (ADS)
Agus, M.; Hitchcott, P. K.; Penna, M. P.; Peró-Cebollero, M.; Guàrdia-Olmos, J.
2016-11-01
Many studies have investigated the features of probabilistic reasoning developed in relation to different formats of problem presentation, showing that it is affected by various individual and contextual factors. Incomplete understanding of the identity and role of these factors may explain the inconsistent evidence concerning the effect of problem presentation format. Thus, superior performance has sometimes been observed for graphically, rather than verbally, presented problems. The present study was undertaken to address this issue. Psychology undergraduates without any statistical expertise (N = 173 in Italy; N = 118 in Spain; N = 55 in England) were administered statistical problems in two formats (verbal-numerical and graphical-pictorial) under a condition of time pressure. Students also completed additional measures indexing several potentially relevant individual dimensions (statistical ability, statistical anxiety, attitudes towards statistics and confidence). Interestingly, a facilitatory effect of graphical presentation was observed in the Italian and Spanish samples but not in the English one. Significantly, the individual dimensions predicting statistical performance also differed between the samples, highlighting a different role of confidence. Hence, these findings confirm previous observations concerning problem presentation format while simultaneously highlighting the importance of individual dimensions.
NASA Astrophysics Data System (ADS)
Biswas, Sayan; Qiao, Li
2017-03-01
A detailed statistical assessment of seedless velocity measurement using Schlieren Image Velocimetry (SIV) was performed using the open-source Robust Phase Correlation (RPC) algorithm. A well-known flow field, an axisymmetric turbulent helium jet, was analyzed in the near and intermediate regions (0 ≤ x/d ≤ 20) for two different Reynolds numbers, Re_d = 11,000 and Re_d = 22,000, using schlieren with a horizontal knife-edge, schlieren with a vertical knife-edge, and the shadowgraph technique, and the resulting velocity fields from the SIV techniques were compared to traditional Particle Image Velocimetry (PIV) measurements. A novel, inexpensive, easy-to-set-up two-camera SIV technique was demonstrated for measuring a high-velocity turbulent jet, with jet exit velocities of 304 m/s (Mach = 0.3) and 611 m/s (Mach = 0.6), respectively. Several image restoration and enhancement techniques were tested to improve the signal-to-noise ratio (SNR) in the schlieren and shadowgraph images. Processing and post-processing parameters for the SIV techniques were examined in detail. A quantitative comparison between the self-seeded SIV techniques and traditional PIV was made using correlation statistics. While the resulting flow fields from schlieren with a horizontal knife-edge and from shadowgraph showed excellent agreement with the PIV measurements, schlieren with a vertical knife-edge performed poorly. The performance of spatial cross-correlations at different jet locations using the SIV techniques and PIV was evaluated. Turbulence quantities such as turbulence intensity, mean velocity fields, and Reynolds shear stress heavily influenced the spatial correlations and the correlation-plane SNR. Several performance metrics, such as the primary peak ratio (PPR), peak-to-correlation energy (PCE), and the probability distribution of signal and noise, were used to compare the capability and potential of the different SIV techniques.
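The core correlation step and a PPR-style metric can be illustrated with a toy FFT cross-correlation; note that PPR is crudely approximated here by the ratio of the two largest correlation values, whereas PIV/SIV codes typically exclude the neighborhood of the primary peak before finding the second one.

```python
import numpy as np

def correlation_plane(win_a, win_b):
    """FFT-based circular cross-correlation of two interrogation
    windows (means removed), as in standard PIV/SIV processing."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    return np.fft.fftshift(corr)

def primary_peak_ratio(corr):
    """Crude PPR proxy: tallest correlation value over the second tallest."""
    flat = np.sort(corr.ravel())
    return flat[-1] / flat[-2]

rng = np.random.default_rng(3)
frame = rng.normal(size=(32, 32))
shifted = np.roll(frame, (2, 3), axis=(0, 1))   # known displacement
corr = correlation_plane(shifted, frame)
peak = np.unravel_index(corr.argmax(), corr.shape)
print("peak offset:", np.array(peak) - 16, "PPR:", primary_peak_ratio(corr))
```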
NASA Astrophysics Data System (ADS)
El Sharif, H.; Teegavarapu, R. S.
2012-12-01
Spatial interpolation methods used to estimate missing precipitation data at a site are seldom checked for their ability to preserve site and regional statistics. Such statistics are primarily defined by spatial correlations and other site-to-site statistics in a region. Preservation of site and regional statistics represents a means of assessing the validity of missing precipitation estimates at a site. This study evaluates the efficacy of a fuzzy-logic methodology for infilling missing historical daily precipitation data in preserving site and regional statistics. Rain gauge sites in the state of Kentucky, USA, are used as a case study for evaluation of this newly proposed method in comparison to traditional data-infilling techniques. Several error and performance measures will be used to evaluate the methods and trade-offs in accuracy of estimation and preservation of site and regional statistics.
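For reference, one traditional comparator, inverse-distance weighting, together with a simple check of whether site-to-site correlations are preserved, can be sketched as follows (synthetic gauges and rainfall, not the Kentucky data):

```python
import numpy as np

def idw_estimate(coords, values, target, power=2.0):
    """Inverse-distance-weighted estimate at a target site, one of the
    traditional infilling techniques the fuzzy method is compared with."""
    d = np.linalg.norm(coords - target, axis=1)
    w = 1.0 / d ** power
    return (w * values).sum() / w.sum()

rng = np.random.default_rng(4)
coords = rng.uniform(0, 100, size=(5, 2))   # neighboring gauge locations
target = np.array([50.0, 50.0])
days = rng.gamma(2.0, 3.0, size=(365, 5))   # synthetic daily precipitation

estimates = np.array([idw_estimate(coords, day, target) for day in days])

# Preservation check: site-to-site correlations of the infilled series
# against each neighboring gauge.
for j in range(5):
    print(j, np.corrcoef(estimates, days[:, j])[0, 1].round(2))
```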
Age differences in the performance of basketball dribbling by elementary school boys.
Caterino, M C
1991-08-01
Age differences in hand contact time and ball-to-floor distance during the performance of a basketball dribbling task by 30 5- to 6-, 7- to 8-, and 9- to 10-yr.-old boys were studied. Each age group included 10 boys, five with high rhythm audiation skill and five with low rhythm audiation skill, as measured on Gordon's Primary or Intermediate Measures of Music Audiation. Performance during eight bounces was filmed with a 16-mm camera and analyzed with a stop-action projector. Analysis of variance indicated no statistically significant differences. Observed dribbling behaviors are discussed.
Koenig, Lane; Soltoff, Samuel A; Demiralp, Berna; Demehin, Akinluwa A; Foster, Nancy E; Steinberg, Caroline Rossi; Vaz, Christopher; Wetzel, Scott; Xu, Susan
In 2016, Medicare's Hospital-Acquired Condition Reduction Program (HAC-RP) will reduce hospital payments by $364 million. Although observers have questioned the validity of certain HAC-RP measures, less attention has been paid to the determination of low-performing hospitals (bottom quartile) and the assignment of penalties. This study investigated possible bias in the HAC-RP by simulating hospitals' likelihood of being in the worst-performing quartile for 8 patient safety measures, assuming identical expected complication rates across hospitals. Simulated likelihood of being a poor performer varied with hospital size. This relationship depended on the measure's complication rate. For 3 of 8 measures examined, the equal-quality simulation identified poor performers similarly to empirical data (c-statistic approximately 0.7 or higher) and explained most of the variation in empirical performance by size (Efron's R² > 0.85). The Centers for Medicare & Medicaid Services could address potential bias in the HAC-RP by stratifying by hospital size or using a broader "all-harm" measure.
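The simulation logic (identical true complication rates, binomial noise scaled by hospital volume, then quartile assignment) is easy to reproduce in miniature; the sizes and rate below are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
p = 0.02                                 # identical expected complication rate
sizes = np.array([50, 200, 1000, 5000])  # eligible cases per hospital
n_hosp, n_sim = 400, 2000
all_sizes = np.tile(sizes, n_hosp // 4)

# Hospitals of each size, all with the same true quality.
counts = rng.binomial(all_sizes, p, size=(n_sim, n_hosp))
rates = counts / all_sizes

# Probability of landing in the worst (highest-rate) quartile, by size:
# small hospitals have noisier observed rates and land there more often.
cutoff = np.quantile(rates, 0.75, axis=1, keepdims=True)
worst = rates > cutoff
for s in sizes:
    print(s, worst[:, all_sizes == s].mean().round(3))
```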
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hawkins, C.A.
Tests of QED to order α⁴ performed with the ASP detector at PEP are presented. Measurements have been made of exclusive e⁺e⁻e⁺e⁻, e⁺e⁻γγ and γγγγ final states with all particles above 50 milliradians with respect to the e⁺e⁻ beam line. These measurements represent a significant increase in statistics over previous measurements. All measurements agree well with theoretical predictions. 5 refs., 1 tab.
Manual tracing versus smartphone application (app) tracing: a comparative study.
Sayar, Gülşilay; Kilinc, Delal Dara
2017-11-01
This study aimed to compare the results of conventional manual cephalometric tracing with those acquired with smartphone-application cephalometric tracing. The cephalometric radiographs of 55 patients (25 females and 30 males) were traced via the manual and app methods and were subsequently examined with Steiner's analysis. Five skeletal measurements, five dental measurements and two soft tissue measurements were derived from 21 landmarks. The durations of the two methods' performance were also compared. SNA (Sella, Nasion, A point angle) and SNB (Sella, Nasion, B point angle) values for the manual method were statistically lower (p < .001) than those for the app method. The ANB value for the manual method was statistically lower than that of the app method. L1-NB (°) and upper lip protrusion values for the manual method were statistically higher than those for the app method. Go-GN/SN, U1-NA (°) and U1-NA (mm) values for the manual method were statistically lower than those for the app method. No differences between the two methods were found in the L1-NB (mm), occlusal plane to SN, interincisal angle or lower lip protrusion values. Although statistically significant differences were found between the two methods, cephalometric tracing proceeded faster with the app method than with the manual method.
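Method comparisons of this kind are paired by radiograph; a minimal sketch with invented SNA values:

```python
import numpy as np
from scipy import stats

# Hypothetical SNA angles (degrees) for the same 10 radiographs traced
# manually and with a smartphone app.
manual = np.array([81.5, 79.0, 82.3, 80.1, 78.6, 83.0, 80.8, 79.7, 81.2, 82.0])
app    = np.array([82.0, 79.6, 82.9, 80.5, 79.1, 83.4, 81.5, 80.2, 81.8, 82.6])

# Paired comparison: the same radiograph measured by both methods.
t, p = stats.ttest_rel(manual, app)
print(f"mean difference = {np.mean(app - manual):.2f} deg, t = {t:.2f}, p = {p:.4f}")
```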
Vahedi, Shahram; Farrokhi, Farahman
2011-01-01
Objective: The aim of this study is to explore the confirmatory factor analysis results of the Persian adaptation of the Statistics Anxiety Measure (SAM), proposed by Earp. Method: The validity and reliability assessments of the scale were performed on 298 college students chosen randomly from Tabriz University in Iran. Confirmatory factor analysis (CFA) was carried out to determine the factor structures of the Persian adaptation of the SAM. Results: As expected, the second-order model provided a better fit to the data than the three alternative models. Conclusions: Hence, the SAM provides an equally valid measure for use among college students. The study both expands and adds support to the existing body of math anxiety literature. PMID:22952530
Willis, R T; Becerra, F E; Orozco, L A; Rolston, S L
2011-07-18
We present measurements of the polarization correlation and photon statistics of photon pairs that emerge from a laser-pumped warm rubidium vapor cell. The photon pairs occur at 780 nm and 1367 nm and are polarization entangled. We measure the autocorrelation of each of the generated fields as well as the cross-correlation function, and observe a strong violation of the two-beam Cauchy-Schwarz inequality. We evaluate the performance of the system as a source of heralded single photons at a telecommunication wavelength. We measure the heralded autocorrelation and see that coincidences are suppressed by a factor of ≈ 20 relative to a Poissonian source at a generation rate of 1500 s⁻¹, a heralding efficiency of 10%, and a narrow spectral width.
Theoretical and experimental analysis of laser altimeters for barometric measurements over the ocean
NASA Technical Reports Server (NTRS)
Tsai, B. M.; Gardner, C. S.
1984-01-01
The statistical characteristics and the waveforms of ocean-reflected laser pulses are studied. The received signal is found to be corrupted by shot noise and time-resolved speckle. The statistics of time-resolved speckle and its effects on the timing accuracy of the receiver are studied in the general context of laser altimetry. For estimating the differential propagation time, various receiver timing algorithms are proposed and their performances evaluated. The results indicate that, with the parameters of a realistic altimeter, a pressure measurement accuracy of a few millibars is feasible. The data obtained from the first airborne two-color laser altimeter experiment are processed and analyzed. The results are used to verify the pressure measurement concept.
Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning
Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego
2016-01-01
Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-valued Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults. PMID:27322273
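Time-domain statistical feature sets of this sort are typically simple moments and shape factors of the raw signal; a sketch with an invented vibration record (the paper's exact feature list may differ):

```python
import numpy as np
from scipy import stats

def time_domain_features(x):
    """A typical statistical feature set from a vibration signal; the
    paper also uses frequency- and time-frequency-domain sets."""
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": x.mean(),
        "std": x.std(ddof=1),
        "rms": rms,
        "skewness": stats.skew(x),
        "kurtosis": stats.kurtosis(x),
        "crest_factor": np.abs(x).max() / rms,
    }

# Toy vibration record: a gear-mesh tone plus noise and a fault impulse.
fs = 10_000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 350 * t) + 0.3 * np.random.default_rng(6).normal(size=t.size)
x[5000:5010] += 4.0   # simulated impact from a local fault
print(time_domain_features(x))
```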
EFFECTS OF DIFFERENT WARM-UP PROGRAMS ON GOLF PERFORMANCE IN ELITE MALE GOLFERS
Macfarlane, Alison
2012-01-01
Background: The physical demands required of the body to execute a shot in golf are enormous. Current evidence suggests that a warm-up involving static stretching is detrimental to immediate golf performance compared with active dynamic stretching. However, the effect of resistance exercises during warm-up on immediate golf performance is unknown. Therefore, the purpose of this study was to assess the effects of three different warm-up programs on immediate golf performance. Methods: Fifteen elite male golfers completed three different warm-up programs over three sessions on non-consecutive days. After each warm-up program, each participant hit ten maximal drives with the ball flight and swing analyzed with Flightscope® to record maximum club head speed (MCHS), maximal driving distance (MDD), driving accuracy (DA), smash factor (SF) and consistent ball strike (CBS). Results: Repeated-measures ANOVA tests showed statistically significant differences in 3 of the 5 factors of performance (MDD, CBS and SF). Subsequently, paired t-tests showed that statistically significant (p<0.05) improvements occurred in each of these three factors in the group performing a combined active dynamic and functional resistance (FR) warm-up as opposed to either the active dynamic (AD) warm-up or the combined AD with weights (WT) warm-up. There were no statistically significant differences observed between the AD warm-up and the WT warm-up for any of the five performance factors, and no statistically significant difference between any of the warm-ups for maximum club head speed (MCHS) or driving accuracy (DA). Conclusion: Performing a combined AD and FR warm-up with Theraband® leads to a significant increase in immediate performance on certain factors of the golf drive compared to performing an AD warm-up by itself or a combined AD with WT warm-up. No significant difference was observed between the three warm-up groups for the immediate effect on driving accuracy or maximum club head speed. The addition of functional resistance activities to active dynamic stretching has immediate benefits for elite male golfers in relation to some factors of their performance. Level of Evidence: This study is a quantitative experimental design using repeated measures and multiple crossovers. It cannot be classified using the descriptive levels of evidence. PMID:23936749
Attachment at work and performance.
Neustadt, Elizabeth A; Chamorro-Premuzic, Tomas; Furnham, Adrian
2011-09-01
This paper examines the relations between self-reported attachment orientation at work and personality, self-esteem, trait emotional intelligence (aka emotional self-efficacy), and independently assessed career potential and job performance. Self-report data were collected from 211 managers in an international business in the hospitality industry; independent assessments of these managers' job performance and career potential were separately obtained from the organization. A self-report measure of romantic attachment was adapted for application in the work context; a two-factor solution was found for this measure. Secure/autonomous attachment orientation at work was positively related to self-esteem, trait emotional intelligence, extraversion, agreeableness, and conscientiousness, and also to job performance. Not only was secure/autonomous attachment orientation at work statistically predictive of job performance, but the new measure also made a distinct contribution, beyond conscientiousness, to this prediction.
Obuchowski, Nancy A; Buckler, Andrew; Kinahan, Paul; Chen-Mayer, Heather; Petrick, Nicholas; Barboriak, Daniel P; Bullen, Jennifer; Barnhart, Huiman; Sullivan, Daniel C
2016-04-01
A major initiative of the Quantitative Imaging Biomarker Alliance is to develop standards-based documents called "Profiles," which describe one or more technical performance claims for a given imaging modality. The term "actor" denotes any entity (device, software, or person) whose performance must meet certain specifications for the claim to be met. The objective of this paper is to present the statistical issues in testing actors' conformance with the specifications. In particular, we present the general rationale and interpretation of the claims, the minimum requirements for testing whether an actor achieves the performance requirements, the study designs used for testing conformity, and the statistical analysis plan. We use three examples to illustrate the process: apparent diffusion coefficient in solid tumors measured by MRI, change in Perc 15 as a biomarker for the progression of emphysema, and percent change in solid tumor volume by computed tomography as a biomarker for lung cancer progression.
Statistical Modeling of Natural Backgrounds in Hyperspectral LWIR Data
2016-09-06
extremely important for studying performance trades. First, we study the validity of this model using real hyperspectral data, and compare the relative... difficult to validate any statistical model created for a target of interest. However, since background measurements are plentiful, it is reasonable to... Golden, S., Less, D., Jin, X., and Rynes, P., "Modeling and analysis of LWIR signature variability associated with 3d and BRDF effects," 98400P (May 2016).
The use and misuse of aircraft and missile RCS statistics
NASA Astrophysics Data System (ADS)
Bishop, Lee R.
1991-07-01
Both static and dynamic radar cross section (RCS) measurements are used for RCS predictions, but the static data are less complete than the dynamic. Integrated dynamic RCS data also have limitations for predicting radar detection performance. When raw static data are properly used, good first-order detection estimates are possible. The research to develop more usable RCS statistics is reviewed, and windowing techniques for creating probability density functions from static RCS data are discussed.
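One way to turn a windowed slice of static RCS data into a probability density function is kernel density estimation; a sketch on a synthetic aspect-angle pattern (all values invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Toy static RCS pattern (dBsm) versus aspect angle; a real pattern
# would come from chamber or range measurements.
aspect = np.linspace(0, 360, 720)
rcs_dbsm = -5 + 8 * np.cos(np.radians(2 * aspect)) + rng.normal(0, 2, aspect.size)

# Windowing: form a PDF from only the static samples within the aspect
# sector the radar is expected to see, then use it for detection estimates.
sector = rcs_dbsm[(aspect > 30) & (aspect < 60)]
pdf = stats.gaussian_kde(sector)
print("P(RCS < -5 dBsm) ~", round(float(pdf.integrate_box_1d(-np.inf, -5.0)), 3))
```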
Validity of a smartphone protractor to measure sagittal parameters in adult spinal deformity.
Kunkle, William Aaron; Madden, Michael; Potts, Shannon; Fogelson, Jeremy; Hershman, Stuart
2017-10-01
Smartphones have become an integral tool in the daily life of health-care professionals (Franko 2011). Their ease of use and wide availability often make smartphones the first tool surgeons use to perform measurements. This technique has been validated for certain orthopedic pathologies (Shaw 2012; Quek 2014; Milanese 2014; Milani 2014), but never for assessing sagittal parameters in adult spinal deformity (ASD). This study was designed to assess the validity, reproducibility, precision, and efficiency of using a smartphone protractor application to measure sagittal parameters commonly used in ASD assessment and surgical planning. This study aimed to (1) determine the validity of smartphone protractor applications, (2) determine the intra- and interobserver reliability of smartphone protractor applications when used to measure sagittal parameters in ASD, (3) determine the efficiency of using a smartphone protractor application to measure sagittal parameters, and (4) elucidate whether a physician's level of experience impacts the reliability or validity of using a smartphone protractor application to measure sagittal parameters in ASD. An experimental validation study was carried out. Thirty standard 36″ standing lateral radiographs were examined. Three separate measurements were performed using a marker and protractor; then, at a separate time point, three separate measurements were performed using a smartphone protractor application for all 30 radiographs. The first 10 radiographs were then re-measured two more times, for a total of three measurements with both the smartphone protractor and the marker and protractor. The parameters included lumbar lordosis, pelvic incidence, and pelvic tilt. Three raters performed all measurements: a junior-level orthopedic resident, a senior-level orthopedic resident, and a fellowship-trained spinal deformity surgeon. All data, including the time to perform the measurements, were recorded, and statistical analysis was performed to determine intra- and interobserver reliability, as well as accuracy, efficiency, and precision. The intra- and interclass correlation coefficients were calculated using R (version 3.3.2, 2016) to determine the degree of intra- and interobserver reliability. High rates of intra- and interobserver reliability were observed between the junior resident, senior resident, and attending surgeon when using the smartphone protractor application, as demonstrated by inter- and intraclass correlation coefficients greater than 0.909 and 0.874, respectively. High rates of inter- and intraobserver reliability were also seen when a marker and protractor were used, as demonstrated by inter- and intraclass correlation coefficients greater than 0.909 and 0.807, respectively. The lumbar lordosis, pelvic incidence, and pelvic tilt values were accurately measured by all three raters, with excellent inter- and intraclass correlation coefficient values. When the first 10 radiographs were re-measured at different time points, a high degree of precision was noted. Measurements performed using the smartphone application were consistently faster than those using a marker and protractor; this difference reached statistical significance (p < .05). Adult spinal deformity radiographic parameters can be measured accurately, precisely, reliably, and more efficiently using a smartphone protractor application than with a standard protractor and wax pencil. A high degree of intra- and interobserver reliability was seen between the residents and attending surgeon, indicating measurements made with a smartphone protractor are unaffected by an observer's level of experience. As a result, smartphone protractors may be used when planning ASD surgery.
An adaptive approach to the dynamic allocation of buffer storage. M.S. Thesis
NASA Technical Reports Server (NTRS)
Crooke, S. C.
1970-01-01
Several strategies for the dynamic allocation of buffer storage are simulated and compared. The basic algorithms investigated, using actual statistics observed in the Univac 1108 EXEC 8 System, include the buddy method and the first-fit method. Modifications are made to the basic methods in an effort to improve and to measure allocation performance. A simulation model of an adaptive strategy is developed which permits interchanging the two methods, the buddy and first-fit methods, with some modifications. Using an adaptive strategy, each method may be employed in the statistical environment in which its performance is superior to the other method.
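For reference, the first-fit discipline scans a free list for the first hole large enough and coalesces adjacent holes on release; a minimal sketch, not the study's simulator:

```python
# A minimal first-fit free-list allocator; the study's simulator also
# modeled the buddy method and EXEC 8 request statistics.
class FirstFit:
    def __init__(self, size):
        self.free = [(0, size)]          # sorted (offset, length) holes

    def alloc(self, n):
        for i, (off, length) in enumerate(self.free):
            if length >= n:              # first hole big enough wins
                if length == n:
                    self.free.pop(i)
                else:
                    self.free[i] = (off + n, length - n)
                return off
        return None                      # allocation failure

    def release(self, off, n):
        self.free.append((off, n))
        self.free.sort()
        merged = [self.free[0]]          # coalesce adjacent holes
        for o, l in self.free[1:]:
            po, pl = merged[-1]
            if po + pl == o:
                merged[-1] = (po, pl + l)
            else:
                merged.append((o, l))
        self.free = merged

pool = FirstFit(1024)
a = pool.alloc(100); b = pool.alloc(200)
pool.release(a, 100)
print(pool.alloc(50), pool.free)   # reuses the freed hole at offset 0
```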
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show, using statistically rigorous arguments, that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that maximum-likelihood estimation with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show these conclusions do not change in the presence of technical noise.
Performance evaluation of spectral vegetation indices using a statistical sensitivity function
Ji, Lei; Peters, Albert J.
2007-01-01
A great number of spectral vegetation indices (VIs) have been developed to estimate biophysical parameters of vegetation. Traditional techniques for evaluating the performance of VIs are regression-based statistics, such as the coefficient of determination and root mean square error. These statistics, however, are not capable of quantifying the detailed relationship between VIs and biophysical parameters because the sensitivity of a VI is usually a function of the biophysical parameter instead of a constant. To better quantify this relationship, we developed a “sensitivity function” for measuring the sensitivity of a VI to biophysical parameters. The sensitivity function is defined as the first derivative of the regression function, divided by the standard error of the dependent variable prediction. The function elucidates the change in sensitivity over the range of the biophysical parameter. The Student's t- or z-statistic can be used to test the significance of VI sensitivity. Additionally, we developed a “relative sensitivity function” that compares the sensitivities of two VIs when the biophysical parameters are unavailable.
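Numerically, the sensitivity function amounts to dividing the fitted curve's derivative by the pointwise prediction standard error; a sketch using a quadratic fit to synthetic VI-versus-LAI data (all values invented):

```python
import numpy as np

rng = np.random.default_rng(8)
lai = rng.uniform(0.2, 6.0, 80)                   # biophysical parameter
vi = 0.9 * (1 - np.exp(-0.6 * lai)) + rng.normal(0, 0.03, lai.size)

# Quadratic regression y = b0 + b1*x + b2*x^2 via least squares.
X = np.column_stack([np.ones_like(lai), lai, lai ** 2])
beta, res, *_ = np.linalg.lstsq(X, vi, rcond=None)
s2 = res[0] / (len(vi) - X.shape[1])              # residual variance
XtX_inv = np.linalg.inv(X.T @ X)

# Sensitivity on a grid: derivative of the fit / prediction standard error.
grid = np.linspace(0.5, 5.5, 6)
G = np.column_stack([np.ones_like(grid), grid, grid ** 2])
se_pred = np.sqrt(s2 * np.einsum("ij,jk,ik->i", G, XtX_inv, G))
derivative = beta[1] + 2 * beta[2] * grid
sensitivity = derivative / se_pred                # t-like statistic
print(np.round(sensitivity, 1))                   # drops as the VI saturates
```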
A Statistical Project Control Tool for Engineering Managers
NASA Technical Reports Server (NTRS)
Bauch, Garland T.
2001-01-01
This slide presentation reviews the use of a Statistical Project Control Tool (SPCT) for managing engineering projects. A literature review pointed to a definition of project success (i.e., a project is successful when the cost, schedule, technical performance, and quality satisfy the customer), as well as to project success factors, traditional project control tools, and performance measures, which are detailed in the report. The essential problem is that, with resources becoming more limited and the number of projects increasing, project failures are increasing; existing methods are limited, and systematic methods are required. The objective of the work is to provide a new statistical project control tool for project managers. Graphs produced with the SPCT method, plotting the results of 3 successful projects and 3 failed projects, are reviewed, with success and failure defined by the owner.
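The charting side of statistical project control can be illustrated with 3-sigma control limits derived from baseline projects; a sketch with invented cost-variance data (the SPCT's actual charting rules are not described in the abstract):

```python
import numpy as np

def control_limits(baseline):
    """3-sigma control limits from baseline project metrics, a minimal
    statistical-process-control sketch."""
    mu, sigma = np.mean(baseline), np.std(baseline, ddof=1)
    return mu - 3 * sigma, mu + 3 * sigma

# Hypothetical weekly cost-variance percentages from past successful projects.
baseline = np.array([1.2, -0.8, 0.5, 2.1, -1.5, 0.9, 0.3, -0.2])
lo, hi = control_limits(baseline)

for week, cv in enumerate([0.7, 1.9, 6.4], start=1):
    flag = "out of control" if not lo <= cv <= hi else "ok"
    print(f"week {week}: cost variance {cv:+.1f}% -> {flag}")
```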
Statistical EMC: A new dimension electromagnetic compatibility of digital electronic systems
NASA Astrophysics Data System (ADS)
Tsaliovich, Anatoly
Electromagnetic compatibility compliance test results are used as a database for addressing three classes of electromagnetic compatibility (EMC) related problems: statistical EMC profiles of digital electronic systems, the effect of equipment-under-test (EUT) parameters on electromagnetic emission characteristics, and EMC measurement specifics. Open area test site (OATS) and absorber-lined shielded room (AR) results are compared for the highest radiated emissions of the equipment under test. The suggested statistical evaluation methodology can be utilized to correlate the results of different EMC test techniques, characterize the EMC performance of electronic systems and components, and develop recommendations for optimal EMC design of electronic products.
Wade, Joshua; Weitlauf, Amy; Broderick, Neill; Swanson, Amy; Zhang, Lian; Bian, Dayi; Sarkar, Medha; Warren, Zachary; Sarkar, Nilanjan
2017-11-01
Individuals with Autism Spectrum Disorder (ASD), compared to typically developing peers, may demonstrate behaviors that are counter to safe driving. The current work examines the use of a novel simulator in two separate studies. Study 1 demonstrates statistically significant performance differences between individuals with (N = 7) and without ASD (N = 7) with regard to the number of turning-related driving errors (p < 0.01). Study 2 shows that both the performance-based feedback group (N = 9) and the combined performance- and gaze-sensitive feedback group (N = 8) achieved statistically significant reductions in driving errors following training (p < 0.05). These studies are the first to present results of fine-grained measures of visual attention of drivers and an adaptive driving intervention for individuals with ASD.
ERIC Educational Resources Information Center
Rojas, Mariano
2011-01-01
In 2009 the Stiglitz Commission presented its report on the Measurement of Progress in Societies. The report was commissioned by President Sarkozy of France in 2008. Among its members, the Commission had five Nobel laureates. The report emphasizes three areas which require further attention by statistical offices and policy makers: A better…
Onboard Acoustic Data-Processing for the Statistical Analysis of Array Beam-Noise,
1980-12-15
performance of the sonar system as a measurement tool and others that can assess the character of the ambient-noise field at the time of the measurement. In...the plot as would "dead" hydrophones. A reduction in sensitivity of a hydrophone, a faulty preamplifier, or any other fault in the acoustic channel
Statistical analysis of target acquisition sensor modeling experiments
NASA Astrophysics Data System (ADS)
Deaver, Dawne M.; Moyer, Steve
2015-05-01
The U.S. Army RDECOM CERDEC NVESD Modeling and Simulation Division is charged with the development and advancement of military target acquisition models to estimate expected soldier performance when using all types of imaging sensors. Two elements of sensor modeling are (1) laboratory-based psychophysical experiments used to measure task performance and calibrate the various models and (2) field-based experiments used to verify the model estimates for specific sensors. In both types of experiments, it is common practice to control or measure environmental, sensor, and target physical parameters in order to minimize uncertainty of the physics based modeling. Predicting the minimum number of test subjects required to calibrate or validate the model should be, but is not always, done during test planning. The objective of this analysis is to develop guidelines for test planners which recommend the number and types of test samples required to yield a statistically significant result.
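Predicting the minimum number of test subjects, as described above, is typically a statistical power calculation. A minimal sketch follows, assuming a two-sample t-test framing; the effect size and error rates are placeholder values, not figures from the NVESD experiments.

```python
import math
from statsmodels.stats.power import TTestIndPower

# Subjects needed per group to detect an assumed effect between two
# sensor conditions. All inputs below are illustrative assumptions.
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,   # assumed Cohen's d between conditions
    alpha=0.05,        # Type I error rate
    power=0.8,         # probability of detecting the assumed effect
)
print("subjects per group:", math.ceil(n_per_group))  # 64 for these inputs
```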
Version pressure feedback mechanisms for speculative versioning caches
Eichenberger, Alexandre E.; Gara, Alan; O'Brien, Kathryn M.; Ohmacht, Martin; Zhuang, Xiaotong
2013-03-12
Mechanisms are provided for controlling version pressure on a speculative versioning cache. Raw version pressure data is collected based on one or more threads accessing cache lines of the speculative versioning cache. One or more statistical measures of version pressure are generated based on the collected raw version pressure data. A determination is made as to whether one or more modifications to an operation of a data processing system are to be performed based on the one or more statistical measures of version pressure, the one or more modifications affecting version pressure exerted on the speculative versioning cache. An operation of the data processing system is modified based on the one or more determined modifications, in response to a determination that one or more modifications to the operation of the data processing system are to be performed, to affect the version pressure exerted on the speculative versioning cache.
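As a rough illustration only (the abstract describes patented hardware mechanisms; this sketch merely mimics the decision flow in software, with invented names and thresholds), the statistics-then-decide step might look like:

```python
from statistics import mean, pstdev

def should_throttle(pressure_samples, mean_limit=0.75, spike_limit=0.95):
    """pressure_samples: fraction of versioned cache lines per epoch (hypothetical)."""
    avg = mean(pressure_samples)       # statistical measures of version pressure
    spread = pstdev(pressure_samples)
    peak = max(pressure_samples)
    # Decide whether to modify system operation (e.g., reduce speculation)
    # when average or peak pressure runs too high.
    return avg > mean_limit or peak > spike_limit or (avg + 2 * spread) > 1.0

print(should_throttle([0.6, 0.7, 0.9, 0.97]))  # True -> throttle speculation
```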
A LES-based Eulerian-Lagrangian approach to predict the dynamics of bubble plumes
NASA Astrophysics Data System (ADS)
Fraga, Bruño; Stoesser, Thorsten; Lai, Chris C. K.; Socolofsky, Scott A.
2016-01-01
An approach for Eulerian-Lagrangian large-eddy simulation of bubble plume dynamics is presented and its performance evaluated. The main numerical novelties consist in defining the gas-liquid coupling based on the bubble size to mesh resolution ratio (Dp/Δx) and the interpolation between Eulerian and Lagrangian frameworks through the use of delta functions. The model's performance is thoroughly validated for a bubble plume in a cubic tank in initially quiescent water using experimental data obtained from high-resolution ADV and PIV measurements. The predicted time-averaged velocities and second-order statistics show good agreement with the measurements, including the reproduction of the anisotropic nature of the plume's turbulence. Further, the predicted Eulerian and Lagrangian velocity fields, second-order turbulence statistics and interfacial gas-liquid forces are quantified and discussed as well as the visualization of the time-averaged primary and secondary flow structure in the tank.
NASA Astrophysics Data System (ADS)
Li, Qin; Berman, Benjamin P.; Schumacher, Justin; Liang, Yongguang; Gavrielides, Marios A.; Yang, Hao; Zhao, Binsheng; Petrick, Nicholas
2017-03-01
Tumor volume measured from computed tomography images is considered a biomarker for disease progression or treatment response. The estimation of the tumor volume depends on the imaging system parameters selected, as well as lesion characteristics. In this study, we examined how different image reconstruction methods affect the measurement of lesions in an anthropomorphic liver phantom with a non-uniform background. Iterative statistics-based and model-based reconstructions, as well as filtered back-projection, were evaluated and compared. Statistics-based reconstruction and filtered back-projection yielded similar estimation performance, while model-based reconstruction yielded higher precision but lower accuracy for small lesions. Iterative reconstructions exhibited a higher signal-to-noise ratio but slightly lower contrast of the lesion relative to the background. A better understanding of lesion volumetry performance as a function of acquisition parameters and lesion characteristics can lead to its incorporation as a routine sizing tool.
Assessing Continuous Operator Workload With a Hybrid Scaffolded Neuroergonomic Modeling Approach.
Borghetti, Brett J; Giametta, Joseph J; Rusnock, Christina F
2017-02-01
We aimed to predict operator workload from neurological data using statistical learning methods to fit neurological-to-state-assessment models. Adaptive systems require real-time mental workload assessment to perform dynamic task allocations or operator augmentation as workload issues arise. Neuroergonomic measures have great potential for informing adaptive systems, and we combine these measures with models of task demand as well as information about critical events and performance to clarify the inherent ambiguity of interpretation. We use machine learning algorithms on electroencephalogram (EEG) input to infer operator workload based upon Improved Performance Research Integration Tool workload model estimates. Cross-participant models predict workload of other participants, statistically distinguishing between 62% of the workload changes. Machine learning models trained from Monte Carlo resampled workload profiles can be used in place of deterministic workload profiles for cross-participant modeling without incurring a significant decrease in machine learning model performance, suggesting that stochastic models can be used when limited training data are available. We employed a novel temporary scaffold of simulation-generated workload profile truth data during the model-fitting process. A continuous workload profile serves as the target to train our statistical machine learning models. Once trained, the workload profile scaffolding is removed and the trained model is used directly on neurophysiological data in future operator state assessments. These modeling techniques demonstrate how to use neuroergonomic methods to develop operator state assessments, which can be employed in adaptive systems.
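A minimal sketch of the cross-participant modeling idea follows: fit a regressor on EEG-derived features against a simulation-generated workload profile (the temporary scaffold), then apply it to a held-out participant. The shapes, feature counts, and choice of random forest are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 16))   # EEG band-power features, participants A-C
y_train = rng.uniform(0, 100, 500)     # simulation-derived workload profile (scaffold)
X_test = rng.normal(size=(100, 16))    # held-out participant D

# Train against the scaffold; at assessment time the scaffold is removed
# and the model runs directly on neurophysiological features.
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
workload_estimate = model.predict(X_test)
print(workload_estimate[:5])
```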
Gupta, Munish; Kaplan, Heather C
2017-09-01
Quality improvement (QI) is based on measuring performance over time, and variation in data measured over time must be understood to guide change and make optimal improvements. Common cause variation is natural variation owing to factors inherent to any process; special cause variation is unnatural variation owing to external factors. Statistical process control methods, and particularly control charts, are robust tools for understanding data over time and identifying common and special cause variation. This review provides a practical introduction to the use of control charts in health care QI, with a focus on neonatology. Copyright © 2017 Elsevier Inc. All rights reserved.
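As a concrete illustration of the control chart idea, here is a minimal individuals (XmR) chart sketch on invented monthly data: the 3-sigma limits bound common cause variation, and points outside them flag candidate special cause variation.

```python
import numpy as np

rates = np.array([4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 6.3, 4.1])  # e.g., monthly rates

center = rates.mean()
moving_range = np.abs(np.diff(rates)).mean()
sigma = moving_range / 1.128          # d2 constant for moving ranges of n = 2
ucl, lcl = center + 3 * sigma, center - 3 * sigma

for i, r in enumerate(rates, start=1):
    flag = "special cause?" if (r > ucl or r < lcl) else ""
    print(f"month {i}: {r:.1f} {flag}")
```

Run on the synthetic series above, month 7 falls outside the upper control limit and would prompt investigation of an external cause rather than a process redesign.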
Time-resolved measurements of statistics for a Nd:YAG laser.
Hubschmid, W; Bombach, R; Gerber, T
1994-08-20
Time-resolved measurements of the fluctuating intensity of a multimode frequency-doubled Nd:YAG laser have been performed. For various operating conditions, the enhancement factors in nonlinear optical processes that use a fluctuating instead of a single-mode laser have been determined up to the sixth order. In the case of reduced flash-lamp excitation and a switched-off laser amplifier, the intensity fluctuations agree with the normalized Gaussian model for the fluctuations of the fundamental frequency, whereas strong deviations are found under usual operating conditions. In the latter case, the frequency-doubled light has enhancement factors not far from the values expected for Gaussian statistics.
Haranas, Ioannis; Gkigkitzis, Ioannis; Kotsireas, Ilias; Austerlitz, Carlos
2017-01-01
Understanding how the brain encodes information and performs computation requires statistical and functional analysis. Given the complexity of the human brain, simple methods that facilitate the interpretation of statistical correlations among different brain regions can be very useful. In this report we introduce a numerical correlation measure that may aid the interpretation of correlational neuronal data and assist in the evaluation of different brain states. The description of the dynamical brain system through a global numerical measure may indicate the presence of an action principle, which could facilitate an application of physics principles to the study of the human brain and cognition.
NASA Astrophysics Data System (ADS)
Glassman, Lisa Hayley
Individuals with public speaking phobia experience fear and avoidance that can cause extreme distress, impaired speaking performance, and associated problems in psychosocial functioning. Most extant interventions for public speaking phobia focus on the reduction of anxiety and avoidance but neglect performance. Additionally, very little is known about the relationship between verbal working memory and social performance under conditions of high anxiety. The current study compared the efficacy of two cognitive behavioral treatments, traditional cognitive behavioral therapy (tCBT) and acceptance-based behavior therapy (ABBT), in enhancing public speaking performance via coping with anxiety. Verbal working memory performance, as measured by the backwards digit span (BDS), was assessed to explore the relationships between treatment type, anxiety, performance, and verbal working memory. We randomized 30 individuals with high public speaking anxiety to a 90-minute ABBT or tCBT intervention. As this pilot study was underpowered, results are examined in terms of effect sizes as well as statistical significance. Assessments took place at pre- and post-intervention and included self-rated and objective anxiety measurements, a behavioral assessment, ABBT and tCBT process measures, and BDS verbal working memory tests. In order to examine verbal working memory during different levels of anxiety and performance pressure, we gave each participant a BDS task three times during each assessment: once under calm conditions, again while experiencing anticipatory anxiety, and finally under conditions of acute social performance anxiety in front of an audience. Participants were asked to give a video-recorded speech in front of the audience at pre- and post-intervention to examine speech performance. Results indicated that all participants experienced a very large and statistically significant decrease in anxiety (both during the speech and the BDS), as well as an improvement in speech performance, regardless of intervention received. While not statistically significant, participants who received the acceptance-based intervention exhibited a considerably larger improvement in observer-rated speech performance at post-treatment than those who received tCBT (F(1,21) = 1.91, p = .18, partial eta-squared = .08). There was no differential impact of treatment condition on subjective speech anxiety or working memory task performance. Potential mediators and moderators of treatment were also examined. Results provide support for a brief 90-minute intervention for public speaking anxiety, but more research with a larger sample is needed to fully understand the relationship between ABBT strategies and improvements in behavioral performance.
Relevance of the c-statistic when evaluating risk-adjustment models in surgery.
Merkow, Ryan P; Hall, Bruce L; Cohen, Mark E; Dimick, Justin B; Wang, Edward; Chow, Warren B; Ko, Clifford Y; Bilimoria, Karl Y
2012-05-01
The measurement of hospital quality based on outcomes requires risk adjustment. The c-statistic is a popular tool used to judge model performance, but it can be limited, particularly when evaluating specific operations in focused populations. Our objectives were to examine the interpretation and relevance of the c-statistic when used in models with increasingly similar case mix and to consider an alternative perspective on model calibration based on a graphical depiction of model fit. From the American College of Surgeons National Surgical Quality Improvement Program (2008-2009), patients were identified who underwent a general surgery procedure, and procedure groups were increasingly restricted: colorectal-all, colorectal-elective cases only, and colorectal-elective cancer cases only. Mortality and serious morbidity outcomes were evaluated using logistic regression-based risk adjustment, and model c-statistics and calibration curves were used to compare model performance. During the study period, 323,427 general, 47,605 colorectal-all, 39,860 colorectal-elective, and 21,680 colorectal cancer patients were studied. Mortality ranged from 1.0% in general surgery to 4.1% in the colorectal-all group, and serious morbidity ranged from 3.9% in general surgery to 12.4% in the colorectal-all procedural group. As case mix was restricted, c-statistics progressively declined from the general to the colorectal cancer surgery cohorts for both mortality and serious morbidity (mortality: 0.949 to 0.866; serious morbidity: 0.861 to 0.668). Calibration was evaluated graphically by examining predicted vs observed number of events over risk deciles. For both mortality and serious morbidity, there was no qualitative difference in calibration identified between the procedure groups. In the present study, we demonstrate how the c-statistic can become less informative and, in certain circumstances, can lead to incorrect model-based conclusions as case mix is restricted and patients become more homogeneous. Although it remains an important tool, caution is advised when the c-statistic is advanced as the sole measure of model performance. Copyright © 2012 American College of Surgeons. All rights reserved.
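Both perspectives discussed in this study, discrimination via the c-statistic and calibration via predicted-versus-observed events over risk deciles, are easy to sketch on simulated data. The code below does not use NSQIP data or the authors' models; the risk distribution is an assumption.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
risk = rng.beta(1, 20, 5000)        # simulated predicted mortality risks
outcome = rng.binomial(1, risk)     # simulated observed outcomes

# Discrimination: the c-statistic is the area under the ROC curve
print("c-statistic:", round(roc_auc_score(outcome, risk), 3))

# Calibration: compare predicted vs observed event counts per risk decile
deciles = np.quantile(risk, np.linspace(0, 1, 11))
bins = np.digitize(risk, deciles[1:-1])
for d in range(10):
    mask = bins == d
    print(f"decile {d+1}: predicted={risk[mask].sum():.1f} "
          f"observed={outcome[mask].sum()}")
```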
Advani, Aneel; Jones, Neil; Shahar, Yuval; Goldstein, Mary K; Musen, Mark A
2004-01-01
We develop a method and algorithm for deciding the optimal approach to creating quality-auditing protocols for guideline-based clinical performance measures. An important element of the audit protocol design problem is deciding which guideline elements to audit. Specifically, the problem is how and when to aggregate individual patient case-specific guideline elements into population-based quality measures. The key statistical issue involved is the trade-off between increased reliability with more general population-based quality measures versus increased validity from individually case-adjusted but more restricted measures done at a greater audit cost. Our intelligent algorithm for auditing protocol design is based on hierarchically modeling incrementally case-adjusted quality constraints. We select quality constraints to measure using an optimization criterion based on statistical generalizability coefficients. We present results of the approach from a deployed decision support system for a hypertension guideline.
Havens, Timothy C; Roggemann, Michael C; Schulz, Timothy J; Brown, Wade W; Beyer, Jeff T; Otten, L John
2002-05-20
We discuss a method of data reduction and analysis that has been developed for a novel experiment to detect anisotropic turbulence in the tropopause and to measure the spatial statistics of these flows. The experimental concept is to make measurements of temperature at 15 points on a hexagonal grid for altitudes from 12,000 to 18,000 m while suspended from a balloon performing a controlled descent. From the temperature data, we estimate the index of refraction and study the spatial statistics of the turbulence-induced index of refraction fluctuations. We present and evaluate the performance of a processing approach to estimate the parameters of an anisotropic model for the spatial power spectrum of the turbulence-induced index of refraction fluctuations. A Gaussian correlation model and a least-squares optimization routine are used to estimate the parameters of the model from the measurements. In addition, we implemented a quick-look algorithm to have a computationally nonintensive way of viewing the autocorrelation function of the index fluctuations. The autocorrelation of the index of refraction fluctuations is binned and interpolated onto a uniform grid from the sparse points that exist in our experiment. This allows the autocorrelation to be viewed with a three-dimensional plot to determine whether anisotropy exists in a specific data slab. Simulation results presented here show that, in the presence of the anticipated levels of measurement noise, the least-squares estimation technique allows turbulence parameters to be estimated with low rms error.
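The least-squares estimation step can be sketched as follows, fitting an anisotropic Gaussian correlation model to noisy correlation samples. The parameterization (separate horizontal and vertical correlation lengths) and all values are illustrative assumptions, not the experiment's actual model constants.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_corr(r, sigma2, lx, lz):
    dx, dz = r  # horizontal and vertical separations between sensor pairs
    return sigma2 * np.exp(-(dx / lx) ** 2 - (dz / lz) ** 2)

rng = np.random.default_rng(3)
dx = rng.uniform(0, 5, 60)
dz = rng.uniform(0, 5, 60)
true = gaussian_corr((dx, dz), 1.0, 3.0, 0.8)   # anisotropic: lx >> lz
measured = true + rng.normal(0, 0.02, 60)       # anticipated measurement noise

# Least-squares estimation of variance and correlation lengths
params, _ = curve_fit(gaussian_corr, (dx, dz), measured, p0=(0.5, 1.0, 1.0))
print("sigma^2, Lx, Lz =", np.round(params, 2))
```

Recovering Lx noticeably larger than Lz from such a fit is the signature of anisotropy that the experiment was designed to detect.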
Dėdelė, Audrius; Miškinytė, Auksė
2015-09-01
In many countries, road traffic is one of the main sources of air pollution associated with adverse effects on human health and the environment. Nitrogen dioxide (NO2) is considered a measure of traffic-related air pollution, with concentrations tending to be higher near highways, along busy roads, and in city centers; exceedances are mainly observed at measurement stations located close to traffic. Air quality models are used to assess the air quality in a city and the impact of air pollution on public health. However, before a model can be used for these purposes, it is important to evaluate the accuracy of dispersion modelling, one of the most widely used methods. Monitoring and dispersion modelling are two components of an air quality monitoring system (AQMS), and a statistical comparison between the two was made in this research. The Atmospheric Dispersion Modelling System (ADMS-Urban) was evaluated by comparing monthly modelled NO2 concentrations with data from continuous air quality monitoring stations in Kaunas city. Statistical measures of model performance were calculated for annual and monthly concentrations of NO2 for each monitoring station site. The spatial analysis was made using geographic information systems (GIS). The calculated statistical parameters indicated good ADMS-Urban model performance for the prediction of NO2. The results of this study showed that the agreement between modelled values and observations was better for traffic monitoring stations than for the background and residential stations.
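Statistical measures of model performance commonly used for this kind of monitored-versus-modelled comparison include fractional bias, normalized mean square error, the fraction of predictions within a factor of two, and the correlation coefficient. The sketch below computes these on synthetic numbers; the exact metric set used in this study is an assumption.

```python
import numpy as np

obs = np.array([28.0, 35.5, 22.1, 40.3, 31.8])   # monitored NO2, ug/m3 (synthetic)
mod = np.array([25.2, 38.0, 20.5, 36.9, 30.1])   # modelled NO2, ug/m3 (synthetic)

fb = 2 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())  # fractional bias
nmse = np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())    # norm. mean sq. error
fac2 = np.mean((mod / obs > 0.5) & (mod / obs < 2.0))           # factor-of-two fraction
r = np.corrcoef(obs, mod)[0, 1]                                 # correlation

print(f"FB={fb:.3f} NMSE={nmse:.3f} FAC2={fac2:.2f} r={r:.2f}")
```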
Meier, Frederick A; Souers, Rhona J; Howanitz, Peter J; Tworek, Joseph A; Perrotta, Peter L; Nakhleh, Raouf E; Karcher, Donald S; Bashleben, Christine; Darcy, Teresa P; Schifman, Ron B; Jones, Bruce A
2015-06-01
Many production systems employ standardized statistical monitors that measure defect rates and cycle times as indices of performance quality. Clinical laboratory testing, a system that produces test results, is amenable to such monitoring. To demonstrate patterns in clinical laboratory testing defect rates and cycle times, 7 College of American Pathologists Q-Tracks program monitors were used. Subscribers measured monthly rates of outpatient order-entry errors, identification band defects, and specimen rejections; median troponin order-to-report cycle times and rates of STAT test receipt-to-report turnaround time outliers; and critical values reporting event defects and corrected reports. From these submissions, Q-Tracks program staff produced quarterly and annual reports. These charted each subscriber's performance relative to other participating laboratories, as well as aggregate and subgroup performance over time, dividing participants into best performers, median performers, and performers with the most room to improve. Each monitor's pattern of change presents percentile distributions of subscribers' performance in relation to monitoring duration and the number of participating subscribers. Changes over time in defect frequencies and cycle duration quantify the effect of monitor participation on performance. All monitors showed significant decreases in defect rates as the 7 monitors ran variously for 6, 6, 7, 11, 12, 13, and 13 years. The most striking decreases occurred among performers who initially had the most room to improve and among subscribers who participated the longest. All 7 monitors registered significant improvement, with participation effects ranging from 0.85% to 5.1% improvement per quarter of participation. Using statistical quality measures, collecting data monthly, and receiving reports quarterly and yearly, subscribers to this comparative monitoring program documented significant decreases in defect rates and shortening of a cycle time over 6 to 13 years in all 7 ongoing clinical laboratory quality monitors.
Silverman, Michael J
2007-01-01
Educational and therapeutic objectives are often paired with music to facilitate the recall of information. The purpose of this study was to isolate and determine the effect of paired pitch, rhythm, and speech on undergraduates' memory as measured by sequential digit recall performance. Participants (N = 120) listened to 4 completely counterbalanced treatment conditions, each consisting of 9 randomized monosyllabic digits paired with speech, pitch, rhythm, and the combination of pitch and rhythm. No statistically significant learning or order effects were found across the 4 trials. A 3-way repeated-measures ANOVA indicated a statistically significant difference in digit recall performance across treatment conditions, positions, groups, and treatment by position. No other comparisons resulted in statistically significant differences. Participants recalled digits from the rhythm condition most accurately and digits from the speech and pitch-only conditions least accurately. Consistent with previous research, the music major participants scored significantly higher than non-music major participants, and the main effect associated with serial position indicated that recall performance was best at primacy and recency positions. Analyses indicated an interaction between serial position and treatment condition, also a result consistent with previous research. The results of this study suggest that pairing information with rhythm can facilitate recall, but pairing information with pitch or the combination of pitch and rhythm may not enhance recall more than speech when participants listen to an unfamiliar musical selection only once. Implications for practice in therapy and education are made, as well as suggestions for future research.
Chung, Chi-Jung; Kuo, Yu-Chen; Hsieh, Yun-Yu; Li, Tsai-Chung; Lin, Cheng-Chieh; Liang, Wen-Miin; Liao, Li-Na; Li, Chia-Ing; Lin, Hsueh-Chun
2017-11-01
This study applied open source technology to establish a subject-enabled analytics model that can enhance measurement statistics of case studies with public health data in cloud computing. The infrastructure of the proposed model comprises three domains: 1) the health measurement data warehouse (HMDW) for the case study repository, 2) the self-developed modules of online health risk information statistics (HRIStat) for cloud computing, and 3) the prototype of a Web-based process automation system in statistics (PASIS) for the health risk assessment of case studies with subject-enabled evaluation. The system design employed freeware including Java applications, MySQL, and R packages to drive a health risk expert system (HRES). In the design, the HRIStat modules enforce the typical analytics methods for biomedical statistics, and the PASIS interfaces enable process automation of the HRES for cloud computing. The Web-based model supports both modes, step-by-step analysis and an auto-computing process, for preliminary evaluation and real-time computation, respectively. The proposed model was evaluated by recomputing prior studies relating to the epidemiological measurement of diseases caused by either heavy metal exposures in the environment or clinical complications in hospital. The validity of the simulations was verified with commercial statistics software. The model was installed on a stand-alone computer and on a cloud-server workstation to verify computing performance for a data amount of more than 230K sets. Both setups reached an efficiency of about 10^5 sets per second. The Web-based PASIS interface can be used for cloud computing, and the HRIStat module can be flexibly expanded with advanced subjects for measurement statistics. The analytics procedure of the HRES prototype is capable of providing assessment criteria prior to estimating the potential risk to public health. Copyright © 2017 Elsevier B.V. All rights reserved.
Finch, S J; Chen, C H; Gordon, D; Mendell, N R
2001-12-01
This study compared the performance of the maximum lod (MLOD), maximum heterogeneity lod (MHLOD), maximum non-parametric linkage score (MNPL), maximum Kong and Cox linear extension (MKC(lin)) of NPL, and maximum Kong and Cox exponential extension (MKC(exp)) of NPL as calculated in Genehunter 1.2 and Genehunter-Plus. Our performance measure was the distance between the marker with maximum value for each linkage statistic and the trait locus. We performed a simulation study considering: 1) four modes of transmission, 2) 100 replicates for each model, 3) 58 pedigrees (with 592 subjects) per replicate, 4) three linked marker loci each having three equally frequent alleles, and 5) either 0% unlinked families (linkage homogeneity) or 50% unlinked families (linkage heterogeneity). For each replicate, we obtained the Haldane map position of the location at which each of the five statistics is maximized. The MLOD and MHLOD were obtained by maximizing over penetrances, phenocopy rate, and risk-allele frequencies. For the models simulated, MHLOD appeared to be the best statistic both in terms of identifying a marker locus having the smallest mean distance from the trait locus and in terms of the strongest negative correlation between maximum linkage statistic and distance of the identified position and the trait locus. The marker loci with maximum value of the Kong and Cox extensions of the NPL statistic also were closer to the trait locus than the marker locus with maximum value of the NPL statistic. Copyright 2001 Wiley-Liss, Inc.
Britto, Ingrid Schwach Werneck; Sananes, Nicolas; Olutoye, Oluyinka O; Cass, Darrell L; Sangi-Haghpeykar, Haleh; Lee, Timothy C; Cassady, Christopher I; Mehollin-Ray, Amy; Welty, Stephen; Fernandes, Caraciolo; Belfort, Michael A; Lee, Wesley; Ruano, Rodrigo
2015-10-01
The purpose of this study was to evaluate the impact of standardization of the lung-to-head ratio measurements in isolated congenital diaphragmatic hernia on prediction of neonatal outcomes and reproducibility. We conducted a retrospective cohort study of 77 cases of isolated congenital diaphragmatic hernia managed in a single center between 2004 and 2012. We compared lung-to-head ratio measurements that were performed prospectively in our institution without standardization to standardized measurements performed according to a defined protocol. The standardized lung-to-head ratio measurements were statistically more accurate than the nonstandardized measurements for predicting neonatal mortality (area under the receiver operating characteristic curve, 0.85 versus 0.732; P = .003). After standardization, there were no statistical differences in accuracy between measurements regardless of whether we considered observed-to-expected values (P > .05). Standardization of the lung-to-head ratio did not improve prediction of the need for extracorporeal membrane oxygenation (P > .05). Both intraoperator and interoperator reproducibility were good for the standardized lung-to-head ratio (intraclass correlation coefficient, 0.98 [95% confidence interval, 0.97-0.99]; bias, 0.02 [limits of agreement, -0.11 to +0.15], respectively). Standardization of lung-to-head ratio measurements improves prediction of neonatal outcomes. Further studies are needed to confirm these results and to assess the utility of standardization of other prognostic parameters.
NASA Technical Reports Server (NTRS)
Carr, James L.; Madani, Houria
2007-01-01
Geostationary Operational Environmental Satellite (GOES) Image Navigation and Registration (INR) performance is specified at the 3-sigma level, meaning that 99.7% of a collection of individual measurements must comply with specification thresholds. Landmarks are measured by the Replacement Product Monitor (RPM), part of the operational GOES ground system, to assess INR performance and to close the INR loop. The RPM automatically discriminates between valid and invalid measurements, enabling it to run without human supervision. In general, this screening is reliable, but a small population of invalid measurements will be falsely identified as valid. Even a small population of invalid measurements can create problems when assessing performance at the 3-sigma level. This paper describes an additional layer of quality control whereby landmarks of the highest quality ("platinum") are identified by their self-consistency. The platinum screening criteria are not simple statistical outlier tests against sigma values in populations of INR errors. In-orbit INR performance metrics for GOES-12 and GOES-13 are presented using the platinum landmark methodology.
Intracalibration of particle detectors on a three-axis stabilized geostationary platform
NASA Astrophysics Data System (ADS)
Rowland, W.; Weigel, R. S.
2012-11-01
We describe an algorithm for intracalibration of measurements from plasma or energetic particle detectors on a three-axis stabilized platform. Modeling and forecasting of Earth's radiation belt environment requires data from particle instruments, and these data depend on measurements which have an inherent calibration uncertainty. Pre-launch calibration is typically performed, but on-orbit changes in the instrument often necessitate adjustment of calibration parameters to mitigate the effect of these changes on the measurements. On-orbit calibration practices for particle detectors aboard spin-stabilized spacecraft are well established. Three-axis stabilized platforms, however, pose unique challenges even when comparisons are being performed between multiple telescopes measuring the same energy ranges aboard the same satellite. This algorithm identifies time intervals when different telescopes are measuring particles with the same pitch angles. These measurements are used to compute scale factors which can be multiplied by the pre-launch geometric factor to correct any changes. The approach is first tested using measurements from GOES-13 MAGED particle detectors over a 5-month time period in 2010. We find statistically significant variations which are generally on the order of 5% or less. These results do not appear to be dependent on Poisson statistics nor upon whether a dead time correction was performed. When applied to data from a 5-month interval in 2011, one telescope shows a 10% shift from the 2010 scale factors. This technique has potential for operational use to help maintain relative calibration between multiple telescopes aboard a single satellite. It should also be extensible to inter-calibration between multiple satellites.
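A toy version of the matching-and-ratio step might look like the sketch below, where two telescopes' fluxes are compared at matched pitch angles to estimate a relative scale factor. The matching tolerance, the synthetic data, and the injected 5% drift are invented for illustration.

```python
import numpy as np

def scale_factor(flux_a, pitch_a, flux_b, pitch_b, tol_deg=2.0):
    """Mean ratio of telescope B to telescope A flux at matched pitch angles."""
    ratios = []
    for fa, pa in zip(flux_a, pitch_a):
        match = np.abs(pitch_b - pa) < tol_deg   # same-pitch-angle intervals
        if match.any():
            ratios.append(flux_b[match].mean() / fa)
    return float(np.mean(ratios))

rng = np.random.default_rng(4)
pa = rng.uniform(30, 150, 300)   # pitch angles seen by telescope A (deg)
pb = rng.uniform(30, 150, 300)   # pitch angles seen by telescope B (deg)
fa = 1000 * np.sin(np.radians(pa)) + rng.normal(0, 10, 300)
fb = 0.95 * (1000 * np.sin(np.radians(pb)) + rng.normal(0, 10, 300))  # 5% drift

print("scale factor ~", round(scale_factor(fa, pa, fb, pb), 3))  # ~0.95
```

Multiplying telescope B's pre-launch geometric factor by the recovered ratio is the correction step the abstract describes.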
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method for testing differences in means across more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focused on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the Hodges-Lehmann estimator for the median, and the variance of Hodges-Lehmann or MADn for the default scale estimator, to produce two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA, and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
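The Hodges-Lehmann estimator substituted into S1 is simply the median of all pairwise Walsh averages, as in this minimal sketch (the sample is invented to show the estimator's resistance to a skew-inducing outlier):

```python
import numpy as np
from itertools import combinations_with_replacement

def hodges_lehmann(x):
    # One-sample Hodges-Lehmann: median of Walsh averages (a_i + a_j)/2, i <= j
    walsh = [(a + b) / 2 for a, b in combinations_with_replacement(x, 2)]
    return np.median(walsh)

sample = np.array([2.1, 2.4, 2.2, 9.8, 2.3, 2.5])   # skewed by one outlier
print("mean  :", sample.mean())                     # pulled toward the outlier
print("median:", np.median(sample))
print("HL    :", hodges_lehmann(sample))            # robust location estimate
```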
Referenceless perceptual fog density prediction model
NASA Astrophysics Data System (ADS)
Choi, Lark Kwon; You, Jaehee; Bovik, Alan C.
2014-02-01
We propose a perceptual fog density prediction model based on natural scene statistics (NSS) and "fog aware" statistical features, which can predict the visibility in a foggy scene from a single image without reference to a corresponding fogless image, without side geographical camera information, without training on human-rated judgments, and without dependency on salient objects such as lane markings or traffic signs. The proposed fog density predictor only makes use of measurable deviations from statistical regularities observed in natural foggy and fog-free images. A fog aware collection of statistical features is derived from a corpus of foggy and fog-free images by using a space domain NSS model and observed characteristics of foggy images such as low contrast, faint color, and shifted intensity. The proposed model not only predicts perceptual fog density for the entire image but also provides a local fog density index for each patch. The predicted fog density of the model correlates well with the measured visibility in a foggy scene as measured by judgments taken in a human subjective study on a large foggy image database. As one application, the proposed model accurately evaluates the performance of defog algorithms designed to enhance the visibility of foggy images.
Barber, Julie A; Thompson, Simon G
1998-01-01
Objective: To review critically the statistical methods used for health economic evaluations in randomised controlled trials where an estimate of cost is available for each patient in the study. Design: Survey of published randomised trials including an economic evaluation with cost values suitable for statistical analysis; 45 such trials published in 1995 were identified from Medline. Main outcome measures: The use of statistical methods for cost data was assessed in terms of the descriptive statistics reported, use of statistical inference, and whether the reported conclusions were justified. Results: Although all 45 trials reviewed apparently had cost data for each patient, only 9 (20%) reported adequate measures of variability for these data and only 25 (56%) gave results of statistical tests or a measure of precision for the comparison of costs between the randomised groups. Only 16 (36%) of the articles gave conclusions which were justified on the basis of results presented in the paper. No paper reported sample size calculations for costs. Conclusions: The analysis and interpretation of cost data from published trials reveal a lack of statistical awareness. Strong and potentially misleading conclusions about the relative costs of alternative therapies have often been reported in the absence of supporting statistical evidence. Improvements in the analysis and reporting of health economic assessments are urgently required. Health economic guidelines need to be revised to incorporate more detailed statistical advice. Key messages: Health economic evaluations required for important healthcare policy decisions are often carried out in randomised controlled trials. A review of such published economic evaluations assessed whether statistical methods for cost outcomes have been appropriately used and interpreted. Few publications presented adequate descriptive information for costs or performed appropriate statistical analyses. In at least two thirds of the papers, the main conclusions regarding costs were not justified. The analysis and reporting of health economic assessments within randomised controlled trials urgently need improving. PMID:9794854
NASA Technical Reports Server (NTRS)
Haas, Evan; DeLuccia, Frank
2016-01-01
In evaluating GOES-R Advanced Baseline Imager (ABI) image navigation quality, upsampled sub-images of ABI images are translated against downsampled Landsat 8 images of localized, high contrast earth scenes to determine the translations in the East-West and North-South directions that provide maximum correlation. The native Landsat resolution is much finer than that of ABI, and Landsat navigation accuracy is much better than ABI required navigation accuracy and expected performance. Therefore, Landsat images are considered to provide ground truth for comparison with ABI images, and the translations of ABI sub-images that produce maximum correlation with Landsat localized images are interpreted as ABI navigation errors. The measured local navigation errors from registration of numerous sub-images with the Landsat images are averaged to provide a statistically reliable measurement of the overall navigation error of the ABI image. The dispersion of the local navigation errors is also of great interest, since ABI navigation requirements are specified as bounds on the 99.73rd percentile of the magnitudes of per pixel navigation errors. However, the measurement uncertainty inherent in the use of image registration techniques tends to broaden the dispersion in measured local navigation errors, masking the true navigation performance of the ABI system. We have devised a novel and simple method for estimating the magnitude of the measurement uncertainty in registration error for any pair of images of the same earth scene. We use these measurement uncertainty estimates to filter out the higher quality measurements of local navigation error for inclusion in statistics. In so doing, we substantially reduce the dispersion in measured local navigation errors, thereby better approximating the true navigation performance of the ABI system.
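The registration step, translating one image chip against another and taking the shift of maximum correlation as the local navigation error, can be sketched with synthetic images. The sketch omits the actual system's upsampling/downsampling and its measurement-uncertainty filtering; the chip size and shift are invented.

```python
import numpy as np
from scipy.signal import correlate

rng = np.random.default_rng(5)
ref = rng.normal(size=(64, 64))                      # Landsat-derived reference chip
sub = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)   # ABI-style chip with known shift

# Cross-correlate and locate the peak; the offset from the zero-lag
# position is the measured local navigation error.
corr = correlate(sub, ref, mode="same")
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
print("measured shift:", dy - 32, dx - 32)           # ~ (3, -2)
```

Averaging many such per-chip shifts gives the overall navigation error, and the dispersion among them is what the measurement-uncertainty filtering described above aims to tighten.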
Reliability and Validity of the Turkish Version of the Job Performance Scale Instrument.
Harmanci Seren, Arzu Kader; Tuna, Rujnan; Eskin Bacaksiz, Feride
2018-02-01
Objective measurement of the job performance of nursing staff using valid and reliable instruments is important in the evaluation of healthcare quality. A current, valid, and reliable instrument that specifically measures the performance of nurses is required for this purpose. The aim of this study was to determine the validity and reliability of the Turkish version of the Job Performance Instrument. This study used a methodological design and a sample of 240 nurses working at different units in four hospitals in Istanbul, Turkey. A descriptive data form, the Job Performance Scale, and the Employee Performance Scale were used to collect data. Data were analyzed using IBM SPSS Statistics Version 21.0 and LISREL Version 8.51. On the basis of the data analysis, the instrument was revised. Some items were deleted, and subscales were combined. The Turkish version of the Job Performance Instrument was determined to be valid and reliable to measure the performance of nurses. The instrument is suitable for evaluating current nursing roles.
NASA Astrophysics Data System (ADS)
Boning, Duane S.; Chung, James E.
1998-11-01
Advanced process technology will require more detailed understanding and tighter control of variation in devices and interconnects. The purpose of statistical metrology is to provide methods to measure and characterize variation, to model systematic and random components of that variation, and to understand the impact of variation on both yield and performance of advanced circuits. Of particular concern are spatial or pattern-dependencies within individual chips; such systematic variation within the chip can have a much larger impact on performance than wafer-level random variation. Statistical metrology methods will play an important role in the creation of design rules for advanced technologies. For example, a key issue in multilayer interconnect is the uniformity of interlevel dielectric (ILD) thickness within the chip. For the case of ILD thickness, we describe phases of statistical metrology development and application to understanding and modeling thickness variation arising from chemical-mechanical polishing (CMP). These phases include screening experiments including design of test structures and test masks to gather electrical or optical data, techniques for statistical decomposition and analysis of the data, and approaches to calibrating empirical and physical variation models. These models can be integrated with circuit CAD tools to evaluate different process integration or design rule strategies. One focus for the generation of interconnect design rules is guidelines for the use of "dummy fill" or "metal fill" to improve the uniformity of underlying metal density and thus improve the uniformity of oxide thickness within the die. Trade-offs that can be evaluated via statistical metrology include the improvements to uniformity possible versus the effect of increased capacitance due to additional metal.
Cevasco, Marisa; Mick, Stephanie L; Kwon, Michael; Lee, Lawrence S; Chen, Edward P; Chen, Frederick Y
2013-05-01
Currently, there is no universal standard for sizing bioprosthetic aortic valves. Hence, a standardized comparison was performed to clarify this issue. Every size of four commercially available bioprosthetic aortic valves marketed in the United States (Biocor Supra; Mosaic Ultra; Magna Ease; Mitroflow) was obtained. Subsequently, custom sizers were created that were accurate to 0.0025 mm to represent aortic roots 18 mm through 32 mm, and these were used to measure the external diameter of each valve. Using the effective orifice area (EOA) and transvalvular pressure gradient (TPG) data submitted to the FDA, a comparison was made between the hemodynamic properties of valves with equivalent manufacturer stated sizes and valves with equivalent measured external diameters. Based on manufacturer size alone, the valves at first seemed to be hemodynamically different from each other, with Mitroflow valves appearing to be hemodynamically superior, having a large EOA and equivalent or superior TPG (p < 0.05). However, Mitroflow valves had a larger measured external diameter than the other valves of a given numerical manufacturer size. Valves with equivalent external diameters were then compared, regardless of the stated manufacturer sizes. For truly equivalently sized valves (i.e., by measured external diameter) there was no clear hemodynamic difference. There was no statistical difference in the EOAs between the Biocor Supra, Mosaic Ultra, and Mitroflow valves, and the Magna Ease valve had a statistically smaller EOA (p < 0.05). On comparing the mean TPG, the Biocor Supra and Mitroflow valves had statistically equivalent gradients to each other, as did the Mosaic Ultra and Magna Ease valves. When comparing valves of the same numerical manufacturer size, there appears to be a difference in hemodynamic performance across different manufacturers' valves according to FDA data. However, comparing equivalently measured valves eliminates the differences between valves produced by different manufacturers.
The CTS 11.7 GHz angle of arrival experiment
NASA Technical Reports Server (NTRS)
Kwan, B. W.; Hodge, D. B.
1981-01-01
The objective of the experiment was to determine the statistical behavior of attenuation and angle of arrival on an Earth-space propagation path using the CTS 11.7 GHz beacon. Measurements performed from 1976 to 1978 form the data base for analysis. The statistics of the signal attenuation and phase variations due to atmospheric disturbances are presented. Rainfall rate distributions are also included to provide a link between the above effects on wave propagation and meteorological conditions.
ERIC Educational Resources Information Center
Agus, Mirian; Peró-Cebollero, Maribel; Penna, Maria Pietronilla; Guàrdia-Olmos, Joan
2015-01-01
This study aims to investigate about the existence of a graphical facilitation effect on probabilistic reasoning. Measures of undergraduates' performances on problems presented in both verbal-numerical and graphical-pictorial formats have been related to visuo-spatial and numerical prerequisites, to statistical anxiety, to attitudes towards…
Choroidal Thickness Analysis in Patients with Usher Syndrome Type 2 Using EDI OCT.
Colombo, L; Sala, B; Montesano, G; Pierrottet, C; De Cillà, S; Maltese, P; Bertelli, M; Rossetti, L
2015-01-01
To characterize Usher Syndrome type 2 by analyzing choroidal thickness and comparing the data with values reported in the published literature for RP and healthy subjects. Methods. 20 eyes of 10 patients with clinical signs and genetic diagnosis of Usher Syndrome type 2. Each patient underwent a complete ophthalmologic examination including Best Corrected Visual Acuity (BCVA), intraocular pressure (IOP), axial length (AL), automated visual field (VF), and EDI OCT. Both retinal and choroidal thicknesses were measured. Statistical analysis was performed to correlate choroidal thickness with age, BCVA, IOP, AL, VF, and RT. A comparison with data from healthy people and nonsyndromic RP patients was performed. Results. Mean subfoveal choroidal thickness (SFCT) was 248.21 ± 79.88 microns. SFCT was statistically significantly correlated with age (correlation coefficient -0.7248179, p < 0.01). No statistically significant correlation was found between SFCT and BCVA, IOP, AL, VF, or RT. SFCT was reduced compared to healthy subjects (p < 0.01). No difference was found when compared to choroidal thickness from nonsyndromic RP patients (p = 0.2138). Conclusions. Our study demonstrated in vivo choroidal thickness reduction in patients with Usher Syndrome type 2. These data are important for the comprehension of mechanisms of disease and for the evaluation of therapeutic approaches.
Kulesz, Paulina A; Tian, Siva; Juranek, Jenifer; Fletcher, Jack M; Francis, David J
2015-03-01
Weak structure-function relations for brain and behavior may stem from problems in estimating these relations in small clinical samples with frequently occurring outliers. In the current project, we focused on the utility of using alternative statistics to estimate these relations. Fifty-four children with spina bifida meningomyelocele performed attention tasks and received MRI of the brain. Using a bootstrap sampling process, the Pearson product-moment correlation was compared with 4 robust correlations: the percentage bend correlation, the Winsorized correlation, the skipped correlation using the Donoho-Gasko median, and the skipped correlation using the minimum volume ellipsoid estimator. All methods yielded similar estimates of the relations between measures of brain volume and attention performance. The similarity of estimates across correlation methods suggested that the weak structure-function relations previously found in many studies are not readily attributable to the presence of outlying observations and other factors that violate the assumptions behind the Pearson correlation. Given the difficulty of assembling large samples for brain-behavior studies, estimating correlations using multiple, robust methods may enhance the statistical conclusion validity of studies yielding small, but often clinically significant, correlations. PsycINFO Database Record (c) 2015 APA, all rights reserved.
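Of the robust alternatives compared, the Winsorized correlation is the simplest to sketch: clamp each variable's tails, then compute a Pearson correlation on the clamped values. The data below are simulated, not the study's sample; the 20% Winsorizing level is one common choice.

```python
import numpy as np
from scipy.stats import pearsonr
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(6)
brain_volume = rng.normal(100, 10, 54)                 # simulated volumes, n = 54
attention = 0.3 * brain_volume + rng.normal(0, 3, 54)  # simulated task scores
attention[10] = 90.0                                   # one outlying observation

r_raw, _ = pearsonr(brain_volume, attention)

# Winsorize 20% in each tail before correlating
bw = np.asarray(winsorize(brain_volume, limits=(0.2, 0.2)))
aw = np.asarray(winsorize(attention, limits=(0.2, 0.2)))
r_wins, _ = pearsonr(bw, aw)

print(f"Pearson r = {r_raw:.2f}, Winsorized r = {r_wins:.2f}")
```

Running several such estimators side by side, as the study did, is a cheap check on whether a small-sample correlation is being driven by a handful of extreme points.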
Luster measurements of lips treated with lipstick formulations.
Yadav, Santosh; Issa, Nevine; Streuli, David; McMullen, Roger; Fares, Hani
2011-01-01
In this study, digital photography in combination with image analysis was used to measure the luster of several lipstick formulations containing varying amounts and types of polymers. A weighed amount of lipstick was applied to a mannequin's lips, and the mannequin was illuminated by a uniform beam of a white light source. Digital images of the mannequin were captured with a high-resolution camera, and the images were analyzed using image analysis software. Luster analysis was performed using the Stamm (L(Stamm)) and Reich-Robbins (L(R-R)) luster parameters. Statistical analysis was performed on each luster parameter (L(Stamm) and L(R-R)), peak height, and peak width. Peak heights for the lipstick formulations containing 11% and 5% VP/eicosene copolymer were statistically different from those of the control. The L(Stamm) and L(R-R) parameters for the treatment containing 11% VP/eicosene copolymer were statistically different from those of the control. Based on the results obtained in this study, we are able to determine whether a polymer is a good pigment dispersant and contributes to the visually detected shine of a lipstick upon application. The methodology presented in this paper could serve as a tool for investigators to screen their ingredients for shine in lipstick formulations.
Manufacturing Execution Systems: Examples of Performance Indicator and Operational Robustness Tools.
Gendre, Yannick; Waridel, Gérard; Guyon, Myrtille; Demuth, Jean-François; Guelpa, Hervé; Humbert, Thierry
Manufacturing Execution Systems (MES) are computerized systems used to measure production performance in terms of productivity, yield, and quality. The first part describes performance indicators, including overall equipment effectiveness (OEE), along with process robustness tools and statistical process control. The second part details tools that help operators maintain process robustness and control by preventing deviations from target control charts. The MES was developed by Syngenta together with CIMO for automation.
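Overall equipment effectiveness, one of the indicators named above, is conventionally the product of availability, performance, and quality rates. A minimal sketch with invented batch figures:

```python
# OEE = availability x performance x quality (all figures illustrative)
planned_time, run_time = 480.0, 420.0          # minutes planned vs actually run
ideal_rate, actual_output = 2.0, 700.0         # units/min ideal, units produced
good_output = 672.0                            # units passing quality checks

availability = run_time / planned_time                  # 0.875
performance = actual_output / (run_time * ideal_rate)   # ~0.833
quality = good_output / actual_output                   # 0.96
oee = availability * performance * quality

print(f"OEE = {oee:.1%}")                               # ~70%
```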
Structural health monitoring feature design by genetic programming
NASA Astrophysics Data System (ADS)
Harvey, Dustin Y.; Todd, Michael D.
2014-09-01
Structural health monitoring (SHM) systems provide real-time damage and performance information for civil, aerospace, and other high-capital or life-safety critical structures. Conventional data processing involves pre-processing and extraction of low-dimensional features from in situ time series measurements. The features are then input to a statistical pattern recognition algorithm to perform the relevant classification or regression task necessary to facilitate decisions by the SHM system. Traditional design of signal processing and feature extraction algorithms can be an expensive and time-consuming process requiring extensive system knowledge and domain expertise. Genetic programming, a heuristic program search method from evolutionary computation, was recently adapted by the authors to perform automated, data-driven design of signal processing and feature extraction algorithms for statistical pattern recognition applications. The proposed method, called Autofead, is particularly suitable to handle the challenges inherent in algorithm design for SHM problems where the manifestation of damage in structural response measurements is often unclear or unknown. Autofead mines a training database of response measurements to discover information-rich features specific to the problem at hand. This study provides experimental validation on three SHM applications including ultrasonic damage detection, bearing damage classification for rotating machinery, and vibration-based structural health monitoring. Performance comparisons with common feature choices for each problem area are provided demonstrating the versatility of Autofead to produce significant algorithm improvements on a wide range of problems.
Shilts, Mical Kay; Lamp, Cathi; Horowitz, Marcel; Townsend, Marilyn S
2009-01-01
Investigate the impact of a nutrition education program on student academic performance as measured by achievement of education standards. Quasi-experimental crossover-controlled study. California Central Valley suburban elementary school (58% qualified for free or reduced-priced lunch). All sixth-grade students (n = 84) in the elementary school clustered in 3 classrooms. 9-lesson intervention with an emphasis on guided goal setting and driven by the Social Cognitive Theory. Multiple-choice survey assessing 5 education standards for sixth-grade mathematics and English at 3 time points: baseline (T1), 5 weeks (T2), and 10 weeks (T3). Repeated measures, paired t test, and analysis of covariance. Changes in total scores were statistically different (P < .05), with treatment scores (T3 - T2) generating more gains. The change scores for 1 English (P < .01) and 2 mathematics standards (P < .05; P < .001) were statistically greater for the treatment period (T3 - T2) compared to the control period (T2 - T1). Using standardized tests, results of this pilot study suggest that EatFit can improve academic performance measured by achievement of specific mathematics and English education standards. Nutrition educators can show school administrators and wellness committee members that this program can positively impact academic performance, concomitant to its primary objective of promoting healthful eating and physical activity.
Pharmacy students' test-taking motivation-effort on a low-stakes standardized test.
Waskiewicz, Rhonda A
2011-04-11
To measure third-year pharmacy students' level of motivation while completing the Pharmacy Curriculum Outcomes Assessment (PCOA) administered as a low-stakes test to better understand use of the PCOA as a measure of student content knowledge. Student motivation was manipulated through an incentive (ie, personal letter from the dean) and a process of statistical motivation filtering. Data were analyzed to determine any differences between the experimental and control groups in PCOA test performance, motivation to perform well, and test performance after filtering for low motivation-effort. Incentivizing students diminished the need for filtering PCOA scores for low effort. Where filtering was used, performance scores improved, providing a more realistic measure of aggregate student performance. To ensure that PCOA scores are an accurate reflection of student knowledge, incentivizing and/or filtering for low motivation-effort among pharmacy students should be considered fundamental best practice when the PCOA is administered as a low-stakes test.
Statistical methods for quantitative mass spectrometry proteomic experiments with labeling.
Oberg, Ann L; Mahoney, Douglas W
2012-01-01
Mass spectrometry utilizing labeling allows multiple specimens to be analyzed simultaneously. As a result, between-experiment variability is reduced. Here we describe the use of fundamental concepts of statistical experimental design in the labeling framework in order to minimize variability and avoid biases. We demonstrate how to export data in the format that is most efficient for statistical analysis. We demonstrate how to assess the need for normalization, perform normalization, and check whether it worked. We describe how to build a model explaining the observed values and test for differential protein abundance, along with descriptive statistics and measures of reliability of the findings. Concepts are illustrated through the use of three case studies utilizing the iTRAQ 4-plex labeling protocol.
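The assess-normalize-check loop described here can be sketched for a 4-plex design as below, using median alignment of log intensities on synthetic data; the chapter's actual normalization procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(7)
log_intensity = rng.normal(15, 2, (1000, 4))       # peptides x 4 label channels
log_intensity += np.array([0.0, 0.4, -0.3, 0.2])   # simulated channel loading biases

# Assess: channel medians differ, indicating a need for normalization
medians = np.median(log_intensity, axis=0)
print("before:", np.round(medians, 2))

# Normalize: shift each channel so medians align, then check it worked
normalized = log_intensity - (medians - medians.mean())
print("after :", np.round(np.median(normalized, axis=0), 2))
```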
Towards Enhanced Underwater Lidar Detection via Source Separation
NASA Astrophysics Data System (ADS)
Illig, David W.
Interest in underwater optical sensors has grown as technologies enabling autonomous underwater vehicles have been developed. Propagation of light through water is complicated by the dual challenges of absorption and scattering. While absorption can be reduced by operating in the blue-green region of the visible spectrum, reducing scattering is a more significant challenge. Collection of scattered light negatively impacts underwater optical ranging, imaging, and communications applications. This thesis concentrates on the ranging application, where scattering reduces operating range as well as range accuracy. The focus of this thesis is on the problem of backscatter, which can create a "clutter" return that may obscure submerged target(s) of interest. The main contributions of this thesis are explorations of signal processing approaches to increase the separation between the target and backscatter returns. Increasing this separation allows detection of weak targets in the presence of strong scatter, increasing both operating range and range accuracy. Simulation and experimental results will be presented for a variety of approaches as functions of water clarity and target position. This work provides several novel contributions to the underwater lidar field:

1. Quantification of temporal separation approaches: While temporal separation has been studied extensively, this work provides a quantitative assessment of the extent to which both high frequency modulation and spatial filter approaches improve the separation between target and backscatter.

2. Development and assessment of frequency separation: This work includes the first frequency-based separation approach for underwater lidar, in which the channel frequency response is measured with a wideband waveform. Transforming to the time-domain gives a channel impulse response, in which target and backscatter returns may appear in unique range bins and thus be separated.

3. Development and assessment of statistical separation: The first investigations of statistical separation approaches for underwater lidar are presented. By demonstrating that target and backscatter returns have different statistical properties, a new separation axis is opened. This work investigates and quantifies performance of three statistical separation approaches.

4. Application of detection theory to underwater lidar: While many similar applications use detection theory to assess performance, less development has occurred in the underwater lidar field. This work applies these concepts to statistical separation approaches, providing another perspective in which to assess performance. In addition, by using detection theory approaches, statistical metrics can be used to associate a level of confidence in each ranging measurement.

5. Preliminary investigation of forward scatter suppression: If backscatter is sufficiently suppressed, forward scattering becomes a performance-limiting factor. This work presents a proof-of-concept demonstration of the potential for statistical separation approaches to suppress both forward and backward scatter.

These results provide a demonstration of the capability that signal processing has to improve separation between target and backscatter. Separation capability improves in the transition from temporal to frequency to statistical separation approaches, with the statistical separation approaches improving target detection sensitivity by as much as 30 dB.
Ranging and detection results demonstrate the enhanced performance this would allow in ranging applications. This increased performance is an important step in moving underwater lidar capability towards the requirements of the next generation of sensors.
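As a hedged illustration of the frequency-separation idea in contribution 2 above (this is not code from the thesis), the sketch below probes a simulated channel with a wideband chirp and deconvolves to an impulse response in which the weak target return lands in a different range bin than the strong backscatter; all waveforms and parameter values are invented:

```python
import numpy as np

# Simulated channel: strong backscatter near the source plus a weak,
# delayed target return. All parameters are hypothetical.
fs = 1e6                               # sample rate (Hz)
t = np.arange(0, 1e-3, 1 / fs)         # 1 ms probe window
tx = np.sin(2 * np.pi * (1e4 * t + 2e8 * t ** 2))  # ~10-410 kHz chirp

h_true = np.zeros(len(t))
h_true[50] = 1.0                       # strong backscatter "clutter"
h_true[400] = 0.05                     # weak, delayed target return
rx = np.convolve(tx, h_true)[: len(t)]

# Frequency-domain deconvolution with Tikhonov regularization recovers an
# estimate of the channel impulse response.
TX, RX = np.fft.rfft(tx), np.fft.rfft(rx)
eps = 1e-2 * np.max(np.abs(TX)) ** 2
h_est = np.fft.irfft(RX * np.conj(TX) / (np.abs(TX) ** 2 + eps), n=len(t))

print("two largest bins:", np.argsort(np.abs(h_est))[-2:])  # ~400 and ~50
```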
Physique and Performance of Young Wheelchair Basketball Players in Relation with Classification
Zancanaro, Carlo
2015-01-01
The relationships among physical characteristics, performance, and functional ability classification of younger wheelchair basketball players have barely been investigated to date. The purpose of this work was to assess anthropometry, body composition, and performance in sport-specific field tests in a national sample of Italian younger wheelchair basketball players, as well as to evaluate the association of these variables with the players' functional ability classification and game-related statistics. Several anthropometric measurements were obtained for 52 out of 91 eligible players nationwide. Performance was assessed in seven sport-specific field tests (5m sprint, 20m sprint with ball, suicide, maximal pass, pass for accuracy, spot shot, and lay-ups) and game-related statistics (free-throw points scored per match, two- and three-point field goals scored per match, and their sum). Associations between variables and predictivity were assessed by correlation and regression analysis, respectively. Players were grouped into four Classes of increasing functional ability (A-D). One-way ANOVA with Bonferroni's correction for multiple comparisons was used to assess differences between Classes. Sitting height and functional ability Class especially correlated with performance outcomes, but wheelchair basketball experience and skinfolds did not. Game-related statistics and sport-specific field-test scores all showed significant correlations with each other. Upper arm circumference and/or maximal pass and lay-ups test scores were able to explain 42 to 59% of the variance in game-related statistics (P<0.001). A clear difference in performance was found only between functional ability Classes A and D. Conclusion: In younger wheelchair basketball players, sitting height positively contributes to performance. The maximal pass and lay-ups tests should be carefully considered in younger wheelchair basketball training plans. Functional ability Class reflects the actual differences in performance only to a limited extent. PMID:26606681
Characterization of the Body-to-Body Propagation Channel for Subjects during Sports Activities.
Mohamed, Marshed; Cheffena, Michael; Moldsvor, Arild
2018-02-18
Body-to-body wireless networks (BBWNs) have great potential for applications in team sports activities, among others. However, successful design of such systems requires a thorough understanding of the communication channel, as the movement of the body causes time-varying shadowing and fading effects. In this study, we present results of a measurement campaign of BBWNs during running and cycling activities. Among other findings, the results indicated the presence of good and bad states, with each state following a specific distribution for the considered propagation scenarios. This motivated the development of a two-state semi-Markov model for simulating the communication channels. The simulation model was validated against the available measurement data in terms of first- and second-order statistics and showed good agreement. The first-order statistics obtained from the simulation model, as well as the measured results, were then used to analyze the performance of the BBWN channels during running and cycling in terms of capacity and outage probability. Cycling channels showed better performance than running channels, having higher channel capacity and lower outage probability, regardless of the speed of the subjects involved in the measurement campaign.
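A minimal sketch of such a two-state semi-Markov simulator follows; the lognormal dwell times, Gaussian shadowing in dB, and all parameter values are illustrative assumptions, not the distributions fitted to the measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_bbwn_gain(n_samples, fs=100.0):
    """Alternate good/bad states with random dwell times (semi-Markov)."""
    gain_db = np.empty(n_samples)
    i, state = 0, "good"
    while i < n_samples:
        # Dwell time in samples, drawn from a (hypothetical) lognormal law.
        dwell = int(rng.lognormal(mean=0.0, sigma=0.5) * fs) + 1
        dwell = min(dwell, n_samples - i)
        if state == "good":
            gain_db[i:i + dwell] = rng.normal(-60, 2, dwell)  # mild shadowing
            state = "bad"
        else:
            gain_db[i:i + dwell] = rng.normal(-75, 4, dwell)  # deep shadowing
            state = "good"
        i += dwell
    return gain_db

gain = simulate_bbwn_gain(10_000)
print("mean gain (dB):", gain.mean().round(1))
```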
Comparison of RF spectrum prediction methods for dynamic spectrum access
NASA Astrophysics Data System (ADS)
Kovarskiy, Jacob A.; Martone, Anthony F.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Narayanan, Ram M.
2017-05-01
Dynamic spectrum access (DSA) refers to the adaptive utilization of today's busy electromagnetic spectrum. Cognitive radio/radar technologies require DSA to intelligently transmit and receive information in changing environments. Predicting radio frequency (RF) activity reduces sensing time and energy consumption for identifying usable spectrum. Typical spectrum prediction methods involve modeling spectral statistics with Hidden Markov Models (HMM) or various neural network structures. HMMs describe the time-varying state probabilities of Markov processes as a dynamic Bayesian network. Neural networks, loosely modeled on the connections between biological neurons, can perform a wide range of complex and often non-linear computations. This work compares HMM, Multilayer Perceptron (MLP), and Recurrent Neural Network (RNN) algorithms and their ability to perform RF channel state prediction. Monte Carlo simulations on both measured and simulated spectrum data evaluate the performance of these algorithms. Generalizing spectrum occupancy as an alternating renewal process allows Poisson random variables to generate simulated data, while energy detection determines the occupancy state of measured RF spectrum data for testing. The results suggest that neural networks achieve better prediction accuracy and prove more adaptable to changing spectral statistics than HMMs, given sufficient training data.
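As a hedged sketch of the alternating renewal occupancy model used to generate simulated data (the mean holding times and the persistence baseline below are assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(7)

def occupancy_sequence(n_slots, mean_busy=5.0, mean_idle=8.0):
    """Spectrum occupancy as an alternating renewal process with
    exponentially distributed busy/idle holding times."""
    states, state = [], 0  # 0 = idle, 1 = busy
    while len(states) < n_slots:
        mean = mean_busy if state else mean_idle
        hold = max(1, int(rng.exponential(mean)))
        states.extend([state] * hold)
        state ^= 1
    return np.array(states[:n_slots])

seq = occupancy_sequence(5_000)
# Trivial "persist" predictor, a baseline against which HMM/MLP/RNN
# channel-state predictors could be compared.
baseline_acc = np.mean(seq[1:] == seq[:-1])
print(f"persistence baseline accuracy: {baseline_acc:.3f}")
```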
Statistical Exposé of a Multiple-Compartment Anaerobic Reactor Treating Domestic Wastewater.
Pfluger, Andrew R; Hahn, Martha J; Hering, Amanda S; Munakata-Marr, Junko; Figueroa, Linda
2018-06-01
Mainstream anaerobic treatment of domestic wastewater is a promising energy-generating treatment strategy; however, such reactors operated in colder regions are not well characterized. Performance data from a pilot-scale, multiple-compartment anaerobic reactor taken over 786 days were subjected to comprehensive statistical analyses. Results suggest that chemical oxygen demand (COD) was a poor proxy for organics in anaerobic systems as oxygen demand from dissolved inorganic material, dissolved methane, and colloidal material influence dissolved and particulate COD measurements. Additionally, univariate and functional boxplots were useful in visualizing variability in contaminant concentrations and identifying statistical outliers. Further, significantly different dissolved organic removal and methane production was observed between operational years, suggesting that anaerobic reactor systems may not achieve steady-state performance within one year. Last, modeling multiple-compartment reactor systems will require data collected over at least two years to capture seasonal variations of the major anaerobic microbial functions occurring within each reactor compartment.
Approximating Long-Term Statistics Early in the Global Precipitation Measurement Era
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Kirschbaum, Dalia B.; Huffman, George J.; Adler, Robert F.
2017-01-01
Long-term precipitation records are vital to many applications, especially the study of extreme events. The Tropical Rainfall Measuring Mission (TRMM) has served this need, but TRMM's successor mission, Global Precipitation Measurement (GPM), does not yet provide a long-term record. Quantile mapping, the conversion of values across paired empirical distributions, offers a simple, established means to approximate such long-term statistics, but only within appropriately defined domains. This method was applied to a case study in Central America, demonstrating that quantile mapping between TRMM and GPM data maintains the performance of a real-time landslide model. Use of quantile mapping could bring the benefits of the latest satellite-based precipitation dataset to existing user communities such as those for hazard assessment, crop forecasting, numerical weather prediction, and disease tracking.
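A minimal sketch of quantile mapping, assuming two overlapping rainfall samples standing in for the TRMM and GPM records (the gamma-distributed data and quantile grid are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
trmm = rng.gamma(shape=2.0, scale=5.0, size=5_000)   # long reference record
gpm = rng.gamma(shape=2.2, scale=4.5, size=1_000)    # shorter new record

# Pair the empirical quantiles of the two distributions.
probs = np.linspace(0.01, 0.99, 99)
gpm_q = np.quantile(gpm, probs)
trmm_q = np.quantile(trmm, probs)

def map_to_trmm(x):
    # Interpolate each new value across the paired empirical quantiles;
    # values beyond the grid are clamped to the end quantiles.
    return np.interp(x, gpm_q, trmm_q)

print(map_to_trmm(np.array([5.0, 20.0, 40.0])).round(2))
```

As the abstract notes, the mapping is only meaningful within an appropriately defined domain, so in practice the paired samples should come from the same region and season.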
Sensorimotor abilities predict on-field performance in professional baseball.
Burris, Kyle; Vittetoe, Kelly; Ramger, Benjamin; Suresh, Sunith; Tokdar, Surya T; Reiter, Jerome P; Appelbaum, L Gregory
2018-01-08
Baseball players must be able to see and react in an instant, yet it is hotly debated whether superior performance is associated with superior sensorimotor abilities. In this study, we compare sensorimotor abilities, measured through 8 psychomotor tasks comprising the Nike Sensory Station assessment battery, and game statistics in a sample of 252 professional baseball players to evaluate the links between sensorimotor skills and on-field performance. For this purpose, we develop a series of Bayesian hierarchical latent variable models enabling us to compare statistics across professional baseball leagues. Within this framework, we find that sensorimotor abilities are significant predictors of on-base percentage, walk rate and strikeout rate, accounting for age, position, and league. We find no such relationship for either slugging percentage or fielder-independent pitching. The pattern of results suggests performance contributions from both visual-sensory and visual-motor abilities and indicates that sensorimotor screenings may be useful for player scouting.
Transient probabilities for queues with applications to hospital waiting list management.
Joy, Mark; Jones, Simon
2005-08-01
In this paper we study queuing systems within the NHS. Recently imposed government performance targets lead NHS executives to investigate and instigate alternative management strategies, thereby imposing structural changes on the queues. Under such circumstances, it is most unlikely that such systems are in equilibrium. It is crucial, in our opinion, to recognise this state of affairs in order to make a balanced assessment of the role of queue management in the modern NHS. From a mathematical perspective it should be emphasised that measures of the state of a queue based upon the assumption of statistical equilibrium (a pervasive methodology in the study of queues) are simply wrong in the above scenario. To base strategic decisions around such ideas is therefore highly questionable, and it is one of the purposes of this paper to offer alternatives: we present some recent research whose results generate performance measures and measures of risk, for example, of waiting times growing unacceptably large. We emphasise that these results concern the transient behaviour of the queueing model; there is no assumption of statistical equilibrium. We also demonstrate that our results are computationally tractable.
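To make the transient point concrete, here is a sketch (not the paper's method) computing time-dependent state probabilities of a capacitated M/M/1 waiting list by uniformization; the arrival/service rates and the threshold are hypothetical:

```python
import numpy as np

def transient_probs(lam, mu, K, p0, t, tol=1e-10):
    """p(t) for a birth-death (M/M/1/K) chain via uniformization."""
    Q = np.zeros((K + 1, K + 1))
    for n in range(K):
        Q[n, n + 1] = lam      # arrival
        Q[n + 1, n] = mu       # service completion
    np.fill_diagonal(Q, -Q.sum(axis=1))
    gamma = lam + mu           # uniformization rate
    P = np.eye(K + 1) + Q / gamma
    term, result, k = np.array(p0, float), np.zeros(K + 1), 0
    weight = np.exp(-gamma * t)
    # p(t) = sum_k e^{-gamma t} (gamma t)^k / k! * p0 P^k
    while weight > tol or k < gamma * t:
        result += weight * term
        k += 1
        term = term @ P
        weight *= gamma * t / k
    return result

p = transient_probs(lam=0.9, mu=1.0, K=50, p0=[1] + [0] * 50, t=20.0)
print("P(queue length > 10 at t=20):", p[11:].sum().round(4))
```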
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sen, Satyabrata; Rao, Nageswara S; Wu, Qishi
There have been increasingly large deployments of radiation detection networks that require computationally fast algorithms to produce prompt results over ad-hoc sub-networks of mobile devices, such as smart-phones. These algorithms are in sharp contrast to complex network algorithms that necessitate all measurements to be sent to powerful central servers. In this work, at individual sensors, we employ Wald-statistic based detection algorithms, which are computationally very fast and are implemented as one of three Z-tests and four chi-square tests. At the fusion center, we apply K-out-of-N fusion to combine the sensors' hard decisions. We characterize the performance of the detection methods by deriving analytical expressions for the distributions of the underlying test statistics, and by analyzing the fusion performance in terms of K, N, and the false-alarm rates of the individual detectors. We experimentally validate our methods using measurements from indoor and outdoor characterization tests of the Intelligence Radiation Sensors Systems (IRSS) program. In particular, utilizing the outdoor measurements, we construct two important real-life scenarios, boundary surveillance and portal monitoring, and present the results of our algorithms.
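The K-out-of-N fusion trade-off can be illustrated directly: with independent sensors, the system-level detection and false-alarm rates follow a binomial sum over the per-sensor hard decisions (the per-sensor rates below are invented, not from the IRSS tests):

```python
from math import comb

def k_out_of_n(p, K, N):
    """Probability that at least K of N independent detectors fire,
    given per-sensor firing probability p."""
    return sum(comb(N, j) * p**j * (1 - p) ** (N - j) for j in range(K, N + 1))

N, pfa, pd = 10, 0.05, 0.8   # hypothetical per-sensor rates
for K in (1, 3, 5):
    print(f"K={K}: system Pfa={k_out_of_n(pfa, K, N):.2e}, "
          f"system Pd={k_out_of_n(pd, K, N):.3f}")
```

Raising K suppresses the system false-alarm rate rapidly while costing comparatively little detection probability, which is the basic design lever analyzed in the abstract.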
Scaling of plane-wave functions in statistically optimized near-field acoustic holography.
Hald, Jørgen
2014-11-01
Statistically Optimized Near-field Acoustic Holography (SONAH) is a Patch Holography method, meaning that it can be applied in cases where the measurement area covers only part of the source surface. The method performs projections directly in the spatial domain, avoiding the use of spatial discrete Fourier transforms and the associated errors. First, an inverse problem is solved using regularization. For each calculation point a multiplication must then be performed with two transfer vectors--one to get the sound pressure and the other to get the particle velocity. Considering SONAH based on sound pressure measurements, existing derivations consider only pressure reconstruction when setting up the inverse problem, so the evanescent wave amplification associated with the calculation of particle velocity is not taken into account in the regularized solution of the inverse problem. The present paper introduces a scaling of the applied plane wave functions that takes the amplification into account, and it is shown that the previously published virtual source-plane retraction has almost the same effect. The effectiveness of the different solutions is verified through a set of simulated measurements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... intellectual ability. (b) Creativity/Divergent Thinking means scoring in the top 5 percent of performance on a statistically valid and reliable measurement tool of creativity/divergent thinking. (c) Academic Aptitude...
Ultrasound-enhanced bioscouring of greige cotton: regression analysis of process factors
USDA-ARS's Scientific Manuscript database
Ultrasound-enhanced bioscouring process factors for greige cotton fabric are examined using custom experimental design utilizing statistical principles. An equation is presented which predicts bioscouring performance based upon percent reflectance values obtained from UV-Vis measurements of rutheniu...
Code of Federal Regulations, 2014 CFR
2014-04-01
... intellectual ability. (b) Creativity/Divergent Thinking means scoring in the top 5 percent of performance on a statistically valid and reliable measurement tool of creativity/divergent thinking. (c) Academic Aptitude...
Code of Federal Regulations, 2013 CFR
2013-04-01
... intellectual ability. (b) Creativity/Divergent Thinking means scoring in the top 5 percent of performance on a statistically valid and reliable measurement tool of creativity/divergent thinking. (c) Academic Aptitude...
Code of Federal Regulations, 2012 CFR
2012-04-01
... intellectual ability. (b) Creativity/Divergent Thinking means scoring in the top 5 percent of performance on a statistically valid and reliable measurement tool of creativity/divergent thinking. (c) Academic Aptitude...
Performance following a sudden awakening from daytime nap induced by zaleplon.
Whitmore, Jeffrey N; Fischer, Joseph R; Barton, Emily C; Storm, William F
2004-01-01
Zaleplon appears to be a prime candidate for assisting individuals in obtaining sleep in situations not conducive to rest (i.e., a short period during the day). However, should an early unexpected awakening and return to duty be required, the effect on performance is not known. We hypothesized that zaleplon (10 mg) would negatively affect human performance for some duration, compared with placebo, after a sudden awakening from a short period (1 h) of daytime sleep. There were 16 participants, 8 men and 8 women, who volunteered for this study. The study was conducted using a counterbalanced, double-blind, repeated measures design. At 1 h prior to drug administration, and at each of 7 h postdrug, performance measures (cognition, memory, balance, and strength) and subjective symptom reports were recorded. Zaleplon had a statistically significant (p < 0.05) negative impact on balance through the first 2 h postdose when compared with placebo. In addition, symptoms related to "drowsiness" were statistically more prevalent under zaleplon than under placebo through the first 3 h postdrug. Of the eight measures of cognitive performance, six were significantly negatively impacted in the zaleplon condition through 2 h postdose when compared with placebo, with one remaining significantly degraded through 3 h postdose. Zaleplon also had a significantly negative impact on memory at 1 h and 4 h postdose. Zaleplon (10 mg), when used as a daytime sleep aid, causes drowsiness (and related symptoms) up to 3 h postdose, and may impair task performance, especially for more complex tasks, for at least 2-3 h postdose.
NASA Astrophysics Data System (ADS)
Zhong, Ke; Lei, Xia; Li, Shaoqian
2013-12-01
A statistics-based intercarrier interference (ICI) mitigation algorithm is proposed for orthogonal frequency division multiplexing systems in the presence of both nonstationary and stationary phase noise. By utilizing the statistics of the phase noise, which can be obtained from measurements or data sheets, a Wiener filter preprocessing algorithm for ICI mitigation is proposed. The proposed algorithm can be regarded as a performance-improving technique for previous research on phase noise cancellation. Simulation results show that the proposed algorithm can effectively mitigate ICI and lower the error floor, and therefore significantly improve the performance of previous phase noise cancellation methods, especially in the presence of severe phase noise.
NASA Astrophysics Data System (ADS)
Clerc, F.; Njiki-Menga, G.-H.; Witschger, O.
2013-04-01
Most of the measurement strategies suggested at the international level to assess workplace exposure to nanomaterials rely on devices measuring airborne particle concentrations in real time (according to different metrics). Since none of the instruments used to measure aerosols can distinguish a particle of interest from the background aerosol, the statistical analysis of time-resolved data requires special attention. So far, very few approaches have been used for statistical analysis in the literature, ranging from simple qualitative analysis of graphs to the implementation of more complex statistical models. To date, there is still no consensus on a particular approach, and the field is still looking for an appropriate and robust method. In this context, this exploratory study investigates a statistical method for analysing time-resolved data based on a Bayesian probabilistic approach. To investigate and illustrate the use of this statistical method, we used particle number concentration data from a workplace study that investigated the potential for inhalation exposure during cleanout operations, by sandpapering, of a reactor producing nanocomposite thin films. In this workplace study, the background issue was addressed through near-field and far-field approaches, and several size-integrated and time-resolved devices were used. The analysis presented here focuses only on data obtained with two handheld condensation particle counters: one measured at the source of the released particles while the other measured in parallel in the far field. The Bayesian probabilistic approach allows probabilistic modelling of the data series, and the observed task is modelled in the form of probability distributions. The probability distributions obtained from the time-resolved data at the source can be compared with those obtained in the far field, leading to a quantitative estimate of the airborne particles released at the source while the task is performed. Beyond the results obtained, this exploratory study indicates that such analysis requires specific experience in statistics.
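A much-simplified sketch of this kind of source-versus-background comparison (not the study's actual model): treat the counts from the two particle counters as Poisson, use conjugate Gamma priors, and compare the posterior rate distributions. All counts and prior settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical CPC counts (particles per second) during the task.
near_counts = np.array([1200, 1350, 1280, 1500, 1420])   # at the source
far_counts = np.array([1100, 1150, 1090, 1130, 1120])    # far field

def posterior_rate_samples(counts, a0=1.0, b0=1e-3, n=100_000):
    # Gamma(a0, b0) prior + Poisson likelihood -> Gamma posterior.
    return rng.gamma(a0 + counts.sum(), 1.0 / (b0 + len(counts)), n)

near = posterior_rate_samples(near_counts)
far = posterior_rate_samples(far_counts)
print("P(source rate > background):", np.mean(near > far).round(3))
print("released rate estimate:", np.median(near - far).round(1), "per second")
```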
Safaie, Ammar; Wendzel, Aaron; Ge, Zhongfu; Nevers, Meredith; Whitman, Richard L.; Corsi, Steven R.; Phanikumar, Mantha S.
2016-01-01
Statistical and mechanistic models are popular tools for predicting the levels of indicator bacteria at recreational beaches. Researchers tend to use one class of model or the other, and it is difficult to generalize statements about their relative performance due to differences in how the models are developed, tested, and used. We describe a cooperative modeling approach for freshwater beaches impacted by point sources in which insights derived from mechanistic modeling were used to further improve the statistical models and vice versa. The statistical models provided a basis for assessing the mechanistic models which were further improved using probability distributions to generate high-resolution time series data at the source, long-term “tracer” transport modeling based on observed electrical conductivity, better assimilation of meteorological data, and the use of unstructured-grids to better resolve nearshore features. This approach resulted in improved models of comparable performance for both classes including a parsimonious statistical model suitable for real-time predictions based on an easily measurable environmental variable (turbidity). The modeling approach outlined here can be used at other sites impacted by point sources and has the potential to improve water quality predictions resulting in more accurate estimates of beach closures.
Linear retrieval and global measurements of wind speed from the Seasat SMMR
NASA Technical Reports Server (NTRS)
Pandey, P. C.
1983-01-01
Retrievals of wind speed (WS) from the Seasat Scanning Multichannel Microwave Radiometer (SMMR) were performed using a two-step statistical technique. Nine subsets of two to five SMMR channels were examined for wind speed retrieval. These subsets were derived by applying a leaps-and-bounds procedure, based on the coefficient-of-determination selection criterion, to a statistical data base of brightness temperatures and geophysical parameters. Analysis of Monsoon Experiment and ocean station PAPA data showed a strong correlation between sea surface temperature and water vapor. This relation was used in generating the statistical data base. Global maps of WS were produced for one- and three-month periods.
Learning physics concepts as a function of colloquial language usage
NASA Astrophysics Data System (ADS)
Maier, Steven J.
Data from two sections of college introductory, algebra-based physics courses (n1 = 139, n2 = 91) were collected using three separate instruments to investigate the relationships between reasoning ability, conceptual gain, and colloquial language usage. To obtain a measure of reasoning ability, Lawson's Classroom Test of Scientific Reasoning Ability (TSR) was administered once near mid-term for each sample. The Force Concept Inventory (FCI) was administered at the beginning and at the end of the term for pre- and post-test measures. Pre- and post-test data from the Mechanics Language Usage (MLU) instrument were also collected in conjunction with FCI data collection at the beginning and end of the term. The MLU was developed specifically for this study prior to data collection, and results of a pilot test to establish validity and reliability are reported. T-tests were performed on the data collected to compare the means from each sample. In addition, correlations among the measures were investigated for the samples separately and combined. Results from these investigations served as justification for combining the samples into a single sample of 230 for further statistical analyses. The primary objective of this study was to determine whether scientific reasoning ability (a function of developmental stage) and conceptual gains in Newtonian mechanics predict students' usage of "force" as measured by the MLU. Regression analyses were performed to evaluate the mediated relationships, with TSR and FCI performance as predictors of MLU performance. Statistically significant correlations and relationships existed among several of the measures, which are discussed at length in the body of the narrative. The findings of this research are that although there exists a discernible relationship between reasoning ability and conceptual change, more work needs to be done to establish improved quantitative measures of the role language usage plays in developing understandings of course content.
Correcting for Optimistic Prediction in Small Data Sets
Smith, Gordon C. S.; Seaman, Shaun R.; Wood, Angela M.; Royston, Patrick; White, Ian R.
2014-01-01
The C statistic is a commonly reported measure of screening test performance. Optimistic estimation of the C statistic is a frequent problem because of overfitting of statistical models in small data sets, and methods exist to correct for this issue. However, many studies do not use such methods, and those that do correct for optimism use diverse methods, some of which are known to be biased. We used clinical data sets (United Kingdom Down syndrome screening data from Glasgow (1991–2003), Edinburgh (1999–2003), and Cambridge (1990–2006), as well as Scottish national pregnancy discharge data (2004–2007)) to evaluate different approaches to adjustment for optimism. We found that sample splitting, cross-validation without replication, and leave-1-out cross-validation produced optimism-adjusted estimates of the C statistic that were biased and/or associated with greater absolute error than other available methods. Cross-validation with replication, bootstrapping, and a new method (leave-pair-out cross-validation) all generated unbiased optimism-adjusted estimates of the C statistic and had similar absolute errors in the clinical data set. Larger simulation studies confirmed that all 3 methods performed similarly with 10 or more events per variable, or when the C statistic was 0.9 or greater. However, with lower events per variable or lower C statistics, bootstrapping tended to be optimistic but with lower absolute and mean squared errors than both methods of cross-validation. PMID:24966219
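A minimal sketch of the leave-pair-out procedure on simulated data follows (using scikit-learn logistic regression as the screening model is an assumption for illustration; nothing here uses the clinical datasets above):

```python
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

# Leave-pair-out cross-validation: hold out every (event, non-event) pair,
# refit the model, and count how often the pair is ranked correctly.
rng = np.random.default_rng(2)
n, X = 60, rng.normal(size=(60, 3))
y = (X[:, 0] + rng.normal(scale=1.5, size=60) > 0).astype(int)

cases, controls = np.where(y == 1)[0], np.where(y == 0)[0]
correct, ties, total = 0, 0, 0
for i, j in product(cases, controls):
    keep = np.ones(n, bool)
    keep[[i, j]] = False
    model = LogisticRegression().fit(X[keep], y[keep])
    pi, pj = model.predict_proba(X[[i, j]])[:, 1]
    correct += pi > pj
    ties += pi == pj
    total += 1

print("LPO-CV C statistic:", (correct + 0.5 * ties) / total)
```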
NASA Astrophysics Data System (ADS)
Namjou, K.; Roller, C. B.; Reich, T. E.; Jeffers, J. D.; McMillen, G. L.; McCann, P. J.; Camp, M. A.
2006-11-01
A liquid-nitrogen-free mid-infrared tunable diode laser absorption spectroscopy (TDLAS) system equipped with a folded-optical-path astigmatic Herriott cell was used to measure levels of exhaled nitric oxide (eNO) and exhaled carbon dioxide (eCO2) in breath. Quantification of absolute eNO concentrations was performed using NO/CO2 absorption ratios measured by the TDLAS system coupled with absolute eCO2 concentrations measured with a non-dispersive infrared sensor. This technique eliminated the need for routine calibrations using standard cylinder gases. The TDLAS system was used to measure eNO in children and adults (n=799, ages 5 to 64) over a period of more than one year as part of a field study. Volunteers for the study self-reported data including age, height, weight, and health status. The resulting data were used to assess system performance and to generate eNO and eCO2 distributions, which were found to be log-normal and Gaussian, respectively. There were statistically significant differences in mean eNO levels between males and females, as well as between healthy volunteers and steroid-naïve asthmatic volunteers (those not taking corticosteroid therapies). Ambient NO levels affected measured eNO concentrations only slightly, and this effect was not statistically significant.
Detailed Uncertainty Analysis of the ZEM-3 Measurement System
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The measurement of the Seebeck coefficient and electrical resistivity is critical to the investigation of all thermoelectric systems. It follows that the measurement uncertainty must be well understood in order to report ZT values that are accurate and trustworthy. A detailed uncertainty analysis of the ZEM-3 measurement system has been performed. The uncertainty analysis calculates error in the electrical resistivity measurement as a result of sample geometry tolerance, probe geometry tolerance, statistical error, and multi-meter uncertainty. The uncertainty on the Seebeck coefficient includes probe wire correction factors, statistical error, multi-meter uncertainty, and, most importantly, the cold-finger effect. The cold-finger effect plagues all potentiometric (four-probe) Seebeck measurement systems, as heat parasitically transfers through the thermocouple probes. The effect leads to an asymmetric over-estimation of the Seebeck coefficient. A thermal finite element analysis allows quantification of the phenomenon and provides an estimate of the uncertainty on the Seebeck coefficient. The thermoelectric power factor was found to have an uncertainty of ±9-14% at high temperature and ±9% near room temperature.
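For illustration, a first-order propagation of hypothetical Seebeck and resistivity uncertainties into the power factor PF = S^2/rho (the values below are invented, not the paper's measurements):

```python
import numpy as np

S, u_S = 180e-6, 9e-6        # Seebeck coefficient (V/K) and its uncertainty
rho, u_rho = 1.2e-5, 4e-7    # resistivity (ohm m) and its uncertainty

PF = S**2 / rho
# Root-sum-square combination of relative uncertainties; the factor of 2
# on the Seebeck term comes from the square in PF = S^2 / rho.
u_PF = PF * np.sqrt((2 * u_S / S) ** 2 + (u_rho / rho) ** 2)
print(f"PF = {PF:.2e} +/- {u_PF:.2e} W/(m K^2) "
      f"({100 * u_PF / PF:.1f}% relative)")
```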
Herbert, Vanessa; Kyle, Simon D; Pratt, Daniel
2018-06-01
Individuals with insomnia report difficulties pertaining to their cognitive functioning. Cognitive behavioural therapy for insomnia (CBT-I) is associated with robust, long-term improvements in sleep parameters; however, less is known about the impact of CBT-I on the daytime correlates of the disorder. A systematic review and narrative synthesis was conducted in order to summarise and evaluate the evidence regarding the impact of CBT-I on cognitive functioning. Reference databases were searched, and studies were included if they assessed cognitive performance as an outcome of CBT-I using either self-report questionnaires or cognitive tests. Eighteen studies met inclusion criteria, comprising 923 individuals with insomnia symptoms. The standardised mean difference was calculated at post-intervention and follow-up. We found preliminary evidence for small to moderate effects of CBT-I on subjective measures of cognitive functioning. Few of the effects were statistically significant, likely due to small sample sizes and limited statistical power. There is a lack of evidence with regard to the impact of CBT-I on objective cognitive performance, primarily due to the small number of studies that administered an objective measure (n = 4). We conclude that adequately powered randomised controlled trials, utilising both subjective and objective measures of cognitive functioning, are required.
Fault Diagnosis Strategies for SOFC-Based Power Generation Plants
Costamagna, Paola; De Giorgi, Andrea; Gotelli, Alberto; Magistri, Loredana; Moser, Gabriele; Sciaccaluga, Emanuele; Trucco, Andrea
2016-01-01
The success of distributed power generation by plants based on solid oxide fuel cells (SOFCs) is hindered by reliability problems that can be mitigated through an effective fault detection and isolation (FDI) system. However, the numerous operating conditions under which such plants can operate and the random size of the possible faults make identifying damaged plant components from the physical variables measured in the plant very difficult. In this context, we assess two classical FDI strategies (model-based with a fault signature matrix, and data-driven with statistical classification) and their combination. For this assessment, a quantitative model of the SOFC-based plant, able to simulate both regular and faulty conditions, is used. Moreover, a hybrid approach based on the random forest (RF) classification method is introduced to address the discrimination of regular and faulty situations, owing to its practical advantages. Working with a common dataset, the FDI performances obtained using the aforementioned strategies, with different sets of monitored variables, are observed and compared. We conclude that the hybrid FDI strategy, realized by combining a model-based scheme with a statistical classifier, outperforms the other strategies. In addition, the inclusion of two physical variables that should be measured inside the SOFCs can significantly improve the FDI performance, despite the actual difficulty in performing such measurements. PMID:27556472
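A hedged sketch of the random forest classification stage on synthetic monitored variables (the feature set, the fault rule, and the labels below are invented; the paper's data come from the plant simulator):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 2_000
X = rng.normal(size=(n, 6))                      # monitored plant variables
y = (X[:, 0] + 0.5 * X[:, 3] > 1.0).astype(int)  # 1 = faulty, 0 = regular

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te).round(3))
# Feature importances hint at which monitored variables drive the FDI
# decision, mirroring the paper's point about adding internal SOFC variables.
print("variable importances:", clf.feature_importances_.round(2))
```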
Online neural monitoring of statistical learning
Batterink, Laura J.; Paller, Ken A.
2017-01-01
The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the RT task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. PMID:28324696
Effect of fastskin suits on performance, drag, and energy cost of swimming.
Chatard, Jean-Claude; Wilson, Barry
2008-06-01
To investigate the effect of fastskin suits on 25- to 800-m performances, drag, and energy cost of swimming. The performances, stroke rate and distance per stroke, were measured for 14 competitive swimmers in a 25-m pool, when wearing a normal suit (N) and when wearing a full-body suit (FB) or a waist-to-ankle suit (L). Passive drag, oxygen uptake, blood lactate, and the perceived exertion were measured in a flume. There was a 3.2% +/- 2.4% performance benefit for all subjects over the six distances covered at maximal speed wearing FB and L when compared with N. When wearing L, the gain was significantly lower (1.8% +/- 2.5%, P < 0.01) than when wearing FB compared with N. The exercise perception was significantly lower when wearing FB than N, whereas there was no statistical difference when wearing L. The distance per stroke was significantly higher when wearing FB and L, whereas the differences in stroke rate were not statistically significant. There was a significant reduction in drag when wearing FB and L of 6.2% +/- 7.9% and 4.7% +/- 4.4%, respectively (P < 0.01), whereas the energy cost of swimming was significantly reduced when wearing FB and L by 4.5% +/- 5.4% and 5.5% +/- 3.1%, respectively (P < 0.01). However, the differences between FB and L were not statistically significant for drag and oxygen uptake. FB and L significantly reduced passive drag, and this was associated with a decreased energy cost of submaximal swimming and an increased distance per stroke, at the same stroke rates, and reduced freestyle performance time.
Effect of Kinesiotaping and Knee Brace on Functional Performance in Recreational Athletes
Ulusoy, Burak; İldiz, Bülent; Tunay, Volga Bayrakçı
2014-01-01
Objectives: Kinesiotaping is a popular taping method used for both therapeutic and performance enhancement purposes. Knee braces are widely used for injury prevention in sport, but their effectiveness for performance is still controversial. The aim of this study was to determine whether kinesiotape or brace was more effective on functional performance. Methods: A total of twenty male recreational football players (mean±standard deviation (SD) age: 22.5±0.68 years, height: 175.15±3.37 cm, body weight: 74.52±12.41 kg) voluntarily participated in this study. Participants were tested with kinesiotape, with brace, and without kinesiotape and brace. Tests were applied one day after patellar kinesiotaping (correction technique). Balance was measured with the Modified Y Balance Test (dynamic test), agility was measured by the T test, and muscle strength and anaerobic power were assessed by vertical jump and triple hop tests. The Wilcoxon signed rank test was employed to determine the statistical significance of differences between the conditions with kinesiotape, with brace, and without kinesiotape and brace. Results: Statistically significant differences were found in the triple hop test between kinesiotaping and the no-tape/no-brace condition, in the T test between bracing and kinesiotaping, and in the vertical jump between kinesiotaping and the no-tape/no-brace condition (p<0.001), in favour of kinesiotaping in all tests. No statistically significant difference was found in the Modified Y Balance Test across all conditions (p>0.05). Conclusion: Kinesiotaping had positive effects on agility and muscle strength but had no effect on balance in football players. On the other hand, the brace had no effect on functional performance tests.
Lee, L.; Helsel, D.
2005-01-01
Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards.
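For intuition, a simplified single-detection-limit version of ROS in Python (the R library described above handles multiple detection limits and refined plotting positions; the concentrations below are invented):

```python
import numpy as np
from scipy import stats

# Detected values plus three results reported only as "<0.7".
detected = np.array([0.8, 1.2, 1.5, 2.3, 3.1, 4.8, 7.5, 9.0, 15.0])
n_censored, dl = 3, 0.7
n = len(detected) + n_censored

# Plotting positions for the detected values (simple Weibull positions;
# full ROS adjusts these for each censoring threshold).
ranks = np.arange(n_censored + 1, n + 1)
pp = ranks / (n + 1)

# Fit log(concentration) against normal quantiles on the detected data.
slope, intercept, *_ = stats.linregress(stats.norm.ppf(pp),
                                        np.log(np.sort(detected)))

# Impute the censored observations from the fitted line.
pp_cens = np.arange(1, n_censored + 1) / (n + 1)
imputed = np.exp(intercept + slope * stats.norm.ppf(pp_cens))

full = np.concatenate([imputed, detected])
print("ROS mean:", full.mean().round(3), " ROS std:", full.std(ddof=1).round(3))
```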
Color image encryption using random transforms, phase retrieval, chaotic maps, and diffusion
NASA Astrophysics Data System (ADS)
Annaby, M. H.; Rushdi, M. A.; Nehary, E. A.
2018-04-01
The recent tremendous proliferation of color imaging applications has been accompanied by growing research in data encryption to secure color images against adversary attacks. While recent color image encryption techniques perform reasonably well, they still exhibit vulnerabilities and deficiencies in terms of statistical security measures due to image data redundancy and inherent weaknesses. This paper proposes two encryption algorithms that largely treat these deficiencies and boost the security strength through novel integration of the random fractional Fourier transforms, phase retrieval algorithms, as well as chaotic scrambling and diffusion. We show through detailed experiments and statistical analysis that the proposed enhancements significantly improve security measures and immunity to attacks.
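As a toy illustration of the chaotic scrambling and diffusion stages only (the proposed algorithms additionally use random fractional Fourier transforms and phase retrieval, which are omitted here; the image, map parameters, and keys are all invented):

```python
import numpy as np

def logistic_sequence(x0, r, n):
    """Iterate the logistic map x <- r x (1 - x); chaotic for r near 4."""
    x = np.empty(n)
    for i in range(n):
        x0 = r * x0 * (1 - x0)
        x[i] = x0
    return x

img = np.arange(64, dtype=np.uint8).reshape(8, 8)     # stand-in image
flat = img.flatten()

# Scrambling: permute pixel positions by sorting a chaotic sequence.
perm = np.argsort(logistic_sequence(0.3141, 3.9999, flat.size))
scrambled = flat[perm]

# Diffusion: chain each pixel with the previous ciphertext byte and a
# keystream byte derived from the same map, so one plaintext change
# propagates through the whole ciphertext.
key = (logistic_sequence(0.2718, 3.9998, flat.size) * 256).astype(np.uint8)
cipher = np.empty_like(scrambled)
prev = np.uint8(0)
for i, p in enumerate(scrambled):
    cipher[i] = np.uint8((int(p) + int(key[i]) + int(prev)) % 256)
    prev = cipher[i]
print(cipher[:8])
```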
A Generalized Approach for Measuring Relationships Among Genes.
Wang, Lijun; Ahsan, Md Asif; Chen, Ming
2017-07-21
Several methods for identifying relationships between pairs of genes have been developed. In this article, we present a generalized approach for measuring relationships between any pair of genes, based on statistical prediction. We derive two particular versions of the generalized approach: least squares estimation (LSE) and nearest neighbors prediction (NNP). By mathematical proof, LSE is equivalent to the methods based on correlation, and NNP approximates one popular method, the maximal information coefficient (MIC), according to their performance in simulations and on a real dataset. Moreover, the approach based on statistical prediction can be extended from two-gene relationships to multi-gene relationships. This application would help to identify relationships among multiple genes.
Statistical analysis of data and modeling of Nanodust measured by STEREO/WAVES at 1AU
NASA Astrophysics Data System (ADS)
Belheouane, S.; Zaslavsky, A.; Meyer-Vernet, N.; Issautier, K.; Czechowski, A.; Mann, I.; Le Chat, G.; Zouganelis, I.; Maksimovic, M.
2012-12-01
We study the flux of dust particles of nanometer size measured at 1 AU by the S/WAVES instrument aboard the twin STEREO spacecraft. When they impact the spacecraft at very high speed, these nanodust particles, first detected by Meyer-Vernet et al. (2009), generate plasma clouds and produce voltage pulses measured by the electric antennas. The Time Domain Sampler (TDS) of the radio and plasma instrument produces temporal windows containing several pulses. We perform a statistical study of the distribution of pulse amplitudes and arrival times in the measuring window during the 2007-2012 period. We interpret the results using simulations of the dynamics of nanodust in the solar wind based on the model of Czechowski and Mann (2010). We also investigate the variations of nanodust fluxes while STEREO rotates about the sunward axis (roll); this reveals that some directions are privileged.
NASA Technical Reports Server (NTRS)
1981-01-01
The application of statistical methods to recorded ozone measurements is discussed. A long-term depletion of ozone at the magnitudes predicted by the NAS would be harmful to most forms of life. Empirical prewhitening filters, whose derivation is independent of the underlying physical mechanisms, were analyzed. Statistical analysis performs a checks-and-balances role. Time series filtering separates variations into systematic and random parts, errors are uncorrelated, and significant phase-lag dependencies are identified. The use of time series modeling to enhance the capability of detecting trends is discussed.
Delayed Implants Outcome in Maxillary Molar Region.
Crespi, Roberto; Capparè, Paolo; Crespi, Giovanni; Gastaldi, Giorgio; Gherlone, Enrico F
2017-04-01
The aim of the present study was to assess bone volume changes in maxillary molar regions after delayed implant placement. Patients presented with large bone defects after tooth extraction. Reactive soft tissue was left in the defects; no grafts were used. Cone beam computed tomography (CBCT) scans were performed before tooth extraction, at implant placement (3 months after extraction), and 3 years after implant placement, and bone volume measurements were assessed. Bucco-lingual width showed a statistically significant decrease (p = .013) at implant placement, 3 months after extraction. Moreover, a statistically significant increase (p < .01) was measured 3 years after implant placement. No statistically significant differences (p > .05) were found between baseline values (before extraction) and values 3 years after implant placement. Vertical dimension showed no statistically significant differences (p > .05) at implant placement, 3 months after extraction. Statistically significant differences (p < .0001) were found between baseline values (before extraction) and values at implant placement (3 months after extraction), as well as between implant-placement values and those 3 years later. CBCT scans showed successful outcomes for delayed implants placed in large bone defects at the 3-year follow-up.
Shih, Shirley L; Zafonte, Ross; Bates, David W; Gerrard, Paul; Goldstein, Richard; Mix, Jacqueline; Niewczyk, Paulette; Greysen, S Ryan; Kazis, Lewis; Ryan, Colleen M; Schneider, Jeffrey C
2016-10-01
Functional status is associated with patient outcomes, but is rarely included in hospital readmission risk models. The objective of this study was to determine whether functional status is a better predictor of 30-day acute care readmission than traditionally investigated variables including demographics and comorbidities. Retrospective database analysis between 2002 and 2011. 1158 US inpatient rehabilitation facilities. 4,199,002 inpatient rehabilitation facility admissions comprising patients from 16 impairment groups within the Uniform Data System for Medical Rehabilitation database. Logistic regression models predicting 30-day readmission were developed based on age, gender, comorbidities (Elixhauser comorbidity index, Deyo-Charlson comorbidity index, and Medicare comorbidity tier system), and functional status [Functional Independence Measure (FIM)]. We hypothesized that (1) function-based models would outperform demographic- and comorbidity-based models and (2) the addition of demographic and comorbidity data would not significantly enhance function-based models. For each impairment group, Function Only Models were compared against Demographic-Comorbidity Models and Function Plus Models (Function-Demographic-Comorbidity Models). The primary outcome was 30-day readmission, and the primary measure of model performance was the c-statistic. All-cause 30-day readmission rate from inpatient rehabilitation facilities to acute care hospitals was 9.87%. C-statistics for the Function Only Models were 0.64 to 0.70. For all 16 impairment groups, the Function Only Model demonstrated better c-statistics than the Demographic-Comorbidity Models (c-statistic difference: 0.03-0.12). The best-performing Function Plus Models exhibited negligible improvements in model performance compared to Function Only Models, with c-statistic improvements of only 0.01 to 0.05. Readmissions are currently used as a marker of hospital performance, with recent financial penalties to hospitals for excessive readmissions. Function-based readmission models outperform models based only on demographics and comorbidities. Readmission risk models would benefit from the inclusion of functional status as a primary predictor.
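To make the model comparison concrete, a hedged sketch on synthetic data (the variable names fim and comorb and all coefficients are invented; the apparent c-statistic is computed in-sample for brevity, whereas the study used far richer models and data):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(11)
n = 5_000
fim = rng.normal(80, 15, n)                 # functional status score
comorb = rng.poisson(2, n)                  # comorbidity count
logit = -0.05 * (fim - 80) + 0.05 * comorb - 2.2
y = rng.random(n) < 1 / (1 + np.exp(-logit))   # 30-day readmission

def cstat(cols):
    X = np.column_stack(cols)
    p = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
    return roc_auc_score(y, p)              # c-statistic = AUC

print("Function only:      ", round(cstat([fim]), 3))
print("Function + comorbid:", round(cstat([fim, comorb]), 3))
```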
Cannon, Edward O; Amini, Ata; Bender, Andreas; Sternberg, Michael J E; Muggleton, Stephen H; Glen, Robert C; Mitchell, John B O
2007-05-01
We investigate the classification performance of circular fingerprints in combination with the Naive Bayes Classifier (MP2D), Inductive Logic Programming (ILP) and Support Vector Inductive Logic Programming (SVILP) on a standard molecular benchmark dataset comprising 11 activity classes and about 102,000 structures. The Naive Bayes Classifier treats features independently while ILP combines structural fragments, and then creates new features with higher predictive power. SVILP is a very recently presented method which adds a support vector machine after common ILP procedures. The performance of the methods is evaluated via a number of statistical measures, namely recall, specificity, precision, F-measure, Matthews Correlation Coefficient, area under the Receiver Operating Characteristic (ROC) curve and enrichment factor (EF). According to the F-measure, which takes both recall and precision into account, SVILP is for seven out of the 11 classes the superior method. The results show that the Bayes Classifier gives the best recall performance for eight of the 11 targets, but has a much lower precision, specificity and F-measure. The SVILP model on the other hand has the highest recall for only three of the 11 classes, but generally far superior specificity and precision. To evaluate the statistical significance of the SVILP superiority, we employ McNemar's test which shows that SVILP performs significantly (p < 5%) better than both other methods for six out of 11 activity classes, while being superior with less significance for three of the remaining classes. While previously the Bayes Classifier was shown to perform very well in molecular classification studies, these results suggest that SVILP is able to extract additional knowledge from the data, thus improving classification results further.
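For reference, a minimal sketch of the exact McNemar test used above to compare two classifiers on the same molecules; only the discordant pairs matter, and the counts below are invented, not taken from the paper:

```python
from scipy import stats

b = 46   # molecules classifier A got right and classifier B got wrong
c = 21   # molecules classifier B got right and classifier A got wrong

# Under H0 the discordant outcomes are equally likely,
# so b ~ Binomial(b + c, 0.5).
p_value = stats.binomtest(b, n=b + c, p=0.5).pvalue
print(f"discordant pairs: {b + c}, two-sided p = {p_value:.4f}")
```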
A new measure for gene expression biclustering based on non-parametric correlation.
Flores, Jose L; Inza, Iñaki; Larrañaga, Pedro; Calvo, Borja
2013-12-01
Biclustering, one of the emerging techniques for the analysis of DNA microarray data, is the search for subsets of genes and conditions that are coherently expressed. These subgroups provide clues about the main biological processes. Until now, different approaches to this problem have been proposed. Most of them use the mean squared residue as the quality measure, but it cannot detect relevant and interesting patterns such as shifting or scaling patterns. Furthermore, recent papers show that there exist new coherence patterns involved in different kinds of cancer and tumors, such as inverse relationships between genes, which cannot be captured. The proposed measure, called the Spearman's biclustering measure (SBM), estimates the quality of a bicluster based on the non-linear correlation among genes and conditions simultaneously. The search for biclusters is performed using an evolutionary technique called estimation of distribution algorithms, which uses the SBM measure as its fitness function. This approach has been examined from different points of view using artificial and real microarrays. The assessment process involved the use of quality indexes, a set of reference bicluster patterns including new patterns, and a set of statistical tests. Performance was also examined using real microarrays and compared with different algorithmic approaches such as Bimax, CC, OPSM, Plaid, and xMotifs. SBM shows several advantages, such as the ability to recognize more complex coherence patterns (shifting, scaling, and inversion) and the capability to selectively marginalize genes and conditions depending on statistical significance.
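In the spirit of SBM (this is a rough sketch, not the published definition), one can score a candidate bicluster by the mean absolute Spearman correlation over gene pairs, which rewards shifting, scaling, and inverted patterns alike where the mean squared residue would not:

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

def spearman_quality(bicluster):
    """Mean absolute rank correlation over all gene pairs (rows)."""
    rhos = [abs(spearmanr(g1, g2)[0])
            for g1, g2 in combinations(bicluster, 2)]
    return float(np.mean(rhos))

base = np.array([1.0, 3.0, 2.0, 5.0, 4.0])
bic = np.vstack([base, 2 * base + 1, 10 - base])  # scaled and inverted genes
print("quality:", spearman_quality(bic))          # 1.0: perfectly coherent
```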
Functional status predicts acute care readmission in the traumatic spinal cord injury population.
Huang, Donna; Slocum, Chloe; Silver, Julie K; Morgan, James W; Goldstein, Richard; Zafonte, Ross; Schneider, Jeffrey C
2018-03-29
Context/objective: Acute care readmission has been identified as an important marker of healthcare quality. Most previous models assessing risk prediction of readmission incorporate variables for medical comorbidity. We hypothesized that functional status is a more robust predictor of readmission in the spinal cord injury population than medical comorbidities. Design: Retrospective cross-sectional analysis. Setting: Inpatient rehabilitation facilities, Uniform Data System for Medical Rehabilitation data from 2002 to 2012. Participants: Traumatic spinal cord injury patients. Outcome measures: A logistic regression model for predicting acute care readmission based on demographic variables and functional status (Functional Model) was compared with models incorporating demographics, functional status, and medical comorbidities (Functional-Plus) or models including demographics and medical comorbidities (Demographic-Comorbidity). The primary outcomes were 3- and 30-day readmission, and the primary measure of model performance was the c-statistic. Results: There were a total of 68,395 patients, with 1,469 (2.15%) readmitted at 3 days and 7,081 (10.35%) readmitted at 30 days. The c-statistics for the Functional Model were 0.703 and 0.654 for 3 and 30 days. The Functional Model outperformed the Demographic-Comorbidity models at 3 days (c-statistic difference: 0.066-0.096) and outperformed two of the three Demographic-Comorbidity models at 30 days (c-statistic difference: 0.029-0.056). The Functional-Plus models exhibited negligible improvements (0.002-0.010) in model performance compared to the Functional models. Conclusion: Readmissions are used as a marker of hospital performance. Function-based readmission models in the spinal cord injury population outperform models incorporating medical comorbidities. Readmission risk models for this population would benefit from the inclusion of functional status.
Hypoxia and flight performance of military instructor pilots in a flight simulator.
Temme, Leonard A; Still, David L; Acromite, Michael T
2010-07-01
Military aircrew and other operational personnel frequently perform their duties at altitudes posing a significant hypoxia risk, often with limited access to supplemental oxygen. Despite the significant risk hypoxia poses, there are few studies relating it to primary flight performance, which is the purpose of the present study. Objective, quantitative measures of aircraft control were collected from 14 experienced, active duty instructor pilot volunteers as they breathed an air/nitrogen mix that provided an oxygen partial pressure equivalent to the atmosphere at 18,000 ft (5486.4 m) above mean sea level. The flight task required holding a constant airspeed, altitude, and heading at an airspeed significantly slower than the aircraft's minimum drag speed. The simulated aircraft's inherent instability at the target speed challenged the pilot to maintain constant control of the aircraft in order to minimize deviations from the assigned flight parameters. Each pilot's flight performance was evaluated by measuring all deviations from assigned target values. Hypoxia degraded the pilots' precision of altitude and airspeed control by 53%, a statistically significant decrease in flight performance. The effect on heading control was not statistically significant. There was no evidence of performance differences when breathing room air pre- and post-hypoxia. Moderate levels of hypoxia degraded the ability of military instructor pilots to perform a precision slow flight task. This is one of a small number of studies to quantify an effect of hypoxia on primary flight performance.
Miller, Aaron E; Cohen, Bruce A; Krieger, Stephen C; Markowitz, Clyde E; Mattson, David H; Tselentis, Helen N
2014-01-01
Symptom management remains a challenging clinical aspect of multiple sclerosis (MS). The objective was to design a performance improvement continuing medical education (PI CME) activity for better clinical management of MS-related depression, fatigue, mobility impairment/falls, and spasticity. Ten volunteer MS centers participated in a three-stage PI CME model: A) baseline assessment; B) practice improvement CME intervention; C) reassessment. Expert faculty developed performance measures and activity intervention tools. Designated MS center champions reviewed patient charts and entered data into an online database. Stage C data were collected eight weeks after implementation of the intervention and compared with Stage A baseline data to measure change in performance. Aggregate data from the 10 participating MS centers (405 patient charts) revealed performance improvements in the assessment of all four MS-related symptoms. Statistically significant improvements were found in the documented assessment of mobility impairment/falls (p=0.003) and spasticity (p<0.001). For documentation of care plans, statistically significant improvements were reported for fatigue (p=0.007) and mobility impairment/falls (p=0.040); non-significant changes were noted for depression and spasticity. Our PI CME interventions demonstrated performance improvement in the management of MS-related symptoms. This PI CME model (available at www.achlpicme.org/ms/toolkit) offers a new perspective on enhancing symptom management in patients with MS.
NASA Astrophysics Data System (ADS)
Pandey, Gavendra; Sharan, Maithili
2018-01-01
Application of atmospheric dispersion models in air quality analysis requires a proper representation of the vertical and horizontal growth of the plume. For this purpose, various schemes for the parameterization of the dispersion parameters σ's are described in both stable and unstable conditions. These schemes differ in (i) the extent to which they use on-site measurements, (ii) formulations developed for other sites, and (iii) empirical relations. The performance of these schemes is evaluated in an earlier developed IIT (Indian Institute of Technology) dispersion model with data sets from single and multiple releases conducted at the Fusion Field Trials, Dugway Proving Ground, Utah, 2007. Qualitative and quantitative evaluation of the relative performance of all the schemes is carried out in both stable and unstable conditions in the light of (i) peak/maximum concentrations and (ii) overall concentration distribution. The blocked bootstrap resampling technique is adopted to investigate the statistical significance of the differences in performance of the schemes by computing 95% confidence limits on the parameters FB (fractional bias) and NMSE (normalized mean square error). The various analyses based on selected statistical measures indicated consistency in the qualitative and quantitative performances of the σ schemes. The scheme based on the standard deviation of wind velocity fluctuations and Lagrangian time scales exhibits a relatively better performance in predicting the peak as well as the lateral spread.
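The sketch below illustrates how FB and NMSE might be computed with percentile-bootstrap confidence limits. It is a minimal approximation: the paper uses a blocked bootstrap, whereas this resampler draws individual observation-prediction pairs, and all concentration values are invented.

```python
import numpy as np

def fb(obs, pred):
    # Fractional bias: 0 for perfect agreement, bounded in [-2, 2]
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def nmse(obs, pred):
    # Normalized mean square error: 0 for perfect agreement
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

def bootstrap_ci(obs, pred, stat, n_boot=2000, alpha=0.05, seed=None):
    # Percentile bootstrap CI; a blocked variant would resample contiguous
    # blocks of observations instead of single (obs, pred) pairs.
    rng = np.random.default_rng(seed)
    n = len(obs)
    vals = [stat(obs[idx], pred[idx])
            for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(vals, [100 * alpha / 2, 100 * (1 - alpha / 2)])

obs = np.array([1.2, 0.8, 2.3, 1.7, 0.9, 1.1])   # observed concentrations (arbitrary units)
pred = np.array([1.0, 1.1, 2.0, 1.5, 1.2, 0.9])  # modelled concentrations
print(fb(obs, pred), nmse(obs, pred), bootstrap_ci(obs, pred, fb, seed=1))
```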
van Mierlo, Trevor; Hyatt, Douglas; Ching, Andrew T
2016-01-01
Digital Health Social Networks (DHSNs) are common; however, there are few metrics that can be used to identify participation inequality. The objective of this study was to investigate whether the Gini coefficient, an economic measure of statistical dispersion traditionally used to measure income inequality, could be employed to measure DHSN inequality. Quarterly Gini coefficients were derived from four long-standing DHSNs. The combined data set included 625,736 posts that were generated from 15,181 actors over 18,671 days. The range of actors (8-2323), posts (29-28,684), and Gini coefficients (0.15-0.37) varied. Pearson correlations indicated statistically significant associations between number of actors and number of posts (0.527-0.835, p < .001), and between Gini coefficients and number of posts (0.342-0.725, p < .001). However, the association between Gini coefficient and number of actors was only statistically significant for the addiction networks (0.619 and 0.276, p < .036). Linear regression models had positive but mixed R² results (0.333-0.527). In all four regression models, the association between Gini coefficient and posts was statistically significant (t = 3.346-7.381, p < .002). However, unlike the Pearson correlations, the association between Gini coefficient and number of actors was only statistically significant in the two mental health networks (t = -4.305 and -5.934, p < .001). The Gini coefficient is helpful in measuring shifts in DHSN inequality. However, as a standalone metric, the Gini coefficient does not indicate optimal numbers or ratios of actors to posts, or effective network engagement. Further, mixed-methods research investigating quantitative performance metrics is required.
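A minimal sketch of the quarterly computation described above: the Gini coefficient of the distribution of posts across actors, using the standard ordered-values formula. The post counts are hypothetical.

```python
import numpy as np

def gini(posts_per_actor):
    """Gini coefficient of a non-negative array (0 = perfect equality)."""
    x = np.sort(np.asarray(posts_per_actor, dtype=float))
    n = x.size
    # Standard formula on the ordered values:
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    index = np.arange(1, n + 1)
    return 2.0 * np.sum(index * x) / (n * x.sum()) - (n + 1.0) / n

# Hypothetical quarter: posts contributed by each of ten actors
print(round(gini([1, 1, 2, 2, 3, 5, 8, 20, 50, 200]), 3))  # high inequality
```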
Busfield, Benjamin T; Kharrazi, F Daniel; Starkey, Chad; Lombardo, Stephen J; Seegmiller, Jeffrey
2009-08-01
The purpose of this study was to determine the rate of return to play and to quantify the effect on the basketball player's performance after surgical reconstruction of the anterior cruciate ligament (ACL). Surgical injuries involving the ACL were queried for a 10-year period (1993-1994 season through 2004-2005 season) from the database maintained by the National Basketball Association (NBA). Standard statistical categories and the player efficiency rating (PER), a measure that accounts for positive and negative playing statistics, were calculated to determine the impact of the injury on player performance relative to a matched comparison group. Over the study period, 31 NBA players had 32 ACL reconstructions. Two patients were excluded because of multiple ACL injuries, one was excluded because he never participated in league play, and another was excluded because the injury resulted from nonathletic activity. Of the 27 players in the study group, 6 (22%) did not return to NBA competition. Of the 21 players (78%) who did return to play, 4 (15%) had an increase over the preinjury PER, 5 (19%) remained within 1 point of the preinjury PER, and the PER decreased by more than 1 point after return to play in 12 (44%). Although decreases occurred in most of the statistical categories for players returning from ACL surgery, the number of games played, field goal percentage, and number of turnovers per game were the only categories with a statistically significant decrease. Players in the comparison group had a statistically significant increase in the PER over their careers, whereas the study group had a marked, though not statistically significant, decrease in the PER in the season after reconstruction. After ACL reconstruction in 27 basketball players, 22% did not return to a sanctioned NBA game. For those returning to play, performance decreased by more than 1 PER point in 44% of the patients, although the changes were not statistically significant relative to the comparison group. Level IV, therapeutic case series.
ERIC Educational Resources Information Center
Hess, Richard Wayne
Stability of performance on a criterion referenced reading test was examined for 413 students in grades one through six. The test, which measures 367 behavioral reading objectives, was administered twice to each student, with an interval of at least three weeks between the first and second administrations. Three statistical indices of permanence…
Development of Instructor Support Feature Guidelines. Volume 1.
1986-05-01
...dated) Flight Objectives Pamphlet (8/84), TAC Syllabus (8/84), Gradesheet, B-52 Training Program WST Coursebook (not dated), Console Familiarization Course... Wordstar Lesson Plans (1984), Gradesheets, Instructor Handbook (3/82), KC-135 Pilot WST Coursebook (1/84), Navigator WST Coursebook (1/84), T-37 Instrument... time aircrew performance measurement and instructor feedback, and post-mission data retrieval and analysis. Various levels of statistical performance
ERIC Educational Resources Information Center
Lawrence, Jason S.; Charbonneau, Joseph
2009-01-01
Two studies showed that the link between how much students base their self-worth on academics and their math performance depends on whether their identification with math was statistically controlled and whether the task measured ability or not. Study 1 showed that, when math identification was uncontrolled and the task was ability-diagnostic,…
Statistical Validation for Clinical Measures: Repeatability and Agreement of Kinect™-Based Software.
Lopez, Natalia; Perez, Elisa; Tello, Emanuel; Rodrigo, Alejandro; Valentinuzzi, Max E
2018-01-01
The rehabilitation process is a fundamental stage in the recovery of people's capabilities. However, the evaluation of the process is performed by physiatrists and medical doctors, mostly based on their observations, that is, a subjective appreciation of the patient's evolution. This paper proposes a platform for tracking the movement of an individual's upper limb using Kinect sensor(s), to be applied to the patient during the rehabilitation process. The main contribution is the development of quantifying software and the statistical validation of its performance, repeatability, and clinical use in the rehabilitation process. The software determines joint angles and upper limb trajectories for the construction of a specific rehabilitation protocol and quantifies the treatment evolution. In turn, the information is presented via a graphical interface that allows the recording, storage, and reporting of the patient's data. For clinical purposes, the software information is statistically validated with three different methodologies, comparing the measures with a goniometer in terms of agreement and repeatability. The agreement of joint angles measured with the proposed software and the goniometer is evaluated with Bland-Altman plots; all measurements fell well within the limits of agreement, implying interchangeability of the two techniques. Additionally, the results of the Bland-Altman analysis of repeatability show 95% confidence. Finally, the physiotherapists' qualitative assessment shows encouraging results for clinical use. The main conclusion is that the software is capable of offering a clinical history of the patient and is useful for quantification of rehabilitation success. The simplicity, low cost, and visualization possibilities enhance the use of the Kinect-based software for rehabilitation and other applications, and the experts' opinion endorses the choice of our approach for clinical practice. Comparison of the new measurement technique with established goniometric methods determines that the proposed software agrees sufficiently to be used interchangeably.
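For readers unfamiliar with the agreement analysis used above, the sketch below computes the Bland-Altman bias and 95% limits of agreement between two measurement methods. The angle values are hypothetical and the code is illustrative, not part of the published software.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)   # half-width of the limits of agreement
    return bias, bias - loa, bias + loa

# Hypothetical elbow-flexion angles (degrees): software vs. goniometer
software   = [92.1, 88.4, 101.3, 95.0, 90.7, 99.8]
goniometer = [91.5, 89.0, 100.1, 96.2, 90.0, 98.9]
print(bland_altman(software, goniometer))
```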
Evaporation residue cross-section measurements for 48Ti-induced reactions
NASA Astrophysics Data System (ADS)
Sharma, Priya; Behera, B. R.; Mahajan, Ruchi; Thakur, Meenu; Kaur, Gurpreet; Kapoor, Kushal; Rani, Kavita; Madhavan, N.; Nath, S.; Gehlot, J.; Dubey, R.; Mazumdar, I.; Patel, S. M.; Dhibar, M.; Hosamani, M. M.; Khushboo, Kumar, Neeraj; Shamlath, A.; Mohanto, G.; Pal, Santanu
2017-09-01
Background: A significant research effort is currently aimed at understanding the synthesis of heavy elements. For this purpose, heavy ion induced fusion reactions are used, and various experimental observations have indicated the influence of shell and deformation effects in the compound nucleus (CN) formation. There is a need to understand these two effects. Purpose: To investigate the effect of proton shell closure and deformation through the comparison of evaporation residue (ER) cross sections for systems involving heavy compound nuclei around the Z_CN = 82 region. Methods: A systematic study of ER cross-section measurements was carried out for the 48Ti + 142,150Nd and 48Ti + 144Sm systems in the energy range of 140-205 MeV. The measurement was performed using the gas-filled mode of the hybrid recoil mass analyzer at the Inter University Accelerator Centre (IUAC), New Delhi. Theoretical calculations based on a statistical model were carried out incorporating an adjustable barrier scaling factor to fit the experimental ER cross sections. Coupled-channel calculations were also performed using the ccfull code to obtain the spin distribution of the CN, which was used as an input in the calculations. Results: Experimental ER cross sections for 48Ti + 142,150Nd were found to be considerably smaller than the statistical model predictions, whereas experimental and statistical model predictions for 48Ti + 144Sm were of comparable magnitudes. Conclusion: Though comparison of experimental ER cross sections with statistical model predictions indicates considerable non-compound-nuclear processes for the 48Ti + 142,150Nd reactions, no such evidence is found for the 48Ti + 144Sm system. Further investigations are required to understand the difference in fusion probabilities of the 48Ti + 142Nd and 48Ti + 144Sm systems.
Quantitative evaluation of pairs and RS steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2004-06-01
We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.
Determining Functional Reliability of Pyrotechnic Mechanical Devices
NASA Technical Reports Server (NTRS)
Bement, Laurence J.; Multhaup, Herbert A.
1997-01-01
This paper describes a new approach for evaluating mechanical performance and predicting the mechanical functional reliability of pyrotechnic devices. Not included are other possible failure modes, such as the initiation of the pyrotechnic energy source. The generally accepted go/no-go statistical approach, which requires hundreds or thousands of consecutive successful tests on identical components for reliability predictions, routinely ignores the physics of failure. The approach described in this paper begins with measuring, understanding, and controlling mechanical performance variables. Then, the energy required to accomplish the function is compared to that delivered by the pyrotechnic energy source to determine the mechanical functional margin. Finally, the data collected in establishing functional margin are analyzed to predict mechanical functional reliability, using small-sample statistics. A careful application of this approach can provide considerable cost savings and improved understanding compared with go/no-go statistics. Performance and the effects of variables can be defined, and reliability predictions can be made by evaluating 20 or fewer units. The application of this approach to a pin puller used on a successful NASA mission is provided as an example.
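One way to make the margin idea concrete is a stress-strength calculation: compare the distribution of delivered energy with the distribution of required energy. The sketch below assumes normality and uses invented energy values; it is not the paper's exact small-sample procedure, only an illustration of the margin-based reasoning.

```python
import numpy as np
from scipy import stats

# Measured energies from a small test series (hypothetical values, joules)
e_required  = np.array([10.2, 11.0, 9.8, 10.5, 10.1])   # energy to accomplish the function
e_delivered = np.array([18.4, 17.9, 19.2, 18.8, 18.1])  # energy from the pyrotechnic source

mu_r, sd_r = e_required.mean(), e_required.std(ddof=1)
mu_d, sd_d = e_delivered.mean(), e_delivered.std(ddof=1)

# Functional margin: mean delivered energy minus mean required energy
margin = mu_d - mu_r

# Stress-strength reliability under a normality assumption:
# P(delivered > required) = Phi(margin / sqrt(sd_d^2 + sd_r^2))
z = margin / np.hypot(sd_d, sd_r)
print(f"margin = {margin:.2f} J, reliability ~ {stats.norm.cdf(z):.6f}")
```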
Statistical modeling of natural backgrounds in hyperspectral LWIR data
NASA Astrophysics Data System (ADS)
Truslow, Eric; Manolakis, Dimitris; Cooley, Thomas; Meola, Joseph
2016-09-01
Hyperspectral sensors operating in the long wave infrared (LWIR) have a wealth of applications including remote material identification and rare target detection. While statistical models for modeling surface reflectance in visible and near-infrared regimes have been well studied, models for the temperature and emissivity in the LWIR have not been rigorously investigated. In this paper, we investigate modeling hyperspectral LWIR data using a statistical mixture model for the emissivity and surface temperature. Statistical models for the surface parameters can be used to simulate surface radiances and at-sensor radiance which drives the variability of measured radiance and ultimately the performance of signal processing algorithms. Thus, having models that adequately capture data variation is extremely important for studying performance trades. The purpose of this paper is twofold. First, we study the validity of this model using real hyperspectral data, and compare the relative variability of hyperspectral data in the LWIR and visible and near-infrared (VNIR) regimes. Second, we illustrate how materials that are easily distinguished in the VNIR, may be difficult to separate when imaged in the LWIR.
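A minimal sketch of the mixture-model idea, assuming a two-material scene and using scikit-learn's GaussianMixture on simulated temperature-emissivity pairs; the material statistics are invented, and the real analysis operates on measured hyperspectral radiance rather than these two summary parameters.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical stand-in for per-pixel LWIR surface parameters:
# columns = [surface temperature (K), band-averaged emissivity]
rng = np.random.default_rng(0)
grass = np.column_stack([rng.normal(295, 2, 500), rng.normal(0.98, 0.005, 500)])
soil  = np.column_stack([rng.normal(305, 4, 500), rng.normal(0.93, 0.010, 500)])
data = np.vstack([grass, soil])

# Fit a two-component Gaussian mixture to the surface parameters
gmm = GaussianMixture(n_components=2, covariance_type='full').fit(data)
print(gmm.means_)      # recovered per-class means of (temperature, emissivity)
print(gmm.weights_)    # mixing proportions of the two materials
```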
Empirical Tryout of a New Statistic for Detecting Temporally Inconsistent Responders.
Kerry, Matthew J
2018-01-01
Statistical screening of self-report data is often advised to support the quality of analyzed responses, for example, by reducing insufficient effort responding (IER). One recently introduced index, based on Mahalanobis's D for detecting outliers in cross-sectional designs, replaces centered scores with difference scores between repeated-measure items and is termed person temporal consistency (D²ptc). Although the adapted D²ptc index demonstrated usefulness in simulation datasets, it has not been applied to empirical data. The current study addresses D²ptc's low uptake by critically appraising its performance across three empirical applications. Independent samples were selected to represent a range of scenarios commonly encountered by organizational researchers. First, in Sample 1, a repeated measure of future time perspective (FTP) in experienced working adults (age > 40 years; n = 620) indicated that temporal inconsistency was significantly related to respondent age and item reverse-scoring. Second, in repeated measures of team efficacy aggregations, D²ptc successfully detected team-level inconsistency across repeated performance cycles. Third, in an experimental study dataset of subjective life expectancy, D²ptc indicated significantly more stable responding in experimental conditions compared to controls. The empirical findings support D²ptc's flexible and useful application to distinct study designs. Discussion centers on current limitations and further extensions that may be of value to psychologists screening self-report data to strengthen response quality and the meaningfulness of inferences from repeated-measures self-reports. Taken together, the findings support the usefulness of the newly devised statistic for detecting IER and other extreme response patterns.
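A minimal sketch of the index's core computation, assuming the description above: Mahalanobis D² evaluated on item-level difference scores between two measurement occasions. The data are simulated, and implementation details (e.g., the pseudo-inverse of the covariance) are illustrative choices, not necessarily those of the original authors.

```python
import numpy as np

def d2_temporal(time1, time2):
    """Mahalanobis D^2 computed on item-level difference scores.

    time1, time2: (n_persons, n_items) arrays of repeated-measure items.
    Large values flag temporally inconsistent responders.
    """
    d = np.asarray(time2, float) - np.asarray(time1, float)  # difference scores
    centered = d - d.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(d, rowvar=False))
    # Quadratic form x' S^-1 x for every person (row)
    return np.einsum('ij,jk,ik->i', centered, cov_inv, centered)

rng = np.random.default_rng(1)
t1 = rng.normal(3, 1, (200, 6))
t2 = t1 + rng.normal(0, 0.3, (200, 6))   # mostly consistent responders
t2[0] = rng.normal(3, 1, 6)              # one careless, inconsistent responder
print(np.argsort(d2_temporal(t1, t2))[-3:])  # indices of the most suspect cases
```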
Implementation of statistical process control for proteomic experiments via LC MS/MS.
Bereman, Michael S; Johnson, Richard; Bollinger, James; Boss, Yuval; Shulman, Nick; MacLean, Brendan; Hoofnagle, Andrew N; MacCoss, Michael J
2014-04-01
Statistical process control (SPC) is a robust set of tools that aids in the visualization, detection, and identification of assignable causes of variation in any process that creates products, services, or information. A tool has been developed termed Statistical Process Control in Proteomics (SProCoP) which implements aspects of SPC (e.g., control charts and Pareto analysis) into the Skyline proteomics software. It monitors five quality control metrics in a shotgun or targeted proteomic workflow. None of these metrics require peptide identification. The source code, written in the R statistical language, runs directly from the Skyline interface, which supports the use of raw data files from several of the mass spectrometry vendors. It provides real time evaluation of the chromatographic performance (e.g., retention time reproducibility, peak asymmetry, and resolution), and mass spectrometric performance (targeted peptide ion intensity and mass measurement accuracy for high resolving power instruments) via control charts. Thresholds are experiment- and instrument-specific and are determined empirically from user-defined quality control standards that enable the separation of random noise and systematic error. Finally, Pareto analysis provides a summary of performance metrics and guides the user to metrics with high variance. The utility of these charts to evaluate proteomic experiments is illustrated in two case studies.
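As an illustration of the control-chart component, the sketch below derives Shewhart-style limits from repeated quality control runs and flags a new observation. The retention times are hypothetical, and SProCoP itself is implemented in R within Skyline; this is only a schematic of the underlying idea.

```python
import numpy as np

def shewhart_limits(qc_values, k=3.0):
    """Center line and +/- k-sigma control limits from QC standard runs."""
    x = np.asarray(qc_values, float)
    center, sigma = x.mean(), x.std(ddof=1)
    return center - k * sigma, center, center + k * sigma

# Hypothetical retention times (min) of a QC peptide over repeated runs
qc_rt = [21.02, 21.05, 20.98, 21.01, 21.07, 20.99, 21.03]
lcl, cl, ucl = shewhart_limits(qc_rt)

new_run = 21.31
flag = "out of control" if not (lcl <= new_run <= ucl) else "in control"
print(f"LCL={lcl:.3f}  CL={cl:.3f}  UCL={ucl:.3f}  new run: {flag}")
```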
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to robot trajectory comparison that can be applied to any kind of trajectory and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph is provided, which helps to better understand the obtained results. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
Evaluation of the Air Void Analyzer
2013-07-01
lack of measurement would help explain the difference in values shown. Brief descriptions of other unpublished testing (Wang et al. 2008) CTL Group... structure measurements taken from the controlled laboratory mixtures. A three-phase approach was used to evaluate the machine. First, a global... method. Hypothesis testing using t-statistics was performed to increase understanding of the data collected globally in terms of the processes used for
Temporal Comparisons of Internet Topology
2014-06-01
Number CAIDA Cooperative Association of Internet Data Analysis CDN Content Delivery Network CI Confidence Interval DoS denial of service GMT Greenwich... the CAIDA data. Our methods include analysis of graph theoretical measures as well as complex network and statistical measures that will quantify the... tool that probes the Internet for topology analysis and performance [26]. Scamper uses network diagnostic tools, such as traceroute and ping, to probe
A simple biota removal algorithm for 35 GHz cloud radar measurements
NASA Astrophysics Data System (ADS)
Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas
2018-03-01
Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting the intended meteorological cloud content from the measurement often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using the radar Doppler spectral moments profile measurements. The point and volume target-based theoretical radar sensitivity curves are used for removing the receiver noise floor, and identified radar echoes are scrutinized according to the signal decorrelation period. Here, it is hypothesized that cloud echoes are temporally more coherent and homogeneous and have a longer correlation period than biota. This can be checked statistically using the ~4 s sliding mean and standard deviation of the reflectivity profiles, a step that helps screen out clouds by filtering out the biota. The final important step is the retrieval of cloud height. The proposed algorithm identifies cloud height solely through the systematic characterization of Z variability, using knowledge of the local atmospheric vertical structure in addition to theoretical, statistical, and echo-tracing tools. Thus, characterization of high-resolution cloud radar reflectivity profile measurements has been carried out with the theoretical echo sensitivity curves and observed echo statistics for true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota, whereas TEST combined with the linear depolarization ratio and spectral width performs well in filtering out biota within the highly turbulent shallow cumulus clouds in the convective boundary layer (CBL). The TEST technique is simple to implement but powerful in performance, owing to its flexibility in constraining, identifying, and filtering out the biota and screening out the true cloud content, especially the CBL clouds. Therefore, the TEST algorithm is well suited for screening out the low-level clouds that are strongly linked to the rain-making mechanism of the Indian summer monsoon region's CVS.
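A minimal sketch of the coherence test described above, assuming a time-by-range matrix of reflectivity and treating roughly 4 s of profiles as the sliding window (here 17 profiles, an assumed profile rate); the threshold and data are illustrative, not the paper's tuned values.

```python
import numpy as np

def coherence_mask(z, window=17, max_std=3.0):
    """Flag range gates whose reflectivity is temporally coherent.

    z: (n_times, n_gates) reflectivity (dBZ); NaN where no echo.
    A gate is kept as 'cloud' when its sliding standard deviation stays
    below max_std; decorrelated echoes (biota) exceed the threshold.
    """
    n_t, _ = z.shape
    half = window // 2
    keep = np.zeros_like(z, dtype=bool)
    for t in range(half, n_t - half):
        seg = z[t - half:t + half + 1]    # sliding block of profiles
        std = np.nanstd(seg, axis=0)      # per-gate temporal variability
        keep[t] = np.isfinite(z[t]) & (std < max_std)
    return keep

rng = np.random.default_rng(0)
echo = 5 + rng.normal(0, 1.0, (100, 50))             # coherent, cloud-like echo
echo[:, 25:] = 5 + rng.normal(0, 8.0, (100, 25))     # decorrelated, biota-like echo
mask = coherence_mask(echo)
print(mask[:, :25].mean(), mask[:, 25:].mean())      # cloud kept, biota rejected
```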
Sauvé, Jean-François; Beaudry, Charles; Bégin, Denis; Dion, Chantal; Gérin, Michel; Lavoué, Jérôme
2012-09-01
A quantitative determinants-of-exposure analysis of respirable crystalline silica (RCS) levels in the construction industry was performed using a database compiled from an extensive literature review. Statistical models were developed to predict work-shift exposure levels by trade. Monte Carlo simulation was used to recreate exposures derived from summarized measurements which were combined with single measurements for analysis. Modeling was performed using Tobit models within a multimodel inference framework, with year, sampling duration, type of environment, project purpose, project type, sampling strategy and use of exposure controls as potential predictors. 1346 RCS measurements were included in the analysis, of which 318 were non-detects and 228 were simulated from summary statistics. The model containing all the variables explained 22% of total variability. Apart from trade, sampling duration, year and strategy were the most influential predictors of RCS levels. The use of exposure controls was associated with an average decrease of 19% in exposure levels compared to none, and increased concentrations were found for industrial, demolition and renovation projects. Predicted geometric means for year 1999 were the highest for drilling rig operators (0.238 mg m(-3)) and tunnel construction workers (0.224 mg m(-3)), while the estimated exceedance fraction of the ACGIH TLV by trade ranged from 47% to 91%. The predicted geometric means in this study indicated important overexposure compared to the TLV. However, the low proportion of variability explained by the models suggests that the construction trade is only a moderate predictor of work-shift exposure levels. The impact of the different tasks performed during a work shift should also be assessed to provide better management and control of RCS exposure levels on construction sites.
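A minimal sketch of the Monte Carlo step described above, assuming lognormal work-shift exposures so that individual values can be recreated from a reported geometric mean (GM) and geometric standard deviation (GSD). The summary values below are hypothetical, while the TLV of 0.025 mg/m³ is the ACGIH value cited in this literature.

```python
import numpy as np

def simulate_from_summary(gm, gsd, n, seed=None):
    """Recreate individual exposures from a summarized GM/GSD.

    Assumes work-shift RCS levels are lognormal, so ln(X) is normal
    with mean ln(GM) and standard deviation ln(GSD).
    """
    rng = np.random.default_rng(seed)
    return rng.lognormal(mean=np.log(gm), sigma=np.log(gsd), size=n)

# Hypothetical summarized result: GM = 0.12 mg/m3, GSD = 2.5, n = 14 samples
sims = simulate_from_summary(0.12, 2.5, 14, seed=42)
tlv = 0.025  # ACGIH TLV for respirable crystalline silica, mg/m3
print(f"simulated exceedance of the TLV: {np.mean(sims > tlv):.0%}")
```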
Measuring hospital efficiency--comparing four European countries.
Mateus, Céu; Joaquim, Inês; Nunes, Carla
2015-02-01
Performing international comparisons of efficiency usually has two main drawbacks: the lack of comparability of data from different countries and the appropriateness and adequacy of the data selected for efficiency measurement. With inpatient discharges for four countries, some of the problems of data comparability usually found in international comparisons were mitigated. The objectives are to assess and compare hospital efficiency levels within and between countries, using stochastic frontier analysis with both cross-sectional and panel data. Data from English (2005-2008), Portuguese (2002-2009), Spanish (2003-2009) and Slovenian (2005-2009) hospital discharges and characteristics are used. Weighted hospital discharges were considered as outputs, while the numbers of employees, physicians, nurses and beds were selected as inputs of the production function. Stochastic frontier analysis using both cross-sectional and panel data was performed, as well as ordinary least squares (OLS) analysis. The adequacy of the data was assessed with Kolmogorov-Smirnov and Breusch-Pagan/Cook-Weisberg tests. The available data proved largely redundant for performing efficiency measurement using stochastic frontier analysis with cross-sectional data. The likelihood ratio test reveals that with cross-sectional data stochastic frontier analysis (SFA) is not statistically different from OLS for the Portuguese data, while SFA and OLS estimates are statistically different for the Spanish, Slovenian and English data. In the panel data, the inefficiency term is statistically different from 0 in all four countries analysed, though for Portugal it is still close to 0. Panel data are preferred over cross-sectional analysis because the results are more robust. For all countries except Slovenia, beds and employees are relevant inputs to the production process. © The Author 2015. Published by Oxford University Press on behalf of the European Public Health Association. All rights reserved.
Magnetic Field Measurements of the Spotted Yellow Dwarf DE Boo During 2001-2004
NASA Astrophysics Data System (ADS)
Plachinda, S.; Baklanova, D.; Butkovskaya, V.; Pankov, N.
2017-06-01
Spectropolarimetric observations of DE Boo have been performed at the Crimean Astrophysical Observatory during 18 nights in 2001-2004. We present the results of the longitudinal magnetic field measurements on this star. The magnetic field varies from +44 G to -36 G with a mean standard error (SE) of 8.2 G. For the full array of magnetic field measurements, the difference between experimental errors and Monte Carlo errors is not statistically significant.
Consequences of nursing procedures measurement on job satisfaction
Khademol-hoseyni, Seyyed Mohammad; Nouri, Jamileh Mokhtari; Khoshnevis, Mohammad Ali; Ebadi, Abbas
2013-01-01
Background: Job satisfaction among nurses has consequences on the quality of nursing care and accompanying organizational commitments. Nursing procedure measurement (NPM) is one of the essential parts of a performance-oriented system. This research was performed in order to determine the job satisfaction rate in selected wards of Baqiyatallah (a.s.) Hospital prior to and following the NPM. Materials and Methods: An interventional study with an evaluation approach was designed, in which job satisfaction was measured before and after NPM within 2 months in selected wards with a census sampling procedure. The questionnaire contained two major parts: demographic data and questions regarding job satisfaction, salary, and fringe benefits. Data were analyzed with SPSS version 13. Results: Statistical evaluation did not reveal a significant difference between demographic data and satisfaction and/or dissatisfaction of nurses (before and after nursing procedure measurement). Following NPM, the rate of salary and benefits dissatisfaction decreased by up to 5% and the rate of satisfaction increased by about 1.5%; however, the statistical tests did not reveal a significant difference. Subsequent to NPM, the rate of job value increased (P = 0.019), whereas the rate of job comfort decreased (P = 0.033) significantly. Conclusions: Measuring procedures do not affect the job satisfaction of ward staff or their salary and benefits. Therefore, it is suggested that satisfaction be measured again after nurses' salary and benefits are adjusted based on the NPM. This is our suggested approach. PMID:23983741
NASA Astrophysics Data System (ADS)
Morgenthaler, George W.; Nuñez, German R.; Botello, Aaron M.; Soto, Jose; Shrairman, Ruth; Landau, Alexander
1998-01-01
Many reaction time experiments have been conducted over the years to observe human responses. However, most of the experiments that were performed did not have quantitatively accurate instruments for measuring change in reaction time under stress. There is a great need for quantitative instruments to measure neuromuscular reaction responses under stressful conditions such as distraction, disorientation, disease, alcohol, drugs, etc. The two instruments used in the experiments reported in this paper are such devices. Their accuracy, portability, ease of use, and biometric character are what makes them very special. PACE™ is a software model used to measure reaction time. VeriFax's Impairoscope measures the deterioration of neuromuscular responses. During the 1997 Summer Semester, various reaction time experiments were conducted on University of Colorado faculty, staff, and students using the PACE™ system. The tests included both two-eye and one-eye unstressed trials and trials with various stresses such as fatigue, distractions in which subjects were asked to perform simple arithmetic during the PACE™ tests, and stress due to rotating-chair dizziness. Various VeriFax Impairoscope tests, both stressed and unstressed, were conducted to determine the Impairoscope's ability to quantitatively measure this impairment. In the 1997 Fall Semester, a Phase II effort was undertaken to increase test sample sizes in order to provide statistical precision and stability. More sophisticated statistical methods remain to be applied to better interpret the data.
Modeling the Test-Retest Statistics of a Localization Experiment in the Full Horizontal Plane.
Morsnowski, André; Maune, Steffen
2016-10-01
Two approaches to model the test-retest statistics of a localization experiment, based on Gaussian distributions and on surrogate data, are introduced. Their efficiency is investigated using different measures describing directional hearing ability. A localization experiment in the full horizontal plane is a challenging task for hearing impaired patients. In clinical routine, we use this experiment to evaluate the progress of our cochlear implant (CI) recipients. Listening and time effort limit the reproducibility. The localization experiment consists of a circle of 12 loudspeakers placed in an anechoic room, a "camera silens". In darkness, HSM sentences are presented at 65 dB pseudo-erratically from all 12 directions with five repetitions. This experiment is modeled by a set of Gaussian distributions with different standard deviations added to a perfect estimator, as well as by surrogate data. Five repetitions per direction are used to produce surrogate data distributions for the sensation directions. To investigate the statistics, we retrospectively use the data of 33 CI patients with 92 pairs of test-retest measurements from the same day. The first model does not take inversions into account (i.e., permutations of the direction from back to front and vice versa are not considered), although they are common for hearing impaired persons, particularly in the rear hemisphere. The second model considers these inversions but does not work with all measures. The introduced models successfully describe the test-retest statistics of directional hearing. However, since their applications to the investigated measures perform differently, no general recommendation can be provided. The presented test-retest statistics enable pair test comparisons for localization experiments.
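A minimal sketch of the Gaussian modeling idea, assuming the setup above: responses are the presented direction plus Gaussian error, snapped to the nearest of 12 loudspeakers, with an optional front/back inversion added for illustration (the paper's first model omits inversions). All parameter values are invented.

```python
import numpy as np

def simulate_localization(n_trials=60, sigma_deg=20, p_inversion=0.1, seed=None):
    """Gaussian model of a 12-loudspeaker localization run.

    Sensation = presentation angle + Gaussian error, snapped to the
    nearest 30-degree loudspeaker; with probability p_inversion the
    response is mirrored front/back (a common error in impaired hearing).
    Returns the RMS angular error of the simulated run.
    """
    rng = np.random.default_rng(seed)
    presented = rng.integers(0, 12, n_trials) * 30           # degrees
    sensed = presented + rng.normal(0, sigma_deg, n_trials)
    flip = rng.random(n_trials) < p_inversion
    sensed[flip] = (180 - sensed[flip]) % 360                # front/back inversion
    sensed = (np.round(sensed / 30) * 30) % 360              # snap to loudspeakers
    err = (sensed - presented + 180) % 360 - 180             # wrapped angular error
    return np.sqrt(np.mean(err ** 2))

print(simulate_localization(seed=0))
```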
Haile, Demewoz; Nigatu, Dabere; Gashaw, Ketema; Demelash, Habtamu
2016-01-01
Academic achievement of school age children can be affected by several factors such as nutritional status, demographics, and socioeconomic factors. Though evidence about the magnitude of malnutrition is well established in Ethiopia, there is a paucity of evidence about the association of nutritional status with academic performance among the nation's school age children. Hence, this study aimed to determine how nutritional status and cognitive function are associated with academic performance of school children in Goba town, South East Ethiopia. An institution based cross-sectional study was conducted among 131 school age students from primary schools in Goba town enrolled during the 2013/2014 academic year. The nutritional status of students was assessed by anthropometric measurement, while cognitive function was measured by the Kaufman Assessment Battery for Children (KABC-II) and Raven's Colored Progressive Matrices (Raven's CPM) tests. The academic performance of the school children was measured by collecting the preceding semester's academic results from the school records. Descriptive statistics and bivariate and multivariable linear regression were used in the statistical analysis. This study found a statistically significant positive association between all cognitive test scores and average academic performance except for number recall (p = 0.12) and hand movements (p = 0.08). The correlation between all cognitive test scores and mathematics score was positive and statistically significant (p < 0.05). In the multivariable linear regression model, better wealth index was significantly associated with higher mathematics score (β = 0.63; 95% CI: 0.12-0.74). Similarly, a unit change in height-for-age z score resulted in a 2.11 unit change in mathematics score (β = 2.11; 95% CI: 0.002-4.21). A single unit change of wealth index resulted in a 0.53 unit change in the average score of all academic subjects among school age children (β = 0.53; 95% CI: 0.11-0.95). A single unit change of age resulted in a 3.23 unit change in the average score of all academic subjects among school age children (β = 3.23; 95% CI: 1.20-5.27). Nutritional status (height-for-age z score) and wealth could be modifiable factors to improve academic performance of school age children. Moreover, interventions to improve nutrition for mothers and children may be an important contributor to academic success and national economic growth in Ethiopia. Further study with a strong design and large sample size is needed.
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
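As a toy version of the calibration step, the sketch below fits a polynomial sensor model by nonlinear least squares and reports 1-sigma parameter uncertainties from the covariance matrix. The model form and data are hypothetical, not the NASA procedure itself, which also produces confidence and prediction intervals over the full pitch/roll range.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical single-axis calibration: sensor output vs. applied pitch angle,
# modeled as a cubic to capture mild nonlinearity.
def model(angle, a0, a1, a2, a3):
    return a0 + a1 * angle + a2 * angle**2 + a3 * angle**3

applied = np.linspace(-20, 20, 21)                     # applied angles (degrees)
rng = np.random.default_rng(3)
output = model(applied, 0.02, 0.995, 1e-4, -2e-5) + rng.normal(0, 0.01, 21)

popt, pcov = curve_fit(model, applied, output)
perr = np.sqrt(np.diag(pcov))                          # 1-sigma parameter errors
print(np.round(popt, 4), np.round(perr, 4))
```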
A κ-generalized statistical mechanics approach to income analysis
NASA Astrophysics Data System (ADS)
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
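The κ-exponential at the core of this framework has a closed form, exp_κ(x) = (√(1+κ²x²)+κx)^(1/κ), which reduces to the ordinary exponential as κ → 0. The sketch below evaluates the resulting survival function; the parameter values are illustrative, not fitted to the US income data.

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa**2 * x**2) + kappa * x) ** (1.0 / kappa)

def survival(x, alpha, beta, kappa):
    """P(X > x) for the kappa-generalized income distribution.

    Interpolates between a stretched exponential at low incomes and a
    Pareto power law (tail exponent alpha/kappa) at high incomes.
    """
    return exp_kappa(-beta * np.asarray(x, float) ** alpha, kappa)

x = np.array([0.5, 1.0, 2.0, 5.0, 10.0])   # income in units of its mean
print(survival(x, alpha=2.0, beta=0.5, kappa=0.7))
```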
NASA Technical Reports Server (NTRS)
Hyde, G.
1976-01-01
The 13/18 GHz COMSAT Propagation Experiment (CPE) was performed to measure attenuation caused by hydrometeors along slant paths from transmitting terminals on the ground to the ATS-6 satellite. The effectiveness of site diversity in overcoming this impairment was also studied. Problems encountered in assembling a valid database of rain-induced attenuation data for statistical analysis are considered. The procedures used to obtain the various statistics are then outlined. The graphs and tables of statistical data for the 15 dual-frequency (13 and 18 GHz) site diversity locations are discussed. Cumulative rain rate statistics for the Fayetteville and Boston sites, based on the point rainfall data collected, are presented along with extrapolations of the attenuation and point rainfall data.
NASA Technical Reports Server (NTRS)
Dutta, Soumyo; Braun, Robert D.; Russell, Ryan P.; Clark, Ian G.; Striepe, Scott A.
2012-01-01
Flight data from an entry, descent, and landing (EDL) sequence can be used to reconstruct the vehicle's trajectory, aerodynamic coefficients, and the atmospheric profile experienced by the vehicle. Past Mars missions have carried instruments that do not provide direct measurement of the freestream atmospheric conditions. Thus, the uncertainties in the atmospheric reconstruction and the aerodynamic database knowledge could not be separated. The upcoming Mars Science Laboratory (MSL) will take measurements of the pressure distribution on the aeroshell forebody during entry and will allow freestream atmospheric conditions to be partially observable. These data provide a means to separate atmospheric and aerodynamic uncertainties and are part of the MSL EDL Instrumentation (MEDLI) project. Methods to estimate the flight performance statistically using on-board measurements are demonstrated here through the use of simulated Mars data. Different statistical estimators are used to demonstrate which estimator best quantifies the uncertainties in the flight parameters. The techniques demonstrated herein are planned for application to the MSL flight dataset after the spacecraft lands on Mars in August 2012.
Coherent Lidar Design and Performance Verification
NASA Technical Reports Server (NTRS)
Frehlich, Rod
1996-01-01
This final report summarizes the investigative results from the 3 complete years of funding and corresponding publications are listed. The first year saw the verification of beam alignment for coherent Doppler lidar in space by using the surface return. The second year saw the analysis and computerized simulation of using heterodyne efficiency as an absolute measure of performance of coherent Doppler lidar. A new method was proposed to determine the estimation error for Doppler lidar wind measurements without the need for an independent wind measurement. Coherent Doppler lidar signal covariance, including wind shear and turbulence, was derived and calculated for typical atmospheric conditions. The effects of wind turbulence defined by Kolmogorov spatial statistics were investigated theoretically and with simulations. The third year saw the performance of coherent Doppler lidar in the weak signal regime determined by computer simulations using the best velocity estimators. Improved algorithms for extracting the performance of velocity estimators with wind turbulence included were also produced.
RCT: Module 2.03, Counting Errors and Statistics, Course 8768
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillmer, Kurt T.
2017-04-01
Radiological sample analysis involves the observation of a random process that may or may not occur and an estimation of the amount of radioactive material present based on that observation. Across the country, radiological control personnel are using the activity measurements to make decisions that may affect the health and safety of workers at those facilities and their surrounding environments. This course will present an overview of measurement processes, a statistical evaluation of both measurements and equipment performance, and some actions to take to minimize the sources of error in count room operations. This course will prepare the student with the skills necessary for radiological control technician (RCT) qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and by providing in-the-field skills.
NASA Technical Reports Server (NTRS)
Gardner, Adrian
2010-01-01
National Aeronautics and Space Administration (NASA) weather and atmospheric environmental organizations are insatiable consumers of geophysical, hydrometeorological and solar weather statistics. The expanding array of internet-worked sensors producing targeted physical measurements has generated an almost factorial explosion of near real-time inputs to topical statistical datasets. Normalizing and value-based parsing of such statistical datasets in support of time-constrained weather and environmental alerts and warnings is essential, even with dedicated high-performance computational capabilities. What are the optimal indicators for advanced decision making? How do we recognize the line between sufficient statistical sampling and excessive, mission-destructive sampling? How do we assure that the normalization and parsing process, when interpolated through numerical models, yields accurate and actionable alerts and warnings? This presentation will address the integrated means and methods to achieve desired outputs for NASA and consumers of its data.
Statistically significant relational data mining :
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann
This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model. Statisticians favor these models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.
Nilsagård, Ylva E; Forsberg, Anette S; von Koch, Lena
2013-02-01
The use of interactive video games is expanding within rehabilitation. The evidence base is, however, limited. Our aim was to evaluate the effects of a Nintendo Wii Fit® balance exercise programme on balance function and walking ability in people with multiple sclerosis (MS). A multi-centre, randomised, controlled single-blinded trial with random allocation to exercise or no exercise was conducted. The exercise group participated in a programme of 12 supervised 30-min sessions of balance exercises using Wii games, twice a week for 6-7 weeks. The primary outcome was the Timed Up and Go test (TUG). In total, 84 participants were enrolled; four were lost to follow-up. After the intervention, there were no statistically significant differences between groups, but effect sizes for the TUG, TUGcognitive and the Dynamic Gait Index (DGI) were moderate, and small for all other measures. Statistically significant improvements within the exercise group were present for all measures (large to moderate effect sizes) except walking speed and balance confidence. The non-exercise group showed statistically significant improvements for the Four Square Step Test and the DGI. In comparison with no intervention, a programme of supervised balance exercise using Nintendo Wii Fit® did not render statistically significant differences, but presented moderate effect sizes for several measures of balance performance.
Indoor Soiling Method and Outdoor Statistical Risk Analysis of Photovoltaic Power Plants
NASA Astrophysics Data System (ADS)
Rajasekar, Vidyashree
This is a two-part thesis. Part 1 presents an approach towards the development of a standardized artificial soiling method for laminated photovoltaic (PV) cells or mini-modules. Construction of an artificial chamber to maintain controlled environmental conditions and the components/chemicals used in artificial soil formulation are briefly explained. Both poly-Si mini-modules and single-cell mono-Si coupons were soiled, and characterization tests such as I-V, reflectance and quantum efficiency (QE) were carried out on both soiled and cleaned coupons. From the results obtained, poly-Si mini-modules proved to be a good measure of soil uniformity, as any non-uniformity present would not result in a smooth curve during I-V measurements. The challenges faced while executing reflectance and QE characterization tests on poly-Si, due to its smaller cells, were eliminated on the mono-Si coupons with large cells, yielding highly repeatable measurements. This study indicates that reflectance measurements between 600 and 700 nm can be used as a direct measure of soil density on the modules. Part 2 determines the most dominant failure modes of field-aged PV modules using experimental data obtained in the field and a statistical analysis, FMECA (Failure Mode, Effect, and Criticality Analysis). The failure and degradation modes of about 744 poly-Si glass/polymer frameless modules fielded for 18 years under the cold-dry climate of New York were evaluated. A defect chart, degradation rates (at both string and module levels) and a safety map were generated using the field-measured data. A statistical reliability tool, FMECA, which uses the Risk Priority Number (RPN), is used to determine the dominant failure or degradation modes in the strings and modules by ranking and prioritizing the modes. This study on PV power plants considers all the failure and degradation modes from both safety and performance perspectives. The indoor and outdoor soiling studies were jointly performed by two Masters students, Sravanthi Boppana and Vidyashree Rajasekar. This thesis presents the indoor soiling study, whereas the other thesis presents the outdoor soiling study. Similarly, the statistical risk analyses of two power plants (model J and model JVA) were jointly performed by these two Masters students. Both power plants are located in the same cold-dry climate, but one power plant carries framed modules and the other carries frameless modules. This thesis presents the results obtained on the frameless modules.
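A minimal sketch of the RPN ranking at the heart of FMECA: each failure mode gets severity, occurrence, and detection scores (commonly on 1-10 scales), and their product prioritizes the modes. The modes and scores below are invented for illustration, not the thesis's field data.

```python
# Minimal FMECA sketch: RPN = severity x occurrence x detection;
# higher RPN = higher priority for intervention.
# Modes and scores below are illustrative, not measured values.
modes = {
    "encapsulant discoloration": (4, 7, 3),
    "solar cell crack":          (7, 4, 6),
    "glass breakage":            (9, 2, 2),
    "interconnect corrosion":    (8, 5, 5),
}

rpn = {mode: s * o * d for mode, (s, o, d) in modes.items()}
for mode, score in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{mode:28s} RPN = {score}")
```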
Local statistics of retinal optic flow for self-motion through natural sceneries.
Calow, Dirk; Lappe, Markus
2007-12-01
Image analysis in the visual system is well adapted to the statistics of natural scenes. Investigations of natural image statistics have so far mainly focused on static features. The present study is dedicated to the measurement and the analysis of the statistics of optic flow generated on the retina during locomotion through natural environments. Natural locomotion includes bouncing and swaying of the head and eye movement reflexes that stabilize gaze onto interesting objects in the scene while walking. We investigate the dependencies of the local statistics of optic flow on the depth structure of the natural environment and on the ego-motion parameters. To measure these dependencies we estimate the mutual information between correlated data sets. We analyze the results with respect to the variation of the dependencies over the visual field, since the visual motions in the optic flow vary depending on visual field position. We find that retinal flow direction and retinal speed show only minor statistical interdependencies. Retinal speed is statistically tightly connected to the depth structure of the scene. Retinal flow direction is statistically mostly driven by the relation between the direction of gaze and the direction of ego-motion. These dependencies differ at different visual field positions such that certain areas of the visual field provide more information about ego-motion and other areas provide more information about depth. The statistical properties of natural optic flow may be used to tune the performance of artificial vision systems based on human imitating behavior, and may be useful for analyzing properties of natural vision systems.
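A minimal sketch of the dependency measure used above: a histogram (plug-in) estimate of mutual information between two variables, here simulated so that retinal speed is tied to inverse depth. The bin count and data are illustrative, and the paper's estimator may differ in detail.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of mutual information I(X;Y) in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()                       # joint probability table
    px = pxy.sum(axis=1, keepdims=True)         # marginal of X
    py = pxy.sum(axis=0, keepdims=True)         # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
depth = rng.gamma(2.0, 5.0, 10_000)                 # stand-in for scene depth
speed = 1.0 / depth + rng.normal(0, 0.01, 10_000)   # retinal speed ~ 1/depth
print(mutual_information(depth, speed))
```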
Dynamics of EEG functional connectivity during statistical learning.
Tóth, Brigitta; Janacsek, Karolina; Takács, Ádám; Kóbor, Andrea; Zavecz, Zsófia; Nemeth, Dezso
2017-10-01
Statistical learning is a fundamental mechanism of the brain, which extracts and represents regularities of our environment. Statistical learning is crucial in predictive processing, and in the acquisition of perceptual, motor, cognitive, and social skills. Although previous studies have revealed competitive neurocognitive processes underlying statistical learning, the neural communication of the related brain regions (functional connectivity, FC) has not yet been investigated. The present study aimed to fill this gap by investigating FC networks that promote statistical learning in humans. Young adults (N=28) performed a statistical learning task while 128-channel EEG was acquired. The task involved probabilistic sequences, which enabled measurement of incidental/implicit learning of conditional probabilities. Phase synchronization in seven frequency bands was used to quantify FC between cortical regions during the first, second, and third periods of the learning task, respectively. Here we show that statistical learning is negatively correlated with FC of the anterior brain regions in slow (theta) and fast (beta) oscillations. These negative correlations increased as the learning progressed. Our findings provide evidence that dynamic antagonist brain networks serve as a hallmark of statistical learning. Copyright © 2017 Elsevier Inc. All rights reserved.
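One common way to quantify phase synchronization is the phase-locking value (PLV) computed from the analytic signal; the sketch below evaluates it for two synthetic channels with a constant phase lag. Real use requires band-pass filtering into the frequency band of interest first, and the study's exact synchronization measure may differ.

```python
import numpy as np
from scipy.signal import hilbert

def plv(sig_a, sig_b):
    """Phase-locking value: |mean(exp(i*(phase_a - phase_b)))| in [0, 1]."""
    dphi = np.angle(hilbert(sig_a)) - np.angle(hilbert(sig_b))
    return np.abs(np.mean(np.exp(1j * dphi)))

rng = np.random.default_rng(0)
t = np.arange(0, 2, 1 / 250)                      # 2 s sampled at 250 Hz
theta = 2 * np.pi * 6 * t                         # 6 Hz (theta band) oscillation
a = np.sin(theta) + 0.5 * rng.normal(size=t.size)
b = np.sin(theta + 0.8) + 0.5 * rng.normal(size=t.size)  # constant phase lag
print(plv(a, b))   # high despite the lag, because the lag is constant over time
```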
FY2017 Report on NISC Measurements and Detector Simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Madison Theresa; Meierbachtol, Krista Cruse; Jordan, Tyler Alexander
FY17 work focused on automation, both of the measurement analysis and of the comparison with simulations. The experimental apparatus was relocated, and weeks of continuous measurements of the spontaneous fission source 252Cf were performed. Programs were developed to automate the conversion of measurements into ROOT data framework files with a simple terminal input. The complete analysis of the measurement (which includes energy calibration and the identification of correlated counts) can now also be completed with a documented process that involves one simple execution line. Finally, the hurdle of slow MCNP simulations resulting in low simulation statistics has been overcome with the generation of multi-run suites which make use of the high-performance computing resources at LANL. Preliminary comparisons of measurements and simulations have been performed and will be the focus of FY18 work.
This paper describes the application and method performance parameters of a Luminex xMAP™ bead-based, multiplex immunoassay for measuring specific antibody responses in saliva samples (n=5438) to antigens of six common waterborne pathogens (Campylobacter jejuni, Helicobacter pylo...
77 FR 33120 - Truth in Lending (Regulation Z)
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-05
... FHFA's release of historical data on loan volumes and delinquency rates, including any tabulations or... with varying characteristics and to perform other statistical analyses that may assist the Bureau in... definitions of a ``qualified mortgage.'' For example, the Bureau is examining various measures of delinquency...
Code of Federal Regulations, 2010 CFR
2010-10-01
... care facility or facility means an organization involved in the delivery of health care services for... the delivery of health care services that is typical for a specified group. Norms means numerical or statistical measures of average observed performance in the delivery of health care services. Outliers means...
Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island
NASA Astrophysics Data System (ADS)
E Komalasari, K.; Pawitan, H.; Faqih, A.
2017-03-01
This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. The mean and median are used to measure central tendency, while the interquartile range (IQR) and standard deviation are used to measure the variation of the data. In addition, skewness and kurtosis are used to characterize the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping, as sketched in the code below. Results of this study show that the mean (average) maximum daily rainfall in the Java region during the period 1983-2012 is around 80-181 mm, with medians between 75 and 160 mm and standard deviations between 17 and 82 mm. The cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and shows more variability in the annual maximum values.
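A minimal sketch of this kind of regional grouping, assuming SciPy's hierarchical clustering (whose Ward implementation is built on Euclidean distances between observations); the station_stats matrix is a hypothetical stand-in for the per-station descriptive statistics.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical matrix: rows = rain-gauge stations in Java, columns =
# descriptive statistics of annual maximum daily rainfall, 1983-2012
# (mean, median, IQR, standard deviation, skewness, kurtosis).
rng = np.random.default_rng(1)
station_stats = rng.normal(size=(40, 6))

# Standardize so no single statistic dominates the distance.
z = (station_stats - station_stats.mean(0)) / station_stats.std(0)

# Ward's method merges the pair of clusters with the smallest increase
# in within-cluster variance at each step.
tree = linkage(z, method="ward")
labels = fcluster(tree, t=4, criterion="maxclust")  # cut into 4 clusters
print(np.bincount(labels)[1:])                      # cluster sizes
```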
Heintze, Siegward Dietmar; Forjanic, Monika
2008-10-01
To evaluate the effect of multiple use of a three-step rubber-based polishing system on polishing performance, with and without a disinfection/sterilization protocol involving prolonged (overnight) disinfection. The three-step polishing system Astropol was applied under a standardized contact pressure of 2 N on 320-grit pre-roughened flat composite specimens of Tetric EvoCeram for 10 seconds (F and P discs) and 30 seconds (HP disc), respectively. After each polishing step, the surface gloss and roughness were measured with a glossmeter and an optical sensor (FRT MicroProf), respectively. Material loss of the composite specimens and polishing instruments was measured after each step with a high-precision digital scale. For all four variables (surface gloss, surface roughness, composite loss, loss of rubber material) the mean percentage of change compared to the reference was calculated. After just the first use, the instruments used without disinfection or sterilization demonstrated a statistically significantly reduced polishing performance in all polishing steps compared to the reference (new polishing system) (t-test, P < 0.05). This loss in performance increased further with the second and third re-use. The third component (Astropol HP) was especially affected by performance loss. By contrast, multiple use of the instruments that were subjected to prolonged disinfection did not result in reduced polishing performance. For the P disc, a statistically significant improvement of the polishing performance could be observed throughout almost all multiple-use sessions (ANOVA, P < 0.05). The improved polishing performance was, however, accompanied by an increased loss of the silicone rubber material of the P and F polishing discs; the HP discs were not affected by this loss. Furthermore, particles of the rubber material also adhered to the composite. The polishing performance of the discs that were only subjected to the sterilization process was not statistically significantly different from that of the control group in terms of surface roughness, but the surface gloss was worse than that of the control group. No loss of rubber material or adherence to the composite was observed in this group.
Acute effects of The Stick on strength, power, and flexibility.
Mikesky, Alan E; Bahamonde, Rafael E; Stanton, Katie; Alvey, Thurman; Fitton, Tom
2002-08-01
The Stick is a muscle massage device used by athletes, particularly track athletes, to improve performance. The purpose of this project was to assess the acute effects of The Stick on muscle strength, power, and flexibility. Thirty collegiate athletes consented to participate in a 4-week, double-blind study, which consisted of 4 testing sessions (1 familiarization and 3 data collection) scheduled 1 week apart. During each testing session subjects performed 4 measures in the following sequence: hamstring flexibility, vertical jump, flying-start 20-yard dash, and isokinetic knee extension at 90°·s⁻¹. Two minutes of randomly assigned intervention treatment (visualization [control], mock insensible electrical stimulation [placebo], or massage using The Stick [experimental]) was performed immediately prior to each performance measure. Statistical analyses involved single-factor repeated-measures analysis of variance (ANOVA) with Fisher's Least Significant Difference post-hoc test. None of the variables measured showed an acute improvement (p ≤ 0.05) immediately following treatment with The Stick.
Kasturi, Seshadri; Lowman, Joye K; Kelvin, Frederick M; Akisik, Fatih M; Terry, Colin L; Hale, Douglass S
2010-11-01
The purpose of this study was to compare pre- and postoperative pelvic organ prolapse-quantification (POP-Q) and magnetic resonance imaging (MRI) measurements in patients who undergo total Prolift (Ethicon, Inc, Somerville, NJ) colpopexy. Pre- and postoperative MRI and POP-Q examinations were performed on patients with stage 2 or greater prolapse who underwent the Prolift procedure. MRI measurements were taken at maximum descent. Correlations between changes in MRI and POP-Q measurements were determined. Ten subjects were enrolled. On MRI, statistically significant changes were seen with cystocele, enterocele, and apex. Statistically significant changes were seen on POP-Q measurements for Aa, Ba, C, Ap, Bp, and GH. Positive correlations were demonstrated between POP-Q and MRI changes. Minimal tissue reaction was seen on MRI. The Prolift system is effective in the surgical management of pelvic organ prolapse as measured by POP-Q and MRI. Postoperative MRIs support the inert nature of polypropylene mesh. Copyright © 2010 Mosby, Inc. All rights reserved.
AnthropMMD: An R package with a graphical user interface for the mean measure of divergence.
Santos, Frédéric
2018-01-01
The mean measure of divergence is a dissimilarity measure between groups of individuals described by dichotomous variables. It is well suited to datasets with many missing values, and it is generally used to compute distance matrices and represent phenograms. Although often used in biological anthropology and archaeozoology, this method suffers from a lack of implementation in common statistical software. A package for the R statistical software, AnthropMMD, is presented here. Offering a dynamic graphical user interface, it is the first one dedicated to Smith's mean measure of divergence. The package also provides facilities for graphical representations and the crucial step of trait selection, so that the entire analysis can be performed through the graphical user interface. Its use is demonstrated using an artificial dataset, and the impact of trait selection is discussed. Finally, AnthropMMD is compared to three other free tools available for calculating the mean measure of divergence, and is proven to be consistent with them. © 2017 Wiley Periodicals, Inc.
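For readers unfamiliar with the statistic, a minimal sketch of one common formulation of Smith's mean measure of divergence follows, assuming the Anscombe angular transformation and the 1/(n+0.5) variance correction (other variants, e.g. Freeman-Tukey, exist); this is not the package's implementation, and the trait counts are invented.

```python
import numpy as np

def anscombe(k, n):
    """Anscombe angular transformation of a trait frequency k/n (radians)."""
    return np.arcsin(1.0 - 2.0 * (k + 3.0 / 8.0) / (n + 3.0 / 4.0))

def mmd(k1, n1, k2, n2):
    """Mean measure of divergence between two groups.

    k*, n* are per-trait counts of individuals showing the trait and
    individuals scored for it (missing values simply lower n)."""
    theta1, theta2 = anscombe(k1, n1), anscombe(k2, n2)
    per_trait = (theta1 - theta2) ** 2 - (1.0 / (n1 + 0.5) + 1.0 / (n2 + 0.5))
    return per_trait.mean()

# Toy example with 5 dichotomous traits.
k1, n1 = np.array([10, 4, 22, 7, 15]), np.array([40, 38, 41, 35, 40])
k2, n2 = np.array([25, 9, 12, 20, 8]), np.array([50, 47, 49, 44, 50])
print(f"MMD = {mmd(k1, n1, k2, n2):.4f}")
```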
Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan
2006-07-15
ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (New England J Med 2003; 348:1764-1775) and alternatively by application of the KS statistical test comparing T cells with B cells. Receiver-operating-characteristics (ROC)-curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by a quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, straightforward, and overcomes a number of difficulties encountered in the Crespo-method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
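A minimal sketch of the underlying idea, assuming SciPy's two-sample KS test and hypothetical fluorescence intensities; the actual cut-off for positivity would be set by ROC analysis as described above.

```python
import numpy as np
from scipy.stats import ks_2samp

# Hypothetical per-cell ZAP-70 fluorescence intensities gated on
# T cells (ZAP-70-positive reference) and CLL B cells.
rng = np.random.default_rng(2)
t_cells = rng.lognormal(mean=2.0, sigma=0.4, size=5000)
b_cells = rng.lognormal(mean=1.2, sigma=0.5, size=5000)

# The KS D statistic is the maximum distance between the two empirical
# cumulative distribution functions: large D = clearly separated
# populations (ZAP-70-negative B cells), small D = similar expression.
result = ks_2samp(t_cells, b_cells)
print(f"D = {result.statistic:.3f}, p = {result.pvalue:.2e}")
```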
The relationship between temporomandibular dysfunction and head and cervical posture.
Matheus, Ricardo Alves; Ramos-Perez, Flávia Maria de Moraes; Menezes, Alynne Vieira; Ambrosano, Gláucia Maria Bovi; Haiter-Neto, Francisco; Bóscolo, Frab Norberto; de Almeida, Solange Maria
2009-01-01
This study aimed to evaluate the possibility of any correlation between disc displacement and parameters used for evaluation of skull positioning in relation to the cervical spine: craniocervical angle, suboccipital space between C0-C1, cervical curvature and position of the hyoid bone in individuals with and without symptoms of temporomandibular dysfunction. The patients were evaluated following the guidelines set forth by the RDC/TMD. Evaluation was performed by magnetic resonance imaging to establish disc positioning in the temporomandibular joints (TMJs) of 30 volunteer patients without temporomandibular dysfunction symptoms and 30 patients with symptoms. Evaluation of skull positioning in relation to the cervical spine was performed on lateral cephalograms obtained with the individual in natural head position. Data were submitted to statistical analysis by Fisher's exact test at the 5% significance level. To measure the degree of reproducibility/agreement between surveys, the kappa (κ) statistic was used. Significant differences were observed in the C0-C1 measurement for both the symptomatic (p=0.04) and asymptomatic (p=0.02) groups. No statistical differences were observed regarding the craniocervical angle, C1-C2 and hyoid bone position in relation to the TMJs with and without disc displacement. Although a statistically significant difference was found in the C0-C1 space, no association between it and internal temporomandibular joint disorder could be established. Based on the results observed in this study, no direct relationship could be determined between the presence of disc displacement and the variables assessed.
Risk assessment model for development of advanced age-related macular degeneration.
Klein, Michael L; Francis, Peter J; Ferris, Frederick L; Hamon, Sara C; Clemons, Traci E
2011-12-01
To design a risk assessment model for development of advanced age-related macular degeneration (AMD) incorporating phenotypic, demographic, environmental, and genetic risk factors. We evaluated longitudinal data from 2846 participants in the Age-Related Eye Disease Study. At baseline, these individuals had all levels of AMD, ranging from none to unilateral advanced AMD (neovascular or geographic atrophy). Follow-up averaged 9.3 years. We performed a Cox proportional hazards analysis with demographic, environmental, phenotypic, and genetic covariates and constructed a risk assessment model for development of advanced AMD. Performance of the model was evaluated using the C statistic and the Brier score and externally validated in participants in the Complications of Age-Related Macular Degeneration Prevention Trial. The final model included the following independent variables: age, smoking history, family history of AMD (first-degree member), phenotype based on a modified Age-Related Eye Disease Study simple scale score, and genetic variants CFH Y402H and ARMS2 A69S. The model did well on performance measures, with very good discrimination (C statistic = 0.872) and excellent calibration and overall performance (Brier score at 5 years = 0.08). Successful external validation was performed, and a risk assessment tool was designed for use with or without the genetic component. We constructed a risk assessment model for development of advanced AMD. The model performed well on measures of discrimination, calibration, and overall performance and was successfully externally validated. This risk assessment tool is available for online use.
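A toy sketch of this kind of model, assuming the lifelines library and entirely synthetic data with invented covariate names and effect sizes; it only illustrates fitting a Cox proportional hazards model and reading off Harrell's C statistic, not the published risk tool.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic cohort loosely mirroring the covariates named in the
# abstract (age, smoking, risk-allele counts for CFH Y402H and ARMS2
# A69S); effect sizes and follow-up are invented.
rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "age": rng.normal(70, 8, n).round(),
    "smoker": rng.integers(0, 2, n),
    "cfh_y402h": rng.integers(0, 3, n),   # risk-allele count 0/1/2
    "arms2_a69s": rng.integers(0, 3, n),
})
risk = (0.04 * (df.age - 70) + 0.5 * df.smoker
        + 0.4 * df.cfh_y402h + 0.5 * df.arms2_a69s)
time_to_event = rng.exponential(scale=np.exp(-risk) * 12)
df["years"] = np.minimum(time_to_event, 10.0)        # censor at 10 years
df["advanced_amd"] = (time_to_event <= 10.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="advanced_amd")
# Harrell's C statistic: probability the model ranks a random event
# case as higher risk than a random non-event case (0.5 = chance).
print(f"C statistic = {cph.concordance_index_:.3f}")
```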
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
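A minimal sketch of ROC-based threshold selection of the kind described, assuming scikit-learn and synthetic counts; the variable names, epoch length and MET criterion are illustrative, not the study's calibration data.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical calibration data: activity counts per epoch and a
# criterion label (1 = epoch at or above 3 METs by indirect calorimetry).
rng = np.random.default_rng(4)
counts = np.concatenate([rng.gamma(2, 800, 400), rng.gamma(6, 900, 400)])
is_moderate = np.concatenate([np.zeros(400), np.ones(400)])

fpr, tpr, thresholds = roc_curve(is_moderate, counts)
youden = tpr - fpr                        # Youden's J for every cut-point
best = thresholds[np.argmax(youden)]
print(f"count threshold maximizing sensitivity + specificity: {best:.0f}")
```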
Quantum measurement incompatibility does not imply Bell nonlocality
NASA Astrophysics Data System (ADS)
Hirsch, Flavien; Quintino, Marco Túlio; Brunner, Nicolas
2018-01-01
We discuss the connection between the incompatibility of quantum measurements, as captured by the notion of joint measurability, and the violation of Bell inequalities. Specifically, we explicitly present a given set of non-jointly-measurable positive-operator-valued measures (POVMs) MA with the following property. Considering a bipartite Bell test where Alice uses MA, then for any possible shared entangled state ρ and any set of (possibly infinitely many) POVMs NB performed by Bob, the resulting statistics admits a local model and can thus never violate any Bell inequality. This shows that quantum measurement incompatibility does not imply Bell nonlocality in general.
Babu, Giridhara R; Murthy, G V S; Ana, Yamuna; Patel, Prital; Deepa, R; Neelon, Sara E Benjamin; Kinra, Sanjay; Reddy, K Srinath
2018-01-01
AIM To perform a meta-analysis of the association of obesity with hypertension and type 2 diabetes mellitus (T2DM) among adults in India. METHODS To conduct the meta-analysis, we performed a comprehensive electronic literature search in PubMed, CINAHL Plus, and Google Scholar. We restricted the analysis to studies documenting some measure of obesity, namely body mass index, waist-hip ratio or waist circumference, and a diagnosis of hypertension or of T2DM. By obtaining summary estimates of all included studies, the meta-analysis was performed using both RevMan version 5 and the “metan” command in STATA version 11. Heterogeneity was measured by the I² statistic. Funnel plot analysis was done to assess study publication bias. RESULTS Of the 956 studies screened, 18 met the eligibility criteria. The pooled odds ratio between obesity and hypertension was 3.82 (95%CI: 3.39 to 4.25). The heterogeneity around this estimate (I² statistic) was 0%, indicating low variability. The pooled odds ratio from the included studies showed a statistically significant association between obesity and T2DM (OR = 1.14, 95%CI: 1.04 to 1.24) with a high degree of variability. CONCLUSION Despite methodological differences, obesity showed a significant, plausible association with hypertension and T2DM in studies conducted in India. Obesity being a modifiable risk factor, our study informs the setting of policy priorities and intervention efforts to prevent debilitating complications. PMID:29359028
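A minimal sketch of inverse-variance pooling with the DerSimonian-Laird random-effects model and the I² statistic, analogous to what RevMan or STATA's metan compute; the three studies' odds ratios and confidence limits below are invented.

```python
import numpy as np

def pool_odds_ratios(or_, lcl, ucl):
    """DerSimonian-Laird random-effects pooling of odds ratios given
    per-study point estimates and 95% confidence limits."""
    y = np.log(or_)                                   # log odds ratios
    se = (np.log(ucl) - np.log(lcl)) / (2 * 1.96)     # SE from CI width
    w = 1.0 / se**2                                   # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                # Cochran's Q
    k = len(y)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (se**2 + tau2)                       # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return np.exp(y_re), ci, i2

pooled, ci, i2 = pool_odds_ratios(np.array([3.5, 4.1, 3.9]),
                                  np.array([2.8, 3.2, 3.0]),
                                  np.array([4.4, 5.3, 5.1]))
print(f"pooled OR = {pooled:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2 = {i2:.0f}%")
```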
Antunes, Amanda H; Alberton, Cristine L; Finatto, Paula; Pinto, Stephanie S; Cadore, Eduardo L; Zaffari, Paula; Kruel, Luiz F M
2015-01-01
Maximal tests conducted on land are not suitable for the prescription of aquatic exercises, which makes it difficult to optimize the intensity of water aerobics classes. The aim of the present study was to evaluate the maximal and anaerobic-threshold cardiorespiratory responses to 6 water aerobics exercises. Volunteers performed 3 of the exercises in the sagittal plane and 3 in the frontal plane. Twelve active female volunteers (aged 24 ± 2 years) performed 6 maximal progressive test sessions. Throughout the exercise tests, we measured heart rate (HR) and oxygen consumption (VO2). We randomized all sessions with a minimum interval of 48 hr between each session. For statistical analysis, we used repeated-measures 1-way analysis of variance. Regarding the maximal responses, for peak VO2, abductor hop and jumping jacks (JJ) showed significantly lower values than frontal kick and cross-country skiing (CCS; p < .001; partial η² = .509), while for peak HR, JJ showed statistically significantly lower responses compared with stationary running and CCS (p < .001; partial η² = .401). At anaerobic-threshold intensity expressed as the percentage of the maximum values, no statistically significant differences were found among exercises. Cardiorespiratory responses are directly associated with the muscle mass involved in the exercise. Thus, it is worth emphasizing the importance of performing a maximal test that is specific to the analyzed exercise so that the prescription of the intensity can be safer and more valid.
Little, Max A.; Costello, Declan A. E.; Harries, Meredydd L.
2010-01-01
Summary Clinical acoustic voice-recording analysis is usually performed using classical perturbation measures, including jitter, shimmer, and noise-to-harmonic ratios (NHRs). However, restrictive mathematical limitations of these measures prevent analysis for severely dysphonic voices. Previous studies of alternative nonlinear random measures addressed wide varieties of vocal pathologies. Here, we analyze a single vocal pathology cohort, testing the performance of these alternative measures alongside classical measures. We present voice analysis pre- and postoperatively in 17 patients with unilateral vocal fold paralysis (UVFP). The patients underwent standard medialization thyroplasty surgery, and the voices were analyzed using jitter, shimmer, NHR, nonlinear recurrence period density entropy (RPDE), detrended fluctuation analysis (DFA), and correlation dimension. In addition, we similarly analyzed 11 healthy controls. Systematizing the preanalysis editing of the recordings, we found that the novel measures were more stable and, hence, reliable than the classical measures on healthy controls. RPDE and jitter are sensitive to improvements pre- to postoperation. Shimmer, NHR, and DFA showed no significant change (P > 0.05). All measures detect statistically significant and clinically important differences between controls and patients, both treated and untreated (P < 0.001, area under curve [AUC] > 0.7). Pre- to postoperation grade, roughness, breathiness, asthenia, and strain (GRBAS) ratings show statistically significant and clinically important improvement in overall dysphonia grade (G) (AUC = 0.946, P < 0.001). Recalculating AUCs from other study data, we compare these results in terms of clinical importance. We conclude that, when preanalysis editing is systematized, nonlinear random measures may be useful for monitoring UVFP-treatment effectiveness, and there may be applications to other forms of dysphonia. PMID:19900790
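Of the measures listed, DFA is straightforward to sketch. The following is a minimal textbook-style implementation (integrate, window, detrend, measure RMS fluctuation, fit a log-log slope), not the authors' code, applied here to synthetic white noise.

```python
import numpy as np

def dfa(signal, scales=None):
    """Detrended fluctuation analysis; returns the scaling exponent alpha.

    alpha ~ 0.5 for uncorrelated noise, larger for long-range-correlated
    signals such as sustained-phonation amplitude traces."""
    x = np.cumsum(signal - np.mean(signal))           # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(2, np.log10(len(x) // 4), 20).astype(int))
    fluct = []
    for s in scales:
        n_win = len(x) // s
        segs = x[: n_win * s].reshape(n_win, s)
        t = np.arange(s)
        f2 = []
        for seg in segs:                              # detrend each window
            coef = np.polyfit(t, seg, 1)
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        fluct.append(np.sqrt(np.mean(f2)))
    alpha = np.polyfit(np.log(scales), np.log(fluct), 1)[0]
    return alpha

rng = np.random.default_rng(5)
print(f"white noise alpha ~ {dfa(rng.normal(size=8192)):.2f}")  # about 0.5
```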
Bonner-Jackson, Aaron; Okonkwo, Ozioma; Tremont, Geoffrey
2012-07-01
Recent work has demonstrated the potentially protective effects of the apolipoprotein E (APOE) ε2 allele on cognitive functioning in individuals at risk for developing Alzheimer disease. However, little is known regarding the effect of ε2 genotype on rate of change in daily functioning over time. The aim of the current study was to examine the relationship between APOE genotype and change over time in ability to perform daily activities. We examined the relationship between APOE genotype and change in the ability to perform activities of daily living at 12- and 24-month intervals in 225 healthy comparison subjects, 381 individuals with amnestic mild cognitive impairment, and 189 individuals with Alzheimer disease who were enrolled in the Alzheimer's Disease Neuroimaging Initiative study. Neuropsychological measures were also collected at each follow-up. Overall, individuals with at least one APOE-ε2 allele showed less functional decline over time and better performance on neuropsychological measures than those without an ε2 allele, even after controlling for potential confounders. When diagnostic groups were examined individually, presence of the ε2 allele continued to be associated with slower functional decline, although the relationship was no longer statistically significant in most cases, likely due to reduced statistical power. Our findings suggest that the APOE-ε2 allele provides a buffer against significant changes in daily functioning over time and is associated with better neuropsychological performance across a number of measures.
Retinal nerve fiber layer reflectance for early glaucoma diagnosis.
Liu, Shuang; Wang, Bingqing; Yin, Biwei; Milner, Thomas E; Markey, Mia K; McKinnon, Stuart J; Rylander, Henry G
2014-01-01
Compare performance of normalized reflectance index (NRI) and retinal nerve fiber layer thickness (RNFLT) parameters determined from optical coherence tomography (OCT) images for glaucoma and glaucoma suspect diagnosis. Seventy-five eyes from 71 human subjects were studied: 33 controls, 24 glaucomatous, and 18 glaucoma-suspects. RNFLT and NRI maps were measured using 2 custom-built OCT systems and the commercial instrument RTVue. Using area under the receiver operating characteristic curve, RNFLT and NRI measured in 7 RNFL locations were analyzed to distinguish between control, glaucomatous, and glaucoma-suspect eyes. The mean NRI of the control group was significantly larger than the means of glaucomatous and glaucoma-suspect groups in most RNFL locations for all 3 OCT systems (P<0.05 for all comparisons). NRI performs significantly better than RNFLT at distinguishing between glaucoma-suspect and control eyes using RTVue OCT (P=0.008). The performances of NRI and RNFLT for classifying glaucoma-suspect versus control eyes were statistically indistinguishable for PS-OCT-EIA (P=0.101) and PS-OCT-DEC (P=0.227). The performances of NRI and RNFLT for classifying glaucomatous versus control eyes were statistically indistinguishable (PS-OCT-EIA: P=0.379; PS-OCT-DEC: P=0.338; RTVue OCT: P=0.877). NRI is a promising measure for distinguishing between glaucoma-suspect and control eyes and may indicate disease in the preperimetric stage. Results of this pilot clinical study warrant a larger study to confirm the diagnostic power of NRI for diagnosing preperimetric glaucoma.
Lynch, Thomas Sean; Kosanovic, Radomir; Gibbs, Daniel Bradley; Park, Caroline; Bedi, Asheesh; Larson, Christopher M.; Ahmad, Christopher S.
2017-01-01
Objectives: Athletic pubalgia is a condition in which there is an injury to the core musculature that precipitates groin and lower abdominal pain, particularly in cutting and pivoting sports. These are common injury patterns in the National Football League (NFL); however, the effect of surgery on performance for these players has not been described. Methods: Athletes in the NFL who underwent a surgical procedure for athletic pubalgia / core muscle injury (CMI) were identified through team injury reports and archives on public record since 2004. Outcome data were collected for athletes who met inclusion criteria, including total games played after the season of injury/surgery, number of Pro Bowl selections, yearly total yards and touchdowns for offensive players, and yearly total tackles, sacks and interceptions for defensive players. Previously validated performance scores were calculated using these data for each player one season before and after their procedure for a CMI. Athletes were then matched to control professional football players without a diagnosis of athletic pubalgia by age, position, and year and round drafted. Statistical analysis was used to compare pre-injury and post-injury performance measures for players treated with operative management against their case controls. Results: The study group comprised 32 NFL athletes who underwent operative management for athletic pubalgia and met inclusion criteria during the study period, including 18 offensive players and 16 defensive players. The average age of athletes undergoing this surgery was 27 years. Analysis of pre- and post-injury athletic performance revealed no statistically significant changes after return to sport following surgical intervention; however, there was a statistically significant difference in the number of Pro Bowls that affected athletes participated in before surgery (8) compared with the season after surgery (3). Analysis of durability, as measured by total number of games played before and after surgery, revealed no statistically significant difference. Conclusion: National Football League players who undergo operative care for athletic pubalgia have a high return to play with no decrease in performance scores when compared with case-matched controls. However, the indications for operative intervention and the type of procedure performed are heterogeneous. Further research is warranted to better understand how these injuries occur, what can be done to prevent their occurrence, and the long-term career ramifications of this disorder.
Lee, Seul Gi; Shin, Yun Hee
2016-04-01
This study was done to verify the effects of self-directed feedback practice using smartphone videos on nursing students' basic nursing skills, confidence in performance, and learning satisfaction. An experimental design with a post-test-only control group was used. Twenty-nine students were assigned to the experimental group and 29 to the control group. The experimental treatment consisted of exchanging feedback on deficiencies through smartphone-recorded videos of the nursing practice process, taken by peers during self-directed practice. Basic nursing skills scores were higher for all items in the experimental group compared to the control group, and the differences were statistically significant ["Measuring vital signs" (t=-2.10, p=.039); "Wearing protective equipment when entering and exiting the quarantine room and the management of waste materials" (t=-4.74, p<.001); "Gavage tube feeding" (t=-2.70, p=.009)]. Confidence in performance was higher in the experimental group compared to the control group, but the differences were not statistically significant. However, after the complete practice, there was a statistically significant difference in overall performance confidence (t=-3.07, p=.003). Learning satisfaction was higher in the experimental group compared to the control group, but the difference was not statistically significant (t=-1.67, p=.100). Results of this study indicate that self-directed feedback practice using smartphone videos can improve basic nursing skills. The significance is that it can help nursing students gain confidence in their nursing skills for the future through improvement of basic nursing skills and performance of quality care, thus providing patients with safer care.
Direct Mask Overlay Inspection
NASA Astrophysics Data System (ADS)
Hsia, Liang-Choo; Su, Lo-Soun
1983-11-01
In this paper, we present a mask inspection methodology and procedure that involves direct X-Y measurements. A group of dice is selected for overlay measurement; four measurement targets are laid out in the kerf of each die. The measured coordinates are then fitted, in a least-squares fashion, to either a "historical" grid, which reflects the individual tool bias, or to an ideal grid. Measurements are made using a Nikon X-Y laser interferometric measurement system, which provides a reference grid; the stability of the measurement system is essential. We then apply appropriate statistics to the residuals after the fit to determine the overlay performance. Statistical methods play an important role in product disposition. The acceptance criterion is, however, a compromise between the cost of mask making and the final device yield. In order to satisfy the demands on mask houses for mask quality and high volume, mixing lithographic tools in mask making has become more popular, in particular mixing optical and E-beam tools. In this paper, we also discuss the inspection procedure for mixed lithographic tools.
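A minimal sketch of the fit-and-residual idea, assuming an affine (scale/rotation/translation) tool-bias model and invented target coordinates; the statistics applied to the residuals here (mean and 3-sigma) are illustrative, not the paper's acceptance criterion.

```python
import numpy as np

# Hypothetical overlay targets: design (ideal-grid) coordinates and the
# coordinates measured on the mask by the X-Y interferometric stage.
rng = np.random.default_rng(6)
design = np.stack(np.meshgrid(np.arange(8) * 10.0,
                              np.arange(8) * 10.0), -1).reshape(-1, 2)
measured = design * 1.00002 + np.array([0.05, -0.03]) \
           + rng.normal(0, 0.01, design.shape)

# Fit a linear model measured ~ A @ design + t in a least-squares sense;
# A absorbs scale/rotation tool bias, t absorbs translation.
X = np.hstack([design, np.ones((len(design), 1))])    # [x, y, 1]
coef, *_ = np.linalg.lstsq(X, measured, rcond=None)   # 3x2 parameters
residual = measured - X @ coef                        # overlay residuals

# Simple statistics on the residuals for product disposition.
print("mean |residual| per axis:", np.abs(residual).mean(axis=0))
print("3-sigma per axis:", 3 * residual.std(axis=0))
```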
Correlation between safety climate and contractor safety assessment programs in construction
Sparer, EH; Murphy, LA; Taylor, KM; Dennerlein, JT
2015-01-01
Background Contractor safety assessment programs (CSAPs) measure safety performance by integrating multiple data sources together; however, the relationship between these measures of safety performance and safety climate within the construction industry is unknown. Methods 401 construction workers employed by 68 companies on 26 sites and 11 safety managers employed by 11 companies completed brief surveys containing a nine-item safety climate scale developed for the construction industry. CSAP scores from ConstructSecure, Inc., an online CSAP database, classified these 68 companies as high or low scorers, with the median score of the sample population as the threshold. Spearman rank correlations evaluated the association between the CSAP score and the safety climate score at the individual level, as well as with various grouping methodologies. In addition, Spearman correlations evaluated the comparison between manager-assessed safety climate and worker-assessed safety climate. Results There were no statistically significant differences between safety climate scores reported by workers in the high and low CSAP groups. There were, at best, weak correlations between workers’ safety climate scores and the company CSAP scores, with marginal statistical significance with two groupings of the data. There were also no significant differences between the manager-assessed safety climate and the worker-assessed safety climate scores. Conclusions A CSAP safety performance score does not appear to capture safety climate, as measured in this study. The nature of safety climate in construction is complex, which may be reflective of the challenges in measuring safety climate within this industry. PMID:24038403
ERIC Educational Resources Information Center
McCaffrey, Daniel F.; Han, Bing; Lockwood, J. R.
2008-01-01
A key component to the new wave of performance-based pay initiatives is the use of student achievement data to evaluate teacher performance. As greater amounts of student achievement data are being collected, researchers have been developing and applying innovative statistical and econometric models to longitudinal data to develop measures of an…
Use of Unlabeled Samples for Mitigating the Hughes Phenomenon
NASA Technical Reports Server (NTRS)
Landgrebe, David A.; Shahshahani, Behzad M.
1993-01-01
The use of unlabeled samples in improving the performance of classifiers is studied. When the number of training samples is fixed and small, additional feature measurements may reduce the performance of a statistical classifier. It is shown that by using unlabeled samples, estimates of the parameters can be improved and therefore this phenomenon may be mitigated. Various methods for using unlabeled samples are reviewed and experimental results are provided.
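A toy one-dimensional sketch of the general idea of improving parameter estimates with unlabeled samples via expectation-maximization; this is a generic illustration, not the specific estimators reviewed in the paper, and all data are synthetic.

```python
import numpy as np
from scipy.stats import norm

# Few labeled samples fix the class identities; many unlabeled samples
# refine the class means via EM (responsibilities are fixed to the true
# label for labeled data, estimated for unlabeled data).
rng = np.random.default_rng(7)
x_lab = np.array([0.1, -0.3, 0.4, 2.2, 1.8, 2.5])
y_lab = np.array([0, 0, 0, 1, 1, 1])
x_unl = np.concatenate([rng.normal(0, 1, 500), rng.normal(2, 1, 500)])

mu = np.array([x_lab[y_lab == 0].mean(), x_lab[y_lab == 1].mean()])
pi = np.array([0.5, 0.5])
for _ in range(50):                                    # EM iterations
    # E-step on unlabeled data only.
    lik = pi * norm.pdf(x_unl[:, None], mu, 1.0)
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M-step pools hard labels with soft responsibilities.
    n_k = np.array([(y_lab == 0).sum(), (y_lab == 1).sum()]) + resp.sum(0)
    mu = (np.array([x_lab[y_lab == 0].sum(), x_lab[y_lab == 1].sum()])
          + resp.T @ x_unl) / n_k
    pi = n_k / n_k.sum()
print("refined class means:", mu.round(2))
```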
BaTMAn: Bayesian Technique for Multi-image Analysis
NASA Astrophysics Data System (ADS)
Casado, J.; Ascasibar, Y.; García-Benito, R.; Guidi, G.; Choudhury, O. S.; Bellocchi, E.; Sánchez, S. F.; Díaz, A. I.
2016-12-01
Bayesian Technique for Multi-image Analysis (BaTMAn) characterizes any astronomical dataset containing spatial information and performs a tessellation based on the measurements and errors provided as input. The algorithm iteratively merges spatial elements as long as they are statistically consistent with carrying the same information (i.e. identical signal within the errors). The output segmentations successfully adapt to the underlying spatial structure, regardless of its morphology and/or the statistical properties of the noise. BaTMAn identifies (and keeps) all the statistically-significant information contained in the input multi-image (e.g. an IFS datacube). The main aim of the algorithm is to characterize spatially-resolved data prior to their analysis.
Inverse statistics and information content
NASA Astrophysics Data System (ADS)
Ebadi, H.; Bolgorian, Meysam; Jafari, G. R.
2010-12-01
Inverse statistics analysis studies the distribution of investment horizons needed to achieve a predefined level of return. This distribution exhibits a maximum, which determines the most likely horizon for gaining a specific return. There is a significant difference between the inverse statistics of financial market data and those of a fractional Brownian motion (fBm) as an uncorrelated time series, which makes this a suitable criterion for measuring the information content in financial data. In this paper we perform this analysis for the DJIA and S&P500 as two developed markets and the Tehran price index (TEPIX) as an emerging market. We also compare these probability distributions with the fBm probability distribution to detect when the behavior of the stocks is the same as fBm.
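A minimal sketch of the computation, assuming a synthetic log-price series as a stand-in for an index such as the DJIA or TEPIX and a hypothetical return level rho = 5%.

```python
import numpy as np

def inverse_statistics(log_price, rho=0.05):
    """For every starting day, the first waiting time tau (in days) at
    which the cumulative log-return reaches the target level rho."""
    horizons = []
    for t in range(len(log_price) - 1):
        gain = log_price[t + 1:] - log_price[t]
        hit = np.nonzero(gain >= rho)[0]
        if hit.size:
            horizons.append(hit[0] + 1)
    return np.array(horizons)

# Toy series with small positive drift, standing in for an index.
rng = np.random.default_rng(8)
log_price = np.cumsum(rng.normal(2e-4, 0.01, 5000))
tau = inverse_statistics(log_price, rho=0.05)
counts = np.bincount(tau)
print(f"most likely horizon (distribution mode): {counts.argmax()} days")
```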
Statistical Analysis of speckle noise reduction techniques for echocardiographic Images
NASA Astrophysics Data System (ADS)
Saini, Kalpana; Dewal, M. L.; Rohit, Manojkumar
2011-12-01
Echocardiography is a safe, easy and fast technology for diagnosing cardiac diseases. As in other ultrasound images, these images also contain speckle noise. In some cases this speckle noise is useful, such as in motion detection, but in general noise removal is required for better analysis of the image and proper diagnosis. Different adaptive and anisotropic filters are included in the statistical analysis. Statistical parameters such as signal-to-noise ratio (SNR), peak signal-to-noise ratio (PSNR), and root mean square error (RMSE) are calculated for performance measurement. Another important aspect is that blurring may occur during speckle noise removal, so it is preferred that the filter be able to enhance edges during noise removal.
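The quality metrics named above are easy to state precisely; a minimal sketch with synthetic multiplicative speckle follows (the PSNR definition assumes an 8-bit intensity scale).

```python
import numpy as np

def rmse(reference, filtered):
    """Root mean square error between reference and processed image."""
    return np.sqrt(np.mean((reference.astype(float) - filtered.astype(float)) ** 2))

def psnr(reference, filtered, peak=255.0):
    """Peak signal-to-noise ratio in dB, assuming 8-bit images."""
    return 20 * np.log10(peak / rmse(reference, filtered))

def snr(reference, filtered):
    """Signal-to-noise ratio in dB: signal power over residual power."""
    noise = reference.astype(float) - filtered.astype(float)
    return 10 * np.log10(np.sum(reference.astype(float) ** 2) / np.sum(noise ** 2))

rng = np.random.default_rng(9)
clean = rng.integers(0, 256, (128, 128)).astype(float)
noisy = clean * rng.normal(1.0, 0.2, clean.shape)     # multiplicative speckle
print(f"PSNR = {psnr(clean, noisy):.1f} dB, RMSE = {rmse(clean, noisy):.1f}")
```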
Negative values of quasidistributions and quantum wave and number statistics
NASA Astrophysics Data System (ADS)
Peřina, J.; Křepelka, J.
2018-04-01
We consider nonclassical wave and number quantum statistics, and perform a decomposition of quasidistributions for nonlinear optical down-conversion processes using Bessel functions. We show that negative values of the quasidistribution do not directly represent probabilities; however, they directly influence measurable number statistics. Negative terms in the decomposition related to the nonclassical behavior with negative amplitudes of probability can be interpreted as positive amplitudes of probability in the negative orthogonal Bessel basis, whereas positive amplitudes of probability in the positive basis describe classical cases. However, probabilities are positive in all cases, including negative values of quasidistributions. Negative and positive contributions of decompositions to quasidistributions are estimated. The approach can be adapted to quantum coherence functions.
Bottle, Alex; Chase, Helen E; Aylin, Paul P; Loeffler, Mark
2018-05-01
Joint replacement revision is the most widely used long-term outcome measure in elective hip and knee surgery. Return to theatre (RTT) has been proposed as an additional outcome measure, but how it compares with revision in its statistical performance is unknown. National hospital administrative data for England were used to compare RTT at 90 days (RTT90) with revision rates within 3 years by surgeon. Standard power calculations were run for different scenarios. Funnel plots were used to count the number of surgeons with unusually high or low rates. From 2006 to 2011, there were 297 650 hip replacements (HRs) among 2952 surgeons and 341 226 knee replacements (KRs) among 2343 surgeons. RTT90 rates were 2.1% for HR and 1.5% for KR; 3-year revision rates were 2.1% for HR and 2.2% for KR. Statistical power to detect surgeons with poor performance on either metric was particularly low for surgeons performing up to 50 cases per year over the 5 years. The correlation between the risk-adjusted surgeon-level rates for the two outcomes was +0.51 for HR and +0.20 for KR, both p<0.001. There was little agreement between the measures regarding which surgeons had significantly high or low rates. RTT90 appears to provide useful and complementary information on surgeon performance and should be considered alongside revision rates, but low case loads considerably reduce the power to detect unusual performance on either metric. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
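A minimal sketch of the kind of power calculation described, assuming statsmodels' two-proportion normal approximation and illustrative rates (a surgeon whose true revision rate is double the 2.1% average, compared against a much larger reference pool); the exact scenarios in the paper may differ.

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Power to flag a surgeon whose true 3-year revision rate is 4.2%
# against a 2.1% benchmark, by yearly hip case load (one-sided test).
effect = proportion_effectsize(0.042, 0.021)
solver = NormalIndPower()
for cases_per_year in (20, 50, 100):
    n = cases_per_year * 5                 # five years of procedures
    power = solver.power(effect_size=effect, nobs1=n, alpha=0.05,
                         ratio=50, alternative="larger")
    print(f"{cases_per_year} cases/yr over 5 yr: power = {power:.2f}")
```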
Baune, Bernhard T; Sluth, Lasse B; Olsen, Christina K
2018-03-15
Major Depressive Disorder (MDD) is a complex disease characterized by emotional, physical and cognitive symptoms. We explored the efficacy of vortioxetine versus placebo on outcomes of cognition, functioning and mood symptoms in working patients with depression, using paroxetine as an active reference. Gainfully employed patients (18-65 years, N = 152) with MDD were randomized 1:1:1 to 8 weeks' double-blind, parallel treatment either with vortioxetine (10 mg/day) or paroxetine (20 mg/day), or with placebo. The primary efficacy measure was the Digit Symbol Substitution Test (DSST), analyzed using a mixed model for repeated measurements, and the key secondary efficacy measure was the University of California, San Diego Performance-based Skills Assessment - Brief (UPSA-B), analyzed using analysis of covariance (last observation carried forward). At week 8, DSST and UPSA-B performance had improved relative to baseline in all treatment groups, with no statistically significant differences between treatment groups. While improvements in mood were comparable for vortioxetine and paroxetine, numerical improvements in cognitive performance (DSST) were larger with vortioxetine. Vortioxetine significantly improved overall cognitive performance and clinician-rated functioning relative to placebo. The majority of adverse events were mild or moderate, with nausea being the most common adverse event for vortioxetine. Small sample sizes implied limited statistical power. This explorative study showed no significant differences versus placebo in DSST or UPSA-B performance at week 8. However, secondary results support vortioxetine as an effective and well-tolerated antidepressant, supporting an added benefit for cognition and functioning, which could have particular therapeutic relevance for the working patient population. Copyright © 2017 H Lundbeck A/S. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Cianciara, Aleksander
2016-09-01
The paper presents the results of research aimed at verifying the hypothesis that the Weibull distribution is an appropriate statistical model of microseismic emission characteristics, namely the energy of phenomena and the inter-event time. The emission under consideration is understood to be induced by natural rock-mass fracturing. Because the recorded emission contains noise, it is subjected to appropriate filtering. The study has been conducted using statistical verification of the null hypothesis that the Weibull distribution fits the empirical cumulative distribution function. As the model describing the cumulative distribution function is given in analytical form, its verification may be performed using the Kolmogorov-Smirnov goodness-of-fit test. Interpretations by means of probabilistic methods require specifying the correct model of the statistical distribution of the data, because these methods use not the measurement data directly but their statistical distributions, e.g., in the method based on hazard analysis, or in the one that uses maximum-value statistics.
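A minimal sketch of this verification, assuming SciPy and synthetic inter-event times; note that estimating the Weibull parameters from the same sample makes the plain KS p-value optimistic, a caveat any real analysis must handle.

```python
import numpy as np
from scipy import stats

# Hypothetical filtered catalogue of inter-event times (seconds) between
# microseismic events induced by natural rock-mass fracturing.
rng = np.random.default_rng(10)
inter_event = rng.weibull(0.8, 2000) * 45.0

# Fit the two-parameter Weibull by maximum likelihood (location fixed
# at zero) and test the fit with Kolmogorov-Smirnov.
shape, loc, scale = stats.weibull_min.fit(inter_event, floc=0)
d, p = stats.kstest(inter_event, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f}, scale={scale:.1f}, KS D={d:.3f}, p={p:.3f}")
```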
Accuracy assessment for a multi-parameter optical calliper in on line automotive applications
NASA Astrophysics Data System (ADS)
D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.
2017-08-01
In this work, a methodological approach based on the evaluation of measurement uncertainty is applied to an experimental test case from the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measuring performance of the system for on-line applications, where measurement requirements are becoming more stringent. In particular, with reference to the industrial production and control strategies for high-performing turbochargers, two uncertainty models to be used with the optical calliper are proposed, discussed and compared. The models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the possible advantages deriving from measurement uncertainty modelling, in order to keep the uncertainty propagation under control in all the indirect measurements useful for statistical production control, on which further improvements can be based.
McDonnell, J D; Schunck, N; Higdon, D; Sarich, J; Wild, S M; Nazarewicz, W
2015-03-27
Statistical tools of uncertainty quantification can be used to assess the information content of measured observables with respect to present-day theoretical models, to estimate model errors and thereby improve predictive capability, to extrapolate beyond the regions reached by experiment, and to provide meaningful input to applications and planned measurements. To showcase new opportunities offered by such tools, we make a rigorous analysis of theoretical statistical uncertainties in nuclear density functional theory using Bayesian inference methods. By considering the recent mass measurements from the Canadian Penning Trap at Argonne National Laboratory, we demonstrate how the Bayesian analysis and a direct least-squares optimization, combined with high-performance computing, can be used to assess the information content of the new data with respect to a model based on the Skyrme energy density functional approach. Employing the posterior probability distribution computed with a Gaussian process emulator, we apply the Bayesian framework to propagate theoretical statistical uncertainties in predictions of nuclear masses, two-neutron dripline, and fission barriers. Overall, we find that the new mass measurements do not impose a constraint that is strong enough to lead to significant changes in the model parameters. The example discussed in this study sets the stage for quantifying and maximizing the impact of new measurements with respect to current modeling and guiding future experimental efforts, thus enhancing the experiment-theory cycle in the scientific method.
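A toy sketch of the emulate-then-propagate idea, assuming scikit-learn's Gaussian process regressor, a cheap stand-in function for the expensive model, and an invented Gaussian posterior over parameters; it is not the nuclear-DFT workflow itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Stand-in for an expensive model: observable(theta) evaluated on a
# small design of parameter points (in reality, DFT mass calculations).
def expensive_model(theta):
    return np.sin(3 * theta[:, 0]) + 0.5 * theta[:, 1] ** 2

rng = np.random.default_rng(11)
theta_design = rng.uniform(-1, 1, (40, 2))
y_design = expensive_model(theta_design)

# Train the emulator once on the design points...
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(theta_design, y_design)

# ...then propagate a cloud of posterior parameter samples through it to
# obtain the induced statistical uncertainty on the predicted observable.
posterior_theta = rng.multivariate_normal([0.2, -0.1], 0.01 * np.eye(2), 5000)
pred = gp.predict(posterior_theta)
print(f"prediction = {pred.mean():.3f} +/- {pred.std():.3f}")
```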
The effectiveness of repeat lumbar transforaminal epidural steroid injections.
Murthy, Naveen S; Geske, Jennifer R; Shelerud, Randy A; Wald, John T; Diehn, Felix E; Thielen, Kent R; Kaufmann, Timothy J; Morris, Jonathan M; Lehman, Vance T; Amrami, Kimberly K; Carter, Rickey E; Maus, Timothy P
2014-10-01
The aim of this study was to determine 1) if repeat lumbar transforaminal epidural steroid injections (TFESIs) resulted in recovery of pain relief, which has waned since an index injection, and 2) if cumulative benefit could be achieved by repeat injections within 3 months of the index injection. Retrospective observational study with statistical modeling of the response to repeat TFESI. Academic radiology practice. Two thousand eighty-seven single-level TFESIs were performed for radicular pain on 933 subjects. Subjects received repeat TFESIs >2 weeks and <1 year from the index injection. Hierarchical linear modeling was performed to evaluate changes in continuous and categorical pain relief outcomes after repeat TFESI. Subgroup analyses were performed on patients with <3 months duration of pain (acute pain), patients receiving repeat injections within 3 months (clustered injections), and in patients with both acute pain and clustered injections. Repeat TFESIs achieved pain relief in both continuous and categorical outcomes. Relative to the index injection, there was a minimal but statistically significant decrease in pain relief in modeled continuous outcome measures with subsequent injections. Acute pain patients recovered all prior benefit with a statistically significant cumulative benefit. Patients receiving clustered injections achieved statistically significant cumulative benefit, of greater magnitude in acute pain patients. Repeat TFESI may be performed for recurrence of radicular pain with the expectation of recovery of most or all previously achieved benefit; acute pain patients will likely recover all prior benefit. Repeat TFESIs within 3 months of the index injection can provide cumulative benefit. Wiley Periodicals, Inc.
Using statistical text classification to identify health information technology incidents
Chai, Kevin E K; Anthony, Stephen; Coiera, Enrico; Magrabi, Farah
2013-01-01
Objective To examine the feasibility of using statistical text classification to automatically identify health information technology (HIT) incidents in the USA Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database. Design We used a subset of 570 272 incidents including 1534 HIT incidents reported to MAUDE between 1 January 2008 and 1 July 2010. Text classifiers using regularized logistic regression were evaluated with both ‘balanced’ (50% HIT) and ‘stratified’ (0.297% HIT) datasets for training, validation, and testing. Dataset preparation, feature extraction, feature selection, cross-validation, classification, performance evaluation, and error analysis were performed iteratively to further improve the classifiers. Feature-selection techniques such as removing short words and stop words, stemming, lemmatization, and principal component analysis were examined. Measurements κ statistic, F1 score, precision and recall. Results Classification performance was similar on both the stratified (0.954 F1 score) and balanced (0.995 F1 score) datasets. Stemming was the most effective technique, reducing the feature set size to 79% while maintaining comparable performance. Training with balanced datasets improved recall (0.989) but reduced precision (0.165). Conclusions Statistical text classification appears to be a feasible method for identifying HIT reports within large databases of incidents. Automated identification should enable more HIT problems to be detected, analyzed, and addressed in a timely manner. Semi-supervised learning may be necessary when applying machine learning to big data analysis of patient safety incidents and requires further investigation. PMID:23666777
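A minimal sketch of regularized logistic regression over text features, assuming scikit-learn; the four toy narratives and labels are invented and stand in for MAUDE reports.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for MAUDE narratives (1 = HIT incident, 0 = other).
reports = [
    "interface sent medication order to wrong patient record",
    "software upgrade caused loss of archived infusion data",
    "catheter tip fractured during removal",
    "battery overheated in portable monitor",
]
labels = [1, 1, 0, 0]

# L2-regularized logistic regression over TF-IDF features; C controls
# the regularization strength discussed in the abstract.
clf = make_pipeline(TfidfVectorizer(stop_words="english"),
                    LogisticRegression(C=1.0))
clf.fit(reports, labels)
pred = clf.predict(["system displayed another patient's lab results"])
print("classified as HIT incident:", bool(pred[0]))
```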
Radar cross section measurements of a scale model of the space shuttle orbiter vehicle
NASA Technical Reports Server (NTRS)
Yates, W. T.
1978-01-01
A series of microwave measurements was conducted to determine the radar cross section of the Space Shuttle Orbiter vehicle at a frequency and at aspect angles applicable to re-entry radar acquisition and tracking. The measurements were performed in a microwave anechoic chamber using a 1/15th scale model and a frequency applicable to C-band tracking radars. The data were digitally recorded and processed to yield statistical descriptions useful for prediction of orbiter re-entry detection and tracking ranges.
NASA Astrophysics Data System (ADS)
Panteleev, Ivan; Bayandin, Yuriy; Naimark, Oleg
2017-12-01
This work performs a correlation analysis of the statistical properties of continuous acoustic emission recorded in different parts of marble and fiberglass laminate samples under quasi-static deformation. A spectral coherence measure for time series, which generalizes the squared coherence spectrum to multidimensional series, was chosen. The spectral coherence measure was estimated in a sliding time window for two parameters of the acoustic emission multifractal singularity spectrum: the spectrum width and the generalized Hurst exponent realizing the maximum of the singularity spectrum. It is shown that the preparation of the macrofracture focus is accompanied by synchronization (coherent behavior) of the statistical properties of acoustic emission in selected frequency intervals.
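A minimal sketch of a sliding-window coherence estimate for the ordinary two-series case, assuming SciPy; the two synthetic series stand in for a multifractal AE parameter tracked at two sensor locations, not the paper's multidimensional generalization.

```python
import numpy as np
from scipy.signal import coherence

# Two hypothetical time series of an AE singularity-spectrum parameter
# (e.g. the spectrum width) computed at two sensor locations.
rng = np.random.default_rng(12)
common = rng.normal(size=4096)
width_a = common + 0.5 * rng.normal(size=4096)
width_b = np.roll(common, 3) + 0.5 * rng.normal(size=4096)

# Magnitude-squared coherence in a sliding window: values near 1 in a
# frequency band indicate synchronized statistics before macrofracture.
window = 1024
for start in range(0, 4096 - window + 1, window):
    f, cxy = coherence(width_a[start:start + window],
                       width_b[start:start + window], nperseg=256)
    print(f"window {start:4d}: mean coherence = {cxy.mean():.2f}")
```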
Evaluation of an Automated Keywording System.
ERIC Educational Resources Information Center
Malone, Linda C.; And Others
1990-01-01
Discussion of automated indexing techniques focuses on ways to statistically document improvements in the development of an automated keywording system over time. The system developed by the Joint Chiefs of Staff to automate the storage, categorization, and retrieval of information from military exercises is explained, and performance measures are…
Post-hurricane forest damage assessment using satellite remote sensing
W. Wang; J.J. Qu; X. Hao; Y. Liu; J.A. Stanturf
2010-01-01
This study developed a rapid assessment algorithm for post-hurricane forest damage estimation using moderate resolution imaging spectroradiometer (MODIS) measurements. The performance of five commonly used vegetation indices as post-hurricane forest damage indicators was investigated through statistical analysis. The Normalized Difference Infrared Index (NDII) was...
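A minimal sketch of the index itself, assuming NIR and SWIR reflectance arrays (for MODIS, commonly band 2 and band 6); the arrays below are random stand-ins for pre- and post-hurricane composites.

```python
import numpy as np

def ndii(nir, swir):
    """Normalized Difference Infrared Index from NIR and SWIR reflectance.

    Canopy damage and water loss lower NDII, so a pre- minus post-storm
    difference can serve as a damage signal."""
    nir, swir = np.asarray(nir, float), np.asarray(swir, float)
    return (nir - swir) / (nir + swir)

rng = np.random.default_rng(13)
nir_pre, swir_pre = rng.uniform(0.3, 0.5, (2, 100, 100))
nir_post, swir_post = rng.uniform(0.2, 0.5, (2, 100, 100))
damage_signal = ndii(nir_pre, swir_pre) - ndii(nir_post, swir_post)
print(f"mean NDII drop: {damage_signal.mean():.3f}")
```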
Australian Vocational Education and Training Statistics: VET in Schools, 2008
ERIC Educational Resources Information Center
National Centre for Vocational Education Research (NCVER), 2010
2010-01-01
This report presents information about senior secondary school students undertaking vocational education and training (VET) through the program known as "VET in Schools" during 2008. It includes information on participation, students, courses and qualifications, and subjects. The information on key performance measures and program…
Validation of minicams for measuring concentrations of chemical agent in environmental air
DOE Office of Scientific and Technical Information (OSTI.GOV)
Menton, R.G.; Hayes, T.L.; Chou, Y.L.
1993-05-13
Environmental monitoring for chemical agents is necessary to ensure that notification and appropriate action will be taken in the event that there is a release of such agents exceeding control limits into the workplace outside of engineering controls. Prior to implementing new analytical procedures for environmental monitoring, precision and accuracy (PA) tests are conducted to ensure that an agent monitoring system performs according to specified accuracy, precision, and sensitivity requirements. This testing not only establishes the accuracy and precision of the method, but also determines what factors can affect the method's performance. Performance measures that are particularly important in agent monitoring include the Detection Limit (DL), Decision Limit (DC), Found Action Level (FAL), and the Target Action Level (TAL). PA experiments were performed at Battelle's Medical Research and Evaluation Facility (MREF) to validate the use of the miniature chemical agent monitoring system (MINICAMS) for measuring environmental air concentrations of sulfur mustard (HD). This presentation discusses the experimental and statistical approaches for characterizing the performance of MINICAMS for measuring HD in air.
NASA Astrophysics Data System (ADS)
Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.
2017-03-01
In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.
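For contrast with the joint approach described above, here is a toy sketch of projection-based two-material decomposition performed separately from reconstruction; the basis attenuation values, bin count, and noise level are invented for illustration.

```python
import numpy as np

# mu[i, j]: attenuation (1/cm) of basis material j in energy bin i (invented values)
mu = np.array([[0.40, 0.25],
               [0.30, 0.22],
               [0.22, 0.20],
               [0.18, 0.19]])
true_t = np.array([2.0, 1.5])                 # cm of each basis material

line_integral = mu @ true_t                   # -log of transmitted fraction per bin
noisy = line_integral + 0.01 * np.random.default_rng(1).standard_normal(4)

# Per-measurement decomposition: the "separate" strategy whose loss of
# statistical information motivates the joint SIR approach described above.
est, *_ = np.linalg.lstsq(mu, noisy, rcond=None)
print("estimated thicknesses (cm):", np.round(est, 2))
```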
Ultrasound Metrology in Mexico: a round robin test for medical diagnostics
NASA Astrophysics Data System (ADS)
Amezola Luna, R.; López Sánchez, A. L.; Elías Juárez, A. A.
2011-02-01
This paper presents preliminary statistical results from an ongoing medical ultrasound imaging study of particular relevance to the gynecology and obstetrics areas. Its scope is twofold: first, to survey the medical ultrasound infrastructure available in the cities of Queretaro, Mexico, and second, to promote the use of traceable measurement standards as a key aspect of assuring the quality of ultrasound examinations performed by medical specialists. The experimental methodology is based on a round robin test using an ultrasound phantom for medical imaging. Each physician, using their own ultrasound machine, couplant, and facilities, measures the size and depth of a set of pre-defined reflecting and absorbing targets in the reference phantom, which simulate human pathologies. The measurements give the medical specialist objective feedback on performance characteristics of their ultrasound examination system, such as measurement accuracy, dead zone, axial resolution, depth of penetration, and anechoic target detection. By the end of March 2010, 66 entities with medical ultrasound facilities, from both public and private institutions, had performed measurements. A network of medical ultrasound calibration laboratories in Mexico, with traceability to the International System of Units via national measurement standards, may indeed contribute to reducing measurement deviations and thus attaining better diagnostics.
Tang, Jie; Nett, Brian E; Chen, Guang-Hong
2009-10-07
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for noise performance and spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose over more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
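A toy sketch of one of the compared objectives, penalized weighted least squares (PWLS), under an invented system matrix and noise model; this is not the authors' implementation, only the generic statistical cost with a quadratic roughness penalty minimized by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x_true = np.zeros(n)
x_true[20:40] = 1.0                              # piecewise-constant phantom
A = rng.normal(size=(80, n)) / np.sqrt(n)        # stand-in system matrix
var = 0.01 * np.ones(80)                         # per-ray noise variance
b = A @ x_true + np.sqrt(var) * rng.normal(size=80)

W = 1.0 / var                                    # statistical weights ~ 1/variance
beta = 0.1
D = np.eye(n) - np.eye(n, k=1)                   # finite-difference roughness penalty

# Gradient descent with a step set from the Lipschitz constant of the cost.
L = np.linalg.norm(np.sqrt(W)[:, None] * A, 2) ** 2 + beta * np.linalg.norm(D, 2) ** 2
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (W * (A @ x - b)) + beta * (D.T @ (D @ x))
    x -= grad / L
print("relative error: %.3f" % (np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))
```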
Pharmacy Students' Test-Taking Motivation-Effort on a Low-Stakes Standardized Test
2011-01-01
Objective To measure third-year pharmacy students' level of motivation while completing the Pharmacy Curriculum Outcomes Assessment (PCOA) administered as a low-stakes test to better understand use of the PCOA as a measure of student content knowledge. Methods Student motivation was manipulated through an incentive (ie, personal letter from the dean) and a process of statistical motivation filtering. Data were analyzed to determine any differences between the experimental and control groups in PCOA test performance, motivation to perform well, and test performance after filtering for low motivation-effort. Results Incentivizing students diminished the need for filtering PCOA scores for low effort. Where filtering was used, performance scores improved, providing a more realistic measure of aggregate student performance. Conclusions To ensure that PCOA scores are an accurate reflection of student knowledge, incentivizing and/or filtering for low motivation-effort among pharmacy students should be considered fundamental best practice when the PCOA is administered as a low-stakes test. PMID:21655395
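A hedged illustration of motivation filtering as described: scores from examinees reporting low effort are dropped before aggregating. The column names, effort scale, and cutoff are assumptions, not the study's instrument; pandas is assumed.

```python
import pandas as pd

df = pd.DataFrame({
    "pcoa_score": [310, 295, 402, 388, 265, 371],
    "effort":     [  2,   1,   5,   4,   1,   5],   # self-reported, 1-5 scale (assumed)
})
EFFORT_CUTOFF = 3                                   # hypothetical filtering threshold
filtered = df[df["effort"] >= EFFORT_CUTOFF]

print("unfiltered mean: %.1f" % df["pcoa_score"].mean())
print("filtered mean:   %.1f" % filtered["pcoa_score"].mean())
```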
Story Processing Ability in Cognitively Healthy Younger and Older Adults
Wright, Heather Harris; Capilouto, Gilson J.; Srinivasan, Cidambi; Fergadiotis, Gerasimos
2012-01-01
Purpose The purpose of the study was to examine the relationships among measures of comprehension and production for stories depicted in wordless picture books and measures of memory and attention for 2 age groups. Method Sixty cognitively healthy adults participated. They consisted of two groups—young adults (20–29 years of age) and older adults (70–89 years of age). Participants completed cognitive measures and several discourse tasks; these included telling stories depicted in wordless picture books and answering multiple-choice comprehension questions pertaining to the story. Results The 2 groups did not differ significantly for proportion of story propositions conveyed; however, the younger group performed significantly better on the comprehension measure as compared with the older group. Only the older group demonstrated a statistically significant relationship between the story measures. Performance on the production and comprehension measures significantly correlated with performance on the cognitive measures for the older group but not for the younger group. Conclusions The relationship between adults’ comprehension of stimuli used to elicit narrative production samples and their narrative productions differed across the life span, suggesting that discourse processing performance changes in healthy aging. Finally, the study’s findings suggest that memory and attention contribute to older adults’ story processing performance. PMID:21106701
Wavelet-based multiscale performance analysis: An approach to assess and improve hydrological models
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Khosa, Rakesh; Adamowski, Jan; ch, Sudheer; Partheepan, G.; Anand, Jatin; Narsimlu, Boini
2014-12-01
The temporal dynamics of hydrological processes are spread across different time scales and, as such, the performance of hydrological models cannot be estimated reliably from global performance measures that assign a single number to the fit of a simulated time series to an observed reference series. Accordingly, it is important to analyze model performance at different time scales. Wavelets have been used extensively in the area of hydrological modeling for multiscale analysis, and have been shown to be very reliable and useful in understanding dynamics across time scales as these evolve in time. In this paper, wavelet-based multiscale performance measures for hydrological models are proposed and tested (i.e., the Multiscale Nash-Sutcliffe Criteria and Multiscale Normalized Root Mean Square Error). The main advantage of this method is that it provides a quantitative measure of model performance across different time scales. In the proposed approach, model and observed time series are decomposed using the Discrete Wavelet Transform (known as the à trous wavelet transform), and performance measures of the model are obtained at each time scale. The applicability of the proposed method was explored using various case studies, both real and synthetic. The synthetic case studies included various kinds of errors (e.g., timing error, under- and over-prediction of high and low flows) in outputs from a hydrologic model. The real-data case studies included simulation results of both the process-based Soil Water Assessment Tool (SWAT) model and statistical models, namely the Coupled Wavelet-Volterra (WVC), Artificial Neural Network (ANN), and Auto Regressive Moving Average (ARMA) methods. For the SWAT model, data from the Wainganga and Sind Basins (India) were used, while for the Wavelet-Volterra, ANN and ARMA models, data from the Cauvery River Basin (India) and the Fraser River (Canada) were used. The study also explored the effect of the choice of wavelet in multiscale model evaluation. It was found that the proposed wavelet-based performance measures, namely the MNSC (Multiscale Nash-Sutcliffe Criteria) and MNRMSE (Multiscale Normalized Root Mean Square Error), are more reliable measures than traditional performance measures such as the Nash-Sutcliffe Criteria (NSC), Root Mean Square Error (RMSE), and Normalized Root Mean Square Error (NRMSE). Further, the proposed methodology can be used to: i) compare different hydrological models (both physical and statistical), and ii) help in model calibration.
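A sketch of the multiscale idea under stated assumptions: PyWavelets' stationary wavelet transform (pywt.swt) stands in for the à trous decomposition, and the Nash-Sutcliffe score is evaluated per detail scale on synthetic observed and simulated series.

```python
import numpy as np
import pywt

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of sim against obs."""
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(0)
t = np.arange(512)
obs = np.sin(2 * np.pi * t / 64) + 0.1 * rng.standard_normal(t.size)
sim = np.sin(2 * np.pi * (t - 2) / 64)           # model with a small timing error

# pywt.swt needs a length divisible by 2**level; it returns scales
# ordered from coarsest to finest.
level = 3
for scale, ((_, d_obs), (_, d_sim)) in enumerate(
        zip(pywt.swt(obs, "haar", level=level),
            pywt.swt(sim, "haar", level=level)), start=1):
    print("detail scale %d: NSE = %.3f" % (scale, nse(d_obs, d_sim)))
print("overall NSE = %.3f" % nse(obs, sim))
```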
Online neural monitoring of statistical learning.
Batterink, Laura J; Paller, Ken A
2017-05-01
The extraction of patterns in the environment plays a critical role in many types of human learning, from motor skills to language acquisition. This process is known as statistical learning. Here we propose that statistical learning has two dissociable components: (1) perceptual binding of individual stimulus units into integrated composites and (2) storing those integrated representations for later use. Statistical learning is typically assessed using post-learning tasks, such that the two components are conflated. Our goal was to characterize the online perceptual component of statistical learning. Participants were exposed to a structured stream of repeating trisyllabic nonsense words and a random syllable stream. Online learning was indexed by an EEG-based measure that quantified neural entrainment at the frequency of the repeating words relative to that of individual syllables. Statistical learning was subsequently assessed using conventional measures in an explicit rating task and a reaction-time task. In the structured stream, neural entrainment to trisyllabic words was higher than in the random stream, increased as a function of exposure to track the progression of learning, and predicted performance on the reaction time (RT) task. These results demonstrate that monitoring this critical component of learning via rhythmic EEG entrainment reveals a gradual acquisition of knowledge whereby novel stimulus sequences are transformed into familiar composites. This online perceptual transformation is a critical component of learning. Copyright © 2017 Elsevier Ltd. All rights reserved.
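A minimal sketch of the entrainment index on synthetic data: spectral power at the word rate relative to the syllable rate. The sampling rate and the 3.33 Hz syllable / 1.11 Hz word frequencies follow the typical trisyllabic design but are assumptions here, and the EEG signal is simulated.

```python
import numpy as np

fs = 250.0                                   # assumed EEG sampling rate, Hz
t = np.arange(0, 120, 1 / fs)                # two minutes of exposure
word_f, syll_f = 1.11, 3.33                  # trisyllabic word and syllable rates
eeg = (0.8 * np.sin(2 * np.pi * word_f * t)  # learned word-level rhythm
       + 0.5 * np.sin(2 * np.pi * syll_f * t)
       + np.random.default_rng(0).standard_normal(t.size))

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def power(f):
    # power in the FFT bin closest to frequency f
    return spec[np.argmin(np.abs(freqs - f))]

print("word/syllable entrainment ratio: %.2f" % (power(word_f) / power(syll_f)))
```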
Steeves, Darren; Campagna, Phil
2018-02-14
This project investigated whether there was a relationship between maximal aerobic power and the recovery or performance in elite ice hockey players during a simulated hockey game. An on-ice protocol was used to simulate a game of ice hockey. Recovery values were determined by the differences in lactate and heart rate measures. Total distance traveled was also recorded as a performance measure. On two other days, subjects returned and completed a maximal aerobic power test on a treadmill and a maximal lactate test on ice. Statistical analysis showed no relationship between maximal aerobic power or maximal lactate values and recovery (heart rate, lactate) or the performance measure of distance traveled. It was concluded there was no relationship between maximal aerobic power and recovery during a simulated game in elite hockey players.
Bradshaw, Elizabeth J; Keogh, Justin W L; Hume, Patria A; Maulder, Peter S; Nortje, Jacques; Marnewick, Michel
2009-06-01
The purpose of this study was to examine the role of neuromotor noise on golf swing performance in high- and low-handicap players. Selected two-dimensional kinematic measures of 20 male golfers (n=10 per high- or low-handicap group) performing 10 golf swings with a 5-iron club were obtained through video analysis. Neuromotor noise was calculated by deducting the standard error of the measurement from the coefficient of variation obtained from intra-individual analysis. Statistical methods included linear regression analysis and one-way analysis of variance using SPSS. Absolute invariance in the key technical positions (e.g., at the top of the backswing) of the golf swing appears to be a more favorable technique for skilled performance.
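One reading of the noise calculation described above, in a short sketch: the intra-individual coefficient of variation minus the standard error of measurement, both expressed as percentages. The kinematic values and the SEM are invented.

```python
import numpy as np

swings = np.array([112.4, 115.1, 113.8, 111.9, 114.6,
                   113.2, 112.8, 115.4, 112.1, 114.0])  # e.g. a joint angle, deg
sem = 0.9                                 # assumed digitizing measurement error, deg

cv = 100 * swings.std(ddof=1) / swings.mean()   # intra-individual CV, %
sem_pct = 100 * sem / swings.mean()             # SEM on the same percentage scale
neuromotor_noise = cv - sem_pct
print("CV %.2f%%, SEM %.2f%%, neuromotor noise %.2f%%"
      % (cv, sem_pct, neuromotor_noise))
```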
Watanabe, Yuya; Yamada, Yosuke; Yoshida, Tsukasa; Matsui, Tomoyuki; Seo, Kazuya; Azuma, Yoshikazu; Hiramoto, Machiko; Miura, Yuichiro; Fukushima, Hideaki; Shimazu, Akito; Eto, Toshiaki; Saotome, Homare; Kida, Noriyuki; Morihara, Toru
2017-10-30
This study examined anthropometric and fitness profiles of Japanese female professional baseball players and investigated the relationship between players' physical fitness and in-season game performance. Fifty-seven players who were registered in the Japan Women's Baseball League (JWBL) participated. Height, weight, grip strength, back strength, knee-extension and -flexion strength, hamstring extensibility, vertical jump height, and horizontal jump distance were measured at pre-season (February and March) in 2013. Game performance during the 2013 season (March to November) was obtained from official JWBL statistics. Vertical jump height showed significant positive correlations with individual performance records [e.g., total bases (r = 0.551), slugging percentage (r = 0.459), and stolen bases (r = 0.442)]. Similar relationships were observed between horizontal jump distance and performance statistics in most cases. In contrast, grip, back, and lower-limb strength, and hamstring extensibility were not significantly correlated with game performance. Stepwise regression analysis selected vertical jump height as an independent variable significantly correlating with several game performance measures (e.g., total bases: adjusted R² = 0.257). Also, vertical jump height and body mass index were identified as independent variables significantly associated with stolen bases (adjusted R² = 0.251). Maximal jump performance, rather than simple isometric muscle strength or flexibility, is a good performance test that can be used at the end of pre-season to predict in-season batting and stolen base performance. Our findings demonstrate the importance of constructing pre-season training programs to enhance lower-limb muscular power that is linked to successful in-season performance in female baseball players.
A comprehensive review of arsenic levels in the semiconductor manufacturing industry.
Park, Donguk; Yang, Haengsun; Jeong, Jeeyeon; Ha, Kwonchul; Choi, Sangjun; Kim, Chinyon; Yoon, Chungsik; Park, Dooyong; Paek, Domyung
2010-11-01
This paper presents a summary of arsenic level statistics from air and wipe samples taken from studies conducted in fabrication operations. The main objectives of this study were not only to describe arsenic measurement data but also, through a literature review, to categorize fabrication workers in accordance with observed arsenic levels. All airborne arsenic measurements reported were included in the summary statistics for analysis of the measurement data. The arithmetic mean was estimated assuming a lognormal distribution from the geometric mean and the geometric standard deviation or the range. In addition, weighted arithmetic means (WAMs) were calculated based on the number of measurements reported for each mean. Analysis of variance (ANOVA) was employed to compare arsenic levels classified according to several categories such as the year, sampling type, location sampled, operation type, and cleaning technique. Nine papers were found reporting airborne arsenic measurement data from maintenance workers or maintenance areas in semiconductor chip-making plants. A total of 40 statistical summaries from seven articles were identified that represented a total of 423 airborne arsenic measurements. Arsenic exposure levels taken during normal operating activities in implantation operations (WAM = 1.6 μg m⁻³, no. of samples = 77, no. of statistical summaries = 2) were found to be lower than exposure levels of engineers who were involved in maintenance works (7.7 μg m⁻³, no. of samples = 181, no. of statistical summaries = 19). The highest level (WAM = 218.6 μg m⁻³) was associated with various maintenance works performed inside an ion implantation chamber. ANOVA revealed no significant differences in the WAM arsenic levels among the categorizations based on operation and sampling characteristics. Arsenic levels (56.4 μg m⁻³) recorded during maintenance works performed in dry conditions were found to be much higher than those from maintenance works in wet conditions (0.6 μg m⁻³). Arsenic levels from wipe samples in process areas after maintenance activities ranged from non-detectable to 146 μg cm⁻², indicating the potential for dispersion into the air and hence inhalation. We conclude that workers who are regularly or occasionally involved in maintenance work have higher potential for occupational exposure than other employees who are in charge of routine production work. In addition, fabrication workers can be classified into two groups based on the reviewed arsenic exposure levels: operators with potential for low levels of exposure and maintenance engineers with high levels of exposure. These classifications could be used as a basis for a qualitative ordinal ranking of exposure in an epidemiological study.
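The two summary computations named above in a short sketch: the arithmetic mean estimated from a lognormal geometric mean and geometric standard deviation, and the weighted arithmetic mean (WAM). The first two means and sample counts echo figures quoted above; the GM/GSD pair and the third count are invented for illustration.

```python
import numpy as np

# AM from GM and GSD under a lognormal assumption: AM = GM * exp(ln(GSD)^2 / 2).
gm, gsd = 3.2, 2.5                        # hypothetical geometric mean and GSD
am = gm * np.exp(0.5 * np.log(gsd) ** 2)
print("lognormal AM = %.2f ug/m^3" % am)

means = np.array([1.6, 7.7, 218.6])       # reported arithmetic means, ug/m^3
n_meas = np.array([77, 181, 12])          # measurement counts (last one assumed)
wam = np.average(means, weights=n_meas)   # mean weighted by measurement counts
print("WAM = %.1f ug/m^3" % wam)
```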
NASA Astrophysics Data System (ADS)
Herrera-Oliva, C. S.
2013-05-01
In this work we design and implement a method for precipitation forecasting based on the application of an elementary neural network (perceptron) to the statistical analysis of the precipitation reported in catalogues. The method is limited mainly by the catalogue length (and, to a smaller degree, by its accuracy). The method performance is measured using grading functions that evaluate a tradeoff between positive and negative aspects of performance. The method is applied to the Guadalupe Valley, Baja California, Mexico, using consecutive intervals of dt = 0.1 year and data from several climatological stations situated in and around this important wine-industry zone. We evaluated the performance of different ANN models whose input variables are the precipitation heights. The results obtained were satisfactory, except for exceptional rainfall values. Key words: precipitation forecast, artificial neural networks, statistical analysis
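A sketch of an elementary perceptron of the kind described, on invented features; real catalogue data and the paper's grading functions would replace the toy arrays.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical features per 0.1-year interval: prior rain heights at 3 stations.
X = rng.uniform(0, 50, size=(200, 3))
y = (X.mean(axis=1) > 25).astype(int)   # toy target: "wet interval" indicator

w = np.zeros(3)
b = 0.0
lr = 0.01
for _ in range(50):                     # classic perceptron update rule
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        w += lr * (yi - pred) * xi
        b += lr * (yi - pred)

acc = np.mean((X @ w + b > 0).astype(int) == y)
print("training accuracy: %.2f" % acc)
```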
"Top Performing" US Hospitals and the Health Status of Counties they Serve.
Maraccini, Amber M; Yang, Wei; Slonim, Anthony D
2018-06-01
This study (a) examined the relationships between "top performing" US hospitals and the health status of the counties they serve and (b) compared the health status of "top performing" US hospital counties with that of the remaining US counties. Statistical analyses considered US News and World Report Honor Roll ranking data, as a measure of hospital performance, and County Health Rankings (CHR) data, as a measure of county health status. "Top performing" hospital Honor Roll scores were correlated with measures of Clinical Care (p < 0.001). Counties with "top performing" US hospitals presented greater health status with regard to All Health Outcomes (p < 0.001), Length of Life (p < 0.001), Quality of Life (p < 0.001), All Health Factors (p < 0.001), Health Behaviors (p < 0.001), and Clinical Care (p < 0.001) than the remaining US counties. Hospital impact on county health status remains primarily recognized in clinical care and not in overall health. Also, counties that contain a "top performing" US hospital tend to present lower health risks to their citizens than other US counties.
Physician performance assessment using a composite quality index.
Liu, Kaibo; Jain, Shabnam; Shi, Jianjun
2013-07-10
Assessing physician performance is important for the purposes of measuring and improving quality of service and reducing healthcare delivery costs. In recent years, physician performance scorecards have been used to provide feedback on individual measures; however, one key challenge is how to develop a composite quality index that combines multiple measures for overall physician performance evaluation. A controversy arises over establishing appropriate weights to combine indicators in multiple dimensions, and it cannot be easily resolved. In this study, we proposed a generic unsupervised learning approach to develop a single composite index for physician performance assessment by using non-negative principal component analysis. We developed a new algorithm, named iterative quadratic programming, to solve the numerical issue in the non-negative principal component analysis approach. We conducted real case studies to demonstrate the performance of the proposed method. We provided interpretations from both statistical and clinical perspectives to evaluate the developed composite ranking score in practice. In addition, we implemented root cause assessment techniques to explain physician performance for improvement purposes. Copyright © 2012 John Wiley & Sons, Ltd.
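A simplified stand-in for the composite-index construction: the leading principal direction constrained to non-negative weights via projected power iteration. This heuristic is not the authors' iterative quadratic programming algorithm, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
base = rng.normal(size=(40, 1))                   # shared "quality" factor
scores = base + 0.5 * rng.normal(size=(40, 5))    # 40 physicians x 5 measures
Z = (scores - scores.mean(0)) / scores.std(0)     # standardize each measure
C = np.cov(Z, rowvar=False)

w = np.ones(5) / np.sqrt(5)
for _ in range(200):                  # power iteration with projection
    w = np.clip(C @ w, 0, None)       # enforce non-negative weights
    w /= np.linalg.norm(w)

composite = Z @ w                     # single composite quality index per physician
print("non-negative weights:", np.round(w, 3))
```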
The effects of smartphone multitasking on gait and dynamic balance.
Lee, Jeon Hyeong; Lee, Myoung Hee
2018-02-01
[Purpose] This study was performed to analyze the influence of smartphone multitasking on gait and dynamic balance. [Subjects and Methods] The subjects were 19 male and 20 female university students. There were 4 types of gait tasks: General Gait (walking without a task), Task Gait 1 (walking while writing a message), Task Gait 2 (walking while writing a message and listening to music), Task Gait 3 (walking while writing a message and having a conversation). To exclude the learning effect, the order of tasks was randomized. The Zebris FDM-T treadmill system (Zebris Medical GmbH, Germany) was used to measure left and right step length and width, and a 10 m walking test (10MWT) was conducted for gait velocity. In addition, a Timed Up and Go test (TUG) was used to measure dynamic balance. All the tasks were performed 3 times, and the mean of the measured values was analyzed. [Results] There were no statistically significant differences in step length and width. There were statistically significant differences in the 10MWT and TUG tests. [Conclusion] Using a smartphone while walking decreases a person's dynamic balance and walking ability. It is considered that accident rates are higher when using a smartphone.
Ilyin, V K; Shumilina, G A; Solovieva, Z O; Nosovsky, A M; Kaminskaya, E V
Earlier studies were furthered by examination of the parodontium anaerobic microbiota and investigation of gingival fluid immunological factors in space flight. Immunoglobulins were measured using the enzyme immunoassay (EIA). The qualitative content of key parodontium pathogens was determined with state-of-the-art molecular biology technologies such as the polymerase chain reaction. Statistical data processing was performed using principal component analysis and ensuing standard statistical analysis. Thereupon, recommendations on cosmonauts' oral and dental hygiene during space missions were developed.
[Evaluation of the capacity of the APR-DRG classification system to predict hospital mortality].
De Marco, Maria Francesca; Lorenzoni, Luca; Addari, Piero; Nante, Nicola
2002-01-01
Inpatient mortality has increasingly been used as a hospital outcome measure. Comparing mortality rates across hospitals requires adjustment for patient risk before making inferences about quality of care based on patient outcomes; it is therefore essential to have well-performing severity measures available. The aim of this study was to evaluate the ability of the All Patient Refined DRG (APR-DRG) system to predict inpatient mortality for congestive heart failure, myocardial infarction, pneumonia and ischemic stroke. Administrative records were used in this analysis. We used two statistical methods to assess the ability of the APR-DRG to predict mortality: the area under the receiver operating characteristic curve (referred to as the c-statistic) and the Hosmer-Lemeshow test. The database for the study included 19,212 discharges for stroke, pneumonia, myocardial infarction and congestive heart failure from fifteen hospitals participating in the Italian APR-DRG Project. A multivariate analysis was performed to predict mortality for each condition under study, using age, sex and APR-DRG risk-of-mortality subclass as independent variables. Inpatient mortality rates ranged from 9.7% (pneumonia) to 16.7% (stroke). Model discrimination, calculated using the c-statistic, was 0.91 for myocardial infarction, 0.68 for stroke, 0.78 for pneumonia and 0.71 for congestive heart failure. Model calibration, assessed using the Hosmer-Lemeshow test, was quite good. The performance of the APR-DRG scheme when used on Italian hospital activity records is similar to that reported in the literature, and it seems to improve when age and sex are added to the model. The APR-DRG system does not completely capture the effects of these variables. In some cases, the better performance might be due to the inclusion of specific complications in the risk-of-mortality subclass assignment.
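A sketch of the two validation statistics named above on synthetic predictions, assuming scikit-learn and SciPy are available: the c-statistic (area under the ROC curve) and a decile-based Hosmer-Lemeshow test. The outcome and predicted probabilities are simulated, not APR-DRG data.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
p_hat = rng.uniform(0.01, 0.6, size=1000)   # model-predicted death risk
died = rng.binomial(1, p_hat)               # outcomes consistent with the model

print("c-statistic: %.3f" % roc_auc_score(died, p_hat))

# Hosmer-Lemeshow: compare observed vs expected deaths within deciles of risk.
edges = np.percentile(p_hat, np.arange(10, 100, 10))
groups = np.digitize(p_hat, edges)
H = 0.0
for g in range(10):
    m = groups == g
    obs, exp, n = died[m].sum(), p_hat[m].sum(), m.sum()
    H += (obs - exp) ** 2 / (exp * (1 - exp / n))
print("Hosmer-Lemeshow p = %.3f" % chi2.sf(H, df=8))   # df = groups - 2
```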
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuravlev, B. V., E-mail: zhurav@ippe.ru; Lychagin, A. A.; Titarenko, N. N.
The spectra of neutrons from the (p, n) reactions on 47Ti, 48Ti, 49Ti, 53Cr, and 54Cr nuclei were measured in the proton-energy range 7-11 MeV. The measurements were performed with the aid of a fast-neutron spectrometer by the time-of-flight method over the base of the EGP-15 tandem accelerator of the Institute for Physics and Power Engineering (IPPE, Obninsk). Owing to a high resolution and a high stability of the time-of-flight spectrometer used, low-lying discrete levels could be identified reliably along with a continuum section of neutron spectra. An analysis of measured data was performed within the statistical equilibrium and preequilibrium models of nuclear reactions. The relevant calculations were performed by using the exact formalism of Hauser-Feshbach statistical theory supplemented with the generalized model of a superfluid nucleus, the back-shifted Fermi gas model, and the Gilbert-Cameron composite formula for the nuclear level density. The nuclear level densities for 47V, 48V, 49V, 53Mn, and 54Mn were determined along with their energy dependences and model parameters. The results are discussed together with available experimental data and recommendations of model systematics.
Lindholm, Henrik; Egels-Zandén, Niklas; Rudén, Christina
2016-10-01
In managing chemical risks to the environment and human health in supply chains, voluntary corporate social responsibility (CSR) measures, such as auditing code of conduct compliance, play an important role. The objective was to examine how well suppliers' chemical health and safety performance complies with buyers' CSR policies and whether audited factories improve their performance. CSR audits (n = 288) of garment factories conducted by Fair Wear Foundation (FWF), an independent non-profit organization, were analyzed using descriptive statistics and statistical modeling. Forty-three per cent of factories did not comply with the FWF code of conduct, i.e. received remarks on chemical safety. Only among factories audited 10 or more times was there a significant increase in the number of factories receiving no remarks. Compliance with chemical safety requirements in garment supply chains is low, and auditing is statistically correlated with improvements only at factories that have undergone numerous audits.
Latham, Daniel T; Hill, Grant M; Petray, Clayre K
2013-04-01
The purpose of this study was to assess whether a treadmill mile is an acceptable FitnessGram Test substitute for the traditional one-mile run for middle school boys and girls. Peak heart rate and perceived physical exertion of the participants were also measured to assess students' effort. 48 boys and 40 girls participated, with approximately 85% classified as Hispanic. Boys' mean time for the traditional one-mile run was statistically significantly faster, and their peak heart rate and perceived exertion significantly higher, than for the treadmill mile. Girls' treadmill mile times were not statistically significantly different from the traditional one-mile run. There were no statistically significant differences for girls' peak heart rate or perceived exertion. The results suggest that providing middle school students a choice of completing the FitnessGram mile run in either the traditional one-mile run or treadmill one-mile format may positively affect performance.
Structural texture similarity metrics for image analysis and retrieval.
Zujovic, Jana; Pappas, Thrasyvoulos N; Neuhoff, David L
2013-07-01
We develop new metrics for texture similarity that account for human visual perception and the stochastic nature of textures. The metrics rely entirely on local image statistics and allow substantial point-by-point deviations between textures that, according to human judgment, are essentially identical. The proposed metrics extend the ideas of structural similarity and are guided by research in texture analysis-synthesis. They are implemented using a steerable filter decomposition and incorporate a concise set of subband statistics, computed globally or in sliding windows. We conduct systematic tests to investigate metric performance in the context of "known-item search," the retrieval of textures that are "identical" to the query texture. This eliminates the need for cumbersome subjective tests, thus enabling comparisons with human performance on a large database. Our experimental results indicate that the proposed metrics outperform the peak signal-to-noise ratio (PSNR), the structural similarity metric (SSIM) and its variations, as well as state-of-the-art texture classification metrics, using standard statistical measures.
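For reference, the baseline point-by-point metrics that the proposed structural texture metrics are compared against, computed with scikit-image on a synthetic texture pair; a small spatial shift illustrates why such metrics under-reward perceptually identical textures.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
texture_a = rng.random((64, 64))              # stand-in query texture
texture_b = np.roll(texture_a, 3, axis=1)     # perceptually "identical" shifted copy

psnr = peak_signal_noise_ratio(texture_a, texture_b, data_range=1.0)
ssim = structural_similarity(texture_a, texture_b, data_range=1.0)
# Point-by-point comparisons punish the shift heavily even though the two
# textures match visually, the failure mode the structural metrics address.
print("PSNR %.1f dB, SSIM %.3f" % (psnr, ssim))
```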
Phillips, David E; AbouZahr, Carla; Lopez, Alan D; Mikkelsen, Lene; de Savigny, Don; Lozano, Rafael; Wilmoth, John; Setel, Philip W
2015-10-03
In this Series paper, we examine whether well functioning civil registration and vital statistics (CRVS) systems are associated with improved population health outcomes. We present a conceptual model connecting CRVS to wellbeing, and describe an ecological association between CRVS and health outcomes. The conceptual model posits that the legal identity that civil registration provides to individuals is key to access entitlements and services. Vital statistics produced by CRVS systems provide essential information for public health policy and prevention. These outcomes benefit individuals and societies, including improved health. We use marginal linear models and lag-lead analysis to measure ecological associations between a composite metric of CRVS performance and three health outcomes. Results are consistent with the conceptual model: improved CRVS performance coincides with improved health outcomes worldwide in a temporally consistent manner. Investment to strengthen CRVS systems is not only an important goal for individuals and societies, but also a development imperative that is good for health. Copyright © 2015 Elsevier Ltd. All rights reserved.
Statistical Model Selection for TID Hardness Assurance
NASA Technical Reports Server (NTRS)
Ladbury, R.; Gorelick, J. L.; McClure, S.
2010-01-01
Radiation Hardness Assurance (RHA) methodologies against Total Ionizing Dose (TID) degradation impose rigorous statistical treatments for data from a part's Radiation Lot Acceptance Test (RLAT) and/or its historical performance. However, no similar methods exist for using "similarity" data - that is, data for similar parts fabricated in the same process as the part under qualification. This is despite the greater difficulty and potential risk in interpreting similarity data. In this work, we develop methods to disentangle part-to-part, lot-to-lot and part-type-to-part-type variation. The methods we develop apply not just to qualification decisions, but also to quality control and the detection of process changes and other "out-of-family" behavior. We begin by discussing the data used in the study and the challenges of developing a statistic that provides a meaningful measure of degradation across multiple part types, each with its own performance specifications. We then develop analysis techniques and apply them to the different data sets.
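A toy sketch of the variance disentangling mentioned above, assuming a balanced one-way random-effects layout with lots as groups; the ANOVA moment estimators split the total spread into part-to-part and lot-to-lot components. The data are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
lots, parts = 6, 8
lot_effect = rng.normal(0, 2.0, size=(lots, 1))      # lot-to-lot spread
tid = 10 + lot_effect + rng.normal(0, 1.0, size=(lots, parts))  # degradation stat

grand = tid.mean()
msb = parts * ((tid.mean(axis=1) - grand) ** 2).sum() / (lots - 1)   # between-lot MS
msw = ((tid - tid.mean(axis=1, keepdims=True)) ** 2).sum() / (lots * (parts - 1))
var_lot = max((msb - msw) / parts, 0.0)              # ANOVA estimator of lot variance
print("part-to-part var %.2f, lot-to-lot var %.2f" % (msw, var_lot))
```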
NASA Technical Reports Server (NTRS)
Stutzman, Warren L.; Safaai-Jazi, A.; Pratt, Timothy; Nelson, B.; Laster, J.; Ajaz, H.
1993-01-01
Virginia Tech has performed a comprehensive propagation experiment using the Olympus satellite beacons at 12.5, 19.77, and 29.66 GHz (which we refer to as 12, 20, and 30 GHz). Four receive terminals were designed and constructed, one terminal at each frequency plus a portable one with 20 and 30 GHz receivers for microscale and scintillation studies. Total power radiometers were included in each terminal in order to set the clear air reference level for each beacon and also to predict path attenuation. More details on the equipment and the experiment design are found elsewhere. Statistical results for one year of data collection were analyzed. In addition, the following studies were performed: a microdiversity experiment in which two closely spaced 20 GHz receivers were used; a comparison of total power and Dicke switched radiometer measurements, frequency scaling of scintillations, and adaptive power control algorithm development. Statistical results are reported.
NASA Astrophysics Data System (ADS)
Iinuma, Masataka; Suzuki, Yutaro; Nii, Taiki; Kinoshita, Ryuji; Hofmann, Holger F.
2016-03-01
In general, it is difficult to evaluate measurement errors when the initial and final conditions of the measurement make it impossible to identify the correct value of the target observable. Ozawa proposed a solution based on the operator algebra of observables which has recently been used in experiments investigating the error-disturbance trade-off of quantum measurements. Importantly, this solution makes surprisingly detailed statements about the relations between measurement outcomes and the unknown target observable. In the present paper, we investigate this relation by performing a sequence of two measurements on the polarization of a photon, so that the first measurement commutes with the target observable and the second measurement is sensitive to a complementary observable. While the initial measurement can be evaluated using classical statistics, the second measurement introduces the effects of quantum correlations between the noncommuting physical properties. By varying the resolution of the initial measurement, we can change the relative contribution of the nonclassical correlations and identify their role in the evaluation of the quantum measurement. It is shown that the most striking deviation from classical expectations is obtained at the transition between weak and strong measurements, where the competition between different statistical effects results in measurement values well outside the range of possible eigenvalues.
Three-dimensional accuracy of different correction methods for cast implant bars
Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom
2014-01-01
PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or the additional casting technique. Three-dimensional evaluation including horizontal, vertical, and twisting measurements was based on measurement and comparison of (1) gap distances at the right abutment replica-gold cylinder interface on the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step for the three groups, using contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the 5% significance level. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistically significant difference among the three techniques in horizontal, vertical, and axial errors, but the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error. The laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and may be a technique-sensitive method. PMID:24605205
NASA Astrophysics Data System (ADS)
Gorecki, A.; Brambilla, A.; Moulin, V.; Gaborieau, E.; Radisson, P.; Verger, L.
2013-11-01
Multi-energy (ME) detectors are becoming a serious alternative to classical dual-energy sandwich (DE-S) detectors for X-ray applications such as medical imaging or explosive detection. They can use the full X-ray spectrum of irradiated materials, rather than having only low- and high-energy measurements, which may be mixed. In this article, we compare both simulated and real industrial detection systems, operating at a high count rate, independently of the dimensions of the measurements and independently of any signal processing methods. Simulations or prototypes of similar detectors have already been compared (see [1] for instance), but never independently of estimation methods and never with real detectors. We have simulated both an ME detector made of CdTe - based on the characteristics of the MultiX ME100 - and a DE-S detector - based on the characteristics of Detection Technology's X-Card 1.5-64DE model. These detectors were compared to a perfect spectroscopic detector and an optimal DE-S detector. For comparison purposes, two approaches were investigated: the first addresses how to distinguish signals, while the second relates to identifying materials. Performance criteria were defined and comparisons were made over a range of material thicknesses and with different photon statistics. Experimental measurements in a specific configuration were acquired to check the simulations. Results showed good agreement between the ME simulation and the ME100 detector. Both criteria appear to be equivalent, and the ME detector performs 3.5 times better than the DE-S detector with the same photon statistics, based on simulations and experimental measurements. Regardless of the photon statistics, ME detectors appeared more efficient than DE-S detectors for all material thicknesses between 1 and 9 cm when measuring plastics with an attenuation signature close to that of explosive materials. This translates into an improved false detection rate (FDR): DE-S detectors have an FDR 2.87±0.03-fold higher than ME detectors for 4 cm of POM with 20 000 incident photons, when identifications are screened against a two-material base.
Quatember, R; Maly, J
1980-11-15
200 test persons took part in a double-blind experiment involving medication with K. H. 3 (Schwarzhaupt). The measurements, made with nine apparatus at the psychophysiological measurement level, led to the following outcomes: (1) an increase in the psychomotor tempo of the dominant hand after 5 months of K. H. 3 application (motor performance series); (2) a reduction of reaction errors, determined by a vigilance measurement instrument, after 5 months of treatment with K. H. 3 (evidenced by an increase in monotony resistance and continuous attention); (3) an improvement of multiple-choice reactions (simultaneous reaction capacity) to optic, acoustic and orientation-linked stimuli (fewer false and delayed reactions); (4) an increase in visual attentiveness and visual short-term memory after 5 months of K. H. 3 medication (measured by the Cognitron concentration measurement device). No statistically significant differences in the investigated performance parameters were found between the K. H. 3 and placebo groups after 3 months of K. H. 3 application. The results of the present study, involving measurements at the psychophysiological measurement level, are compared with data from a previous study.
Estimating the Probability of Traditional Copying, Conditional on Answer-Copying Statistics.
Allen, Jeff; Ghattas, Andrew
2016-06-01
Statistics for detecting copying on multiple-choice tests produce p values measuring the probability of a value at least as large as that observed, under the null hypothesis of no copying. The posterior probability of copying is arguably more relevant than the p value, but cannot be derived from Bayes' theorem unless the population probability of copying and probability distribution of the answer-copying statistic under copying are known. In this article, the authors develop an estimator for the posterior probability of copying that is based on estimable quantities and can be used with any answer-copying statistic. The performance of the estimator is evaluated via simulation, and the authors demonstrate how to apply the formula using actual data. Potential uses, generalizability to other types of cheating, and limitations of the approach are discussed.
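A hedged illustration of the Bayes-theorem relation underlying the estimator: the posterior probability of copying given an observed statistic, for an assumed copying prevalence and assumed null and copying distributions (the quantities the authors estimate from data rather than assume).

```python
from scipy.stats import norm

prior = 0.02                        # assumed population rate of copying
z = 3.5                             # observed answer-copying statistic (z-scale)

f0 = norm.pdf(z, loc=0, scale=1)    # density under no copying (null)
f1 = norm.pdf(z, loc=4, scale=1.5)  # assumed density under copying

# Bayes' theorem: P(copying | statistic value)
posterior = prior * f1 / (prior * f1 + (1 - prior) * f0)
print("posterior probability of copying: %.3f" % posterior)
```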
α -induced reactions on 115In: Cross section measurements and statistical model analysis
NASA Astrophysics Data System (ADS)
Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.
2018-05-01
Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, several different parameter sets exist for the α+nucleus optical potential, and large deviations, sometimes reaching an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron-capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between E_c.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between E_c.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also constrained by the data, although there is no unique best-fit combination. Conclusions: The best-fit calculations allow us to extrapolate the low-energy (α,γ) cross section of 115In to the astrophysical Gamow window with reasonable uncertainties. However, still further improvements of the α-nucleus potential are required for a global description of elastic (α,α) scattering and α-induced reactions in a wide range of masses and energies.
Power, S; Mirza, M; Thakorlal, A; Ganai, B; Gavagan, L D; Given, M F; Lee, M J
2015-06-01
This prospective pilot study was undertaken to evaluate the feasibility and effectiveness of using a radiation absorbing shield to reduce operator dose from scatter during lower limb endovascular procedures. A commercially available bismuth shield system (RADPAD) was used. Sixty consecutive patients undergoing lower limb angioplasty were included. Thirty procedures were performed without the RADPAD (control group) and thirty with the RADPAD (study group). Two separate methods were used to measure dose to a single operator. Thermoluminescent dosimeter (TLD) badges were used to measure hand, eye, and unshielded body dose. A direct dosimeter with digital readout was also used to measure eye and unshielded body dose. To allow for variation between control and study groups, dose per unit time was calculated. TLD results demonstrated a significant reduction in median body dose per unit time for the study group compared with controls (p = 0.001), corresponding to a mean dose reduction rate of 65 %. Median eye and hand dose per unit time were also reduced in the study group compared with control group, however, this was not statistically significant (p = 0.081 for eye, p = 0.628 for hand). Direct dosimeter readings also showed statistically significant reduction in median unshielded body dose rate for the study group compared with controls (p = 0.037). Eye dose rate was reduced for the study group but this was not statistically significant (p = 0.142). Initial results are encouraging. Use of the shield resulted in a statistically significant reduction in unshielded dose to the operator's body. Measured dose to the eye and hand of operator were also reduced but did not reach statistical significance in this pilot study.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuravlev, B. V., E-mail: zhurav@ippe.ru; Lychagin, A. A., E-mail: Lychagin1@yandex.ru; Titarenko, N. N.
Level densities and their energy dependences for nuclei in the mass range 47 ≤ A ≤ 59 were determined from the results obtained by measuring neutron-evaporation spectra in the respective (p, n) reactions. The spectra of neutrons originating from the (p, n) reactions on 47Ti, 48Ti, 49Ti, 53Cr, 54Cr, 57Fe, and 59Co nuclei were measured in the proton-energy range of 7-11 MeV. These measurements were performed with the aid of a fast-neutron spectrometer by the time-of-flight method over the base of the EGP-15 pulsed tandem accelerator installed at the Institute for Physics and Power Engineering (Obninsk, Russia). A high resolution of the spectrometer and its stability in the time of flight made it possible to identify reliably discrete low-lying levels along with the continuum part of neutron spectra. Our measured data were analyzed within the statistical equilibrium and preequilibrium models of nuclear reactions. The respective calculations were performed with the aid of the Hauser-Feshbach formalism of statistical theory supplemented with the generalized model of a superfluid nucleus, the back-shifted Fermi gas model, and the Gilbert-Cameron composite formula for nuclear level densities. Nuclear level densities for 47V, 48V, 49V, 53Mn, 54Mn, 57Co, and 59Ni and their energy dependences were determined. The results are discussed and compared with available experimental data and with recommendations of model-based systematics.
Grigoriadis, Themos; Giannoulis, George; Zacharakis, Dimitris; Protopapas, Athanasios; Cardozo, Linda; Athanasiou, Stavros
2016-03-01
The purpose of the study was to examine whether a test performed during urodynamics, the "1-3-5 cough test", could determine the severity of urodynamic stress incontinence (USI). We included women referred for urodynamics who were diagnosed with USI. The "1-3-5 cough test" was performed to grade the severity of USI at the completion of filling cystometry. A diagnosis of "severe", "moderate" or "mild" USI was given if urine leakage was observed after one, three or five consecutive coughs respectively. We examined the associations between grades of USI severity and measures of subjective perception of stress urinary incontinence (SUI): International Consultation of Incontinence Modular Questionnaire-Female Lower Urinary Tract Symptom (ICIQ-FLUTS), King's Health Questionnaire (KHQ), Urinary Distress Inventory-6 (UDI-6), Urinary Impact Questionnaire-7 (UIQ-7). A total of 1,181 patients completed the ICIQ-FLUTS and KHQ and 612 completed the UDI-6 and UIQ-7 questionnaires. There was a statistically significant association of higher grades of USI severity with higher scores on the incontinence domain of the ICIQ-FLUTS. The scores of the UDI-6, UIQ-7 and all KHQ domains (with the exception of general health perception and personal relationships) had statistically significantly larger mean values for higher USI severity grades. Groups of higher USI severity had statistically significant associations with higher scores on most of the subjective measures of SUI. Severity of USI, as defined by the "1-3-5 cough test", was associated with the severity of subjective measures of SUI. This test may be a useful tool for the objective interpretation of patients with SUI who undergo urodynamics.
The Muon g-2 Experiment at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gohn, Wesley
A new measurement of the anomalous magnetic moment of the muon, a_μ ≡ (g-2)/2, will be performed at the Fermi National Accelerator Laboratory with data taking beginning in 2017. The most recent measurement, performed at Brookhaven National Laboratory (BNL) and completed in 2001, shows a 3.5 standard deviation discrepancy with the standard model value of a_μ. The new measurement will accumulate 21 times the BNL statistics using upgraded magnet, detector, and storage ring systems, enabling a measurement of a_μ to 140 ppb, a factor of 4 improvement in the uncertainty of the previous measurement. This improvement in precision, combined with recent improvements in our understanding of the QCD contributions to the muon g-2, could yield a discrepancy from the standard model greater than 7σ if the central value is the same as that measured by the BNL experiment, which would be a clear indication of new physics.
This study assessed the pollutant emission offset potential of distributed grid-connected photovoltaic (PV) power systems. Computer-simulated performance results were utilized for 211 PV systems located across the U.S. The PV systems' monthly electrical energy outputs were based ...
Some Psychometric and Design Implications of Game-Based Learning Analytics
ERIC Educational Resources Information Center
Gibson, David; Clarke-Midura, Jody
2013-01-01
The rise of digital game and simulation-based learning applications has led to new approaches in educational measurement that take account of patterns in time, high resolution paths of action, and clusters of virtual performance artifacts. The new approaches, which depart from traditional statistical analyses, include data mining, machine…
Systems Analysis of Alternative Architectures for Riverine Warfare in 2010
2006-12-01
propose system of systems improvements for the RF in 2010. With the RF currently working to establish a command structure, train and equip its forces...opposing force. Measures of performance such as time to first enemy detection and loss exchange ratio were collected from MANA. A detailed statistical
Mental Representation of Circuit Diagrams: Individual Differences in Procedural Knowledge.
1983-12-01
operation. One may know, for example, that a transformer serves to change the voltage of an AC supply, that a particular combination of transistors acts as a...and error measures with respect to overall performance. Even if a large sample could provide statistically significant differences between skill
Associative Adjustments to Reduce Errors in Document Searching.
ERIC Educational Resources Information Center
Bryant, Edward C.; And Others
Associative adjustments to a document file are considered as a means for improving retrieval. A theoretical investigation of the statistical properties of a generalized mismatch measure was carried out, and improvements in retrieval resulting from performing associative regression adjustments on the data file were examined both from the theoretical and…
The Effectiveness of Course Web Sites in Higher Education: An Exploratory Study.
ERIC Educational Resources Information Center
Comunale, Christie L.; Sexton, Thomas R.; Voss, Diana J. Pedagano
2002-01-01
Describes an exploratory study of the educational effectiveness of course Web sites among undergraduate accounting students and graduate students in business statistics. Measured Web site visit frequency, usefulness of each site feature, and the impacts of Web sites on perceived learning and course performance. (Author/LRW)
Evaluating Teachers and Schools Using Student Growth Models
ERIC Educational Resources Information Center
Schafer, William D.; Lissitz, Robert W.; Zhu, Xiaoshu; Zhang, Yuan; Hou, Xiaodong; Li, Ying
2012-01-01
Interest in Student Growth Modeling (SGM) and Value Added Modeling (VAM) arises from educators concerned with measuring the effectiveness of teaching and other school activities through changes in student performance as a companion and perhaps even an alternative to status. Several formal statistical models have been proposed for year-to-year…
Performance Evaluation of New-Generation Pulse Oximeters in the NICU: Observational Study.
Nizami, Shermeen; Greenwood, Kim; Barrowman, Nick; Harrold, JoAnn
2015-09-01
This crossover observational study compares the data characteristics and performance of new-generation Nellcor OXIMAX and Masimo SET SmartPod pulse oximeter technologies. The study was conducted independently of either original equipment manufacturer (OEM), across eleven preterm infants in a Neonatal Intensive Care Unit (NICU). The SmartPods were integrated with Dräger Infinity Delta monitors. The Delta monitor measured the heart rate (HR) using an independent electrocardiogram sensor, and the two SmartPods collected arterial oxygen saturation (SpO2) and pulse rate (PR). All patient data were non-Gaussian. Nellcor PR showed a higher correlation with the HR than Masimo PR. The statistically significant difference found in their median values (1% for SpO2, 1 bpm for PR) was deemed clinically insignificant. SpO2 alarms generated by both SmartPods were observed and categorized for performance evaluation. Results for sensitivity, positive predictive value, accuracy, and false alarm rate were Nellcor (80.3%, 50%, 44.5%, 50%) and Masimo (72.2%, 48.2%, 40.6%, 51.8%), respectively. These metrics were not statistically significantly different between the two pulse oximeters. Despite claims by OEMs, both pulse oximeters exhibited high false alarm rates, with no statistically or clinically significant difference in performance. These findings have a direct impact on alarm fatigue in the NICU. Performance evaluation studies can also impact medical device purchase decisions made by hospital administrators.
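The alarm metrics quoted above follow standard confusion-matrix definitions; the sketch below assumes the false alarm rate is reported as the complement of PPV (the paired values suggest this), and the example counts are chosen only to reproduce the Nellcor percentages, not taken from the study:

```python
def alarm_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard alarm-performance metrics from confusion-matrix counts.

    false_alarm_rate is taken as FP / (TP + FP), i.e. the fraction of
    raised alarms that are false, the complement of PPV; this matches
    the paired values above (e.g. Nellcor PPV 50% vs. FAR 50%).
    """
    total = tp + fp + fn + tn
    return {
        "sensitivity": tp / (tp + fn),       # true events alarmed on
        "ppv": tp / (tp + fp),               # alarms that were real
        "accuracy": (tp + tn) / total,
        "false_alarm_rate": fp / (tp + fp),
    }

# Counts chosen to reproduce the Nellcor figures (illustrative only):
print(alarm_metrics(tp=53, fp=53, fn=13, tn=0))
# -> sensitivity 0.803, ppv 0.50, accuracy 0.445, false_alarm_rate 0.50
```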
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT images is investigated and measured. With the measured parameters, such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
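The paper's adaptive algorithm is not reproduced here; below is a minimal stand-in using Chambolle total variation with a regularisation weight tied to an estimated noise level (the scaling factor k and the synthetic multiplicative speckle model are assumptions of this sketch):

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, estimate_sigma

def adaptive_tv_denoise(oct_image: np.ndarray, k: float = 2.0) -> np.ndarray:
    """Rough stand-in for speckle reduction by adaptive total variation.

    The TV weight is tied to a wavelet-based estimate of the noise level,
    mimicking the idea of adapting the restoration to measured speckle
    statistics; k is a tuning factor assumed by this sketch.
    """
    img = oct_image.astype(float)
    img = (img - img.min()) / (np.ptp(img) + 1e-12)  # normalise to [0, 1]
    sigma = estimate_sigma(img)                      # estimated noise std
    return denoise_tv_chambolle(img, weight=k * sigma)

# Example on synthetic multiplicative speckle (illustrative only):
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 128), (128, 1))
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
smoothed = adaptive_tv_denoise(speckled)
```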
On the validity of time-dependent AUC estimators.
Schmid, Matthias; Kestler, Hans A; Potapov, Sergej
2015-01-01
Recent developments in molecular biology have led to the massive discovery of new marker candidates for the prediction of patient survival. To evaluate the predictive value of these markers, statistical tools for measuring the performance of survival models are needed. We consider estimators of discrimination measures, which are a popular approach to evaluate survival predictions in biomarker studies. Estimators of discrimination measures are usually based on regularity assumptions such as the proportional hazards assumption. Based on two sets of molecular data and a simulation study, we show that violations of the regularity assumptions may lead to over-optimistic estimates of prediction accuracy and may therefore result in biased conclusions regarding the clinical utility of new biomarkers. In particular, we demonstrate that biased medical decision making is possible even if statistical checks indicate that all regularity assumptions are satisfied.
Lightness, chroma, and hue distributions in natural teeth measured by a spectrophotometer.
Pustina-Krasniqi, Teuta; Shala, Kujtim; Staka, Gloria; Bicaj, Teuta; Ahmedi, Enis; Dula, Linda
2017-01-01
The aim of the study was to analyze the distribution of the color parameters lightness (L*), chroma (C), hue (H), a*, and b* in the intercanine sector of the maxilla. Patients' tooth color measurements were performed using an intraoral spectrophotometer, VITA Easyshade® (VITA Zahnfabrik H. Rauter GmbH and Co. KG, Bad Sackingen, Germany). The measurements were made in 255 subjects in the intercanine sector of the maxilla. The mean values for the group of 255 subjects were as follows: L*, a*, b*, C, and H of 81.6, 0.67, 21.6, 21.7, and 92.7, respectively. Statistically significant differences in L*, a*, b*, C, and H were found among central incisors, lateral incisors, and canines (F = 206.27, P < 0.001). The statistical analysis determined that there are significant color differences between the teeth of the intercanine sector, and these differences are also clinically significant.
Al-Khalid, Hamad; Alaskari, Ayman; Oraby, Samy
2011-01-01
Hardness homogeneity of commonly used structural ferrous and nonferrous engineering materials is of vital importance at the design stage; therefore, reliable information regarding the homogeneity of material properties should be validated and any deviation should be addressed. In the current study the hardness variation over a wide spectrum of radial locations of some ferrous and nonferrous structural engineering materials was investigated. Measurements were performed over both faces (cross-sections) of each stock bar according to a pre-specified stratified design, ensuring coverage of the entire area in both the radial and circumferential directions. Additionally, the credibility of the apparatus and measuring procedures was examined through a statistically based calibration process using the hardness reference block. Statistical and response surface graphical analyses were used to examine the nature, adequacy and significance of the measured hardness values. Calibration of the apparatus reference block proved the reliability of the measuring system, where no strong evidence was found against the stochastic nature of hardness measures over the various stratified locations. Outlier elimination procedures proved to be beneficial only at a few measured points. Hardness measurements showed a dispersion domain that is within the acceptable confidence interval. For AISI 4140 and AISI 1020 steels, hardness was found to have a slightly decreasing trend as the diameter is reduced, while the opposite behavior was observed for AA 6082 aluminum alloy. However, no definite significant behavior was noticed regarding the effect of the sector sequence (circumferential direction). PMID:28817030
The neural correlates of statistical learning in a word segmentation task: An fMRI study
Karuza, Elisabeth A.; Newport, Elissa L.; Aslin, Richard N.; Starling, Sarah J.; Tivarus, Madalina E.; Bavelier, Daphne
2013-01-01
Functional magnetic resonance imaging (fMRI) was used to assess neural activation as participants learned to segment continuous streams of speech containing syllable sequences varying in their transitional probabilities. Speech streams were presented in four runs, each followed by a behavioral test to measure the extent of learning over time. Behavioral performance indicated that participants could discriminate statistically coherent sequences (words) from less coherent sequences (partwords). Individual rates of learning, defined as the difference in ratings for words and partwords, were used as predictors of neural activation to ask which brain areas showed activity associated with these measures. Results showed significant activity in the pars opercularis and pars triangularis regions of the left inferior frontal gyrus (LIFG). The relationship between these findings and prior work on the neural basis of statistical learning is discussed, and parallels to the frontal/subcortical network involved in other forms of implicit sequence learning are considered. PMID:23312790
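The transitional probabilities driving such word segmentation are straightforward to compute; a minimal sketch follows (the syllable inventory and boundary threshold are illustrative, not the study's stimuli):

```python
import random
from collections import Counter

def transitional_probabilities(stream):
    """TP(y|x) = count(xy) / count(x) over a syllable stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    syll_counts = Counter(stream[:-1])
    return {(x, y): c / syll_counts[x] for (x, y), c in pair_counts.items()}

def segment_at_dips(stream, tps, threshold=0.5):
    """Insert a word boundary wherever TP drops below the threshold."""
    words, word = [], [stream[0]]
    for x, y in zip(stream, stream[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(word))
            word = []
        word.append(y)
    words.append("".join(word))
    return words

# Toy stream built from a three-'word' lexicon (illustrative only):
random.seed(0)
lexicon = [["tu", "pi", "ro"], ["go", "la", "bu"], ["da", "ko", "ti"]]
stream = [s for _ in range(60) for s in random.choice(lexicon)]
tps = transitional_probabilities(stream)
print(sorted(set(segment_at_dips(stream, tps))))
# Within-word TPs are 1.0, cross-word TPs ~1/3, so the dips recover
# exactly the three 'words'.
```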
A Statistical Representation of Pyrotechnic Igniter Output
NASA Astrophysics Data System (ADS)
Guo, Shuyue; Cooper, Marcia
2017-06-01
The output of simplified pyrotechnic igniters for research investigations is statistically characterized by monitoring the post-ignition external flow field with Schlieren imaging. Unique to this work is a detailed quantification of all measurable manufacturing parameters (e.g., bridgewire length, charge cavity dimensions, powder bed density) and associated shock-motion variability in the tested igniters. To demonstrate experimental precision of the recorded Schlieren images and developed image processing methodologies, commercial exploding bridgewires using wires of different parameters were tested. Finally, a statistically-significant population of manufactured igniters were tested within the Schlieren arrangement resulting in a characterization of the nominal output. Comparisons between the variances measured throughout the manufacturing processes and the calculated output variance provide insight into the critical device phenomena that dominate performance. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's NNSA under contract DE-AC04-94AL85000.
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
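A minimal sketch of the supervised setting described above, posing detection as classification of measurement vectors (the linear measurement model, attack pattern, and SVM choice are assumptions of this illustration, not the paper's exact configuration):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-in for grid measurements z = Hx + noise, with a
# sparse additive attack injected into the attacked samples.
n, m, k = 1000, 20, 5                       # samples, measurements, states
H = rng.normal(size=(m, k))
states = rng.normal(size=(n, k))
Z = states @ H.T + 0.1 * rng.normal(size=(n, m))
labels = rng.integers(0, 2, size=n)          # 1 = attacked
attack = np.zeros((n, m))
attack[labels == 1, :3] = rng.normal(2.0, 0.5, size=((labels == 1).sum(), 3))
Z += attack

# Classify measurements as secure vs. attacked.
Z_tr, Z_te, y_tr, y_te = train_test_split(Z, labels, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(Z_tr, y_tr)
print(f"held-out accuracy: {clf.score(Z_te, y_te):.2f}")
```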
Factorial analysis of trihalomethanes formation in drinking water.
Chowdhury, Shakhawat; Champagne, Pascale; McLellan, P James
2010-06-01
Disinfection of drinking water reduces pathogenic infection, but may pose risks to human health through the formation of disinfection byproducts. The effects of different factors on the formation of trihalomethanes were investigated using a statistically designed experimental program, and a predictive model for trihalomethanes formation was developed. Synthetic water samples with different factor levels were produced, and trihalomethanes concentrations were measured. A replicated fractional factorial design with center points was performed, and significant factors were identified through statistical analysis. A second-order trihalomethanes formation model was developed from 92 experiments, and the statistical adequacy was assessed through appropriate diagnostics. This model was validated using additional data from the Drinking Water Surveillance Program database and was applied to the Smiths Falls water supply system in Ontario, Canada. The model predictions were correlated strongly to the measured trihalomethanes, with correlations of 0.95 and 0.91, respectively. The resulting model can assist in analyzing risk-cost tradeoffs in the design and operation of water supply systems.
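A second-order model of this kind can be sketched as a quadratic response surface with interaction terms (the factor names, levels, and coefficients below are illustrative placeholders, not the study's data):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)

# Four coded factors (e.g. chlorine dose, TOC, temperature, pH) at
# levels in [-1, 1]; 92 runs to mirror the experiment count above.
X = rng.uniform(-1, 1, size=(92, 4))
true = 60 + 8 * X[:, 0] + 12 * X[:, 1] + 5 * X[:, 2] * X[:, 3] + 4 * X[:, 1] ** 2
y = true + rng.normal(0, 2, size=92)         # 'measured' THM response

# Second-order response surface: linear, interaction, and squared terms.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(f"R^2 on fitting data: {model.score(X, y):.2f}")
```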
Wavelet methodology to improve single unit isolation in primary motor cortex cells
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A.
2016-01-01
The proper isolation of action potentials recorded extracellularly from neural tissue is an active area of research in the fields of neuroscience and biomedical signal processing. This paper presents an isolation methodology for neural recordings using the wavelet transform (WT), a statistical thresholding scheme, and the principal component analysis (PCA) algorithm. The effectiveness of five different mother wavelets was investigated: biorthogonal, Daubechies, discrete Meyer, symmetric, and Coifman; along with three different wavelet coefficient thresholding schemes: fixed form threshold, Stein's unbiased estimate of risk, and minimax; and two different thresholding rules: soft and hard thresholding. The signal quality was evaluated using three different statistical measures: mean-squared error, root-mean squared, and signal-to-noise ratio. The clustering quality was evaluated using two different statistical measures: isolation distance and L-ratio. This research shows that the selection of the mother wavelet has a strong influence on the clustering and isolation of single unit neural activity, with the Daubechies 4 wavelet and minimax thresholding scheme performing the best. PMID:25794461
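A simplified version of the pipeline: wavelet denoising of spike snippets followed by PCA feature extraction (this sketch uses the universal fixed form threshold for brevity, whereas the paper found minimax thresholding best; the snippets are synthetic placeholders):

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4",
                    level: int = 3) -> np.ndarray:
    """Soft-threshold the detail coefficients of a db4 decomposition.

    Uses the universal (fixed form) threshold sqrt(2 log n) * sigma with
    the usual MAD noise estimate from the finest detail level.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # MAD estimate
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Denoise a batch of spike snippets, then project to PCA space for
# clustering (snippets here are random placeholders, not recordings).
rng = np.random.default_rng(3)
snippets = rng.normal(size=(200, 64))
cleaned = np.vstack([wavelet_denoise(s) for s in snippets])
features = PCA(n_components=3).fit_transform(cleaned)
```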
Statistical assessment of optical phase fluctuations through turbulent mixing layers
NASA Astrophysics Data System (ADS)
Gardner, Patrick J.; Roggemann, Michael C.; Welsh, Byron M.; Bowersox, Rodney D.
1995-09-01
A lateral shearing interferometer is used to measure the slope of perturbed wavefronts after propagating through turbulent shear flows. This provides a two-dimensional flow visualization technique which is nonintrusive. The slope measurements are used to reconstruct the phase of the turbulence-corrupted wavefront. Experiments were performed on a plane shear mixing layer of helium and nitrogen gas at fixed velocities, for five locations in the flow development. The two gases, having a density ratio of approximately seven, provide an effective means of simulating compressible shear layers. Statistical autocorrelation functions and structure functions are computed on the reconstructed phase maps. The autocorrelation function results indicate that the turbulence-induced phase fluctuations are not wide-sense stationary. The structure functions exhibit statistical homogeneity, indicating the phase fluctuations are stationary in first increments. However, the turbulence-corrupted phase is not isotropic. A five-thirds power law is shown to fit one-dimensional, orthogonal slices of the structure function, with scaling coefficients related to the location in the flow.
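A one-dimensional phase structure function and power-law fit of the kind reported above can be sketched as follows (the input row is synthetic; a real input would be a slice of a reconstructed phase map):

```python
import numpy as np

def structure_function_1d(phase_row: np.ndarray, max_lag: int) -> np.ndarray:
    """D(r) = <(phi(x + r) - phi(x))^2> along one slice of a phase map."""
    return np.array([np.mean((phase_row[r:] - phase_row[:-r]) ** 2)
                     for r in range(1, max_lag + 1)])

# Fit D(r) ~ c * r^p in log-log space; p near 5/3 would match the
# Kolmogorov-like scaling reported for these shear layers. The toy
# random-walk phase below yields p near 1 instead.
rng = np.random.default_rng(4)
phase_row = np.cumsum(rng.normal(size=4096))     # toy non-stationary phase
lags = np.arange(1, 101)
D = structure_function_1d(phase_row, max_lag=100)
p, log_c = np.polyfit(np.log(lags), np.log(D), 1)
print(f"fitted power-law exponent: {p:.2f}")
```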
Inferring brain-computational mechanisms with models of activity measurements
Diedrichsen, Jörn
2016-01-01
High-resolution functional imaging is providing increasingly rich measurements of brain activity in animals and humans. A major challenge is to leverage such data to gain insight into the brain's computational mechanisms. The first step is to define candidate brain-computational models (BCMs) that can perform the behavioural task in question. We would then like to infer which of the candidate BCMs best accounts for measured brain-activity data. Here we describe a method that complements each BCM by a measurement model (MM), which simulates the way the brain-activity measurements reflect neuronal activity (e.g. local averaging in functional magnetic resonance imaging (fMRI) voxels or sparse sampling in array recordings). The resulting generative model (BCM-MM) produces simulated measurements. To avoid having to fit the MM to predict each individual measurement channel of the brain-activity data, we compare the measured and predicted data at the level of summary statistics. We describe a novel particular implementation of this approach, called probabilistic representational similarity analysis (pRSA) with MMs, which uses representational dissimilarity matrices (RDMs) as the summary statistics. We validate this method by simulations of fMRI measurements (locally averaging voxels) based on a deep convolutional neural network for visual object recognition. Results indicate that the way the measurements sample the activity patterns strongly affects the apparent representational dissimilarities. However, modelling of the measurement process can account for these effects, and different BCMs remain distinguishable even under substantial noise. The pRSA method enables us to perform Bayesian inference on the set of BCMs and to recognize the data-generating model in each case. This article is part of the themed issue ‘Interpreting BOLD: a dialogue between cognitive and cellular neuroscience’. PMID:27574316
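The RDM summary statistic at the heart of pRSA reduces to pairwise pattern distances; a minimal sketch (the pattern dimensions and data are placeholders):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

# Representational dissimilarity matrix (RDM): pairwise correlation
# distance between condition-wise activity patterns. Patterns here
# are synthetic placeholders (conditions x measurement channels).
rng = np.random.default_rng(5)
patterns = rng.normal(size=(12, 500))      # 12 conditions, 500 voxels

rdm = squareform(pdist(patterns, metric="correlation"))
print(rdm.shape)                           # (12, 12) summary statistic

# Candidate models can then be compared by how well their predicted
# RDMs match the measured one, e.g. via rank correlation of the upper
# triangles; this is the role RDMs play as summary statistics in pRSA.
iu = np.triu_indices(12, k=1)
measured_vec = rdm[iu]
```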
A comprehensive analysis of the IMRT dose delivery process using statistical process control (SPC).
Gérard, Karine; Grandhaye, Jean-Pierre; Marchesi, Vincent; Kafrouni, Hanna; Husson, François; Aletti, Pierre
2009-04-01
The aim of this study is to introduce tools to improve the security of each IMRT patient treatment by determining action levels for the dose delivery process. To achieve this, the patient-specific quality control results performed with an ionization chamber (which characterize the dose delivery process) have been retrospectively analyzed using a method borrowed from industry: statistical process control (SPC). The latter consists of four principal, well-structured steps. The authors first quantified the short-term variability of ionization chamber measurements relative to the clinical tolerances used in the cancer center (+/- 4% deviation between the calculated and measured doses) by calculating a control process capability (C(pc)) index. The C(pc) index was found to be superior to 4, which implies that the observed variability of the dose delivery process is not biased by the short-term variability of the measurement. Then, the authors demonstrated using a normality test that the quality control results could be approximated by a normal distribution with two parameters (mean and standard deviation). Finally, the authors used two complementary tools (control charts and performance indices) to thoroughly analyze the IMRT dose delivery process. Control charts aim at monitoring the process over time using statistical control limits to distinguish random (natural) variations from significant changes in the process, whereas performance indices aim at quantifying the ability of the process to produce data that are within the clinical tolerances at a precise moment. The authors retrospectively showed that the analysis of three selected control charts (individual value, moving-range, and EWMA control charts) allowed efficient drift detection in the dose delivery process for prostate and head-and-neck treatments before the quality controls were outside the clinical tolerances. Therefore, when analyzed in real time during quality controls, they should improve the security of treatments. They also showed that the dose delivery processes in the cancer center were in control for prostate and head-and-neck treatments. In parallel, long-term process performance indices (P(p), P(pk), and P(pm)) were analyzed. Their analysis helped define which actions should be undertaken in order to improve the performance of the process. The prostate dose delivery process was shown to be statistically capable (0.08% of results are expected to be outside the clinical tolerances), contrary to the head-and-neck dose delivery process (5.76% of results are expected to be outside the clinical tolerances).
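The long-term performance indices named above have standard textbook forms; a minimal sketch under the +/-4% clinical tolerances quoted in the abstract (the QC deviation data are synthetic placeholders; P(pm) additionally requires a target value and is omitted):

```python
import numpy as np

def performance_indices(deviations_pct: np.ndarray,
                        usl: float = 4.0, lsl: float = -4.0) -> dict:
    """Long-term performance indices for QC dose deviations (in %).

    Tolerances of +/-4% follow the clinical limits quoted above; the
    standard definitions Pp = (USL - LSL) / 6s and
    Ppk = min(USL - mean, mean - LSL) / 3s are used.
    """
    mu = deviations_pct.mean()
    sigma = deviations_pct.std(ddof=1)
    return {
        "Pp": (usl - lsl) / (6 * sigma),
        "Ppk": min(usl - mu, mu - lsl) / (3 * sigma),
        "mean": mu,
        "sigma": sigma,
    }

# Synthetic QC deviations (% difference, calculated vs. measured dose):
rng = np.random.default_rng(6)
qc = rng.normal(loc=0.3, scale=1.1, size=120)
print(performance_indices(qc))
```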
ERIC Educational Resources Information Center
Scott, Leslie A.; Ingels, Steven J.
2007-01-01
The search for an understandable reporting format has led the National Assessment Governing Board to explore the possibility of measuring and interpreting student performance on the 12th-grade National Assessment of Educational Progress (NAEP), the Nation's Report Card, in terms of readiness for college, the workplace, and the military. This…
Statistical analysis and digital processing of the Mössbauer spectra
NASA Astrophysics Data System (ADS)
Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri
2010-02-01
This work focuses on the use of statistical methods and the development of filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in measured spectra are used in many scientific areas. The use of a purely statistical approach to the filtration of accumulated Mössbauer spectra is described. In Mössbauer spectroscopy, the noise can be considered a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of non-resonant photon counting, electronic noise (from γ-ray detection and discrimination units), and velocity system imperfections characterized by velocity nonlinearities. The possibility of a noise-reducing process using a newly designed statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to identify the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. An estimation of the theoretical correlation coefficient level corresponding to the spectrum resolution is performed. The correlation coefficient test is based on a comparison of the theoretical and experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of this method has binding conditions.
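A crude stand-in for the periodogram-based filtration idea: keep only the dominant spectral components of the accumulated spectrum and inverse-transform (the significance rule here is a simple power quantile, not the paper's feedback-controlled correlation test, and the spectrum is synthetic):

```python
import numpy as np

def periodogram_filter(spectrum: np.ndarray, keep_quantile: float = 0.95):
    """Keep only statistically dominant spectral components.

    Components whose periodogram power falls below the chosen quantile
    are zeroed before inverse transforming.
    """
    F = np.fft.rfft(spectrum - spectrum.mean())
    power = np.abs(F) ** 2
    F[power < np.quantile(power, keep_quantile)] = 0.0
    return np.fft.irfft(F, n=len(spectrum)) + spectrum.mean()

# Synthetic Mössbauer-like spectrum: Lorentzian dips plus Poisson noise.
x = np.arange(1024)
dips = sum(2e3 / (1 + ((x - c) / 8.0) ** 2) for c in (300, 512, 724))
counts = np.random.default_rng(7).poisson(1e4 - dips)
smoothed = periodogram_filter(counts.astype(float))
```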
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process.
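The dilution-series logic can be sketched directly: proportionality is checked with a zero-intercept fit of counts against dilution fraction, and precision with per-level coefficients of variation (the fractions, replicate counts, and noise level below are assumptions of this illustration):

```python
import numpy as np

# Illustrative dilution series: 4 target fractions, 5 replicates each.
fractions = np.repeat([1.0, 0.75, 0.5, 0.25], 5)
rng = np.random.default_rng(8)
counts = rng.normal(1e6 * fractions, 3e4)        # 'measured' cells/mL

# Proportionality: counts should fit a zero-intercept line vs. fraction.
slope = (fractions @ counts) / (fractions @ fractions)
residuals = counts - slope * fractions
r2 = 1 - residuals.var() / counts.var()
print(f"slope {slope:.3e}, R^2 {r2:.4f}")

# Precision: coefficient of variation within each dilution level.
for f in np.unique(fractions):
    level = counts[fractions == f]
    print(f"fraction {f}: CV = {level.std(ddof=1) / level.mean():.3%}")
```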
Reinstein, Dan Z.; Archer, Timothy J.; Silverman, Ronald H.; Coleman, D. Jackson
2008-01-01
Purpose: To determine the accuracy, repeatability, and reproducibility of measurement of lateral dimensions using the Artemis (Ultralink LLC) very high-frequency (VHF) digital ultrasound (US) arc scanner. Setting: London Vision Clinic, London, United Kingdom. Methods: A test object was measured first with a micrometer and then with the Artemis arc scanner. Five sets of 10 consecutive B-scans of the test object were performed with the scanner. The test object was removed from the system between each scan set. One expert observer and one newly trained observer separately measured the lateral dimension of the test object. Two-factor analysis of variance was performed. The accuracy was calculated as the average bias of the scan set averages. The repeatability and reproducibility coefficients were calculated. The coefficient of variation (CV) was calculated for repeatability and reproducibility. Results: The test object was measured to be 10.80 mm wide. The mean lateral dimension bias was 0.00 mm. The repeatability coefficient was 0.114 mm. The reproducibility coefficient was 0.026 mm. The repeatability CV was 0.38%, and the reproducibility CV was 0.09%. There was no statistically significant variation between observers (P = .0965). There was a statistically significant variation between scan sets (P = .0036), attributed to minor vertical changes in the alignment of the test object between consecutive scan sets. Conclusion: The Artemis VHF digital US arc scanner obtained accurate, repeatable, and reproducible measurements of lateral dimensions of the size commonly found in the anterior segment. PMID:17081860
Muzyka-Woźniak, Maria; Oleszko, Adam
2018-04-26
To compare measurements of axial length (AL), corneal curvature (K), anterior chamber depth (ACD), and white-to-white (WTW) distance obtained with a new device combining a Scheimpflug camera and partial coherence interferometry (Pentacam AXL) against a reference optical biometer (IOL Master 500), and to evaluate differences between IOL power calculations based on the two biometers. Ninety-seven eyes of 97 consecutive cataract or refractive lens exchange patients were examined preoperatively on IOL Master 500 and Pentacam AXL units. Comparisons between the two devices were performed for AL, K, ACD, and WTW. Intraocular lens (IOL) power targeting emmetropia was calculated with the SRK/T and Haigis formulas on both devices and compared. There were statistically significant differences between the two devices for all measured parameters (P < 0.05) except ACD (P = 0.36). Corneal curvature measured with the Pentacam AXL was significantly flatter than with the IOL Master. The mean difference in AL was clinically insignificant (0.01 mm; 95% LoA 0.16 mm). The Pentacam AXL yielded higher IOL power in 75% of eyes for the Haigis formula and in 62% of eyes for the SRK/T formula, with a mean difference within ± 0.5 D for 72% and 86% of eyes, respectively. There were statistically significant differences between the AL, K, and WTW measurements obtained with the compared biometers. Flatter corneal curvature measurements on the Pentacam AXL necessitate formula optimisation for the Pentacam AXL.
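Agreement statistics like the mean difference and 95% limits of agreement quoted above are typically computed Bland-Altman style; a minimal sketch (the paired readings are simulated, not the study's data):

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Mean difference and 95% limits of agreement between two devices."""
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Illustrative paired axial-length readings (mm) for 97 eyes:
rng = np.random.default_rng(9)
true_al = rng.normal(23.5, 1.0, size=97)
pentacam = true_al + rng.normal(0.01, 0.04, size=97)
iolmaster = true_al + rng.normal(0.00, 0.04, size=97)

bias, lo, hi = bland_altman(pentacam, iolmaster)
print(f"bias {bias:+.3f} mm, 95% LoA [{lo:+.3f}, {hi:+.3f}] mm")
```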
Stenhouse, Rosie; Snowden, Austyn; Young, Jenny; Carver, Fiona; Carver, Hannah; Brown, Norrie
2016-08-01
Reports of poor nursing care have focused attention on values based selection of candidates onto nursing programmes. Values based selection lacks clarity and valid measures. Previous caring experience might lead to better care. Emotional intelligence (EI) might be associated with performance, and is conceptualised and measurable. To examine the impact of 1) previous caring experience, 2) emotional intelligence, and 3) social connection scores on performance and retention in a cohort of first year nursing and midwifery students in Scotland. A longitudinal, quasi-experimental design. Adult and mental health nursing, and midwifery programmes in a Scottish University. Adult, mental health and midwifery students (n=598) completed the Trait Emotional Intelligence Questionnaire-short form and Schutte's Emotional Intelligence Scale on entry to their programmes at a Scottish University, alongside demographic and previous caring experience data. Social connection was calculated from a subset of questions identified within the TEIQue-SF in a prior factor and Rasch analysis. Student performance was calculated as the mean mark across the year. Withdrawal data were gathered. 598 students completed baseline measures. 315 students declared previous caring experience, 277 did not. An independent-samples t-test identified that those without previous caring experience scored higher on performance (57.33±11.38) than those with previous caring experience (54.87±11.19), a statistically significant difference of 2.47 (95% CI, 0.54 to 4.38), t(533)=2.52, p=.012. Emotional intelligence scores were not associated with performance. Social connection scores for those withdrawing (mean rank=249) and those remaining (mean rank=304.75) were statistically significantly different, U=15,300, z=-2.61, p<0.009. Previous caring experience led to worse performance in this cohort. Emotional intelligence was not a useful indicator of performance. Lower scores on the social connection factor were associated with withdrawal from the course.
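The two analyses reported above map onto standard scipy calls; a minimal sketch with simulated marks matching the reported group means and SDs (the sample sizes and normality of the simulated data are assumptions of this illustration):

```python
import numpy as np
from scipy import stats

# Simulated performance marks mirroring the reported group statistics.
rng = np.random.default_rng(10)
no_experience = rng.normal(57.33, 11.38, size=277)
experience = rng.normal(54.87, 11.19, size=315)

# Independent-samples t-test on mean performance marks.
t, p = stats.ttest_ind(no_experience, experience)
print(f"t = {t:.2f}, p = {p:.3f}")

# Mann-Whitney U for a rank-based comparison, as used for the social
# connection scores of withdrawing vs. remaining students.
u, p_u = stats.mannwhitneyu(no_experience, experience)
print(f"U = {u:.0f}, p = {p_u:.3f}")
```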
Friedberg, Mark W; Coltin, Kathryn L; Safran, Dana Gelb; Dresser, Marguerite; Zaslavsky, Alan M; Schneider, Eric C
2009-10-06
Recent proposals to reform primary care have encouraged physician practices to adopt such structural capabilities as performance feedback and electronic health records. Whether practices with these capabilities have higher performance on measures of primary care quality is unknown. To measure associations between structural capabilities of primary care practices and performance on commonly used quality measures. Cross-sectional analysis. Massachusetts. 412 primary care practices. During 2007, 1 physician from each participating primary care practice (median size, 4 physicians) was surveyed about structural capabilities of the practice (responses representing 308 practices were obtained). Data on practice structural capabilities were linked to multipayer performance data on 13 Healthcare Effectiveness Data and Information Set (HEDIS) process measures in 4 clinical areas: screening, diabetes, depression, and overuse. Frequently used multifunctional electronic health records were associated with higher performance on 5 HEDIS measures (3 in screening and 2 in diabetes), with statistically significant differences in performance ranging from 3.1 to 7.6 percentage points. Frequent meetings to discuss quality were associated with higher performance on 3 measures of diabetes care (differences ranging from 2.3 to 3.1 percentage points). Physician awareness of patient experience ratings was associated with higher performance on screening for breast cancer and cervical cancer (1.9 and 2.2 percentage points, respectively). No other structural capabilities were associated with performance on more than 1 measure. No capabilities were associated with performance on depression care or overuse. Structural capabilities of primary care practices were assessed by physician survey. Among the investigated structural capabilities of primary care practices, electronic health records were associated with higher performance across multiple HEDIS measures. Overall, the modest magnitude and limited number of associations between structural capabilities and clinical performance suggest the importance of continuing to measure the processes and outcomes of care for patients. The Commonwealth Fund.