Sample records for standard statistical measures

  1. The estimation of the measurement results with using statistical methods

    NASA Astrophysics Data System (ADS)

    Velychko, O.; Gordiyenko, T.

    2015-02-01

A number of international standards and guides describe statistical methods to apply for the management, control, and improvement of processes when analyzing technical measurement results. An analysis of international standards and guides on statistical methods for the estimation of measurement results, with recommendations for their application in laboratories, is described. For this analysis, cause-and-effect Ishikawa diagrams concerning the application of statistical methods to the estimation of measurement results are constructed.

  2. Langley Wind Tunnel Data Quality Assurance-Check Standard Results

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.

    2000-01-01

A framework for statistical evaluation, control, and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check standard testing and a small number of customer repeat-run sets. The results of check standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
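
    As a concrete illustration of the Shewhart individuals chart used in this kind of framework, here is a minimal sketch in Python; the check-standard values, the moving-range sigma estimate, and all names are illustrative assumptions, not Langley's actual procedure.

```python
import numpy as np

def shewhart_limits(x):
    """3-sigma Shewhart control limits for a series of check-standard
    results (individuals chart, illustrative sketch only)."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    # Moving-range estimate of short-term sigma, conventional for
    # individuals charts (d2 = 1.128 for subgroups of size 2).
    sigma = np.abs(np.diff(x)).mean() / 1.128
    return center, center - 3 * sigma, center + 3 * sigma

# Hypothetical drag-coefficient repeats from a check-standard model
runs = [0.0251, 0.0249, 0.0252, 0.0250, 0.0248, 0.0261, 0.0250]
center, lcl, ucl = shewhart_limits(runs)
flags = [not (lcl <= r <= ucl) for r in runs]   # out-of-control points
print(f"CL={center:.4f}  LCL={lcl:.4f}  UCL={ucl:.4f}  flags={flags}")
```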

  3. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    PubMed

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution, and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.

  4. A Framework for Establishing Standard Reference Scale of Texture by Multivariate Statistical Analysis Based on Instrumental Measurement and Sensory Evaluation.

    PubMed

    Zhi, Ruicong; Zhao, Lei; Xie, Nan; Wang, Houyin; Shi, Bolin; Shi, Jingye

    2016-01-13

A framework for establishing a standard reference scale for texture is proposed, based on multivariate statistical analysis of instrumental measurement and sensory evaluation. Multivariate statistical analysis is conducted to rapidly select typical reference samples with the characteristics of universality, representativeness, stability, substitutability, and traceability. The reasonableness of the framework is verified by establishing a standard reference scale for the texture attribute of hardness with well-known Chinese foods. More than 100 food products in 16 categories were tested using instrumental measurement (TPA test), and the results were analyzed with clustering analysis, principal component analysis, relative standard deviation, and analysis of variance. As a result, nine kinds of foods were selected to construct the hardness standard reference scale. The results indicate that the coefficient of determination between the estimated sensory value and the instrumentally measured value is significant (R² = 0.9765), which fits well with Stevens's theory. The research provides a reliable theoretical basis and practical guide for establishing quantitative standard reference scales for food texture characteristics.

  5. Statistical Analysis of a Round-Robin Measurement Survey of Two Candidate Materials for a Seebeck Coefficient Standard Reference Material

    PubMed Central

    Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.

    2009-01-01

    In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212

  6. OPR-PPR, a Computer Program for Assessing Data Importance to Model Predictions Using Linear Statistics

    USGS Publications Warehouse

    Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.

    2007-01-01

    The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
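
    To make the linear-theory computation concrete, the following sketch shows how an OPR-like percent change in prediction standard deviation could be computed from a sensitivity matrix; the matrices, weights, and function names are made-up placeholders for illustration, not OPR-PPR's actual input format or code.

```python
import numpy as np

def prediction_std(X, w, dzdp):
    """Linear-theory prediction standard deviation:
    sqrt(dzdp^T (X^T W X)^{-1} dzdp), with W = diag(w)."""
    XtWX = X.T @ (w[:, None] * X)
    return float(np.sqrt(dzdp @ np.linalg.solve(XtWX, dzdp)))

def opr_percent_increase(X, w, dzdp, i):
    """Percent increase in prediction std when observation i is
    omitted from the calibration set (an OPR-like statistic)."""
    keep = np.ones(len(w), dtype=bool)
    keep[i] = False
    s_all = prediction_std(X, w, dzdp)
    s_omit = prediction_std(X[keep], w[keep], dzdp)
    return 100.0 * (s_omit - s_all) / s_all

# Hypothetical sensitivities: 6 observations, 2 parameters
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))      # d(observation)/d(parameter)
w = np.ones(6)                   # observation weights
dzdp = np.array([1.0, 0.5])      # d(prediction)/d(parameter)
print([round(opr_percent_increase(X, w, dzdp, i), 1) for i in range(6)])
```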

  7. 1998 Conference on Precision Electromagnetic Measurements Digest. Proceedings.

    NASA Astrophysics Data System (ADS)

    Nelson, T. L.

    The following topics were dealt with: fundamental constants; caesium standards; AC-DC transfer; impedance measurement; length measurement; units; statistics; cryogenic resonators; time transfer; QED; resistance scaling and bridges; mass measurement; atomic fountains and clocks; single electron transport; Newtonian constant of gravitation; stabilised lasers and frequency measurements; cryogenic current comparators; optical frequency standards; high voltage devices and systems; international compatibility; magnetic measurement; precision power measurement; high resolution spectroscopy; DC transport standards; waveform acquisition and analysis; ion trap standards; optical metrology; quantised Hall effect; Josephson array comparisons; signal generation and measurement; Avogadro constant; microwave networks; wideband power standards; antennas, fields and EMC; quantum-based standards.

  8. Standardization of Sonographic Lung-to-Head Ratio Measurements in Isolated Congenital Diaphragmatic Hernia: Impact on the Reproducibility and Efficacy to Predict Outcomes.

    PubMed

    Britto, Ingrid Schwach Werneck; Sananes, Nicolas; Olutoye, Oluyinka O; Cass, Darrell L; Sangi-Haghpeykar, Haleh; Lee, Timothy C; Cassady, Christopher I; Mehollin-Ray, Amy; Welty, Stephen; Fernandes, Caraciolo; Belfort, Michael A; Lee, Wesley; Ruano, Rodrigo

    2015-10-01

The purpose of this study was to evaluate the impact of standardization of the lung-to-head ratio measurements in isolated congenital diaphragmatic hernia on prediction of neonatal outcomes and reproducibility. We conducted a retrospective cohort study of 77 cases of isolated congenital diaphragmatic hernia managed in a single center between 2004 and 2012. We compared lung-to-head ratio measurements that were performed prospectively in our institution without standardization to standardized measurements performed according to a defined protocol. The standardized lung-to-head ratio measurements were statistically more accurate than the nonstandardized measurements for predicting neonatal mortality (area under the receiver operating characteristic curve, 0.85 versus 0.732; P = .003). After standardization, there were no statistical differences in accuracy between measurements regardless of whether we considered observed-to-expected values (P > .05). Standardization of the lung-to-head ratio did not improve prediction of the need for extracorporeal membrane oxygenation (P > .05). Both intraoperator and interoperator reproducibility were good for the standardized lung-to-head ratio (intraclass correlation coefficient, 0.98 [95% confidence interval, 0.97-0.99]; bias, 0.02 [limits of agreement, -0.11 to +0.15], respectively). Standardization of lung-to-head ratio measurements improves prediction of neonatal outcomes. Further studies are needed to confirm these results and to assess the utility of standardization of other prognostic parameters.

  9. What to use to express the variability of data: Standard deviation or standard error of mean?

    PubMed

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. SEM quantifies uncertainty in the estimate of the mean, whereas SD indicates dispersion of the data from the mean. As readers are generally interested in knowing the variability within the sample, descriptive data should be precisely summarized with SD. Use of SEM should be limited to computing confidence intervals (CI), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
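
    A short worked example of the distinction, using assumed data (not from the article): SD describes the spread of individual observations, while SEM = SD/√n describes the uncertainty of the sample mean.

```python
import numpy as np

# Illustrative sample: assumed measurements, not data from the article
x = np.array([4.8, 5.1, 5.0, 4.7, 5.3, 4.9, 5.2, 5.0])
n = len(x)

sd = x.std(ddof=1)          # dispersion of individual values about the mean
sem = sd / np.sqrt(n)       # uncertainty in the estimate of the mean
# Large-sample 95% CI; for n = 8 a t-multiplier (~2.365) is more
# appropriate than 1.96, shown here only to keep the sketch short.
ci95 = (x.mean() - 1.96 * sem, x.mean() + 1.96 * sem)

print(f"mean={x.mean():.3f}  SD={sd:.3f}  SEM={sem:.3f}  95% CI={ci95}")
```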

  10. The Impact of Statistical Adjustment on Conditional Standard Errors of Measurement in the Assessment of Physician Communication Skills

    ERIC Educational Resources Information Center

    Raymond, Mark R.; Clauser, Brian E.; Furman, Gail E.

    2010-01-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary…

  11. ETS levels in hospitality environments satisfying ASHRAE standard 62-1989: "ventilation for acceptable indoor air quality"

    NASA Astrophysics Data System (ADS)

    Moschandreas, D. J.; Vuilleumier, K. L.

Prior to this study, indoor air constituent levels and ventilation rates of hospitality environments had not been measured simultaneously. This investigation measured indoor Environmental Tobacco Smoke-related (ETS-related) constituent levels in two restaurants, a billiard hall and a casino. The objective of this study was to characterize ETS-related constituent levels inside hospitality environments when the ventilation rates satisfy the requirements of the ASHRAE 62-1989 Ventilation Standard. The ventilation rate of each selected hospitality environment was measured and adjusted. The study advanced only if the requirements of the ASHRAE 62-1989 Ventilation Standard - the pertinent standard of the American Society of Heating, Refrigerating and Air-Conditioning Engineers - were satisfied. The supply rates of outdoor air and occupant density were measured intermittently to assure that the ventilation rate of each facility satisfied the standard under occupied conditions. Six ETS-related constituents were measured: respirable suspended particulate (RSP) matter, fluorescent particulate matter (FPM, an estimate of the ETS particle concentrations), ultraviolet particulate matter (UVPM, a second estimate of the ETS particle concentrations), solanesol, nicotine and 3-ethenylpyridine (3-EP). ETS-related constituent levels in smoking sections, non-smoking sections and outdoors were sampled daily for eight consecutive days at each hospitality environment. This study found that the difference between the concentrations of ETS-related constituents in indoor smoking and non-smoking sections was statistically significant. Differences between indoor non-smoking sections and outdoor ETS-related constituent levels were identified but were not statistically significant. Similarly, differences between weekday and weekend evenings were identified but were not statistically significant. The difference between indoor smoking sections and outdoors was statistically significant. Most importantly, ETS-related constituent concentrations measured indoors did not exceed existing occupational standards. It was concluded that if the measured ventilation rates of the sampled facilities satisfied the ASHRAE 62-1989 Ventilation Standard requirements, the corresponding ETS-related constituents were measured at concentrations below known harmful levels as specified by the American Conference of Governmental Industrial Hygienists (ACGIH).

  12. Statistical Process Control Charts for Measuring and Monitoring Temporal Consistency of Ratings

    ERIC Educational Resources Information Center

    Omar, M. Hafidz

    2010-01-01

    Methods of statistical process control were briefly investigated in the field of educational measurement as early as 1999. However, only the use of a cumulative sum chart was explored. In this article other methods of statistical quality control are introduced and explored. In particular, methods in the form of Shewhart mean and standard deviation…

  13. The Standard Deviation of Launch Vehicle Environments

    NASA Technical Reports Server (NTRS)

    Yunis, Isam

    2005-01-01

Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation, and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for the source environment and the payload environment.

  14. Low altitude wind shear statistics derived from measured and FAA proposed standard wind profiles

    NASA Technical Reports Server (NTRS)

    Dunham, R. E., Jr.; Usry, J. W.

    1984-01-01

Wind shear statistics were calculated for a simulated data set using wind profiles proposed as a standard and compared to statistics derived from measured wind profile data. Wind shear values were grouped in altitude bands of 100 ft between 100 and 1400 ft, and in wind shear increments of 0.025 kt/ft between ±0.600 kt/ft for the simulated data set and between ±0.200 kt/ft for the measured set. No values existed outside the ±0.200 kt/ft boundaries for the measured data. Frequency distributions, means, and standard deviations were derived for each altitude band for both data sets, and compared. Also, frequency distributions were derived for the total sample for both data sets and compared. The frequency of occurrence of a given wind shear was about the same for both data sets for wind shears less than ±0.10 kt/ft, but the simulated data set had larger values outside these boundaries. Neglecting the vertical wind component did not significantly affect the statistics for these data sets. The frequency of occurrence of wind shears for the flight measured data was essentially the same for each altitude band and the total sample, but the simulated data distributions were different for each altitude band. The larger wind shears for the flight measured data were found to have short durations.
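
    The banding-and-binning procedure described above is straightforward to reproduce; the sketch below uses synthetic shear values, with only the 100-ft altitude bands and 0.025 kt/ft increments taken from the abstract.

```python
import numpy as np

# Hypothetical wind-shear samples (kt/ft) with matching altitudes (ft)
rng = np.random.default_rng(1)
alt = rng.uniform(100, 1400, 5000)
shear = rng.normal(0.0, 0.05, 5000)

alt_bands = np.arange(100, 1500, 100)           # 100-ft altitude bands
shear_bins = np.arange(-0.600, 0.625, 0.025)    # 0.025 kt/ft increments

for lo, hi in zip(alt_bands[:-1], alt_bands[1:]):
    sel = shear[(alt >= lo) & (alt < hi)]
    hist, _ = np.histogram(sel, bins=shear_bins)  # frequency distribution
    print(f"{lo:4.0f}-{hi:4.0f} ft: n={sel.size:4d} "
          f"mean={sel.mean():+.4f} sd={sel.std(ddof=1):.4f}")
```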

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kegel, T.M.

Calibration laboratories are faced with the need to become accredited or registered to one or more quality standards. One requirement common to all of these standards is the need to have in place a measurement assurance program. What is a measurement assurance program? Brian Belanger, in Measurement Assurance Programs: Part 1, describes it as a "quality assurance program for a measurement process that quantifies the total uncertainty of the measurements (both random and systematic components of error) with respect to national or designated standards and demonstrates that the total uncertainty is sufficiently small to meet the user's requirements." Rolf Schumacher is more specific in Measurement Assurance in Your Own Laboratory. He states, "Measurement assurance is the application of broad quality control principles to measurements of calibrations." Here, the focus is on one important part of any measurement assurance program: implementation of statistical process control (SPC). Paraphrasing Juran's Quality Control Handbook, a process is in statistical control if the only observed variations are those that can be attributed to random causes. Conversely, a process that exhibits variations due to assignable causes is not in a state of statistical control. Finally, Carrol Croarkin states, "In the measurement assurance context the measurement algorithm including instrumentation, reference standards and operator interactions is the process that is to be controlled, and its direct product is the measurement per se. The measurements are assumed to be valid if the measurement algorithm is operating in a state of control." Implicit in this statement is the important fact that an out-of-control process cannot produce valid measurements. 7 figs.

  16. Measuring Skewness: A Forgotten Statistic?

    ERIC Educational Resources Information Center

    Doane, David P.; Seward, Lori E.

    2011-01-01

    This paper discusses common approaches to presenting the topic of skewness in the classroom, and explains why students need to know how to measure it. Two skewness statistics are examined: the Fisher-Pearson standardized third moment coefficient, and the Pearson 2 coefficient that compares the mean and median. The former is reported in statistical…

  17. 40 CFR 1065.12 - Approval of alternate procedures.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... engine meets all applicable emission standards according to specified procedures. (iii) Use statistical.... (e) We may give you specific directions regarding methods for statistical analysis, or we may approve... statistical tests. Perform the tests as follows: (1) Repeat measurements for all applicable duty cycles at...

  18. Does daily nurse staffing match ward workload variability? Three hospitals' experiences.

    PubMed

    Gabbay, Uri; Bukchin, Michael

    2009-01-01

Nurse shortage and rising healthcare resource burdens mean that appropriate workforce use is imperative. This paper aims to evaluate whether daily nursing staffing meets ward workload needs. Nurse attendance and daily nurses' workload capacity in three hospitals were evaluated. Statistical process control was used to evaluate intra-ward nurse workload capacity and day-to-day variations. Statistical process control is a statistics-based method for process monitoring that uses charts with a predefined target measure and control limits. Standardization was performed for inter-ward analysis by converting ward-specific crude measures to ward-specific relative measures by dividing observed by expected. Two charts, for acceptable and tolerable daily nurse workload intensity, were defined. Appropriate staffing indicators were defined as those exceeding predefined rates within acceptable and tolerable limits (50 percent and 80 percent, respectively). A total of 42 percent of the overall days fell within acceptable control limits and 71 percent within tolerable control limits. Appropriate staffing indicators were met in only 33 percent of wards regarding acceptable nurse workload intensity and in only 45 percent of wards regarding tolerable workloads. The study did not differentiate crude nurse attendance, and it did not take into account patient severity, since crude bed occupancy was used. Double statistical process control charts and certain staffing indicators were used, which is open to debate. Wards that met appropriate staffing indicators prove the method's feasibility. Wards that did not meet appropriate staffing indicators prove the importance of and the need for process evaluations and monitoring. The methods presented for monitoring daily staffing appropriateness are simple to implement, either for intra-ward day-to-day variation by using nurse workload capacity statistical process control charts, or for inter-ward evaluation using a standardized measure of nurse workload intensity. The real challenge will be to develop planning systems and implement corrective interventions such as dynamic and flexible daily staffing, which will face difficulties and barriers. The paper fulfils the need for workforce utilization evaluation. A simple method using available data for daily staffing appropriateness evaluation, which is easy to implement and operate, is presented. The statistical process control method enables intra-ward evaluation, while standardization by converting crude into relative measures enables inter-ward analysis. The staffing indicator definitions enable performance evaluation. This original study uses statistical process control to develop simple standardization methods and applies straightforward statistical tools. The method is not limited to crude measures; rather, it can use weighted workload measures such as nursing acuity or weighted nurse level (i.e. grade/band).

  19. QCD Precision Measurements and Structure Function Extraction at a High Statistics, High Energy Neutrino Scattering Experiment:. NuSOnG

    NASA Astrophysics Data System (ADS)

    Adams, T.; Batra, P.; Bugel, L.; Camilleri, L.; Conrad, J. M.; de Gouvêa, A.; Fisher, P. H.; Formaggio, J. A.; Jenkins, J.; Karagiorgi, G.; Kobilarcik, T. R.; Kopp, S.; Kyle, G.; Loinaz, W. A.; Mason, D. A.; Milner, R.; Moore, R.; Morfín, J. G.; Nakamura, M.; Naples, D.; Nienaber, P.; Olness, F. I.; Owens, J. F.; Pate, S. F.; Pronin, A.; Seligman, W. G.; Shaevitz, M. H.; Schellman, H.; Schienbein, I.; Syphers, M. J.; Tait, T. M. P.; Takeuchi, T.; Tan, C. Y.; van de Water, R. G.; Yamamoto, R. K.; Yu, J. Y.

We extend the physics case for a new high-energy, ultra-high statistics neutrino scattering experiment, NuSOnG (Neutrino Scattering On Glass), to address a variety of issues including precision QCD measurements, extraction of structure functions, and the derived Parton Distribution Functions (PDFs). This experiment uses a Tevatron-based neutrino beam to obtain a sample of Deep Inelastic Scattering (DIS) events which is over two orders of magnitude larger than past samples. We outline an innovative method for fitting the structure functions using a parametrized energy shift which yields reduced systematic uncertainties. High statistics measurements, in combination with improved systematics, will enable NuSOnG to perform discerning tests of fundamental Standard Model parameters as we search for deviations which may hint at "Beyond the Standard Model" physics.

  20. Measuring the Gas Constant "R": Propagation of Uncertainty and Statistics

    ERIC Educational Resources Information Center

    Olsen, Robert J.; Sattar, Simeen

    2013-01-01

    Determining the gas constant "R" by measuring the properties of hydrogen gas collected in a gas buret is well suited for comparing two approaches to uncertainty analysis using a single data set. The brevity of the experiment permits multiple determinations, allowing for statistical evaluation of the standard uncertainty u[subscript…

  1. Student Distractor Choices on the Mathematics Virginia Standards of Learning Middle School Assessments

    ERIC Educational Resources Information Center

    Lewis, Virginia Vimpeny

    2011-01-01

    Number Concepts; Measurement; Geometry; Probability; Statistics; and Patterns, Functions and Algebra. Procedural Errors were further categorized into the following content categories: Computation; Measurement; Statistics; and Patterns, Functions, and Algebra. The results of the analysis showed the main sources of error for 6th, 7th, and 8th…

  2. Standardizing power monitoring and control at exascale

    DOE PAGES

    Grant, Ryan E.; Levenhagen, Michael; Olivier, Stephen L.; ...

    2016-10-20

Power API, the result of collaboration among national laboratories, universities, and major vendors, provides a range of standardized power management functions, from application-level control and measurement to facility-level accounting, including real-time and historical statistics gathering. Here, support is already available for Intel and AMD CPUs and standalone measurement devices.

  3. Single-Item Measurement of Suicidal Behaviors: Validity and Consequences of Misclassification

    PubMed Central

    Millner, Alexander J.; Lee, Michael D.; Nock, Matthew K.

    2015-01-01

    Suicide is a leading cause of death worldwide. Although research has made strides in better defining suicidal behaviors, there has been less focus on accurate measurement. Currently, the widespread use of self-report, single-item questions to assess suicide ideation, plans and attempts may contribute to measurement problems and misclassification. We examined the validity of single-item measurement and the potential for statistical errors. Over 1,500 participants completed an online survey containing single-item questions regarding a history of suicidal behaviors, followed by questions with more precise language, multiple response options and narrative responses to examine the validity of single-item questions. We also conducted simulations to test whether common statistical tests are robust against the degree of misclassification produced by the use of single-items. We found that 11.3% of participants that endorsed a single-item suicide attempt measure engaged in behavior that would not meet the standard definition of a suicide attempt. Similarly, 8.8% of those who endorsed a single-item measure of suicide ideation endorsed thoughts that would not meet standard definitions of suicide ideation. Statistical simulations revealed that this level of misclassification substantially decreases statistical power and increases the likelihood of false conclusions from statistical tests. Providing a wider range of response options for each item reduced the misclassification rate by approximately half. Overall, the use of single-item, self-report questions to assess the presence of suicidal behaviors leads to misclassification, increasing the likelihood of statistical decision errors. Improving the measurement of suicidal behaviors is critical to increase understanding and prevention of suicide. PMID:26496707
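
    The effect of misclassification on statistical power can be illustrated with a small Monte Carlo sketch; the effect size, sample sizes, and the way the 11.3% false-positive rate is applied here are assumptions for illustration, not the authors' simulation code.

```python
import numpy as np
from scipy import stats

def power_with_misclassification(fp_rate, n=50, effect=0.4,
                                 sims=2000, seed=0):
    """Monte Carlo power of a two-sample t-test comparing 'attempt' vs
    'no attempt' groups when a fraction fp_rate of the 'attempt' group
    is misclassified (endorsed the item without a true attempt)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(sims):
        true_group = rng.normal(effect, 1.0, n)   # true attempters
        controls = rng.normal(0.0, 1.0, n)
        # Misclassified members actually come from the control distribution
        k = int(fp_rate * n)
        labeled_group = np.concatenate([true_group[: n - k],
                                        rng.normal(0.0, 1.0, k)])
        if stats.ttest_ind(labeled_group, controls).pvalue < 0.05:
            hits += 1
    return hits / sims

print(power_with_misclassification(0.0))     # clean measurement
print(power_with_misclassification(0.113))   # ~11.3% misclassification
```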

  4. Design of experiments enhanced statistical process control for wind tunnel check standard testing

    NASA Astrophysics Data System (ADS)

    Phillips, Ben D.

The current wind tunnel check standard testing program at NASA Langley Research Center is focused on increasing data quality, uncertainty quantification and overall control and improvement of wind tunnel measurement processes. The statistical process control (SPC) methodology employed in the check standard testing program allows for the tracking of variations in measurements over time as well as an overall assessment of facility health. While the SPC approach can and does provide researchers with valuable information, it has certain limitations in the areas of process improvement and uncertainty quantification. It is thought that, by utilizing design of experiments methodology in conjunction with current SPC practices, one can efficiently and more robustly characterize uncertainties and develop enhanced process improvement procedures. In this research, methodologies were developed to generate regression models for wind tunnel calibration coefficients, balance force coefficients and wind tunnel flow angularities. The coefficients of these regression models were then tracked in statistical process control charts, giving a higher level of understanding of the processes. The methodology outlined is sufficiently generic that this research can be applicable to any wind tunnel check standard testing program.

  5. Validation of Scores from a New Measure of Preservice Teachers' Self-Efficacy to Teach Statistics in the Middle Grades

    ERIC Educational Resources Information Center

    Harrell-Williams, Leigh M.; Sorto, M. Alejandra; Pierce, Rebecca L.; Lesser, Lawrence M.; Murphy, Teri J.

    2014-01-01

    The influential "Common Core State Standards for Mathematics" (CCSSM) expect students to start statistics learning during middle grades. Thus teacher education and professional development programs are advised to help preservice and in-service teachers increase their knowledge and confidence to teach statistics. Although existing…

  6. A statistical approach to instrument calibration

    Treesearch

    Robert R. Ziemer; David Strauss

    1978-01-01

    Summary - It has been found that two instruments will yield different numerical values when used to measure identical points. A statistical approach is presented that can be used to approximate the error associated with the calibration of instruments. Included are standard statistical tests that can be used to determine if a number of successive calibrations of the...

  7. A multi-time-step noise reduction method for measuring velocity statistics from particle tracking velocimetry

    NASA Astrophysics Data System (ADS)

    Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain

    2017-10-01

    We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
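
    A minimal sketch of the multi-time-step idea, under the simplifying assumption that the time steps stay well below the flow time scales: white position noise inflates the apparent velocity variance by 2σ²/Δt², so fitting the measured variance against 1/Δt² and reading off the intercept removes the noise contribution. This is a sketch of the idea with synthetic data, not the authors' code.

```python
import numpy as np

def denoised_velocity_variance(positions, dt_frame, steps=(1, 2, 3, 4)):
    """Estimate noise-free velocity variance from a track by computing
    velocities over several time steps m*dt. For position noise sigma,
    var(v_est) = var(v) + 2*sigma**2 / (m*dt)**2, so a linear fit of
    var(v_est) against 1/(m*dt)**2 has the true variance as intercept."""
    positions = np.asarray(positions, dtype=float)
    var_est, inv_dt2 = [], []
    for m in steps:
        v = (positions[m:] - positions[:-m]) / (m * dt_frame)
        var_est.append(v.var())
        inv_dt2.append(1.0 / (m * dt_frame) ** 2)
    slope, intercept = np.polyfit(inv_dt2, var_est, 1)
    return intercept            # slope/2 estimates sigma**2

# Synthetic check: smooth trajectory plus white position noise
t = np.arange(0, 10, 0.01)
x_true = np.sin(2 * np.pi * 0.2 * t)
x_meas = x_true + np.random.default_rng(2).normal(0, 0.005, t.size)
# Expected ~ (2*pi*0.2)**2 / 2 ≈ 0.79 despite the noise inflation
print(denoised_velocity_variance(x_meas, 0.01))
```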

  8. Heavy flavor decay of Zγ at CDF

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Timothy M. Harrington-Taber

    2013-01-01

Diboson production is an important and frequently measured parameter of the Standard Model. This analysis considers the previously neglected $p\bar{p} \to Z\gamma \to b\bar{b}$ channel, as measured at the Collider Detector at Fermilab. Using the entire Tevatron Run II dataset, the measured result is consistent with Standard Model predictions, but the statistical error associated with this method of measurement limits the strength of this correlation.

  9. Quantification and statistical significance analysis of group separation in NMR-based metabonomics studies

    PubMed Central

    Goodpaster, Aaron M.; Kennedy, Michael A.

    2015-01-01

    Currently, no standard metrics are used to quantify cluster separation in PCA or PLS-DA scores plots for metabonomics studies or to determine if cluster separation is statistically significant. Lack of such measures makes it virtually impossible to compare independent or inter-laboratory studies and can lead to confusion in the metabonomics literature when authors putatively identify metabolites distinguishing classes of samples based on visual and qualitative inspection of scores plots that exhibit marginal separation. While previous papers have addressed quantification of cluster separation in PCA scores plots, none have advocated routine use of a quantitative measure of separation that is supported by a standard and rigorous assessment of whether or not the cluster separation is statistically significant. Here quantification and statistical significance of separation of group centroids in PCA and PLS-DA scores plots are considered. The Mahalanobis distance is used to quantify the distance between group centroids, and the two-sample Hotelling's T2 test is computed for the data, related to an F-statistic, and then an F-test is applied to determine if the cluster separation is statistically significant. We demonstrate the value of this approach using four datasets containing various degrees of separation, ranging from groups that had no apparent visual cluster separation to groups that had no visual cluster overlap. Widespread adoption of such concrete metrics to quantify and evaluate the statistical significance of PCA and PLS-DA cluster separation would help standardize reporting of metabonomics data. PMID:26246647
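
    The quantification the authors advocate is compact enough to sketch directly; this assumes two score matrices from PCA or PLS-DA (rows = samples, columns = scores) and uses the standard two-sample Hotelling's T² formulas.

```python
import numpy as np
from scipy import stats

def hotelling_t2_test(A, B):
    """Two-sample Hotelling's T^2 test on group-score matrices.
    Returns the Mahalanobis distance between centroids, T^2, F,
    and the p-value; a sketch of the procedure in the abstract."""
    nA, p = A.shape
    nB = B.shape[0]
    diff = A.mean(axis=0) - B.mean(axis=0)
    # Pooled within-group covariance
    S = ((nA - 1) * np.cov(A, rowvar=False) +
         (nB - 1) * np.cov(B, rowvar=False)) / (nA + nB - 2)
    d2 = diff @ np.linalg.solve(S, diff)       # squared Mahalanobis distance
    t2 = (nA * nB) / (nA + nB) * d2
    f = (nA + nB - p - 1) / (p * (nA + nB - 2)) * t2
    pval = stats.f.sf(f, p, nA + nB - p - 1)
    return np.sqrt(d2), t2, f, pval

rng = np.random.default_rng(4)
A = rng.normal(0.0, 1.0, (20, 2))   # e.g., control-group PC1/PC2 scores
B = rng.normal(1.0, 1.0, (20, 2))   # e.g., treated-group scores
print(hotelling_t2_test(A, B))
```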

  10. Summary Report on NRL Participation in the Microwave Landing System Program.

    DTIC Science & Technology

    1980-08-19

shifters were measured and statistically analyzed. Several research contracts for promising phased array techniques were awarded to industrial contractors... program was written for compiling statistical data on the measurements, which reads out insertion phase characteristics and standard deviation... GLOSSARY OF TERMS ALPA Airline Pilots' Association ATA Air Transport Association AWA Australasian Wireless Amalgamated AWOP All-weather Operations

  11. Exploring Students' Conceptions of the Standard Deviation

    ERIC Educational Resources Information Center

    delMas, Robert; Liu, Yan

    2005-01-01

    This study investigated introductory statistics students' conceptual understanding of the standard deviation. A computer environment was designed to promote students' ability to coordinate characteristics of variation of values about the mean with the size of the standard deviation as a measure of that variation. Twelve students participated in an…

  12. Nonclassical light revealed by the joint statistics of simultaneous measurements.

    PubMed

    Luis, Alfredo

    2016-04-15

    Nonclassicality cannot be a single-observable property, since the statistics of any quantum observable is compatible with classical physics. We develop a general procedure to reveal nonclassical behavior of light states from the joint statistics arising in the practical measurement of multiple observables. Beside embracing previous approaches, this protocol can disclose nonclassical features for standard examples of classical-like behavior, such as SU(2) and Glauber coherent states. When combined with other criteria, this would imply that every light state is nonclassical.

  13. Analysis of Statistical Methods Currently used in Toxicology Journals

    PubMed Central

    Na, Jihye; Yang, Hyeri

    2014-01-01

Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health. PMID:25343012

  14. Analysis of Statistical Methods Currently used in Toxicology Journals.

    PubMed

    Na, Jihye; Yang, Hyeri; Bae, SeungJin; Lim, Kyung-Min

    2014-09-01

Statistical methods are frequently used in toxicology, yet it is not clear whether the methods employed by the studies are used consistently and conducted on sound statistical grounds. The purpose of this paper is to describe statistical methods used in top toxicology journals. More specifically, we sampled 30 papers published in 2014 from Toxicology and Applied Pharmacology, Archives of Toxicology, and Toxicological Science and described the methodologies used to provide descriptive and inferential statistics. One hundred thirteen endpoints were observed in those 30 papers, and most studies had sample sizes less than 10, with the median and the mode being 6 and 3 & 6, respectively. The mean (105/113, 93%) was dominantly used to measure central tendency, and the standard error of the mean (64/113, 57%) and standard deviation (39/113, 34%) were used to measure dispersion, while few studies provided justification for why these methods were selected. Inferential statistics were frequently conducted (93/113, 82%), with one-way ANOVA being most popular (52/93, 56%), yet few studies conducted either normality or equal variance tests. These results suggest that more consistent and appropriate use of statistical methods is necessary, which may enhance the role of toxicology in public health.

  15. Intercorrelations of Anthropometric Measurements: A Source Book for USA Data

    DTIC Science & Technology

    1978-05-01

and most important of the statistical measures after the arithmetic mean and standard deviation. The coefficient was devised and developed by Francis Galton and Karl Pearson in the last decades of the nineteenth century as a measure of the degree of interrelationship or concomitant variation of a... paragraphs--in a wide variety of formulas such as ones for tests of statistical significance and for discriminant functions. Correlation coefficients are

  16. Using luminosity data as a proxy for economic statistics

    PubMed Central

    Chen, Xi

    2011-01-01

    A pervasive issue in social and environmental research has been how to improve the quality of socioeconomic data in developing countries. Given the shortcomings of standard sources, the present study examines luminosity (measures of nighttime lights visible from space) as a proxy for standard measures of output (gross domestic product). We compare output and luminosity at the country level and at the 1° latitude × 1° longitude grid-cell level for the period 1992–2008. We find that luminosity has informational value for countries with low-quality statistical systems, particularly for those countries with no recent population or economic censuses. PMID:21576474

  17. The international growth standard for preadolescent and adolescent children: statistical considerations.

    PubMed

    Cole, T J

    2006-12-01

    This article discusses statistical considerations for the design of a new study intended to provide an International Growth Standard for Preadolescent and Adolescent Children, including issues such as cross-sectional, longitudinal, and mixed designs; sample-size derivation for the number of populations and number of children per population; modeling of growth centiles of height, weight, and other measurements; and modeling of the adolescent growth spurt. The conclusions are that a mixed longitudinal design will provide information on both growth distance and velocity; samples of children from 5 to 10 sites should be suitable for an international standard (based on political rather than statistical arguments); the samples should be broadly uniform across age but oversampled during puberty, and should include data into adulthood. The LMS method is recommended for constructing measurement centiles, and parametric or semiparametric approaches are available to estimate the timing of the adolescent growth spurt in individuals. If the new standard is to be grafted onto the 2006 World Health Organization (WHO) reference, caution is needed at the join point of 5 years, where children from the new standard are likely to be appreciably more obese than those from the WHO reference, due to the rising trends in obesity and the time gap in data collection between the two surveys.
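
    For reference, the LMS method recommended here converts a measurement to a z-score from age-specific L (Box-Cox power), M (median), and S (coefficient of variation) values; the numbers below are placeholders for illustration, not values from any growth reference.

```python
import numpy as np
from scipy import stats

def lms_zscore(y, L, M, S):
    """z-score under the LMS method: z = ((y/M)**L - 1) / (L*S),
    with the L -> 0 limit z = ln(y/M) / S."""
    if abs(L) < 1e-8:
        return np.log(y / M) / S
    return ((y / M) ** L - 1.0) / (L * S)

# Hypothetical LMS values for height at one age (illustrative only)
z = lms_zscore(y=153.0, L=1.0, M=149.5, S=0.045)
print(z, stats.norm.cdf(z))   # z-score and the corresponding centile
```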

  18. Ultra-low-dose computed tomographic angiography with model-based iterative reconstruction compared with standard-dose imaging after endovascular aneurysm repair: a prospective pilot study.

    PubMed

    Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K

    2014-12-01

    An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.

  19. Comparison of anthropometry with photogrammetry based on a standardized clinical photographic technique using a cephalostat and chair.

    PubMed

    Han, Kihwan; Kwon, Hyuk Joon; Choi, Tae Hyun; Kim, Jun Hyung; Son, Daegu

    2010-03-01

    The aim of this study was to standardize clinical photogrammetric techniques, and to compare anthropometry with photogrammetry. To standardize clinical photography, we have developed a photographic cephalostat and chair. We investigated the repeatability of the standardized clinical photogrammetric technique. Then, with 40 landmarks, a total of 96 anthropometric measurement items was obtained from 100 Koreans. Ninety six photogrammetric measurements from the same subjects were also obtained from standardized clinical photographs using Adobe Photoshop version 7.0 (Adobe Systems Corporation, San Jose, CA, USA). The photogrammetric and anthropometric measurement data (mm, degree) were then compared. A coefficient was obtained by dividing the anthropometric measurements by the photogrammetric measurements. The repeatability of the standardized photography was statistically significantly high (p=0.463). Among the 96 measurement items, 44 items were reliable; for these items the photogrammetric measurements were not different to the anthropometric measurements. The remaining 52 items must be classified as unreliable. By developing a photographic cephalostat and chair, we have standardized clinical photogrammetric techniques. The reliable set of measurement items can be used as anthropometric measurements. For unreliable measurement items, applying a suitable coefficient to the photogrammetric measurement allows the anthropometric measurement to be obtained indirectly.

  20. An analytic technique for statistically modeling random atomic clock errors in estimation

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1981-01-01

Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System, and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
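
    A rough sketch of one ingredient of the method, a first-order Gauss-Markov process, and the sum of five of them; the correlation times and amplitudes are placeholders, since the paper derives them from a specific oscillator's Allan variance model.

```python
import numpy as np

def first_order_markov(n, dt, tau, sigma, rng):
    """Simulate a first-order Gauss-Markov process with correlation
    time tau and steady-state standard deviation sigma."""
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)    # driving-noise std
    x = np.zeros(n)
    for k in range(1, n):
        x[k] = phi * x[k - 1] + q * rng.normal()
    return x

# A sum of five such processes, with spread correlation times, can
# approximate a broad power spectrum; taus and sigmas are placeholders.
rng = np.random.default_rng(5)
dt, n = 1.0, 10_000
taus = [1, 10, 100, 1_000, 10_000]
sigmas = [1e-11, 3e-11, 1e-10, 3e-10, 1e-9]
clock_error = sum(first_order_markov(n, dt, tau, s, rng)
                  for tau, s in zip(taus, sigmas))
print(clock_error.std())
```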

  1. An empirical determination of the minimum number of measurements needed to estimate the mean random vitrinite reflectance of disseminated organic matter

    USGS Publications Warehouse

    Barker, C.E.; Pawlewicz, M.J.

    1993-01-01

In coal samples, published recommendations based on statistical methods suggest 100 measurements are needed to estimate the mean random vitrinite reflectance (Rv-r) to within ±2%. Our survey of published thermal maturation studies indicates that those using dispersed organic matter (DOM) mostly have an objective of acquiring 50 reflectance measurements. This smaller objective size in DOM versus that for coal samples poses a statistical contradiction, because the standard deviations of DOM reflectance distributions are typically larger, indicating that a greater sample size is needed to accurately estimate Rv-r in DOM. However, in studies of thermal maturation using DOM, even 50 measurements can be an unrealistic requirement given the small amount of vitrinite often found in such samples. Furthermore, there is generally a reduced need for assuring precision like that needed for coal applications. Therefore, a key question in thermal maturation studies using DOM is how many measurements of Rv-r are needed to adequately estimate the mean. Our empirical approach to this problem is to compute the reflectance distribution statistics: mean, standard deviation, skewness, and kurtosis in increments of 10 measurements. This study compares these intermediate computations of Rv-r statistics with a final one computed using all measurements for that sample. Vitrinite reflectance was measured on mudstone and sandstone samples taken from borehole M-25 in the Cerro Prieto, Mexico geothermal system, which was selected because the rocks have a wide range of thermal maturation and a comparable humic DOM with depth. The results of this study suggest that after only 20-30 measurements the mean Rv-r is generally known to within 5%, and always to within 12%, of the mean Rv-r calculated using all of the measured particles. Thus, even in the worst case, the precision after measuring only 20-30 particles is in good agreement with the general precision of one decimal place recommended for mean Rv-r measurements on DOM. The coefficient of variation (V = standard deviation/mean) is proposed as a statistic to indicate the reliability of the mean Rv-r estimates made at n ≥ 20. This preliminary study suggests that a V below 0.2 indicates a reliable mean, whereas a larger V suggests an unreliable mean in such small samples.
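
    The incremental computation is easy to emulate; the sketch below recomputes the four distribution statistics and the proposed V = standard deviation/mean after every 10 readings, using synthetic reflectance values rather than the Cerro Prieto data.

```python
import numpy as np
from scipy import stats

def incremental_rvr_stats(measurements, step=10):
    """Recompute reflectance statistics after every `step` readings,
    mirroring the paper's incremental approach; V = sd/mean is the
    proposed reliability indicator for small samples."""
    x = np.asarray(measurements, dtype=float)
    rows = []
    for n in range(step, x.size + 1, step):
        sub = x[:n]
        sd = sub.std(ddof=1)
        rows.append((n, sub.mean(), sd,
                     stats.skew(sub), stats.kurtosis(sub),
                     sd / sub.mean()))       # coefficient of variation V
    return rows

# Synthetic vitrinite reflectance readings (illustrative, %Rv-r)
readings = np.random.default_rng(6).normal(0.8, 0.12, 50)
for n, mean, sd, sk, ku, v in incremental_rvr_stats(readings):
    print(f"n={n:2d} mean={mean:.3f} sd={sd:.3f} skew={sk:+.2f} "
          f"kurt={ku:+.2f} V={v:.2f}")
```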

  2. Analysis of statistical misconception in terms of statistical reasoning

    NASA Astrophysics Data System (ADS)

    Maryati, I.; Priatna, N.

    2018-05-01

Reasoning skill is needed by everyone in the globalization era, because every person has to be able to manage and use information from all over the world, which can be obtained easily. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. Developing this skill can be done through various levels of education. However, the skill remains low because many people, students included, assume that statistics is just the ability to count and use formulas. Students also still have a negative attitude toward coursework related to research. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The observation was done by analyzing the results of a misconception test and a statistical reasoning skill test, and by observing the effect of students' misconceptions on statistical reasoning skill. The sample of this research was 32 students of the math education department who had taken the descriptive statistics course. The mean value of the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean value of the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If the minimum value to meet the standard achievement of course competence is 65, the students' mean values are lower than the standard competence. The results of the misconception study emphasized which subtopics should be considered. Based on the assessment results, it was found that students' misconceptions occur in: 1) writing mathematical sentences and symbols well, 2) understanding basic definitions, 3) determining the concept to be used in solving a problem. In statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.

  3. Standard Entropy of Crystalline Iodine from Vapor Pressure Measurements: A Physical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Harris, Ronald M.

    1978-01-01

    Presents material dealing with an application of statistical thermodynamics to the diatomic solid I-2(s). The objective is to enhance the student's appreciation of the power of the statistical formulation of thermodynamics. The Simple Einstein Model is used. (Author/MA)

  4. The Statistical Loop Analyzer (SLA)

    NASA Technical Reports Server (NTRS)

    Lindsey, W. C.

    1985-01-01

The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking, and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier, and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements, the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal dropout tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.

  5. Uncertainty Analysis of Instrument Calibration and Application

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized; techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
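
    The propagation step described at the start of the abstract is the standard first-order formula u_f² = Σ(∂f/∂x_i)² u_i² for independent inputs; a small sketch, using the ideal-gas relation with assumed values as the "defining functional expression".

```python
import numpy as np

def propagate_uncertainty(grad, u):
    """First-order propagation of independent standard uncertainties
    through a function f: u_f = sqrt(sum((df/dx_i * u_i)**2))."""
    grad, u = np.asarray(grad), np.asarray(u)
    return float(np.sqrt(np.sum((grad * u) ** 2)))

# Example: R = P*V/(n*T), with assumed values and uncertainties (SI units)
P, V, n, T = 101325.0, 2.0e-4, 8.2e-3, 298.0
uP, uV, un, uT = 200.0, 2.0e-6, 5.0e-5, 0.5
R = P * V / (n * T)
grad = [V / (n * T),            # dR/dP
        P / (n * T),            # dR/dV
        -P * V / (n**2 * T),    # dR/dn
        -P * V / (n * T**2)]    # dR/dT
uR = propagate_uncertainty(grad, [uP, uV, un, uT])
print(f"R = {R:.3f} ± {uR:.3f} J/(mol·K)")
```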

  6. Educational Indicators: A Guide for Policymakers. CPRE Occasional Paper Series.

    ERIC Educational Resources Information Center

    Oakes, Jeannie

    An educational indicator is a statistic revealing something about the education system's health or performance. Indicators must meet certain substantive and technical standards that define the kind of information they should provide and the features they should measure. There are two types of statistical indicators. Whereas single statistics…

  7. 42 CFR 421.122 - Performance standards.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... performance, application of acceptable statistical measures of variation to nationwide intermediary experience... or criterion. (b) Factors beyond intermediary's control. To identify measurable factors that significantly affect an intermediary's performance, but that are not within the intermediary's control, CMS will...

  8. Observation of the rare Bs0 → µ+µ- decay from the combined analysis of CMS and LHCb data

    NASA Astrophysics Data System (ADS)

    Cms Collaboration; Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; Hartl, C.; Hörmann, N.; Hrubec, J.; Jeitler, M.; Kiesenhofer, W.; Knünz, V.; Krammer, M.; Krätschmer, I.; Liko, D.; Mikulec, I.; Rabady, D.; Rahbaran, B.; Rohringer, H.; Schöfbeck, R.; Strauss, J.; Treberer-Treberspurg, W.; Waltenberger, W.; Wulz, C.-E.; Mossolov, V.; Shumeiko, N.; Suarez Gonzalez, J.; Alderweireldt, S.; Bansal, S.; Cornelis, T.; de Wolf, E. A.; Janssen, X.; Knutsson, A.; Lauwers, J.; Luyckx, S.; Ochesanu, S.; Rougny, R.; van de Klundert, M.; van Haevermaet, H.; van Mechelen, P.; van Remortel, N.; van Spilbeeck, A.; Blekman, F.; Blyweert, S.; D'Hondt, J.; Daci, N.; Heracleous, N.; Keaveney, J.; Lowette, S.; Maes, M.; Olbrechts, A.; Python, Q.; Strom, D.; Tavernier, S.; van Doninck, W.; van Mulders, P.; van Onsem, G. P.; Villella, I.; Caillol, C.; Clerbaux, B.; de Lentdecker, G.; Dobur, D.; Favart, L.; Gay, A. P. R.; Grebenyuk, A.; Léonard, A.; Mohammadi, A.; Perniè, L.; Randle-Conde, A.; Reis, T.; Seva, T.; Thomas, L.; Vander Velde, C.; Vanlaer, P.; Wang, J.; Zenoni, F.; Adler, V.; Beernaert, K.; Benucci, L.; Cimmino, A.; Costantini, S.; Crucy, S.; Dildick, S.; Fagot, A.; Garcia, G.; McCartin, J.; Ocampo Rios, A. A.; Ryckbosch, D.; Salva Diblen, S.; Sigamani, M.; Strobbe, N.; Thyssen, F.; Tytgat, M.; Yazgan, E.; Zaganidis, N.; Basegmez, S.; Beluffi, C.; Bruno, G.; Castello, R.; Caudron, A.; Ceard, L.; da Silveira, G. G.; Delaere, C.; Du Pree, T.; Favart, D.; Forthomme, L.; Giammanco, A.; Hollar, J.; Jafari, A.; Jez, P.; Komm, M.; Lemaitre, V.; Nuttens, C.; Pagano, D.; Perrini, L.; Pin, A.; Piotrzkowski, K.; Popov, A.; Quertenmont, L.; Selvaggi, M.; Vidal Marono, M.; Vizan Garcia, J. M.; Beliy, N.; Caebergs, T.; Daubie, E.; Hammad, G. H.; Aldá Júnior, W. L.; Alves, G. A.; Brito, L.; Correa Martins Junior, M.; Dos Reis Martins, T.; Mora Herrera, C.; Pol, M. E.; Rebello Teles, P.; Carvalho, W.; Chinellato, J.; Custódio, A.; da Costa, E. M.; de Jesus Damiao, D.; de Oliveira Martins, C.; Fonseca de Souza, S.; Malbouisson, H.; Matos Figueiredo, D.; Mundim, L.; Nogima, H.; Prado da Silva, W. L.; Santaolalla, J.; Santoro, A.; Sznajder, A.; Tonelli Manganote, E. J.; Vilela Pereira, A.; Bernardes, C. A.; Dogra, S.; Fernandez Perez Tomei, T. R.; Gregores, E. M.; Mercadante, P. G.; Novaes, S. F.; Padula, Sandra S.; Aleksandrov, A.; Genchev, V.; Hadjiiska, R.; Iaydjiev, P.; Marinov, A.; Piperov, S.; Rodozov, M.; Sultanov, G.; Vutova, M.; Dimitrov, A.; Glushkov, I.; Litov, L.; Pavlov, B.; Petkov, P.; Bian, J. G.; Chen, G. M.; Chen, H. S.; Chen, M.; Cheng, T.; Du, R.; Jiang, C. H.; Plestina, R.; Romeo, F.; Tao, J.; Wang, Z.; Asawatangtrakuldee, C.; Ban, Y.; Li, Q.; Liu, S.; Mao, Y.; Qian, S. J.; Wang, D.; Xu, Z.; Zou, W.; Avila, C.; Cabrera, A.; Chaparro Sierra, L. F.; Florez, C.; Gomez, J. P.; Gomez Moreno, B.; Sanabria, J. C.; Godinovic, N.; Lelas, D.; Polic, D.; Puljak, I.; Antunovic, Z.; Kovac, M.; Brigljevic, V.; Kadija, K.; Luetic, J.; Mekterovic, D.; Sudic, L.; Attikis, A.; Mavromanolakis, G.; Mousa, J.; Nicolaou, C.; Ptochos, F.; Razis, P. A.; Bodlak, M.; Finger, M.; Finger, M., Jr.; Assran, Y.; Ellithi Kamel, A.; Mahmoud, M. A.; Radi, A.; Kadastik, M.; Murumaa, M.; Raidal, M.; Tiko, A.; Eerola, P.; Fedi, G.; Voutilainen, M.; Härkönen, J.; Karimäki, V.; Kinnunen, R.; Kortelainen, M. 
J.; Lampén, T.; Lassila-Perini, K.; Lehti, S.; Lindén, T.; Luukka, P.; Mäenpää, T.; Peltola, T.; Tuominen, E.; Tuominiemi, J.; Tuovinen, E.; Wendland, L.; Talvitie, J.; Tuuva, T.; Besancon, M.; Couderc, F.; Dejardin, M.; Denegri, D.; Fabbro, B.; Faure, J. L.; Favaro, C.; Ferri, F.; Ganjour, S.; Givernaud, A.; Gras, P.; Hamel de Monchenault, G.; Jarry, P.; Locci, E.; Malcles, J.; Rander, J.; Rosowsky, A.; Titov, M.; Baffioni, S.; Beaudette, F.; Busson, P.; Charlot, C.; Dahms, T.; Dalchenko, M.; Dobrzynski, L.; Filipovic, N.; Florent, A.; Granier de Cassagnac, R.; Mastrolorenzo, L.; Miné, P.; Mironov, C.; Naranjo, I. N.; Nguyen, M.; Ochando, C.; Ortona, G.; Paganini, P.; Regnard, S.; Salerno, R.; Sauvan, J. B.; Sirois, Y.; Veelken, C.; Yilmaz, Y.; Zabi, A.; Agram, J.-L.; Andrea, J.; Aubin, A.; Bloch, D.; Brom, J.-M.; Chabert, E. C.; Collard, C.; Conte, E.; Fontaine, J.-C.; Gelé, D.; Goerlach, U.; Goetzmann, C.; Le Bihan, A.-C.; Skovpen, K.; van Hove, P.; Gadrat, S.; Beauceron, S.; Beaupere, N.; Boudoul, G.; Bouvier, E.; Brochet, S.; Carrillo Montoya, C. A.; Chasserat, J.; Chierici, R.; Contardo, D.; Depasse, P.; El Mamouni, H.; Fan, J.; Fay, J.; Gascon, S.; Gouzevitch, M.; Ille, B.; Kurca, T.; Lethuillier, M.; Mirabito, L.; Perries, S.; Ruiz Alvarez, J. D.; Sabes, D.; Sgandurra, L.; Sordini, V.; Vander Donckt, M.; Verdier, P.; Viret, S.; Xiao, H.; Tsamalaidze, Z.; Autermann, C.; Beranek, S.; Bontenackels, M.; Edelhoff, M.; Feld, L.; Heister, A.; Hindrichs, O.; Klein, K.; Ostapchuk, A.; Raupach, F.; Sammet, J.; Schael, S.; Schulte, J. F.; Weber, H.; Wittmer, B.; Zhukov, V.; Ata, M.; Brodski, M.; Dietz-Laursonn, E.; Duchardt, D.; Erdmann, M.; Fischer, R.; Güth, A.; Hebbeker, T.; Heidemann, C.; Hoepfner, K.; Klingebiel, D.; Knutzen, S.; Kreuzer, P.; Merschmeyer, M.; Meyer, A.; Millet, P.; Olschewski, M.; Padeken, K.; Papacz, P.; Reithler, H.; Schmitz, S. A.; Sonnenschein, L.; Teyssier, D.; Thüer, S.; Weber, M.; Cherepanov, V.; Erdogan, Y.; Flügge, G.; Geenen, H.; Geisler, M.; Haj Ahmad, W.; Hoehle, F.; Kargoll, B.; Kress, T.; Kuessel, Y.; Künsken, A.; Lingemann, J.; Nowack, A.; Nugent, I. M.; Pooth, O.; Stahl, A.; Aldaya Martin, M.; Asin, I.; Bartosik, N.; Behr, J.; Behrens, U.; Bell, A. J.; Bethani, A.; Borras, K.; Burgmeier, A.; Cakir, A.; Calligaris, L.; Campbell, A.; Choudhury, S.; Costanza, F.; Diez Pardos, C.; Dolinska, G.; Dooling, S.; Dorland, T.; Eckerlin, G.; Eckstein, D.; Eichhorn, T.; Flucke, G.; Garay Garcia, J.; Geiser, A.; Gunnellini, P.; Hauk, J.; Hempel, M.; Jung, H.; Kalogeropoulos, A.; Kasemann, M.; Katsas, P.; Kieseler, J.; Kleinwort, C.; Korol, I.; Krücker, D.; Lange, W.; Leonard, J.; Lipka, K.; Lobanov, A.; Lohmann, W.; Lutz, B.; Mankel, R.; Marfin, I.; Melzer-Pellmann, I.-A.; Meyer, A. B.; Mittag, G.; Mnich, J.; Mussgiller, A.; Naumann-Emme, S.; Nayak, A.; Ntomari, E.; Perrey, H.; Pitzl, D.; Placakyte, R.; Raspereza, A.; Ribeiro Cipriano, P. M.; Roland, B.; Ron, E.; Sahin, M. Ö.; Salfeld-Nebgen, J.; Saxena, P.; Schoerner-Sadenius, T.; Schröder, M.; Seitz, C.; Spannagel, S.; Vargas Trevino, A. D. R.; Walsh, R.; Wissing, C.; Blobel, V.; Centis Vignali, M.; Draeger, A. R.; Erfle, J.; Garutti, E.; Goebel, K.; Görner, M.; Haller, J.; Hoffmann, M.; Höing, R. 
S.; Junkes, A.; Kirschenmann, H.; Klanner, R.; Kogler, R.; Lange, J.; Lapsien, T.; Lenz, T.; Marchesini, I.; Ott, J.; Peiffer, T.; Perieanu, A.; Pietsch, N.; Poehlsen, J.; Poehlsen, T.; Rathjens, D.; Sander, C.; Schettler, H.; Schleper, P.; Schlieckau, E.; Schmidt, A.; Seidel, M.; Sola, V.; Stadie, H.; Steinbrück, G.; Troendle, D.; Usai, E.; Vanelderen, L.; Vanhoefer, A.; Barth, C.; Baus, C.; Berger, J.; Böser, C.; Butz, E.; Chwalek, T.; de Boer, W.; Descroix, A.; Dierlamm, A.; Feindt, M.; Frensch, F.; Giffels, M.; Gilbert, A.; Hartmann, F.; Hauth, T.; Husemann, U.; Katkov, I.; Kornmayer, A.; Kuznetsova, E.; Lobelle Pardo, P.; Mozer, M. U.; Müller, T.; Müller, Th.; Nürnberg, A.; Quast, G.; Rabbertz, K.; Röcker, S.; Simonis, H. J.; Stober, F. M.; Ulrich, R.; Wagner-Kuhr, J.; Wayand, S.; Weiler, T.; Wolf, R.; Anagnostou, G.; Daskalakis, G.; Geralis, T.; Giakoumopoulou, V. A.; Kyriakis, A.; Loukas, D.; Markou, A.; Markou, C.; Psallidas, A.; Topsis-Giotis, I.; Agapitos, A.; Kesisoglou, S.; Panagiotou, A.; Saoulidou, N.; Stiliaris, E.; Aslanoglou, X.; Evangelou, I.; Flouris, G.; Foudas, C.; Kokkas, P.; Manthos, N.; Papadopoulos, I.; Paradas, E.; Strologas, J.; Bencze, G.; Hajdu, C.; Hidas, P.; Horvath, D.; Sikler, F.; Veszpremi, V.; Vesztergombi, G.; Zsigmond, A. J.; Beni, N.; Czellar, S.; Karancsi, J.; Molnar, J.; Palinkas, J.; Szillasi, Z.; Makovec, A.; Raics, P.; Trocsanyi, Z. L.; Ujvari, B.; Sahoo, N.; Swain, S. K.; Beri, S. B.; Bhatnagar, V.; Gupta, R.; Bhawandeep, U.; Kalsi, A. K.; Kaur, M.; Kumar, R.; Mittal, M.; Nishu, N.; Singh, J. B.; Ashok Kumar; Arun Kumar; Ahuja, S.; Bhardwaj, A.; Choudhary, B. C.; Kumar, A.; Malhotra, S.; Naimuddin, M.; Ranjan, K.; Sharma, V.; Banerjee, S.; Bhattacharya, S.; Chatterjee, K.; Dutta, S.; Gomber, B.; Jain, Sa.; Jain, Sh.; Khurana, R.; Modak, A.; Mukherjee, S.; Roy, D.; Sarkar, S.; Sharan, M.; Abdulsalam, A.; Dutta, D.; Kailas, S.; Kumar, V.; Mohanty, A. K.; Pant, L. M.; Shukla, P.; Topkar, A.; Aziz, T.; Banerjee, S.; Bhowmik, S.; Chatterjee, R. M.; Dewanjee, R. K.; Dugad, S.; Ganguly, S.; Ghosh, S.; Guchait, M.; Gurtu, A.; Kole, G.; Kumar, S.; Maity, M.; Majumder, G.; Mazumdar, K.; Mohanty, G. B.; Parida, B.; Sudhakar, K.; Wickramage, N.; Bakhshiansohi, H.; Behnamian, H.; Etesami, S. M.; Fahim, A.; Goldouzian, R.; Khakzad, M.; Mohammadi Najafabadi, M.; Naseri, M.; Paktinat Mehdiabadi, S.; Rezaei Hosseinabadi, F.; Safarzadeh, B.; Zeinali, M.; Felcini, M.; Grunewald, M.; Abbrescia, M.; Calabria, C.; Chhibra, S. S.; Colaleo, A.; Creanza, D.; de Filippis, N.; de Palma, M.; Fiore, L.; Iaselli, G.; Maggi, G.; Maggi, M.; My, S.; Nuzzo, S.; Pompili, A.; Pugliese, G.; Radogna, R.; Selvaggi, G.; Sharma, A.; Silvestris, L.; Venditti, R.; Verwilligen, P.; Abbiendi, G.; Benvenuti, A. C.; Bonacorsi, D.; Braibant-Giacomelli, S.; Brigliadori, L.; Campanini, R.; Capiluppi, P.; Castro, A.; Cavallo, F. R.; Codispoti, G.; Cuffiani, M.; Dallavalle, G. M.; Fabbri, F.; Fanfani, A.; Fasanella, D.; Giacomelli, P.; Grandi, C.; Guiducci, L.; Marcellini, S.; Masetti, G.; Montanari, A.; Navarria, F. L.; Perrotta, A.; Primavera, F.; Rossi, A. M.; Rovelli, T.; Siroli, G. 
P.; Tosi, N.; Travaglini, R.; Albergo, S.; Cappello, G.; Chiorboli, M.; Costa, S.; Giordano, F.; Potenza, R.; Tricomi, A.; Tuve, C.; Barbagli, G.; Ciulli, V.; Civinini, C.; D'Alessandro, R.; Focardi, E.; Gallo, E.; Gonzi, S.; Gori, V.; Lenzi, P.; Meschini, M.; Paoletti, S.; Sguazzoni, G.; Tropiano, A.; Benussi, L.; Bianco, S.; Fabbri, F.; Piccolo, D.; Ferretti, R.; Ferro, F.; Lo Vetere, M.; Robutti, E.; Tosi, S.; Dinardo, M. E.; Fiorendi, S.; Gennai, S.; Gerosa, R.; Ghezzi, A.; Govoni, P.; Lucchini, M. T.; Malvezzi, S.; Manzoni, R. A.; Martelli, A.; Marzocchi, B.; Menasce, D.; Moroni, L.; Paganoni, M.; Pedrini, D.; Ragazzi, S.; Redaelli, N.; Tabarelli de Fatis, T.; Buontempo, S.; Cavallo, N.; di Guida, S.; Fabozzi, F.; Iorio, A. O. M.; Lista, L.; Meola, S.; Merola, M.; Paolucci, P.; Azzi, P.; Bacchetta, N.; Bisello, D.; Branca, A.; Carlin, R.; Checchia, P.; Dall'Osso, M.; Dorigo, T.; Dosselli, U.; Galanti, M.; Gasparini, F.; Gasparini, U.; Giubilato, P.; Gozzelino, A.; Kanishchev, K.; Lacaprara, S.; Margoni, M.; Meneguzzo, A. T.; Pazzini, J.; Pozzobon, N.; Ronchese, P.; Simonetto, F.; Torassa, E.; Tosi, M.; Zotto, P.; Zucchetta, A.; Zumerle, G.; Gabusi, M.; Ratti, S. P.; Re, V.; Riccardi, C.; Salvini, P.; Vitulo, P.; Biasini, M.; Bilei, G. M.; Ciangottini, D.; Fanò, L.; Lariccia, P.; Mantovani, G.; Menichelli, M.; Saha, A.; Santocchia, A.; Spiezia, A.; Androsov, K.; Azzurri, P.; Bagliesi, G.; Bernardini, J.; Boccali, T.; Broccolo, G.; Castaldi, R.; Ciocci, M. A.; Dell'Orso, R.; Donato, S.; Fiori, F.; Foà, L.; Giassi, A.; Grippo, M. T.; Ligabue, F.; Lomtadze, T.; Martini, L.; Messineo, A.; Moon, C. S.; Palla, F.; Rizzi, A.; Savoy-Navarro, A.; Serban, A. T.; Spagnolo, P.; Squillacioti, P.; Tenchini, R.; Tonelli, G.; Venturi, A.; Verdini, P. G.; Vernieri, C.; Barone, L.; Cavallari, F.; D'Imperio, G.; Del Re, D.; Diemoz, M.; Jorda, C.; Longo, E.; Margaroli, F.; Meridiani, P.; Micheli, F.; Nourbakhsh, S.; Organtini, G.; Paramatti, R.; Rahatlou, S.; Rovelli, C.; Santanastasio, F.; Soffi, L.; Traczyk, P.; Amapane, N.; Arcidiacono, R.; Argiro, S.; Arneodo, M.; Bellan, R.; Biino, C.; Cartiglia, N.; Casasso, S.; Costa, M.; Degano, A.; Demaria, N.; Finco, L.; Mariotti, C.; Maselli, S.; Migliore, E.; Monaco, V.; Musich, M.; Obertino, M. M.; Pacher, L.; Pastrone, N.; Pelliccioni, M.; Pinna Angioni, G. L.; Potenza, A.; Romero, A.; Ruspa, M.; Sacchi, R.; Solano, A.; Staiano, A.; Tamponi, U.; Belforte, S.; Candelise, V.; Casarsa, M.; Cossutti, F.; Della Ricca, G.; Gobbo, B.; La Licata, C.; Marone, M.; Schizzi, A.; Umer, T.; Zanetti, A.; Chang, S.; Kropivnitskaya, A.; Nam, S. K.; Kim, D. H.; Kim, G. N.; Kim, M. S.; Kong, D. J.; Lee, S.; Oh, Y. D.; Park, H.; Sakharov, A.; Son, D. C.; Kim, T. J.; Kim, J. Y.; Song, S.; Choi, S.; Gyun, D.; Hong, B.; Jo, M.; Kim, H.; Kim, Y.; Lee, B.; Lee, K. S.; Park, S. K.; Roh, Y.; Yoo, H. D.; Choi, M.; Kim, J. H.; Park, I. C.; Ryu, G.; Ryu, M. S.; Choi, Y.; Choi, Y. K.; Goh, J.; Kim, D.; Kwon, E.; Lee, J.; Yu, I.; Juodagalvis, A.; Komaragiri, J. R.; Md Ali, M. A. B.; Casimiro Linares, E.; Castilla-Valdez, H.; de La Cruz-Burelo, E.; Heredia-de La Cruz, I.; Hernandez-Almada, A.; Lopez-Fernandez, R.; Sanchez-Hernandez, A.; Carrillo Moreno, S.; Vazquez Valencia, F.; Pedraza, I.; Salazar Ibarguen, H. A.; Morelos Pineda, A.; Krofcheck, D.; Butler, P. H.; Reucroft, S.; Ahmad, A.; Ahmad, M.; Hassan, Q.; Hoorani, H. R.; Khan, W. 
A.; Khurshid, T.; Shoaib, M.; Bialkowska, H.; Bluj, M.; Boimska, B.; Frueboes, T.; Górski, M.; Kazana, M.; Nawrocki, K.; Romanowska-Rybinska, K.; Szleper, M.; Zalewski, P.; Brona, G.; Bunkowski, K.; Cwiok, M.; Dominik, W.; Doroba, K.; Kalinowski, A.; Konecki, M.; Krolikowski, J.; Misiura, M.; Olszewski, M.; Wolszczak, W.; Bargassa, P.; Beirão da Cruz E Silva, C.; Faccioli, P.; Ferreira Parracho, P. G.; Gallinaro, M.; Lloret Iglesias, L.; Nguyen, F.; Rodrigues Antunes, J.; Seixas, J.; Varela, J.; Vischia, P.; Afanasiev, S.; Bunin, P.; Gavrilenko, M.; Golutvin, I.; Gorbunov, I.; Kamenev, A.; Karjavin, V.; Konoplyanikov, V.; Lanev, A.; Malakhov, A.; Matveev, V.; Moisenz, P.; Palichik, V.; Perelygin, V.; Shmatov, S.; Skatchkov, N.; Smirnov, V.; Zarubin, A.; Golovtsov, V.; Ivanov, Y.; Kim, V.; Levchenko, P.; Murzin, V.; Oreshkin, V.; Smirnov, I.; Sulimov, V.; Uvarov, L.; Vavilov, S.; Vorobyev, A.; Vorobyev, An.; Andreev, Yu.; Dermenev, A.; Gninenko, S.; Golubev, N.; Kirsanov, M.; Krasnikov, N.; Pashenkov, A.; Tlisov, D.; Toropin, A.; Epshteyn, V.; Gavrilov, V.; Lychkovskaya, N.; Popov, V.; Pozdnyakov, I.; Safronov, G.; Semenov, S.; Spiridonov, A.; Stolin, V.; Vlasov, E.; Zhokin, A.; Andreev, V.; Azarkin, M.; Dremin, I.; Kirakosyan, M.; Leonidov, A.; Mesyats, G.; Rusakov, S. V.; Vinogradov, A.; Belyaev, A.; Boos, E.; Dubinin, M.; Dudko, L.; Ershov, A.; Gribushin, A.; Klyukhin, V.; Kodolova, O.; Lokhtin, I.; Obraztsov, S.; Petrushanko, S.; Savrin, V.; Snigirev, A.; Azhgirey, I.; Bayshev, I.; Bitioukov, S.; Kachanov, V.; Kalinin, A.; Konstantinov, D.; Krychkine, V.; Petrov, V.; Ryutin, R.; Sobol, A.; Tourtchanovitch, L.; Troshin, S.; Tyurin, N.; Uzunian, A.; Volkov, A.; Adzic, P.; Ekmedzic, M.; Milosevic, J.; Rekovic, V.; Alcaraz Maestre, J.; Battilana, C.; Calvo, E.; Cerrada, M.; Chamizo Llatas, M.; Colino, N.; de La Cruz, B.; Delgado Peris, A.; Domínguez Vázquez, D.; Escalante Del Valle, A.; Fernandez Bedoya, C.; Fernández Ramos, J. P.; Flix, J.; Fouz, M. C.; Garcia-Abia, P.; Gonzalez Lopez, O.; Goy Lopez, S.; Hernandez, J. M.; Josa, M. I.; Navarro de Martino, E.; Pérez-Calero Yzquierdo, A.; Puerta Pelayo, J.; Quintario Olmeda, A.; Redondo, I.; Romero, L.; Soares, M. S.; Albajar, C.; de Trocóniz, J. F.; Missiroli, M.; Moran, D.; Brun, H.; Cuevas, J.; Fernandez Menendez, J.; Folgueras, S.; Gonzalez Caballero, I.; Brochero Cifuentes, J. A.; Cabrillo, I. J.; Calderon, A.; Duarte Campderros, J.; Fernandez, M.; Gomez, G.; Graziano, A.; Lopez Virto, A.; Marco, J.; Marco, R.; Martinez Rivero, C.; Matorras, F.; Munoz Sanchez, F. J.; Piedra Gomez, J.; Rodrigo, T.; Rodríguez-Marrero, A. Y.; Ruiz-Jimeno, A.; Scodellaro, L.; Vila, I.; Vilar Cortabitarte, R.; Abbaneo, D.; Auffray, E.; Auzinger, G.; Bachtis, M.; Baillon, P.; Ball, A. H.; Barney, D.; Benaglia, A.; Bendavid, J.; Benhabib, L.; Benitez, J. 
F.; Bernet, C.; Bloch, P.; Bocci, A.; Bonato, A.; Bondu, O.; Botta, C.; Breuker, H.; Camporesi, T.; Cerminara, G.; Colafranceschi, S.; D'Alfonso, M.; D'Enterria, D.; Dabrowski, A.; David, A.; de Guio, F.; de Roeck, A.; de Visscher, S.; di Marco, E.; Dobson, M.; Dordevic, M.; Dupont-Sagorin, N.; Elliott-Peisert, A.; Franzoni, G.; Funk, W.; Gigi, D.; Gill, K.; Giordano, D.; Girone, M.; Glege, F.; Guida, R.; Gundacker, S.; Guthoff, M.; Hammer, J.; Hansen, M.; Harris, P.; Hegeman, J.; Innocente, V.; Janot, P.; Kousouris, K.; Krajczar, K.; Lecoq, P.; Lourenço, C.; Magini, N.; Malgeri, L.; Mannelli, M.; Marrouche, J.; Masetti, L.; Meijers, F.; Mersi, S.; Meschi, E.; Moortgat, F.; Morovic, S.; Mulders, M.; Orsini, L.; Pape, L.; Perez, E.; Perrozzi, L.; Petrilli, A.; Petrucciani, G.; Pfeiffer, A.; Pimiä, M.; Piparo, D.; Plagge, M.; Racz, A.; Rolandi, G.; Rovere, M.; Sakulin, H.; Schäfer, C.; Schwick, C.; Sharma, A.; Siegrist, P.; Silva, P.; Simon, M.; Sphicas, P.; Spiga, D.; Steggemann, J.; Stieger, B.; Stoye, M.; Takahashi, Y.; Treille, D.; Tsirou, A.; Veres, G. I.; Wardle, N.; Wöhri, H. K.; Wollny, H.; Zeuner, W. D.; Bertl, W.; Deiters, K.; Erdmann, W.; Horisberger, R.; Ingram, Q.; Kaestli, H. C.; Kotlinski, D.; Renker, D.; Rohe, T.; Bachmair, F.; Bäni, L.; Bianchini, L.; Buchmann, M. A.; Casal, B.; Chanon, N.; Dissertori, G.; Dittmar, M.; Donegà, M.; Dünser, M.; Eller, P.; Grab, C.; Hits, D.; Hoss, J.; Lustermann, W.; Mangano, B.; Marini, A. C.; Marionneau, M.; Martinez Ruiz Del Arbol, P.; Masciovecchio, M.; Meister, D.; Mohr, N.; Musella, P.; Nägeli, C.; Nessi-Tedaldi, F.; Pandolfi, F.; Pauss, F.; Peruzzi, M.; Quittnat, M.; Rebane, L.; Rossini, M.; Starodumov, A.; Takahashi, M.; Theofilatos, K.; Wallny, R.; Weber, H. A.; Amsler, C.; Canelli, M. F.; Chiochia, V.; de Cosa, A.; Hinzmann, A.; Hreus, T.; Kilminster, B.; Lange, C.; Millan Mejias, B.; Ngadiuba, J.; Pinna, D.; Robmann, P.; Ronga, F. J.; Taroni, S.; Verzetti, M.; Yang, Y.; Cardaci, M.; Chen, K. H.; Ferro, C.; Kuo, C. M.; Lin, W.; Lu, Y. J.; Volpe, R.; Yu, S. S.; Chang, P.; Chang, Y. H.; Chang, Y. W.; Chao, Y.; Chen, K. F.; Chen, P. H.; Dietz, C.; Grundler, U.; Hou, W.-S.; Kao, K. Y.; Liu, Y. F.; Lu, R.-S.; Majumder, D.; Petrakou, E.; Tzeng, Y. M.; Wilken, R.; Asavapibhop, B.; Singh, G.; Srimanobhas, N.; Suwonjandee, N.; Adiguzel, A.; Bakirci, M. N.; Cerci, S.; Dozen, C.; Dumanoglu, I.; Eskut, E.; Girgis, S.; Gokbulut, G.; Gurpinar, E.; Hos, I.; Kangal, E. E.; Kayis Topaksu, A.; Onengut, G.; Ozdemir, K.; Ozturk, S.; Polatoz, A.; Sunar Cerci, D.; Tali, B.; Topakli, H.; Vergili, M.; Akin, I. V.; Bilin, B.; Bilmis, S.; Gamsizkan, H.; Isildak, B.; Karapinar, G.; Ocalan, K.; Sekmen, S.; Surat, U. E.; Yalvac, M.; Zeyrek, M.; Albayrak, E. A.; Gülmez, E.; Kaya, M.; Kaya, O.; Yetkin, T.; Cankocak, K.; Vardarlı, F. I.; Levchuk, L.; Sorokin, P.; Brooke, J. J.; Clement, E.; Cussans, D.; Flacher, H.; Goldstein, J.; Grimes, M.; Heath, G. P.; Heath, H. F.; Jacob, J.; Kreczko, L.; Lucas, C.; Meng, Z.; Newbold, D. M.; Paramesvaran, S.; Poll, A.; Sakuma, T.; Senkin, S.; Smith, V. J.; Bell, K. W.; Belyaev, A.; Brew, C.; Brown, R. M.; Cockerill, D. J. A.; Coughlan, J. A.; Harder, K.; Harper, S.; Olaiya, E.; Petyt, D.; Shepherd-Themistocleous, C. H.; Thea, A.; Tomalin, I. R.; Williams, T.; Womersley, W. J.; Worm, S. 
D.; Baber, M.; Bainbridge, R.; Buchmuller, O.; Burton, D.; Colling, D.; Cripps, N.; Dauncey, P.; Davies, G.; Della Negra, M.; Dunne, P.; Ferguson, W.; Fulcher, J.; Futyan, D.; Hall, G.; Iles, G.; Jarvis, M.; Karapostoli, G.; Kenzie, M.; Lane, R.; Lucas, R.; Lyons, L.; Magnan, A.-M.; Malik, S.; Mathias, B.; Nash, J.; Nikitenko, A.; Pela, J.; Pesaresi, M.; Petridis, K.; Raymond, D. M.; Rogerson, S.; Rose, A.; Seez, C.; Sharp, P.; Tapper, A.; Vazquez Acosta, M.; Virdee, T.; Zenz, S. C.; Cole, J. E.; Hobson, P. R.; Khan, A.; Kyberd, P.; Leggat, D.; Leslie, D.; Reid, I. D.; Symonds, P.; Teodorescu, L.; Turner, M.; Dittmann, J.; Hatakeyama, K.; Kasmi, A.; Liu, H.; Scarborough, T.; Charaf, O.; Cooper, S. I.; Henderson, C.; Rumerio, P.; Avetisyan, A.; Bose, T.; Fantasia, C.; Lawson, P.; Richardson, C.; Rohlf, J.; St. John, J.; Sulak, L.; Alimena, J.; Berry, E.; Bhattacharya, S.; Christopher, G.; Cutts, D.; Demiragli, Z.; Dhingra, N.; Ferapontov, A.; Garabedian, A.; Heintz, U.; Kukartsev, G.; Laird, E.; Landsberg, G.; Luk, M.; Narain, M.; Segala, M.; Sinthuprasith, T.; Speer, T.; Swanson, J.; Breedon, R.; Breto, G.; Calderon de La Barca Sanchez, M.; Chauhan, S.; Chertok, M.; Conway, J.; Conway, R.; Cox, P. T.; Erbacher, R.; Gardner, M.; Ko, W.; Lander, R.; Mulhearn, M.; Pellett, D.; Pilot, J.; Ricci-Tam, F.; Shalhout, S.; Smith, J.; Squires, M.; Stolp, D.; Tripathi, M.; Wilbur, S.; Yohay, R.; Cousins, R.; Everaerts, P.; Farrell, C.; Hauser, J.; Ignatenko, M.; Rakness, G.; Takasugi, E.; Valuev, V.; Weber, M.; Burt, K.; Clare, R.; Ellison, J.; Gary, J. W.; Hanson, G.; Heilman, J.; Ivova Rikova, M.; Jandir, P.; Kennedy, E.; Lacroix, F.; Long, O. R.; Luthra, A.; Malberti, M.; Olmedo Negrete, M.; Shrinivas, A.; Sumowidagdo, S.; Wimpenny, S.; Branson, J. G.; Cerati, G. B.; Cittolin, S.; D'Agnolo, R. T.; Holzner, A.; Kelley, R.; Klein, D.; Kovalskyi, D.; Letts, J.; MacNeill, I.; Olivito, D.; Padhi, S.; Palmer, C.; Pieri, M.; Sani, M.; Sharma, V.; Simon, S.; Tu, Y.; Vartak, A.; Welke, C.; Würthwein, F.; Yagil, A.; Barge, D.; Bradmiller-Feld, J.; Campagnari, C.; Danielson, T.; Dishaw, A.; Dutta, V.; Flowers, K.; Franco Sevilla, M.; Geffert, P.; George, C.; Golf, F.; Gouskos, L.; Incandela, J.; Justus, C.; McColl, N.; Richman, J.; Stuart, D.; To, W.; West, C.; Yoo, J.; Apresyan, A.; Bornheim, A.; Bunn, J.; Chen, Y.; Duarte, J.; Mott, A.; Newman, H. B.; Pena, C.; Pierini, M.; Spiropulu, M.; Vlimant, J. R.; Wilkinson, R.; Xie, S.; Zhu, R. Y.; Azzolini, V.; Calamba, A.; Carlson, B.; Ferguson, T.; Iiyama, Y.; Paulini, M.; Russ, J.; Vogel, H.; Vorobiev, I.; Cumalat, J. P.; Ford, W. T.; Gaz, A.; Krohn, M.; Luiggi Lopez, E.; Nauenberg, U.; Smith, J. G.; Stenson, K.; Wagner, S. R.; Alexander, J.; Chatterjee, A.; Chaves, J.; Chu, J.; Dittmer, S.; Eggert, N.; Mirman, N.; Nicolas Kaufman, G.; Patterson, J. R.; Ryd, A.; Salvati, E.; Skinnari, L.; Sun, W.; Teo, W. D.; Thom, J.; Thompson, J.; Tucker, J.; Weng, Y.; Winstrom, L.; Wittich, P.; Winn, D.; Abdullin, S.; Albrow, M.; Anderson, J.; Apollinari, G.; Bauerdick, L. A. T.; Beretvas, A.; Berryhill, J.; Bhat, P. C.; Bolla, G.; Burkett, K.; Butler, J. N.; Cheung, H. W. K.; Chlebana, F.; Cihangir, S.; Elvira, V. D.; Fisk, I.; Freeman, J.; Gao, Y.; Gottschalk, E.; Gray, L.; Green, D.; Grünendahl, S.; Gutsche, O.; Hanlon, J.; Hare, D.; Harris, R. M.; Hirschauer, J.; Hooberman, B.; Jindariani, S.; Johnson, M.; Joshi, U.; Kaadze, K.; Klima, B.; Kreis, B.; Kwan, S.; Linacre, J.; Lincoln, D.; Lipton, R.; Liu, T.; Lykken, J.; Maeshima, K.; Marraffino, J. 
M.; Martinez Outschoorn, V. I.; Maruyama, S.; Mason, D.; McBride, P.; Merkel, P.; Mishra, K.; Mrenna, S.; Nahn, S.; Newman-Holmes, C.; O'Dell, V.; Prokofyev, O.; Sexton-Kennedy, E.; Sharma, S.; Soha, A.; Spalding, W. J.; Spiegel, L.; Taylor, L.; Tkaczyk, S.; Tran, N. V.; Uplegger, L.; Vaandering, E. W.; Vidal, R.; Whitbeck, A.; Whitmore, J.; Yang, F.; Acosta, D.; Avery, P.; Bortignon, P.; Bourilkov, D.; Carver, M.; Curry, D.; Das, S.; de Gruttola, M.; di Giovanni, G. P.; Field, R. D.; Fisher, M.; Furic, I. K.; Hugon, J.; Konigsberg, J.; Korytov, A.; Kypreos, T.; Low, J. F.; Matchev, K.; Mei, H.; Milenovic, P.; Mitselmakher, G.; Muniz, L.; Rinkevicius, A.; Shchutska, L.; Snowball, M.; Sperka, D.; Yelton, J.; Zakaria, M.; Hewamanage, S.; Linn, S.; Markowitz, P.; Martinez, G.; Rodriguez, J. L.; Adams, T.; Askew, A.; Bochenek, J.; Diamond, B.; Haas, J.; Hagopian, S.; Hagopian, V.; Johnson, K. F.; Prosper, H.; Veeraraghavan, V.; Weinberg, M.; Baarmand, M. M.; Hohlmann, M.; Kalakhety, H.; Yumiceva, F.; Adams, M. R.; Apanasevich, L.; Berry, D.; Betts, R. R.; Bucinskaite, I.; Cavanaugh, R.; Evdokimov, O.; Gauthier, L.; Gerber, C. E.; Hofman, D. J.; Kurt, P.; Moon, D. H.; O'Brien, C.; Sandoval Gonzalez, I. D.; Silkworth, C.; Turner, P.; Varelas, N.; Bilki, B.; Clarida, W.; Dilsiz, K.; Haytmyradov, M.; Merlo, J.-P.; Mermerkaya, H.; Mestvirishvili, A.; Moeller, A.; Nachtman, J.; Ogul, H.; Onel, Y.; Ozok, F.; Penzo, A.; Rahmat, R.; Sen, S.; Tan, P.; Tiras, E.; Wetzel, J.; Yi, K.; Barnett, B. A.; Blumenfeld, B.; Bolognesi, S.; Fehling, D.; Gritsan, A. V.; Maksimovic, P.; Martin, C.; Swartz, M.; Baringer, P.; Bean, A.; Benelli, G.; Bruner, C.; Kenny, R. P., III; Malek, M.; Murray, M.; Noonan, D.; Sanders, S.; Sekaric, J.; Stringer, R.; Wang, Q.; Wood, J. S.; Chakaberia, I.; Ivanov, A.; Khalil, S.; Makouski, M.; Maravin, Y.; Saini, L. K.; Skhirtladze, N.; Svintradze, I.; Gronberg, J.; Lange, D.; Rebassoo, F.; Wright, D.; Baden, A.; Belloni, A.; Calvert, B.; Eno, S. C.; Gomez, J. A.; Hadley, N. J.; Kellogg, R. G.; Kolberg, T.; Lu, Y.; Mignerey, A. C.; Pedro, K.; Skuja, A.; Tonjes, M. B.; Tonwar, S. C.; Apyan, A.; Barbieri, R.; Bauer, G.; Busza, W.; Cali, I. A.; Chan, M.; Di Matteo, L.; Gomez Ceballos, G.; Goncharov, M.; Gulhan, D.; Klute, M.; Lai, Y. S.; Lee, Y.-J.; Levin, A.; Luckey, P. D.; Ma, T.; Paus, C.; Ralph, D.; Roland, C.; Roland, G.; Stephans, G. S. F.; Sumorok, K.; Velicanu, D.; Veverka, J.; Wyslouch, B.; Yang, M.; Zanetti, M.; Zhukova, V.; Dahmes, B.; Gude, A.; Kao, S. C.; Klapoetke, K.; Kubota, Y.; Mans, J.; Pastika, N.; Rusack, R.; Singovsky, A.; Tambe, N.; Turkewitz, J.; Acosta, J. G.; Oliveros, S.; Avdeeva, E.; Bloom, K.; Bose, S.; Claes, D. R.; Dominguez, A.; Gonzalez Suarez, R.; Keller, J.; Knowlton, D.; Kravchenko, I.; Lazo-Flores, J.; Meier, F.; Ratnikov, F.; Snow, G. R.; Zvada, M.; Dolen, J.; Godshalk, A.; Iashvili, I.; Kharchilava, A.; Kumar, A.; Rappoccio, S.; Alverson, G.; Barberis, E.; Baumgartel, D.; Chasco, M.; Massironi, A.; Morse, D. M.; Nash, D.; Orimoto, T.; Trocino, D.; Wang, R.-J.; Wood, D.; Zhang, J.; Hahn, K. A.; Kubik, A.; Mucia, N.; Odell, N.; Pollack, B.; Pozdnyakov, A.; Schmitt, M.; Stoynev, S.; Sung, K.; Velasco, M.; Won, S.; Brinkerhoff, A.; Chan, K. M.; Drozdetskiy, A.; Hildreth, M.; Jessop, C.; Karmgard, D. J.; Kellams, N.; Lannon, K.; Lynch, S.; Marinelli, N.; Musienko, Y.; Pearson, T.; Planer, M.; Ruchti, R.; Smith, G.; Valls, N.; Wayne, M.; Wolf, M.; Woodard, A.; Antonelli, L.; Brinson, J.; Bylsma, B.; Durkin, L. 
S.; Flowers, S.; Hart, A.; Hill, C.; Hughes, R.; Kotov, K.; Ling, T. Y.; Luo, W.; Puigh, D.; Rodenburg, M.; Winer, B. L.; Wolfe, H.; Wulsin, H. W.; Driga, O.; Elmer, P.; Hardenbrook, J.; Hebda, P.; Hunt, A.; Koay, S. A.; Lujan, P.; Marlow, D.; Medvedeva, T.; Mooney, M.; Olsen, J.; Piroué, P.; Quan, X.; Saka, H.; Stickland, D.; Tully, C.; Werner, J. S.; Zuranski, A.; Brownson, E.; Malik, S.; Mendez, H.; Ramirez Vargas, J. E.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; de Mattia, M.; Gutay, L.; Hu, Z.; Jha, M. K.; Jones, M.; Jung, K.; Kress, M.; Leonardo, N.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Zablocki, J.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Michlin, B.; Padley, B. P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Khukhunaishvili, A.; Korjenevski, S.; Petrillo, G.; Vishnevskiy, D.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Kaplan, S.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Salur, S.; Schnetzer, S.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Castaneda Hernandez, A.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Krutelyov, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Rose, A.; Safonov, A.; Suarez, I.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kovitanggoon, K.; Kunori, S.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Wood, J.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Friis, E.; Hall-Wilton, R.; Herndon, M.; Hervé, A.; Klabbers, P.; Lanaro, A.; Lazaridis, C.; Levine, A.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ross, I.; Sarangi, T.; Savin, A.; Smith, W. H.; Taylor, D.; Vuosalo, C.; Bediaga, I.; de Miranda, J. M.; Ferreira Rodrigues, F.; Gomes, A.; Massafferri, A.; Dos Reis, A. C.; Rodrigues, A. B.; Amato, S.; Carvalho Akiba, K.; de Paula, L.; Francisco, O.; Gandelman, M.; Hicheur, A.; Lopes, J. H.; Martins Tostes, D.; Nasteva, I.; Otalora Goicochea, J. M.; Polycarpo, E.; Potterat, C.; Rangel, M. S.; Salustino Guimaraes, V.; Souza de Paula, B.; Vieira, D.; An, L.; Gao, Y.; Jing, F.; Li, Y.; Yang, Z.; Yuan, X.; Zhang, Y.; Zhong, L.; Beaucourt, L.; Chefdeville, M.; Decamp, D.; Déléage, N.; Ghez, Ph.; Lees, J.-P.; Marchand, J. 
F.; Minard, M.-N.; Pietrzyk, B.; Qian, W.; T'jampens, S.; Tisserand, V.; Tournefier, E.; Ajaltouni, Z.; Baalouch, M.; Cogneras, E.; Deschamps, O.; El Rifai, I.; Grabalosa Gándara, M.; Henrard, P.; Hoballah, M.; Lefèvre, R.; Maratas, J.; Monteil, S.; Niess, V.; Perret, P.; Adrover, C.; Akar, S.; Aslanides, E.; Cogan, J.; Kanso, W.; Le Gac, R.; Leroy, O.; Mancinelli, G.; Mordà, A.; Perrin-Terrin, M.; Serrano, J.; Tsaregorodtsev, A.; Amhis, Y.; Barsuk, S.; Borsato, M.; Kochebina, O.; Lefrançois, J.; Machefert, F.; Martín Sánchez, A.; Nicol, M.; Robbe, P.; Schune, M.-H.; Teklishyn, M.; Vallier, A.; Viaud, B.; Wormser, G.; Ben-Haim, E.; Charles, M.; Coquereau, S.; David, P.; Del Buono, L.; Henry, L.; Polci, F.; Albrecht, J.; Brambach, T.; Cauet, Ch.; Deckenhoff, M.; Eitschberger, U.; Ekelhof, R.; Gavardi, L.; Kruse, F.; Meier, F.; Niet, R.; Parkinson, C. J.; Schlupp, M.; Shires, A.; Spaan, B.; Swientek, S.; Wishahi, J.; Aquines Gutierrez, O.; Blouw, J.; Britsch, M.; Fontana, M.; Popov, D.; Schmelling, M.; Volyanskyy, D.; Zavertyaev, M.; Bachmann, S.; Bien, A.; Comerma-Montells, A.; de Cian, M.; Dordei, F.; Esen, S.; Färber, C.; Gersabeck, E.; Grillo, L.; Han, X.; Hansmann-Menzemer, S.; Jaeger, A.; Kolpin, M.; Kreplin, K.; Krocker, G.; Leverington, B.; Marks, J.; Meissner, M.; Neuner, M.; Nikodem, T.; Seyfert, P.; Stahl, M.; Stahl, S.; Uwer, U.; Vesterinen, M.; Wandernoth, S.; Wiedner, D.; Zhelezov, A.; McNulty, R.; Wallace, R.; Zhang, W. C.; Palano, A.; Carbone, A.; Falabella, A.; Galli, D.; Marconi, U.; Moggi, N.; Mussini, M.; Perazzini, S.; Vagnoni, V.; Valenti, G.; Zangoli, M.; Bonivento, W.; Cadeddu, S.; Cardini, A.; Cogoni, V.; Contu, A.; Lai, A.; Liu, B.; Manca, G.; Oldeman, R.; Saitta, B.; Vacca, C.; Andreotti, M.; Baldini, W.; Bozzi, C.; Calabrese, R.; Corvo, M.; Fiore, M.; Fiorini, M.; Luppi, E.; Pappalardo, L. L.; Shapoval, I.; Tellarini, G.; Tomassetti, L.; Vecchi, S.; Anderlini, L.; Bizzeti, A.; Frosini, M.; Graziani, G.; Passaleva, G.; Veltri, M.; Bencivenni, G.; Campana, P.; de Simone, P.; Lanfranchi, G.; Palutan, M.; Rama, M.; Sarti, A.; Sciascia, B.; Vazquez Gomez, R.; Cardinale, R.; Fontanelli, F.; Gambetta, S.; Patrignani, C.; Petrolini, A.; Pistone, A.; Calvi, M.; Cassina, L.; Gotti, C.; Khanji, B.; Kucharczyk, M.; Matteuzzi, C.; Fu, J.; Geraci, A.; Neri, N.; Palombo, F.; Amerio, S.; Collazuol, G.; Gallorini, S.; Gianelle, A.; Lucchesi, D.; Lupato, A.; Morandin, M.; Rotondo, M.; Sestini, L.; Simi, G.; Stroili, R.; Bedeschi, F.; Cenci, R.; Leo, S.; Marino, P.; Morello, M. J.; Punzi, G.; Stracka, S.; Walsh, J.; Carboni, G.; Furfaro, E.; Santovetti, E.; Satta, A.; Alves, A. A., Jr.; Auriemma, G.; Bocci, V.; Martellotti, G.; Penso, G.; Pinci, D.; Santacesaria, R.; Satriano, C.; Sciubba, A.; Dziurda, A.; Kucewicz, W.; Lesiak, T.; Rachwal, B.; Witek, M.; Firlej, M.; Fiutowski, T.; Idzik, M.; Morawski, P.; Moron, J.; Oblakowska-Mucha, A.; Swientek, K.; Szumlak, T.; Batozskaya, V.; Klimaszewski, K.; Kurek, K.; Szczekowski, M.; Ukleja, A.; Wislicki, W.; Cojocariu, L.; Giubega, L.; Grecu, A.; Maciuc, F.; Orlandea, M.; Popovici, B.; Stoica, S.; Straticiuc, M.; Alkhazov, G.; Bondar, N.; Dzyuba, A.; Maev, O.; Sagidova, N.; Shcheglov, Y.; Vorobyev, A.; Belogurov, S.; Belyaev, I.; Egorychev, V.; Golubkov, D.; Kvaratskheliya, T.; Machikhiliyan, I. 
V.; Polyakov, I.; Savrina, D.; Semennikov, A.; Zhokhov, A.; Berezhnoy, A.; Korolev, M.; Leflat, A.; Nikitin, N.; Filippov, S.; Gushchin, E.; Kravchuk, L.; Bondar, A.; Eidelman, S.; Krokovny, P.; Kudryavtsev, V.; Shekhtman, L.; Vorobyev, V.; Artamonov, A.; Belous, K.; Dzhelyadin, R.; Guz, Yu.; Novoselov, A.; Obraztsov, V.; Popov, A.; Romanovsky, V.; Shapkin, M.; Stenyakin, O.; Yushchenko, O.; Badalov, A.; Calvo Gomez, M.; Garrido, L.; Gascon, D.; Graciani Diaz, R.; Graugés, E.; Marin Benito, C.; Picatoste Olloqui, E.; Rives Molina, V.; Ruiz, H.; Vilasis-Cardona, X.; Adeva, B.; Alvarez Cartelle, P.; Dosil Suárez, A.; Fernandez Albor, V.; Gallas Torreira, A.; García Pardiñas, J.; Hernando Morata, J. A.; Plo Casasus, M.; Romero Vidal, A.; Saborido Silva, J. J.; Sanmartin Sedes, B.; Santamarina Rios, C.; Vazquez Regueiro, P.; Vázquez Sierra, C.; Vieites Diaz, M.; Alessio, F.; Archilli, F.; Barschel, C.; Benson, S.; Buytaert, J.; Campora Perez, D.; Castillo Garcia, L.; Cattaneo, M.; Charpentier, Ph.; Cid Vidal, X.; Clemencic, M.; Closier, J.; Coco, V.; Collins, P.; Corti, G.; Couturier, B.; D'Ambrosio, C.; Dettori, F.; di Canto, A.; Dijkstra, H.; Durante, P.; Ferro-Luzzi, M.; Forty, R.; Frank, M.; Frei, C.; Gaspar, C.; Gligorov, V. V.; Granado Cardoso, L. A.; Gys, T.; Haen, C.; He, J.; Head, T.; van Herwijnen, E.; Jacobsson, R.; Johnson, D.; Joram, C.; Jost, B.; Karacson, M.; Karbach, T. M.; Lacarrere, D.; Langhans, B.; Lindner, R.; Linn, C.; Lohn, S.; Mapelli, A.; Matev, R.; Mathe, Z.; Neubert, S.; Neufeld, N.; Otto, A.; Panman, J.; Pepe Altarelli, M.; Rauschmayr, N.; Rihl, M.; Roiser, S.; Ruf, T.; Schindler, H.; Schmidt, B.; Schopper, A.; Schwemmer, R.; Sridharan, S.; Stagni, F.; Subbiah, V. K.; Teubert, F.; Thomas, E.; Tonelli, D.; Trisovic, A.; Ubeda Garcia, M.; Wicht, J.; Wyllie, K.; Battista, V.; Bay, A.; Blanc, F.; Dorigo, M.; Dupertuis, F.; Fitzpatrick, C.; Gianì, S.; Haefeli, G.; Jaton, P.; Khurewathanakul, C.; Komarov, I.; La Thi, V. N.; Lopez-March, N.; Märki, R.; Martinelli, M.; Muster, B.; Nakada, T.; Nguyen, A. D.; Nguyen, T. D.; Nguyen-Mau, C.; Prisciandaro, J.; Puig Navarro, A.; Rakotomiaramanana, B.; Rouvinet, J.; Schneider, O.; Soomro, F.; Szczypka, P.; Tobin, M.; Tourneur, S.; Tran, M. T.; Veneziano, G.; Xu, Z.; Anderson, J.; Bernet, R.; Bowen, E.; Bursche, A.; Chiapolini, N.; Chrzaszcz, M.; Elsasser, Ch.; Graverini, E.; Lionetto, F.; Lowdon, P.; Müller, K.; Serra, N.; Steinkamp, O.; Storaci, B.; Straumann, U.; Tresch, M.; Vollhardt, A.; Aaij, R.; Ali, S.; van Beuzekom, M.; David, P. N. Y.; de Bruyn, K.; Farinelli, C.; Heijne, V.; Hulsbergen, W.; Jans, E.; Koppenburg, P.; Kozlinskiy, A.; van Leerdam, J.; Merk, M.; Oggero, S.; Pellegrino, A.; Snoek, H.; van Tilburg, J.; Tsopelas, P.; Tuning, N.; de Vries, J. A.; Ketel, T.; Koopman, R. F.; Lambert, R. W.; Martinez Santos, D.; Raven, G.; Schiller, M.; Syropoulos, V.; Tolk, S.; Dovbnya, A.; Kandybei, S.; Raniuk, I.; Okhrimenko, O.; Pugatch, V.; Bifani, S.; Farley, N.; Griffith, P.; Kenyon, I. R.; Lazzeroni, C.; Mazurov, A.; McCarthy, J.; Pescatore, L.; Watson, N. K.; Williams, M. P.; Adinolfi, M.; Benton, J.; Brook, N. H.; Cook, A.; Coombes, M.; Dalseno, J.; Hampson, T.; Harnew, S. T.; Naik, P.; Price, E.; Prouve, C.; Rademacker, J. H.; Richards, S.; Saunders, D. M.; Skidmore, N.; Souza, D.; Velthuis, J. J.; Voong, D.; Barter, W.; Bettler, M.-O.; Cliff, H. V.; Evans, H.-M.; Garra Tico, J.; Gibson, V.; Gregson, S.; Haines, S. C.; Jones, C. R.; Sirendi, M.; Smith, J.; Ward, D. R.; Wotton, S. A.; Wright, S.; Back, J. 
J.; Blake, T.; Craik, D. C.; Crocombe, A. C.; Dossett, D.; Gershon, T.; Kreps, M.; Langenbruch, C.; Latham, T.; O'Hanlon, D. P.; Pilař, T.; Poluektov, A.; Reid, M. M.; Silva Coutinho, R.; Wallace, C.; Whitehead, M.; Easo, S.; Nandakumar, R.; Papanestis, A.; Ricciardi, S.; Wilson, F. F.; Carson, L.; Clarke, P. E. L.; Cowan, G. A.; Eisenhardt, S.; Ferguson, D.; Lambert, D.; Luo, H.; Morris, A.-B.; Muheim, F.; Needham, M.; Playfer, S.; Alexander, M.; Beddow, J.; Dean, C.-T.; Eklund, L.; Hynds, D.; Karodia, S.; Longstaff, I.; Ogilvy, S.; Pappagallo, M.; Sail, P.; Skillicorn, I.; Soler, F. J. P.; Spradlin, P.; Affolder, A.; Bowcock, T. J. V.; Brown, H.; Casse, G.; Donleavy, S.; Dreimanis, K.; Farry, S.; Fay, R.; Hennessy, K.; Hutchcroft, D.; Liles, M.; McSkelly, B.; Patel, G. D.; Price, J. D.; Pritchard, A.; Rinnert, K.; Shears, T.; Smith, N. A.; Ciezarek, G.; Cunliffe, S.; Currie, R.; Egede, U.; Fol, P.; Golutvin, A.; Hall, S.; McCann, M.; Owen, P.; Patel, M.; Petridis, K.; Redi, F.; Sepp, I.; Smith, E.; Sutcliffe, W.; Websdale, D.; Appleby, R. B.; Barlow, R. J.; Bird, T.; Bjørnstad, P. M.; Borghi, S.; Brett, D.; Brodzicka, J.; Capriotti, L.; Chen, S.; de Capua, S.; Dujany, G.; Gersabeck, M.; Harrison, J.; Hombach, C.; Klaver, S.; Lafferty, G.; McNab, A.; Parkes, C.; Pearce, A.; Reichert, S.; Rodrigues, E.; Rodriguez Perez, P.; Smith, M.; Cheung, S.-F.; Derkach, D.; Evans, T.; Gauld, R.; Greening, E.; Harnew, N.; Hill, D.; Hunt, P.; Hussain, N.; Jalocha, J.; John, M.; Lupton, O.; Malde, S.; Smith, E.; Stevenson, S.; Thomas, C.; Topp-Joergensen, S.; Torr, N.; Wilkinson, G.; Counts, I.; Ilten, P.; Williams, M.; Andreassen, R.; Davis, A.; de Silva, W.; Meadows, B.; Sokoloff, M. D.; Sun, L.; Todd, J.; Andrews, J. E.; Hamilton, B.; Jawahery, A.; Wimberley, J.; Artuso, M.; Blusk, S.; Borgia, A.; Britton, T.; Ely, S.; Gandini, P.; Garofoli, J.; Gui, B.; Hadjivasiliou, C.; Jurik, N.; Kelsey, M.; Mountain, R.; Pal, B. K.; Skwarnicki, T.; Stone, S.; Wang, J.; Xing, Z.; Zhang, L.; Baesso, C.; Cruz Torres, M.; Göbel, C.; Molina Rodriguez, J.; Xie, Y.; Milanes, D. A.; Grünberg, O.; Heß, M.; Voß, C.; Waldi, R.; Likhomanenko, T.; Malinin, A.; Shevchenko, V.; Ustyuzhanin, A.; Martinez Vidal, F.; Oyanguren, A.; Ruiz Valls, P.; Sanchez Mayordomo, C.; Onderwater, C. J. G.; Wilschut, H. W.; Pesen, E.

    2015-06-01

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (Bs0) and the B0 meson decaying into two oppositely charged muons (µ+ and µ-) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the Bs0 → µ+µ- and B0 → µ+µ- decays are very rare, with about four of the former occurring for every billion Bs0 mesons produced, and one of the latter occurring for every ten billion B0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton-proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the Bs0 → µ+µ- decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B0 → µ+µ- decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton-proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of Bs0 and B0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.
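
    For readers translating "standard deviations" into tail probabilities, a small helper shows the usual convention: a significance of Z sigma corresponds to a one-sided Gaussian tail probability p = ½ erfc(Z/√2). This is a generic conversion, not part of the analysis itself.

        import math

        def one_sided_p(z):
            # Tail probability of a standard normal at z sigma: p = erfc(z / sqrt(2)) / 2.
            return 0.5 * math.erfc(z / math.sqrt(2.0))

        for z in (3.0, 6.0):
            print(f"{z:.0f} sigma -> one-sided p = {one_sided_p(z):.2e}")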

  9. Computerized tomography magnified bone windows are superior to standard soft tissue windows for accurate measurement of stone size: an in vitro and clinical study.

    PubMed

    Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V

    2009-04-01

    We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones, that had been previously collected at our clinic were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded in human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings, including standard soft tissue windows (window width 320 and window length 50), standard bone windows (window width 1120 and window length 300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and the paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows, with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification, were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm, p = 0.003, <0.001 and 0.0002, respectively). When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi measurement error was observed only in standard soft tissue window settings. In vivo, 4.0x magnified bone windows were superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically distinct (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurement in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro, the measurement error for calcium oxalate stones was greater than that for uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.
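
    The comparison logic here, each CT window setting against the caliper gold standard, reduces to a Pearson correlation plus a paired t test on per-stone differences. A sketch with made-up stone sizes, not the study's data:

        import numpy as np
        from scipy.stats import pearsonr, ttest_rel

        # Hypothetical stone sizes in mm -- illustrative values, not study data.
        caliper = np.array([4.1, 5.3, 6.0, 7.2, 8.5])
        ct_bone = np.array([4.0, 5.4, 6.1, 7.0, 8.6])

        r, _ = pearsonr(caliper, ct_bone)
        t, p = ttest_rel(caliper, ct_bone)
        print(f"Pearson r = {r:.3f}, paired t = {t:.2f}, p = {p:.3f}")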

  10. Pilot study: EatFit impacts sixth graders' academic performance on achievement of mathematics and English education standards.

    PubMed

    Shilts, Mical Kay; Lamp, Cathi; Horowitz, Marcel; Townsend, Marilyn S

    2009-01-01

    Investigate the impact of a nutrition education program on student academic performance as measured by achievement of education standards. Quasi-experimental crossover-controlled study. California Central Valley suburban elementary school (58% qualified for free or reduced-price lunch). All sixth-grade students (n = 84) in the elementary school, clustered in 3 classrooms. A 9-lesson intervention with an emphasis on guided goal setting, driven by Social Cognitive Theory. Multiple-choice survey assessing 5 education standards for sixth-grade mathematics and English at 3 time points: baseline (T1), 5 weeks (T2), and 10 weeks (T3). Repeated measures, paired t test, and analysis of covariance. Changes in total scores were statistically different (P < .05), with treatment scores (T3 - T2) showing greater gains. The change scores for 1 English standard (P < .01) and 2 mathematics standards (P < .05; P < .001) were statistically greater for the treatment period (T3 - T2) than for the control period (T2 - T1). Results of this pilot study using standardized tests suggest that EatFit can improve academic performance as measured by achievement of specific mathematics and English education standards. Nutrition educators can show school administrators and wellness committee members that this program can positively impact academic performance, concomitant to its primary objective of promoting healthful eating and physical activity.

  11. Independent review: statistical analyses of relationship between vehicle curb weight, track width, wheelbase and fatality rates.

    DOT National Transportation Integrated Search

    2011-03-01

    "NHTSA selected the vehicle footprint (the measure of a vehicles wheelbase multiplied by its average track width) as the attribute upon which to base the CAFE standards for model year 2012-2016 passenger cars and light trucks. These standards are ...

  12. Test Statistics and Confidence Intervals to Establish Noninferiority between Treatments with Ordinal Categorical Data.

    PubMed

    Zhang, Fanghong; Miyaoka, Etsuo; Huang, Fuping; Tanaka, Yutaka

    2015-01-01

    The problem of establishing the noninferiority of a new treatment relative to a standard (control) treatment is discussed for ordinal categorical data. A measure of treatment effect is used and a method of specifying the noninferiority margin for the measure is provided. Two Z-type test statistics are proposed in which the estimate of variance is constructed under the shifted null hypothesis using U-statistics. Furthermore, the confidence interval and the sample size formula are given based on the proposed test statistics. The proposed procedure is applied to a dataset from a clinical trial. A simulation study is conducted to compare the performance of the proposed test statistics with that of existing ones, and the results show that the proposed test statistics are better in terms of deviation from the nominal level and power.
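
    A heavily simplified sketch of the testing idea: with an estimated treatment-effect measure, its standard error, and a prespecified margin Δ, one rejects H0: θ ≤ −Δ when Z = (θ̂ + Δ)/SE is large. The paper's actual variance estimate is built from U-statistics under the shifted null, which this generic version does not reproduce; all numbers below are illustrative.

        import math

        def noninferiority_z(theta_hat, se, margin):
            # One-sided test of H0: theta <= -margin against H1: theta > -margin.
            z = (theta_hat + margin) / se
            p = 0.5 * math.erfc(z / math.sqrt(2.0))
            return z, p

        # Illustrative numbers only.
        z, p = noninferiority_z(theta_hat=0.02, se=0.05, margin=0.10)
        print(f"Z = {z:.2f}, one-sided p = {p:.4f}")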

  13. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.

    PubMed

    Austin, Peter C; Steyerberg, Ewout W

    2012-06-20

    When outcomes are binary, the c-statistic (equivalent to the area under the receiver operating characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in the combined sample of those with and without the condition. Under the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population.
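
    The equal-variance binormal result can be checked by simulation: with a common SD σ and log-odds ratio β = δ/σ² per unit of the explanatory variable, the predicted c-statistic is Φ(σβ/√2). A sketch with arbitrary σ and mean shift δ:

        import numpy as np
        from math import erf

        rng = np.random.default_rng(1)
        sigma, delta = 1.5, 1.0                 # common SD and mean shift (arbitrary)
        x0 = rng.normal(0.0, sigma, 5000)       # those without the condition
        x1 = rng.normal(delta, sigma, 5000)     # those with the condition

        # Empirical c-statistic: estimate of P(X1 > X0) over all pairs.
        emp_c = np.mean(x1[:, None] > x0[None, :])

        beta = delta / sigma ** 2               # log-odds ratio per unit of x
        pred_c = 0.5 * (1 + erf(sigma * beta / 2))  # Phi(sigma * beta / sqrt(2))
        print(f"empirical c = {emp_c:.4f}, predicted c = {pred_c:.4f}")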

  14. Academic Outcome Measures of a Dedicated Education Unit Over Time: Help or Hinder?

    PubMed

    Smyer, Tish; Gatlin, Tricia; Tan, Rhigel; Tejada, Marianne; Feng, Du

    2015-01-01

    Critical thinking, nursing process, quality and safety measures, and standardized RN exit examination scores were compared between students (n = 144) placed in a dedicated education unit (DEU) and those in a traditional clinical model. Standardized test scores showed that differences between the clinical groups were not statistically significant. This study shows that the DEU model is 1 approach to clinical education that can enhance students' academic outcomes.

  15. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...

  16. Standardisation of a European measurement method for organic carbon and elemental carbon in ambient air: results of the field trial campaign and the determination of a measurement uncertainty and working range.

    PubMed

    Brown, Richard J C; Beccaceci, Sonya; Butterfield, David M; Quincey, Paul G; Harris, Peter M; Maggos, Thomas; Panteliadis, Pavlos; John, Astrid; Jedynska, Aleksandra; Kuhlbusch, Thomas A J; Putaud, Jean-Philippe; Karanasiou, Angeliki

    2017-10-18

    The European Committee for Standardisation (CEN) Technical Committee 264 'Air Quality' has recently produced a standard method for the measurement of organic carbon and elemental carbon in PM2.5 within its working group 35, in response to the requirements of European Directive 2008/50/EC. It is expected that this method will be used in future by all Member States making measurements of the carbonaceous content of PM2.5. This paper details the results of a laboratory and field measurement campaign and the statistical analysis performed to validate the standard method, assess its uncertainty and define its working range, to provide clarity and confidence in the underpinning science for future users of the method. The statistical analysis showed that the expanded combined uncertainty for transmittance protocol measurements of OC, EC and TC is expected to be below 25%, at the 95% level of confidence, above filter loadings of 2 μg cm⁻². The estimated detection limit of the method for total carbon was 2 μg cm⁻². As a result of the laboratory and field measurement campaign, the EUSAAR2 transmittance measurement protocol was chosen as the basis of the standard method EN 16909:2017.
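
    The "expanded combined uncertainty ... at the 95% level of confidence" follows the usual root-sum-of-squares combination with a coverage factor k = 2. A sketch with hypothetical relative uncertainty components; the names and values are placeholders, not the campaign's actual budget:

        import math

        # Hypothetical relative standard-uncertainty components (placeholders).
        components = {"calibration": 0.05, "repeatability": 0.08, "filter blank": 0.06}

        u_c = math.sqrt(sum(u ** 2 for u in components.values()))
        U = 2 * u_c  # coverage factor k = 2 for ~95% confidence
        print(f"combined u_c = {u_c:.3f}, expanded U = {U:.1%}")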

  17. Test-retest reliability of 3D ultrasound measurements of the thoracic spine.

    PubMed

    Fölsch, Christian; Schlögel, Stefanie; Lakemeier, Stefan; Wolf, Udo; Timmesfeld, Nina; Skwara, Adrian

    2012-05-01

    To explore the reliability of the Zebris CMS 20 ultrasound analysis system with pointer application for measuring end-range flexion, end-range extension, and the neutral kyphosis angle of the thoracic spine. The study was performed within the School of Physiotherapy in cooperation with the Orthopedic Department at a university hospital. The thoracic spines of 28 healthy subjects were measured. Measurements of the neutral kyphosis angle, end-range flexion, and end-range extension were taken once at each time point. The bone landmarks were palpated by one examiner and marked with a pointer containing 2 transmitters using a frequency of 40 kHz. A third transmitter was fixed to the pelvis, and 3 microphones were used as receivers. The real angle was calculated by the software. Bland-Altman plots with 95% limits of agreement, intraclass correlation coefficients (ICCs), standard deviations of mean measurements, and standard errors of measurement were used for statistical analyses. Test-retest reliability was measured within a 24-hour interval, and these statistical parameters were used to judge it. The mean kyphosis angle was 44.8° with a standard deviation of 17.3° at the first measurement and 45.8° with a standard deviation of 16.2° the following day. The ICC was high at 0.95 for the neutral kyphosis angle, and the Bland-Altman 95% limits of agreement were within clinically acceptable margins. The ICC was 0.71 for end-range flexion and 0.34 for end-range extension, and the Bland-Altman 95% limits of agreement were wider than for the static measurement of kyphosis. Compared with static measurements, the analysis of motion with 3-dimensional ultrasound showed an increased standard deviation for test-retest measurements. Test-retest reliability of ultrasound measurement of the neutral kyphosis angle of the thoracic spine was demonstrated within 24 hours. The Bland-Altman 95% limits of agreement and the standard deviation of differences did not appear to be clinically acceptable for measuring flexion and extension.
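
    The two headline statistics can be sketched directly: ICC(2,1) (two-way random effects, absolute agreement, single measures) and Bland-Altman 95% limits of agreement for day-1 versus day-2 angles. The angles below are invented for illustration; the study's ICC variant is an assumption on my part.

        import numpy as np

        def icc_2_1(data):
            # ICC(2,1): two-way random effects, absolute agreement, single measures.
            n, k = data.shape
            grand = data.mean()
            ms_r = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_c = n * ((data.mean(axis=0) - grand) ** 2).sum() / (k - 1)
            resid = (data - data.mean(axis=1, keepdims=True)
                          - data.mean(axis=0, keepdims=True) + grand)
            ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
            return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

        # Hypothetical day-1 / day-2 kyphosis angles in degrees.
        angles = np.array([[44.0, 45.5], [39.2, 40.0], [51.3, 50.1], [47.8, 48.6]])
        diffs = angles[:, 1] - angles[:, 0]
        loa = diffs.mean() + np.array([-1.96, 1.96]) * diffs.std(ddof=1)
        print(f"ICC(2,1) = {icc_2_1(angles):.2f}, 95% LoA = {np.round(loa, 2)}")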

  18. Statistical properties of four effect-size measures for mediation models.

    PubMed

    Miočević, Milica; O'Rourke, Holly P; MacKinnon, David P; Brown, Hendricks C

    2018-02-01

    This project examined the performance of classical and Bayesian estimators of four effect-size measures for the indirect effect in a single-mediator model and a two-mediator model. Compared to the proportion and ratio mediation effect sizes, standardized mediation effect-size measures were relatively unbiased and efficient in both the single-mediator model and the two-mediator model. Percentile and bias-corrected bootstrap interval estimates of ab/s_Y and ab(s_X)/s_Y in the single-mediator model outperformed interval estimates of the proportion and ratio effect sizes in terms of power, Type I error rate, coverage, imbalance, and interval width. For the two-mediator model, standardized effect-size measures were superior to the proportion and ratio effect-size measures. Furthermore, it was found that Bayesian point and interval summaries of posterior distributions of standardized effect-size measures reduced excessive relative bias for certain parameter combinations. The standardized effect-size measures are the best effect-size measures for quantifying mediated effects.
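
    A sketch of one of the better-performing effect sizes, the standardized indirect effect ab(s_X)/s_Y in a single-mediator model, with a percentile bootstrap interval; the generating coefficients and sample size are arbitrary:

        import numpy as np

        rng = np.random.default_rng(2)
        n = 300
        x = rng.normal(size=n)
        m = 0.5 * x + rng.normal(size=n)             # mediator model (a = 0.5)
        y = 0.4 * m + 0.2 * x + rng.normal(size=n)   # outcome model (b = 0.4)

        def std_indirect(x, m, y):
            # Standardized indirect effect ab * s_X / s_Y in a single-mediator model.
            a = np.polyfit(x, m, 1)[0]
            X = np.column_stack([np.ones_like(x), x, m])
            b = np.linalg.lstsq(X, y, rcond=None)[0][2]  # slope of y on m given x
            return a * b * x.std(ddof=1) / y.std(ddof=1)

        boot = []
        for _ in range(2000):
            idx = rng.integers(0, n, n)
            boot.append(std_indirect(x[idx], m[idx], y[idx]))
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"ab*s_X/s_Y = {std_indirect(x, m, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")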

  19. Measuring the Cobb angle with the iPhone in kyphoses: a reliability study.

    PubMed

    Jacquot, Frederic; Charpentier, Axelle; Khelifi, Sofiane; Gastambide, Daniel; Rigal, Regis; Sautet, Alain

    2012-08-01

    Smartphones have gained widespread use in the healthcare field to fulfill a variety of tasks. We developed a small iPhone application that takes advantage of the built-in position sensor to measure angles in a variety of spinal deformities. We present a reliability study of this tool in measuring kyphotic angles. Radiographs taken from 20 different patients' charts were presented to a panel of six operators at two different times. Radiographs were measured with the protractor and with the iPhone application, and statistical analysis was applied to compute intraclass correlation coefficients (ICCs) between the two measurement methods and to measure intra- and interobserver reliability. The ICC calculated between methods (i.e., the CobbMeter application on the iPhone versus the standard method with the protractor) was 0.963 for all measures, indicating excellent correlation between the CobbMeter application and the standard method. The interobserver correlation coefficient was 0.965. The intraobserver ICC was 0.977, indicating excellent reproducibility of measurements at different times for all operators. The interobserver ICC between fellowship-trained senior surgeons and general orthopaedic residents was 0.989. Consistently, the ICCs for intraobserver and interobserver correlations were higher with the CobbMeter application than with the regular protractor method, although this difference was not statistically significant. Measuring kyphotic angles with the iPhone application appears to be a valid procedure and is in no way inferior to the standard way of measuring the Cobb angle in kyphotic deformities.

  20. A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability

    PubMed Central

    Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.

    2012-01-01

    Recent research has seen intraindividual variability (IIV) become a useful technique for incorporating trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction of several types of positive and negative outcomes (Ram, Rabbitt, Stollery, & Nesselroade, 2005). One unanswered question regarding intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
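
    The core of such a simulation is easy to reproduce: give each simulated person a true mean and a true within-person SD, generate trials for two parallel occasions, and correlate the observed ISDs. A rough sketch under assumed population values:

        import numpy as np

        rng = np.random.default_rng(0)
        n_subj, n_trials = 200, 30

        # Assumed population: true means and lognormally spread true ISDs.
        true_mean = rng.normal(0.0, 1.0, n_subj)
        true_isd = np.exp(rng.normal(0.0, 0.3, n_subj))

        def observed_isd():
            # One occasion: n_trials draws per subject; per-subject sample SD.
            data = rng.normal(true_mean[:, None], true_isd[:, None],
                              (n_subj, n_trials))
            return data.std(axis=1, ddof=1)

        # Parallel-occasions reliability of the ISD.
        r = np.corrcoef(observed_isd(), observed_isd())[0, 1]
        print(f"test-retest reliability of the ISD ~ {r:.2f}")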

  1. Statistics Refresher for Molecular Imaging Technologists, Part 2: Accuracy of Interpretation, Significance, and Variance.

    PubMed

    Farrell, Mary Beth

    2018-06-01

    This article is the second part of a continuing education series reviewing basic statistics that nuclear medicine and molecular imaging technologists should understand. In this article, the statistics for evaluating interpretation accuracy, significance, and variance are discussed. Throughout the article, actual statistics are pulled from the published literature. We begin by explaining 2 methods for quantifying interpretive accuracy: interreader and intrareader reliability. Agreement among readers can be expressed simply as a percentage. However, the Cohen κ-statistic is a more robust measure of agreement that accounts for chance. The higher the κ-statistic, the stronger the agreement between readers. When 3 or more readers are being compared, the Fleiss κ-statistic is used. Significance testing determines whether the difference between 2 conditions or interventions is meaningful. Statistical significance is usually expressed using a number called a probability (P) value. Calculation of the P value is beyond the scope of this review; however, knowing how to interpret P values is important for understanding the scientific literature. Generally, a P value of less than 0.05 is considered significant and indicates that the results of the experiment are due to more than just chance. Variance, standard deviation (SD), confidence interval, and standard error (SE) describe the dispersion of data around the mean of a sample drawn from a population. SD is commonly reported in the literature; a small SD indicates that there is not much variation in the sample data. Many biologic measurements fall into what is referred to as a normal distribution, taking the shape of a bell curve. In a normal distribution, 68% of the data fall within 1 SD, 95% within 2 SDs, and 99.7% within 3 SDs. The confidence interval defines the range of possible values within which the population parameter is likely to lie and gives an idea of the precision of the statistic being measured. A wide confidence interval indicates that if the experiment were repeated multiple times on other samples, the measured statistic would lie within a wide range of possibilities. The confidence interval relies on the SE. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
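
    Cohen's κ, described above as chance-corrected agreement between two readers, is easy to compute from the readers' ratings. A minimal sketch with made-up binary interpretations:

        import numpy as np

        # Ratings by two readers on the same 10 studies (1 = positive).
        r1 = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])
        r2 = np.array([1, 0, 0, 1, 0, 0, 1, 0, 1, 0])

        po = np.mean(r1 == r2)   # observed agreement
        # Chance agreement from each reader's marginal rating rates.
        pe = sum(np.mean(r1 == k) * np.mean(r2 == k) for k in (0, 1))
        kappa = (po - pe) / (1 - pe)
        print(f"observed = {po:.2f}, chance = {pe:.2f}, kappa = {kappa:.2f}")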

  2. How Does One Assess the Accuracy of Academic Success Predictors? ROC Analysis Applied to University Entrance Factors

    ERIC Educational Resources Information Center

    Vivo, Juana-Maria; Franco, Manuel

    2008-01-01

    This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
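
    For a single continuous predictor, the ROC analysis described in this record reduces to sweeping a threshold over the predictor; the area under the curve equals the probability that a randomly chosen success outscores a randomly chosen failure. A small sketch with hypothetical entrance scores:

        import numpy as np

        # Hypothetical entrance scores and academic-success outcomes.
        scores  = np.array([52, 61, 70, 48, 77, 66, 59, 83, 45, 74], float)
        success = np.array([ 0,  1,  1,  0,  1,  0,  0,  1,  0,  1])

        # AUC in its Mann-Whitney form, counting ties as 1/2.
        pos, neg = scores[success == 1], scores[success == 0]
        d = pos[:, None] - neg[None, :]
        auc = np.mean(d > 0) + 0.5 * np.mean(d == 0)
        print(f"AUC = {auc:.2f}")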

  3. 20 CFR 634.4 - Statistical standards.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    § 634.4 Statistical standards. Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs.

  4. 20 CFR 634.4 - Statistical standards.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    § 634.4 Statistical standards. Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs.

  5. A Comparison of Readability in Science-Based Texts: Implications for Elementary Teachers

    ERIC Educational Resources Information Center

    Gallagher, Tiffany; Fazio, Xavier; Ciampa, Katia

    2017-01-01

    Science curriculum standards were mapped onto various texts (literacy readers, trade books, online articles). Statistical analyses highlighted the inconsistencies among readability formulae for Grades 2-6 levels of the standards. There was a lack of correlation among the readability measures, and also when comparing different text sources. Online…

  6. Tests of Alignment among Assessment, Standards, and Instruction Using Generalized Linear Model Regression

    ERIC Educational Resources Information Center

    Fulmer, Gavin W.; Polikoff, Morgan S.

    2014-01-01

    An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…

  7. Re-Conceptualization of Modified Angoff Standard Setting: Unified Statistical, Measurement, Cognitive, and Social Psychological Theories

    ERIC Educational Resources Information Center

    Iyioke, Ifeoma Chika

    2013-01-01

    This dissertation describes a design for training, in accordance with probability judgment heuristics principles, for the Angoff standard setting method. The new training with instruction, practice, and feedback tailored to the probability judgment heuristics principles was called the Heuristic training and the prevailing Angoff method training…

  8. Progress in the improved lattice calculation of direct CP-violation in the Standard Model

    NASA Astrophysics Data System (ADS)

    Kelly, Christopher

    2018-03-01

    We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.

  9. An instrument to assess the statistical intensity of medical research papers.

    PubMed

    Nieminen, Pentti; Virtanen, Jorma I; Vähänikkilä, Hannu

    2017-01-01

    There is widespread evidence that statistical methods play an important role in original research articles, especially in medical research. The evaluation of statistical methods and reporting in journals suffers from a lack of standardized methods for assessing the use of statistics. The objective of this study was to develop and evaluate an instrument to assess the statistical intensity of research articles in a standardized way. A checklist-type measurement scale was developed by selecting and refining items from previous reports about the statistical content of medical journal articles and from published guidelines for statistical reporting. A total of 840 original medical research articles published between 2007 and 2015 in 16 journals were evaluated to test the scoring instrument. The total sum of all items was used to assess the intensity across sub-fields and journals. Inter-rater agreement was examined using a random sample of 40 articles, which four raters read and evaluated using the developed instrument. The scale consisted of 66 items. The total summary score adequately discriminated between research articles according to their study design characteristics. The new instrument could also discriminate between journals according to their statistical intensity. The inter-observer agreement measured by the ICC was 0.88 across all four raters. Individual item analysis showed very high agreement between the rater pairs, with percentage agreement ranging from 91.7% to 95.2%. A reliable and applicable instrument for evaluating the statistical intensity of research papers was developed. It is a helpful tool for comparing statistical intensity across sub-fields and journals, and may be applied in manuscript peer review to identify papers in need of additional statistical review.

  10. Assessment of opacimeter calibration according to International Standard Organization 10155.

    PubMed

    Gomes, J F

    2001-01-01

    This paper compares the calibration method for opacimeters issued by the International Standard Organization (ISO) 10155 with the manual reference method for determination of dust content in stack gases. ISO 10155 requires at least nine operational measurements, corresponding to three operational measurements for each dust emission range within the stack. The procedure is assessed by comparison with previous calibration methods for opacimeters using only two operational measurements from a set of measurements made at stacks from pulp mills. The results show that even though the international standard for opacimeter calibration requires the calibration curve to be obtained using 3 × 3 points, a calibration curve derived using 3 points can at times be acceptable in statistical terms, provided that the amplitude of individual measurements is low.
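
    The comparison at stake is between a calibration line fitted to all nine points (three per emission range) and one fitted to fewer; a least-squares sketch with hypothetical opacimeter data makes the mechanics explicit:

        import numpy as np

        # Hypothetical pairs: opacimeter signal vs. manual gravimetric dust
        # concentration (mg/m^3), three measurements per emission range.
        signal = np.array([0.10, 0.12, 0.11, 0.45, 0.48, 0.44, 0.80, 0.84, 0.79])
        dust   = np.array([20.0, 24.0, 22.0, 90.0, 96.0, 88.0, 160.0, 168.0, 158.0])

        # ISO-style calibration line through all 3 x 3 points ...
        m9, b9 = np.polyfit(signal, dust, 1)
        # ... versus a reduced calibration using one point per range.
        m3, b3 = np.polyfit(signal[::3], dust[::3], 1)

        print(f"9-point fit: dust = {m9:.1f}*signal + {b9:.1f}")
        print(f"3-point fit: dust = {m3:.1f}*signal + {b3:.1f}")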

  11. Observation of the rare $B^0_s \to \mu^+\mu^-$ decay from the combined analysis of CMS and LHCb data

    DOE PAGES

    Khachatryan, Vardan

    2015-05-13

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B_s^0) and the B^0 meson decaying into two oppositely charged muons (μ+ and μ−) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B_s^0 → μ+μ− and B^0 → μ+μ− decays are very rare, with about four of the former occurring for every billion mesons produced, and one of the latter occurring for every ten billion B^0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton–proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B_s^0 → μ+μ− decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B^0 → μ+μ− decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B_s^0 and B^0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

  12. Observation of the rare $B^0_s \to \mu^+\mu^-$ decay from the combined analysis of CMS and LHCb data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khachatryan, Vardan

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B_s^0) and the B^0 meson decaying into two oppositely charged muons (μ+ and μ−) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B_s^0 → μ+μ− and B^0 → μ+μ− decays are very rare, with about four of the former occurring for every billion mesons produced, and one of the latter occurring for every ten billion B^0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton–proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B_s^0 → μ+μ− decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B^0 → μ+μ− decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B_s^0 and B^0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

  13. Observation of the rare B(s)(0) →µ+µ− decay from the combined analysis of CMS and LHCb data.

    PubMed

    2015-06-04

    The standard model of particle physics describes the fundamental particles and their interactions via the strong, electromagnetic and weak forces. It provides precise predictions for measurable quantities that can be tested experimentally. The probabilities, or branching fractions, of the strange B meson (B(s)(0)) and the B0 meson decaying into two oppositely charged muons (μ+ and μ−) are especially interesting because of their sensitivity to theories that extend the standard model. The standard model predicts that the B(s)(0) →µ+µ− and B(0) →µ+µ− decays are very rare, with about four of the former occurring for every billion mesons produced, and one of the latter occurring for every ten billion B0 mesons. A difference in the observed branching fractions with respect to the predictions of the standard model would provide a direction in which the standard model should be extended. Before the Large Hadron Collider (LHC) at CERN started operating, no evidence for either decay mode had been found. Upper limits on the branching fractions were an order of magnitude above the standard model predictions. The CMS (Compact Muon Solenoid) and LHCb (Large Hadron Collider beauty) collaborations have performed a joint analysis of the data from proton–proton collisions that they collected in 2011 at a centre-of-mass energy of seven teraelectronvolts and in 2012 at eight teraelectronvolts. Here we report the first observation of the B(s)(0) → µ+µ− decay, with a statistical significance exceeding six standard deviations, and the best measurement so far of its branching fraction. Furthermore, we obtained evidence for the B(0) → µ+µ− decay with a statistical significance of three standard deviations. Both measurements are statistically compatible with standard model predictions and allow stringent constraints to be placed on theories beyond the standard model. The LHC experiments will resume taking data in 2015, recording proton–proton collisions at a centre-of-mass energy of 13 teraelectronvolts, which will approximately double the production rates of B(s)(0) and B0 mesons and lead to further improvements in the precision of these crucial tests of the standard model.

  14. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but they do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.

  15. Standard deviation of scatterometer measurements from space.

    NASA Technical Reports Server (NTRS)

    Fischer, R. E.

    1972-01-01

    The standard deviation of scatterometer measurements has been derived under assumptions applicable to spaceborne scatterometers. Numerical results are presented which show that, with sufficiently long integration times, input signal-to-noise ratios below unity do not cause excessive degradation of measurement accuracy. The effects on measurement accuracy due to varying integration times and changing the ratio of signal bandwidth to IF filter-noise bandwidth are also plotted. The results of the analysis may resolve a controversy by showing that in fact statistically useful scatterometer measurements can be made from space using a 20-W transmitter, such as will be used on the S-193 experiment for Skylab-A.

  16. Gambling as a teaching aid in the introductory physics laboratory

    NASA Astrophysics Data System (ADS)

    Horodynski-Matsushigue, L. B.; Pascholati, P. R.; Vanin, V. R.; Dias, J. F.; Yoneama, M.-L.; Siqueira, P. T. D.; Amaku, M.; Duarte, J. L. M.

    1998-07-01

    Dice throwing is used to illustrate relevant concepts of the statistical theory of uncertainties, in particular the meaning of a limiting distribution, the standard deviation, and the standard deviation of the mean. It is an important part of a sequence of specially programmed laboratory activities developed for freshmen at the Institute of Physics of the University of São Paulo. It is shown how this activity is employed within a constructive teaching approach, which aims at a growing understanding of the measuring process and of the fundamentals of correct statistical handling of experimental data.
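
    The exercise translates directly into a few lines of simulation: as the number of throws grows, the sample standard deviation settles near the limiting value sqrt(35/12) ≈ 1.71 for a fair die, while the standard deviation of the mean shrinks as 1/sqrt(N). A sketch:

        import numpy as np

        rng = np.random.default_rng(42)
        for n in (10, 100, 1000, 10000):
            throws = rng.integers(1, 7, size=n)   # fair six-sided die
            sd = throws.std(ddof=1)               # -> sqrt(35/12) ~ 1.708
            sdom = sd / np.sqrt(n)                # standard deviation of the mean
            print(f"N={n:6d}  mean={throws.mean():.3f}  "
                  f"SD={sd:.3f}  SDOM={sdom:.4f}")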

  17. Chemokine Prostate Cancer Biomarkers — EDRN Public Portal

    Cancer.gov

    STUDY DESIGN 1. The need for pre-validation studies. Preliminary data from our laboratory demonstrate a potential utility for CXCL5 and CXCL12 as biomarkers to distinguish between patients at high risk versus low risk for harboring prostate malignancies. However, this pilot and feasibility study utilized a very small sample size of 51 patients, which limited its ability to adequately assess certain technical aspects of the ELISA technique and statistical aspects of the markers' performance. We therefore propose studies designed to assess the robustness (Specific Aim 1) and predictive value (Specific Aim 2) of these markers in a larger study population. 2. ELISA Assays. Serum, plasma, or urine chemokine levels are assessed using 50 µl of frozen specimen per sandwich ELISA in duplicate, using the appropriate commercially available capture antibodies, detection antibodies, and standard ELISA reagents (R&D Systems), as we have described previously (15, 17, 18). Measures within each patient group are regarded as biological replicates and permit statistical comparisons between groups. For all ELISAs, a standard curve is generated with the provided standards and used to calculate the quantity of chemokine in the sample tested. These assays provide measures of protein concentration with excellent reproducibility, with replicate measures characterized by standard deviations from the mean on the order of <3%.
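
    The standard-curve step above is, at its simplest, an interpolation problem: fit the known standards, then read unknowns off the fitted curve. A hedged sketch with invented values, fitting optical density against log concentration (real ELISA curves are often four-parameter logistic):

        import numpy as np

        # Invented chemokine standards (pg/ml) and measured optical densities.
        conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
        od   = np.array([0.08, 0.15, 0.29, 0.55, 1.02, 1.90])

        # Fit OD vs. log(concentration) over the assay's working range.
        slope, icpt = np.polyfit(np.log(conc), od, 1)

        def od_to_conc(sample_od):
            # Invert the fitted line to estimate a sample's concentration.
            return float(np.exp((sample_od - icpt) / slope))

        print(f"sample OD 0.70 -> {od_to_conc(0.70):.0f} pg/ml")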

  18. Measurements of Time-Dependent CP-Asymmetry Parameters in B Meson Decays to η' K 0 and of Branching Fractions of SU(3) Related Modes with BaBar Experiment at SLAC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Biassoni, Pietro

    2009-01-01

    In this thesis work we have measured the following upper limits at 90% confidence level for B meson decays (in units of 10^-6), using a sample of 465.0 × 10^6 B B̄ pairs: β(B0 → ηK0) < 1.6, β(B0 → ηη) < 1.4, β(B0 → η'η') < 2.1, β(B0 → ηφ) < 0.52, β(B0 → ηω) < 1.6, β(B0 → η'φ) < 1.2, β(B0 → η'ω) < 1.7. We do not observe any of these decay modes; the statistical significance of our measurements is in the range 1.3-3.5 standard deviations. We have a 3.5σ evidence for B → ηω and a 3.1σ evidence for B → η'ω. The absence of an observation of B0 → ηK0 opens an issue related to the large difference compared to the branching fraction of the charged mode B+ → ηK+, which is measured to be 3.7 ± 0.4 ± 0.1 [118]. Our results represent substantial improvements over the previous ones [109, 110, 111] and are consistent with theoretical predictions. All these results were presented at the Flavor Physics and CP Violation (FPCP) 2008 Conference in Taipei, Taiwan, and will soon be included in a paper to be submitted to Physical Review D. For the time-dependent analysis, we reconstructed 1820 ± 48 flavor-tagged B0 → η'K0 events, using the final BABAR sample of 467.4 × 10^6 B B̄ pairs. We use these events to measure the time-dependent asymmetry parameters S and C. We find S = 0.59 ± 0.08 ± 0.02 and C = -0.06 ± 0.06 ± 0.02. A non-zero value of C would represent a directly CP-non-conserving component in B0 → η'K0, while S would equal sin 2β as measured in B0 → J/ψK0_S [108], a mixing-decay interference effect, provided the decay is dominated by amplitudes with a single weak phase. The measured value of S is in agreement with the expectations of the Standard Model within the experimental and theoretical uncertainties. The inconsistency of our result for S with CP conservation (S = 0) has a significance of 7.1 standard deviations (statistical and systematic included). Our result for the direct-CP-violation parameter C is 0.9 standard deviations from zero (statistical and systematic included). Our results are in agreement with the previous ones [18]. Although the sample is only 20% larger than that used in the previous measurement, we improved the error on S by 20% and the error on C by 14%. This is the smallest error ever achieved, by either BABAR or Belle, in a measurement of time-dependent CP-violation parameters in a b → s transition.

  19. Comparison of low-altitude wind-shear statistics derived from measured and proposed standard wind profiles

    NASA Technical Reports Server (NTRS)

    Usry, J. W.

    1983-01-01

    Wind shear statistics were calculated for a simulated set of wind profiles based on a proposed standard wind field data base. Wind shears were grouped in altitude bands of 100 ft between 100 and 1400 ft and in wind shear increments of 0.025 knot/ft. Frequency distributions, means, and standard deviations were derived for each altitude band and for the total sample in both data sets. It was found that the frequency distributions in each altitude band for the simulated data set were more dispersed below 800 ft and less dispersed above 900 ft than those for the measured data set. Total sample frequency of occurrence for the two data sets was about equal for wind shear values between ±0.075 knot/ft, but the simulated data set had significantly larger values for all wind shears outside these boundaries. Tests of normality showed that neither data set was normally distributed; similar results are observed from the cumulative frequency distributions.

  20. UNITY: Confronting Supernova Cosmology's Statistical and Systematic Uncertainties in a Unified Bayesian Framework

    NASA Astrophysics Data System (ADS)

    Rubin, D.; Aldering, G.; Barbary, K.; Boone, K.; Chappell, G.; Currie, M.; Deustua, S.; Fagrelius, P.; Fruchter, A.; Hayden, B.; Lidman, C.; Nordin, J.; Perlmutter, S.; Saunders, C.; Sofiatti, C.; Supernova Cosmology Project, The

    2015-11-01

    While recent supernova (SN) cosmology research has benefited from improved measurements, current analysis approaches are not statistically optimal and will prove insufficient for future surveys. This paper discusses the limitations of current SN cosmological analyses in treating outliers, selection effects, shape- and color-standardization relations, unexplained dispersion, and heterogeneous observations. We present a new Bayesian framework, called UNITY (Unified Nonlinear Inference for Type-Ia cosmologY), that incorporates significant improvements in our ability to confront these effects. We apply the framework to real SN observations and demonstrate smaller statistical and systematic uncertainties. We verify earlier results that SNe Ia require nonlinear shape and color standardizations, but we now include these nonlinear relations in a statistically well-justified way. This analysis was primarily performed blinded, in that the basic framework was first validated on simulated data before transitioning to real data. We also discuss possible extensions of the method.

  1. Flexible statistical modelling detects clinical functional magnetic resonance imaging activation in partially compliant subjects.

    PubMed

    Waites, Anthony B; Mannfolk, Peter; Shaw, Marnie E; Olsrud, Johan; Jackson, Graeme D

    2007-02-01

    Clinical functional magnetic resonance imaging (fMRI) occasionally fails to detect significant activation, often due to variability in task performance. The present study tests whether a more flexible statistical analysis can better detect activation by accounting for variance associated with variable compliance to the task over time. Experimental results and simulated data both confirm that, even at 80% compliance to the task, such a flexible model outperforms the standard statistical analysis when assessed using the extent of activation (experimental data), goodness of fit (experimental data), and area under the receiver operating characteristic curve (simulated data). Furthermore, retrospective examination of 14 clinical fMRI examinations reveals that in patients where the standard statistical approach yields activation, there is a measurable gain in model performance in adopting the flexible statistical model, with little or no penalty in lost sensitivity. This indicates that a flexible model should be considered, particularly for clinical patients who may have difficulty complying fully with the study task.

  2. Effect of mechanical behaviour of the brachial artery on blood pressure measurement during both cuff inflation and cuff deflation.

    PubMed

    Zheng, Dingchang; Pan, Fan; Murray, Alan

    2013-10-01

    The aim of this study was to investigate the effect of different mechanical behaviour of the brachial artery on blood pressure (BP) measurements during cuff inflation and deflation. BP measurements were taken from each of 40 participants, with three repeat sessions under three randomized cuff deflation/inflation conditions. Cuff pressure was linearly deflated and inflated at a standard rate of 2-3 mmHg/s and also linearly inflated at a fast rate of 5-6 mmHg/s. Manual auscultatory systolic and diastolic BPs, and pulse pressure (SBP, DBP, PP) were measured. Automated BPs were determined from digitally recorded cuff pressures by fitting a polynomial model to the oscillometric pulse amplitudes. The BPs from cuff deflation and inflation were then compared. Repeatable measurements between sessions and between the sequential order of inflation/deflation conditions (all P > 0.1) indicated stability of arterial mechanical behaviour with repeat measurements. Comparing BPs obtained by standard inflation with those from standard deflation, manual SBP was 2.6 mmHg lower (P < 0.01), manual DBP was 1.5 mmHg higher (P < 0.01), manual PP was 4.2 mmHg lower (P < 0.001), automated DBP was 6.7 mmHg higher (P < 0.001) and automatic PP was 7.5 mmHg lower (P < 0.001). There was no statistically significant difference for any automated BPs between fast and standard cuff inflation. The statistically significant BP differences between inflation and deflation suggest different arterial mechanical behaviour between arterial opening and closing during BP measurement. We have shown that the mechanical behaviour of the brachial artery during BP measurement differs between cuff deflation and cuff inflation.
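
    The automated step described above, fitting a polynomial model to the oscillometric pulse amplitudes, can be sketched as below. The characteristic ratios used to pick the systolic and diastolic points are illustrative assumptions, not the authors' algorithm:

        import numpy as np

        # Cuff pressures (mmHg) and oscillometric pulse amplitudes (a.u.).
        cuff = np.linspace(160, 60, 11)
        amp  = np.array([0.2, 0.4, 0.7, 1.1, 1.6, 1.9, 1.7, 1.3, 0.9, 0.5, 0.3])

        # Smooth envelope: 4th-order polynomial of amplitude vs. cuff pressure.
        env = np.poly1d(np.polyfit(cuff, amp, 4))
        grid = np.linspace(60, 160, 1001)
        fit = env(grid)

        map_est = grid[fit.argmax()]          # MAP at the envelope peak
        peak = fit.max()
        # Assumed characteristic ratios: SBP/DBP where the envelope falls to
        # 55% / 85% of its peak above / below the MAP point.
        above, below = grid > map_est, grid < map_est
        sbp = grid[above][np.abs(fit[above] - 0.55 * peak).argmin()]
        dbp = grid[below][np.abs(fit[below] - 0.85 * peak).argmin()]
        print(f"MAP ~ {map_est:.0f}, SBP ~ {sbp:.0f}, DBP ~ {dbp:.0f} mmHg")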

  3. Automated measurement of uptake in cerebellum, liver, and aortic arch in full-body FDG PET/CT scans.

    PubMed

    Bauer, Christian; Sun, Shanhui; Sun, Wenqing; Otis, Justin; Wallace, Audrey; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M; Beichel, Reinhard R

    2012-06-01

    The purpose of this work was to develop and validate fully automated methods for uptake measurement of cerebellum, liver, and aortic arch in full-body PET/CT scans. Such measurements are of interest in the context of uptake normalization for quantitative assessment of metabolic activity and/or automated image quality control. Cerebellum, liver, and aortic arch regions were segmented with different automated approaches. Cerebella were segmented in PET volumes by means of a robust active shape model (ASM) based method. For liver segmentation, a largest possible hyperellipsoid was fitted to the liver in PET scans. The aortic arch was first segmented in CT images of a PET/CT scan by a tubular structure analysis approach, and the segmented result was then mapped to the corresponding PET scan. For each of the segmented structures, the average standardized uptake value (SUV) was calculated. To generate an independent reference standard for method validation, expert image analysts were asked to segment several cross sections of each of the three structures in 134 F-18 fluorodeoxyglucose (FDG) PET/CT scans. For each case, the true average SUV was estimated by utilizing statistical models and served as the independent reference standard. For automated aorta and liver SUV measurements, no statistically significant scale or shift differences were observed between automated results and the independent standard. In the case of the cerebellum, the scale and shift were not significantly different, if measured in the same cross sections that were utilized for generating the reference. In contrast, automated results were scaled 5% lower on average although not shifted, if FDG uptake was calculated from the whole segmented cerebellum volume. The estimated reduction in total SUV measurement error ranged between 54.7% and 99.2%, and the reduction was found to be statistically significant for cerebellum and aortic arch. With the proposed methods, the authors have demonstrated that automated SUV uptake measurements in cerebellum, liver, and aortic arch agree with expert-defined independent standards. The proposed methods were found to be accurate and showed less intra- and interobserver variability, compared to manual analysis. The approach provides an alternative to manual uptake quantification, which is time-consuming. Such an approach will be important for application of quantitative PET imaging to large scale clinical trials. © 2012 American Association of Physicists in Medicine.
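
    For context, the average standardized uptake value computed for each segmented structure is just the mean activity concentration normalized by injected dose per unit body weight. A minimal sketch (assumed units, decay correction omitted, 1 g/ml tissue density assumed):

        import numpy as np

        def mean_suv(activity_kbq_ml, injected_dose_mbq, weight_kg):
            # Mean SUV over a segmented region (body-weight normalization).
            conc = np.mean(activity_kbq_ml)                           # kBq/ml
            norm = injected_dose_mbq * 1000.0 / (weight_kg * 1000.0)  # kBq/g
            return conc / norm

        region = np.array([2.1, 2.3, 1.9, 2.2, 2.0])  # kBq/ml, illustrative
        print(f"mean SUV = {mean_suv(region, 370.0, 75.0):.2f}")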

  4. Statistical inference with quantum measurements: methodologies for nitrogen vacancy centers in diamond

    NASA Astrophysics Data System (ADS)

    Hincks, Ian; Granade, Christopher; Cory, David G.

    2018-01-01

    The analysis of photon count data from the standard nitrogen vacancy (NV) measurement process is treated as a statistical inference problem. This has applications toward gaining better and more rigorous error bars for tasks such as parameter estimation (e.g. magnetometry), tomography, and randomized benchmarking. We start by providing a summary of the standard phenomenological model of the NV optical process in terms of Lindblad jump operators. This model is used to derive random variables describing emitted photons during measurement, to which finite visibility, dark counts, and imperfect state preparation are added. NV spin-state measurement is then stated as an abstract statistical inference problem consisting of an underlying biased coin obstructed by three Poisson rates. Relevant frequentist and Bayesian estimators are provided, discussed, and quantitatively compared. We show numerically that the risk of the maximum likelihood estimator is well approximated by the Cramér-Rao bound, for which we provide a simple formula. Of the estimators, we in particular promote the Bayes estimator, owing to its slightly better risk performance, and straightforward error propagation into more complex experiments. This is illustrated on experimental data, where quantum Hamiltonian learning is performed and cross-validated in a fully Bayesian setting, and compared to a more traditional weighted least squares fit.
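
    The "biased coin obstructed by Poisson rates" can be made concrete with a grid-based Bayes estimator: the total photon count is Poisson with a rate that interpolates between calibrated bright and dark rates according to the unknown spin-state probability p. A sketch under assumed calibration values:

        import numpy as np
        from scipy.stats import poisson

        # Assumed calibrated mean counts per shot for the two spin states.
        rate_bright, rate_dark = 0.040, 0.030
        n_shots, n_photons = 100_000, 3_400    # illustrative observed totals

        # Flat-prior grid posterior for p = P(bright state).
        p = np.linspace(0.0, 1.0, 2001)
        mu = n_shots * (p * rate_bright + (1 - p) * rate_dark)
        loglike = poisson.logpmf(n_photons, mu)
        post = np.exp(loglike - loglike.max())
        post /= np.trapz(post, p)

        p_bayes = np.trapz(p * post, p)        # posterior-mean (Bayes) estimate
        p_mle = p[post.argmax()]               # maximum-likelihood estimate
        print(f"Bayes p = {p_bayes:.3f}, MLE p = {p_mle:.3f}")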

  5. Relationships between digit ratio (2D:4D) and basketball performance in Australian men.

    PubMed

    Frick, Nathan A; Hull, Melissa J; Manning, John T; Tomkinson, Grant R

    2017-05-06

    To investigate relationships between the digit ratio (2D:4D) and competitive basketball performance in Australian men. Using an observational cross-sectional design a total of 221 Australian basketball players who competed in the Olympic Games, International Basketball Federation World Championships/Cup, Australian National Basketball League, Central Australian Basketball League or socially had their 2D:4Ds measured. Analysis of variance was used to assess differences in mean 2D:4Ds between men playing at different competitive standards, with relationships between 2D:4Ds and basketball game-related statistics assessed using Pearson's product moment correlations in men playing at a single competitive standard. There were significant differences between competitive standards for the left 2D:4D following Bonferroni correction, but not for the right 2D:4D, with basketballers who achieved higher competitive standards tending to have lower left 2D:4Ds. No important correlations between 2D:4D and basketball game-related statistics were found, with correlations typically negligible. This study indicated that the 2D:4D can discriminate between basketballers competing at different standards, but not between basketballers within a single competitive standard using objective game-related statistics. © 2016 Wiley Periodicals, Inc.

  6. Measuring Equity: Creating a New Standard for Inputs and Outputs

    ERIC Educational Resources Information Center

    Knoeppel, Robert C.; Della Sala, Matthew R.

    2013-01-01

    The purpose of this article is to introduce a new statistic to capture the ratio of equitable student outcomes given equitable inputs. Given the fact that finance structures should be aligned to outcome standards according to judicial interpretation, a ratio of outputs to inputs, or "equity ratio," is introduced to discern if conclusions can be…

  7. Toward standardized reporting for a cohort study on functioning: The Swiss Spinal Cord Injury Cohort Study.

    PubMed

    Prodinger, Birgit; Ballert, Carolina S; Brach, Mirjam; Brinkhof, Martin W G; Cieza, Alarcos; Hug, Kerstin; Jordan, Xavier; Post, Marcel W M; Scheel-Sailer, Anke; Schubert, Martin; Tennant, Alan; Stucki, Gerold

    2016-02-01

    Functioning is an important outcome to measure in cohort studies. Clear and operational outcomes are needed to judge the quality of a cohort study. This paper outlines guiding principles for reporting functioning in cohort studies and addresses some outstanding issues. Principles of how to standardize reporting of data from a cohort study on functioning, by deriving scores that are most useful for further statistical analysis and reporting, are outlined. The Swiss Spinal Cord Injury Cohort Study Community Survey serves as a case in point to provide a practical application of these principles. Development of reporting scores must be conceptually coherent and metrically sound. The International Classification of Functioning, Disability and Health (ICF) can serve as the frame of reference for this, with its categories serving as reference units for reporting. To derive a score for further statistical analysis and reporting, items measuring a single latent trait must be invariant across groups. The Rasch measurement model is well suited to test these assumptions. Our approach is a valuable guide for researchers and clinicians, as it fosters comparability of data, strengthens the comprehensiveness of scope, and provides invariant, interval-scaled data for further statistical analyses of functioning.
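
    For reference, the dichotomous Rasch model invoked here gives the probability that person p affirms item i in terms of a person ability θ_p and an item difficulty b_i; the invariance requirement means the b_i must hold across groups while only the θ_p vary:

        P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{\exp(\theta_p - b_i)}{1 + \exp(\theta_p - b_i)}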

  8. Retrospective correction of bias in diffusion tensor imaging arising from coil combination mode.

    PubMed

    Sakaie, Ken; Lowe, Mark

    2017-04-01

    To quantify and retrospectively correct for systematic differences in diffusion tensor imaging (DTI) measurements due to differences in coil combination mode. Multi-channel coils are now standard among MRI systems. There are several options for combining signal from multiple coils during image reconstruction, including sum-of-squares (SOS) and adaptive combine (AC). This contribution examines the bias between SOS- and AC-derived measures of tissue microstructure and a strategy for limiting that bias. Five healthy subjects were scanned under an institutional review board-approved protocol. Each set of raw image data was reconstructed twice-once with SOS and once with AC. The diffusion tensor was calculated from SOS- and AC-derived data by two algorithms-standard log-linear least squares and an approach that accounts for the impact of coil combination on signal statistics. Systematic differences between SOS and AC in terms of tissue microstructure (axial diffusivity, radial diffusivity, mean diffusivity and fractional anisotropy) were evaluated on a voxel-by-voxel basis. SOS-based tissue microstructure values are systematically lower than AC-based measures throughout the brain in each subject when using the standard tensor calculation method. The difference between SOS and AC can be virtually eliminated by taking into account the signal statistics associated with coil combination. The impact of coil combination mode on diffusion tensor-based measures of tissue microstructure is statistically significant but can be corrected retrospectively. The ability to do so is expected to facilitate pooling of data among imaging protocols. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Characterization of solar cells for space applications. Volume 5: Electrical characteristics of OCLI 225-micron MLAR wraparound cells as a function of intensity, temperature, and irradiation

    NASA Technical Reports Server (NTRS)

    Anspaugh, B. E.; Miyahira, T. F.; Weiss, R. S.

    1979-01-01

    Statistical averages and standard deviations computed over the measured cells are presented for each intensity-temperature measurement condition. Averages and standard deviations of the cell characteristics are displayed in a two-dimensional array format, with one dimension representing incoming light intensity and the other the cell temperature. Programs for calculating the temperature coefficients of the pertinent cell electrical parameters are presented, and postirradiation data are summarized.

  10. A drift correction optimization technique for the reduction of the inter-measurement dispersion of isotope ratios measured using a multi-collector plasma mass spectrometer

    NASA Astrophysics Data System (ADS)

    Doherty, W.; Lightfoot, P. C.; Ames, D. E.

    2014-08-01

    The effects of polynomial interpolation and internal standardization drift corrections on the statistical inter-measurement dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a range-based merit function ω_m, which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ±3 in the fifth significant figure could be routinely and reliably detected for Cu 65/63 and Ni 61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves, not by two linearly correlated quantities, which is the usual interpretation of such graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale-dependent, parametric-curve effect.
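
    A minimal version of the internal-standardization correction described here: regress the measured analyte ratio on the simultaneously measured internal-standard ratio across the session, then shift each measurement to a reference value of the internal-standard ratio. The linear form and all numbers are illustrative:

        import numpy as np

        # Time-ordered measured ratios across a session (illustrative values).
        r_analyte = np.array([2.5961, 2.5964, 2.5969, 2.5972, 2.5976])
        r_istd    = np.array([0.4503, 0.4505, 0.4508, 0.4510, 0.4513])

        # Least-squares line relating analyte ratio to internal-standard ratio.
        slope, _ = np.polyfit(r_istd, r_analyte, 1)

        # Correct each measurement to an assumed reference internal-standard ratio.
        r_ref = 0.4508
        corrected = r_analyte - slope * (r_istd - r_ref)

        print("dispersion before: %.5f  after: %.5f"
              % (np.ptp(r_analyte), np.ptp(corrected)))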

  11. Validity of linear measurements of the jaws using ultralow-dose MDCT and the iterative techniques of ASIR and MBIR.

    PubMed

    Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig

    2016-10-01

    To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95% confidence interval limits being within the range of ±1.15 mm. A nearly 97.5% reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.

  12. Measure of horizontal and vertical displacement of the acromioclavicular joint after cutting ligament using X-ray and opto-electronic system.

    PubMed

    Rochcongar, Goulven; Emily, Sébastien; Lebel, Benoit; Pineau, Vincent; Burdin, Gilles; Hulet, Christophe

    2012-09-01

    Surgical versus orthopedic treatment of acromioclavicular disjunction is still debated. The aim of this study was to measure the horizontal and vertical displacement of the acromion after cutting the ligaments, using standard X-ray and an opto-electronic system on cadavers. Ten cadaveric shoulders were studied. A sequential ligament section was performed by arthroscopy, with the cutting sequence chosen to match Rockwood's grades. The displacement of the acromion was measured on standard X-ray and with an opto-electronic system that allowed measurement of the horizontal displacement. Statistical comparisons were performed using a paired Student's t test with significance set at p < 0.05. Cutting the coracoclavicular ligament and the delto-trapezius muscles caused a statistically significant downward displacement of the acromion, whereas sectioning the acromioclavicular ligament did not. The contact surface between the acromion and the clavicle decreased significantly after sectioning the acromioclavicular and coracoclavicular ligaments, with no additional effect from sectioning the delto-trapezius muscles. These results overlap with those concerning anterior translation. The measurements of the acromioclavicular and coracoclavicular distances are consistent with those of Rockwood. However, there is a significant horizontal translation after cutting the acromioclavicular ligament. Taking this displacement into account may help in choosing between surgical and orthopedic treatment. There is a correlation between anatomical damage and the degree of instability. Horizontal instability is poorly evaluated in clinical practice.

  13. Standardized Reporting of the Eczema Area and Severity Index (EASI) and the Patient-Oriented Eczema Measure (POEM): A Recommendation by the Harmonising Outcome Measures for Eczema (HOME) Initiative.

    PubMed

    Grinich, E; Schmitt, J; Küster, D; Spuls, P I; Williams, H C; Chalmers, J R; Thomas, K S; Apfelbacher, C; Prinsen, C A C; Furue, M; Stuart, B; Carter, B; Simpson, E

    2018-05-10

    Several organizations from multiple fields of medicine are setting standards for clinical research, including protocol development [1], harmonization of outcome reporting [2], statistical analysis [3], quality assessment [4], and reporting of findings [1]. Clinical research standardization facilitates the interpretation and synthesis of data, increases the usability of trial results for guideline groups and shared decision-making, and reduces selective outcome reporting bias. The mission of the Harmonising Outcome Measures for Eczema (HOME) initiative is to establish an agreed-upon core set of outcomes to be measured and reported in all clinical trials of atopic dermatitis (AD). This article is protected by copyright. All rights reserved.

  14. Precision electroweak physics at LEP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mannelli, M.

    1994-12-01

    Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z0 resonance have allowed the LEP experiments to provide both dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements sensitively probes the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark is measured to be within a restricted range of about 20 GeV.

  15. A Sociodemographic Risk Index

    ERIC Educational Resources Information Center

    Moore, Kristin Anderson; Vandivere, Sharon; Redd, Zakia

    2006-01-01

    In this paper, we conceptualize and develop an index of sociodemographic risk that we hypothesize will be an improvement over the standard poverty measure as a measure of risk for children's development. The poverty line is widely used in government statistics and in research but is also widely acknowledged to have multiple shortcomings. Using…

  16. Time-of-Flight Measurements as a Possible Method to Observe Anyonic Statistics

    NASA Astrophysics Data System (ADS)

    Umucalılar, R. O.; Macaluso, E.; Comparin, T.; Carusotto, I.

    2018-06-01

    We propose a standard time-of-flight experiment as a method for observing the anyonic statistics of quasiholes in a fractional quantum Hall state of ultracold atoms. The quasihole states can be stably prepared by pinning the quasiholes with localized potentials, and a measurement of the mean square radius of the freely expanding cloud, which is related to the average total angular momentum of the initial state, offers direct signatures of the statistical phase. Our proposed method is validated by Monte Carlo calculations for ν = 1/2 and 1/3 fractional quantum Hall liquids containing a realistic number of particles. Extensions to quantum Hall liquids of light and to non-Abelian anyons are briefly discussed.

  17. A 52-Week Study of Olanzapine with a Randomized Behavioral Weight Counseling Intervention in Adolescents with Schizophrenia or Bipolar I Disorder.

    PubMed

    Detke, Holland C; DelBello, Melissa P; Landry, John; Hoffmann, Vicki Poole; Heinloth, Alexandra; Dittmann, Ralf W

    2016-12-01

    To evaluate the 52-week safety/tolerability of oral olanzapine for adolescents with schizophrenia or bipolar mania and compare effectiveness of a standard versus intense behavioral weight intervention in mitigating risk of weight gain. Patients 13-17 years old with schizophrenia (Brief Psychiatric Rating Scale for Children [BPRS-C] total score >30; item score ≥3 for hallucinations, delusions, or peculiar fantasies) or bipolar I disorder (manic or mixed episode; Young Mania Rating Scale [YMRS] total score ≥15) received open-label olanzapine (2.5-20 mg/day) and were randomized to standard (n = 102; a single weight counseling session) or intense (n = 101; weight counseling at each study visit) weight intervention. The primary outcome measure was mean change in body mass index (BMI) from baseline to 52 weeks using mixed-model repeated measures. Symptomatology was also assessed. No statistically significant differences between groups were observed in mean baseline-to-52-week change in BMI (standard: +3.6 kg/m²; intense: +2.8 kg/m²; p = 0.150) or weight (standard: +12.1 kg; intense: +9.6 kg; p = 0.148). Percentage of patients at endpoint who had gained ≥15% of their baseline weight was 40% for the standard group and 31% for the intense group (p = 0.187). Safety/tolerability results were generally consistent with those of previous olanzapine studies in adolescents, with the most notable exception being the finding of a mean decrease in prolactin. On symptomatology measures, patients with schizophrenia had a mean baseline-to-52-week change in BPRS-C of -32.5 (standard deviation [SD] = 10.8), and patients with bipolar disorder had a mean change in YMRS of -16.7 (SD = 8.9), with clinically and statistically significant improvement starting at 3-4 days for each. Long-term weight gain was high in both groups, with no statistically significant differences between the standard or intense behavioral weight interventions in BMI or weight. Safety, tolerability, and effectiveness findings were generally consistent with the known profile of olanzapine in adolescents.

  18. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    PubMed Central

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  19. Influence of scan mode (180°/360°) of the cone beam computed tomography for preoperative dental implant measurements.

    PubMed

    Neves, Frederico S; Vasconcelos, Taruska V; Campos, Paulo S F; Haiter-Neto, Francisco; Freitas, Deborah Q

    2014-02-01

    The aim of this study was to evaluate the effect of the scan mode of cone beam computed tomography (CBCT) on preoperative dental implant measurements. Completely edentulous mandibles with entirely resorbed alveolar processes were selected for this study. Five regions were selected (incisor, canine, premolar, first molar, and second molar). The mandibles were scanned with a Next Generation i-CAT CBCT unit (Imaging Sciences International, Inc., Hatfield, PA, USA) in half (180°) and full (360°) scan modes. Two oral radiologists performed vertical measurements in all selected regions; the measurements of half of the sample were repeated within an interval of 30 days. The mandibles were sectioned using an electrical saw in all evaluated regions to obtain the gold standard. The intraclass correlation coefficient was calculated for the intra- and interobserver agreement. Descriptive statistics were calculated as mean, median, and standard deviation. The Wilcoxon signed rank test was used to compare the measurements obtained in each scan mode with the gold standard. The significance level was 5%. The values of intra- and interobserver reproducibility indicated strong agreement. In the dental implant measurements, except for the bone height of the second molar region in full scan mode (P = 0.02), the Wilcoxon signed rank test showed no statistically significant difference from the gold standard (P > 0.05). Both modes provided accurate measurements for implant planning; however, the half scan mode uses a smaller dose, in line with the principle of dose effectiveness. We believe the half scan mode should therefore be preferred, as it offers the best dose-effect relationship and less risk to the patient. © 2012 John Wiley & Sons A/S.

  20. Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable

    PubMed Central

    2012-01-01

    Background When outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model. Methods An analytical expression for the c-statistic was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examined the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal, or uniform distribution in the combined sample. Results Under the assumption of binormality with equality of variances, the c-statistic is given by the standard normal cumulative distribution function evaluated at a quantity that depends on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic is given by the standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, or uniform in the entire sample of those with and without the condition. Conclusions The discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population. PMID:22716998
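
    Concretely, under the equal-variance binormal model the predicted concordance reduces to c = Φ(βσ/√2), where β is the log-odds ratio per unit of the explanatory variable and σ the common within-group standard deviation. A minimal numerical sketch of that relationship follows; the parameter values are illustrative, not taken from the paper.

```python
# Numerical check of the equal-variance binormal result: if X ~ N(mu0, sigma^2)
# in those without the condition and X ~ N(mu0 + beta*sigma^2, sigma^2) in those
# with it (beta = log-odds ratio per unit of X), then c = Phi(beta*sigma/sqrt(2)).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu0, sigma, beta = 0.0, 1.5, 0.8           # illustrative values, not from the paper
mu1 = mu0 + beta * sigma**2                # mean shift implied by binormality

predicted_c = norm.cdf(beta * sigma / np.sqrt(2))

# Empirical c-statistic: P(X1 > X0) estimated over independent random pairs.
x0 = rng.normal(mu0, sigma, 100_000)
x1 = rng.normal(mu1, sigma, 100_000)
empirical_c = np.mean(x1 > x0)

print(f"predicted c = {predicted_c:.4f}, empirical c = {empirical_c:.4f}")
```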

  1. The Muon $g$-$2$ Experiment at Fermilab

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gohn, Wesley

    A new measurement of the anomalous magnetic moment of the muon, $a_\mu \equiv (g-2)/2$, will be performed at the Fermi National Accelerator Laboratory with data taking beginning in 2017. The most recent measurement, performed at Brookhaven National Laboratory (BNL) and completed in 2001, shows a 3.5 standard deviation discrepancy with the standard model value of $a_\mu$. The new measurement will accumulate 21 times the BNL statistics using upgraded magnet, detector, and storage ring systems, enabling a measurement of $a_\mu$ to 140 ppb, a factor of 4 improvement in uncertainty over the previous measurement. This improvement in precision, combined with recent improvements in our understanding of the QCD contributions to the muon $g$-$2$, could provide a discrepancy from the standard model greater than 7$\sigma$ if the central value is the same as that measured by the BNL experiment, which would be a clear indication of new physics.

  2. [Anthropometric measures in urban child population from 6 to 12 years from the northwest of México].

    PubMed

    Brito-Zurita, Olga Rosa; López-Leal, Josefa; Exiga-González, Emma Beatriz; Armenta-Llanes, Oscar; Jorge-Plascencia, Blanca; Domínguez-Banda, Alberto; López-Morales, Mónica; Ornelas-Aguirre, José Manuel; Sabag-Ruiz, Enrique

    2014-01-01

    The degree of overweight-obesity varies with the conditions of each population, depending on geographic area, race or ethnicity, socioeconomic status, and the susceptibility of each individual. The aim of this study was to determine anthropometric measures in an urban child population aged 6 to 12 years in Ciudad Obregón, Sonora. We studied 684 schoolchildren from 6 to 12 years of age, of both genders, in the urban area of Ciudad Obregón, Sonora. We measured weight, height, arm circumference (AC), waist circumference, and body mass index (BMI). We used descriptive statistics (frequencies, percentages) and, to compare the growth charts from this study with the reference standards (CDC and Ramos-Galván), statistical inference (Student's t test). On average, weight, height, AC, and BMI for age by gender were higher than the reference standards at all ages. Seventy-four boys (22%) and 51 girls (14.5%) were above the 95th percentile. With regard to height, 42 children (12.6%) were below the 5th percentile and 37 (10.5%) above the 95th percentile. Schoolchildren in the southern zone of Sonora showed a higher anthropometric pattern than the reference standards.

  3. Counting Microfiche: The Utilization of the Microform Section of the ANSI Standard Z39.7-1983 "Library Statistics"; Microfiche Curl; and "Poly" or "Cell"?

    ERIC Educational Resources Information Center

    Caldwell-Wood, Naomi; And Others

    1987-01-01

    The first of three articles describes procedures for using ANSI statistical methods for estimating the number of pieces in large homogeneous collections of microfiche. The second discusses causes of curl, its control, and measurement, and the third compares the advantages and disadvantages of cellulose acetate and polyester base for microforms.…

  4. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  5. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  6. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  7. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  8. 40 CFR Appendix T to Part 50 - Interpretation of the Primary National Ambient Air Quality Standards for Oxides of Sulfur (Sulfur...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...-hour SO2 concentration values measured from midnight to midnight (local standard time) that are used in NAAQS computations. Design values are the metrics (i.e., statistics) that are compared to the NAAQS levels to determine compliance, calculated as specified in section 5 of this appendix. The design value...

  9. Constraints on non-Standard Model Higgs boson interactions in an effective Lagrangian using differential cross sections measured in the H→γγ decay channel at \(\sqrt{s} = 8\) TeV with the ATLAS detector

    DOE PAGES

    Aad, G.

    2015-12-02

    The strength and tensor structure of the Higgs boson's interactions are investigated using an effective Lagrangian, which introduces additional CP-even and CP-odd interactions that lead to changes in the kinematic properties of the Higgs boson and associated jet spectra with respect to the Standard Model. The parameters of the effective Lagrangian are probed using a fit to five differential cross sections previously measured by the ATLAS experiment in the H→γγ decay channel with an integrated luminosity of 20.3 fb⁻¹ at \(\sqrt{s} = 8\) TeV. In order to perform a simultaneous fit to the five distributions, the statistical correlations between them are determined by re-analysing the H→γγ candidate events in the proton–proton collision data. No significant deviations from the Standard Model predictions are observed and limits on the effective Lagrangian parameters are derived. These statistical correlations are made publicly available to allow for future analysis of theories with non-Standard Model interactions.

  10. Constraints on non-Standard Model Higgs boson interactions in an effective Lagrangian using differential cross sections measured in the H→γγ decay channel at \(\sqrt{s} = 8\) TeV with the ATLAS detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aad, G.

    The strength and tensor structure of the Higgs boson's interactions are investigated using an effective Lagrangian, which introduces additional CP-even and CP-odd interactions that lead to changes in the kinematic properties of the Higgs boson and associated jet spectra with respect to the Standard Model. The parameters of the effective Lagrangian are probed using a fit to five differential cross sections previously measured by the ATLAS experiment in the H→γγ decay channel with an integrated luminosity of 20.3 fb⁻¹ at \(\sqrt{s} = 8\) TeV. In order to perform a simultaneous fit to the five distributions, the statistical correlations between them are determined by re-analysing the H→γγ candidate events in the proton–proton collision data. No significant deviations from the Standard Model predictions are observed and limits on the effective Lagrangian parameters are derived. These statistical correlations are made publicly available to allow for future analysis of theories with non-Standard Model interactions.

  11. BTS statistical standards manual

    DOT National Transportation Integrated Search

    2005-10-01

    The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...

  12. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    PubMed

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task, and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task, and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC, and thereby caused mostly by random measurement error rather than by learning. We suggest therefore that motor learning studies could complement their p-value-based analyses of difference with statistics such as SEM and MDC in order to inform as to the likely cause or origin of any reported changes in performance.
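
    A minimal sketch of the two computations described above: SEM from the baseline standard deviation and a test-retest reliability index (ICC), and MDC at 95% confidence from the SEM. The baseline SD and ICC values below are hypothetical, not taken from the study.

```python
# SEM and MDC as described in the abstract. Input values are illustrative.
import math

def sem(baseline_sd: float, icc: float) -> float:
    """Standard error of measurement from baseline SD and test-retest ICC."""
    return baseline_sd * math.sqrt(1.0 - icc)

def mdc(sem_value: float, z: float = 1.96) -> float:
    """Minimal detectable change at ~95% confidence (z = 1.96).
    The sqrt(2) accounts for measurement error in both test and retest."""
    return z * math.sqrt(2.0) * sem_value

baseline_sd, icc = 12.0, 0.85    # hypothetical score SD and reliability index
s = sem(baseline_sd, icc)
print(f"SEM = {s:.2f}, MDC95 = {mdc(s):.2f}")
# An observed change smaller than MDC95 could plausibly be measurement error.
```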

  13. Multimedia Presentations in Educational Measurement and Statistics: Design Considerations and Instructional Approaches

    ERIC Educational Resources Information Center

    Sklar, Jeffrey C.; Zwick, Rebecca

    2009-01-01

    Proper interpretation of standardized test scores is a crucial skill for K-12 teachers and school personnel; however, many do not have sufficient knowledge of measurement concepts to appropriately interpret and communicate test results. In a recent four-year project funded by the National Science Foundation, three web-based instructional…

  14. A Psychometric Investigation of the Marlowe-Crowne Social Desirability Scale Using Rasch Measurement

    ERIC Educational Resources Information Center

    Seol, Hyunsoo

    2007-01-01

    The author used Rasch measurement to examine the reliability and validity of 382 Korean university students' scores on the Marlowe-Crowne Social Desirability Scale (MCSDS; D. P. Crowne and D. Marlowe, 1960). Results revealed that item-fit statistics and principal component analysis with standardized residuals provide evidence of MCSDS'…

  15. Toward Global Comparability of Sexual Orientation Data in Official Statistics: A Conceptual Framework of Sexual Orientation for Health Data Collection in New Zealand's Official Statistics System

    PubMed Central

    Gray, Alistair; Veale, Jaimie F.; Binson, Diane; Sell, Randell L.

    2013-01-01

    Objective. Effectively addressing health disparities experienced by sexual minority populations requires high-quality official data on sexual orientation. We developed a conceptual framework of sexual orientation to improve the quality of sexual orientation data in New Zealand's Official Statistics System. Methods. We reviewed conceptual and methodological literature, culminating in a draft framework. To improve the framework, we held focus groups and key-informant interviews with sexual minority stakeholders and producers and consumers of official statistics. An advisory board of experts provided additional guidance. Results. The framework proposes working definitions of the sexual orientation topic and measurement concepts, describes dimensions of the measurement concepts, discusses variables framing the measurement concepts, and outlines conceptual grey areas. Conclusion. The framework proposes standard definitions and concepts for the collection of official sexual orientation data in New Zealand. It presents a model for producers of official statistics in other countries, who wish to improve the quality of health data on their citizens. PMID:23840231

  16. Metrological traceability in education: A practical online system for measuring and managing middle school mathematics instruction

    NASA Astrophysics Data System (ADS)

    Torres Irribarra, D.; Freund, R.; Fisher, W.; Wilson, M.

    2015-02-01

    Computer-based, online assessments modelled, designed, and evaluated for adaptively administered invariant measurement are uniquely suited to defining and maintaining traceability to standardized units in education. An assessment of this kind is embedded in the Assessing Data Modeling and Statistical Reasoning (ADM) middle school mathematics curriculum. Diagnostic information about middle school students' learning of statistics and modeling is provided via computer-based formative assessments for seven constructs that comprise a learning progression for statistics and modeling from late elementary through the middle school grades. The seven constructs are: Data Display, Meta-Representational Competence, Conceptions of Statistics, Chance, Modeling Variability, Theory of Measurement, and Informal Inference. The end product is a web-delivered system built with Ruby on Rails for use by curriculum development teams working with classroom teachers in designing, developing, and delivering formative assessments. The online accessible system allows teachers to accurately diagnose students' unique comprehension and learning needs in a common language of real-time assessment, logging, analysis, feedback, and reporting.

  17. Evaluating Statistical Process Control (SPC) techniques and computing the uncertainty of force calibrations

    NASA Technical Reports Server (NTRS)

    Navard, Sharon E.

    1989-01-01

    In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.

  18. Free tropospheric measurements of CS2 over a 45 deg N to 45 deg S latitude range

    NASA Technical Reports Server (NTRS)

    Tucker, B. J.; Maroulis, P. J.; Bandy, A. R.

    1985-01-01

    The mean value obtained from 52 free tropospheric measurements of CS2 over the 45 deg N-45 deg S latitude range was 5.7 pptv, with standard deviation and standard error of 1.9 and 0.3 pptv, respectively. Large fluctuations in the CS2 concentration are observed which reflect the apparent short atmospheric residence time and inhomogeneities in the surface sources of CS2. The amounts of CS2 in the Northern and Southern Hemispheres are statistically equal.
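
    As a quick plausibility check of the reported values, the standard error of a mean of n independent measurements is SD/√n; a one-line sketch using the figures quoted above:

```python
# Consistency check of the reported standard error: SE = SD / sqrt(n).
import math
sd, n = 1.9, 52
print(round(sd / math.sqrt(n), 2))  # 0.26 pptv, consistent with the ~0.3 reported
```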

  19. Statistical tests for power-law cross-correlated processes

    NASA Astrophysics Data System (ADS)

    Podobnik, Boris; Jiang, Zhi-Qiang; Zhou, Wei-Xing; Stanley, H. Eugene

    2011-12-01

    For stationary time series, the cross-covariance and the cross-correlation as functions of time lag n serve to quantify the similarity of two time series. The latter measure is also used to assess whether the cross-correlations are statistically significant. For nonstationary time series, the analogous measures are detrended cross-correlation analysis (DCCA) and the recently proposed detrended cross-correlation coefficient, ρDCCA(T,n), where T is the total length of the time series and n the window size. For ρDCCA(T,n), we numerically verified the Cauchy inequality -1≤ρDCCA(T,n)≤1. Here we derive -1≤ρDCCA(T,n)≤1 for a standard variance-covariance approach and for a detrending approach. For overlapping windows, we find the range of ρDCCA within which the cross-correlations become statistically significant. For overlapping windows we numerically determine, and for nonoverlapping windows we derive, that the standard deviation of ρDCCA(T,n) tends with increasing T to 1/T. Using ρDCCA(T,n) we show that the Chinese financial market's tendency to follow the U.S. market is extremely weak. We also propose an additional statistical test that can be used to quantify the existence of cross-correlations between two power-law correlated time series.
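
    A minimal sketch of the detrended cross-correlation coefficient for one window size n, using non-overlapping windows and linear detrending; the coupled series are synthetic, and the paper's treatment of overlapping windows and significance thresholds is not reproduced here.

```python
# rho_DCCA(n): detrended covariance of two integrated profiles, normalized by
# the detrended variances (DFA fluctuations) of each profile.
import numpy as np

def rho_dcca(x: np.ndarray, y: np.ndarray, n: int) -> float:
    X = np.cumsum(x - x.mean())            # integrated profiles
    Y = np.cumsum(y - y.mean())
    n_boxes = len(X) // n
    f2_xy = f2_xx = f2_yy = 0.0
    t = np.arange(n)
    for b in range(n_boxes):
        xs = X[b * n:(b + 1) * n]
        ys = Y[b * n:(b + 1) * n]
        # Residuals after removing a linear local trend in each window.
        rx = xs - np.polyval(np.polyfit(t, xs, 1), t)
        ry = ys - np.polyval(np.polyfit(t, ys, 1), t)
        f2_xy += np.mean(rx * ry)
        f2_xx += np.mean(rx * rx)
        f2_yy += np.mean(ry * ry)
    return f2_xy / np.sqrt(f2_xx * f2_yy)

rng = np.random.default_rng(1)
z = rng.standard_normal(10_000)
x = z + 0.5 * rng.standard_normal(10_000)  # two series sharing a common component
y = z + 0.5 * rng.standard_normal(10_000)
print(rho_dcca(x, y, n=64))                # bounded in [-1, 1] as stated above
```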

  20. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: selecting the correct statistical test and data mining method depends highly on the measurement scale of data, type of variables, and purpose of the analysis. Different measurement scales are studied in details and statistical comparison, modeling, and data mining methods are studied based upon using several medical examples. We have presented two ordinal–variables clustering examples, as more challenging variable in analysis, using Wisconsin Breast Cancer Data (WBCD). Ordinal-to-Interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: by using appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is granted. Moreover, descriptive and inferential statistics in addition to modeling approach must be selected based on the scale of the variables. PMID:24672565

  1. Applications of the DOE/NASA wind turbine engineering information system

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1981-01-01

    A statistical analysis of data obtained from the Technology and Engineering Information Systems was made. The systems analyzed consist of the following elements: (1) sensors which measure critical parameters (e.g., wind speed and direction, output power, blade loads and component vibrations); (2) remote multiplexing units (RMUs) on each wind turbine which frequency-modulate, multiplex and transmit sensor outputs; (3) on-site instrumentation to record, process and display the sensor output; and (4) statistical analysis of data. Two examples of the capabilities of these systems are presented. The first illustrates the standardized format for application of statistical analysis to each directly measured parameter. The second shows the use of a model to estimate the variability of the rotor thrust loading, which is a derived parameter.

  2. Data precision of X-ray fluorescence (XRF) scanning of discrete samples with the ITRAX XRF core-scanner exemplified on loess-paleosol samples

    NASA Astrophysics Data System (ADS)

    Profe, Jörn; Ohlendorf, Christian

    2017-04-01

    XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried, and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and, at the same time, documents a new application of the XRF core-scanner technology. Reliable interpretation of XRF results requires evaluating the data precision of single elements as a function of X-ray tube, measurement time, sample compaction, and quality of peak fitting. Data precision was established by measuring each sample ten times. The data precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the lowest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti, and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
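
    For counting measurements governed by Poisson statistics, as noted above, the expected relative standard deviation of a peak of N counts is 1/√N. A small sketch; the count value is hypothetical, chosen to land near the 0.5-1% floor reported for Fe and Ca:

```python
# Poisson counting limit on precision: RSD(%) = 100 / sqrt(N).
import math
counts = 40_000                              # hypothetical net peak counts
poisson_rsd = 100.0 / math.sqrt(counts)
print(f"Poisson-limited RSD = {poisson_rsd:.2f}%")  # 0.50%
```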

  3. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

    Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult, and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem: The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included the number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application-phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.
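
    A minimal sketch of how such benchmarks might be applied with simple process-control tools: compute Shewhart-style 3-sigma limits from derivation-phase groups and flag an application-phase group that falls outside them. All numbers below are illustrative, not data from the study.

```python
# Shewhart-style control limits derived from simulated benchmark groups.
import numpy as np

derivation_los = np.array([42, 38, 45, 40, 37, 44, 41, 39, 43, 46])  # median LOS (min)
mean, sd = derivation_los.mean(), derivation_los.std(ddof=1)
ucl, lcl = mean + 3 * sd, mean - 3 * sd      # upper/lower 3-sigma control limits

application_los = 55.0                        # hypothetical application-phase result
flag = "within" if lcl <= application_los <= ucl else "outside"
print(f"benchmark {mean:.1f} (LCL {lcl:.1f}, UCL {ucl:.1f}); group is {flag} limits")
```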

  4. Methods in pharmacoepidemiology: a review of statistical analyses and data reporting in pediatric drug utilization studies.

    PubMed

    Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio

    2013-03-01

    To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.

  5. The effect of sensor sheltering and averaging techniques on wind measurements at the Shuttle Landing Facility

    NASA Technical Reports Server (NTRS)

    Merceret, Francis J.

    1995-01-01

    This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one second intervals from 30-feet (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.

  6. The Prediction Properties of Inverse and Reverse Regression for the Simple Linear Calibration Problem

    NASA Technical Reports Server (NTRS)

    Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.

    2010-01-01

    The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
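
    A minimal sketch contrasting the two approaches on synthetic data: the classical route fits readings on standards and inverts the fit to recover an unknown, while reverse regression fits standards on readings and predicts directly. The data, slope, and intercept are illustrative assumptions, not values from the paper.

```python
# Forward/inverse calibration versus reverse regression on synthetic standards.
import numpy as np

rng = np.random.default_rng(2)
standards = np.linspace(1.0, 10.0, 10)                       # known reference values
readings = 0.3 + 1.05 * standards + rng.normal(0, 0.05, 10)  # instrument response

# Forward regression (readings on standards), then inversion.
b1, b0 = np.polyfit(standards, readings, 1)   # readings = b0 + b1 * standard
new_reading = 6.2
x_inverse = (new_reading - b0) / b1

# Reverse regression: treat the standards as the response.
d1, d0 = np.polyfit(readings, standards, 1)
x_reverse = d0 + d1 * new_reading

print(f"inverse estimate: {x_inverse:.3f}, reverse estimate: {x_reverse:.3f}")
```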

  7. Tolerancing aspheres based on manufacturing statistics

    NASA Astrophysics Data System (ADS)

    Wickenhagen, S.; Möhl, A.; Fuchs, U.

    2017-11-01

    A standard way of tolerancing optical elements or systems is to perform a Monte Carlo based analysis within a common optical design software package. Although different weightings and distributions may be assumed, all of these approaches rely on statistics, which usually means several hundred or several thousand systems are needed for reliable results. Thus, employing these methods for small batch sizes is unreliable, especially when aspheric surfaces are involved. The extensive database of asphericon was used to investigate the correlation between given tolerance values and measured data sets. The resulting probability distributions of these measured data were analyzed with the aim of establishing a robust optical tolerancing process.

  8. Mutual information and phase dependencies: measures of reduced nonlinear cardiorespiratory interactions after myocardial infarction.

    PubMed

    Hoyer, Dirk; Leder, Uwe; Hoyer, Heike; Pompe, Bernd; Sommer, Michael; Zwiener, Ulrich

    2002-01-01

    The heart rate variability (HRV) is related to several mechanisms of complex autonomic functioning, such as respiratory heart rate modulation and phase dependencies between heart beat cycles and breathing cycles. The underlying processes are basically nonlinear. In order to understand and quantitatively assess those physiological interactions, an adequate coupling analysis is necessary. We hypothesized that nonlinear measures of HRV and cardiorespiratory interdependencies are superior to the standard HRV measures in classifying patients after acute myocardial infarction. We introduced mutual information measures, which provide access to nonlinear interdependencies, as a counterpart to classical linear correlation analysis. The nonlinear statistical autodependencies of HRV were quantified by auto mutual information, and the respiratory heart rate modulation by cardiorespiratory cross mutual information. The phase interdependencies between heart beat cycles and breathing cycles were assessed based on the histograms of the frequency ratios of the instantaneous heart beat and respiratory cycles. Furthermore, the relative duration of phase-synchronized intervals was acquired. We investigated 39 patients after acute myocardial infarction and 24 controls. The discrimination of these groups was improved by cardiorespiratory cross mutual information measures and phase interdependency measures in comparison to the linear standard HRV measures. This result was statistically confirmed by means of logistic regression models of particular variable subsets and their receiver operating characteristics.
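
    A minimal histogram-based sketch of the mutual information estimate that underlies measures of this kind; the bin count and the coupled series are illustrative assumptions, and the paper's exact estimator details are not reproduced here.

```python
# Mutual information from a 2D histogram: sum of p(x,y) * log(p(x,y)/(p(x)p(y))).
import numpy as np

def mutual_information(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                         # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(3)
rr = rng.standard_normal(5_000)                       # stand-in for beat intervals
resp = 0.7 * rr + 0.7 * rng.standard_normal(5_000)    # coupled "respiration" signal
print(mutual_information(rr, resp))                   # > 0 nats for coupled series
```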

  9. New output improvements for CLASSY

    NASA Technical Reports Server (NTRS)

    Rassbach, M. E. (Principal Investigator)

    1981-01-01

    Additional output data and formats for the CLASSY clustering algorithm were developed. Four such aids to the CLASSY user are described. These are: (1) statistical measures; (2) special map types; (3) formats for standard output; and (4) special cluster display method.

  10. Using Group Projects to Assess the Learning of Sampling Distributions

    ERIC Educational Resources Information Center

    Neidigh, Robert O.; Dunkelberger, Jake

    2012-01-01

    In an introductory business statistics course, student groups used sample data to compare a set of sample means to the theoretical sampling distribution. Each group was given a production measurement with a population mean and standard deviation. The groups were also provided an Excel spreadsheet with 40 sample measurements per week for 52 weeks…
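
    A short sketch of the comparison such projects carry out, with hypothetical population parameters: simulate 52 weekly samples of 40 measurements and compare the observed spread of the sample means with the theoretical standard error σ/√n.

```python
# Sampling distribution of the mean: observed weekly means vs N(mu, sigma/sqrt(n)).
import numpy as np

rng = np.random.default_rng(4)
mu, sigma, n, weeks = 100.0, 8.0, 40, 52     # hypothetical production spec
weekly_means = rng.normal(mu, sigma, (weeks, n)).mean(axis=1)

theoretical_se = sigma / np.sqrt(n)
print(f"theoretical: mean {mu}, SE {theoretical_se:.3f}")
print(f"observed:    mean {weekly_means.mean():.2f}, SD {weekly_means.std(ddof=1):.3f}")
```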

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.

    In the standard practice of neutron multiplicity counting (NMC), the first three sampled factorial moments of the event-triggered neutron count distribution are used to quantify the three main neutron source terms: the effective mass of spontaneous fissile material, the relative (α,n) production, and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  12. Shot Group Statistics for Small Arms Applications

    DTIC Science & Technology

    2017-06-01

    standard deviation. Analysis is presented as applied to one n-round shot group and then is extended to treat multiple n-round shot groups. A dispersion measure for multiple n-round shot groups can be constructed by selecting one of the dispersion measures listed above and measuring the dispersion of…

  13. An Overview of Interrater Agreement on Likert Scales for Researchers and Practitioners

    PubMed Central

    O'Neill, Thomas A.

    2017-01-01

    Applications of interrater agreement (IRA) statistics for Likert scales are plentiful in research and practice. IRA may be implicated in job analysis, performance appraisal, panel interviews, and any other approach to gathering systematic observations. Any rating system involving subject-matter experts can also benefit from IRA as a measure of consensus. Further, IRA is fundamental to aggregation in multilevel research, which is becoming increasingly common in order to address nesting. Although several technical descriptions of a few specific IRA statistics exist, this paper aims to provide a tractable orientation to common IRA indices to support application. The introductory overview is written with the intent of facilitating contrasts among IRA statistics by critically reviewing equations, interpretations, strengths, and weaknesses. Statistics considered include rwg, rwg*, r′wg, rwg(p), average deviation (AD), awg, standard deviation (Swg), and the coefficient of variation (CVwg). Equations support quick calculation and contrasting of different agreement indices. The article also includes a "quick reference" table and three figures in order to help readers identify how IRA statistics differ and how interpretations of IRA will depend strongly on the statistic employed. A brief consideration of recommended practices involving statistical and practical cutoff standards is presented, and conclusions are offered in light of the current literature. PMID:28553257
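
    As an illustration of the family of indices reviewed, a minimal sketch of rwg for a single item: one minus the ratio of the observed variance of judges' ratings to the uniform-null variance (A² − 1)/12 for an A-point scale. The ratings below are hypothetical.

```python
# rwg for a single item on an A-point Likert scale (James, Demaree & Wolf form).
import numpy as np

def rwg(ratings: np.ndarray, scale_points: int) -> float:
    observed_var = ratings.var(ddof=1)
    null_var = (scale_points**2 - 1) / 12.0   # variance of a uniform null on 1..A
    return 1.0 - observed_var / null_var

judges = np.array([4, 5, 4, 4, 5, 4])          # hypothetical ratings, 5-point scale
print(f"rwg = {rwg(judges, scale_points=5):.2f}")  # near 1 indicates strong agreement
```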

  14. Morphology of the Corneal Limbus Following Standard and Accelerated Corneal Collagen Cross-Linking (9 mW/cm2) for Keratoconus.

    PubMed

    Uçakhan, Ömür Ö; Bayraktutar, Betül

    2017-01-01

    To evaluate the morphological features of the corneal limbus as measured by in vivo confocal microscopy (IVCM) following standard and accelerated corneal collagen cross-linking (CXL) for keratoconus. Patients with progressive keratoconus scheduled to undergo standard CXL (group 1; 31 patients, 3 mW/cm², 370 nm, 30 minutes), or accelerated CXL (group 2; 20 patients, 9 mW/cm², 370 nm, 10 minutes) in the worse eye were included in this prospective study. Thirty eyes of 30 age-matched patients served as controls (group 3). All patient eyes underwent IVCM scanning of the central cornea and the inferior limbal area at baseline and 1, 3, and 6 months after CXL. After CXL, epithelial regrowth was complete by day 4 in both groups 1 and 2. There were no statistically significant differences between the baseline mean central corneal wing or basal cell density, limbus-palisade middle or basal cell densities of groups 1, 2, or 3. At postoperative months 1, 3, and 6, there were no statistically significant differences in either central or limbus-palisade epithelial cell densities or diameters in keratoconic eyes that underwent standard or accelerated CXL (P > 0.05). The morphology of the limbal cells was preserved as well. The morphology of limbus structures seems to be preserved following standard and accelerated CXL in short-term follow-up, as measured using IVCM.

  15. Feasibility of Coherent and Incoherent Backscatter Experiments from the AMPS Laboratory. Technical Section

    NASA Technical Reports Server (NTRS)

    Mozer, F. S.

    1976-01-01

    A computer program simulated the spectrum which resulted when a radar signal was transmitted into the ionosphere for a finite time and received for an equal finite interval. The spectrum derived from this signal is statistical in nature because the signal is scattered from the ionosphere, which is statistical in nature. Many estimates of any property of the ionosphere can be made. Their average value will approach the average property of the ionosphere which is being measured. Due to the statistical nature of the spectrum itself, the estimators will vary about this average. The square root of the variance about this average is called the standard deviation, an estimate of the error which exists in any particular radar measurement. In order to determine the feasibility of the space shuttle radar, the magnitude of these errors for measurements of physical interest must be understood.

  16. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    PubMed

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014.

  17. The reliability of the newly developed bending tester for the measurement of flexural rigidity of textile materials

    NASA Astrophysics Data System (ADS)

    Haji Musa, A. Binti; Malengier, B.; Van Langenhove, L.; Stevens, C.

    2017-10-01

    A new automated bending tester was developed at Ghent University, Belgium, to reduce human interference in bending measurement. This paper reports the investigations made on the tester in order to confirm the reliability of its measurements. To that end, 11 types of fabrics with different construction parameters were tested for bending length and flexural rigidity using the new bending tester, and the results were compared with those of the standard (manual) bending tester, obtained in accordance with the BS 3356:1990 standard method. Statistical analysis confirms that the two sets of measurements are strongly correlated, with Pearson's R ≥ 0.90 for all measurements made, meaning that the results from the new automated tester agree well with the standard measurement. Nevertheless, this prototype version of the new tester still needs adjustment to optimise its functionality, and further investigation should be done to confirm the robustness of the results.

  18. An ROC-type measure of diagnostic accuracy when the gold standard is continuous-scale.

    PubMed

    Obuchowski, Nancy A

    2006-02-15

    ROC curves and summary measures of accuracy derived from them, such as the area under the ROC curve, have become the standard for describing and comparing the accuracy of diagnostic tests. Methods for estimating ROC curves rely on the existence of a gold standard which dichotomizes patients into disease present or absent. There are, however, many examples of diagnostic tests whose gold standards are not binary-scale, but rather continuous-scale. Unnatural dichotomization of these gold standards leads to bias and inconsistency in estimates of diagnostic accuracy. In this paper, we propose a non-parametric estimator of diagnostic test accuracy which does not require dichotomization of the gold standard. This estimator has an interpretation analogous to the area under the ROC curve. We propose a confidence interval for test accuracy and a statistical test for comparing accuracies of tests from paired designs. We compare the performance (i.e. CI coverage, type I error rate, power) of the proposed methods with several alternatives. An example is presented where the accuracies of two quick blood tests for measuring serum iron concentrations are estimated and compared.
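
    To make the idea concrete, here is a hedged sketch of a concordance-type accuracy estimate in the spirit described above: over all patient pairs whose gold-standard values differ, score 1 when the test orders the pair the same way and 0.5 when the test is tied. This is an illustration of the general construction, not a reproduction of the paper's estimator or its variance formulas; the data are hypothetical.

```python
# Concordance-type accuracy for a continuous-scale gold standard.
import itertools
import numpy as np

def concordance(test: np.ndarray, gold: np.ndarray) -> float:
    score, pairs = 0.0, 0
    for i, j in itertools.combinations(range(len(gold)), 2):
        if gold[i] == gold[j]:
            continue                          # uninformative pair
        pairs += 1
        if test[i] == test[j]:
            score += 0.5                      # test tie gets half credit
        elif (test[i] - test[j]) * (gold[i] - gold[j]) > 0:
            score += 1.0                      # concordant ordering
    return score / pairs

gold = np.array([10.0, 14.0, 18.0, 25.0, 31.0])   # e.g. serum iron, reference assay
quick = np.array([11.0, 13.0, 20.0, 24.0, 28.0])  # hypothetical quick-test values
print(f"estimated accuracy = {concordance(quick, gold):.2f}")
```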

  19. Cost-Effectiveness Analysis: a proposal of new reporting standards in statistical analysis

    PubMed Central

    Bang, Heejung; Zhao, Hongwei

    2014-01-01

    Cost-effectiveness analysis (CEA) is a method for evaluating the outcomes and costs of competing strategies designed to improve health, and has been applied to a variety of different scientific fields. Yet, there are inherent complexities in cost estimation and CEA from statistical perspectives (e.g., skewness, bi-dimensionality, and censoring). The incremental cost-effectiveness ratio that represents the additional cost per one unit of outcome gained by a new strategy has served as the most widely accepted methodology in the CEA. In this article, we call for expanded perspectives and reporting standards reflecting a more comprehensive analysis that can elucidate different aspects of available data. Specifically, we propose that mean and median-based incremental cost-effectiveness ratios and average cost-effectiveness ratios be reported together, along with relevant summary and inferential statistics as complementary measures for informed decision making. PMID:24605979
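
    For reference, the incremental cost-effectiveness ratio named above is the cost difference divided by the effectiveness difference between strategies. A minimal sketch with illustrative numbers, also printing the average cost-effectiveness ratios the authors suggest reporting alongside it:

```python
# ICER and ACERs with hypothetical costs and effectiveness (e.g., QALYs).
def icer(cost_new, eff_new, cost_old, eff_old):
    """Additional cost per one unit of outcome gained by the new strategy."""
    return (cost_new - cost_old) / (eff_new - eff_old)

c_new, e_new = 52_000.0, 6.5    # hypothetical mean cost and QALYs, new strategy
c_old, e_old = 40_000.0, 6.0    # hypothetical mean cost and QALYs, comparator

print(f"ICER = {icer(c_new, e_new, c_old, e_old):,.0f} per QALY")   # 24,000
print(f"ACER (new) = {c_new / e_new:,.0f}, ACER (old) = {c_old / e_old:,.0f}")
```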

  20. Descriptive statistics.

    PubMed

    Nick, Todd G

    2007-01-01

    Statistics is defined by the Medical Subject Headings (MeSH) thesaurus as the science and art of collecting, summarizing, and analyzing data that are subject to random variation. The two broad categories of summarizing and analyzing data are referred to as descriptive and inferential statistics. This chapter considers the science and art of summarizing data where descriptive statistics and graphics are used to display data. In this chapter, we discuss the fundamentals of descriptive statistics, including describing qualitative and quantitative variables. For describing quantitative variables, measures of location and spread, for example the standard deviation, are presented along with graphical presentations. We also discuss distributions of statistics, for example the variance, as well as the use of transformations. The concepts in this chapter are useful for uncovering patterns within the data and for effectively presenting the results of a project.

  1. Relationship between preventable hospital deaths and other measures of safety: an exploratory study.

    PubMed

    Hogan, Helen; Healey, Frances; Neale, Graham; Thomson, Richard; Vincent, Charles; Black, Nick

    2014-06-01

    To explore associations between the proportion of hospital deaths that are preventable and other measures of safety. Retrospective case record review to provide estimates of preventable death proportions. Simple monotonic correlations were assessed using Spearman's rank correlation coefficient to establish the relationship with eight other measures of patient safety. Ten English acute hospital trusts. One thousand patients who died during 2009. The proportion of preventable deaths varied between hospitals (3-8%), but the variation was not statistically significant (P = 0.94). Only one of the eight measures of safety (Methicillin-resistant Staphylococcus aureus bacteraemia rate) was clinically and statistically significantly associated with the preventable death proportion (r = 0.73; P < 0.02). There were no significant associations with the other measures, including hospital standardized mortality ratios (r = -0.01). There was a suggestion that preventable deaths may be more strongly associated with some other outcome measures than with process or structure measures. The exploratory nature of this study inevitably limited its power to provide definitive results. The observed relationships between safety measures suggest that a larger, more powerful study is needed to establish the inter-relationship of different measures of safety (structure, process, and outcome), in particular the widely used standardized mortality ratios. © The Author 2014. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  2. Results of module electrical measurement of the DOE 46-kilowatt procurement

    NASA Technical Reports Server (NTRS)

    Curtis, H. B.

    1978-01-01

    Current-voltage measurements have been made on terrestrial solar cell modules of the DOE/JPL Low Cost Silicon Solar Array procurement. Data on short circuit current, open circuit voltage, and maximum power for the four types of modules are presented in normalized form, showing distribution of the measured values. Standard deviations from the mean values are also given. Tests of the statistical significance of the data are discussed.

  3. [Reliability and reproducibility of the Fitzpatrick phototype scale for skin sensitivity to ultraviolet light].

    PubMed

    Sánchez, Guillermo; Nova, John; Arias, Nilsa; Peña, Bibiana

    2008-12-01

    The Fitzpatrick phototype scale has been used to determine skin sensitivity to ultraviolet light. The reliability of this scale in estimating sensitivity permits the evaluation of skin cancer risk based on phototype. Reliability and changes in intra- and inter-observer concordance were determined for the Fitzpatrick phototype scale after the assessment methods for establishing the phototype were standardized. An analytical study of intra- and inter-observer concordance was performed. The Fitzpatrick phototype scale was standardized using focus group methodology. To determine intra- and inter-observer agreement, the weighted kappa statistic was applied. The standardization effect was measured using the equal-kappa contrast hypothesis and the Wald test for dependent measurements. The phototype scale was applied to 155 patients over 15 years of age who were assessed four times by two independent observers. The sample was drawn from patients of the Centro Dermatológico Federico Lleras Acosta. During the pre-standardization phase, the baseline and six-week inter-observer weighted kappas were 0.31 and 0.40, respectively. The intra-observer kappa values for observers A and B were 0.47 and 0.51, respectively. After the standardization process, the baseline and six-week inter-observer weighted kappa values were 0.77 and 0.82, respectively. Intra-observer kappa coefficients for observers A and B were 0.78 and 0.82. Statistically significant differences were found between coefficients before and after standardization (p<0.001) in all comparisons. Following a standardization exercise, the Fitzpatrick phototype scale yielded reliable, reproducible, and consistent results.
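
    A minimal sketch of the linearly weighted kappa used in the study, computed from the cross-tabulation of two observers' ordinal ratings; the ratings below are hypothetical six-category phototype assignments (indices 0-5 for types I-VI).

```python
# Linearly weighted kappa: 1 - (weighted observed disagreement / weighted
# chance disagreement), with |i - j| / (k - 1) disagreement weights.
import numpy as np

def weighted_kappa(r1: np.ndarray, r2: np.ndarray, k: int) -> float:
    observed = np.zeros((k, k))
    for a, b in zip(r1, r2):
        observed[a, b] += 1
    observed /= observed.sum()
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    i, j = np.indices((k, k))
    weights = np.abs(i - j) / (k - 1)         # linear disagreement weights
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

obs_a = np.array([1, 2, 2, 3, 4, 1, 0, 3, 2, 4])  # hypothetical observer A
obs_b = np.array([1, 2, 3, 3, 4, 1, 1, 3, 2, 5])  # hypothetical observer B
print(f"weighted kappa = {weighted_kappa(obs_a, obs_b, k=6):.2f}")
```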

  4. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates; a sketch of this weighting appears below. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
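    The weighting described above is a standard inverse-variance combination. The sketch below illustrates it with made-up numbers; the variable names and values are hypothetical, not from the report:

```python
# Minimal sketch: combine two independent streamflow-statistic estimates
# by inverse-variance weighting, as described above. Values are illustrative.

def weighted_estimate(x1: float, var1: float, x2: float, var2: float) -> float:
    """Inverse-variance weighted mean of two independent estimates."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * x1 + w2 * x2) / (w1 + w2)

# Hypothetical 7-day, 10-year low-flow estimates (cfs) from the flow-ratio
# and MOVE1 methods, with their estimated variances
combined = weighted_estimate(x1=12.4, var1=4.0, x2=10.8, var2=2.5)
print(round(combined, 2))  # lies between the inputs, closer to the lower-variance one
```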

  5. Verification of calculated skin doses in postmastectomy helical tomotherapy.

    PubMed

    Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth

    2011-10-01

    To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
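    For readers who want to reproduce this style of summary, the sketch below computes a mean percent difference, its standard deviation, and the standard error of the mean for paired measured/calculated doses; the arrays are illustrative, not the study's data:

```python
# Sketch of the summary statistics reported above: mean percent difference,
# standard deviation, and standard error of the mean for paired TLD
# measurement/calculation data. The dose arrays are invented.
import numpy as np

measured   = np.array([196.0, 201.5, 188.2, 205.0, 199.1])  # cGy, hypothetical
calculated = np.array([200.0, 204.0, 195.0, 206.5, 203.0])  # cGy, hypothetical

pct_diff = 100.0 * (measured - calculated) / calculated
mean_diff = pct_diff.mean()
sd = pct_diff.std(ddof=1)              # sample standard deviation
sem = sd / np.sqrt(pct_diff.size)      # standard error of the mean
print(f"{mean_diff:.1f}% +/- {sem:.1f}% (SD {sd:.1f}%)")
```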

  6. Verification of Calculated Skin Doses in Postmastectomy Helical Tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ito, Shima; Parker, Brent C., E-mail: bcparker@marybird.com; Mary Bird Perkins Cancer Center, Baton Rouge, LA

    2011-10-01

    Purpose: To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). Methods and Materials: In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. Results: The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. Conclusions: The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%.

  7. Generalized t-statistic for two-group classification.

    PubMed

    Komori, Osamu; Eguchi, Shinto; Copas, John B

    2015-06-01

    In the classic discriminant model of two multivariate normal distributions with equal variance matrices, the linear discriminant function is optimal both in terms of the log likelihood ratio and in terms of maximizing the standardized difference (the t-statistic) between the means of the two distributions. In a typical case-control study, normality may be sensible for the control sample but heterogeneity and uncertainty in diagnosis may suggest that a more flexible model is needed for the cases. We generalize the t-statistic approach by finding the linear function which maximizes a standardized difference but with data from one of the groups (the cases) filtered by a possibly nonlinear function U. We study conditions for consistency of the method and find the function U which is optimal in the sense of asymptotic efficiency. Optimality may also extend to other measures of discriminatory efficiency such as the area under the receiver operating characteristic curve. The optimal function U depends on a scalar probability density function which can be estimated non-parametrically using a standard numerical algorithm. A lasso-like version for variable selection is implemented by adding L1-regularization to the generalized t-statistic. Two microarray data sets in the study of asthma and various cancers are used as motivating examples. © 2014, The International Biometric Society.
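    As background, the classical case that this paper generalizes can be computed directly: with equal covariance matrices, the Fisher direction w = S⁻¹(m₁ − m₀) maximizes the standardized (t-statistic) difference between the group means. The sketch below shows this baseline on synthetic data; it is not the authors' U-filtered generalization:

```python
# Sketch of the classical case the paper generalizes: with equal covariance
# matrices, the Fisher direction w = S^{-1}(m1 - m0) maximizes the
# standardized difference (t-statistic) between the two group means.
# Synthetic data; NOT the authors' generalized, U-filtered estimator.
import numpy as np

rng = np.random.default_rng(0)
controls = rng.multivariate_normal([0, 0], [[1, 0.3], [0.3, 1]], size=200)
cases    = rng.multivariate_normal([1, 0.5], [[1, 0.3], [0.3, 1]], size=200)

pooled_cov = ((len(controls) - 1) * np.cov(controls, rowvar=False)
              + (len(cases) - 1) * np.cov(cases, rowvar=False)) / (len(controls) + len(cases) - 2)
w = np.linalg.solve(pooled_cov, cases.mean(axis=0) - controls.mean(axis=0))

# Standardized difference of the projected scores
t = ((cases @ w).mean() - (controls @ w).mean()) / np.sqrt(w @ pooled_cov @ w)
print(w, t)
```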

  8. Cosmic distance duality and cosmic transparency

    NASA Astrophysics Data System (ADS)

    Nair, Remya; Jhingan, Sanjay; Jain, Deepak

    2012-12-01

    We compare distance measurements obtained from two distance indicators: supernova observations (standard candles) and baryon acoustic oscillation (BAO) data (standard rulers). The Union2 sample of supernovae with BAO data from SDSS, 6dFGS and the latest BOSS and WiggleZ surveys is used in a search for deviations from the distance duality relation. We find that the supernovae are brighter than expected from BAO measurements. The luminosity distances tend to be smaller than expected from angular diameter distance estimates, as also found in earlier works on distance duality, but the trend is not statistically significant. This further constrains cosmic transparency.
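    The relation being tested is the Etherington distance-duality relation, written here in the notation standard in this literature:

```latex
% Etherington distance-duality relation: in any metric theory of gravity
% with photon number conservation, the luminosity distance d_L and the
% angular diameter distance d_A satisfy
\[
  \eta(z) \;\equiv\; \frac{d_L(z)}{(1+z)^{2}\, d_A(z)} \;=\; 1 ,
\]
% so cosmic opacity or other new physics would appear as \eta(z) \neq 1.
```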

  9. Efforts to improve international migration statistics: a historical perspective.

    PubMed

    Kraly, E P; Gnanasekaran, K S

    1987-01-01

    During the past decade, the international statistical community has made several efforts to develop standards for the definition, collection and publication of statistics on international migration. This article surveys the history of official initiatives to standardize international migration statistics by reviewing the recommendations of the International Statistical Institute, International Labor Organization, and the UN, and reports a recently proposed agenda for moving toward comparability among national statistical systems. Heightening awareness of the benefits of exchange and creating motivation to implement international standards requires a three-pronged effort from the international statistical community. First, it is essential to continue discussion about the significance of improvement, specifically standardization, of international migration statistics. The move from theory to practice in this area requires ongoing focus by migration statisticians so that conformity to international standards itself becomes a criterion by which national statistical practices are examined and assessed. Second, countries should be provided with technical documentation to support and facilitate the implementation of the recommended statistical systems. Documentation should be developed with an understanding that conformity to international standards for migration and travel statistics must be achieved within existing national statistical programs. Third, the call for statistical research in this area requires more effort from the community of migration statisticians, beginning with the mobilization of bilateral and multilateral resources to undertake the preceding list of activities.

  10. 75 FR 37245 - 2010 Standards for Delineating Metropolitan and Micropolitan Statistical Areas

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-28

    ... Micropolitan Statistical Areas; Notice. Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010... and Micropolitan Statistical Areas AGENCY: Office of Information and Regulatory Affairs, Office of... Statistical Areas. The 2010 standards replace and supersede the 2000 Standards for Defining Metropolitan and...

  11. Evaluation of the effectiveness of thoracic sympathectomy in the treatment of primary hyperhidrosis of hands and armpits using the measurement of skin resistance

    PubMed Central

    Jabłoński, Sławomir; Rzepkowska-Misiak, Beata; Piskorz, Łukasz; Brocki, Marian; Wcisło, Szymon; Smigielski, Jacek; Kordiak, Jacek

    2012-01-01

    Introduction Hyperhidrosis is excessive sweating beyond the needs of thermoregulation. It is a disease that mostly affects young people and often carries considerable socio-economic implications. Thoracic sympathectomy is now considered the "gold standard" in the treatment of idiopathic hyperhidrosis of the hands and armpits. Aim Assessment of the early effectiveness of thoracic sympathectomy using skin resistance measurements performed before surgery and in the postoperative period. Material and methods A group of 20 patients with idiopathic excessive sweating of the hands and armpits was enrolled in the study. Patients underwent two-stage thoracic sympathectomy with resection of the Th2-Th4 ganglia. Skin resistance measurements were made at six previously designated points on the day of surgery and on the first day after the operation. Results In all operated patients we obtained complete remission of symptoms on the first day after surgery. Inhibition of sweating was confirmed using the standard starch-iodine (Minor) test. At all measurement points we obtained a statistically significant increase in skin resistance, assuming p < 0.05. To check whether there was a statistically significant difference between the results before and after surgery, we used the Wilcoxon matched-pairs test. Conclusions Thoracic sympathectomy is an effective curative treatment for primary hyperhidrosis of the hands and armpits. The statistically significant increase in skin resistance in all cases makes it a good method for assessing the effectiveness of the surgery in the early postoperative period. PMID:23256019
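    A minimal sketch of this paired pre/post comparison, using SciPy's Wilcoxon signed-rank test on illustrative resistance values (not the study's data):

```python
# Hedged sketch of the paired before/after comparison described above:
# Wilcoxon signed-rank test on skin-resistance readings. Values are
# invented (kilo-ohms), purely for illustration.
from scipy.stats import wilcoxon

before = [120, 95, 110, 130, 105, 98, 115, 125]   # pre-operative resistance
after  = [310, 280, 295, 340, 300, 270, 320, 335]  # post-operative resistance

stat, p = wilcoxon(before, after)
print(f"W = {stat}, p = {p:.4f}")  # p < 0.05 -> significant change in resistance
```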

  12. Particle size distributions by transmission electron microscopy: an interlaboratory comparison case study

    PubMed Central

    Rice, Stephen B; Chan, Christopher; Brown, Scott C; Eschbach, Peter; Han, Li; Ensor, David S; Stefaniak, Aleksandr B; Bonevich, John; Vladár, András E; Hight Walker, Angela R; Zheng, Jiwen; Starnes, Catherine; Stromberg, Arnold; Ye, Jia; Grulke, Eric A

    2015-01-01

    This paper reports an interlaboratory comparison that evaluated a protocol for measuring and analysing the particle size distribution of discrete, metallic, spheroidal nanoparticles using transmission electron microscopy (TEM). The study was focused on automated image capture and automated particle analysis. NIST RM8012 gold nanoparticles (30 nm nominal diameter) were measured for area-equivalent diameter distributions by eight laboratories. Statistical analysis was used to (1) assess the data quality without using size distribution reference models, (2) determine reference model parameters for different size distribution reference models and non-linear regression fitting methods and (3) assess the measurement uncertainty of a size distribution parameter by using its coefficient of variation. The interlaboratory area-equivalent diameter mean, 27.6 nm ± 2.4 nm (computed based on a normal distribution), was quite similar to the area-equivalent diameter, 27.6 nm, assigned to NIST RM8012. The lognormal reference model was the preferred choice for these particle size distributions as, for all laboratories, its parameters had lower relative standard errors (RSEs) than the other size distribution reference models tested (normal, Weibull and Rosin–Rammler–Bennett). The RSEs for the fitted standard deviations were two orders of magnitude higher than those for the fitted means, suggesting that most of the parameter estimate errors were associated with estimating the breadth of the distributions. The coefficients of variation for the interlaboratory statistics also confirmed the lognormal reference model as the preferred choice. From quasi-linear plots, the typical range for good fits between the model and cumulative number-based distributions was 1.9 fitted standard deviations less than the mean to 2.3 fitted standard deviations above the mean. Automated image capture, automated particle analysis and statistical evaluation of the data and fitting coefficients provide a framework for assessing nanoparticle size distributions using TEM for image acquisition. PMID:26361398
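    As a rough illustration of the lognormal fitting and the relative standard errors (RSEs) discussed above, the sketch below fits a lognormal model from moments of the log-diameters and computes large-sample RSEs; the diameters are simulated, and the standard-error formulas assume approximate normality of log d:

```python
# Sketch, under simplifying assumptions: fit a lognormal reference model to
# particle diameters via the mean and SD of log-diameters, then report
# relative standard errors (RSEs) of the fitted parameters, mirroring the
# paper's comparison. The diameters are simulated, not interlaboratory data.
import numpy as np

rng = np.random.default_rng(1)
d = rng.lognormal(mean=np.log(27.6), sigma=0.08, size=500)  # nm, synthetic

logs = np.log(d)
mu_hat, s_hat = logs.mean(), logs.std(ddof=1)
n = logs.size
se_mu = s_hat / np.sqrt(n)               # SE of the fitted mean of log d
se_s  = s_hat / np.sqrt(2 * (n - 1))     # large-sample SE of the fitted SD

print(f"RSE(mean) = {100 * se_mu / mu_hat:.2f}%")
print(f"RSE(sd)   = {100 * se_s / s_hat:.2f}%")  # typically much larger, as reported
```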

  13. Building a Biomedical Cyberinfrastructure for Collaborative Research

    PubMed Central

    Schad, Peter A.; Mobley, Lee Rivers; Hamilton, Carol M.

    2018-01-01

    For the potential power of genome-wide association studies (GWAS) and translational medicine to be realized, the biomedical research community must adopt standard measures, vocabularies, and systems to establish an extensible biomedical cyberinfrastructure. Incorporating standard measures will greatly facilitate combining and comparing studies via meta-analysis, which is a means for deriving larger populations, needed for increased statistical power to detect less apparent and more complex associations (gene-environment interactions and polygenic gene-gene interactions). Incorporating consensus-based and well-established measures into various studies should reduce the variability across studies due to attributes of measurement, making findings across studies more comparable. This article describes two consensus-based approaches to establishing standard measures and systems: PhenX (consensus measures for Phenotypes and eXposures), and the Open Geospatial Consortium (OGC). National Institutes of Health support for these efforts has produced the PhenX Toolkit, an assembled catalog of standard measures for use in GWAS and other large-scale genomic research efforts, and the RTI Spatial Impact Factor Database (SIFD), a comprehensive repository of georeferenced variables and extensive metadata that conforms to OGC standards. The need for coordinated development of cyberinfrastructure to support collaboration and data interoperability is clear, and we discuss standard protocols for ensuring data compatibility and interoperability. Adopting a cyberinfrastructure that includes standard measures, vocabularies, and open-source systems architecture will enhance the potential of future biomedical and translational research. Establishing and maintaining the cyberinfrastructure will require a fundamental change in the way researchers think about study design, collaboration, and data storage and analysis. PMID:21521587

  14. Investigation of Magnetic Field Phenomena in the Ionosphere

    DTIC Science & Technology

    1979-01-01

    several days so that a statistical measure of comparison may be developed, i.e., how well the fluxgate magnetometer replicates the standard values. Because... Fig. 7 schematically shows these changes. 4) Transients in the sensor-to-amplifier lines have caused failures of the chopper transistor. Back to back... A weakness of this method is that the drop-out must be longer than 100 ms. However, drop-outs of durations shorter than this are statistically very small

  15. [Poverty and Health: The Living Standard Approach as a Supplementary Concept to Measure Relative Poverty. Results from the German Socio-Economic Panel (GSOEP 2011)].

    PubMed

    Pförtner, T-K

    2016-06-01

    A common indicator of the measurement of relative poverty is the disposable income of a household. Current research introduces the living standard approach as an alternative concept for describing and measuring relative poverty. This study compares both approaches with regard to the subjective health status of the German population, and provides theoretical implications for the use of the income and living standard approaches in health research. Analyses are based on the German Socio-Economic Panel (GSOEP) from the year 2011, which includes 12,290 private households and 21,106 survey members. Self-rated health was based on a subjective assessment of general health status. Income poverty is based on the equalised disposable income and is applied to a threshold of 60% of the median-based average income. A person is denoted as deprived (inadequate living standard) if 3 or more out of 11 living standard items are lacking for financial reasons. To calculate the discriminative power of both poverty indicators, descriptive analyses and stepwise logistic regression models were applied separately for men and women, adjusted for age, residence, nationality, educational level, occupational status and marital status. The results of the stepwise regression revealed a stronger poverty-health relationship for the living standard indicator. After adjusting for all control variables and the respective poverty indicator, income poverty was not statistically significantly associated with a poor subjective health status among men (OR Men: 1.33; 95% CI: 1.00-1.77) and women (OR Women: 0.98; 95% CI: 0.78-1.22). In contrast, the association between deprivation and subjective health status was statistically significant for men (OR Men: 2.00; 95% CI: 1.57-2.52) and women (OR Women: 2.11; 95% CI: 1.76-2.64). The results of the present study indicate that the income and living standard approaches measure different dimensions of poverty. Compared with the income approach, the living standard approach captures more severe shortages of material wealth and is relatively robust to gender differences. This study expands the current debate about complementary research on the association between poverty and health. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Measurement of the relationship between perceived and computed color differences

    NASA Astrophysics Data System (ADS)

    García, Pedro A.; Huertas, Rafael; Melgosa, Manuel; Cui, Guihua

    2007-07-01

    Using simulated data sets, we have analyzed some mathematical properties of different statistical measurements that have been employed in previous literature to test the performance of different color-difference formulas. Specifically, the properties of the combined index PF/3 (performance factor obtained as average of three terms), widely employed in current literature, have been considered. A new index named standardized residual sum of squares (STRESS), employed in multidimensional scaling techniques, is recommended. The main difference between PF/3 and STRESS is that the latter is simpler and allows inferences on the statistical significance of two color-difference formulas with respect to a given set of visual data.
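    A minimal sketch of the STRESS index as it is commonly defined in the color-difference literature (visual differences ΔV against computed differences ΔE, with a least-squares scaling factor F); the data values are illustrative:

```python
# Minimal sketch of the STRESS index as commonly defined in the
# color-difference literature: computed differences dE against visual
# differences dV, with an optimal scaling factor F absorbed into the fit.
import numpy as np

def stress(dE: np.ndarray, dV: np.ndarray) -> float:
    """STRESS in percent; 0 means perfect agreement between dE and dV."""
    F = np.sum(dE * dV) / np.sum(dV ** 2)        # least-squares scaling factor
    num = np.sum((dE - F * dV) ** 2)
    den = np.sum((F * dV) ** 2)
    return 100.0 * np.sqrt(num / den)

dE = np.array([1.2, 2.5, 0.8, 3.1, 1.9])  # computed color differences (illustrative)
dV = np.array([1.0, 2.3, 1.0, 2.8, 2.0])  # visual differences (illustrative)
print(f"STRESS = {stress(dE, dV):.1f}")
```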

  17. Using the Bootstrap Method to Evaluate the Critical Range of Misfit for Polytomous Rasch Fit Statistics.

    PubMed

    Seol, Hyunsoo

    2016-06-01

    The purpose of this study was to apply the bootstrap procedure to evaluate how bootstrapped confidence intervals (CIs) for polytomous Rasch fit statistics might differ according to sample size and test length in comparison with the rule-of-thumb critical value of misfit. A total of 25 simulated data sets were generated to fit the Rasch measurement model, and 1,000 replications were conducted to compute the bootstrapped CIs under each of the 25 testing conditions. The results showed that rule-of-thumb critical values for assessing the magnitude of misfit were not applicable, because the infit and outfit mean square error statistics showed different magnitudes of variability over testing conditions and the standardized fit statistics did not exactly follow the standard normal distribution. Further, the item and person misfit do not share the same critical range. Based on the results of the study, the bootstrapped CIs can be used to identify misfitting items or persons, as they offer a reasonable alternative solution, especially when the distributions of the infit and outfit statistics are not well known and depend on sample size. © The Author(s) 2016.
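    A generic percentile-bootstrap confidence interval of the kind the study applies to Rasch fit statistics can be sketched as follows; the "fit statistic" here is a stand-in mean square computed on simulated residuals, purely for illustration:

```python
# Generic percentile-bootstrap CI sketch. The statistic and data below are
# stand-ins for a polytomous Rasch (out)fit mean square, not the study's.
import numpy as np

rng = np.random.default_rng(42)
sq_residuals = rng.chisquare(df=1, size=300)  # stand-in squared standardized residuals

def fit_statistic(x: np.ndarray) -> float:
    return float(x.mean())  # an (out)fit mean-square-like statistic

boot = np.array([
    fit_statistic(rng.choice(sq_residuals, size=sq_residuals.size, replace=True))
    for _ in range(1000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped 95% CI: [{lo:.2f}, {hi:.2f}]")  # compare the observed statistic to this
```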

  18. Analysis of repeated measurement data in the clinical trials

    PubMed Central

    Singh, Vineeta; Rana, Rakesh Kumar; Singhal, Richa

    2013-01-01

    Statistics is an integral part of clinical trials. Elements of statistics span clinical trial design, data monitoring, analyses and reporting. A solid understanding of statistical concepts by clinicians improves the comprehension and the resulting quality of clinical trials. In biomedical research, researchers frequently use the t-test and ANOVA to compare means between groups of interest irrespective of the nature of the data. In clinical trials, however, data are recorded on the same patients more than twice. In such a situation, using standard ANOVA procedures is not appropriate, because they do not account for the dependencies between observations within subjects. To deal with such study data, repeated-measures ANOVA should be used. In this article the application of one-way repeated-measures ANOVA is demonstrated using the software SPSS (Statistical Package for Social Sciences) Version 15.0 on data collected at four time points (day 0, 15th day, 30th day, and 45th day) of a multicentre clinical trial conducted on Pandu Roga (~Iron Deficiency Anemia) with an Ayurvedic formulation, Dhatrilauha. PMID:23930038
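    The same one-way repeated-measures analysis can be sketched outside SPSS; the example below uses statsmodels' AnovaRM on invented long-format data with one within-subject factor (visit day):

```python
# Hedged sketch of a one-way repeated-measures ANOVA in Python rather than
# SPSS: statsmodels' AnovaRM on long-format data. The hemoglobin values,
# subject IDs, and time points are invented for illustration.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

data = pd.DataFrame({
    "subject": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3],
    "day":     [0, 15, 30, 45] * 3,
    "hb":      [8.1, 8.9, 9.6, 10.2,
                7.8, 8.4, 9.1, 9.8,
                8.5, 9.0, 9.9, 10.6],
})

res = AnovaRM(data, depvar="hb", subject="subject", within=["day"]).fit()
print(res.anova_table)  # F-test for the within-subject effect of day
```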

  19. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    Excerpt from the symbols table (truncated in the source): ... least squares regression; β: ratio of diameters, meter per meter (m/m); β: atomic oxygen to carbon ratio, mole...; ... consumption: gram per kilowatt hour, g/(kW·hr) = g·3.6⁻¹·10⁶·m⁻²·kg·s²; F: F-test statistic; f: frequency, hertz (Hz = s⁻¹); ... standard deviation; S: Sutherland constant, kelvin (K); SEE: standard estimate of error; T: absolute temperature...

  20. 40 CFR 1065.1005 - Symbols, abbreviations, acronyms, and units of measure.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    Excerpt from the symbols table (truncated in the source): ... least squares regression; β: ratio of diameters, meter per meter (m/m); β: atomic oxygen to carbon ratio, mole...; ... consumption: gram per kilowatt hour, g/(kW·hr) = g·3.6⁻¹·10⁶·m⁻²·kg·s²; F: F-test statistic; f: frequency, hertz (Hz = s⁻¹); ... standard deviation; S: Sutherland constant, kelvin (K); SEE: standard estimate of error; T: absolute temperature...

  1. Data-optimized source modeling with the Backwards Liouville Test–Kinetic method

    DOE PAGES

    Woodroffe, J. R.; Brito, T. V.; Jordanova, V. K.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution were used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. Our study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  2. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    NASA Astrophysics Data System (ADS)

    Dubi, C.; Croft, S.; Favalli, A.; Ocherashvili, A.; Pedersen, B.

    2017-12-01

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  3. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dubi, C.; Croft, S.; Favalli, A.

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  4. Estimating the mass variance in neutron multiplicity counting - A comparison of approaches

    DOE PAGES

    Dubi, C.; Croft, S.; Favalli, A.; ...

    2017-09-14

    In the standard practice of neutron multiplicity counting, the first three sampled factorial moments of the event triggered neutron count distribution are used to quantify the three main neutron source terms: the spontaneous fissile material effective mass, the relative (α,n) production and the induced fission source responsible for multiplication. This study compares three methods to quantify the statistical uncertainty of the estimated mass: the bootstrap method, propagation of variance through moments, and the statistical analysis of cycle data method. Each of the three methods was implemented on a set of four different NMC measurements, held at the JRC-laboratory in Ispra, Italy, sampling four different Pu samples in a standard Plutonium Scrap Multiplicity Counter (PSMC) well counter.

  5. Reducing the standard deviation in multiple-assay experiments where the variation matters but the absolute value does not.

    PubMed

    Echenique-Robba, Pablo; Nelo-Bazán, María Alejandra; Carrodeguas, José A

    2013-01-01

    When the value of a quantity x for a number of systems (cells, molecules, people, chunks of metal, DNA vectors, and so on) is measured, and the aim is to replicate the whole set again in different trials or assays, scientists often obtain quite different measurements despite their efforts at a near-equal design. As a consequence, some systems' averages present standard deviations that are too large to render statistically significant results. This work presents a novel correction method of very low mathematical and numerical complexity that can reduce the standard deviation of such results and increase their statistical significance. Two conditions are to be met: the inter-system variations of x matter while its absolute value does not, and a similar tendency in the values of x must be present in the different assays (in other words, the results corresponding to different assays must present a high linear correlation). We demonstrate the improvements this method offers with a cell biology experiment, but it can be applied to any problem that conforms to the described structure and requirements, in any quantitative scientific field that deals with data subject to uncertainty.
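    The paper's exact formula is not reproduced here, but a correction in the same spirit can be sketched: when only inter-system variation matters and the assays are highly correlated, removing each assay's own offset and scale before comparing systems shrinks the across-assay standard deviation. Everything below (the data and the normalization choice) is an assumption for illustration:

```python
# Hedged sketch in the spirit of the described correction (not necessarily
# the authors' exact method): standardize each assay so per-assay offset and
# gain differences vanish, then compare the across-assay spread per system.
import numpy as np

rng = np.random.default_rng(7)
true_signal = np.array([1.0, 1.4, 0.8, 2.0, 1.1])          # 5 systems
assays = np.stack([
    offset + gain * true_signal + rng.normal(0, 0.05, 5)   # per-assay offset/gain
    for offset, gain in [(0.0, 1.0), (0.5, 1.3), (-0.2, 0.9)]
])                                                          # shape: 3 assays x 5 systems

raw_sd = assays.std(axis=0, ddof=1)                         # across-assay SD per system
centered = (assays - assays.mean(axis=1, keepdims=True)) / assays.std(axis=1, keepdims=True)
corrected_sd = centered.std(axis=0, ddof=1)
print(raw_sd.mean(), corrected_sd.mean())  # corrected SDs are much smaller
```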

  6. Review of research designs and statistical methods employed in dental postgraduate dissertations.

    PubMed

    Shirahatti, Ravi V; Hegde-Shetiya, Sahana

    2015-01-01

    There is a need to evaluate the quality of postgraduate dissertations in dentistry submitted to the university in light of international reporting standards. We conducted this review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, and appropriate data presentation in postgraduate dental research, and of suggesting and recommending modifications. The public-access database of dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation, followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed based on international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used to describe the data. "In vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines of conservative dentistry (92%) and prosthodontics (75%) reported high numbers of in vitro research. The disciplines of oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), no mention of blinding, confounding variables, or calibration of measurements, misrepresentation of the data by inappropriate data presentation, errors in reporting probability values, and failure to report confidence intervals. A few studies showed grossly inappropriate choices of statistical tests, and many studies needed additional tests. Overall, the observations indicated the need to comply with standard guidelines for reporting research.

  7. Statistical similarity measures for link prediction in heterogeneous complex networks

    NASA Astrophysics Data System (ADS)

    Shakibian, Hadi; Charkari, Nasrollah Moghadam

    2018-07-01

    The majority of link prediction measures in heterogeneous complex networks rely on node connectivities, while less attention has been paid to the importance of the nodes and paths. In this paper, we propose some new meta-path-based statistical similarity measures to properly perform the link prediction task. The main idea of the proposed measures is to derive co-occurrence events, collected in a number of co-occurrence matrices, between the nodes visited along a meta-path. The extracted co-occurrence matrices are analyzed in terms of the energy, inertia, local homogeneity, correlation, and information measure of correlation to determine various information-theoretic measures. We evaluate the proposed measures, denoted as link energy, link inertia, link local homogeneity, link correlation, and link information measure of correlation, using a standard DBLP network data set. The results of the AUC score and Precision rate indicate the validity and accuracy of the proposed measures in comparison to the popular meta-path-based similarity measures.
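    The named statistics are texture-style measures of a co-occurrence matrix. The sketch below computes energy, inertia (contrast), and local homogeneity from a small normalized matrix; the matrix is made up rather than derived from meta-path walks in a real heterogeneous network:

```python
# Sketch of three of the co-occurrence statistics named above, computed
# from a normalized co-occurrence matrix P. P here is a small made-up
# example, not one extracted from meta-path walks in a real network.
import numpy as np

P = np.array([[0.20, 0.05, 0.00],
              [0.05, 0.30, 0.10],
              [0.00, 0.10, 0.20]])
P = P / P.sum()                      # ensure a valid joint distribution
i, j = np.indices(P.shape)

energy      = np.sum(P ** 2)                       # uniformity of co-occurrences
inertia     = np.sum(((i - j) ** 2) * P)           # a.k.a. contrast
homogeneity = np.sum(P / (1.0 + (i - j) ** 2))     # local homogeneity
print(energy, inertia, homogeneity)
```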

  8. Scaled test statistics and robust standard errors for non-normal data in covariance structure analysis: a Monte Carlo study.

    PubMed

    Chou, C P; Bentler, P M; Satorra, A

    1991-11-01

    Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic seemed to behave better than the ML test statistic and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric and platykurtic distributions, or non-symmetric and zero kurtotic distributions.

  9. Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island

    NASA Astrophysics Data System (ADS)

    E Komalasari, K.; Pawitan, H.; Faqih, A.

    2017-03-01

    This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistical analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. The mean and median are used to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation are used to measure its variation. In addition, skewness and kurtosis are used to characterize the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping. The results of this study show that the mean of maximum daily rainfall in the Java region during 1983-2012 is around 80-181 mm, with medians between 75-160 mm and standard deviations between 17 and 82 mm. The cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and has more variability in the annual maximum values.
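    Both analysis steps can be sketched in a few lines; the station-by-year maxima below are synthetic, and SciPy's "ward" linkage is used for the Ward clustering:

```python
# Sketch of the two analysis steps described above on synthetic station
# maxima: descriptive statistics (mean, median, IQR, SD, skewness, kurtosis)
# and Ward hierarchical clustering into four regional groups.
import numpy as np
from scipy.stats import skew, kurtosis, iqr
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
annual_max = rng.gamma(shape=6.0, scale=20.0, size=(12, 30))  # 12 stations x 30 years

stats = {
    "mean": annual_max.mean(axis=1), "median": np.median(annual_max, axis=1),
    "iqr": iqr(annual_max, axis=1), "sd": annual_max.std(axis=1, ddof=1),
    "skew": skew(annual_max, axis=1), "kurtosis": kurtosis(annual_max, axis=1),
}

Z = linkage(annual_max, method="ward")         # Ward's minimum-variance method
clusters = fcluster(Z, t=4, criterion="maxclust")
print(clusters)                                 # four regional groups of stations
```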

  10. High-Throughput Nanoindentation for Statistical and Spatial Property Determination

    NASA Astrophysics Data System (ADS)

    Hintsala, Eric D.; Hangen, Ude; Stauffer, Douglas D.

    2018-04-01

    Standard nanoindentation tests are "high throughput" compared to nearly all other mechanical tests, such as tension or compression. However, the typical rates of tens of tests per hour can be significantly improved. These higher testing rates enable otherwise impractical studies requiring several thousands of indents, such as high-resolution property mapping and detailed statistical studies. However, care must be taken to avoid systematic errors in the measurement, including the choice of indentation depth/spacing to avoid overlap of plastic zones, pileup, and the influence of neighboring microstructural features in the material being tested. Furthermore, since fast loading rates are required, the strain rate sensitivity must also be considered. A review of these effects is given, with the emphasis placed on making complementary standard nanoindentation measurements to address these issues. Experimental applications of the technique are presented, including mapping of welds, microstructures, and composites with varying length scales, along with studying the effect of surface roughness on nominally homogeneous specimens.

  11. Priorities for Standards and Measurements to Accelerate Innovations in Nano-Electrotechnologies: Analysis of the NIST-Energetics-IEC TC 113 Survey

    PubMed Central

    Bennett, Herbert S.; Andres, Howard; Pellegrino, Joan; Kwok, Winnie; Fabricius, Norbert; Chapin, J. Thomas

    2009-01-01

    In 2008, the National Institute of Standards and Technology and Energetics Incorporated collaborated with the International Electrotechnical Commission Technical Committee 113 (IEC TC 113) on nano-electrotechnologies to survey members of the international nanotechnologies community about priorities for standards and measurements to accelerate innovations in nano-electrotechnologies. In this paper, we analyze the 459 survey responses from 45 countries as one means to begin building a consensus on a framework leading to nano-electrotechnologies standards development by standards organizations and national measurement institutes. The distributions of priority rankings from all 459 respondents are such that there are perceived distinctions with statistical confidence between the relative international priorities for the several items ranked in each of the following five Survey category types: 1) Nano-electrotechnology Properties, 2) Nano-electrotechnology Taxonomy: Products, 3) Nano-electrotechnology Taxonomy: Cross-Cutting Technologies, 4) IEC General Discipline Areas, and 5) Stages of the Linear Economic Model. The global consensus prioritizations for ranked items in the above five category types suggest that the IEC TC 113 should focus initially on standards and measurements for electronic and electrical properties of sensors and fabrication tools that support performance assessments of nano-technology enabled sub-assemblies used in energy, medical, and computer products. PMID:27504216

  12. Summary Statistics for Homemade "Play Dough" -- Data Acquired at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Morales, K E; Whipple, R E

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a homemade Play Dough™-like material, designated as PDA. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2700 LMHU_D at 100 kVp to a low of about 1200 LMHU_D at 300 kVp. The standard deviation of each measurement is around 10% to 15% of the mean. The entropy covers the range from 6.0 to 7.4. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the detailed chemical composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 10. LLNL prepared about 50 mL of the homemade "Play Dough" in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ("voxels") lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called "first-order statistics"; those of the gradient image, "second-order statistics."

  13. Accuracy of two osmometers on standard samples: electrical impedance technique and freezing point depression technique

    NASA Astrophysics Data System (ADS)

    García-Resúa, Carlos; Pena-Verdeal, Hugo; Miñones, Mercedes; Gilino, Jorge; Giraldez, Maria J.; Yebra-Pimentel, Eva

    2013-11-01

    High tear fluid osmolarity is a feature common to all types of dry eye. This study was designed to establish the accuracy of two osmometers, a freezing-point depression osmometer (Fiske 110) and an electrical impedance osmometer (TearLab™), by using standard samples. To assess the accuracy of the measurements provided by the two instruments, we used 5 solutions of known osmolarity/osmolality: 50, 290 and 850 mOsm/kg, and 292 and 338 mOsm/L. The Fiske 110 is designed for samples of 20 μl, so measurements were made on 1:9, 1:4, 1:1 and 1:0 dilutions of the standards. The TearLab is intended for use on the tear film and requires a sample of only 0.05 μl, so no dilutions were employed. Because of the smaller measurement range of the TearLab, the 50 and 850 mOsm/kg standards were not included. Twenty measurements per standard sample were made, and differences from the reference value were analysed by a one-sample t-test. For the Fiske 110, osmolarity measurements differed statistically from the standard values except those recorded for the 290 mOsm/kg standard diluted 1:1 (p = 0.309), the 292 mOsm/L standard (1:1) and the 338 mOsm/L standard (1:4); the more diluted the sample, the higher the error. For the TearLab measurements, the one-sample t-test indicated that all determinations differed from the theoretical values (p = 0.001), though the differences were always small. For undiluted solutions, the Fiske 110 performs similarly to the TearLab; for the diluted standards, its performance worsens.
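    The accuracy check used here is a one-sample t-test of repeated readings against the known standard value; a minimal sketch with invented readings (mOsm/kg):

```python
# Sketch of the accuracy check described above: a one-sample t-test of
# repeated osmometer readings against the known standard value. The
# readings are invented for illustration.
from scipy.stats import ttest_1samp

readings = [291, 288, 293, 290, 289, 292, 287, 294, 290, 291]  # mOsm/kg
t, p = ttest_1samp(readings, popmean=290.0)
print(f"t = {t:.2f}, p = {p:.3f}")  # p >= 0.05 -> no detectable bias vs. the standard
```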

  14. Agreement, the F-Measure, and Reliability in Information Retrieval

    PubMed Central

    Hripcsak, George; Rothschild, Adam S.

    2005-01-01

    Information retrieval studies that involve searching the Internet or marking phrases usually lack a well-defined number of negative cases. This prevents the use of traditional interrater reliability metrics like the κ statistic to assess the quality of expert-generated gold standards. Such studies often quantify system performance as precision, recall, and F-measure, or as agreement. It can be shown that the average F-measure among pairs of experts is numerically identical to the average positive specific agreement among experts and that κ approaches these measures as the number of negative cases grows large. Positive specific agreement—or the equivalent F-measure—may be an appropriate way to quantify interrater reliability and therefore to assess the reliability of a gold standard in these studies. PMID:15684123
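    The identity referred to above follows directly from the definitions, with a = number of positives marked by both raters and b, c = positives marked by only one rater:

```latex
% Pairwise F-measure equals positive specific agreement when true negatives
% are undefined: let a = phrases marked by both raters, b and c = phrases
% marked by only one rater. With precision P = a/(a+b) and recall R = a/(a+c),
\[
  F \;=\; \frac{2PR}{P+R} \;=\; \frac{2a}{2a+b+c},
\]
% which is exactly the positive specific agreement; \kappa approaches this
% value as the number of negative cases grows large.
```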

  15. Comparison of Scientific Calipers and Computer-Enabled CT Review for the Measurement of Skull Base and Craniomaxillofacial Dimensions

    PubMed Central

    Citardi, Martin J.; Herrmann, Brian; Hollenbeak, Chris S.; Stack, Brendan C.; Cooper, Margaret; Bucholz, Richard D.

    2001-01-01

    Traditionally, cadaveric studies and plain-film cephalometrics provided information about craniomaxillofacial proportions and measurements; however, advances in computer technology now permit software-based review of computed tomography (CT)-based models. Distances between standardized anatomic points were measured on five dried human skulls with standard scientific calipers (Geneva Gauge, Albany, NY) and through computer workstation (StealthStation 2.6.4, Medtronic Surgical Navigation Technology, Louisville, CO) review of corresponding CT scans. Differences in measurements between the calipers and the CT model were not statistically significant for any parameter. Measurements obtained by computer workstation CT review of the cranial skull base are an accurate representation of actual bony anatomy. Such information has important implications for surgical planning and clinical research. PMID:17167599

  16. Electronic trigger for capacitive touchscreen and extension of ISO 15781 standard time lag measurements to smartphones

    NASA Astrophysics Data System (ADS)

    Bucher, François-Xavier; Cao, Frédéric; Viard, Clément; Guichard, Frédéric

    2014-03-01

    We present in this paper a novel capacitive device that stimulates the touchscreen interface of a smartphone (or of any imaging device equipped with a capacitive touchscreen) and synchronizes triggering with the DxO LED Universal Timer to measure shooting time lag and shutter lag according to ISO 15781:2013. The device and protocol extend the time lag measurement beyond the standard by including negative shutter lag, a phenomenon that is increasingly common in smartphones. The device is computer-controlled, and this feature, combined with measurement algorithms, makes it possible to automate a large series of captures so as to provide more refined statistical analyses when, for example, the shutter lag of "zero shutter lag" devices is limited by the frame time, as our measurements confirm.

  17. High-Accuracy Surface Figure Measurement of Silicon Mirrors at 80 K

    NASA Technical Reports Server (NTRS)

    Blake, Peter; Mink, Ronald G.; Chambers, John; Davila, Pamela; Robinson, F. David

    2004-01-01

    This report describes the equipment, experimental methods, and first results at a new facility for interferometric measurement of cryogenically cooled spherical mirrors at the Goddard Space Flight Center Optics Branch. The procedure, using standard phase-shifting interferometry, has a standard combined uncertainty of 3.6 nm rms in its representation of the two-dimensional surface figure error at 80 K, and an uncertainty of plus or minus 1 nm in the rms statistic itself. The first mirror tested was a concave spherical silicon foam-core mirror with a clear aperture of 120 mm. The optic surface was measured at room temperature using standard absolute techniques, and then the change in surface figure error from room temperature to 80 K was measured. The mirror was cooled within a cryostat, and its surface figure error was measured through a fused-silica window. The facility and techniques will be used to measure the surface figure error at 20 K of prototype lightweight silicon carbide and Cesic mirrors developed by Galileo Avionica (Italy) for the European Space Agency (ESA).

  18. Effect of multizone refractive multifocal contact lenses on standard automated perimetry.

    PubMed

    Madrid-Costa, David; Ruiz-Alcocer, Javier; García-Lázaro, Santiago; Albarrán-Diego, César; Ferrer-Blasco, Teresa

    2012-09-01

    The aim of this study was to evaluate whether the creation of 2 foci (distance and near) provided by multizone refractive multifocal contact lenses (CLs) for presbyopia correction affects measurements on the Humphrey 24-2 Swedish interactive threshold algorithm (SITA) standard automated perimetry (SAP). In this crossover study, 30 subjects were fitted in random order with either a multifocal CL or a monofocal CL. After 1 month, a Humphrey 24-2 SITA standard strategy was performed. The visual field global indices (the mean deviation [MD] and pattern standard deviation [PSD]), reliability indices, test duration, and number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% on pattern deviation probability plots were determined and compared between the multifocal and monofocal CLs. Thirty eyes of 30 subjects were included in this study. There were no statistically significant differences in reliability indices or test duration. There was a statistically significant reduction in the MD with the multifocal CL compared with the monofocal CL (P=0.001). Differences were not found in the PSD or in the number of depressed points deviating at P<5%, P<2%, P<1%, and P<0.5% in the pattern deviation probability maps. The results of this study suggest that the multizone refractive lens produces a generalized depression in threshold sensitivity as measured by Humphrey 24-2 SITA SAP.

  19. Atmospheric pollution measurement by optical cross correlation methods - A concept

    NASA Technical Reports Server (NTRS)

    Fisher, M. J.; Krause, F. R.

    1971-01-01

    The method combines standard spectroscopy with statistical cross-correlation analysis of two narrow light beams for remote sensing to detect foreign matter of a given particulate size and consistency. The method is applicable to studies of the generation and motion of clouds, nuclear debris, ozone, and radiation belts.

  20. The Real World Significance of Performance Prediction

    ERIC Educational Resources Information Center

    Pardos, Zachary A.; Wang, Qing Yang; Trivedi, Shubhendu

    2012-01-01

    In recent years, the educational data mining and user modeling communities have been aggressively introducing models for predicting student performance on external measures such as standardized tests as well as within-tutor performance. While these models have brought statistically reliable improvement to performance prediction, the real world…

  1. A statistical analysis of seat belt effectiveness in 1973-1975 model cars involved in towaway crashes. Volume 1

    DOT National Transportation Integrated Search

    1976-09-01

    Standardized injury rates and seat belt effectiveness measures are derived from a probability sample of towaway accidents involving 1973-1975 model cars. The data were collected in five different geographic regions. Weighted sample size available for...

  2. 45 CFR 305.65 - State cooperation in audit.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PROGRAM PERFORMANCE MEASURES, STANDARDS, FINANCIAL INCENTIVES, AND PENALTIES § 305.65 State cooperation in... submitted on the Federal statistical and financial reports that will be used to calculate the State's performance. The State shall also make available personnel associated with the State's IV-D program to provide...

  3. Magnetic Field Measurements of the Spotted Yellow Dwarf DE Boo During 2001-2004

    NASA Astrophysics Data System (ADS)

    Plachinda, S.; Baklanova, D.; Butkovskaya, V.; Pankov, N.

    2017-06-01

    Spectropolarimetric observations of DE Boo were performed at the Crimean Astrophysical Observatory during 18 nights in 2001-2004. We present the results of the longitudinal magnetic field measurements of this star. The magnetic field varies from +44 G to -36 G with a mean standard error (SE) of 8.2 G. For the full array of magnetic field measurements, the difference between the experimental errors and the Monte Carlo errors is not statistically significant.

  4. Statistics of the stochastically forced Lorenz attractor by the Fokker-Planck equation and cumulant expansions.

    PubMed

    Allawala, Altan; Marston, J B

    2016-11-01

    We investigate the Fokker-Planck description of the equal-time statistics of the three-dimensional Lorenz attractor with additive white noise. The invariant measure is found by computing the zero (or null) mode of the linear Fokker-Planck operator as a problem of sparse linear algebra. Two variants are studied: a self-adjoint construction of the linear operator and the replacement of diffusion with hyperdiffusion. We also access the low-order statistics of the system by a perturbative expansion in equal-time cumulants. A comparison is made to statistics obtained by the standard approach of accumulation via direct numerical simulation. Theoretical and computational aspects of the Fokker-Planck and cumulant expansion methods are discussed.
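    The numerical step described, finding the zero (null) mode of a sparse linear operator, can be sketched with shift-invert eigensolving; the operator below is a toy tridiagonal example with a constructed zero mode, not a discretized Fokker-Planck operator:

```python
# Hedged sketch of the numerical step described above: find the null mode of
# a sparse linear operator as the eigenvector whose eigenvalue is closest to
# zero. The operator here is a toy tridiagonal matrix whose row sums vanish,
# so it has an exact zero mode; it is NOT a real Fokker-Planck discretization.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigs

n = 200
main = -2.0 * np.ones(n)
off = np.ones(n - 1)
L = sp.diags([off, main, off], offsets=[-1, 0, 1], format="lil")
L[0, 0] = L[n - 1, n - 1] = -1.0   # zero-flux-like ends create the zero mode
L = L.tocsc()

vals, vecs = eigs(L, k=1, sigma=1e-6)  # shift-invert: eigenvalue nearest zero
rho = np.abs(vecs[:, 0].real)
rho /= rho.sum()                        # normalize like an invariant measure
print(vals[0].real, rho[:5])
```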

  5. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning.

    PubMed

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-06-17

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults.
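    A hedged sketch of the kind of raw statistical features such a pipeline starts from (not the GDBM itself): simple time- and frequency-domain statistics of a synthetic vibration signal, where the sampling rate and signal are assumptions:

```python
# Sketch of the feature-extraction stage described above (not the deep
# Boltzmann machine itself): statistical features of a vibration signal in
# the time and frequency domains. The signal and sampling rate are synthetic.
import numpy as np

fs = 12_000                                  # sampling rate (Hz), assumed
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 157 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

spectrum = np.abs(np.fft.rfft(x))

time_features = [x.mean(), x.std(),                       # mean, SD
                 ((x - x.mean()) ** 3).mean() / x.std() ** 3,  # skewness
                 np.sqrt((x ** 2).mean()), np.abs(x).max()]    # RMS, peak
freq_features = [spectrum.mean(), spectrum.std(),
                 spectrum.argmax() * fs / x.size]         # dominant frequency (Hz)
print(time_features, freq_features)
```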

  6. Fault Diagnosis for Rotating Machinery Using Vibration Measurement Deep Statistical Feature Learning

    PubMed Central

    Li, Chuan; Sánchez, René-Vinicio; Zurita, Grover; Cerrada, Mariela; Cabrera, Diego

    2016-01-01

    Fault diagnosis is important for the maintenance of rotating machinery. The detection of faults and fault patterns is a challenging part of machinery fault diagnosis. To tackle this problem, a model for deep statistical feature learning from vibration measurements of rotating machinery is presented in this paper. Vibration sensor signals collected from rotating mechanical systems are represented in the time, frequency, and time-frequency domains, each of which is then used to produce a statistical feature set. For learning statistical features, real-value Gaussian-Bernoulli restricted Boltzmann machines (GRBMs) are stacked to develop a Gaussian-Bernoulli deep Boltzmann machine (GDBM). The suggested approach is applied as a deep statistical feature learning tool for both gearbox and bearing systems. The fault classification performances in experiments using this approach are 95.17% for the gearbox, and 91.75% for the bearing system. The proposed approach is compared to such standard methods as a support vector machine, GRBM and a combination model. In experiments, the best fault classification rate was detected using the proposed model. The results show that deep learning with statistical feature extraction has an essential improvement potential for diagnosing rotating machinery faults. PMID:27322273

  7. Comparative evaluation of the accuracy of linear measurements between cone beam computed tomography and 3D microtomography.

    PubMed

    Mangione, Francesca; Meleo, Deborah; Talocco, Marco; Pecci, Raffaella; Pacifici, Luciano; Bedini, Rossella

    2013-01-01

    The aim of this study was to evaluate the influence of artifacts on the accuracy of linear measurements estimated with a common cone beam computed tomography (CBCT) system used in dental clinical practice, by comparing it with microCT system as standard reference. Ten bovine bone cylindrical samples containing one implant each, able to provide both points of reference and image quality degradation, have been scanned by CBCT and microCT systems. Thanks to the software of the two systems, for each cylindrical sample, two diameters taken at different levels, by using implants different points as references, have been measured. Results have been analyzed by ANOVA and a significant statistically difference has been found. Due to the obtained results, in this work it is possible to say that the measurements made with the two different instruments are still not statistically comparable, although in some samples were obtained similar performances and therefore not statistically significant. With the improvement of the hardware and software of CBCT systems, in the near future the two instruments will be able to provide similar performances.

  8. Automatic detection of health changes using statistical process control techniques on measured transfer times of elderly.

    PubMed

    Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom

    2015-01-01

    It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual reviewing by healthcare workers, a process that is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control (SPC) methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques known for their ability to detect small shifts in the data (tabular CUSUM, standardized CUSUM, and EWMA) are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
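
    The sketch below implements an EWMA control chart of the kind evaluated here, flagging the days on which a simulated upward shift in transfer times is detected. The smoothing constant, control-limit width, and baseline length are illustrative assumptions, not the paper's settings.

```python
# Minimal EWMA control chart sketch (assumed parameters, synthetic data).
import numpy as np

def ewma_alarms(x, lam=0.2, width=3.0, baseline=30):
    """Return indices where the EWMA statistic leaves its control limits."""
    mu, sigma = np.mean(x[:baseline]), np.std(x[:baseline])
    z, alarms = mu, []
    for i, xi in enumerate(x):
        z = lam * xi + (1 - lam) * z
        # exact time-varying standard error of the EWMA statistic
        se = sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * (i + 1))))
        if abs(z - mu) > width * se:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(0)
days = np.concatenate([rng.normal(10, 1, 60), rng.normal(11.5, 1, 30)])  # shift at day 60
print(ewma_alarms(days))  # days on which a health change would be flagged
```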

  9. E-Hitz: a word frequency list and a program for deriving psycholinguistic statistics in an agglutinative language (Basque).

    PubMed

    Perea, Manuel; Urkia, Miriam; Davis, Colin J; Agirre, Ainhoa; Laseka, Edurne; Carreiras, Manuel

    2006-11-01

    We describe a Windows program that enables users to obtain a broad range of statistics concerning the properties of word and nonword stimuli in an agglutinative language (Basque), including measures of word frequency (at the whole-word and lemma levels), bigram and biphone frequency, orthographic similarity, orthographic and phonological structure, and syllable-based measures. It is designed for use by researchers in psycholinguistics, particularly those concerned with recognition of isolated words and morphology. In addition to providing standard orthographic and phonological neighborhood measures, the program can be used to obtain information about other forms of orthographic similarity, such as transposed-letter similarity and embedded-word similarity. It is available free of charge from www.uv.es/mperea/E-Hitz.zip.
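
    As a toy illustration of one of the simpler statistics listed above, the sketch below computes relative bigram frequencies over a small word list; E-Hitz's own Basque database and measures are far richer.

```python
# Minimal sketch: relative letter-bigram frequencies over a toy word list
# (the words are illustrative, not E-Hitz's corpus).
from collections import Counter

def bigram_frequencies(words):
    counts = Counter()
    for w in words:
        for a, b in zip(w, w[1:]):
            counts[a + b] += 1
    total = sum(counts.values())
    return {bg: n / total for bg, n in counts.items()}

corpus = ["etxe", "etxean", "mendi", "mendian"]
freqs = bigram_frequencies(corpus)
print(sorted(freqs.items(), key=lambda kv: -kv[1])[:5])
```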

  10. Determination of the criterion-related validity of hip joint angle test for estimating hamstring flexibility using a contemporary statistical approach.

    PubMed

    Sainz de Baranda, Pilar; Rodríguez-Iniesta, María; Ayala, Francisco; Santonja, Fernando; Cejudo, Antonio

    2014-07-01

    To examine the criterion-related validity of the horizontal hip joint angle (H-HJA) test and vertical hip joint angle (V-HJA) test for estimating hamstring flexibility measured through the passive straight-leg raise (PSLR) test, using contemporary statistical measures. Validity study. Controlled laboratory environment. One hundred thirty-eight professional trampoline gymnasts (61 women and 77 men). Hamstring flexibility. Each participant performed 2 trials of the H-HJA, V-HJA, and PSLR tests in randomized order. The criterion-related validity of the H-HJA and V-HJA tests was measured through the estimation equation, typical error of the estimate (TEEST), validity correlation (β), and their respective confidence limits. The findings from this study suggest that although the H-HJA and V-HJA tests showed moderate to high validity scores for estimating hamstring flexibility (standardized TEEST = 0.63; β = 0.80), the TEEST statistic reported for both tests was not narrow enough for clinical purposes (H-HJA = 10.3 degrees; V-HJA = 9.5 degrees). Consequently, the likely ranges generated for the predicted true values were too wide (H-HJA = predicted value ± 13.2 degrees; V-HJA = predicted value ± 12.2 degrees). The results suggest that although the HJA tests showed moderate to high validity scores for estimating hamstring flexibility, the prediction intervals between the HJA and PSLR tests are not narrow enough for clinicians and sports medicine practitioners to use the tests interchangeably as gold-standard tools to evaluate and detect short hamstring muscle flexibility.
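
    A minimal sketch of how a typical error of the estimate and a validity correlation are obtained from paired test scores; the synthetic numbers below merely stand in for HJA and PSLR measurements.

```python
# Minimal sketch: criterion-related validity statistics from paired scores
# (synthetic data; not the study's measurements).
import numpy as np

rng = np.random.default_rng(1)
criterion = rng.normal(80, 10, 138)                   # e.g., PSLR angle (degrees)
predictor = 0.8 * criterion + rng.normal(0, 6, 138)   # e.g., HJA angle

slope, intercept = np.polyfit(predictor, criterion, 1)   # estimation equation
residuals = criterion - (slope * predictor + intercept)
tee = np.std(residuals, ddof=2)                 # typical error of the estimate
standardized_tee = tee / np.std(criterion, ddof=1)
r = np.corrcoef(predictor, criterion)[0, 1]     # validity correlation
print(f"TEE = {tee:.1f} deg, standardized = {standardized_tee:.2f}, r = {r:.2f}")
```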

  11. Toward a Better Understanding of the Relationship between Belief in the Paranormal and Statistical Bias: The Potential Role of Schizotypy

    PubMed Central

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2016-01-01

    The present paper examined relationships between schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experience; O-LIFE scale brief), belief in the paranormal (assessed via the Revised Paranormal Belief Scale; RPBS) and proneness to statistical bias (i.e., perception of randomness and susceptibility to the conjunction fallacy). Participants were 254 volunteers recruited via convenience sampling. Probabilistic reasoning problems were framed within both standard and paranormal contexts. Analysis revealed positive correlations between the Unusual Experience (UnExp) subscale of the O-LIFE and paranormal belief measures [RPBS full scale, traditional paranormal beliefs (TPB) and new age philosophy]. Performance on standard problems correlated negatively with UnExp and belief in the paranormal (particularly the TPB dimension of the RPBS). Consideration of specific problem types revealed that perception of randomness was associated more strongly with belief in the paranormal than conjunction; both problem types related similarly to UnExp. Structural equation modeling indicated that belief in the paranormal mediated the relationship between UnExp and statistical bias. For problems presented in a paranormal context, a framing effect occurred. Whilst UnExp correlated positively with conjunction proneness (controlling for perception of randomness), there was no association between UnExp and perception of randomness (controlling for conjunction). PMID:27471481

  12. The effects of organizational flexibility on nurse utilization and vacancy statistics in Ontario hospitals.

    PubMed

    Fisher, Anita; Baumann, Andrea; Blythe, Jennifer

    2007-01-01

    Social and economic changes in industrial societies during the past quarter-century encouraged organizations to develop greater flexibility in their employment systems in order to adapt to organizational restructuring and labour market shifts (Kallenberg 2003). During the 1990s this trend became evident in healthcare organizations. Before healthcare restructuring, employment in the acute hospital sector was more stable, with higher levels of full-time staff. However, in the downsizing era, employers favoured more flexible, contingent workforces (Zeytinoglu 1999). As healthcare systems evolved, staffing patterns became more chaotic and predicting staffing requirements more complex. Increased use of casual and part-time staff, overtime and agency nurses, as well as alterations in skills mix, masked vacancy counts and thus rendered this measurement of nursing demand increasingly difficult. This study explores flexible nurse staffing practices and demonstrates how data such as nurse vacancy statistics, considered in isolation from nurse utilization information, are inaccurate indicators of nursing demand and nurse shortage. It develops an algorithm that provides a standard methodology for improved monitoring and management of nurse utilization data and better quantification of vacancy statistics. Use of standard methodology promotes more accurate measurement of nurse utilization and shortage. Furthermore, it provides a solid base for improved nursing workforce planning, production and management.

  13. Toward a Better Understanding of the Relationship between Belief in the Paranormal and Statistical Bias: The Potential Role of Schizotypy.

    PubMed

    Dagnall, Neil; Denovan, Andrew; Drinkwater, Kenneth; Parker, Andrew; Clough, Peter

    2016-01-01

    The present paper examined relationships between schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experience; O-LIFE scale brief), belief in the paranormal (assessed via the Revised Paranormal Belief Scale; RPBS) and proneness to statistical bias (i.e., perception of randomness and susceptibility to the conjunction fallacy). Participants were 254 volunteers recruited via convenience sampling. Probabilistic reasoning problems were framed within both standard and paranormal contexts. Analysis revealed positive correlations between the Unusual Experience (UnExp) subscale of the O-LIFE and paranormal belief measures [RPBS full scale, traditional paranormal beliefs (TPB) and new age philosophy]. Performance on standard problems correlated negatively with UnExp and belief in the paranormal (particularly the TPB dimension of the RPBS). Consideration of specific problem types revealed that perception of randomness was associated more strongly with belief in the paranormal than conjunction; both problem types related similarly to UnExp. Structural equation modeling indicated that belief in the paranormal mediated the relationship between UnExp and statistical bias. For problems presented in a paranormal context, a framing effect occurred. Whilst UnExp correlated positively with conjunction proneness (controlling for perception of randomness), there was no association between UnExp and perception of randomness (controlling for conjunction).

  14. Cross-validation of Peak Oxygen Consumption Prediction Models From OMNI Perceived Exertion.

    PubMed

    Mays, R J; Goss, F L; Nagle, E F; Gallagher, M; Haile, L; Schafer, M A; Kim, K H; Robertson, R J

    2016-09-01

    This study cross-validated statistical models for prediction of peak oxygen consumption using ratings of perceived exertion from the Adult OMNI Cycle Scale of Perceived Exertion. 74 participants (men: n=36; women: n=38) completed a graded cycle exercise test. Ratings of perceived exertion for the overall body, legs, and chest/breathing were recorded at each test stage and entered into previously developed 3-stage peak oxygen consumption prediction models. There were no significant differences (p>0.05) between measured and predicted peak oxygen consumption from ratings of perceived exertion for the overall body, legs, and chest/breathing in men (mean±standard deviation: 3.16±0.52 vs. 2.92±0.33 vs. 2.90±0.29 vs. 2.90±0.26 L·min⁻¹) or women (2.17±0.29 vs. 2.02±0.22 vs. 2.03±0.19 vs. 2.01±0.19 L·min⁻¹). Peak oxygen consumption predicted from the previously developed statistical models based on subpeak OMNI ratings of perceived exertion was thus similar to measured peak oxygen consumption in a separate group of participants. These findings support the use of the original statistical models in standard health-fitness settings. © Georg Thieme Verlag KG Stuttgart · New York.

  15. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites

    NASA Astrophysics Data System (ADS)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-01

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy's Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.
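
    The scale dependence described above is easy to reproduce in miniature: the sketch below computes the standard deviation, relative dispersion, and skewness of a synthetic lognormal series for several averaging window sizes (all numbers illustrative, not ARM retrievals).

```python
# Minimal sketch: window-size dependence of moment statistics
# (synthetic lognormal series standing in for LWP retrievals).
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
lwp = rng.lognormal(mean=4.0, sigma=0.8, size=24 * 60)  # one day of 1-min samples

for window in (30, 120, 720):  # window sizes in minutes
    segments = lwp[: len(lwp) // window * window].reshape(-1, window)
    sd = segments.std(axis=1).mean()
    disp = (segments.std(axis=1) / segments.mean(axis=1)).mean()
    skew = stats.skew(segments, axis=1).mean()
    print(f"{window:4d} min: std={sd:7.1f}  dispersion={disp:.2f}  skewness={skew:.2f}")
```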

  16. Statistical characteristics of cloud variability. Part 1: Retrieved cloud liquid water path at three ARM sites: Observed cloud variability at ARM sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Dong; Campos, Edwin; Liu, Yangang

    2014-09-17

    Statistical characteristics of cloud variability are examined for their dependence on averaging scales and best representation of probability density function with the decade-long retrieval products of cloud liquid water path (LWP) from the tropical western Pacific (TWP), Southern Great Plains (SGP), and North Slope of Alaska (NSA) sites of the Department of Energy’s Atmospheric Radiation Measurement Program. The statistical moments of LWP show some seasonal variation at the SGP and NSA sites but not much at the TWP site. It is found that the standard deviation, relative dispersion (the ratio of the standard deviation to the mean), and skewness all quickly increase with the averaging window size when the window size is small and become more or less flat when the window size exceeds 12 h. On average, the cloud LWP at the TWP site has the largest values of standard deviation, relative dispersion, and skewness, whereas the NSA site exhibits the least. Correlation analysis shows that there is a positive correlation between the mean LWP and the standard deviation. The skewness is found to be closely related to the relative dispersion with a correlation coefficient of 0.6. The comparison further shows that the lognormal, Weibull, and gamma distributions reasonably explain the observed relationship between skewness and relative dispersion over a wide range of scales.

  17. Descriptive Statistics: Reporting the Answers to the 5 Basic Questions of Who, What, Why, When, Where, and a Sixth, So What?

    PubMed

    Vetter, Thomas R

    2017-11-01

    Descriptive statistics are specific methods basically used to calculate, describe, and summarize collected research data in a logical, meaningful, and efficient way. Descriptive statistics are reported numerically in the manuscript text and/or in its tables, or graphically in its figures. This basic statistical tutorial discusses a series of fundamental concepts about descriptive statistics and their reporting. The mean, median, and mode are 3 measures of the center or central tendency of a set of data. In addition to a measure of its central tendency (mean, median, or mode), another important characteristic of a research data set is its variability or dispersion (ie, spread). In simplest terms, variability is how much the individual recorded scores or observed values differ from one another. The range, standard deviation, and interquartile range are 3 measures of variability or dispersion. The standard deviation is typically reported for a mean, and the interquartile range for a median. Testing for statistical significance, along with calculating the observed treatment effect (or the strength of the association between an exposure and an outcome), and generating a corresponding confidence interval are 3 tools commonly used by researchers (and their collaborating biostatistician or epidemiologist) to validly make inferences and more generalized conclusions from their collected data and descriptive statistics. A number of journals, including Anesthesia & Analgesia, strongly encourage or require the reporting of pertinent confidence intervals. A confidence interval can be calculated for virtually any variable or outcome measure in an experimental, quasi-experimental, or observational research study design. Generally speaking, in a clinical trial, the confidence interval is the range of values within which the true treatment effect in the population likely resides. In an observational study, the confidence interval is the range of values within which the true strength of the association between the exposure and the outcome (eg, the risk ratio or odds ratio) in the population likely resides. There are many possible ways to graphically display or illustrate different types of data. While there is often latitude as to the choice of format, ultimately, the simplest and most comprehensible format is preferred. Common examples include a histogram, bar chart, line chart or line graph, pie chart, scatterplot, and box-and-whisker plot. Valid and reliable descriptive statistics can answer basic yet important questions about a research data set, namely: "Who, What, Why, When, Where, How, How Much?"
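
    As a quick worked example of these measures, the sketch below computes a mean with its standard deviation, a median with its interquartile range, and a 95% confidence interval for the mean on a small invented sample.

```python
# Minimal sketch of common descriptive statistics and a 95% CI (toy data).
import numpy as np
from scipy import stats

x = np.array([4.2, 5.1, 3.8, 6.0, 5.5, 4.9, 5.2, 7.1, 4.4, 5.8])

mean, median = np.mean(x), np.median(x)
sd = np.std(x, ddof=1)                   # reported alongside the mean
q1, q3 = np.percentile(x, [25, 75])      # IQR, reported alongside the median
sem = sd / np.sqrt(len(x))
ci_low, ci_high = stats.t.interval(0.95, df=len(x) - 1, loc=mean, scale=sem)

print(f"mean={mean:.2f} (SD {sd:.2f}), median={median:.2f} (IQR {q1:.2f}-{q3:.2f})")
print(f"95% CI for the mean: [{ci_low:.2f}, {ci_high:.2f}]")
```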

  18. From Exploratory Talk to Abstract Reasoning: A Case for Far Transfer?

    ERIC Educational Resources Information Center

    Webb, Paul; Whitlow, J. W., Jr.; Venter, Danie

    2017-01-01

    Research has shown improvements in science, mathematics, and language scores when classroom discussion is employed in school-level science and mathematics classes. Studies have also shown statistically and practically significant gains in children's reasoning abilities as measured by the Raven's Standard Progressive Matrices test when employing…

  19. A New Look at Bias in Aptitude Tests.

    ERIC Educational Resources Information Center

    Scheuneman, Janice Dowd

    1981-01-01

    Statistical bias in measurement and ethnic-group bias in testing are discussed, reviewing predictive and construct validity studies. Item bias is reconceptualized to include distance of item content from respondent's experience. Differing values of mean and standard deviation for bias parameter are analyzed in a simulation. References are…

  20. School Libraries and Science Achievement: A View from Michigan's Middle Schools

    ERIC Educational Resources Information Center

    Mardis, Marcia

    2007-01-01

    If strong school library media centers (SLMCs) positively impact middle school student reading achievement, as measured on standardized tests, are they also beneficial for middle school science achievement? To answer this question, the researcher built upon the statistical analyses used in previous school library impact studies with qualitative…

  1. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R²) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R²) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.

  2. An epidemiologic study on anthropometric dimensions of 7-11-year-old Iranian children: considering ethnic differences.

    PubMed

    Mirmohammadi, Seyyed Jalil; Hafezi, Rahmatollah; Mehrparvar, Amir Houshang; Gerdfaramarzi, Raziyeh Soltani; Mostaghaci, Mehrdad; Nodoushan, Reza Jafari; Rezaeian, Bibiseyedeh

    2013-01-01

    Anthropometric data can be used to identify the physical dimensions of equipment, furniture, clothing and workstations. The use of poorly designed furniture that fails to fit the users' anthropometric dimensions has a negative impact on human health. In this study, we measured 22 static anthropometric dimensions of 12,731 Iranian primary school children aged 7-11 years from different ethnicities. Descriptive statistics such as the mean, standard deviation and key percentiles were calculated for each dimension. All dimensions were compared between genders and among different ethnicities. This study showed significant differences in the set of 22 anthropometric dimensions with regard to gender, age and ethnicity; Turk boys and Arab girls were larger than their contemporaries at different ages. According to these results, differences between genders and among ethnicities should be taken into account by designers and manufacturers of school furniture.

  3. Comparison of Gasoline Direct-Injection (GDI) and Port Fuel Injection (PFI) Vehicle Emissions: Emission Certification Standards, Cold-Start, Secondary Organic Aerosol Formation Potential, and Potential Climate Impacts.

    PubMed

    Saliba, Georges; Saleh, Rawad; Zhao, Yunliang; Presto, Albert A; Lambe, Andrew T; Frodin, Bruce; Sardar, Satya; Maldonado, Hector; Maddox, Christine; May, Andrew A; Drozd, Greg T; Goldstein, Allen H; Russell, Lynn M; Hagen, Fabian; Robinson, Allen L

    2017-06-06

    Recent increases in the Corporate Average Fuel Economy standards have led to widespread adoption of vehicles equipped with gasoline direct-injection (GDI) engines. Changes in engine technologies can alter emissions. To quantify these effects, we measured gas- and particle-phase emissions from 82 light-duty gasoline vehicles recruited from the California in-use fleet and tested on a chassis dynamometer using the cold-start unified cycle. The fleet included 15 GDI vehicles, including 8 GDIs certified to the most-stringent emissions standard, super-ultra-low-emission vehicles (SULEV). We quantified the effects of engine technology, emission certification standards, and cold-start on emissions. For vehicles certified to the same emissions standard, there is no statistically significant difference in regulated gas-phase pollutant emissions between PFIs and GDIs. However, GDIs had, on average, a factor of 2 higher particulate matter (PM) mass emissions than PFIs due to higher elemental carbon (EC) emissions. SULEV-certified GDIs have a factor of 2 lower PM mass emissions than GDIs certified as ultralow-emission vehicles (3.0 ± 1.1 versus 6.3 ± 1.1 mg/mi), suggesting improvements in engine design and calibration. Comprehensive organic speciation revealed no statistically significant differences in the composition of the volatile organic compound emissions between PFIs and GDIs, including benzene, toluene, ethylbenzene, and xylenes (BTEX). Therefore, the secondary organic aerosol and ozone formation potential of the exhaust does not depend on engine technology. Cold-start contributes a larger fraction of the total unified cycle emissions for vehicles meeting more-stringent emission standards. Organic gas emissions were the most sensitive to cold-start compared to the other pollutants tested here. There were no statistically significant differences in the effects of cold-start on GDIs and PFIs. For our test fleet, the measured 14.5% decrease in CO2 emissions from GDIs was much greater than the potential climate forcing associated with higher black carbon emissions. Thus, switching from PFI to GDI vehicles will likely lead to a reduction in net global warming.

  4. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability, which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, negative binomial and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz's Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros, even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model corrected for overdispersion and the standard negative binomial model provided better descriptions of the probability distribution for seven of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common phenomena in insect count data. If not properly modelled, these properties can invalidate normal-distribution assumptions, resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
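
    A minimal sketch of the model-selection step described above: fitting Poisson and negative binomial distributions to synthetic overdispersed counts and comparing them by Akaike's information criterion. Only scipy is used and only two of the candidate models are fitted; the paper also considers log-normal and zero-inflated variants.

```python
# Minimal sketch: Poisson vs. negative binomial by AIC (synthetic counts).
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(3)
counts = rng.negative_binomial(n=1.2, p=0.25, size=200)  # overdispersed, many zeros

# Poisson: the MLE of the rate is the sample mean
lam = counts.mean()
aic_pois = 2 * 1 - 2 * stats.poisson.logpmf(counts, lam).sum()

# Negative binomial: maximize the likelihood over (n, p)
def nb_negloglik(params):
    n, p = params
    return -stats.nbinom.logpmf(counts, n, p).sum()

res = optimize.minimize(nb_negloglik, x0=[1.0, 0.5],
                        bounds=[(1e-6, None), (1e-6, 1 - 1e-6)])
aic_nb = 2 * 2 + 2 * res.fun

print(f"AIC Poisson = {aic_pois:.1f}, AIC negative binomial = {aic_nb:.1f}")
# the lower AIC (here typically the negative binomial) is preferred
```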

  5. Robust tissue-air volume segmentation of MR images based on the statistics of phase and magnitude: Its applications in the display of susceptibility-weighted imaging of the brain.

    PubMed

    Du, Yiping P; Jin, Zhaoyang

    2009-10-01

    To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. © 2009 Wiley-Liss, Inc.
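
    A sketch of the local-statistics building block used above: per-voxel standard deviation over a 3 x 3 x 3 neighborhood, computed for both the first-order phase difference and the magnitude. The combination rule and the synthetic volumes are assumptions; the paper's multivariate measure and background-phase correction are more involved.

```python
# Minimal sketch: local standard deviation in a 3x3x3 kernel (synthetic data).
import numpy as np
from scipy import ndimage

def local_std(volume, size=3):
    """Standard deviation in a size**3 neighborhood around each voxel."""
    mean = ndimage.uniform_filter(volume, size)
    mean_sq = ndimage.uniform_filter(volume ** 2, size)
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

rng = np.random.default_rng(4)
magnitude = rng.random((32, 32, 32))
phase = rng.uniform(-np.pi, np.pi, (32, 32, 33))
phase_diff = np.diff(phase, axis=-1)       # first-order phase difference

# Air tends to show noisy phase and noisy magnitude; an (assumed) simple rule
# flags voxels whose combined local variability is in the top decile.
score = local_std(phase_diff) + local_std(magnitude)
air_mask = score > np.percentile(score, 90)
print(f"flagged {air_mask.mean():.1%} of voxels as air")
```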

  6. Magnetic Johnson Noise Constraints on Electron Electric Dipole Moment Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Munger, C.

    2004-11-18

    Magnetic fields from statistical fluctuations in currents in conducting materials broaden atomic linewidths by the Zeeman effect. The constraints so imposed on the design of experiments to measure the electric dipole moment of the electron are analyzed. Contrary to the predictions of Lamoreaux [S.K. Lamoreaux, Phys. Rev. A60, 1717(1999)], the standard material for high-permeability magnetic shields proves to be as significant a source of broadening as an ordinary metal. A scheme that would replace this standard material with ferrite is proposed.

  7. Silicon solar cell process. Development, fabrication and analysis

    NASA Technical Reports Server (NTRS)

    Yoo, H. I.; Iles, P. A.; Tanner, D. P.

    1978-01-01

    Solar cells were fabricated from unconventional silicon sheets, and their performance was characterized with an emphasis on statistical evaluation. A number of solar cell fabrication processes were used, and conversion efficiency was measured under AM0 conditions at 25 °C. Silso solar cells made with standard processing showed an average efficiency of about 9.6%. Solar cells with a back-surface-field process showed about the same efficiency as the cells from the standard process. Solar cells from a grain-boundary passivation process did not show any improvement in performance.

  8. Wavelength dependence of position angle in polarization standards

    NASA Astrophysics Data System (ADS)

    Dolan, J. F.; Tapia, S.

    1986-08-01

    Eleven of the 15 stars on Serkowski's (1974) list of "Standard Stars with Large Interstellar Polarization" were investigated to determine whether the orientation of the plane of their linear polarization showed any dependence on wavelength. Nine of the eleven stars exhibited a statistically significant wavelength dependence of position angle when measured with an accuracy of ≈0°.1 standard deviation. For the majority of these stars, the effect is caused primarily by intrinsic polarization. The calibration of polarimeter position angles in a celestial coordinate frame must evidently be done at the 0°.1 level of accuracy by using only carefully selected standard stars or by using other astronomical or laboratory methods.

  9. First uncertainty evaluation of the FoCS-2 primary frequency standard

    NASA Astrophysics Data System (ADS)

    Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L. G.; Schenker, D.; Thomann, P.; Südmeyer, T.

    2018-06-01

    We report the uncertainty evaluation of the Swiss continuous primary frequency standard FoCS-2 (Fontaine Continue Suisse). Unlike other primary frequency standards, which work with clouds of cold atoms, this fountain uses a continuous beam of cold caesium atoms, bringing a series of metrological advantages and requiring specific techniques for the evaluation of the uncertainty budget. Recent improvements of FoCS-2 have made possible the evaluation of the frequency shifts and of their uncertainties. When operating in an optimal regime, a low relative frequency instability is obtained. The relative standard uncertainty reported in this article is strongly dominated by the statistics of the frequency measurements.

  10. Wavelength dependence of position angle in polarization standards. [of stellar systems

    NASA Technical Reports Server (NTRS)

    Dolan, J. F.; Tapia, S.

    1986-01-01

    Eleven of the 15 stars on Serkowski's (1974) list of 'Standard Stars with Large Interstellar Polarization' were investigated to determine whether the orientation of the plane of their linear polarization showed any dependence on wavelength. Nine of the eleven stars exhibited a statistically significant wavelength dependence of position angle when measured with an accuracy of about 0.1 deg standard deviation. For the majority of these stars, the effect is caused primarily by intrinsic polarization. The calibration of polarimeter position angles in a celestial coordinate frame must evidently be done at the 0.1 deg level of accuracy by using only carefully selected standard stars or by using other astronomical or laboratory methods.

  11. Measuring, Estimating, and Deciding under Uncertainty.

    PubMed

    Michel, Rolf

    2016-03-01

    The problem of uncertainty as a general consequence of incomplete information, and the approach taken to quantify uncertainty in metrology, is addressed. The paper then discusses some of the controversial aspects of the statistical foundation of the concepts of uncertainty in measurements. The basics of the ISO Guide to the Expression of Uncertainty in Measurement as well as of characteristic limits according to ISO 11929 are described, and the need for a revision of the latter standard is explained. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. Precision Measurement and Calibration. Volume 1. Statistical Concepts and Procedures

    DTIC Science & Technology

    1969-02-01


  13. Zonal average earth radiation budget measurements from satellites for climate studies

    NASA Technical Reports Server (NTRS)

    Ellis, J. S.; Haar, T. H. V.

    1976-01-01

    Data from 29 months of satellite radiation budget measurements, taken intermittently over the period 1964 through 1971, are composited into mean month, season and annual zonally averaged meridional profiles. Individual months, which comprise the 29 month set, were selected as representing the best available total flux data for compositing into large scale statistics for climate studies. A discussion of spatial resolution of the measurements along with an error analysis, including both the uncertainty and standard error of the mean, are presented.

  14. Investigation of the Statistics of Pure Tone Sound Power Injection from Low Frequency, Finite Sized Sources in a Reverberant Room

    NASA Technical Reports Server (NTRS)

    Smith, Wayne Farrior

    1973-01-01

    The effect of finite source size on the power statistics in a reverberant room for pure tone excitation was investigated. Theoretical results indicate that the standard deviation of low frequency, pure tone finite sources is always less than that predicted by point source theory and considerably less when the source dimension approaches one-half an acoustic wavelength or greater. A supporting experimental study was conducted utilizing an eight inch loudspeaker and a 30 inch loudspeaker at eleven source positions. The resulting standard deviation of sound power output of the smaller speaker is in excellent agreement with both the derived finite source theory and existing point source theory, if the theoretical data is adjusted to account for experimental incomplete spatial averaging. However, the standard deviation of sound power output of the larger speaker is measurably lower than point source theory indicates, but is in good agreement with the finite source theory.

  15. Standardized Symptom Measurement of Individuals with Early Lyme Disease Over Time.

    PubMed

    Bechtold, Kathleen T; Rebman, Alison W; Crowder, Lauren A; Johnson-Greene, Doug; Aucott, John N

    2017-03-01

    Understanding the Lyme disease (LD) literature is challenging given the lack of consistent methodology and standardized measurement of symptoms and the impact on functioning. This prospective study incorporates well-validated measures to capture the symptom picture of individuals with early LD from time of diagnosis through 6-months post-treatment. One hundred seven patients with confirmed early LD and 26 healthy controls were evaluated using standardized instruments for pain, fatigue, depressive symptoms, functional impact, and cognitive functioning. Prior to antibiotic treatment, patients experience notable symptoms of fatigue and pain statistically higher than controls. After treatment, there are no group differences, suggesting that symptoms resolve and that there are no residual cognitive impairments at the level of group analysis. However, using subgroup analyses, some individuals experience persistent symptoms that lead to functional decline and these individuals can be identified immediately post-completion of standard antibiotic treatment using well-validated symptom measures. Overall, the findings suggest that ideally-treated early LD patients recover well and experience symptom resolution over time, though a small subgroup continue to suffer with symptoms that lead to functional decline. The authors discuss use of standardized instruments for identification of individuals who warrant further clinical follow-up. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Comparative measurements using different particle size instruments

    NASA Technical Reports Server (NTRS)

    Chigier, N.

    1984-01-01

    This paper discusses the measurement and comparison of particle size and velocity measurements in sprays. The general nature of sprays and the development of standard, consistent research sprays are described. The instruments considered in this paper are: pulsed laser photography, holography, television, and cinematography; laser anemometry and interferometry using visibility, peak amplitude, and intensity ratioing; and laser diffraction. Calibration is by graticule, reticle, powders with known size distributions in liquid cells, monosize sprays, and, eventually, standard sprays. Statistical analyses including spatial and temporal long-time averaging as well as high-frequency response time histories with conditional sampling are examined. Previous attempts at comparing instruments, the making of simultaneous or consecutive measurements with similar types and different types of imaging, interferometric, and diffraction instruments are reviewed. A program of calibration and experiments for comparing and assessing different instruments is presented.

  17. Analyses and assessments of span wise gust gradient data from NASA B-57B aircraft

    NASA Technical Reports Server (NTRS)

    Frost, Walter; Chang, Ho-Pen; Ringnes, Erik A.

    1987-01-01

    Analysis of turbulence measured across the airfoil of a Canberra B-57 aircraft is reported. The aircraft is instrumented with probes for measuring wind at both wing tips and at the nose. Statistical properties of the turbulence are reported, consisting of the standard deviations of turbulence measured by each individual probe, standard deviations and probability distributions of differences in turbulence measured between probes, and auto- and two-point spatial correlations and spectra. Procedures associated with calculating two-point spatial correlations and spectra from the data are addressed. Methods and correction procedures for assuring the accuracy of aircraft-measured winds are also described. Results, in general, agree with correlations existing in the literature. The velocity spatial differences fit a Gaussian/Bessel-type probability distribution. The turbulence agrees with the von Karman turbulence correlation and with two-point spatial correlations developed from the von Karman correlation.

  18. How random is a random vector?

    NASA Astrophysics Data System (ADS)

    Eliazar, Iddo

    2015-12-01

    Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation" (the square root of the generalized variance) is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index" (a derivative of the Wilks standard deviation) is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vector empirical data.

  19. Statistical Association Criteria in Forensic Psychiatry–A criminological evaluation of casuistry

    PubMed Central

    Gheorghiu, V; Buda, O; Popescu, I; Trandafir, MS

    2011-01-01

    Purpose. To identify potential shared targets for primary psychoprophylaxis and crime prevention by analyzing the rate of commitments among patients subject to forensic examination. Material and method. This is a retrospective, document-based statistical study. The statistical lot consists of 770 initial examination reports performed and completed during the year 2007, primarily analyzed in order to summarize the data within the National Institute of Forensic Medicine, Bucharest, Romania (INML), with one of the group variables being 'particularities of the psychiatric patient history', containing the items 'forensic onset', 'commitments within the last year prior to the examination' and 'absence of commitments within the last year prior to the examination'. The method used was the Kendall bivariate correlation. For this study, the authors separately analyze only the two items regarding commitments, using other correlation alternatives and modern, elaborate statistical analyses, i.e., recording of the standard case study variables, Kendall bivariate correlation, cross tabulation, factor analysis and hierarchical cluster analysis. Results. The results are varied, from theoretically presumed clinical nosography (such as schizophrenia or manic depression) to non-presumed (conduct disorders) or unexpected behavioral acts, and are therefore difficult to interpret. Conclusions. The features of the batch as well as the results of the previous standard correlation of the whole statistical lot were taken into consideration. The authors emphasize the role of the medical security measures actually applied in therapeutic management in general, and in risk and second-offence management in particular, as well as the role of forensic psychiatric examinations in the detection of certain aspects related to the monitoring of mental patients. PMID:21505571

  20. Research on Time Selection of Mass Sports in Tibetan Areas Plateau of Gansu Province Based on Environmental Science

    NASA Astrophysics Data System (ADS)

    Gao, Jike

    2018-01-01

    Using literature review, instrument measurement, a questionnaire and mathematical statistics, this paper analyzed the current situation of mass sports in the Tibetan plateau areas of Gansu Province. Taking experimentally measured air pollutant and meteorological index data from the Tibetan areas of Gansu Province as the foundation, checking them against the relevant national standards and exercise science, and statistically analyzing the data, the paper aims to provide scientific methods and appropriate times for people in the Tibetan plateau areas of Gansu Province to participate in physical exercise.

  1. SOME CHARACTERISTICS OF THE ORAL CAVITY AND TEETH OF COSMONAUTS ON MISSIONS TO THE INTERNATIONAL SPACE STATION.

    PubMed

    Ilyin, V K; Shumilina, G A; Solovieva, Z O; Nosovsky, A M; Kaminskaya, E V

    Earlier studies were furthered by examination of the parodontium anaerobic microbiota and investigation of gingival liquid immunological factors in space flight. Immunoglobulins were measured using the enzyme immunoassay (EM). The qualitative content of key parodontium pathogens is determined with state-of-the-art molecular biology technologies such as the polymerase chain reaction. Statistical data processing was performed using principal component analysis and ensuing standard statistical analysis. Thereupon, recommendations on cosmonauts' oral and dental hygiene during space missions were developed.

  2. 6.6-hour inhalation of ozone concentrations from 60 to 87 parts per billion in healthy humans.

    PubMed

    Schelegle, Edward S; Morales, Christopher A; Walby, William F; Marion, Susan; Allen, Roblee P

    2009-08-01

    Identification of the minimal ozone (O3) concentration and/or dose that induces measurable lung function decrements in humans is considered in the risk assessment leading to establishing an appropriate National Ambient Air Quality Standard for O3 that protects public health. To identify and/or predict the minimal mean O3 concentration that produces a decrement in FEV1 and symptoms in healthy individuals completing 6.6-hour exposure protocols. Pulmonary function and subjective symptoms were measured in 31 healthy adults (18-25 yr, male and female, nonsmokers) who completed five 6.6-hour chamber exposures: filtered air and four variable hourly patterns with mean O3 concentrations of 60, 70, 80, and 87 parts per billion (ppb). Compared with filtered air, statistically significant decrements in FEV1 and increases in total subjective symptom scores (P < 0.05) were measured after exposure to mean concentrations of 70, 80, and 87 ppb O3. The mean percent change in FEV1 (± standard error) at the end of each protocol was 0.80 ± 0.90, -2.72 ± 1.48, -5.34 ± 1.42, -7.02 ± 1.60, and -11.42 ± 2.20% for exposure to filtered air and 60, 70, 80, and 87 ppb O3, respectively. Inhalation of 70 ppb O3 for 6.6 hours, a concentration below the current 8-hour National Ambient Air Quality Standard of 75 ppb, is sufficient to induce statistically significant decrements in FEV1 in healthy young adults.

  3. Statistical analysis of water-quality data containing multiple detection limits: S-language software for regression on order statistics

    USGS Publications Warehouse

    Lee, L.; Helsel, D.

    2005-01-01

    Trace contaminants in water, including metals and organics, often are measured at sufficiently low concentrations to be reported only as values below the instrument detection limit. Interpretation of these "less thans" is complicated when multiple detection limits occur. Statistical methods for multiply censored, or multiple-detection limit, datasets have been developed for medical and industrial statistics, and can be employed to estimate summary statistics or model the distributions of trace-level environmental data. We describe S-language-based software tools that perform robust linear regression on order statistics (ROS). The ROS method has been evaluated as one of the most reliable procedures for developing summary statistics of multiply censored data. It is applicable to any dataset that has 0 to 80% of its values censored. These tools are a part of a software library, or add-on package, for the R environment for statistical computing. This library can be used to generate ROS models and associated summary statistics, plot modeled distributions, and predict exceedance probabilities of water-quality standards. © 2005 Elsevier Ltd. All rights reserved.
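
    A simplified sketch of ROS for a single detection limit (the evaluated method, and the software described here, handle multiple limits and other refinements): the logs of detected values are regressed on normal quantiles at their plotting positions, and the censored observations are imputed from the fitted line. The plotting-position formula and data are illustrative assumptions.

```python
# Minimal ROS sketch for one detection limit (synthetic lognormal data).
import numpy as np
from scipy import stats

dl = 1.0                                   # detection limit
rng = np.random.default_rng(5)
true = rng.lognormal(0.0, 1.0, 50)
detected = np.sort(true[true >= dl])
n, n_cens = len(true), int(np.sum(true < dl))

# Plotting positions of the detected values within the full sample
ranks = np.arange(n_cens + 1, n + 1)
pp = (ranks - 0.375) / (n + 0.25)          # Blom positions (assumed choice)
slope, intercept, *_ = stats.linregress(stats.norm.ppf(pp), np.log(detected))

# Impute the censored observations from the fitted line
pp_cens = (np.arange(1, n_cens + 1) - 0.375) / (n + 0.25)
imputed = np.exp(intercept + slope * stats.norm.ppf(pp_cens))

estimate = np.concatenate([imputed, detected])
print(f"ROS mean = {estimate.mean():.2f}, true mean = {true.mean():.2f}")
```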

  4. Standardization in gully erosion studies: methodology and interpretation of magnitudes from a global review

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; Gomez, Jose Alfonso

    2016-04-01

    Standardization is the process of developing common conventions or procedures to facilitate the communication, use, comparison and exchange of products or information among different parties. It has been a useful tool in fields ranging from industry to statistics, for technical, economic and social reasons. In science the need for standardization has been recognised in the definition of methods as well as in publication formats. With respect to gully erosion, a number of initiatives have been carried out to propose common methodologies, for instance for gully delineation (Castillo et al., 2014) and geometrical measurements (Casalí et al., 2015). The main aims of this work are: 1) to examine previous proposals in the gully erosion literature involving standardization processes; 2) to contribute new approaches to improve the homogeneity of methodologies and the presentation of results for better communication within the gully erosion community. For this purpose, we evaluated the basic information provided on environmental factors, discussed the delineation and measurement procedures proposed in previous works and, finally, analysed statistically the severity of degradation levels derived from different indicators at the world scale. As a result, we present suggestions intended to serve as guidance for survey design as well as for the interpretation of vulnerability levels and degradation rates in future gully erosion studies. References Casalí, J., Giménez, R., and Campo-Bescós, M. A.: Gully geometry: what are we measuring?, SOIL, 1, 509-513, doi:10.5194/soil-1-509-2015, 2015. Castillo C., Taguas E. V., Zarco-Tejada P., James M. R., and Gómez J. A. (2014), The normalized topographic method: an automated procedure for gully mapping using GIS, Earth Surf. Process. Landforms, 39, 2002-2015, doi:10.1002/esp.3595

  5. The International Index of Erectile Function: a methodological critique and suggestions for improvement.

    PubMed

    Yule, Morag; Davison, Joyce; Brotto, Lori

    2011-01-01

    The International Index of Erectile Function is a well-worded and psychometrically valid self-report questionnaire widely used as the standard for the evaluation of male sexual function. However, some conceptual and statistical problems arise when using the measure with men who are not sexually active. These problems are illustrated using 2 empirical examples, and the authors provide recommended solutions to further strengthen the efficacy and validity of this measure.

  6. Statistical Measurement and Analysis of Claimant and Demographic Variables Affecting Processing and Adjudication Duration in The United States Army Physical Disability Evaluation System.

    DTIC Science & Technology

    1997-02-06

    This retrospective study analyzes relationships of variables to adjudication and processing duration in the Army Physical Disability Evaluation System... The Statistical Package for the Social Sciences (SPSS), Standard Version 6.1, June 1994, was used to determine relationships among the dependent and independent variables and consanguinity between variables. Content and criterion validity are employed to determine the measure of scientific validity. Reliability is also...

  7. Fish: A New Computer Program for Friendly Introductory Statistics Help

    ERIC Educational Resources Information Center

    Brooks, Gordon P.; Raffle, Holly

    2005-01-01

    All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…
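
    For instance, the standard-error concept such a program teaches can be demonstrated in a few lines of simulation; the population, sample size, and replication count below are arbitrary.

```python
# Minimal sketch: standard error of the mean by simulation (toy numbers).
import numpy as np

rng = np.random.default_rng(6)
pop = rng.exponential(scale=2.0, size=100_000)   # a skewed population

n = 30
samples = pop[rng.integers(0, pop.size, (5_000, n))]
means = samples.mean(axis=1)                     # near-normal, per the CLT

print(f"theoretical SE = {pop.std() / np.sqrt(n):.3f}")
print(f"simulated  SE = {means.std():.3f}")
```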

  8. Never too old for anonymity: a statistical standard for demographic data sharing via the HIPAA Privacy Rule

    PubMed Central

    Benitez, Kathleen; Masys, Daniel

    2010-01-01

    Objective Healthcare organizations must de-identify patient records before sharing data. Many organizations rely on the Safe Harbor Standard of the HIPAA Privacy Rule, which enumerates 18 identifiers that must be suppressed (eg, ages over 89). An alternative model in the Privacy Rule, known as the Statistical Standard, can facilitate the sharing of more detailed data, but is rarely applied because of a lack of published methodologies. The authors propose an intuitive approach to de-identifying patient demographics in accordance with the Statistical Standard. Design The authors conduct an analysis of the demographics of patient cohorts in five medical centers developed for the NIH-sponsored Electronic Medical Records and Genomics network, with respect to the US census. They report the re-identification risk of patient demographics disclosed according to the Safe Harbor policy and the relative risk rate for sharing such information via alternative policies. Measurements The re-identification risk of Safe Harbor demographics ranged from 0.01% to 0.19%. The findings show alternative de-identification models can be created with risks no greater than Safe Harbor. The authors illustrate that the disclosure of patient ages over the age of 89 is possible when other features are reduced in granularity. Limitations The de-identification approach described in this paper was evaluated with demographic data only and should be evaluated with other potential identifiers. Conclusion Alternative de-identification policies to the Safe Harbor model can be derived for patient demographics to enable the disclosure of values that were previously suppressed. The method is generalizable to any environment in which population statistics are available. PMID:21169618
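
    The risk notion used above can be sketched very simply: if k people in the population share a record's demographic combination, that record's re-identification risk is taken as 1/k. The counts and categories below are invented for illustration, not census or eMERGE data.

```python
# Minimal sketch: average re-identification risk from population group sizes
# (toy, invented counts; a real analysis would use census statistics).
population_counts = {
    # (age_group, gender, region) -> people in the population with that combination
    ("80-89", "F", "372"): 1250,
    ("90+", "F", "372"): 40,
    ("90+", "M", "372"): 8,
}

records = [("80-89", "F", "372"), ("90+", "F", "372"), ("90+", "M", "372")]

risks = [1 / population_counts[r] for r in records]  # risk of each shared record
print(f"average re-identification risk = {sum(risks) / len(risks):.4f}")
```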

  9. Odor measurements according to EN 13725: A statistical analysis of variance components

    NASA Astrophysics Data System (ADS)

    Klarenbeek, Johannes V.; Ogink, Nico W. M.; van der Voet, Hilko

    2014-04-01

    In Europe, dynamic olfactometry, as described by the European standard EN 13725, has become the preferred method for evaluating odor emissions emanating from industrial and agricultural sources. Key elements of this standard are the quality criteria for trueness and precision (repeatability). Both are linked to standard values of n-butanol in nitrogen. It is assumed in this standard that whenever a laboratory complies with the overall sensory quality criteria for n-butanol, the quality level is transferable to other, environmental, odors. Although olfactometry is well established, little has been done to investigate interlaboratory variance (reproducibility). Therefore, the objective of this study was to estimate the reproducibility of odor laboratories complying with EN 13725 as well as to investigate the transferability of n-butanol quality criteria to other odorants. Based upon the statistical analysis of 412 odor measurements on 33 sources, distributed in 10 proficiency tests, it was established that laboratory, panel and panel session are components of variance that significantly differ between n-butanol and other odorants (α = 0.05). This finding does not support the transferability of the quality criteria, as determined on n-butanol, to other odorants and as such is a cause for reconsideration of the present single reference odorant as laid down in EN 13725. In case of non-butanol odorants, repeatability standard deviation (sr) and reproducibility standard deviation (sR) were calculated to be 0.108 and 0.282 respectively (log base-10). The latter implies that the difference between two consecutive single measurements, performed on the same testing material by two or more laboratories under reproducibility conditions, will not be larger than a factor 6.3 in 95% of cases. As far as n-butanol odorants are concerned, it was found that the present repeatability standard deviation (sr = 0.108) compares favorably to that of EN 13725 (sr = 0.172). It is therefore suggested that the repeatability limit (r), as laid down in EN 13725, can be reduced from r ≤ 0.477 to r ≤ 0.31.
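
    Assuming the usual ISO-style coverage factor of 2.77 (≈ 1.96·√2) for the difference of two single measurements, the quoted log10 standard deviations convert to limits as sketched below; the abstract's factor of 6.3 suggests a slightly different coverage factor was used there.

```python
# Minimal sketch: converting log10 repeatability/reproducibility standard
# deviations into limits (coverage factor 2.77 is an assumption).
sr, sR = 0.108, 0.282            # repeatability and reproducibility SDs (log10)
r, R = 2.77 * sr, 2.77 * sR      # limits on the difference of two measurements

print(f"repeatability limit  r = {r:.2f} log10 -> factor {10 ** r:.1f}")
print(f"reproducibility limit R = {R:.2f} log10 -> factor {10 ** R:.1f}")
# about a factor 2 within a laboratory and a factor ~6 between laboratories
```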

  10. The measurement of enamel wear of two toothpastes.

    PubMed

    Joiner, Andrew; Weader, Elizabeth; Cox, Trevor F

    2004-01-01

    The aim of this study was to compare the enamel abrasivity of a whitening toothpaste with that of a standard silica toothpaste. Polished human enamel blocks (4 x 4 mm) were indented with a Knoop diamond. The enamel blocks were attached to the posterior buccal surfaces of full dentures and worn by adult volunteers for 24 hours per day. The blocks were brushed ex vivo for 30 seconds, twice per day, with the randomly assigned toothpaste (n = 10 per treatment). The products used were either a whitening toothpaste containing Perlite or a standard silica toothpaste. After four, eight and twelve weeks, one block per subject was removed and the geometry of each Knoop indent was re-measured. From the baseline and post-treatment values of indent length, the amount of enamel wear was calculated from the change in indent depth. The mean enamel wear (sd) for the whitening toothpaste and the standard silica toothpaste was 0.20 (0.11) and 0.14 (0.10) microns after four weeks; 0.44 (0.33) and 0.18 (0.17) microns after eight weeks; and 0.60 (0.72) and 0.67 (0.77) microns after twelve weeks, respectively. The difference in enamel wear between the two toothpastes was not statistically significant (p > 0.05, 2-sample t-test) at any time point. The whitening toothpaste did not produce a statistically significantly greater level of enamel wear than the standard silica toothpaste over the 4-, 8- and 12-week periods.

  11. Rethinking Teacher Evaluation: A Conversation about Statistical Inferences and Value-Added Models

    ERIC Educational Resources Information Center

    Callister Everson, Kimberlee; Feinauer, Erika; Sudweeks, Richard R.

    2013-01-01

    In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. Conventional value-added estimates of teacher quality are attempts to determine to what degree a teacher would theoretically contribute, on average,…

  12. A Comparison of Latent Growth Models for Constructs Measured by Multiple Items

    ERIC Educational Resources Information Center

    Leite, Walter L.

    2007-01-01

    Univariate latent growth modeling (LGM) of composites of multiple items (e.g., item means or sums) has been frequently used to analyze the growth of latent constructs. This study evaluated whether LGM of composites yields unbiased parameter estimates, standard errors, chi-square statistics, and adequate fit indexes. Furthermore, LGM was compared…

  13. Issues in Relating Evaluation to Theory, Policy, and Practice in Continuing Education and Health Education.

    ERIC Educational Resources Information Center

    Green, Lawrence W.; Lewis, Frances Marcus

    1981-01-01

    Reviews a number of issues relating to the results of evaluation studies: standards of acceptability; clinical vs. statistical significance; fallacies in the use of theory; and program, theory, measurement, and design failure. (Journal availability: Subscription Manager, MOBIUS, University of California Press, Berkeley, CA 94720.) (SK)

  14. Survey of Working Conditions. Final Report on Univariate and Bivariate Tables.

    ERIC Educational Resources Information Center

    Michigan Univ., Ann Arbor. Survey Research Center.

    A nationwide survey of employed persons was conducted to provide information on labor standards problems, assess the impact of working conditions on workers, develop job satisfaction measures, and establish statistics for similar data collections. The survey revealed that the majority of workers expressed satisfaction with their jobs but they also…

  15. The Price of a Good Education

    ERIC Educational Resources Information Center

    Schachter, Ron

    2010-01-01

    There are plenty of statistics available for measuring the performance, potential and problems of school districts, from standardized test scores to the number of students eligible for free or reduced-price lunch. Last June, another metric came into sharper focus when the U.S. Census Bureau released its latest state-by-state data on per-pupil…

  16. Representation of microstructural features and magnetic anisotropy of electrical steels in an energy-based vector hysteresis model

    NASA Astrophysics Data System (ADS)

    Jacques, Kevin; Steentjes, Simon; Henrotte, François; Geuzaine, Christophe; Hameyer, Kay

    2018-04-01

    This paper demonstrates how the statistical distribution of pinning fields in a ferromagnetic material can be identified systematically from standard magnetic measurements, Epstein frame or Single Sheet Tester (SST). The correlation between the pinning field distribution and microstructural parameters of the material is then analyzed.

  17. Estimating Standardized Linear Contrasts of Means with Desired Precision

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2009-01-01

    L. Wilkinson and the Task Force on Statistical Inference (1999) recommended reporting confidence intervals for measures of effect sizes. If the sample size is too small, the confidence interval may be too wide to provide meaningful information. Recently, K. Kelley and J. R. Rausch (2006) used an iterative approach to computer-generate tables of…

  18. Assessment of statistic analysis in non-radioisotopic local lymph node assay (non-RI-LLNA) with alpha-hexylcinnamic aldehyde as an example.

    PubMed

    Takeyoshi, Masahiro; Sawaki, Masakuni; Yamasaki, Kanji; Kimber, Ian

    2003-09-30

    The murine local lymph node assay (LLNA) is used for the identification of chemicals that have the potential to cause skin sensitization. However, it requires specific facilities and handling procedures to accommodate a radioisotopic (RI) endpoint. We have developed a non-radioisotopic (non-RI) endpoint for the LLNA, based on BrdU incorporation, to avoid the use of RI. Although this alternative method appears viable in principle, it is somewhat less sensitive than the standard assay. In this study, we report investigations into the use of statistical analysis to improve the sensitivity of a non-RI LLNA procedure, with alpha-hexylcinnamic aldehyde (HCA) as the test chemical in two separate experiments. The alternative non-RI method required HCA concentrations of greater than 25% to elicit a positive response based on the criterion for classification as a skin sensitizer in the standard LLNA. Nevertheless, dose responses to HCA in the alternative method were consistent in both experiments, and we examined whether an endpoint based upon the statistical significance of induced changes in LNC turnover, rather than a stimulation index (SI) of 3 or greater, might provide additional sensitivity. The results demonstrate that, with HCA at least, significant responses were recorded in each of two experiments following exposure of mice to 25% HCA. These data suggest that this approach may be more satisfactory, at least when BrdU incorporation is measured. Even so, this modification of the LLNA remains rather less sensitive than the standard method when a statistical endpoint is employed. Taken together, the data reported here suggest that a modified LLNA in which BrdU is used in place of radioisotope incorporation shows some promise but, in its present form, even with the use of a statistical endpoint, lacks some of the sensitivity of the standard method. The challenge is to develop strategies for further refinement of this approach.

  19. Multi-scale structure and topological anomaly detection via a new network statistic: The onion decomposition.

    PubMed

    Hébert-Dufresne, Laurent; Grochow, Joshua A; Allard, Antoine

    2016-08-18

    We introduce a network statistic that measures structural properties at the micro-, meso-, and macroscopic scales, while still being easy to compute and interpretable at a glance. Our statistic, the onion spectrum, is based on the onion decomposition, which refines the k-core decomposition, a standard network fingerprinting method. The onion spectrum is exactly as easy to compute as the k-cores: It is based on the stages at which each vertex gets removed from a graph in the standard algorithm for computing the k-cores. Yet, the onion spectrum reveals much more information about a network, and at multiple scales; for example, it can be used to quantify node heterogeneity, degree correlations, centrality, and tree- or lattice-likeness. Furthermore, unlike the k-core decomposition, the combined degree-onion spectrum immediately gives a clear local picture of the network around each node which allows the detection of interesting subgraphs whose topological structure differs from the global network organization. This local description can also be leveraged to easily generate samples from the ensemble of networks with a given joint degree-onion distribution. We demonstrate the utility of the onion spectrum for understanding both static and dynamic properties on several standard graph models and on many real-world networks.
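
    Because the onion decomposition is a bookkeeping refinement of standard k-core peeling, it is easy to compute with common graph tooling. A minimal sketch using NetworkX, whose onion_layers function records the peeling stage of each vertex; the "spectrum" here is simply the layer-size sequence:

        import networkx as nx
        from collections import Counter

        G = nx.karate_club_graph()      # any simple graph; a stand-in example
        layers = nx.onion_layers(G)     # node -> peeling stage (onion layer)
        cores = nx.core_number(G)       # node -> coreness, for comparison

        # Onion spectrum: how many nodes are removed at each peeling stage.
        spectrum = Counter(layers.values())
        for layer in sorted(spectrum):
            print(f"layer {layer}: {spectrum[layer]} nodes")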

  20. An entropy-based statistic for genomewide association studies.

    PubMed

    Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao

    2005-07-01

    Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
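
    The paper's exact statistic is not reproduced here, but the contrast it draws can be illustrated. A minimal sketch, assuming hypothetical allele counts, that computes the standard chi-square statistic and, as a stand-in for the entropy-based idea, a Kullback-Leibler divergence between case and control allele frequencies (a nonlinear function of the frequencies; illustrative only, not the authors' definition):

        import numpy as np
        from scipy.stats import chi2_contingency, entropy

        # Hypothetical allele counts at one marker (rows: case/control; columns: alleles).
        counts = np.array([[220, 180],
                           [180, 220]])

        chi2, p, dof, _ = chi2_contingency(counts)
        print(f"chi-square = {chi2:.2f}, p = {p:.4f}")

        # Entropy-based contrast: relative entropy between case and control frequencies.
        p_case = counts[0] / counts[0].sum()
        p_ctrl = counts[1] / counts[1].sum()
        print(f"KL(case || control) = {entropy(p_case, p_ctrl):.4f}")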

  1. 77 FR 34044 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-08

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS); Subcommittee on Standards. Time and Date: June 20, 2012, 9 a.m.-5 p.m. EST..., Executive Secretary, NCVHS, National Center for Health Statistics, Centers for Disease Control and...

  2. [Assessment comparison between area sampling and personal sampling noise measurement in new thermal power plant].

    PubMed

    Zhang, Hua; Chen, Qing-song; Li, Nan; Hua, Yan; Zeng, Lin; Xu, Guo-yang; Tao, Li-yuan; Zhao, Yi-ming

    2013-05-01

    To compare the results of noise hazard evaluations based on area sampling and personal sampling in a new thermal power plant and to analyze the similarities and differences between the two measurement methods. According to Measurement of Physical Agents in Workplace, Part 8: Noise (GBZ/T 189.8-2007), area sampling was performed at various operating points for noise measurement, while workers in different types of work wore noise dosimeters for personal noise exposure measurement. The two measurement methods were used to evaluate the level of noise hazards in the enterprise according to the corresponding occupational health standards, and the evaluation results were compared. Area sampling was performed at 99 operating points; the mean noise level was 88.9 ± 11.1 dB(A) (range, 51.3-107.0 dB(A)), with an over-standard rate of 75.8%. Personal sampling was performed (73 person-times); the mean noise level was 79.3 ± 6.3 dB(A), with an over-standard rate of 6.6% (16/241). There was a statistically significant difference in the over-standard rate between the evaluation results of the two measurement methods (χ² = 53.869, P < 0.001). Because of the characteristics of the work in new thermal power plants, noise hazard evaluation based on area sampling cannot be used in place of personal noise exposure measurement among workers. Personal sampling should be used for noise measurement in new thermal power plants.

  3. Measuring socioeconomic status in multicountry studies: results from the eight-country MAL-ED study

    PubMed Central

    2014-01-01

    Background There is no standardized approach to comparing socioeconomic status (SES) across multiple sites in epidemiological studies. This is particularly problematic when cross-country comparisons are of interest. We sought to develop a simple measure of SES that would perform well across diverse, resource-limited settings. Methods A cross-sectional study was conducted with 800 children aged 24 to 60 months across eight resource-limited settings. Parents were asked to respond to a household SES questionnaire, and the height of each child was measured. A statistical analysis was done in two phases. First, the best approach for selecting and weighting household assets as a proxy for wealth was identified. We compared four approaches to measuring wealth: maternal education, principal components analysis, Multidimensional Poverty Index, and a novel variable selection approach based on the use of random forests. Second, the selected wealth measure was combined with other relevant variables to form a more complete measure of household SES. We used child height-for-age Z-score (HAZ) as the outcome of interest. Results Mean age of study children was 41 months, 52% were boys, and 42% were stunted. Using cross-validation, we found that random forests yielded the lowest prediction error when selecting assets as a measure of household wealth. The final SES index included access to improved water and sanitation, eight selected assets, maternal education, and household income (the WAMI index). A 25% difference in the WAMI index was positively associated with a difference of 0.38 standard deviations in HAZ (95% CI 0.22 to 0.55). Conclusions Statistical learning methods such as random forests provide an alternative to principal components analysis in the development of SES scores. Results from this multicountry study demonstrate the validity of a simplified SES index. With further validation, this simplified index may provide a standard approach for SES adjustment across resource-limited settings. PMID:24656134
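
    A minimal sketch of the asset-selection idea, assuming hypothetical data: a random forest is fit to predict HAZ from candidate household assets, and its variable importances rank the assets, in place of principal components analysis. (The names and toy outcome below are illustrative, not the MAL-ED variables.)

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        assets = [f"asset_{i}" for i in range(20)]             # hypothetical indicators
        X = rng.integers(0, 2, size=(800, 20)).astype(float)   # household asset ownership
        haz = 0.1 * X[:, :8].sum(axis=1) + rng.normal(0, 1, 800)  # toy HAZ outcome

        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, haz)
        ranked = sorted(zip(rf.feature_importances_, assets), reverse=True)
        print("top-ranked assets:", [name for _, name in ranked[:8]])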

  4. Summary Statistics for Fun Dough Data Acquired at LLNL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kallman, J S; Morales, K E; Whipple, R E

    Using x-ray computerized tomography (CT), we have characterized the x-ray linear attenuation coefficients (LAC) of a Play Dough™-like product, Fun Dough™, designated as PD. Table 1 gives the first-order statistics for each of four CT measurements, estimated with a Gaussian kernel density estimator (KDE) analysis. The mean values of the LAC range from a high of about 2100 LMHU_D at 100 kVp to a low of about 1100 LMHU_D at 300 kVp. The standard deviation of each measurement is around 1% of the mean. The entropy covers the range from 3.9 to 4.6. Ordinarily, we would model the LAC of the material and compare the modeled values to the measured values. In this case, however, we did not have the composition of the material and therefore did not model the LAC. Using a method recently proposed by Lawrence Livermore National Laboratory (LLNL), we estimate the value of the effective atomic number, Z_eff, to be near 8.5. LLNL prepared about 50 mL of the Fun Dough™ in a polypropylene vial and firmly compressed it immediately prior to the x-ray measurements. Still, layers can plainly be seen in the reconstructed images, indicating that the bulk density of the material in the container is affected by voids and bubbles. We used the computer program IMGREC to reconstruct the CT images. The values of the key parameters used in the data capture and image reconstruction are given in this report. Additional details may be found in the experimental SOP and a separate document. To characterize the statistical distribution of LAC values in each CT image, we first isolated an 80% central-core segment of volume elements ('voxels') lying completely within the specimen, away from the walls of the polypropylene vial. All of the voxels within this central core, including those comprised of voids and inclusions, are included in the statistics. We then calculated the mean value, standard deviation and entropy for (a) the four image segments and for (b) their digital gradient images. (A digital gradient image of a given image was obtained by taking the absolute value of the difference between the initial image and that same image offset by one voxel horizontally, parallel to the rows of the x-ray detector array.) The statistics of the initial image of LAC values are called 'first order statistics'; those of the gradient image, 'second order statistics'.
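
    The first- and second-order statistics described above are straightforward to reproduce on any voxel array. A minimal sketch, assuming synthetic data; entropy here is computed from a histogram rather than the Gaussian KDE used in the report, and the gradient image is the absolute one-voxel horizontal difference, as described:

        import numpy as np
        from scipy.stats import entropy

        def image_stats(img):
            """Mean, standard deviation, and histogram-based entropy of an image."""
            hist, _ = np.histogram(img, bins=256, density=True)
            return img.mean(), img.std(), entropy(hist[hist > 0], base=2)

        rng = np.random.default_rng(0)
        lac = rng.normal(2100.0, 21.0, size=(64, 64))  # hypothetical LAC slice, std ~1% of mean

        grad = np.abs(lac[:, 1:] - lac[:, :-1])        # offset by one voxel horizontally
        print("first order :", image_stats(lac))       # statistics of the LAC image
        print("second order:", image_stats(grad))      # statistics of the gradient image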

  5. Estimation of stature from the foot and its segments in a sub-adult female population of North India

    PubMed Central

    2011-01-01

    Background Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. Methods The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. Results The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. Conclusions The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults. PMID:22104433

  6. Estimation of stature from the foot and its segments in a sub-adult female population of North India.

    PubMed

    Krishan, Kewal; Kanchan, Tanuj; Passi, Neelam

    2011-11-21

    Establishing personal identity is one of the main concerns in forensic investigations. Estimation of stature forms a basic domain of the investigation process in unknown and co-mingled human remains in forensic anthropology case work. The objective of the present study was to set up standards for estimation of stature from the foot and its segments in a sub-adult female population. The sample for the study constituted 149 young females from the Northern part of India. The participants were aged between 13 and 18 years. Besides stature, seven anthropometric measurements that included length of the foot from each toe (T1, T2, T3, T4, and T5 respectively), foot breadth at ball (BBAL) and foot breadth at heel (BHEL) were measured on both feet in each participant using standard methods and techniques. The results indicated that statistically significant differences (p < 0.05) between left and right feet occur in both the foot breadth measurements (BBAL and BHEL). Foot length measurements (T1 to T5 lengths) did not show any statistically significant bilateral asymmetry. The correlation between stature and all the foot measurements was found to be positive and statistically significant (p-value < 0.001). Linear regression models and multiple regression models were derived for estimation of stature from the measurements of the foot. The present study indicates that anthropometric measurements of foot and its segments are valuable in the estimation of stature. Foot length measurements estimate stature with greater accuracy when compared to foot breadth measurements. The present study concluded that foot measurements have a strong relationship with stature in the sub-adult female population of North India. Hence, the stature of an individual can be successfully estimated from the foot and its segments using different regression models derived in the study. The regression models derived in the study may be applied successfully for the estimation of stature in sub-adult females, whenever foot remains are brought for forensic examination. Stepwise multiple regression models tend to estimate stature more accurately than linear regression models in female sub-adults.
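
    A minimal sketch of the regression step, assuming hypothetical paired measurements: an ordinary least squares fit of stature on a single foot length, of the kind derived in the study.

        import numpy as np

        # Hypothetical paired measurements (cm): foot length (e.g., T1) and stature.
        foot = np.array([22.1, 23.4, 22.8, 24.0, 23.1, 22.5, 23.8, 24.3])
        stature = np.array([152.0, 158.5, 155.2, 161.0, 156.4, 153.8, 160.1, 162.5])

        slope, intercept = np.polyfit(foot, stature, 1)   # linear regression model
        r = np.corrcoef(foot, stature)[0, 1]              # correlation with stature
        print(f"stature = {intercept:.1f} + {slope:.2f} x foot length (r = {r:.3f})")
        print(f"estimated stature for a 23.0 cm foot: {intercept + slope * 23.0:.1f} cm")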

  7. Statistical analysis of fNIRS data: a comprehensive review.

    PubMed

    Tak, Sungho; Ye, Jong Chul

    2014-01-15

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
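
    Many of the inference techniques listed above reduce, per channel, to a general linear model. A minimal sketch, assuming hypothetical data: ordinary least squares with a simple boxcar task regressor and a standard t-test on the task coefficient, ignoring the serially correlated errors and hemodynamic modeling the review treats at length.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n = 300
        boxcar = (np.arange(n) % 60 < 30).astype(float)  # hypothetical on/off task design
        X = np.column_stack([boxcar, np.ones(n)])        # task regressor + intercept
        y = 0.5 * boxcar + rng.normal(0, 1, n)           # one toy fNIRS channel

        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        dof = n - X.shape[1]
        se = np.sqrt(rss[0] / dof * np.linalg.inv(X.T @ X)[0, 0])
        t = beta[0] / se                                 # t-test on the task effect
        print(f"beta = {beta[0]:.3f}, t = {t:.2f}, p = {2 * stats.t.sf(abs(t), dof):.4f}")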

  8. Uncertainty Analysis of Inertial Model Attitude Sensor Calibration and Application with a Recommended New Calibration Method

    NASA Technical Reports Server (NTRS)

    Tripp, John S.; Tcheng, Ping

    1999-01-01

    Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.

  9. The Standard Model in the history of the Natural Sciences, Econometrics, and the social sciences

    NASA Astrophysics Data System (ADS)

    Fisher, W. P., Jr.

    2010-07-01

    In the late 18th and early 19th centuries, scientists appropriated Newton's laws of motion as a model for the conduct of any other field of investigation that would purport to be a science. This early form of a Standard Model eventually informed the basis of analogies for the mathematical expression of phenomena previously studied qualitatively, such as cohesion, affinity, heat, light, electricity, and magnetism. James Clerk Maxwell is known for his repeated use of a formalized version of this method of analogy in lectures, teaching, and the design of experiments. Economists transferring skills learned in physics made use of the Standard Model, especially after Maxwell demonstrated the value of conceiving it in abstract mathematics instead of as a concrete and literal mechanical analogy. Haavelmo's probability approach in econometrics and R. Fisher's Statistical Methods for Research Workers brought a statistical approach to bear on the Standard Model, quietly reversing the perspective of economics and the social sciences relative to that of physics. Where physicists, and Maxwell in particular, intuited scientific method as imposing stringent demands on the quality and interrelations of data, instruments, and theory in the name of inferential and comparative stability, statistical models and methods disconnected theory from data by removing the instrument as an essential component. New possibilities for reconnecting economics and the social sciences to Maxwell's sense of the method of analogy are found in Rasch's probabilistic models for measurement.

  10. New methods and results for quantification of lightning-aircraft electrodynamics

    NASA Technical Reports Server (NTRS)

    Pitts, Felix L.; Lee, Larry D.; Perala, Rodney A.; Rudolph, Terence H.

    1987-01-01

    The NASA F-106 collected data on the rates of change of electromagnetic parameters on the aircraft surface during over 700 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 ft (4,570 to 12,190 m). These in situ measurements provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining indirect lightning effects on aircraft. These data are used to update previous lightning criteria and standards developed over the years from ground-based measurements. The proposed standards will be the first to reflect actual aircraft responses measured at flight altitudes. Nonparametric maximum likelihood estimates of the distribution of the peak electromagnetic rates of change for consideration in the new standards are obtained from peak recorder data for multiple-strike flights. The linear and nonlinear modeling techniques developed provide the means to interpret and understand the direct-strike electromagnetic data acquired on the F-106. The reasonable agreement between model results and measured responses provides increased confidence that the models may be credibly applied to other aircraft.

  11. The Mine Safety and Health Administration's criterion threshold value policy increases miners' risk of pneumoconiosis.

    PubMed

    Weeks, James L

    2006-06-01

    The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. The policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process, and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit, is contrary to the precautionary principle, does not share the burden of uncertainty fairly, and employs an inappropriate standard of proof. MSHA's own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.

  12. Growth rate measurement in free jet experiments

    NASA Astrophysics Data System (ADS)

    Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent

    2017-07-01

    An experimental method was developed to measure the growth rate of the capillary instability of free liquid jets. The method uses a standard shadowgraph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles obtained under the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free-falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian one and a viscoelastic one. Growth rate measurements are found to be in good agreement with linear stability theory in the Rayleigh regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads-on-a-string structures on the jet. This last result demonstrates the capability of the present technique to explore the dynamics of viscoelastic liquid jets in future work.
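
    In the linear regime the perturbation amplitude grows exponentially, so the growth rate can be read off as the slope of the logarithm of the across-ensemble standard deviation. A minimal sketch with synthetic profiles (time standing in for the downstream coordinate; random phases emulate a set of shadowgraph acquisitions under identical conditions):

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 0.02, 50)          # s, observation times
        omega_true, eps0 = 150.0, 1e-4          # true growth rate (1/s), initial amplitude

        # Ensemble of radius perturbations with random phases.
        phases = rng.uniform(0.0, 2.0 * np.pi, size=(200, 1))
        profiles = eps0 * np.exp(omega_true * t) * np.cos(phases)

        sigma = profiles.std(axis=0)            # standard deviation across the ensemble
        omega_fit, log_amp = np.polyfit(t, np.log(sigma), 1)
        print(f"fitted growth rate: {omega_fit:.1f} 1/s (true {omega_true})")
        print(f"intercept amplitude: {np.exp(log_amp):.2e} (= eps0/sqrt(2) for random phases)")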

  13. Quasi-probabilities in conditioned quantum measurement and a geometric/statistical interpretation of Aharonov's weak value

    NASA Astrophysics Data System (ADS)

    Lee, Jaeha; Tsutsui, Izumi

    2017-05-01

    We show that the joint behavior of an arbitrary pair of (generally noncommuting) quantum observables can be described by quasi-probabilities, which are an extended version of the standard probabilities used for describing the outcome of measurement for a single observable. The physical situations that require these quasi-probabilities arise when one considers quantum measurement of an observable conditioned by some other variable, with the notable example being the weak measurement employed to obtain Aharonov's weak value. Specifically, we present a general prescription for the construction of quasi-joint probability (QJP) distributions associated with a given combination of observables. These QJP distributions are introduced in two complementary approaches: one from a bottom-up, strictly operational construction realized by examining the mathematical framework of the conditioned measurement scheme, and the other from a top-down viewpoint realized by applying the results of the spectral theorem for normal operators and their Fourier transforms. It is then revealed that, for a pair of simultaneously measurable observables, the QJP distribution reduces to the unique standard joint probability distribution of the pair, whereas for a noncommuting pair there exists an inherent indefiniteness in the choice of such QJP distributions, admitting a multitude of candidates that may equally be used for describing the joint behavior of the pair. In the course of our argument, we find that the QJP distributions furnish the space of operators in the underlying Hilbert space with their characteristic geometric structures such that the orthogonal projections and inner products of observables can be given statistical interpretations as, respectively, “conditionings” and “correlations”. The weak value Aw for an observable A is then given a geometric/statistical interpretation as either the orthogonal projection of A onto the subspace generated by another observable B, or equivalently, as the conditioning of A given B with respect to the QJP distribution under consideration.
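
    For reference, the weak value whose geometric/statistical reading is discussed above has the standard textbook form, for a pre-selected state |ψ⟩, post-selection |φ⟩ and observable A (a reminder of the usual definition, not a result unique to this paper):

        A_w = \frac{\langle \varphi | \hat{A} | \psi \rangle}{\langle \varphi | \psi \rangle}

    In the paper's language, this is the quantity that acquires the interpretation of a conditioning of A given the post-selection variable, with respect to the chosen quasi-joint probability distribution.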

  14. Malocclusion Class II division 1 skeletal and dental relationships measured by cone-beam computed tomography.

    PubMed

    Xu, Yiling; Oh, Heesoo; Lagravère, Manuel O

    2017-09-01

    The purpose of this study was to locate traditionally used landmarks in two-dimensional (2D) images and newly suggested ones in three-dimensional (3D) images (cone-beam computed tomographies [CBCTs]) and to determine possible relationships between them for categorizing patients with Class II-1 malocclusion. CBCTs from 30 patients diagnosed with Class II-1 malocclusion were obtained from the University of Alberta Graduate Orthodontic Program database. The reconstructed images were downloaded and visualized using the software platform AVIZO®. Forty-two landmarks were chosen, and their coordinates were obtained and analyzed using linear and angular measurements. Ten images were analyzed three times to determine the reliability and measurement error of each landmark using the intraclass correlation coefficient (ICC). Descriptive statistics were computed using the SPSS statistical package to determine any relationships. ICC values were excellent for all landmarks in all axes, with the highest measurement error being 2 mm in the y-axis for the left Gonion landmark. Linear and angular measurements were calculated using the coordinates of each landmark. Descriptive statistics showed that the linear and angular measurements used in the 2D images did not correlate well with the 3D images. The lowest standard deviation obtained was 0.6709 for S-GoR/N-Me, with a mean of 0.8016; the highest was 20.20704 for ANS-InfraL, with a mean of 41.006. The traditional landmarks used for 2D malocclusion analysis show good reliability when transferred to 3D images. However, they did not reveal specific skeletal or dental patterns when analyzing 3D images for malocclusion. Thus, another technique should be considered when classifying 3D CBCT images for Class II-1 malocclusion. Copyright © 2017 CEO. Published by Elsevier Masson SAS. All rights reserved.

  15. [Development of a multimedia learning DM diet education program using standardized patients and analysis of its effects on clinical competency and learning satisfaction for nursing students].

    PubMed

    Hyun, Kyung Sun; Kang, Hyun Sook; Kim, Won Ock; Park, Sunhee; Lee, Jia; Sok, Sohyune

    2009-04-01

    The purpose of this study was to develop a multimedia learning program for diabetes mellitus (DM) diet education using standardized patients and to examine the effects of the program on educational skills, communication skills, DM diet knowledge and learning satisfaction. The study employed a randomized-control, posttest, non-synchronized design. The participants were 108 third-year nursing students (52 in the experimental group, 56 in the control group) at K university in Seoul, Korea. The experimental group had regular lectures plus the multimedia learning program for DM diet education using standardized patients, while the control group had regular lectures only. The DM educational skills were measured by trained research assistants. The students who received the multimedia learning program scored higher for DM diet educational skills, communication skills and DM diet knowledge than the control group. Learning satisfaction of the experimental group was higher than that of the control group, but the difference was not statistically significant. Clinical competency was thus improved for students receiving the multimedia learning program for DM diet education using standardized patients, but there was no statistically significant effect on learning satisfaction. The nursing education system needs to develop and apply more multimedia materials for education and to use standardized patients effectively.

  16. The Next-Generation PCR-Based Quantification Method for Ambient Waters: Digital PCR.

    PubMed

    Cao, Yiping; Griffith, John F; Weisberg, Stephen B

    2016-01-01

    Real-time quantitative PCR (qPCR) is increasingly being used for ambient water monitoring, but development of digital polymerase chain reaction (digital PCR) has the potential to further advance the use of molecular techniques in such applications. Digital PCR refines qPCR by partitioning the sample into thousands to millions of miniature reactions that are examined individually for binary endpoint results, with DNA density calculated from the fraction of positives using Poisson statistics. This direct quantification removes the need for standard curves, eliminating the labor and materials associated with creating and running standards with each batch, and removing biases associated with standard variability and mismatching amplification efficiency between standards and samples. Confining reactions and binary endpoint measurements to small partitions also leads to other performance advantages, including reduced susceptibility to inhibition, increased repeatability and reproducibility, and increased capacity to measure multiple targets in one analysis. As such, digital PCR is well suited for ambient water monitoring applications and is particularly advantageous as molecular methods move toward autonomous field application.
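
    The Poisson step mentioned above is the core of digital quantification: if a fraction p of partitions ends up positive, the mean number of target copies per partition is λ = −ln(1 − p), and concentration follows by dividing by the partition volume, with no standard curve. A minimal sketch, assuming hypothetical counts and partition volume:

        import math

        positives, partitions = 12_000, 20_000    # hypothetical endpoint counts
        partition_volume_ul = 0.85e-3             # hypothetical partition volume (microliters)

        p = positives / partitions
        lam = -math.log(1.0 - p)                  # mean copies per partition (Poisson)
        conc = lam / partition_volume_ul          # copies per microliter of reaction
        print(f"lambda = {lam:.3f} copies/partition; concentration = {conc:.0f} copies/uL")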

  17. Exocrine Dysfunction Correlates with Endocrinal Impairment of Pancreas in Type 2 Diabetes Mellitus.

    PubMed

    Prasanna Kumar, H R; Gowdappa, H Basavana; Hosmani, Tejashwi; Urs, Tejashri

    2018-01-01

    Diabetes mellitus (DM) is a chronic abnormal metabolic condition that manifests as an elevated blood sugar level over a prolonged period. The pancreatic endocrine system is generally affected in diabetes, but abnormal exocrine function also often manifests owing to its proximity to the endocrine system. Fecal elastase-1 (FE-1) has been found to be an ideal biomarker of the exocrine insufficiency of the pancreas. This study was conducted to assess exocrine dysfunction of the pancreas in patients with type 2 DM (T2DM) by measuring FE-1 levels and to associate the level of hyperglycemia with exocrine pancreatic dysfunction. A prospective, cross-sectional comparative study was conducted on both T2DM patients and healthy nondiabetic volunteers. FE-1 levels were measured using a commercial kit (Human Pancreatic Elastase ELISA BS 86-01 from Bioserv Diagnostics). Data analysis was performed using statistical parameters such as the mean, standard deviation and standard error, the independent-samples t-test, and the Chi-square test/cross tabulation in SPSS for Windows version 20.0. A statistically nonsignificant (P = 0.5051) relationship between FE-1 deficiency and age was obtained, implying that age is not a contributing factor toward exocrine pancreatic insufficiency among diabetic patients. A statistically significant correlation (P = 0.003) between glycated hemoglobin and FE-1 levels was also noted. The associations of retinopathy (P = 0.001) and peripheral pulses (P = 0.001) with FE-1 levels were found to be statistically significant. This study validates the benefit of FE-1 estimation as a surrogate marker of exocrine pancreatic insufficiency, which otherwise remains unmanifest and subclinical.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiaogang; Biesiada, Marek; Cao, Shuo

    A new compilation of 120 angular-size/redshift data points for compact radio quasars from very-long-baseline interferometry (VLBI) surveys motivates us to revisit the interaction between dark energy and dark matter with probes reaching high redshifts z ∼ 3.0. In this paper, we investigate observational constraints on different phenomenological interacting dark energy (IDE) models, with the intermediate-luminosity radio quasars acting as individual standard rulers, combined with the newest BAO and CMB observations from Planck acting as statistical rulers. The results obtained from the MCMC method and other statistical methods, including the Figure of Merit and information criteria, show that: (1) Compared with the current standard candle and standard clock data, the intermediate-luminosity radio quasar standard rulers, probing much higher redshifts, provide comparable constraints on different IDE scenarios. (2) The strong degeneracies between the interaction term and the Hubble constant may help alleviate the tension in H_0 between the recent Planck and HST measurements. (3) Concerning the ranking of competing dark energy models, IDE models with more free parameters are substantially penalized by the BIC criterion, which agrees very well with previous results derived from other cosmological probes.

  19. 78 FR 65317 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-31

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: November 12, 2013 8:30 a.m.-5:30 p.m. EST. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...

  20. 78 FR 54470 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-04

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: September 18, 2013 8:30 a.m.-5:00 p.m. EDT. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...

  1. 78 FR 942 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-01-07

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: February 27, 2013 9:30 a.m.-5:00 p.m... electronic claims attachments. The National Committee on Vital Health Statistics is the public advisory body...

  2. 78 FR 34100 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-06-06

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: June 17, 2013 1:00 p.m.-5:00 p.m. e.d..., National Center for Health Statistics, 3311 Toledo Road, Auditorium B & C, Hyattsville, Maryland 20782...

  3. Nutrient-enriched formula versus standard term formula for preterm infants following hospital discharge.

    PubMed

    Henderson, G; Fahey, T; McGuire, W

    2007-10-17

    Preterm infants are often growth-restricted at hospital discharge. Feeding infants after hospital discharge with nutrient-enriched formula rather than standard term formula might facilitate "catch-up" growth and improve development. To determine the effect of feeding nutrient-enriched formula compared with standard term formula on growth and development for preterm infants following hospital discharge. The standard search strategy of the Cochrane Neonatal Review Group was used. This included searches of the Cochrane Central Register of Controlled Trials (CENTRAL, The Cochrane Library, Issue 2, 2007), MEDLINE (1966 - May 2007), EMBASE (1980 - May 2007), CINAHL (1982 - May 2007), conference proceedings, and previous reviews. Randomised or quasi-randomised controlled trials that compared the effect of feeding preterm infants following hospital discharge with nutrient-enriched formula versus standard term formula were included. Data were extracted using the standard methods of the Cochrane Neonatal Review Group, with separate evaluation of trial quality and data extraction by two authors, and synthesis of data using weighted mean difference and a fixed effects model for meta-analysis. Seven trials were found that were eligible for inclusion. These recruited a total of 631 infants and were generally of good methodological quality. The trials found little evidence that feeding with nutrient-enriched formula milk affected growth and development. Because of differences in the way individual trials measured and presented outcomes, data synthesis was limited. Growth data from two trials found that, at six months post-term, infants fed nutrient-enriched formula had statistically significantly lower weights [weighted mean difference: -601 (95% confidence interval -1028, -174) grams], lengths [-18.8 (-30.0, -7.6) millimetres], and head circumferences [-10.2 (-18.0, -2.4) millimetres] than infants fed standard term formula. At 12 to 18 months post-term, meta-analyses of data from three trials did not find any statistically significant differences in growth parameters; however, these meta-analyses demonstrated statistical heterogeneity. Meta-analyses of data from two trials did not reveal a statistically significant difference in Bayley Mental Development or Psychomotor Development Indices. There are not yet any data on growth or development through later childhood. The available data do not provide strong evidence that feeding preterm infants following hospital discharge with nutrient-enriched formula compared with standard term formula affects growth rates or development up to 18 months post-term.

  4. On Teaching about the Coefficient of Variation in Introductory Statistics Courses

    ERIC Educational Resources Information Center

    Trafimow, David

    2014-01-01

    The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
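
    A minimal numeric illustration, assuming toy values: the coefficient of variation expresses the standard deviation as a fraction of the mean, so the same spread reads very differently at different means.

        def cv(mean, sd):
            """Coefficient of variation: standard deviation relative to the mean."""
            return sd / mean

        # The same sd = 5 is large relative to a mean of 20, small relative to 200.
        print(cv(20, 5))    # 0.25
        print(cv(200, 5))   # 0.025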

  5. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  6. Statistical Analysis of a Round-Robin Measurement Survey of Two Candidate Materials for a Seebeck Coefficient Standard Reference Material

    DTIC Science & Technology

    2009-02-01

    data was linearly fit, and the slope yielded the Seebeck coefficient. A small resistor was epoxied to the top of the sample, and the opposite end... space probes in its radioisotope thermoelectric generators (RTGs) and is of current interest to automobile manufacturers to supply additional power... resistivity or conductivity, thermal conductivity, and Seebeck coefficient. These required measurements are demanding, especially the thermal

  7. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.
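
    A minimal sketch of the variance-stabilization idea, assuming synthetic data: the arcsinh transform commonly applied to flow cytometry fluorescence, with its cofactor chosen to minimize Bartlett's statistic across the populations being compared. This illustrates the per-channel approach in the spirit of flowVS, not its exact implementation.

        import numpy as np
        from scipy.stats import bartlett

        rng = np.random.default_rng(0)
        # Two hypothetical cell populations whose spread grows with the mean.
        pops = [rng.normal(m, 0.1 * m, 2000) for m in (500.0, 5000.0)]

        def bartlett_stat(cofactor):
            transformed = [np.arcsinh(p / cofactor) for p in pops]
            return bartlett(*transformed).statistic   # homogeneity-of-variance statistic

        cofactors = np.logspace(0, 4, 50)
        best = min(cofactors, key=bartlett_stat)      # cofactor stabilizing the variances
        print(f"selected cofactor: {best:.1f}")
        print("stabilized variances:", [float(np.arcsinh(p / best).var()) for p in pops])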

  8. Development and clinical application of a length-adjustable water phantom for total body irradiation.

    PubMed

    Chen, Zhi-Wei; Yao, Sheng-Yu; Zhang, Tie-Ning; Zhu, Zhen-Hua; Hu, Zhe-Kai; Lu, Xun

    2012-08-01

    A new type of water phantom, specialized for absorbed dose measurement in total body irradiation (TBI) treatment, was developed. Plexiglas plates 10 mm thick were arranged to form a cube with an edge length of 300 mm. A sleeve-type piston was installed on the side wall, and a tubular Plexiglas piston was positioned inside the sleeve. By pushing and pulling the piston, the length of the self-made water phantom could be varied to match patients' physical sizes. To compare the international standard water phantom with the length-adjustable and Plexiglas phantoms, the absorbed dose for 6-MV X-rays was measured by an ionisation chamber at different depths in the three phantoms. In 70 TBI cases, midplane doses were measured using the length-adjustable and Plexiglas phantoms to simulate human dimensions, and dose validation was carried out synchronously. There were no statistically significant differences (p > 0.05) between the data from the international standard water phantom and the self-designed one. There were statistically significant differences (p < 0.05) between the data from the standard phantom and the Plexiglas one; moreover, the absolute difference correlated positively with the depth of the detector in the Plexiglas phantom. In the clinical treatment data, the differences between the prescription doses and the validation data collected from the self-designed water phantom were all <1%. However, the differences from the Plexiglas phantom increased gradually from +0.77 to +2.30% with increasing body width; the difference clearly correlated positively with body width. The results proved that the new length-adjustable water phantom simulates human dimensions more accurately than the Plexiglas phantom.

  9. Approaches for estimating minimal clinically important differences in systemic lupus erythematosus.

    PubMed

    Rai, Sharan K; Yazdany, Jinoos; Fortin, Paul R; Aviña-Zubieta, J Antonio

    2015-06-03

    A minimal clinically important difference (MCID) is an important concept used to determine whether a medical intervention improves perceived outcomes in patients. Prior to the introduction of the concept in 1989, studies focused primarily on statistical significance. As most recent clinical trials in systemic lupus erythematosus (SLE) have failed to show significant effects, determining a clinically relevant threshold for outcome scores (that is, the MCID) of existing instruments may be critical for conducting and interpreting meaningful clinical trials as well as for facilitating the establishment of treatment recommendations for patients. To that effect, methods to determine the MCID can be divided into two well-defined categories: distribution-based and anchor-based approaches. Distribution-based approaches are based on statistical characteristics of the obtained samples. There are various methods within the distribution-based approach, including the standard error of measurement, the standard deviation, the effect size, the minimal detectable change, the reliable change index, and the standardized response mean. Anchor-based approaches compare the change in a patient-reported outcome to a second, external measure of change (that is, one that is more clearly understood, such as a global assessment), which serves as the anchor. Finally, the Delphi technique can be applied as an adjunct to defining a clinically important difference. Despite an abundance of methods reported in the literature, little work in MCID estimation has been done in the context of SLE. As the MCID can help determine the effect of a given therapy on a patient and add meaning to statistical inferences made in clinical research, we believe there ought to be renewed focus on this area. Here, we provide an update on the use of MCIDs in clinical research, review some of the work done in this area in SLE, and propose an agenda for future research.
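
    A minimal sketch of two of the distribution-based quantities named above, using their standard formulas (the toy SD and reliability values are assumptions): the standard error of measurement from test-retest reliability, and the minimal detectable change at 95% confidence.

        import math

        def sem(sd, reliability):
            """Standard error of measurement from the sample SD and test-retest reliability."""
            return sd * math.sqrt(1.0 - reliability)

        def mdc95(sd, reliability):
            """Minimal detectable change at 95% confidence for a test-retest difference."""
            return 1.96 * math.sqrt(2.0) * sem(sd, reliability)

        # Hypothetical patient-reported outcome: SD = 12 points, reliability = 0.85.
        print(f"SEM   = {sem(12, 0.85):.2f} points")
        print(f"MDC95 = {mdc95(12, 0.85):.2f} points")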

  10. Childhood blindness: a new form for recording causes of visual loss in children.

    PubMed Central

    Gilbert, C.; Foster, A.; Négrel, A. D.; Thylefors, B.

    1993-01-01

    The new standardized form for recording the causes of visual loss in children is accompanied by coding instructions and by a database for statistical analysis. The aim is to record the causes of childhood visual loss, with an emphasis on preventable and treatable causes, so that appropriate control measures can be planned. With this standardized methodology, it will be possible to monitor the changing patterns of childhood blindness over a period of time in response to changes in health care services, specific interventions, and socioeconomic development. PMID:8261552

  11. Study on Standard Fatigue Vehicle Load Model

    NASA Astrophysics Data System (ADS)

    Huang, H. Y.; Zhang, J. P.; Li, Y. H.

    2018-02-01

    Based on measured truck data from three arterial expressways in Guangdong Province, a statistical analysis of truck weight was conducted according to axle number. A standard fatigue vehicle model applicable to industrial areas was obtained using the equivalence-damage principle, Miner's linear accumulation law, the water discharge method and damage-ratio theory. Compared with the fatigue vehicle model specified by the current bridge design code, the proposed model has better applicability and provides a useful reference for the fatigue design of bridges in China.
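
    A minimal sketch of the equivalence-damage step, assuming a hypothetical axle-load spectrum: under Miner's linear accumulation with an S-N exponent m, damage per passage scales as load^m, so the equivalent constant load is the weighted power mean of order m (m = 3 below is a placeholder, not the study's calibrated value).

        import numpy as np

        loads = np.array([60.0, 100.0, 140.0, 180.0])   # kN, hypothetical axle loads
        freqs = np.array([0.50, 0.30, 0.15, 0.05])      # relative frequencies
        m = 3.0                                          # assumed S-N curve exponent

        # Miner's rule: total damage ~ sum(f_i * P_i^m), so the single load causing
        # the same damage per passage is the weighted power mean of order m.
        p_eq = (np.sum(freqs * loads**m) / np.sum(freqs)) ** (1.0 / m)
        print(f"equivalent fatigue axle load: {p_eq:.1f} kN")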

  12. Power of tests for comparing trend curves with application to national immunization survey (NIS).

    PubMed

    Zhao, Zhen

    2011-02-28

    To develop statistical tests for comparing trend curves of study outcomes between two socio-demographic strata across consecutive time points, and to compare the statistical power of the proposed tests under different trend-curve data, three statistical tests were proposed. For large sample sizes, with an assumption of independent normality among strata and across consecutive time points, Z and Chi-square test statistics were developed; these are functions of the outcome estimates and their standard errors at each of the study time points for the two strata. For small sample sizes under the independent normality assumption, an F-test statistic was derived, which is a function of the sample sizes of the two strata and the parameters estimated across the study period. If two trend curves are approximately parallel, the power of the Z-test is consistently higher than that of both the Chi-square and F-tests. If two trend curves cross with low interaction, the power of the Z-test is higher than or equal to that of both the Chi-square and F-tests; at high interaction, however, the powers of the Chi-square and F-tests exceed that of the Z-test. A measure of the interaction of two trend curves was defined. These tests were applied to comparing trend curves of vaccination coverage estimates for standard vaccine series using National Immunization Survey (NIS) 2000-2007 data. Copyright © 2011 John Wiley & Sons, Ltd.
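
    A minimal sketch of the large-sample comparison, assuming hypothetical estimates: a two-sided Z test for two independent strata at a single time point, built from the estimates and their standard errors (the paper's versions pooled across time points are not reproduced here).

        import math
        from scipy.stats import norm

        def z_test(est1, se1, est2, se2):
            """Two-sided Z test for two independent estimates with standard errors."""
            z = (est1 - est2) / math.sqrt(se1**2 + se2**2)
            return z, 2.0 * norm.sf(abs(z))

        # Hypothetical vaccination coverage estimates (%) for two strata.
        z, p = z_test(81.3, 1.2, 77.8, 1.4)
        print(f"Z = {z:.2f}, p = {p:.4f}")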

  13. Towards a well-founded and reproducible snow load map for Austria

    NASA Astrophysics Data System (ADS)

    Winkler, Michael; Schellander, Harald

    2017-04-01

    "EN 1991-1-3 Eurocode 1: Part 1-3: Snow Loads" provides standard for the determination of the snow load to be used for the structural design of buildings etc. Since 2006 national specifications for Austria define a snow load map with four "load zones", allowing the calculation of the characteristic ground snow load sk for locations below 1500 m asl. A quadratic regression between altitude and sk is used, as suggested by EN 1991-1-3. The actual snow load map is based on best meteorological practice, but still it is somewhat subjective and non-reproducible. Underlying snow data series often end in the 1980s; in the best case data until about 2005 is used. Moreover, extreme value statistics only rely on the Gumbel distribution and the way in which snow depths are converted to snow loads is generally unknown. This might be enough reasons to rethink the snow load standard for Austria, all the more since today's situation is different to what it was some 15 years ago: Firstly, Austria is rich of multi-decadal, high quality snow depth measurements. These data are not well represented in the actual standard. Secondly, semi-empirical snow models allow sufficiently precise calculations of snow water equivalents and snow loads from snow depth measurements without the need of other parameters like temperature etc. which often are not available at the snow measurement sites. With the help of these tools, modelling of daily snow load series from daily snow depth measurements is possible. Finally, extreme value statistics nowadays offers convincing methods to calculate snow depths and loads with a return period of 50 years, which is the base of sk, and allows reproducible spatial extrapolation. The project introduced here will investigate these issues in order to update the Austrian snow load standard by providing a well-founded and reproducible snow load map for Austria. Not least, we seek for contact with standards bodies of neighboring countries to find intersections as well as to avoid inconsistencies and duplications of effort.

  14. The correlation between relatives on the supposition of genomic imprinting.

    PubMed Central

    Spencer, Hamish G

    2002-01-01

    Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight. PMID:12019254

  15. The correlation between relatives on the supposition of genomic imprinting.

    PubMed

    Spencer, Hamish G

    2002-05-01

    Standard genetic analyses assume that reciprocal heterozygotes are, on average, phenotypically identical. If a locus is subject to genomic imprinting, however, this assumption does not hold. We incorporate imprinting into the standard quantitative-genetic model for two alleles at a single locus, deriving expressions for the additive and dominance components of genetic variance, as well as measures of resemblance among relatives. We show that, in contrast to the case with Mendelian expression, the additive and dominance deviations are correlated. In principle, this correlation allows imprinting to be detected solely on the basis of different measures of familial resemblances, but in practice, the standard error of the estimate is likely to be too large for a test to have much statistical power. The effects of genomic imprinting will need to be incorporated into quantitative-genetic models of many traits, for example, those concerned with mammalian birthweight.

  16. Establishing the traceability of a uranyl nitrate solution to a standard reference material

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jackson, C.H.; Clark, J.P.

    1978-01-01

    A uranyl nitrate solution for use as a Working Calibration and Test Material (WCTM) was characterized using a statistically designed procedure to document traceability to National Bureau of Standards Standard Reference Material SRM-960. A Reference Calibration and Test Material (RCTM) was prepared from SRM-960 uranium metal to approximate the acid and uranium concentration of the WCTM, and this solution was used in the characterization procedure. Details of preparing, handling, and packaging these solutions are covered. Two outside laboratories, each having measurement expertise with a different analytical method, were selected to measure both solutions according to the procedure for characterizing the WCTM. Two different methods were also used for the in-house characterization work. All analytical results were tested for statistical agreement before the WCTM concentration and limit-of-error values were calculated. A concentration value was determined with a relative limit of error (RLE) of approximately 0.03%, which was better than the target RLE of 0.08%. The use of this working material eliminates the expense of using SRMs to fulfill traceability requirements for uranium measurements on this type of material. Several years' supply of uranyl nitrate solution with NBS traceability was produced. The cost of this material was less than 10% of that of an equal quantity of SRM-960 uranium metal.

  17. Guidelines for Assessment and Instruction in Statistics Education (GAISE): extending GAISE into nursing education.

    PubMed

    Hayat, Matthew J

    2014-04-01

    Statistics coursework is usually a core curriculum requirement for nursing students at all degree levels. The American Association of Colleges of Nursing (AACN) establishes curriculum standards for academic nursing programs. However, the AACN provides little guidance on statistics education and does not offer standardized competency guidelines or recommendations about course content or learning objectives. Published standards may be used in the course development process to clarify course content and learning objectives. This article includes suggestions for implementing and integrating recommendations given in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) report into statistics education for nursing students. Copyright 2014, SLACK Incorporated.

  18. Statistical analysis and interpolation of compositional data in materials science.

    PubMed

    Pesenson, Misha Z; Suram, Santosh K; Gregoire, John M

    2015-02-09

    Compositional data are ubiquitous in chemistry and materials science: analysis of elements in multicomponent systems, combinatorial problems, etc., lead to data that are non-negative and sum to a constant (for example, atomic concentrations). The constant sum constraint restricts the sampling space to a simplex instead of the usual Euclidean space. Since statistical measures such as mean and standard deviation are defined for the Euclidean space, traditional correlation studies, multivariate analysis, and hypothesis testing may lead to erroneous dependencies and incorrect inferences when applied to compositional data. Furthermore, composition measurements that are used for data analytics may not include all of the elements contained in the material; that is, the measurements may be subcompositions of a higher-dimensional parent composition. Physically meaningful statistical analysis must yield results that are invariant under the number of composition elements, requiring the application of specialized statistical tools. We present specifics and subtleties of compositional data processing through discussion of illustrative examples. We introduce basic concepts, terminology, and methods required for the analysis of compositional data and utilize them for the spatial interpolation of composition in a sputtered thin film. The results demonstrate the importance of this mathematical framework for compositional data analysis (CDA) in the fields of materials science and chemistry.
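
    A brief hedged sketch of the kind of specialized tool the abstract refers to: the centered log-ratio (clr) transform, a standard CDA device that maps compositions from the simplex into ordinary Euclidean space, where means and covariances behave as expected. The zero-replacement constant is an illustrative choice, not a recommendation:

        import numpy as np

        def clr(x, eps=1e-9):
            # Centered log-ratio transform: log of each part relative to the
            # geometric mean of the composition.
            x = np.asarray(x, float) + eps         # guard against zero parts
            x = x / x.sum(axis=-1, keepdims=True)  # close to a constant sum
            logx = np.log(x)
            return logx - logx.mean(axis=-1, keepdims=True)

        comps = np.array([[0.70, 0.20, 0.10],      # hypothetical atomic fractions
                          [0.55, 0.30, 0.15]])
        print(clr(comps))  # ordinary statistics are valid on this scale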

  19. Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.

    PubMed

    Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas

    2016-11-14

    Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
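
    A hedged numerical sketch of the idea, assuming only multinomially distributed test scores as in the abstract; it approximates the standard error of a Z-score norm statistic by parametric bootstrap rather than by the paper's analytic derivation, and the score range and frequencies are hypothetical:

        import numpy as np

        rng = np.random.default_rng(1)
        # Hypothetical norm-group frequencies for raw scores 0..10
        freq = np.array([2, 5, 9, 14, 18, 20, 15, 9, 5, 2, 1])
        scores = np.arange(len(freq))
        n = int(freq.sum())
        probs = freq / n

        def z_norm(counts, x):
            # Z-score of raw score x relative to the norm group.
            tot = counts.sum()
            mean = (scores * counts).sum() / tot
            var = ((scores - mean) ** 2 * counts).sum() / tot
            return (x - mean) / np.sqrt(var)

        boot = [z_norm(rng.multinomial(n, probs), x=8) for _ in range(5000)]
        print("Z(8) =", z_norm(freq, 8), "bootstrap SE =", np.std(boot))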

  20. Educational Technology-Related Performance of Teaching Faculty in Higher Education: Implications for eLearning Management

    ERIC Educational Resources Information Center

    Larbi-Apau, Josephine A.; Guerra-Lopez, Ingrid; Moseley, James L.; Spannaus, Timothy; Yaprak, Attila

    2017-01-01

    The study examined teaching faculty's educational technology-related performances (ETRP) as a measure for predicting eLearning management in Ghana. A total of 164 valid responses were collected and analyzed against the applied ISTE-NETS-T Performance Standards using descriptive and ANOVA statistics. Results showed an overall moderate performance with the…

  1. 40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...

  2. 40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...

  3. 40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...

  4. 40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...

  5. A Statistical Analysis of Data Used in Critical Decision Making by Secondary School Personnel.

    ERIC Educational Resources Information Center

    Dunn, Charleta J.; Kowitz, Gerald T.

    Guidance decisions depend on the validity of standardized tests and teacher judgment records as measures of student achievement. To test this validity, a sample of 400 high school juniors, randomly selected from two large Gulf Coast area schools, was administered the Iowa Tests of Educational Development. The nine subtest scores and each…

  6. 24 CFR 972.127 - Standards for determining whether a property is viable in the long term.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... must be met: (a) The investment to be made in the development is reasonable. (1) Proposed... of households with at least one full-time worker. Measures to achieve a broader range of household... census or other recent statistical evidence demonstrating some mix of incomes of other households located...

  7. Development and Preliminary Psychometric Properties of the Transition Competence Battery for Deaf Adolescents and Young Adults.

    ERIC Educational Resources Information Center

    Bullis, Michael; Reiman, John

    1992-01-01

    The Transition Competence Battery for Deaf Adolescents and Young Adults (TCB) measures employment and independent living skills. The TCB was standardized on students (N from 180 to 230 for the different subtests) from both mainstreamed and residential settings. Item statistics and subtest reliabilities were adequate; evidence of construct validity…

  8. Examining the Reliability of Interval Level Data Using Root Mean Square Differences and Concordance Correlation Coefficients

    ERIC Educational Resources Information Center

    Barchard, Kimberly A.

    2012-01-01

    This article introduces new statistics for evaluating score consistency. Psychologists usually use correlations to measure the degree of linear relationship between 2 sets of scores, ignoring differences in means and standard deviations. In medicine, biology, chemistry, and physics, a more stringent criterion is often used: the extent to which…

  9. Higher certainty of the laser-induced damage threshold test with a redistributing data treatment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jensen, Lars; Mrohs, Marius; Gyamfi, Mark

    2015-10-15

    As a consequence of its statistical nature, the measurement of the laser-induced damage threshold always carries the risk of over- or underestimating the real threshold value. For the established measurement procedures, the results of S-on-1 (and 1-on-1) tests outlined in the corresponding ISO standard 21254 depend on the number of data points and their distribution over the fluence scale. With the limited space on a test sample, as well as the requirements on test-site separation and beam sizes, the amount of data from one test is restricted. This paper reports on a way to treat damage test data in order to reduce the statistical error and therefore the measurement uncertainty. Three simple assumptions allow for the assignment of one data point to multiple data bins and therefore virtually increase the available database.

  10. Assessing alternative measures of wealth in health research.

    PubMed

    Cubbin, Catherine; Pollack, Craig; Flaherty, Brian; Hayward, Mark; Sania, Ayesha; Vallone, Donna; Braveman, Paula

    2011-05-01

    We assessed whether it would be feasible to replace the standard measure of net worth with simpler measures of wealth in population-based studies examining associations between wealth and health. We used data from the 2004 Survey of Consumer Finances (respondents aged 25-64 years) and the 2004 Health and Retirement Survey (respondents aged 50 years or older) to construct logistic regression models relating wealth to health status and smoking. For our wealth measure, we used the standard measure of net worth as well as 9 simpler measures of wealth, and we compared results among the 10 models. In both data sets and for both health indicators, models using simpler wealth measures generated conclusions about the association between wealth and health that were similar to the conclusions generated by models using net worth. The magnitude and significance of the odds ratios were similar for the covariates in multivariate models, and the model-fit statistics for models using these simpler measures were similar to those for models using net worth. Our findings suggest that simpler measures of wealth may be acceptable in population-based studies of health.
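
    A hedged sketch of the comparison strategy on synthetic data: fit the same logistic regression of a health outcome on two alternative wealth measures and compare coefficients and fit statistics. The variable names, the proxy construction, and the use of statsmodels are illustrative assumptions, not the study's actual models or data:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 5000
        net_worth = rng.lognormal(11, 1.5, n)                      # hypothetical net worth
        home_equity = 0.4 * net_worth * rng.uniform(0.5, 1.5, n)   # a simpler proxy
        p_poor = 1 / (1 + np.exp(0.8 * (np.log(net_worth) - 11)))  # invented relationship
        poor_health = rng.binomial(1, p_poor)

        for wealth in (net_worth, home_equity):
            X = sm.add_constant(np.log(wealth))
            fit = sm.Logit(poor_health, X).fit(disp=0)
            print(fit.params[1], fit.aic)  # similar coefficients and fit statistics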

  11. Difference of refraction values between standard autorefractometry and Plusoptix.

    PubMed

    Bogdănici, Camelia Margareta; Săndulache, Codrina Maria; Vasiliu, Rodica; Obadă, Otilia

    2016-01-01

    Aim: To compare objective refraction measurements obtained with the Topcon KR-8900 standard autorefractometer and the Plusoptix A09 photorefractometer in children. Material and methods: A prospective transversal study was performed in the Department of Ophthalmology of "Sf. Spiridon" Hospital in Iași on 90 eyes of 45 pediatric patients with a mean age of 8.82 ± 3.52 years, examined with noncycloplegic measurements provided by the Plusoptix A09 and with cycloplegic and noncycloplegic measurements provided by the Topcon KR-8900 standard autorefractometer. The clinical parameters compared were the following: spherical equivalent (SE), spherical and cylindrical values, and cylinder axis. Astigmatism was recorded and evaluated with the cylindrical value in minus notation after transposition. The statistical calculation was performed with paired t-tests and Pearson's correlation analysis. All the data were analyzed with the SPSS statistical package 19 (SPSS for Windows, Chicago, IL). Results: Plusoptix A09 noncycloplegic values were relatively equal between the eyes, with slightly lower values compared to noncycloplegic autorefractometry. Mean (± SD) measurements provided by the Plusoptix A09 were the following: spherical power 1.11 ± 1.52, cylindrical power 0.80 ± 0.80, and spherical equivalent 0.71 ± 1.39. The noncycloplegic autorefractometer mean (± SD) measurements were spherical power 1.12 ± 1.63, cylindrical power 0.79 ± 0.77, and spherical equivalent 0.71 ± 1.58. The cycloplegic autorefractometer mean (± SD) measurements were spherical power 2.08 ± 1.95, cylindrical power 0.82 ± 0.85, and spherical equivalent 1.68 ± 1.87. 32% of the eyes were hyperopic, 2.67% were myopic, 65.33% had astigmatism, and 30% had amblyopia. Conclusions: Noncycloplegic objective refraction values were similar to those determined by autorefractometry. Plusoptix has an important role in ophthalmological screening, but does not detect higher refractive errors, justifying cycloplegic autorefractometry.
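
    A minimal sketch of the paired comparison reported above, on invented spherical-equivalent values; the study's actual measurements are only available as the summary statistics in the abstract:

        import numpy as np
        from scipy import stats

        # Hypothetical spherical equivalents (D) for the same eyes with both devices
        plusoptix = np.array([0.50, 0.75, 1.00, 0.25, 0.75, 1.25])
        autoref   = np.array([0.50, 0.75, 1.25, 0.25, 1.00, 1.25])

        t, p = stats.ttest_rel(plusoptix, autoref)  # paired t-test
        r, _ = stats.pearsonr(plusoptix, autoref)   # Pearson correlation
        print(f"t = {t:.2f}, p = {p:.3f}, r = {r:.2f}")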

  12. Volcano plots in analyzing differential expressions with mRNA microarrays.

    PubMed

    Li, Wentian

    2012-12-01

    A volcano plot displays unstandardized signal (e.g. log-fold-change) against noise-adjusted/standardized signal (e.g. t-statistic or -log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines for the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarray.
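
    A minimal sketch of such a plot on synthetic data, showing the two perpendicular "double filtering" cut-offs mentioned above; the thresholds and data are illustrative:

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)
        log2fc = rng.normal(0, 1, 2000)                  # hypothetical log2 fold changes
        pvals = 10 ** -np.abs(rng.normal(0, 1.5, 2000))  # hypothetical p-values

        plt.scatter(log2fc, -np.log10(pvals), s=4, alpha=0.4)
        plt.axhline(-np.log10(0.05), ls="--")            # significance filter
        for x in (-1, 1):
            plt.axvline(x, ls="--")                      # fold-change filter
        plt.xlabel("log2 fold change (unstandardized signal)")
        plt.ylabel("-log10 p-value (standardized signal)")
        plt.show()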

  13. Standard Clock in primordial density perturbations and cosmic microwave background

    NASA Astrophysics Data System (ADS)

    Chen, Xingang; Namjoo, Mohammad Hossein

    2014-12-01

    Standard Clocks in the primordial epoch leave a special type of feature in the primordial perturbations, which can be used to directly measure the scale factor of the primordial universe as a function of time, a(t), thus discriminating between inflation and alternatives. We have started to search for such signals in the Planck 2013 data using the key predictions of the Standard Clock. In this Letter, we summarize the key predictions of the Standard Clock and present an interesting candidate example in the Planck 2013 data. Motivated by this candidate, we construct and compute full Standard Clock models and use the more complete predictions to make a more extensive comparison with data. Although this candidate is not yet statistically significant, we use it to illustrate how Standard Clocks appear in the Cosmic Microwave Background (CMB) and how they can be further tested by future data. We also use it to motivate more detailed theoretical model building.

  14. Reliability of reference distances used in photogrammetry.

    PubMed

    Aksu, Muge; Kaya, Demet; Kocadereli, Ilken

    2010-07-01

    To determine the reliability of the reference distances used for photogrammetric assessment. The sample consisted of 100 subjects with a mean age of 22.97 ± 2.98 years. Five lateral and four frontal parameters were measured directly on the subjects' faces. For photogrammetric assessment, two reference distances for the profile view and three reference distances for the frontal view were established. Standardized photographs were taken, and all the parameters that had been measured directly on the face were measured on the photographs. The reliability of the reference distances was checked by comparing the direct and indirect values of the parameters obtained from the subjects' faces and photographs. Repeated-measures analysis of variance (ANOVA) and Bland-Altman analyses were used for statistical assessment. For profile measurements, the indirect values measured were statistically different from the direct values, except for Sn-Sto in male subjects and Prn-Sn and Sn-Sto in female subjects. The indirect values of Prn-Sn and Sn-Sto were reliable in both sexes. The poorest results were obtained for the indirect values of the N-Sn parameter in female subjects and the Sn-Me parameter in male subjects, according to the Sa-Sba reference distance. For frontal measurements, the indirect values were statistically different from the direct values in both sexes, except for one in male subjects. The indirect values measured were not statistically different from the direct values for Go-Go. The indirect values of Ch-Ch were reliable in male subjects. The poorest results were obtained according to the P-P reference distance. For profile assessment, the T-Ex reference distance was reliable for Prn-Sn and Sn-Sto in both sexes. For frontal assessment, the Ex-Ex and En-En reference distances were reliable for Ch-Ch in male subjects.
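
    A hedged sketch of the Bland-Altman analysis used above, on invented paired measurements; the bias and 1.96-SD limits of agreement are the standard construction:

        import numpy as np
        import matplotlib.pyplot as plt

        def bland_altman(direct, photo):
            direct, photo = np.asarray(direct, float), np.asarray(photo, float)
            mean = (direct + photo) / 2
            diff = direct - photo
            bias = diff.mean()
            loa = 1.96 * diff.std(ddof=1)  # 95% limits of agreement
            plt.scatter(mean, diff, s=10)
            for y in (bias, bias - loa, bias + loa):
                plt.axhline(y, ls="--")
            plt.xlabel("mean of direct and photographic measurement (mm)")
            plt.ylabel("difference (mm)")
            plt.show()

        bland_altman([31.2, 29.8, 33.0, 30.5],   # hypothetical direct values
                     [30.9, 30.1, 32.4, 30.8])   # hypothetical photo values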

  15. Estimating the Magnitude and Frequency of Floods in Small Urban Streams in South Carolina, 2001

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladimir B.

    2004-01-01

    The magnitude and frequency of floods at 20 streamflow-gaging stations on small, unregulated urban streams in or near South Carolina were estimated by fitting the measured water-year peak flows to a log-Pearson Type III distribution. The period of record (through September 30, 2001) for the measured water-year peak flows ranged from 11 to 25 years, with a mean and median length of 16 years. The drainage areas of the streamflow-gaging stations ranged from 0.18 to 41 square miles. Based on the flood-frequency estimates from the 20 streamflow-gaging stations (13 in South Carolina, 4 in North Carolina, and 3 in Georgia), generalized least-squares regression was used to develop regional regression equations. These equations can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for small urban streams in the Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The most significant explanatory variables from this analysis were main-channel length, percent impervious area, and basin development factor. Mean standard errors of prediction for the regression equations ranged from -25 to 33 percent for the 10-year recurrence-interval flows and from -35 to 54 percent for the 100-year recurrence-interval flows. The U.S. Geological Survey has developed a Geographic Information System application called StreamStats that makes the process of computing streamflow statistics at ungaged sites faster and more consistent than manual methods. This application was developed in the Massachusetts District, and ongoing work is being done in other districts to develop similar applications using streamflow statistics relevant to those respective States. Considering the future possibility of implementing StreamStats in South Carolina, an alternative set of regional regression equations was developed using only main-channel length and impervious area. This was done because no digital coverages are currently available for basin development factor and, therefore, it could not be included in the StreamStats application. The average mean standard error of prediction for the alternative equations was 2 to 5 percent larger than the standard errors for the equations that contained basin development factor. For the urban streamflow-gaging stations in South Carolina, measured water-year peak flows were compared with those from an earlier urban flood-frequency investigation. The peak flows from the earlier investigation were computed using a rainfall-runoff model. At many of the sites, graphical comparisons indicated that the variance of the measured data was much less than the variance of the simulated data. Several statistical tests were applied to compare the variances and the means of the measured and simulated data for each site. The results indicated that the variances were significantly different for 11 of the 13 South Carolina streamflow-gaging stations. For one streamflow-gaging station, the test for normality, which underlies one of the assumptions made when comparing variances, indicated that neither the measured data nor the simulated data were distributed normally; therefore, the test for differences in the variances was not used for that streamflow-gaging station. Another statistical test was used to test for statistically significant differences in the means of the measured and simulated data. The results indicated that for 5 of the 13 urban streamflow-gaging stations in South Carolina there was a statistically significant difference in the means of the two data sets. For comparison purposes, and to test the hypothesis that there may have been climatic differences between the period in which the peak-flow data were measured and the period for which historic rainfall data were used to compute the simulated peak flows, 16 rural streamflow-gaging stations with long-term records were reviewed using techniques similar to those used for the measured and simulated data.
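
    A hedged sketch of the log-Pearson Type III fit described above, using the frequency-factor form on synthetic annual peaks; relying on the station skew alone is a simplification (operational analyses follow Bulletin 17B/17C skew weighting), and all numbers are invented:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)
        peaks = rng.lognormal(mean=6.0, sigma=0.5, size=20)  # synthetic peaks, cfs

        logq = np.log10(peaks)
        m, s, g = logq.mean(), logq.std(ddof=1), stats.skew(logq)
        # Frequency factor K for the 100-year event, from the standardized
        # Pearson Type III distribution with the sample skew.
        K = stats.pearson3.ppf(1 - 1 / 100, g)
        q100 = 10 ** (m + K * s)
        print(f"estimated 100-year flood: {q100:.0f} cfs")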

  16. Adaptive estimation of a time-varying phase with a power-law spectrum via continuous squeezed states

    NASA Astrophysics Data System (ADS)

    Dinani, Hossein T.; Berry, Dominic W.

    2017-06-01

    When measuring a time-varying phase, the standard quantum limit and Heisenberg limit as usually defined, for a constant phase, do not apply. If the phase has Gaussian statistics and a power-law spectrum 1/|ω|^p with p > 1, then the generalized standard quantum limit and Heisenberg limit have recently been found to have scalings of 1/N^((p-1)/p) and 1/N^(2(p-1)/(p+1)), respectively, where N is the mean photon flux. We show that this Heisenberg scaling can be achieved via adaptive measurements on squeezed states. We predict the experimental parameters analytically, and test them with numerical simulations. Previous work had considered the special case of p = 2.

  17. Development and Validation of Instruments to Measure Learning of Expert-Like Thinking

    NASA Astrophysics Data System (ADS)

    Adams, Wendy K.; Wieman, Carl E.

    2011-06-01

    This paper describes the process for creating and validating an assessment test that measures the effectiveness of instruction by probing how well that instruction causes students in a class to think like experts about specific areas of science. The design principles and process are laid out and it is shown how these align with professional standards that have been established for educational and psychological testing and the elements of assessment called for in a recent National Research Council study on assessment. The importance of student interviews for creating and validating the test is emphasized, and the appropriate interview procedures are presented. The relevance and use of standard psychometric statistical tests are discussed. Additionally, techniques for effective test administration are presented.

  18. Cape Canaveral, Florida range reference atmosphere 0-70 km altitude

    NASA Technical Reports Server (NTRS)

    Tingle, A. (Editor)

    1983-01-01

    The RRA contains tabulations for monthly and annual means, standard deviations, and skewness coefficients for wind speed, pressure, temperature, density, water vapor pressure, virtual temperature, and dew-point temperature, and the means and standard deviations for the zonal and meridional wind components and the linear (product moment) correlation coefficient between the wind components. These statistical parameters are tabulated at the station elevation and at 1 km intervals from sea level to 30 km and at 2 km intervals from 30 to 90 km altitude. The wind statistics are given at approximately 10 m above the station elevations and at altitudes with respect to mean sea level thereafter. For those range sites without rocketsonde measurements, the RRAs terminate at 30 km altitude, or they are extended, if required, when rocketsonde data from a nearby launch site are available. There are four sets of tables for each of the 12 monthly reference periods and the annual reference period.

  19. An international marine-atmospheric ²²²Rn measurement intercomparison in Bermuda. Part 2: Results for the participating laboratories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colle, R.; Unterweger, M.P.; Hutchinson, J.M.R.

    1996-01-01

    As part of an international measurement intercomparison of instruments used to measure atmospheric ²²²Rn, four participating laboratories made nearly simultaneous measurements of ²²²Rn activity concentration in commonly sampled ambient air over approximately a 2 week period, and three of these four laboratories participated in the measurement comparison of 14 introduced samples with known, but undisclosed (blind), ²²²Rn activity concentrations. The exercise was conducted in Bermuda in October 1991. The ²²²Rn activity concentrations in ambient Bermudian air over the course of the intercomparison ranged from a few hundredths of a Bq·m⁻³ to about 2 Bq·m⁻³, while the standardized sample additions covered a range from approximately 2.5 Bq·m⁻³ to 35 Bq·m⁻³. The overall uncertainty in the latter concentrations was in the general range of 10%, approximating a 3 standard deviation uncertainty interval. The results of the intercomparison indicated that two of the laboratories were in very good agreement with the standard additions, almost within expected statistical variations. These same two laboratories, however, at lower ambient concentrations, exhibited a systematic difference with an average offset of roughly 0.3 Bq·m⁻³. The third laboratory participating in the measurement of standardized sample additions was systematically low by about 65% to 70% with respect to the standard additions, which was also confirmed in their ambient air concentration measurements. The fourth laboratory, participating in only the ambient measurement part of the intercomparison, was also systematically low, by at least 40%, with respect to the first two laboratories.

  20. Comparison of methods for determination of total oil sands-derived naphthenic acids in water samples.

    PubMed

    Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed

    2017-11-01

    There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared common sample preparation techniques, analytical instrument methods, and analytical standards used to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently gave the highest NA concentrations and the greatest standard error. Total NA concentration did not differ statistically between solid phase extraction and liquid-liquid extraction sample preparations. Calibration standards influenced quantitation results. This work provides a comprehensive understanding of the inherent differences among the various techniques available to measure NAs, and hence of the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Vertical bone measurements from cone beam computed tomography images using different software packages.

    PubMed

    Vasconcelos, Taruska Ventorini; Neves, Frederico Sampaio; Moraes, Lívia Almeida Bueno; Freitas, Deborah Queiroz

    2015-01-01

    This article aimed at comparing the accuracy of the linear measurement tools of different commercial software packages. Eight fully edentulous dry mandibles were selected for this study. Incisor, canine, premolar, first molar, and second molar regions were selected. Cone beam computed tomography (CBCT) images were obtained with the i-CAT Next Generation. Linear bone measurements were performed by one observer on the cross-sectional images using three different software packages: XoranCat®, OnDemand3D®, and KDIS3D®, all able to assess DICOM images. In addition, 25% of the sample was reevaluated for the purpose of reproducibility. The mandibles were sectioned to obtain the gold standard for each region. Intraclass correlation coefficients (ICC) were calculated to examine the agreement between the two periods of evaluation; one-way analysis of variance with the post-hoc Dunnett test was used to compare each of the software-derived measurements with the gold standard. The ICC values were excellent for all software packages. The smallest differences between the software-derived measurements and the gold standard were obtained with OnDemand3D and KDIS3D (-0.11 and -0.14 mm, respectively), and the greatest with XoranCat (+0.25 mm). However, there was no statistically significant difference between the measurements obtained with the different software packages and the gold standard (p > 0.05). In conclusion, linear bone measurements were not influenced by the software package used to reconstruct the image from CBCT DICOM data.
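
    A hedged sketch of the many-to-one comparison against a gold standard, on invented measurements; it uses scipy.stats.dunnett, which requires SciPy 1.11 or later, and the data and group labels are illustrative:

        import numpy as np
        from scipy import stats

        gold     = np.array([10.2, 11.5, 9.8, 12.0, 10.9])  # physical sections, mm
        xoran    = np.array([10.5, 11.7, 10.0, 12.3, 11.1])
        ondemand = np.array([10.1, 11.4, 9.7, 11.9, 10.8])

        # Dunnett's test compares each software group against the control group.
        res = stats.dunnett(xoran, ondemand, control=gold)
        print(res.pvalue)  # one adjusted p-value per software package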

  2. Statistical study of air pollutant concentrations via generalized gamma distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marani, A.; Lavagnini, I.; Buttazzoni, C.

    1986-11-01

    This paper deals with modeling the observed frequency distributions of air quality data measured in the area of Venice, Italy. The paper discusses the application of the generalized gamma distribution (ggd), which has not been commonly applied to air quality data, notwithstanding the fact that it embodies most distribution models used for air quality analyses. The approach yields important simplifications for statistical analyses. A comparison among the ggd and other relevant models (standard gamma, Weibull, lognormal), carried out on daily sulfur dioxide concentrations in the area of Venice, underlines the efficiency of ggd models in portraying experimental data.
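
    A hedged sketch of such a model comparison on synthetic daily concentrations, ranking candidate distributions by AIC; the data and the zero-location constraint are illustrative assumptions:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(3)
        so2 = rng.gamma(2.0, 20.0, 365)  # synthetic daily SO2 concentrations

        fits = {}
        for name, dist in [("gengamma", stats.gengamma), ("gamma", stats.gamma),
                           ("weibull", stats.weibull_min), ("lognorm", stats.lognorm)]:
            params = dist.fit(so2, floc=0)        # fix the location at zero
            k = len(params) - 1                   # fixed location is not estimated
            ll = dist.logpdf(so2, *params).sum()
            fits[name] = 2 * k - 2 * ll           # AIC: lower is better
        print(fits)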

  3. Generational changes in the growth of children from Maribor and Slovenia.

    PubMed

    Bigec, Martin

    2013-05-01

    Among the numerous factors which influence a child's growth and development are factors of a changing socio-economic environment and lifestyle. Our aim was to evaluate these changes and contribute to preventive measures and the evaluation of a child's growth in pediatric practice. We therefore decided to assess the state of body growth of five- and six-year-old children of both genders in two generations of children from Maribor, establish secular changes, and define standards. On a representative sample (by gender and age) of 1461 children from Maribor measured in 1996 and a sample of 608 children from Maribor measured in 1966, 28 body features were studied and compared in each population unit. Variables were statistically and epidemiologically assessed, and the results were controlled by a test. The following anthropometric differences were significant: in 5-year-old boys, the measures that are statistically higher in the 1996 generation than in 1966 are foot length, head length, upper arm skinfold, subscapular skinfold, arm length, arm diameter, upper thigh skinfold, stature (length), suprailiac skinfold, and body weight. Decreased measures are: abdomen circumference, knee circumference, sitting height, elbow circumference, biacromial diameter, and face height. In 6-year-old boys, additional features have increased in comparison with 1966: sternal height, thigh circumference, hip width, and chest circumference; the following measures have decreased: face height and head circumference. In 5-year-old girls, the increased measures in comparison with the 1966 generation are: lower leg length, head length, ankle circumference, upper arm skinfold, body weight, biiliac diameter, body height, subscapular skinfold, chest circumference, hip circumference, sternal height, and suprailiac skinfold; decreased measures are: head circumference, elbow circumference, face circumference, shoulder width, and sitting height. In 6-year-old girls, additional measures are increased: wrist circumference, arm length, and chest circumference. The changing trends show a tendency towards a decrease or increase of most body measurements. In everyday practice the most commonly used measurements are body mass, head circumference, and body length in babies, and body height in pre-school children. Our measurements proved, with a p-value of 0.001, that the measurements of children in 1966, also shown in diagrams, are significantly different from the measurements in 1996. In the second part of this paper we present a part of the anthropometric measurement study carried out for the standardization of the DENVER II developmental screening test. A total of 1596 healthy Slovene children between zero and six and a half years of age were included in the observation. The children come from Maribor, Koper, Velenje, and Ljubljana. We used Cameron's measurement and statistical method. Diagrams were made for the following body measures: body mass, body height, head circumference, upper arm circumference, thigh circumference, and body mass index. A comparative analysis with the Euro-Growth study showed that our results correspond with the European standards. Therefore, our results are suggested to be applied in everyday pediatric practice.

  4. Modeling the Test-Retest Statistics of a Localization Experiment in the Full Horizontal Plane.

    PubMed

    Morsnowski, André; Maune, Steffen

    2016-10-01

    Two approaches to modeling the test-retest statistics of a localization experiment, one based on the Gaussian distribution and one on surrogate data, are introduced. Their efficiency is investigated using different measures describing directional hearing ability. A localization experiment in the full horizontal plane is a challenging task for hearing impaired patients. In clinical routine, we use this experiment to evaluate the progress of our cochlear implant (CI) recipients. Listening and time effort limit the reproducibility. The localization experiment consists of a circle of 12 loudspeakers placed in an anechoic room, a "camera silens". In darkness, HSM sentences are presented at 65 dB pseudo-randomly from all 12 directions, with five repetitions. This experiment is modeled by a set of Gaussian distributions with different standard deviations added to a perfect estimator, as well as by surrogate data. Five repetitions per direction are used to produce surrogate-data distributions for the sensation directions. To investigate the statistics, we retrospectively use the data of 33 CI patients with 92 pairs of test-retest measurements from the same day. The first model does not take inversions into account (i.e., permutations of the direction from back to front and vice versa are not considered), although they are common for hearing impaired persons, particularly in the rear hemisphere. The second model considers these inversions but does not work with all measures. The introduced models successfully describe the test-retest statistics of directional hearing. However, since their application to the investigated measures performs differently, no general recommendation can be provided. The presented test-retest statistics enable pair-test comparisons for localization experiments.

  5. High intensity click statistics from a 10 × 10 avalanche photodiode array

    NASA Astrophysics Data System (ADS)

    Kröger, Johannes; Ahrens, Thomas; Sperling, Jan; Vogel, Werner; Stolz, Heinrich; Hage, Boris

    2017-11-01

    Photon-number measurements are a fundamental technique for the discrimination and characterization of quantum states of light. Going beyond the abilities of state-of-the-art devices, we present measurements with an array of 100 avalanche photodiodes exposed to photon numbers ranging from well below to significantly above one photon per diode. Although each single diode only discriminates between zero and non-zero photon numbers, we were able to extract a second-order moment, which acts as a nonclassicality indicator. We demonstrate a vast enhancement of the applicable intensity range, by two orders of magnitude, relative to the standard application of such devices. It turns out that the probabilistic mapping of arbitrary photon numbers onto a finite number of registered clicks is not per se a disadvantage compared with true photon counters. Such detector arrays can bridge the gap between single-photon and linear detection by investigation of the click statistics, without the necessity of photon-statistics reconstruction.
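
    A hedged sketch of one such click-statistics indicator: the binomial Q parameter of Sperling, Vogel, and Agarwal, for which Q < 0 signals sub-binomial (nonclassical) click statistics. The frame data are invented, and this is offered as context rather than the paper's exact estimator:

        import numpy as np

        def click_q(clicks, n_diodes=100):
            # Binomial Q parameter: Q = N * Var(c) / (<c> * (N - <c>)) - 1,
            # where c is the number of diodes that fired in each frame.
            c = np.asarray(clicks, float)
            mean, var = c.mean(), c.var(ddof=1)
            return n_diodes * var / (mean * (n_diodes - mean)) - 1.0

        frames = np.random.default_rng(2).binomial(100, 0.3, 10_000)  # coherent-like
        print(click_q(frames))  # ~ 0 for binomial (classical) click statistics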

  6. Generalized energy measurements and modified transient quantum fluctuation theorems

    NASA Astrophysics Data System (ADS)

    Watanabe, Gentaro; Venkatesh, B. Prasanna; Talkner, Peter

    2014-05-01

    Determining the work which is supplied to a system by an external agent provides a crucial step in any experimental realization of transient fluctuation relations. This, however, poses a problem for quantum systems, where the standard procedure requires the projective measurement of energy at the beginning and the end of the protocol. Unfortunately, projective measurements, which are preferable from the point of view of theory, seem to be difficult to implement experimentally. We demonstrate that, when using a particular type of generalized energy measurements, the resulting work statistics is simply related to that of projective measurements. This relation between the two work statistics entails the existence of modified transient fluctuation relations. The modifications are exclusively determined by the errors incurred in the generalized energy measurements. They are universal in the sense that they do not depend on the force protocol. Particularly simple expressions for the modified Crooks relation and Jarzynski equality are found for Gaussian energy measurements. These can be obtained by a sequence of sufficiently many generalized measurements which need not be Gaussian. In accordance with the central limit theorem, this leads to an effective error reduction in the individual measurements and even yields a projective measurement in the limit of infinite repetitions.
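
    As context for the modified relations discussed above, a minimal numerical check of the unmodified Jarzynski equality, <exp(-βW)> = exp(-βΔF), for Gaussian work statistics, which satisfy it exactly when σ² = 2(<W> - ΔF)/β; all parameter values are illustrative, and this is not the paper's modified relation:

        import numpy as np

        rng = np.random.default_rng(7)
        beta, dF, mean_w = 1.0, 1.0, 2.0
        sigma = np.sqrt(2 * (mean_w - dF) / beta)  # variance required by Jarzynski
        w = rng.normal(mean_w, sigma, 200_000)     # Gaussian work samples
        print(-np.log(np.mean(np.exp(-beta * w))) / beta)  # ~ dF = 1.0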

  7. Knee Images Digital Analysis (KIDA): a novel method to quantify individual radiographic features of knee osteoarthritis in detail.

    PubMed

    Marijnissen, A C A; Vincken, K L; Vos, P A J M; Saris, D B F; Viergever, M A; Bijlsma, J W J; Bartels, L W; Lafeber, F P J G

    2008-02-01

    Radiography is still the gold standard for imaging features of osteoarthritis (OA), such as joint space narrowing, subchondral sclerosis, and osteophyte formation. Objective assessment, however, remains difficult. The goal of the present study was to evaluate a novel digital method to analyse standard knee radiographs. Standardized radiographs of 20 healthy and 55 OA knees were taken in general practice according to the semi-flexed method of Buckland-Wright. Joint Space Width (JSW), osteophyte area, subchondral bone density, joint angle, and tibial eminence height were measured as continuous variables using newly developed Knee Images Digital Analysis (KIDA) software on a standard PC. Two observers evaluated the radiographs twice, each on two different occasions. The observers were blinded to the source of the radiographs and to their previous measurements. Statistical analysis to compare measurements within and between observers was performed according to Bland and Altman. Correlations between KIDA data and Kellgren & Lawrence (K&L) grade were calculated, and data of healthy knees were compared to those of OA knees. Intra- and inter-observer variations for measurement of JSW, subchondral bone density, osteophytes, tibial eminence, and joint angle were small. Significant correlations were found between KIDA parameters and K&L grade. Furthermore, significant differences were found between healthy and OA knees. In addition to JSW measurement, objective evaluation of osteophyte formation and subchondral bone density is possible on standard radiographs. The measured differences between OA and healthy individuals suggest that KIDA allows detection of changes in time, although sensitivity to change has to be demonstrated in long-term follow-up studies.

  8. Value assignment and uncertainty evaluation for single-element reference solutions

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Bodnar, Olha; Butler, Therese A.; Molloy, John L.; Winchester, Michael R.

    2018-06-01

    A Bayesian statistical procedure is proposed for value assignment and uncertainty evaluation for the mass fraction of the elemental analytes in single-element solutions distributed as NIST standard reference materials. The principal novelty that we describe is the use of information about relative differences observed historically between the measured values obtained via gravimetry and via high-performance inductively coupled plasma optical emission spectrometry, to quantify the uncertainty component attributable to between-method differences. This information is encapsulated in a prior probability distribution for the between-method uncertainty component, and it is then used, together with the information provided by current measurement data, to produce a probability distribution for the value of the measurand from which an estimate and evaluation of uncertainty are extracted using established statistical procedures.

  9. The characterization and certification of a quantitative reference material for Legionella detection and quantification by qPCR.

    PubMed

    Baume, M; Garrelly, L; Facon, J P; Bouton, S; Fraisse, P O; Yardin, C; Reyrolle, M; Jarraud, S

    2013-06-01

    The characterization and certification of a Legionella DNA quantitative reference material as a primary measurement standard for Legionella qPCR. Twelve laboratories participated in a collaborative certification campaign. A candidate reference DNA material was analysed through PCR-based limiting dilution assays (LDAs). The validated data were used to statistically assign both a reference value and an associated uncertainty to the reference material. This LDA method allowed for the direct quantification of the amount of Legionella DNA per tube in genomic units (GU) and the determination of the associated uncertainties. This method could be used for the certification of all types of microbiological standards for qPCR. The use of this primary standard will improve the accuracy of Legionella qPCR measurements and the overall consistency of these measurements among different laboratories. The extensive use of this certified reference material (CRM) has been integrated into the French standard NF T90-471 (April 2010) and into ISO Technical Specification 12869 (2012) for validating qPCR methods and ensuring the reliability of these methods. © 2013 The Society for Applied Microbiology.
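
    A hedged sketch of the single-hit Poisson model that underlies limiting dilution assays: if each tube receives a Poisson-distributed number of genomic units, the fraction of negative tubes determines the mean GU per tube. The counts are hypothetical, and the paper's full procedure also propagates uncertainties across dilutions:

        import numpy as np

        def gu_per_tube(n_negative, n_total):
            # Zero-class Poisson estimate: P(negative) = exp(-lambda),
            # so lambda = -ln(fraction negative).
            f0 = n_negative / n_total
            return -np.log(f0)

        print(gu_per_tube(12, 48))  # hypothetical: 12 of 48 replicate PCRs negative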

  10. Anisotropies of gravitational-wave standard sirens as a new cosmological probe without redshift information

    NASA Astrophysics Data System (ADS)

    Nishizawa, Atsushi; Namikawa, Toshiya; Taruya, Atsushi

    2016-03-01

    Gravitational waves (GWs) from compact binary stars at cosmological distances are promising and powerful cosmological probes, referred to as the GW standard sirens. With future GW detectors, we will be able to precisely measure source luminosity distances out to a redshift z ~ 5. To extract cosmological information, previous studies using the GW standard sirens rely on source redshift information obtained through an extensive electromagnetic follow-up campaign. However, the redshift identification is typically time-consuming and rather challenging. Here we propose a novel method for cosmology with the GW standard sirens that is free from redshift measurements. Utilizing the anisotropies of the number density and luminosity distances of compact binaries originating from the large-scale structure, we show that (i) these anisotropies can be measured even at very high redshifts (z = 2), (ii) the expected constraints on the primordial non-Gaussianity with the Einstein Telescope would be comparable to or even better than those of other large-scale structure probes at the same epoch, and (iii) the cross-correlation with other cosmological observations is found to have high statistical significance. A.N. was supported by JSPS Postdoctoral Fellowships for Research Abroad No. 25-180.

  11. [Effect of 2 methods of occlusion adjustment on occlusal balance and muscles of mastication in patient with implant restoration].

    PubMed

    Wang, Rong; Xu, Xin

    2015-12-01

    To compare the effect of 2 methods of occlusion adjustment on occlusal balance and the muscles of mastication in patients with dental implant restorations. Twenty patients, each with a single posterior edentulous space with no distal dentition, were selected and divided into 2 groups. Patients in group A underwent the original occlusion adjustment method, and patients in group B underwent the occlusal plane reduction technique. Ankylos implants were implanted in the edentulous space in each patient and restored with a single-unit fixed prosthodontic crown. Occlusion was adjusted in each restoration accordingly. Electromyograms were conducted to determine the effect of the adjustment methods on occlusion and the muscles of mastication 3 months and 6 months after initial restoration and adjustment. Data were collected and measurements for balanced-occlusion measuring standards were obtained, including central occlusion force (COF) and the asymmetry index of molar occlusal force (AMOF). Balanced muscles-of-mastication measuring standards were also obtained, including electromyogram measurements for the muscles of mastication and the anterior bundle of the temporalis muscle at the mandibular rest position, average electromyogram measurements of the anterior bundle of the temporalis muscle at the intercuspal position (ICP), Astot, the masseter muscle asymmetry index, and the anterior temporalis asymmetry index (ASTA). Statistical analysis was performed using Student's t test with the SPSS 18.0 software package. Three months after occlusion adjustment, parameters were significantly different between group A and group B in both the balanced-occlusion measuring standards and the balanced muscles-of-mastication measuring standards. Six months after occlusion adjustment, parameters were significantly different between group A and group B in the balanced muscles-of-mastication measuring standards, but there was no significant difference in the balanced-occlusion measuring standards. Using the occlusal plane reduction adjustment technique, it is possible to obtain an occlusion index and a muscles-of-mastication electromyogram index similar to those of the opposite side's natural dentition in patients with a single-unit fixed prosthodontic crown and a single posterior edentulous space without distal dentition.

  12. [Statistical process control applied to intensity modulated radiotherapy pretreatment controls with portal dosimetry].

    PubMed

    Villani, N; Gérard, K; Marchesi, V; Huger, S; François, P; Noël, A

    2010-06-01

    The first purpose of this study was to illustrate the contribution of statistical process control to better security in intensity modulated radiotherapy (IMRT) treatments. This improvement is possible by controlling the dose delivery process, characterized by pretreatment quality control results. It was therefore necessary to put portal dosimetry measurements under statistical process control (the ionisation chamber measurements were already monitored with statistical process control tools). The second objective was to state whether it is possible to substitute the ionisation chamber with portal dosimetry in order to optimize the time devoted to pretreatment quality control. At the Alexis-Vautrin center, pretreatment quality controls in IMRT for prostate and head-and-neck treatments were performed for each beam of each patient. These controls were made with an ionisation chamber, which is the reference detector for absolute dose measurement, and with portal dosimetry for the verification of the dose distribution. Statistical process control is a statistical analysis method, originating in industry, used to control and improve the quality of the studied process. It uses graphical tools, such as control charts, to follow up the process and warn the operator in case of failure, and quantitative tools to evaluate the ability of the process to respect guidelines: the capability study. The study was performed on 450 head-and-neck beams and on 100 prostate beams. Control charts of the mean and standard deviation were established and revealed drifts, both slow and weak and strong and fast, including a special cause that had been introduced (a manual shift of the leaf gap of the multileaf collimator). The correlation between the dose measured at one point with the EPID and with the ionisation chamber was evaluated at more than 97%, and cases of disagreement between the two measurements were identified. The study demonstrated the feasibility of reducing the time devoted to pretreatment controls by substituting the ionisation chamber measurements with those performed with the EPID, and showed that statistical process control monitoring of the data brought a guarantee of security. 2010 Société française de radiothérapie oncologique (SFRO). Published by Elsevier SAS. All rights reserved.
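
    A hedged sketch of the Shewhart-style control charting described above, applied to invented EPID-versus-chamber dose deviations; the individuals-chart construction with the moving-range sigma estimate is standard SPC practice, not the paper's exact setup:

        import numpy as np

        def xchart_limits(x):
            # Individuals (X) chart: centerline +/- 3 sigma, with sigma estimated
            # from the average moving range (d2 = 1.128 for subgroups of size 2).
            x = np.asarray(x, float)
            sigma = np.abs(np.diff(x)).mean() / 1.128
            c = x.mean()
            return c - 3 * sigma, c, c + 3 * sigma

        rng = np.random.default_rng(4)
        baseline = rng.normal(0.0, 0.8, 30)  # hypothetical dose deviations, %
        lcl, center, ucl = xchart_limits(baseline)

        new_beams = rng.normal(0.0, 0.8, 10)
        flagged = np.where((new_beams < lcl) | (new_beams > ucl))[0]
        print(flagged)  # beams signalling a possible special cause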

  13. True external diameter better predicts hemodynamic performance of bioprosthetic aortic valves than the manufacturers' stated size.

    PubMed

    Cevasco, Marisa; Mick, Stephanie L; Kwon, Michael; Lee, Lawrence S; Chen, Edward P; Chen, Frederick Y

    2013-05-01

    Currently, there is no universal standard for sizing bioprosthetic aortic valves. Hence, a standardized comparison was performed to clarify this issue. Every size of four commercially available bioprosthetic aortic valves marketed in the United States (Biocor Supra; Mosaic Ultra; Magna Ease; Mitroflow) was obtained. Subsequently, custom sizers were created that were accurate to 0.0025 mm to represent aortic roots 18 mm through 32 mm, and these were used to measure the external diameter of each valve. Using the effective orifice area (EOA) and transvalvular pressure gradient (TPG) data submitted to the FDA, a comparison was made between the hemodynamic properties of valves with equivalent manufacturer stated sizes and valves with equivalent measured external diameters. Based on manufacturer size alone, the valves at first seemed to be hemodynamically different from each other, with Mitroflow valves appearing to be hemodynamically superior, having a large EOA and equivalent or superior TPG (p < 0.05). However, Mitroflow valves had a larger measured external diameter than the other valves of a given numerical manufacturer size. Valves with equivalent external diameters were then compared, regardless of the stated manufacturer sizes. For truly equivalently sized valves (i.e., by measured external diameter) there was no clear hemodynamic difference. There was no statistical difference in the EOAs between the Biocor Supra, Mosaic Ultra, and Mitroflow valves, and the Magna Ease valve had a statistically smaller EOA (p < 0.05). On comparing the mean TPG, the Biocor Supra and Mitroflow valves had statistically equivalent gradients to each other, as did the Mosaic Ultra and Magna Ease valves. When comparing valves of the same numerical manufacturer size, there appears to be a difference in hemodynamic performance across different manufacturers' valves according to FDA data. However, comparing equivalently measured valves eliminates the differences between valves produced by different manufacturers.

  14. Lead exposure in US worksites: A literature review and development of an occupational lead exposure database from the published literature

    PubMed Central

    Koh, Dong-Hee; Locke, Sarah J.; Chen, Yu-Cheng; Purdue, Mark P.; Friesen, Melissa C.

    2016-01-01

    Background Retrospective exposure assessment of occupational lead exposure in population-based studies requires historical exposure information from many occupations and industries. Methods We reviewed published US exposure monitoring studies to identify lead exposure measurement data. We developed an occupational lead exposure database from the 175 identified papers containing 1,111 sets of lead concentration summary statistics (21% area air, 47% personal air, 32% blood). We also extracted ancillary exposure-related information, including job, industry, task/location, year collected, sampling strategy, control measures in place, and sampling and analytical methods. Results Measurements were published between 1940 and 2010 and represented 27 2-digit standardized industry classification codes. The majority of the measurements were related to lead-based paint work, joining or cutting metal using heat, primary and secondary metal manufacturing, and lead acid battery manufacturing. Conclusions This database can be used in future statistical analyses to characterize differences in lead exposure across time, jobs, and industries. PMID:25968240

  15. A round-robin gamma stereotactic radiosurgery dosimetry interinstitution comparison of calibration protocols

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drzymala, R. E., E-mail: drzymala@wustl.edu; Alvarez, P. E.; Bednarz, G.

    2015-11-15

    Purpose: Absorbed dose calibration for gamma stereotactic radiosurgery is challenging due to the unique geometric conditions, dosimetry characteristics, and nonstandard field size of these devices. Members of the American Association of Physicists in Medicine (AAPM) Task Group 178 on Gamma Stereotactic Radiosurgery Dosimetry and Quality Assurance have participated in a round-robin exchange of calibrated measurement instrumentation and phantoms exploring two approved and two proposed calibration protocols or formalisms on ten gamma radiosurgery units. The objectives of this study were to benchmark and compare new formalisms to existing calibration methods, while maintaining traceability to U.S. primary dosimetry calibration laboratory standards. Methods: Nine institutions made measurements using ten gamma stereotactic radiosurgery units in three different 160 mm diameter spherical phantoms [acrylonitrile butadiene styrene (ABS) plastic, Solid Water, and liquid water] and in air using a positioning jig. Two calibrated miniature ionization chambers and one calibrated electrometer were circulated for all measurements. Reference dose-rates at the phantom center were determined using the well-established AAPM TG-21 or TG-51 dose calibration protocols and using two proposed dose calibration protocols/formalisms: an in-air protocol and a formalism proposed by the International Atomic Energy Agency (IAEA) working group for small and nonstandard radiation fields. Each institution’s results were normalized to the dose-rate determined at that institution using the TG-21 protocol in the ABS phantom. Results: Percentages of dose-rates within 1.5% of the reference dose-rate (TG-21 + ABS phantom) for the eight chamber-protocol-phantom combinations were the following: 88% for TG-21, 70% for TG-51, 93% for the new IAEA nonstandard-field formalism, and 65% for the new in-air protocol. Averages and standard deviations for dose-rates over all measurements relative to the TG-21 + ABS dose-rate were 0.999 ± 0.009 (TG-21), 0.991 ± 0.013 (TG-51), 1.000 ± 0.009 (IAEA), and 1.009 ± 0.012 (in-air). There were no statistically significant differences (i.e., p > 0.05) between the two ionization chambers for the TG-21 protocol applied to all dosimetry phantoms. The mean results using the TG-51 protocol were notably lower than those for the other dosimetry protocols, with a standard deviation 2–3 times larger. The in-air protocol was not statistically different from TG-21 for the A16 chamber in the liquid water or ABS phantoms (p = 0.300 and p = 0.135) but was statistically different from TG-21 for the PTW chamber in all phantoms (p = 0.006 for Solid Water, 0.014 for liquid water, and 0.020 for ABS). Results of IAEA formalism were statistically different from TG-21 results only for the combination of the A16 chamber with the liquid water phantom (p = 0.017). In the latter case, dose-rates measured with the two protocols differed by only 0.4%. For other phantom-ionization-chamber combinations, the new IAEA formalism was not statistically different from TG-21. Conclusions: Although further investigation is needed to validate the new protocols for other ionization chambers, these results can serve as a reference to quantitatively compare different calibration protocols and ionization chambers if a particular method is chosen by a professional society to serve as a standardized calibration protocol.
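
    A minimal sketch of the normalize-then-compare logic used in such round-robin analyses (all dose-rate values below are invented for illustration; this is not the task group's analysis code):

```python
import numpy as np
from scipy import stats

# Hypothetical numbers: dose-rates (Gy/min) measured at one institution with
# two protocols; each value is normalized to the TG-21 result in the ABS
# phantom, as in the round-robin, before the protocols are compared.
tg21_abs = 2.97                                   # reference dose-rate (assumed)
tg21 = np.array([2.97, 2.95, 2.99, 2.96, 2.98])   # repeated TG-21 measurements
iaea = np.array([2.98, 2.96, 2.98, 2.97, 2.99])   # same setups, IAEA formalism

ratio_tg21 = tg21 / tg21_abs
ratio_iaea = iaea / tg21_abs

# Paired t-test on normalized dose-rates (p > 0.05 -> no significant difference)
t, p = stats.ttest_rel(ratio_tg21, ratio_iaea)
print(f"mean ratios: {ratio_tg21.mean():.4f} vs {ratio_iaea.mean():.4f}, p = {p:.3f}")
```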

  16. Evaluation of maxillary anterior teeth and their relation to the golden proportion in Malaysian population.

    PubMed

    Al-Marzok, Maan Ibrahim; Majeed, Kais Raad Abdul; Ibrahim, Ibrahim Khalil

    2013-01-24

    The maxillary anterior teeth are important in achieving pleasing dental aesthetics. Various methods are used to measure their size and form, including the golden proportion between their perceived widths and the width-to-height ratio referred to as the golden standard. This study was conducted to evaluate whether consistent relationships exist between the width and height of the clinical crown dimensions, and to investigate the occurrence of the golden proportion in the maxillary anterior teeth. In this cross-sectional study, dental casts of the maxillary arches were made from MAHSA University College students who met the inclusion criteria. The 49 participants represented the main ethnic groups of the Malaysian population. The dimensions of the anterior teeth and the perceived widths of the anterior teeth viewed from the front were measured using a digital caliper. Comparison of the perceived width ratios of lateral to central incisor and canine to lateral incisor with the golden proportion of 0.618 revealed a statistically significant difference (p < 0.05). The difference was also significant for the width-to-height ratio of the central incisors compared with the golden standard of 80%. There was no significant difference among ethnic groups for either the golden proportion or the golden standard. The golden proportion was not found to exist between the perceived widths of the maxillary anterior teeth, and no golden standard was detected for the width-to-height proportions of the maxillary incisors. Specific population characteristics and perceptions of beauty must be considered; however, ethnicity has no association with the proportions of the maxillary anterior teeth.

  17. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are only indirectly related to the underlying unknown dynamic state, and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems is usually their high dimensionality, which renders standard statistical methods computationally intractable. This paper develops a new high-dimensional Monte Carlo approach, the localized ensemble Kalman smoother, and addresses its theoretical convergence.
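
    The analysis step that ensemble Kalman methods build on can be sketched in a few lines. This is a generic stochastic ensemble Kalman update on toy data, not the paper's localized smoother:

```python
import numpy as np

# Minimal stochastic ensemble Kalman analysis update. State dimension n,
# ensemble size N, observation dimension m.
def enkf_update(X, H, y, R, rng):
    """X: (n, N) forecast ensemble; H: (m, n) observation operator;
    y: (m,) observation; R: (m, m) observation-error covariance."""
    n, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                    # ensemble anomalies
    P = A @ A.T / (N - 1)                         # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    # Perturbed observations, one per ensemble member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 40))        # toy forecast ensemble
H = np.eye(3, 10)                    # observe the first 3 state components
y = np.array([0.5, -0.2, 0.1])
R = 0.1 * np.eye(3)
Xa = enkf_update(X, H, y, R, rng)
print("posterior mean (first 3):", Xa.mean(axis=1)[:3])
```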

  18. Do regional methods really help reduce uncertainties in flood frequency analyses?

    NASA Astrophysics Data System (ADS)

    Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric

    2013-04-01

    Flood frequency analyses are often based on continuous measured series at gauge sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the large increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis incorporating the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distributions, number of sites and records) to evaluate the extent to which the results obtained in these case studies can be generalized. The two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, the results show that exploiting information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
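
    The Monte Carlo idea can be illustrated with a toy experiment (all parameter values are invented; scipy's GEV distribution stands in for whichever distributions the authors used): draw many synthetic discharge series from a known distribution, refit each, and look at the spread of the estimated 100-year flood.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
shape, loc, scale = -0.2, 100.0, 30.0   # "true" GEV parameters (assumed)
true_q100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc, scale)

n_years, n_sim = 30, 500                # short local record, many replicates
q100_hat = []
for _ in range(n_sim):
    sample = stats.genextreme.rvs(shape, loc, scale, size=n_years,
                                  random_state=rng)
    c, l, s = stats.genextreme.fit(sample)           # refit each replicate
    q100_hat.append(stats.genextreme.ppf(1 - 1 / 100, c, l, s))

q100_hat = np.array(q100_hat)
print(f"true Q100 = {true_q100:.0f}, median estimate = {np.median(q100_hat):.0f}, "
      f"90% range = [{np.percentile(q100_hat, 5):.0f}, "
      f"{np.percentile(q100_hat, 95):.0f}]")
```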

  19. Comparison of methodologies for calculating quality measures based on administrative data versus clinical data from an electronic health record system: implications for performance measures.

    PubMed

    Tang, Paul C; Ralston, Mary; Arrigotti, Michelle Fernandez; Qureshi, Lubna; Graham, Justin

    2007-01-01

    New reimbursement policies and pay-for-performance programs to reward providers for producing better outcomes are proliferating. Although electronic health record (EHR) systems could provide essential clinical data upon which to base quality measures, most metrics in use were derived from administrative claims data. We compared commonly used quality measures calculated from administrative data to those derived from clinical data in an EHR based on a random sample of 125 charts of Medicare patients with diabetes. Using standard definitions based on administrative data (which require two visits with an encounter diagnosis of diabetes during the measurement period), only 75% of diabetics determined by manually reviewing the EHR (the gold standard) were identified. In contrast, 97% of diabetics were identified using coded information in the EHR. The discrepancies in identified patients resulted in statistically significant differences in the quality measures for frequency of HbA1c testing, control of blood pressure, frequency of testing for urine protein, and frequency of eye exams for diabetic patients. New development of standardized quality measures should shift from claims-based measures to clinically based measures that can be derived from coded information in an EHR. Using data from EHRs will also leverage their clinical content without adding burden to the care process.
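
    A sketch of the claims-based denominator rule described above (the records and column names are fabricated for illustration): a patient counts as diabetic only with at least two visits carrying a diabetes encounter diagnosis during the measurement period.

```python
import pandas as pd

visits = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3, 3, 4],
    "visit_date": pd.to_datetime([
        "2006-01-10", "2006-06-02", "2006-03-15",
        "2006-02-01", "2006-02-20", "2006-11-05", "2006-07-07"]),
    "diabetes_dx": [True, True, True, False, True, True, False],
})

period = visits["visit_date"].between("2006-01-01", "2006-12-31")
dx_visits = visits[period & visits["diabetes_dx"]]
denominator = (dx_visits.groupby("patient_id").size() >= 2)
print("claims-identified diabetics:", list(denominator[denominator].index))
# -> [1, 3]; patient 2 has only one coded visit and is missed, illustrating
#    how the administrative rule can undercount relative to EHR review.
```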

  20. Open-source platform to benchmark fingerprints for ligand-based virtual screening

    PubMed Central

    2013-01-01

    Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets used and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
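
    The core operation such a platform loops over can be sketched with RDKit; Morgan bit-vector fingerprints stand in here for the 12 fingerprints benchmarked in the study, and the molecules are trivial examples:

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Compute 2D fingerprints and rank molecules by Tanimoto similarity to a query.
smiles = ["CCO", "CCN", "c1ccccc1O", "c1ccccc1N"]
mols = [Chem.MolFromSmiles(s) for s in smiles]
fps = [AllChem.GetMorganFingerprintAsBitVect(m, radius=2, nBits=2048)
       for m in mols]

query = fps[2]  # phenol as the query "active"
scores = [(s, DataStructs.TanimotoSimilarity(query, fp))
          for s, fp in zip(smiles, fps)]
for s, sc in sorted(scores, key=lambda t: -t[1]):
    print(f"{s:12s} {sc:.3f}")
```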

  1. Earth Versus Neutrinos: Measuring the Total Muon-Neutrino-to-Nucleon Cross Section at Ultra-High Energies through Differential Earth Absorption of Muon Neutrinos from Cosmic Rays Using the IceCube Detector

    NASA Astrophysics Data System (ADS)

    Miarecki, Sandra Christine

    The IceCube Neutrino Detector at the South Pole was constructed to measure the flux of high-energy neutrinos and to try to identify their cosmic sources. In addition to these astrophysical neutrinos, IceCube also detects the neutrinos that result from cosmic ray interactions with the atmosphere. These atmospheric neutrinos can be used to measure the total muon neutrino-to-nucleon cross section by measuring neutrino absorption in the Earth. The measurement involves isolating a sample of 10,784 Earth-transiting muons detected by IceCube in its 79-string configuration. The cross-section is determined using a two-dimensional fit in measured muon energy and zenith angle and is presented as a multiple of the Standard Model expectation as calculated by Cooper-Sarkar, Mertsch, and Sarkar in 2011. A multiple of 1.0 would indicate agreement with the Standard Model. The results of this analysis find the multiple to be 1.30 (+0.21 -0.19 statistical) (+0.40 -0.44 systematic) for the neutrino energy range of 6.3 to 980 TeV, which is in agreement with the Standard Model expectation.

  2. Data Processing System (DPS) software with experimental design, statistical analysis and data mining developed for use in entomological research.

    PubMed

    Tang, Qi-Yi; Zhang, Chuan-Xi

    2013-04-01

    A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.

  3. Upgrade Summer Severe Weather Tool

    NASA Technical Reports Server (NTRS)

    Watson, Leela

    2011-01-01

    The goal of this task was to upgrade the existing severe weather database by adding observations from the 2010 warm season, update the verification dataset with results from the 2010 warm season, apply statistical logistic regression analysis to the database, and develop a new forecast tool. The AMU analyzed 7 stability parameters that showed potential for providing guidance in forecasting severe weather, calculated verification statistics for the Total Threat Score (TTS), and calculated warm season verification statistics for the 2010 season. The AMU also performed statistical logistic regression analysis on the 22-year severe weather database. The results indicated that the logistic regression equation did not show an increase in skill over the previously developed TTS. The equation showed less accuracy than TTS at predicting severe weather, little ability to distinguish between severe and non-severe weather days, and worse standard categorical accuracy measures and skill scores than TTS.
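
    A sketch of the kind of logistic-regression-plus-skill-score evaluation described (predictors and outcomes are synthetic; this is not the AMU's tool):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 400
X = rng.normal(size=(n, 7))                   # 7 stability parameters
logit = 0.8 * X[:, 0] - 0.5 * X[:, 1]         # assumed true relationship
y = rng.random(n) < 1 / (1 + np.exp(-logit))  # severe (1) / non-severe (0)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Standard categorical skill measures of the kind compared against TTS
hits = np.sum((pred == 1) & (y == 1))
misses = np.sum((pred == 0) & (y == 1))
false_alarms = np.sum((pred == 1) & (y == 0))
pod = hits / (hits + misses)                      # probability of detection
far = false_alarms / max(hits + false_alarms, 1)  # false alarm ratio
csi = hits / (hits + misses + false_alarms)       # critical success index
print(f"POD={pod:.2f} FAR={far:.2f} CSI={csi:.2f}")
```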

  4. Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions

    NASA Technical Reports Server (NTRS)

    Miles, Jeffrey Hilton

    2006-01-01

    This paper describes a novel approach based on coherence functions and statistical theory for sensor validation in a harsh environment. Using aligned and unaligned coherence functions together with statistical theory, one can test for sensor degradation, total sensor failure, or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor, using spectrum analysis methods on aligned and unaligned time histories, has verified the effectiveness of the proposed method. All the procedures produce good results, demonstrating the robustness of the technique.
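
    The aligned/unaligned contrast can be demonstrated with scipy on synthetic signals (a sketch of the idea, not the paper's processing chain): two sensors seeing a common source show high coherence when aligned and near-zero coherence when one record is shifted far beyond the correlation time.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs, n = 1000.0, 2 ** 14
common = rng.normal(size=n)
s1 = common + 0.5 * rng.normal(size=n)      # sensor 1 = source + noise
s2 = common + 0.5 * rng.normal(size=n)      # sensor 2 = source + noise

f, c_aligned = signal.coherence(s1, s2, fs=fs, nperseg=1024)
f, c_unaligned = signal.coherence(s1, np.roll(s2, 5000), fs=fs, nperseg=1024)
print(f"mean coherence aligned:   {c_aligned.mean():.2f}")    # well above 0
print(f"mean coherence unaligned: {c_unaligned.mean():.2f}")  # near bias floor
```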

  5. TRAPR: R Package for Statistical Analysis and Visualization of RNA-Seq Data.

    PubMed

    Lim, Jae Hyun; Lee, Soo Youn; Kim, Ju Han

    2017-03-01

    High-throughput transcriptome sequencing, also known as RNA sequencing (RNA-Seq), is a standard technology for measuring gene expression with unprecedented accuracy. Numerous Bioconductor packages have been developed for the statistical analysis of RNA-Seq data. However, these tools focus on specific aspects of the data analysis pipeline and are difficult to integrate with one another due to their disparate data structures and processing methods. They also lack visualization methods to confirm the integrity of the data and the process. In this paper, we propose an R-based RNA-Seq analysis pipeline called TRAPR, an integrated tool that facilitates the statistical analysis and visualization of RNA-Seq expression data. TRAPR provides various functions for data management, filtering of low-quality data, normalization, transformation, statistical analysis, data visualization, and result visualization that allow researchers to build customized analysis pipelines.

  6. [Analysis the epidemiological features of 3,258 patients with allergic rhinitis in Yichang City].

    PubMed

    Chen, Bo; Zhang, Zhimao; Pei, Zhi; Chen, Shihan; Du, Zhimei; Lan, Yan; Han, Bei; Qi, Qi

    2015-02-01

    To investigate the epidemiological features of patients with allergic rhinitis (AR) in Yichang city and to propose effective prevention and control measures. Data on allergic rhinitis in the city proper from 2010 to 2013 were collected, entered into a database, and analyzed statistically. In recent years, the number of AR patients in this area has increased year by year. Spring and winter were the peak seasons of onset, and young men made up the largest share of patients. Differences by age, area, and gender were statistically significant (P < 0.01). Differences in the gender composition of allergy history and related diseases were statistically significant (P < 0.05). Differences in allergens and degree of positivity by gender and age structure were statistically significant (P < 0.01). Health education, environmental improvement, changing harmful habits, timely medical care, and standardized treatment are needed.

  7. Atherosclerosis is associated with erectile function and lower urinary tract symptoms, especially nocturia, in middle-aged men.

    PubMed

    Tsujimura, Akira; Hiramatsu, Ippei; Aoki, Yusuke; Shimoyama, Hirofumi; Mizuno, Taiki; Nozaki, Taiji; Shirai, Masato; Kobayashi, Kazuhiro; Kumamoto, Yoshiaki; Horie, Shigeo

    2017-06-01

    Atherosclerosis is a systemic disease in which plaque builds up inside the arteries and can lead to serious problems related to quality of life (QOL). Lower urinary tract symptoms (LUTS), erectile dysfunction (ED), and late-onset hypogonadism (LOH) are highly prevalent in aging men and are significantly associated with reduced QOL. However, few questionnaire-based studies have fully examined the relation between atherosclerosis and these urological symptoms. The study comprised 303 outpatients who visited our clinic with symptoms of LOH. Several factors influencing atherosclerosis, including serum concentrations of triglyceride, fasting blood sugar, and total testosterone measured by radioimmunoassay, were investigated. We also measured brachial-ankle pulse wave velocity (baPWV) and assessed symptoms with specific questionnaires, including the Sexual Health Inventory for Men (SHIM), Erection Hardness Score (EHS), International Prostate Symptom Score (IPSS), QOL index, and Aging Male Symptoms rating scale (AMS). Stepwise associations between the ratio of measured/age-standard baPWV and clinical factors, including laboratory data and questionnaire scores, were compared using the Jonckheere-Terpstra test for trend. The associations between the ratio of measured/age-standard baPWV and each IPSS score were assessed in a multivariate linear regression model after adjustment for serum triglyceride, fasting blood sugar, and total testosterone. Regarding ED, a higher ratio of measured/age-standard baPWV was associated with a lower EHS, whereas no association was found with SHIM. Regarding LUTS, a higher ratio of measured/age-standard baPWV was associated with a higher IPSS and QOL index. However, there was no statistically significant association between the ratio of measured/age-standard baPWV and AMS. A multivariate linear regression model showed only nocturia, among the IPSS items, to be associated with the ratio of measured/age-standard baPWV. Atherosclerosis is associated with erectile function and LUTS, especially nocturia.

  8. Exocrine Dysfunction Correlates with Endocrinal Impairment of Pancreas in Type 2 Diabetes Mellitus

    PubMed Central

    Prasanna Kumar, H. R.; Gowdappa, H. Basavana; Hosmani, Tejashwi; Urs, Tejashri

    2018-01-01

    Background: Diabetes mellitus (DM) is a chronic metabolic disorder that manifests as elevated blood sugar levels over a prolonged period. The pancreatic endocrine system is generally affected in diabetes, but abnormal exocrine function often manifests as well because of its proximity to the endocrine system. Fecal elastase-1 (FE-1) has been found to be an ideal biomarker of exocrine insufficiency of the pancreas. Aim: This study was conducted to assess exocrine dysfunction of the pancreas in patients with type 2 DM (T2DM) by measuring FE-1 levels and to associate the level of hyperglycemia with exocrine pancreatic dysfunction. Methodology: A prospective, cross-sectional comparative study was conducted on both T2DM patients and healthy nondiabetic volunteers. FE-1 levels were measured using a commercial kit (Human Pancreatic Elastase ELISA BS 86-01 from Bioserv Diagnostics). Data analysis was performed using standard statistical parameters such as the mean, standard deviation, and standard error, together with independent-samples t-tests and Chi-square tests/cross-tabulation in SPSS for Windows version 20.0. Results: A statistically nonsignificant (P = 0.5051) relationship between FE-1 deficiency and age was observed, implying that age is a noncontributing factor toward exocrine pancreatic insufficiency among diabetic patients. A statistically significant correlation (P = 0.003) between glycated hemoglobin and FE-1 levels was also noted. The associations of retinopathy (P = 0.001) and peripheral pulses (P = 0.001) with FE-1 levels were statistically significant. Conclusion: This study validates the benefit of FE-1 estimation as a surrogate marker of exocrine pancreatic insufficiency, which otherwise remains unmanifested and subclinical. PMID:29535950

  9. A framework for incorporating DTI Atlas Builder registration into Tract-Based Spatial Statistics and a simulated comparison to standard TBSS.

    PubMed

    Leming, Matthew; Steiner, Rachel; Styner, Martin

    2016-02-27

    Tract-based spatial statistics (TBSS) is a software pipeline widely employed in comparative analysis of white matter integrity from diffusion tensor imaging (DTI) datasets. In this study, we seek to evaluate the relationship between different methods of atlas registration for use with TBSS and different DTI measures (fractional anisotropy, FA; axial diffusivity, AD; radial diffusivity, RD; and mean diffusivity, MD). To do so, we have developed a novel tool that builds on existing diffusion atlas building software, integrating it into an adapted version of TBSS called DAB-TBSS (DTI Atlas Builder-Tract-Based Spatial Statistics) by using the advanced registration offered in DTI Atlas Builder. To compare the effectiveness of these two versions of TBSS, we also propose a framework for simulating population differences in diffusion tensor imaging data, providing a more substantive means of empirically comparing DTI group analysis programs such as TBSS. In this study, we used 33 diffusion tensor imaging datasets and simulated group-wise changes in these data by increasing, in three different simulations, the principal eigenvalue (directly altering AD), the second and third eigenvalues (RD), and all three eigenvalues (MD) in the genu, the right uncinate fasciculus, and the left IFO. Additionally, we assessed the benefits of comparing the tensors directly using a functional analysis of diffusion tensor tract statistics (FADTTS). Our results indicate comparable levels of FA-based detection between DAB-TBSS and TBSS, with standard TBSS registration reporting a higher rate of false positives for the other DTI measures. Within the simulated changes investigated here, this study suggests that the use of DTI Atlas Builder's registration enhances TBSS group-based studies.
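
    The four diffusion scalars involved in the simulations follow directly from the tensor eigenvalues; a small sketch (the eigenvalues are illustrative white-matter-like values, not from the study's data):

```python
import numpy as np

# Standard DTI scalar maps from the sorted eigenvalues l1 >= l2 >= l3.
def dti_scalars(l1, l2, l3):
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    ad = l1                                        # axial diffusivity
    rd = (l2 + l3) / 2.0                           # radial diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2))  # fractional anisotropy
    return fa, md, ad, rd

# Illustrative eigenvalues (units of 1e-3 mm^2/s)
l1, l2, l3 = 1.7, 0.4, 0.3
print("baseline: ", dti_scalars(l1, l2, l3))
# Simulated change of the type studied: scale only the principal eigenvalue
# (raises AD directly; FA and MD shift as side effects)
print("AD-change:", dti_scalars(l1 * 1.1, l2, l3))
```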

  10. The Impact of Linking Distinct Achievement Test Scores on the Interpretation of Student Growth in Achievement

    ERIC Educational Resources Information Center

    Airola, Denise Tobin

    2011-01-01

    Changes to state tests impact the ability of State Education Agencies (SEAs) to monitor change in performance over time. The purpose of this study was to evaluate the Standardized Performance Growth Index (PGIz), a proposed statistical model for measuring change in student and school performance, across transitions in tests. The PGIz is a…

  11. The p-Value You Can't Buy.

    PubMed

    Demidenko, Eugene

    2016-01-02

    There is growing frustration with the concept of the p-value. Besides having an ambiguous interpretation, the p-value can be made as small as desired by increasing the sample size, n. The p-value is outdated and does not make sense with big data: everything becomes statistically significant. The root of the problem with the p-value is in the mean comparison. We argue that statistical uncertainty should be measured on the individual, not the group, level. Consequently, standard deviation (SD) error bars, not standard error (SE) error bars, should be used to graphically present the data on two groups. We introduce a new measure based on the discrimination of individuals/objects from two groups, and call it the D-value. The D-value can be viewed as the n-of-1 p-value because it is computed in the same way as p while letting n equal 1. We show how the D-value is related to discrimination probability and the area above the receiver operating characteristic (ROC) curve. The D-value has a clear interpretation as the proportion of patients who get worse after the treatment, and as such makes it easier to weigh up the likelihood of events under different scenarios.
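
    A hedged sketch of the discrimination quantity the D-value is built around: the probability that a random individual from one group exceeds a random individual from the other, which is the ROC area. This illustrates the concept on synthetic data, not necessarily Demidenko's exact computation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
control = rng.normal(0.0, 1.0, 200)
treated = rng.normal(0.4, 1.0, 200)   # small mean shift

# The Mann-Whitney U statistic divided by n1*n2 estimates P(treated > control),
# i.e. the area under the ROC curve.
u, _ = stats.mannwhitneyu(treated, control, alternative="two-sided")
auc = u / (len(treated) * len(control))
print(f"discrimination probability (AUC) = {auc:.3f}")
# With huge n a t-test p-value shrinks toward 0, but this individual-level
# discrimination stays near 0.61 -- the contrast the paper draws.
```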

  12. Changing Criminal Attitudes Among Incarcerated Offenders: Initial Examination of a Structured Treatment Program.

    PubMed

    Simourd, David J; Olver, Mark E; Brandenburg, Bryan

    2016-09-01

    The present study investigated the effect of a criminal attitude treatment program on changes in measured criminal attitudes and on postprogram recidivism. The criminal attitude program (CAP) is a standardized therapeutic curriculum consisting of 15 modules offering 44 hr of therapeutic time. It was delivered by trained facilitators to a total of 113 male offenders incarcerated in one of five state correctional institutions. Pretreatment and posttreatment comparisons were made on standardized measures of criminal attitudes, response bias, and motivation for lifestyle changes. Results showed statistically significantly lower criminal attitudes at posttreatment that were unaffected by response bias. There were also increases in motivation for lifestyle changes, but these did not reach statistical significance. Fifty-seven participants were released into the community following the program and were eligible for recidivism analyses. Comparisons between participants who completed the CAP and those who did not revealed a 7% lower rearrest rate among CAP completers. Although preliminary, these results indicate that the CAP had a positive effect on criminal attitudes and recidivism. The findings are discussed in terms of conceptual and practical considerations in the assessment and treatment of criminal attitudes among offenders. © The Author(s) 2015.

  13. Advances in Statistical Methods for Substance Abuse Prevention Research

    PubMed Central

    MacKinnon, David P.; Lockwood, Chondra M.

    2010-01-01

    The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467

  14. The effect of sensor spacing on wind measurements at the Shuttle Landing Facility

    NASA Technical Reports Server (NTRS)

    Merceret, Francis J.

    1995-01-01

    This document presents results of a field study of the effect of sensor spacing on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one-second intervals from 30 foot (9.1 m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. This study quantifies the differences as a function of the statistics of the observed winds and the distance between the measurement points and the points of interest. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions. Correlations, spectra, moments, and structure functions were computed. A universal normalization for the structure functions was devised. The normalized structure functions increase as the 2/3 power of separation distance until an asymptotic value is approached. This occurs at spacings of several hundred feet (about 100 m). At larger spacings, the structure functions are bounded by the asymptote. This enables quantitative estimates of the expected differences between the winds at the measurement point and the points of interest to be made from the measured wind statistics. A procedure is provided for making these estimates.
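
    The structure-function computation at the heart of the analysis can be sketched as follows (the synthetic record is a crude stand-in for measured SLF winds): compute D(r) = <(u(x+r) - u(x))^2> and compare with the r^(2/3) growth noted in the study.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2 ** 15
# Crude synthetic turbulence: noise filtered to a roughly -5/3 spectrum
white = rng.normal(size=n)
k = np.fft.rfftfreq(n)
k[0] = 1.0                                   # avoid division by zero at DC
u = np.fft.irfft(np.fft.rfft(white) * k ** (-5.0 / 6.0), n)
u /= u.std()

for r in [1, 4, 16, 64, 256]:
    d = np.mean((u[r:] - u[:-r]) ** 2)       # second-order structure function
    print(f"r={r:4d}  D(r)={d:.4f}  D(r)/r^(2/3)={d / r ** (2 / 3):.5f}")
# D(r)/r^(2/3) is roughly constant at small separations and levels off (the
# asymptote) at large ones, mirroring the behavior reported for the towers.
```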

  15. In vivo dosimetry for external photon treatments of head and neck cancers by diodes and TLDS.

    PubMed

    Tung, C J; Wang, H C; Lo, S H; Wu, J M; Wang, C J

    2004-01-01

    In vivo dosimetry was implemented for head and neck cancer treatments delivered with large fields. Diode and thermoluminescence dosemeter (TLD) measurements were carried out on linear accelerators delivering 6 MV photon beams. ESTRO in vivo dosimetry protocols were followed in determining midline doses from measurements of entrance and exit doses. Of the fields monitored by diodes, the maximum absolute deviation of measured midline doses from planned target doses was 8%, with a mean value and standard deviation of -1.0% and 2.7%. If planned target doses were calculated using radiological water-equivalent thicknesses rather than patient geometric thicknesses, the maximum absolute deviation dropped to 4%, with a mean and standard deviation of 0.7% and 1.8%. For in vivo dosimetry monitored by TLDs, the shift in mean dose remained small but the statistical precision was poorer.

  16. 78 FR 9055 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-07

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the..., Medical Systems Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo...

  17. Application of Allan Deviation to Assessing Uncertainties of Continuous-measurement Instruments, and Optimizing Calibration Schemes

    NASA Astrophysics Data System (ADS)

    Jacobson, Gloria; Rella, Chris; Farinas, Alejandro

    2014-05-01

    Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contributions of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the translation from time series to Allan deviation plot for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimizing and predicting the performance of different calibration schemes will be presented. Even though this presentation uses the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb 1966 [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, 57, pp. 131-139, April 1993
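
    A minimal non-overlapping Allan deviation implementation following the standard definition sigma_y^2(tau) = 0.5 * <(ybar_{i+1} - ybar_i)^2>, applied to a synthetic white-noise-plus-drift signal of the kind discussed above:

```python
import numpy as np

def allan_deviation(y, dt, m):
    """y: regularly sampled series; dt: sample interval; m: samples per bin."""
    n_bins = len(y) // m
    ybar = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)  # bin averages
    avar = 0.5 * np.mean(np.diff(ybar) ** 2)                # Allan variance
    return m * dt, np.sqrt(avar)

rng = np.random.default_rng(2)
n, dt = 100_000, 1.0
white = rng.normal(0, 1, n)              # white noise: ADEV slope -1/2
drift = 1e-5 * np.arange(n)              # linear drift: ADEV slope +1
for m in [1, 10, 100, 1000, 10000]:
    tau, adev = allan_deviation(white + drift, dt, m)
    print(f"tau={tau:8.0f} s  ADEV={adev:.4e}")
# ADEV first falls as tau^(-1/2) (averaging down white noise), then rises as
# drift dominates; the minimum suggests an optimal calibration interval.
```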

  18. Developing the Precision Magnetic Field for the E989 Muon g{2 Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Matthias W.

    The experimental value of $(g-2)_\mu$ historically has been and contemporarily remains an important probe into the Standard Model and proposed extensions. Previous measurements of $(g-2)_\mu$ exhibit a persistent statistical tension with calculations using the Standard Model, implying that the theory may be incomplete and constraining possible extensions. The Fermilab Muon g-2 experiment, E989, endeavors to increase the precision over previous experiments by a factor of four and probe more deeply into the tension with the Standard Model. The $(g-2)_\mu$ experimental implementation measures two spin precession frequencies defined by the magnetic field, proton precession and muon precession. The value of $(g-2)_\mu$ is derived from a relationship between the two frequencies. The precision of magnetic field measurements and the overall magnetic field uniformity achieved over the muon storage volume are then two undeniably important aspects of the experiment in minimizing uncertainty. The current thesis details the methods employed to achieve the magnetic field goals and the results of that effort.

  19. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries.

    PubMed

    Rivera-Rodriguez, Claudia L; Resch, Stephen; Haneuse, Sebastien

    2018-01-01

    In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty.
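
    A sketch of the bootstrap computation described above for a facility-subsample costing study (the expansion estimator and all numbers are invented for illustration): resample facilities with replacement, re-estimate the total each time, and summarize the spread.

```python
import numpy as np

rng = np.random.default_rng(8)
sampled_costs = rng.lognormal(mean=9.0, sigma=0.5, size=40)  # 40 facilities
N_facilities = 400                       # facilities in the whole program

def total_cost(costs):
    return N_facilities * costs.mean()   # simple expansion estimator

boot = np.array([
    total_cost(rng.choice(sampled_costs, size=len(sampled_costs), replace=True))
    for _ in range(2000)
])
est = total_cost(sampled_costs)
se = boot.std(ddof=1)                    # bootstrap standard error
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"total cost = {est:,.0f}  SE = {se:,.0f}  95% CI = [{lo:,.0f}, {hi:,.0f}]")
```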

  20. Quantifying and reducing statistical uncertainty in sample-based health program costing studies in low- and middle-income countries

    PubMed Central

    Resch, Stephen

    2018-01-01

    Objectives: In many low- and middle-income countries, the costs of delivering public health programs such as for HIV/AIDS, nutrition, and immunization are not routinely tracked. A number of recent studies have sought to estimate program costs on the basis of detailed information collected on a subsample of facilities. While unbiased estimates can be obtained via accurate measurement and appropriate analyses, they are subject to statistical uncertainty. Quantification of this uncertainty, for example, via standard errors and/or 95% confidence intervals, provides important contextual information for decision-makers and for the design of future costing studies. While other forms of uncertainty, such as that due to model misspecification, are considered and can be investigated through sensitivity analyses, statistical uncertainty is often not reported in studies estimating the total program costs. This may be due to a lack of awareness/understanding of (1) the technical details regarding uncertainty estimation and (2) the availability of software with which to calculate uncertainty for estimators resulting from complex surveys. We provide an overview of statistical uncertainty in the context of complex costing surveys, emphasizing the various potential specific sources that contribute to overall uncertainty. Methods: We describe how analysts can compute measures of uncertainty, either via appropriately derived formulae or through resampling techniques such as the bootstrap. We also provide an overview of calibration as a means of using additional auxiliary information that is readily available for the entire program, such as the total number of doses administered, to decrease uncertainty and thereby improve decision-making and the planning of future studies. Results: A recent study of the national program for routine immunization in Honduras shows that uncertainty can be reduced by using information available prior to the study. This method can not only be used when estimating the total cost of delivering established health programs but also to decrease uncertainty when the interest lies in assessing the incremental effect of an intervention. Conclusion: Measures of statistical uncertainty associated with survey-based estimates of program costs, such as standard errors and 95% confidence intervals, provide important contextual information for health policy decision-making and key inputs for the design of future costing studies. Such measures are often not reported, possibly because of technical challenges associated with their calculation and a lack of awareness of appropriate software. Modern statistical analysis methods for survey data, such as calibration, provide a means to exploit additional information that is readily available but was not used in the design of the study to significantly improve the estimation of total cost through the reduction of statistical uncertainty. PMID:29636964

  1. A strategy for enhancing financial performance: a study of general acute care hospitals in South Korea.

    PubMed

    Choi, Mankyu; Lee, Keon-Hyung

    2008-01-01

    In this study, the determinants of hospital profitability were evaluated using a sample of 142 hospitals that had undergone hospital standardization inspections by the South Korea Hospital Association over the 4-year period from 1998 to 2001. The measures of profitability used as dependent variables in this study were pretax return on assets, after-tax return on assets, basic earning power, pretax operating margin, and after-tax operating margin. Among those determinants, it was found that ownership type, teaching status, inventory turnover, and the average charge per adjusted inpatient day positively and statistically significantly affected all 5 of these profitability measures. However, the labor expenses per adjusted inpatient day and administrative expenses per adjusted inpatient day negatively and statistically significantly affected all 5 profitability measures. The debt ratio negatively and statistically significantly affected all 5 profitability measures, with the exception of basic earning power. None of the market factors assessed were shown to significantly affect profitability. In conclusion, the results of this study suggest that the profitability of hospitals can be improved despite deteriorating external environmental conditions by facilitating the formation of sound financial structures with optimal capital supplies, optimizing the management of total assets with special emphasis placed on inventory management, and introducing efficient control of fixed costs including labor and administrative expenses.

  2. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    NASA Astrophysics Data System (ADS)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.
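
    The distribution comparison can be sketched with a two-sample Kolmogorov-Smirnov test, one plausible non-parametric choice (the per-particle aspect ratios below are synthetic stand-ins for TEM measurements, not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
aspect_a = rng.normal(3.8, 0.4, 300)   # sample A aspect ratios
aspect_b = rng.normal(4.1, 0.6, 300)   # sample B: similar center, wider spread

stat, p = stats.ks_2samp(aspect_a, aspect_b)
print(f"KS statistic = {stat:.3f}, p = {p:.2e}")
# A small p-value says the two cumulative distributions differ -- here mostly
# in width, the kind of difference reported between the two nanorod samples.
```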

  3. Estimation of Total Length of Femur from its Proximal and Distal Segmental Measurements of Disarticulated Femur Bones of Nepalese Population using Regression Equation Method.

    PubMed

    Khanal, Laxman; Shah, Sandip; Koirala, Sarun

    2017-03-01

    The length of long bones is an important contributor to estimating stature, one of the four elements of forensic anthropology. Since the physical characteristics of individuals differ among population groups, population-specific studies are needed for estimating the total length of the femur from its segment measurements. Because the femur is not always recovered intact in forensic cases, the aim of this study was to derive regression equations from measurements of proximal and distal fragments in a Nepalese population. A cross-sectional study was done on 60 dry femora (30 from each side), without sex determination, in an anthropometry laboratory. Along with the maximum femoral length, four proximal and four distal segmental measurements were taken following the standard method with the help of an osteometric board, measuring tape, and digital Vernier caliper. Bones with gross defects were excluded from the study. Measured values were recorded separately for the right and left sides. The Statistical Package for the Social Sciences (SPSS version 11.5) was used for statistical analysis. The values of the segmental measurements differed between the right and left sides, but the differences were not statistically significant except for the depth of the medial condyle (p=0.02). All the measurements were positively correlated and had a linear relationship with femoral length. With the help of the regression equations, femoral length can be calculated from the segmental measurements, and the femoral length can then be used to calculate the stature of the individual. The data collected may contribute to the analysis of forensic bone remains in the study population.
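
    An illustrative least-squares fit of the kind the study derives (all measurements below are fabricated for demonstration; real equations must come from the Nepalese sample itself):

```python
import numpy as np

# Predict maximum femoral length (mm) from one proximal segment measurement.
segment = np.array([88, 92, 95, 97, 101, 104, 108, 111])     # segment (mm)
length = np.array([420, 431, 441, 448, 462, 471, 484, 495])  # femur (mm)

slope, intercept = np.polyfit(segment, length, deg=1)
r = np.corrcoef(segment, length)[0, 1]
print(f"length = {slope:.2f} * segment + {intercept:.1f}  (r = {r:.3f})")

# Stature would then follow from the reconstructed femoral length via a
# population-specific stature formula (not shown).
new_segment = 99.0
print(f"predicted femoral length: {slope * new_segment + intercept:.0f} mm")
```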

  4. Statistics of baryon correlation functions in lattice QCD

    NASA Astrophysics Data System (ADS)

    Wagman, Michael L.; Savage, Martin J.; Nplqcd Collaboration

    2017-12-01

    A systematic analysis of the structure of single-baryon correlation functions calculated with lattice QCD is performed, with a particular focus on characterizing the structure of the noise associated with quantum fluctuations. The signal-to-noise problem in these correlation functions is shown, as long suspected, to result from a sign problem. The log-magnitude and complex phase are found to be approximately described by normal and wrapped normal distributions respectively. Properties of circular statistics are used to understand the emergence of a large time noise region where standard energy measurements are unreliable. Power-law tails in the distribution of baryon correlation functions, associated with stable distributions and "Lévy flights," are found to play a central role in their time evolution. A new method of analyzing correlation functions is considered for which the signal-to-noise ratio of energy measurements is constant, rather than exponentially degrading, with increasing source-sink separation time. This new method includes an additional systematic uncertainty that can be removed by performing an extrapolation, and the signal-to-noise problem reemerges in the statistics of this extrapolation. It is demonstrated that this new method allows accurate results for the nucleon mass to be extracted from the large-time noise region inaccessible to standard methods. The observations presented here are expected to apply to quantum Monte Carlo calculations more generally. Similar methods to those introduced here may lead to practical improvements in analysis of noisier systems.
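
    A toy numerical illustration of the exponential signal-to-noise degradation discussed above (the noise model is a crude stand-in for the actual QCD distributions, with parameters invented): when the noise decays more slowly than the signal, StN(t) shrinks exponentially with source-sink separation.

```python
import numpy as np

rng = np.random.default_rng(13)
m_signal, m_noise = 1.0, 0.4      # noise decays more slowly than the signal
N, T = 1000, 20                   # Monte Carlo samples, time extent
t = np.arange(T)

# Correlator samples: exponential signal plus slowly decaying noise
samples = (np.exp(-m_signal * t)
           + np.exp(-m_noise * t) * rng.normal(0, 0.3, size=(N, T)))
mean = samples.mean(axis=0)
err = samples.std(axis=0, ddof=1) / np.sqrt(N)
for ti in [0, 5, 10, 15]:
    print(f"t={ti:2d}  StN = {mean[ti] / err[ti]:8.1f}")
# StN falls roughly like exp(-(m_signal - m_noise) t), which is why standard
# energy extractions become unreliable in the large-time noise region.
```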

  5. On evaluating compliance with air pollution levels 'not to be exceeded more than once per year'

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Sidik, S. M.

    1974-01-01

    The point of view taken is that the Environmental Protection Agency (EPA) Air Quality Standards (AQS) represent conditions which must be made to exist in the ambient environment. The statistical techniques developed should serve as tools for measuring the closeness to achieving the desired quality of air. It is shown that the sampling frequency recommended by the EPA is inadequate to meet these objectives when the standard is expressed as a level not to be exceeded more than once per year and sampling occurs once every three days or less frequently.
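
    A worked example of the sampling-frequency problem, under a simple independence model assumed purely for illustration: if monitoring covers a fraction f of days, each true exceedance day is observed with probability f, and the chance of seeing at most one exceedance (apparent compliance) stays high even for sites well out of compliance.

```python
def prob_observe_at_most_one(k, f):
    """P(observe <= 1 exceedance) given k true exceedance days, coverage f."""
    return (1 - f) ** k + k * f * (1 - f) ** (k - 1)

f = 1 / 3  # roughly one sample every three days
for k in [2, 4, 8, 16]:
    print(f"{k:2d} true exceedances -> P(observe <= 1) = "
          f"{prob_observe_at_most_one(k, f):.2f}")
# Even with 8 true exceedances, apparent compliance is observed ~26% of the
# time, illustrating why sparse sampling cannot reliably verify the standard.
```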

  6. Mysid (Mysidopsis bahia) life-cycle test: Design comparisons and assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lussier, S.M.; Champlin, D.; Kuhn, A.

    1996-12-31

    This study examines ASTM Standard E1191-90, "Standard Guide for Conducting Life-cycle Toxicity Tests with Saltwater Mysids," 1990, using Mysidopsis bahia, by comparing several test designs to assess growth, reproduction, and survival. The primary objective was to determine the most labor efficient and statistically powerful test design for the measurement of statistically detectable effects on biologically sensitive endpoints. Five different test designs were evaluated varying compartment size, number of organisms per compartment and sex ratio. Results showed that while paired organisms in the ASTM design had the highest rate of reproduction among designs tested, no individual design had greater statistical power to detect differences in reproductive effects. Reproduction was not statistically different between organisms paired in the ASTM design and those with randomized sex ratios using larger test compartments. These treatments had numerically higher reproductive success and lower within-tank replicate variance than treatments using smaller compartments where organisms were randomized, or had a specific sex ratio. In this study, survival and growth were not statistically different among designs tested. Within-tank replicate variability can be reduced by using many exposure compartments with pairs, or few compartments with many organisms in each. While this improves variance within replicate chambers, it does not strengthen the power of detection among treatments in the test. An increase in the number of true replicates (exposure chambers) to eight will have the effect of reducing the percent detectable difference by a factor of two.

  7. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI

    PubMed Central

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-01-01

    Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo. Thus, it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols, with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, with the level a function of multicollinearity. Experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can vary significantly and substantially with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. PMID:23473798
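
    The dependence of FIR efficiency on stimulus timing can be sketched with a toy design-matrix computation (a generic efficiency measure on synthetic designs, not the authors' protocol generator): a fixed-interval design with onsets spaced more closely than the FIR window induces near-collinear columns and scores poorly.

```python
import numpy as np

def fir_design(onsets, n_scans, n_bins):
    """FIR design matrix: one indicator regressor per post-stimulus bin."""
    X = np.zeros((n_scans, n_bins))
    for t0 in onsets:
        for b in range(n_bins):
            if t0 + b < n_scans:
                X[t0 + b, b] = 1.0
    return X

def efficiency(X):
    """Common design-efficiency score: 1 / trace((X^T X)^(-1))."""
    return 1.0 / np.trace(np.linalg.inv(X.T @ X))

rng = np.random.default_rng(4)
n_scans, n_bins, n_events = 400, 12, 40
random_onsets = np.sort(rng.choice(np.arange(0, n_scans - n_bins),
                                   n_events, replace=False))
periodic_onsets = np.arange(0, n_events * 10, 10)  # fixed 10-scan spacing

e_rand = efficiency(fir_design(random_onsets, n_scans, n_bins))
e_per = efficiency(fir_design(periodic_onsets, n_scans, n_bins))
print(f"random design efficiency:   {e_rand:.3f}")
print(f"periodic design efficiency: {e_per:.3f}")
```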

  8. Estimating error statistics for Chambon-la-Forêt observatory definitive data

    NASA Astrophysics Data System (ADS)

    Lesur, Vincent; Heumez, Benoît; Telali, Abdelkader; Lalanne, Xavier; Soloviev, Anatoly

    2017-08-01

    We propose a new algorithm for calibrating definitive observatory data with the goal of providing users with estimates of the data error standard deviations (SDs). The algorithm has been implemented and tested using Chambon-la-Forêt observatory (CLF) data. The calibration process uses all available data. It is set up as a large, weakly non-linear, inverse problem that ultimately provides estimates of baseline values in three orthogonal directions, together with their expected standard deviations. For this inverse problem, absolute data error statistics are estimated from two series of absolute measurements made within a day. Similarly, variometer data error statistics are derived by comparing variometer data time series between different pairs of instruments over a few years. The comparisons of these time series led us to use an autoregressive process of order 1 (AR1 process) as a prior for the baselines. Therefore the obtained baselines do not vary smoothly in time. They have relatively small SDs, well below 300 pT when absolute data are recorded twice a week, i.e. within the daily-to-weekly measurement rates recommended by INTERMAGNET. The algorithm was tested against the process traditionally used to derive baselines at the CLF observatory, suggesting that the statistics are less favourable when this latter process is used. Finally, two sets of definitive data were calibrated using the new algorithm. Their comparison shows that the definitive data SDs are less than 400 pT and may be slightly overestimated by our process: an indication that more work is required to obtain proper estimates of absolute data error statistics. For magnetic field modelling, the results show that even at isolated sites like the CLF observatory, there are very localised signals over a large span of temporal frequencies that can be as large as 1 nT. The SDs reported here encompass signals with spatial scales of a few hundred metres and timescales of less than a day.

  9. Intensive Reading Remediation in Grade 2 or 3: Are There Effects a Decade Later?

    PubMed Central

    Blachman, Benita A.; Schatschneider, Christopher; Fletcher, Jack M.; Murray, Maria S.; Munger, Kristen A.; Vaughn, Michael G.

    2014-01-01

    Despite data supporting the benefits of early reading interventions, there has been little evaluation of the long-term educational impact of these interventions, with most follow-up studies lasting less than two years (Suggate, 2010). This study evaluated reading outcomes more than a decade after the completion of an 8-month reading intervention using a randomized design with second and third graders selected on the basis of poor word-level skills (Blachman et al., 2004). Fifty-eight (84%) of the original 69 participants took part in the study. The treatment group demonstrated a moderate to small effect size advantage on reading and spelling measures over the comparison group. There were statistically significant differences with moderate effect sizes between treatment and comparison groups on standardized measures of word recognition (i.e., Woodcock Basic Skills Cluster, d = 0.53; Woodcock Word Identification, d = 0.62), the primary, but not exclusive, focus of the intervention. Statistical tests on other reading and spelling measures did not reach thresholds for statistical significance. Patterns in the data related to other educational outcomes, such as high school completion, favored the treatment participants, although differences were not significant. PMID:24578581
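
    The reported effect sizes are pooled-SD Cohen's d values; a quick sketch of the computation follows, with invented score vectors rather than the study's data.

    # Sketch: pooled-SD Cohen's d of the kind reported above (e.g., d = 0.53).
    import numpy as np

    def cohens_d(treatment, comparison):
        """Standardized mean difference with pooled SD."""
        t, c = np.asarray(treatment, float), np.asarray(comparison, float)
        nt, nc = len(t), len(c)
        pooled_var = ((nt - 1) * t.var(ddof=1) + (nc - 1) * c.var(ddof=1)) / (nt + nc - 2)
        return (t.mean() - c.mean()) / np.sqrt(pooled_var)

    treatment_scores = [102, 98, 110, 95, 104, 108, 99, 101]   # invented
    comparison_scores = [96, 92, 100, 94, 90, 97, 95, 93]      # invented
    print(f"d = {cohens_d(treatment_scores, comparison_scores):.2f}")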

  10. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
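
    A simplified sketch of the imputation idea, on simulated data: fit the calibration regression of X on W, draw imputations of X in the main sample, and pool with Rubin's rules. This is not the authors' exact algorithm; a faithful implementation would also propagate uncertainty in the calibration parameters and condition the imputation model on Y and Z, which is what corrects the residual bias of this naive version.

    # Simplified sketch: impute an error-prone covariate X from its proxy W
    # using an external calibration model, then pool with Rubin's rules.
    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 500, 20                                # main sample size, imputations

    # external calibration sample: X and W observed jointly (simulated)
    x_cal = rng.normal(0.0, 1.0, 300)
    w_cal = x_cal + rng.normal(0.0, 0.5, 300)     # W measures X with error
    a1, a0 = np.polyfit(w_cal, x_cal, 1)          # calibration regression X ~ W
    resid_sd = np.std(x_cal - (a0 + a1 * w_cal), ddof=2)

    # main sample: only W, Z, Y observed (simulated)
    x = rng.normal(0.0, 1.0, n)
    z = rng.normal(0.0, 1.0, n)
    w = x + rng.normal(0.0, 0.5, n)
    y = 1.0 + 2.0 * x - 1.0 * z + rng.normal(0.0, 1.0, n)

    betas, variances = [], []
    for _ in range(m):
        x_imp = a0 + a1 * w + rng.normal(0.0, resid_sd, n)   # draw X given W
        X = np.column_stack([np.ones(n), x_imp, z])
        b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (n - 3)
        betas.append(b)
        variances.append(sigma2 * np.diag(np.linalg.inv(X.T @ X)))

    B, V = np.asarray(betas), np.asarray(variances)
    total_var = V.mean(axis=0) + (1 + 1 / m) * B.var(axis=0, ddof=1)  # Rubin's rules
    print("pooled coefficients:", np.round(B.mean(axis=0), 2))
    print("pooled SEs:         ", np.round(np.sqrt(total_var), 2))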

  11. 75 FR 39265 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-07-08

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Prevention, Classifications and Public Health Data Standards, 3311 Toledo Road, Room 2337, Hyattsville, MD...

  12. 78 FR 53148 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-28

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337...

  13. 75 FR 56549 - National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-16

    ... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337, Hyattsville, Maryland 20782, e...

  14. Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation

    ERIC Educational Resources Information Center

    Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann

    2017-01-01

    This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
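
    The inferential point at issue, dividing by n-1 rather than n, can be demonstrated in a few lines; the simulation parameters below are arbitrary.

    # Sketch: the n-1 divisor makes sample variance an unbiased estimator.
    import numpy as np

    rng = np.random.default_rng(42)
    true_var, n, reps = 4.0, 5, 100_000

    samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
    div_n = samples.var(axis=1, ddof=0).mean()     # divide by n (biased low)
    div_n1 = samples.var(axis=1, ddof=1).mean()    # divide by n-1 (unbiased)

    print(f"true variance: {true_var}")
    print(f"mean /n estimate: {div_n:.3f}, mean /(n-1) estimate: {div_n1:.3f}")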

  15. Bivariate random-effects meta-analysis models for diagnostic test accuracy studies using arcsine-based transformations.

    PubMed

    Negeri, Zelalem F; Shaikh, Mateen; Beyene, Joseph

    2018-05-11

    Diagnostic or screening tests are widely used in medical fields to classify patients according to their disease status. Several statistical models for meta-analysis of diagnostic test accuracy studies have been developed to synthesize test sensitivity and specificity of a diagnostic test of interest. Because of the correlation between test sensitivity and specificity, modeling the two measures using a bivariate model is recommended. In this paper, we extend the current standard bivariate linear mixed model (LMM) by proposing two variance-stabilizing transformations: the arcsine square root and the Freeman-Tukey double arcsine transformation. We compared the performance of the proposed methods with the standard method through simulations using several performance measures. The simulation results showed that our proposed methods performed better than the standard LMM in terms of bias, root mean square error, and coverage probability in most of the scenarios, even when data were generated assuming the standard LMM. We also illustrated the methods using two real data sets. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
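
    The two transformations named above are easy to state concretely; a minimal sketch follows, applied to a sensitivity estimate (the counts are invented).

    # Sketch: the two variance-stabilizing transformations for a proportion.
    import numpy as np

    def arcsine_sqrt(events, n):
        """Arcsine square root transform; Var is approx 1/(4n)."""
        return np.arcsin(np.sqrt(events / n))

    def freeman_tukey(events, n):
        """Freeman-Tukey double arcsine transform; Var is approx 1/(n + 0.5)."""
        return (np.arcsin(np.sqrt(events / (n + 1)))
                + np.arcsin(np.sqrt((events + 1) / (n + 1))))

    tp, diseased = 45, 50   # true positives among diseased subjects (invented)
    print("sensitivity          :", tp / diseased)
    print("arcsine sqrt         :", round(float(arcsine_sqrt(tp, diseased)), 4))
    print("Freeman-Tukey arcsine:", round(float(freeman_tukey(tp, diseased)), 4))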

  16. CORRELATIONS BETWEEN INTELLIGENCE, HEAD CIRCUMFERENCE AND HEIGHT: EVIDENCE FROM TWO SAMPLES IN SAUDI ARABIA.

    PubMed

    Bakhiet, Salaheldin Farah Attallah; Essa, Yossry Ahmed Sayed; Dwieb, Amira Mahmood Mohsen; Elsayed, Abdelkader Mohamed Abdelkader; Sulman, Afra Sulman Mohammed; Cheng, Helen; Lynn, Richard

    2017-03-01

    This study was based on two independent studies which in total consisted of 1812 school pupils aged 6-12 years in Saudi Arabia. Study I consisted of 1591 school pupils (609 boys and 982 girls) attending state schools, and Study II consisted of 211 boys with learning disabilities. Intelligence (measured using the Standard Progressive Matrices Plus for Study I and the Standard Progressive Matrices for Study II), head size and height were measured for the two samples. The results showed that intelligence was statistically significantly correlated with head circumference (r=0.350, p<0.001 for Study I and r=0.168, p<0.05 for Study II) and height (r=0.271, p<0.001 for Study I and r=0.178, p<0.05 for Study II).

  17. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

    PubMed Central

    Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.

    2016-01-01

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen’s visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220

  18. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    PubMed

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.

  19. Reporting Practices and Use of Quantitative Methods in Canadian Journal Articles in Psychology.

    PubMed

    Counsell, Alyssa; Harlow, Lisa L

    2017-05-01

    With recent focus on the state of research in psychology, it is essential to assess the nature of the statistical methods and analyses used and reported by psychological researchers. To that end, we investigated the prevalence of different statistical procedures and the nature of statistical reporting practices in recent articles from the four major Canadian psychology journals. The majority of authors evaluated their research hypotheses through the use of analysis of variance (ANOVA), t-tests, and multiple regression. Multivariate approaches were less common. Null hypothesis significance testing remains a popular strategy, but the majority of authors reported a standardized or unstandardized effect size measure alongside their significance test results. Confidence intervals on effect sizes were infrequently employed. Many authors provided minimal details about their statistical analyses, and less than a third of the articles reported data complications such as missing data and violations of statistical assumptions. Strengths of, and areas needing improvement in, the reporting of quantitative results are highlighted. The paper concludes with recommendations for how researchers and reviewers can improve comprehension and transparency in statistical reporting.

  20. 75 FR 53925 - Sea Turtle Conservation; Shrimp and Summer Flounder Trawling Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-02

    ... because of the statistical probability the candidate TED may not achieve the standard (i.e., control TED... the test with 4 turtle captures because of the statistical probability the candidate TED may not... because of the statistical probability the candidate TED may not achieve the standard (i.e., [[Page 53930...

  1. A novel measure and significance testing in data analysis of cell image segmentation.

    PubMed

    Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L

    2017-03-14

    Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient has not been described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in the supervised evaluation. The TER statistically aggregates all misclassification error rates (MER) by taking cell sizes as weights; the MERs are for segmenting each single cell in the population. The TER is fully supported by pairwise comparisons of MERs using 106 manually segmented ground-truth cells with different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of the TER are computed based on the SE of the MER, which is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms. The SEs of TERs and their correlation coefficient can be employed to conduct hypothesis testing, when the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. A novel measure, the TER, of CIS is proposed. The TER's SEs and correlation coefficient are computed. Thereafter, CIS algorithms can be evaluated and compared statistically by conducting significance testing.
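
    A small sketch of the size-weighted aggregation and bootstrap SE described above; the per-cell error rates and cell sizes are simulated, and the function names are illustrative, not the authors' code.

    # Sketch: TER as a size-weighted mean of per-cell MERs, with bootstrap SE.
    import numpy as np

    rng = np.random.default_rng(7)
    n_cells = 106
    cell_sizes = rng.integers(200, 2000, size=n_cells)   # pixels per ground-truth cell
    mer = rng.beta(2, 30, size=n_cells)                  # per-cell error rates (simulated)

    def ter(mer_values, sizes):
        """Total error rate: MERs aggregated with cell sizes as weights."""
        return np.average(mer_values, weights=sizes)

    point = ter(mer, cell_sizes)
    boot = [ter(mer[idx], cell_sizes[idx])
            for idx in (rng.integers(0, n_cells, n_cells) for _ in range(2000))]
    se = np.std(boot, ddof=1)
    print(f"TER = {point:.4f}, bootstrap SE = {se:.4f}, "
          f"95% CI ~ ({point - 1.96 * se:.4f}, {point + 1.96 * se:.4f})")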

  2. Three-dimensional accuracy of different correction methods for cast implant bars

    PubMed Central

    Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom

    2014-01-01

    PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for the correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and each group of ten specimens was then corrected by gas-air torch soldering, laser welding, or an additional casting technique. Three-dimensional evaluation, including horizontal, vertical, and twisting measurements, was based on measurement and comparison of (1) gap distances of the right abutment replica-gold cylinder interface at the buccal, distal, and lingual sides, (2) changes in bar length, and (3) axis angle changes of the right gold cylinders, at the post-correction measurement step, on the three groups with a contact and a non-contact coordinate measuring machine. One-way analysis of variance (ANOVA) and paired t-tests were performed at the 5% significance level. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There was no statistical significance among the three techniques in horizontal, vertical, and axial errors, but the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error. The laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205

  3. Accounting for measurement reliability to improve the quality of inference in dental microhardness research: a worked example.

    PubMed

    Sever, Ivan; Klaric, Eva; Tarle, Zrinka

    2016-07-01

    Dental microhardness experiments are influenced by unobserved factors related to varying tooth characteristics that affect measurement reproducibility. This paper explores appropriate analytical tools for modeling different sources of unobserved variability, to reduce the biases encountered and increase the validity of microhardness studies. The enamel microhardness of human third molars was measured with a Vickers diamond. The effects of five bleaching agents (10, 16, and 30% carbamide peroxide; 25 and 38% hydrogen peroxide) were examined, as well as the effects of artificial saliva and amorphous calcium phosphate. To account for both between- and within-tooth heterogeneity in evaluating treatment effects, the statistical analysis was performed in a mixed-effects framework, which also included an appropriate weighting procedure to adjust for confounding. The results were compared with those of the standard ANOVA model usually applied. The weighted mixed-effects model produced parameter estimates that differed in magnitude and significance from those of the standard ANOVA model. The results of the former model were more intuitive, with more precise estimates and better fit. Confounding could seriously bias study outcomes, highlighting the need for more robust statistical procedures in dental research that account for measurement reliability. The presented framework is more flexible and informative than existing analytical techniques and may improve the quality of inference in dental research. Reported results could be misleading if the underlying heterogeneity of microhardness measurements is not taken into account. Confidence in treatment outcomes could be increased by applying the framework presented.
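
    A minimal sketch of the kind of mixed-effects analysis described above, with tooth as a random effect (statsmodels MixedLM); the data are simulated, and the paper's actual model, including its weighting procedure, is richer than this.

    # Sketch: random-intercept model for repeated indentations within teeth.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    teeth = np.repeat(np.arange(30), 5)                 # 30 teeth, 5 indentations each
    tooth_effect = rng.normal(0, 10, 30)[teeth]         # between-tooth heterogeneity
    treat_per_tooth = rng.choice(["CP10", "CP16", "HP38"], size=30)  # invented labels
    treatment = treat_per_tooth[teeth]
    hardness = (320 + np.where(treatment == "HP38", -25.0, -10.0)
                + tooth_effect + rng.normal(0, 5, 150))  # within-tooth noise

    df = pd.DataFrame({"hardness": hardness, "treatment": treatment, "tooth": teeth})
    fit = smf.mixedlm("hardness ~ treatment", df, groups=df["tooth"]).fit()
    print(fit.summary())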

  4. Normative Measurements of Grip and Pinch Strengths of 21st Century Korean Population

    PubMed Central

    Shim, Jin Hee; Kim, Jin Soo; Lee, Dong Chul; Ki, Sae Hwi; Yang, Jae Won; Jeon, Man Kyung; Lee, Sang Myung

    2013-01-01

    Background Measuring grip and pinch strength is an important part of hand injury evaluation. Currently, there are no standardized values of normal grip and pinch strength among the Korean population, and lack of such data prevents objective evaluation of post-surgical recovery in strength. This study was designed to establish the normal values of grip and pinch strength among the healthy Korean population and to identify any dependent variables affecting grip and pinch strength. Methods A cross-sectional study was carried out. The inclusion criterion was being a healthy Korean person without a previous history of hand trauma. The grip strength was measured using a Jamar dynamometer. Pulp and key pinch strength were measured with a hydraulic pinch gauge. Intra-individual and inter-individual variations in these variables were analyzed in a standardized statistical manner. Results There were a total of 336 healthy participants between 13 and 77 years of age. As would be expected in any given population, the mean grip and pinch strength was greater in the right hand than the left. Male participants (137) showed mean strengths greater than female participants (199) when adjusted for age. Among the male participants, anthropometric variables correlated positively with grip strength, but no such correlations were identifiable in female participants in a statistically significant way. Conclusions Objective measurements of hand strength are an important component of hand injury evaluation, and population-specific normative data are essential for clinical and research purposes. This study reports updated normative hand strengths of the South Korean population in the 21st century. PMID:23362480

  5. Weak Value Amplification is Suboptimal for Estimation and Detection

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-01-01

    We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.

  6. Methods for estimating low-flow statistics for Massachusetts streams

    USGS Publications Warehouse

    Ries, Kernell G.; Friesz, Paul J.

    2000-01-01

    Methods and computer software are described in this report for determining flow duration, low-flow frequency statistics, and August median flows. These low-flow statistics can be estimated for unregulated streams in Massachusetts using different methods depending on whether the location of interest is at a streamgaging station, a low-flow partial-record station, or an ungaged site where no data are available. Low-flow statistics for streamgaging stations can be estimated using standard U.S. Geological Survey methods described in the report. The MOVE.1 mathematical method and a graphical correlation method can be used to estimate low-flow statistics for low-flow partial-record stations. The MOVE.1 method is recommended when the relation between measured flows at a partial-record station and daily mean flows at a nearby, hydrologically similar streamgaging station is linear, and the graphical method is recommended when the relation is curved. Equations are presented for computing the variance and equivalent years of record for estimates of low-flow statistics for low-flow partial-record stations when either a single or multiple index stations are used to determine the estimates. The drainage-area ratio method or regression equations can be used to estimate low-flow statistics for ungaged sites where no data are available. The drainage-area ratio method is generally as accurate as or more accurate than regression estimates when the drainage-area ratio for an ungaged site is between 0.3 and 1.5 times the drainage area of the index data-collection site. Regression equations were developed to estimate the natural, long-term 99-, 98-, 95-, 90-, 85-, 80-, 75-, 70-, 60-, and 50-percent duration flows; the 7-day, 2-year and the 7-day, 10-year low flows; and the August median flow for ungaged sites in Massachusetts. Streamflow statistics and basin characteristics for 87 to 133 streamgaging stations and low-flow partial-record stations were used to develop the equations. The streamgaging stations had from 2 to 81 years of record, with a mean record length of 37 years. The low-flow partial-record stations had from 8 to 36 streamflow measurements, with a median of 14 measurements. All basin characteristics were determined from digital map data. The basin characteristics that were statistically significant in most of the final regression equations were drainage area, the area of stratified-drift deposits per unit of stream length plus 0.1, mean basin slope, and an indicator variable that was 0 in the eastern region and 1 in the western region of Massachusetts. The equations were developed by use of weighted-least-squares regression analyses, with weights assigned proportional to the years of record and inversely proportional to the variances of the streamflow statistics for the stations. Standard errors of prediction ranged from 70.7 to 17.5 percent for the equations to predict the 7-day, 10-year low flow and 50-percent duration flow, respectively. The equations are not applicable for use in the Southeast Coastal region of the State, or where basin characteristics for the selected ungaged site are outside the ranges of those for the stations used in the regression analyses. A World Wide Web application was developed that provides streamflow statistics for data collection stations from a data base and for ungaged sites by measuring the necessary basin characteristics for the site and solving the regression equations. 
Output provided by the Web application for ungaged sites includes a map of the drainage-basin boundary determined for the site, the measured basin characteristics, the estimated streamflow statistics, and 90-percent prediction intervals for the estimates. An equation is provided for combining regression and correlation estimates to obtain improved estimates of the streamflow statistics for low-flow partial-record stations. An equation is also provided for combining regression and drainage-area ratio estimates to obtain improved estimates.
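
    A worked sketch of the drainage-area ratio method described above, including the 0.3-1.5 applicability check; the flow value and drainage areas are invented.

    # Sketch: transfer a low-flow statistic from an index station by area ratio.
    def drainage_area_ratio_estimate(q_index, area_ungaged, area_index):
        """Scale an index-station flow statistic by the drainage-area ratio."""
        ratio = area_ungaged / area_index
        if not 0.3 <= ratio <= 1.5:
            raise ValueError("ratio outside 0.3-1.5: regression equations preferred")
        return q_index * ratio

    # e.g. a 7-day, 10-year low flow of 2.4 cfs at a 51 mi^2 index basin,
    # transferred to a 38 mi^2 ungaged basin (all values invented):
    print(drainage_area_ratio_estimate(2.4, area_ungaged=38.0, area_index=51.0))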

  7. [Intraoperative Measurement of Refraction with a Hand-Held Autorefractometer].

    PubMed

    Gesser, C; Küper, T; Richard, G; Hassenstein, A

    2015-07-01

    The aim of this study was to evaluate intraoperative measurement of objective refraction with a hand-held Retinomax instrument. At the end of cataract surgery, objective refraction was measured in the supine position with a Retinomax. On the first postoperative day the same measurement was performed with the Retinomax and a standard autorefractometer. To evaluate the differences between measurements, the spherical equivalent (SE) and Jackson's cross cylinder at 0° (J0) and 45° (J45) were used. 103 eyes were included, 95 of which had routine cataract surgery. Differences between the Retinomax on the day of surgery and the standard autorefractometer were 0.68 ± 2.58 D in SE, 0.05 ± 1.4 D in J0, and 0.05 ± 1.4 D in J45. There were no statistically significant differences between the groups. Intraoperative measurement of refraction with a Retinomax can predict the postoperative refraction; nevertheless, in a few patients large differences may occur. Georg Thieme Verlag KG Stuttgart · New York.
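
    The quantities compared above can be computed from a sphere/cylinder/axis refraction with the standard Thibos power-vector formulas (M = S + C/2, J0 = -(C/2)cos 2a, J45 = -(C/2)sin 2a); the example refraction below is invented.

    # Sketch: spherical equivalent and J0/J45 from sphere, cylinder, axis.
    import numpy as np

    def power_vector(sphere, cylinder, axis_deg):
        """Spherical equivalent and J0/J45 components (Thibos power vectors)."""
        a = np.deg2rad(axis_deg)
        se = sphere + cylinder / 2.0
        j0 = -(cylinder / 2.0) * np.cos(2 * a)
        j45 = -(cylinder / 2.0) * np.sin(2 * a)
        return se, j0, j45

    se, j0, j45 = power_vector(sphere=-1.25, cylinder=-0.75, axis_deg=170)
    print(f"SE = {se:+.2f} D, J0 = {j0:+.2f} D, J45 = {j45:+.2f} D")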

  8. [Validation of measurement methods and estimation of uncertainty of measurement of chemical agents in the air at workstations].

    PubMed

    Dobecki, Marek

    2012-01-01

    This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps, and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in a laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is established. The validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, is presented in this paper. Methods of quality control, covering sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limits set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.

  9. Evaluation of measurement uncertainty of glucose in clinical chemistry.

    PubMed

    Berçik Inal, B; Koldas, M; Inal, H; Coskun, C; Gümüs, A; Döventas, Y

    2007-04-01

    The definition of the uncertainty of measurement used in the International Vocabulary of Basic and General Terms in Metrology (VIM) is a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand. Uncertainty of measurement comprises many components. In addition to every reported parameter, a measurement uncertainty value should be given by all institutions that have been accredited; this value shows the reliability of the measurement. The GUM, published by NIST, contains directions on evaluating uncertainty. Eurachem/CITAC Guide CG4 was also published by the Eurachem/CITAC Working Group in the year 2000. Both offer a mathematical model with which uncertainty can be calculated. There are two types of uncertainty in measurement. Type A is the evaluation of uncertainty through statistical analysis, and type B is the evaluation of uncertainty through other means, for example, a certified reference material. The Eurachem Guide uses four types of distribution functions: (1) a rectangular distribution, which gives limits without specifying a level of confidence (u(x) = a/√3) to a certificate; (2) a triangular distribution, for values concentrated near the same point (u(x) = a/√6); (3) a normal distribution, in which an uncertainty is given in the form of a standard deviation s, a relative standard deviation s/√n, or a coefficient of variation CV% without specifying the distribution (a = certificate value, u = standard uncertainty); and (4) a confidence interval.
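
    A minimal sketch of the type B standard uncertainties quoted above, for a certificate half-width a (the value of a is invented).

    # Sketch: standard uncertainty from rectangular and triangular distributions.
    import math

    def u_rectangular(a):
        """Limits +/-a, no stated confidence level: u = a / sqrt(3)."""
        return a / math.sqrt(3)

    def u_triangular(a):
        """Limits +/-a, values concentrated near the centre: u = a / sqrt(6)."""
        return a / math.sqrt(6)

    a = 0.05   # hypothetical certificate half-width, e.g. in mmol/L
    print(f"rectangular: u = {u_rectangular(a):.4f}, triangular: u = {u_triangular(a):.4f}")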

  10. Improving qPCR telomere length assays: Controlling for well position effects increases statistical power.

    PubMed

    Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey

    2015-01-01

    Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, with age, and between mother and offspring is examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error compared with the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of those results. © 2015 Wiley Periodicals, Inc.
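
    One simple way to realize such a correction, sketched below, is to model plate row and column as categorical factors and keep the residual TL; the data, layout, and effect sizes are simulated, and the study's exact correction model may differ.

    # Sketch: regress out well-position (row/column) effects from TL measures.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(5)
    n = 384
    rows = rng.integers(0, 8, n)                  # plate row index of each sample
    cols = rng.integers(0, 12, n)                 # plate column index of each sample
    true_tl = rng.normal(1.0, 0.15, n)            # "true" relative telomere lengths
    measured = true_tl + 0.02 * rows - 0.015 * cols + rng.normal(0, 0.05, n)

    df = pd.DataFrame({"tl": measured, "row": rows, "col": cols})
    fit = smf.ols("tl ~ C(row) + C(col)", df).fit()
    corrected = fit.resid + df["tl"].mean()       # strip position effects, keep scale

    print("corr(raw, true)      :", round(np.corrcoef(measured, true_tl)[0, 1], 3))
    print("corr(corrected, true):", round(np.corrcoef(corrected, true_tl)[0, 1], 3))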

  11. Magneto-acupuncture stimuli effects on ultraweak photon emission from hands of healthy persons.

    PubMed

    Park, Sang-Hyun; Kim, Jungdae; Koo, Tae-Hoi

    2009-03-01

    We investigated ultraweak photon emissions from the hands of 45 healthy persons before and after magneto-acupuncture stimuli. Photon emissions were measured by using two photomultiplier tubes in the spectral range of UV and visible. Several statistical quantities such as the average intensity, the standard deviation, the delta-value, and the degree of asymmetry were calculated from the measurements of photon emissions before and after the magneto-acupuncture stimuli. The distributions of the quantities from the measurements with the magneto-acupuncture stimuli were more differentiable than those of the groups without any stimuli and with the sham magnets. We also analyzed the magneto-acupuncture stimuli effects on the photon emissions through a year-long measurement for two subjects. The individualities of the subjects increased the differences of photon emissions compared to the above group study before and after magnetic stimuli. The changes on the ultraweak photon emission rates of hand for the magnet group were detected conclusively in the quantities of the averages and standard deviations.

  12. Physical and mental health consequences of Katrina on Vietnamese immigrants in New Orleans: a pre- and post-disaster assessment.

    PubMed

    Vu, Lung; Vanlandingham, Mark J

    2012-06-01

    We assessed the health impacts of a natural disaster upon a major immigrant community by comparing pre- and post-event measures for identical individuals. We collected standard health measures for a population-based sample of working-age Vietnamese-Americans living in New Orleans in 2005, just weeks before Katrina occurred. Near the first- and second-year anniversaries of the event, we located and re-assessed more than two-thirds of this original pre-Katrina cohort. We found statistically significant declines in health status for seven of the eight standard SF-36 subscales and for both the physical and mental health component summaries at the first anniversary of the disaster. By the second anniversary, recovery of the health dimensions assessed by these measures was substantial and significant. Most of the SF-36 mental and physical health subscales returned to their original pre-Katrina levels. Being in middle-age, being engaged in professional or self-employed occupations, being unmarried, being less acculturated, and having extensive post-Katrina property damage have statistically significant negative effects on post-Katrina health status, and several of these factors continued to impede recovery by the second anniversary. Hurricane Katrina had significant negative impacts on the mental and physical health of Vietnamese New Orleanians. Several factors present clear opportunities for targeted interventions.

  13. Evaluating the quality of a cell counting measurement process via a dilution series experimental design.

    PubMed

    Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng

    2017-12-01

    Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.

  14. Formation of a narrow baryon resonance with positive strangeness in K⁺ collisions with Xe nuclei

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barmin, V. V.; Asratyan, A. E.; Borisov, V. S.

    2010-07-15

    The data on the charge-exchange reaction K⁺Xe → K⁰pXe′, obtained with the bubble chamber DIANA, are reanalyzed using increased statistics and updated selections. Our previous evidence for the formation of a narrow pK⁰ resonance with mass near 1538 MeV is confirmed. The statistical significance of the signal reaches some 8σ (6σ) when estimated as S/√B (S/√(B+S)). The mass and intrinsic width of the Θ⁺ baryon are measured as m = 1538 ± 2 MeV and Γ = 0.39 ± 0.10 MeV.

  15. Weak value amplification considered harmful

    NASA Astrophysics Data System (ADS)

    Ferrie, Christopher; Combes, Joshua

    2014-03-01

    We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.

  16. Incorporating Stroke Severity Into Hospital Measures of 30-Day Mortality After Ischemic Stroke Hospitalization.

    PubMed

    Schwartz, Jennifer; Wang, Yongfei; Qin, Li; Schwamm, Lee H; Fonarow, Gregg C; Cormier, Nicole; Dorsey, Karen; McNamara, Robert L; Suter, Lisa G; Krumholz, Harlan M; Bernheim, Susannah M

    2017-11-01

    The Centers for Medicare & Medicaid Services publicly reports a hospital-level stroke mortality measure that lacks stroke severity risk adjustment. Our objective was to describe novel measures of stroke mortality suitable for public reporting that incorporate stroke severity into risk adjustment. We linked data from the American Heart Association/American Stroke Association Get With The Guidelines-Stroke registry with Medicare fee-for-service claims data to develop the measures. We used logistic regression for variable selection in risk model development. We developed 3 risk-standardized mortality models for patients with acute ischemic stroke, all of which include the National Institutes of Health Stroke Scale score: one that includes other risk variables derived only from claims data (claims model); one that includes other risk variables derived from claims and clinical variables that could be obtained from electronic health record data (hybrid model); and one that includes other risk variables that could be derived only from electronic health record data (electronic health record model). The cohort used to develop and validate the risk models consisted of 188 975 hospital admissions at 1511 hospitals. The claims, hybrid, and electronic health record risk models included 20, 21, and 9 risk-adjustment variables, respectively; the C statistics were 0.81, 0.82, and 0.79, respectively (as compared with the current publicly reported model C statistic of 0.75); the risk-standardized mortality rates ranged from 10.7% to 19.0%, 10.7% to 19.1%, and 10.8% to 20.3%, respectively; the median risk-standardized mortality rate was 14.5% for all measures; and the odds of mortality for a high-mortality hospital (+1 SD) were 1.51, 1.52, and 1.52 times those for a low-mortality hospital (-1 SD), respectively. We developed 3 quality measures that demonstrate better discrimination than the Centers for Medicare & Medicaid Services' existing stroke mortality measure, adjust for stroke severity, and could be implemented in a variety of settings. © 2017 American Heart Association, Inc.

  17. COGNATE: comparative gene annotation characterizer.

    PubMed

    Wilbrandt, Jeanne; Misof, Bernhard; Niehuis, Oliver

    2017-07-17

    The comparison of gene and genome structures across species has the potential to reveal major trends of genome evolution. However, such a comparative approach is currently hampered by a lack of standardization (e.g., Elliott TA, Gregory TR, Philos Trans Royal Soc B: Biol Sci 370:20140331, 2015). For example, testing the hypothesis that the total amount of coding sequences is a reliable measure of potential proteome diversity (Wang M, Kurland CG, Caetano-Anollés G, PNAS 108:11954, 2011) requires the application of standardized definitions of coding sequence and genes to create both comparable and comprehensive data sets and corresponding summary statistics. However, such standard definitions either do not exist or are not consistently applied. These circumstances call for a standard at the descriptive level using a minimum of parameters as well as an undeviating use of standardized terms, and for software that infers the required data under these strict definitions. The acquisition of a comprehensive, descriptive, and standardized set of parameters and summary statistics for genome publications and further analyses can thus greatly benefit from the availability of an easy-to-use standard tool. We developed a new open-source command-line tool, COGNATE (Comparative Gene Annotation Characterizer), which uses a given genome assembly and its annotation of protein-coding genes for a detailed description of the respective gene and genome structure parameters. Additionally, we revised the standard definitions of gene and genome structures and provide the definitions used by COGNATE as a working draft suggestion for further reference. Complete parameter lists and summary statistics are inferred using this set of definitions to allow downstream analyses and to provide an overview of the genome and gene repertoire characteristics. COGNATE is written in Perl and freely available at the ZFMK homepage ( https://www.zfmk.de/en/COGNATE ) and on github ( https://github.com/ZFMK/COGNATE ). The tool COGNATE allows comparing genome assemblies and structural elements on multiple levels (e.g., scaffold or contig sequence, gene). It clearly enhances comparability between analyses. Thus, COGNATE can provide the important standardization of both genome and gene structure parameter disclosure as well as data acquisition for future comparative analyses. With the establishment of comprehensive descriptive standards and the extensive availability of genomes, an encompassing database will become possible.

  18. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    PubMed

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the number of commercially available IGRT systems grows, it presents a challenge to determine whether different IGRT methods may be used interchangeably, and there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y), and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject) variation, intra-method (within-subject) variation, and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed statistically significant bias and inter-method differences between CBCT and KVX in the Z-axis (both p < 0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p < 0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed preferable for X-axis positional verification, with KVX preferred for superoinferior alignment. The COM3PARE methodology was validated as feasible and useful in this pilot head and neck cancer positional verification dataset. COM3PARE represents a flexible and robust standardized analytic methodology for IGRT comparison. The implemented SAS script is included to encourage other groups to implement COM3PARE in other anatomic sites or IGRT platforms.

  19. Uncertainties of Mayak urine data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Guthrie; Vostrotin, Vadim; Vvdensky, Vladimir

    2008-01-01

    For internal dose calculations for the Mayak worker epidemiological study, quantitative estimates of the uncertainty of the urine measurements are necessary. Some of the data consist of measurements of 24 h urine excretion on successive days (e.g. 3 or 4 days). In a recent publication, dose calculations were done where the uncertainty of the urine measurements was estimated starting from the statistical standard deviation of these replicate measurements. This approach is straightforward and accurate when the number of replicate measurements is large; however, a Monte Carlo study showed it to be problematic for the actual number of replicate measurements (median from 3 to 4). Also, it is sometimes important to characterize the uncertainty of a single urine measurement. Therefore this alternate method has been developed. A method of parameterizing the uncertainty of Mayak urine bioassay measurements is described. The Poisson lognormal model is assumed and data from 63 cases (1099 urine measurements in all) are used to empirically determine the lognormal normalization uncertainty, given the measurement uncertainties obtained from count quantities. The natural logarithm of the geometric standard deviation of the normalization uncertainty is found to be in the range 0.31 to 0.35, including a measurement component estimated to be 0.2.

  20. Insights from analysis for harmful and potentially harmful constituents (HPHCs) in tobacco products.

    PubMed

    Oldham, Michael J; DeSoi, Darren J; Rimmer, Lonnie T; Wagner, Karl A; Morton, Michael J

    2014-10-01

    A total of 20 commercial cigarette and 16 commercial smokeless tobacco products were assayed for 96 compounds listed as harmful and potentially harmful constituents (HPHCs) by the US Food and Drug Administration. For each product, a single lot was used for all testing. Both International Organization for Standardization and Health Canada smoking regimens were used for cigarette testing. For those HPHCs detected, measured levels were consistent with levels reported in the literature; however, substantial assay variability (measured as average relative standard deviation) was found for most results. Using an abbreviated list of HPHCs, statistically significant differences for most of these HPHCs occurred when results were obtained 4-6 months apart (i.e., temporal variability). The assay variability and temporal variability demonstrate the need for standardized analytical methods with defined repeatability and reproducibility for each HPHC using certified reference standards. Temporal variability also means that simple conventional comparisons, such as two-sample t-tests, are inappropriate for comparing products tested at different points in time from the same laboratory or from different laboratories. Until capable laboratories use standardized assays with established repeatability, reproducibility, and certified reference standards, the resulting HPHC data will be unreliable for product comparisons or other decision making in regulatory science. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Proportions of maxillary anterior teeth relative to each other and to golden standard in Tabriz dental faculty students.

    PubMed

    Parnia, Fereydoun; Hafezeqoran, Ali; Mahboub, Farhang; Moslehifard, Elnaz; Koodaryan, Rodabeh; Moteyagheni, Rosa; Saleh Saber, Fariba

    2010-01-01

    Various methods are used to measure the size and form of the teeth, including the golden proportion and the width-to-length ratio of the central teeth, referred to as the golden standard. The aim of this study was to evaluate the occurrence of golden standard values and the golden proportion in the anterior teeth. Photographs of 100 dentistry students (50 males and 50 females) were taken under standard conditions. The visible widths and lengths of the maxillary right and left incisors were calculated and the ratios were compared with the golden standard. Data were analyzed using SPSS 14 software. The results showed statistically significant differences between the width ratio of the right lateral to the central teeth and the golden proportion (P<0.001); the difference was likewise significant for the left side (P<0.001). As a result, there is no golden proportion among the maxillary incisors. The mean differences between the width-to-length proportions of the left and right central teeth and the golden standard were also statistically significant (P<0.001). Therefore, considering the width-to-length proportion of the maxillary central teeth, no golden standard exists. In the evaluation of the width-to-width and width-to-length proportions of the maxillary incisors, no golden proportions and golden standards were detected, respectively.

  2. Minimizing the Standard Deviation of Spatially Averaged Surface Cross-Sectional Data from the Dual-Frequency Precipitation Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, Robert; Kim, Hyokyung

    2016-01-01

    For an airborne or spaceborne radar, the precipitation-induced path attenuation can be estimated from measurements of the normalized surface cross section, σ0, in the presence and absence of precipitation. In one implementation, the mean rain-free estimate and its variability are found from a lookup table (LUT) derived from previously measured data. For the dual-frequency precipitation radar aboard the Global Precipitation Measurement satellite, the nominal table consists of the statistics of the rain-free σ0 over a 0.5° × 0.5° latitude-longitude grid using a three-month set of input data. However, a problem with the LUT is an insufficient number of samples in many cells. An alternative table is constructed by a stepwise procedure that begins with the statistics over a 0.25° × 0.25° grid. If the number of samples at a cell is too few, the area is expanded, cell by cell, choosing at each step the cell that minimizes the variance of the data. The question arises, however, as to whether the selected region corresponds to the smallest variance. To address this question, a second type of variable-averaging grid is constructed using all possible spatial configurations and computing the variance of the data within each region. Comparisons of the standard deviations for the fixed and variable-averaged grids are given as a function of incidence angle and surface type using a three-month set of data. The advantage of variable spatial averaging is that the average standard deviation can be reduced relative to the fixed grid while satisfying the minimum sample requirement.
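
    The cell-by-cell expansion rule lends itself to a short greedy sketch: starting from one grid cell, repeatedly annex whichever neighbouring cell yields the smallest variance of the pooled σ0 samples until enough samples are gathered. The grid size, per-cell sample counts, and minimum-sample threshold below are invented toy values.

    # Sketch: greedy variance-minimizing expansion of an averaging region.
    import numpy as np

    rng = np.random.default_rng(11)
    # toy sigma0 samples per cell: the mean drifts with latitude index i
    grid = {(i, j): rng.normal(10 + 0.3 * i, 1.0, rng.integers(2, 9))
            for i in range(6) for j in range(6)}

    def neighbours(region):
        out = set()
        for (i, j) in region:
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cell = (i + di, j + dj)
                if cell in grid and cell not in region:
                    out.add(cell)
        return out

    def grow(start, min_samples):
        """Greedily annex the neighbouring cell that minimizes pooled variance."""
        region, samples = {start}, list(grid[start])
        while len(samples) < min_samples:
            candidates = neighbours(region)
            if not candidates:
                break
            best = min(candidates, key=lambda c: np.var(samples + list(grid[c])))
            region.add(best)
            samples += list(grid[best])
        return region, np.mean(samples), np.std(samples, ddof=1)

    region, mean_s0, sd_s0 = grow((2, 3), min_samples=30)
    print(f"{len(region)} cells, mean sigma0 = {mean_s0:.2f} dB, SD = {sd_s0:.2f} dB")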

  3. Selected Ground-Water Data for Yucca Mountain Region, Southern Nevada and Eastern California, Through December 1992

    USGS Publications Warehouse

    La Camera, Richard J.; Westenburg, Craig L.

    1994-01-01

    The U.S. Geological Survey, in support of the U.S. Department of Energy, Yucca Mountain Site-Characterization Project, collects, compiles, and summarizes water-resource data in the Yucca Mountain region. The data are collected to document the historical and current condition of ground-water resources, to detect and document changes in those resources through time, and to allow assessments of ground-water resources during investigations to determine the potential suitability of Yucca Mountain for storing high-level nuclear waste. Data on ground-water levels at 36 sites, ground-water discharge at 6 sites, ground-water quality at 19 sites, and ground-water withdrawals within Crater Flat, Jackass Flats, Mercury Valley, and the Amargosa Desert are presented. Data on ground-water levels, discharges, and withdrawals collected by other agencies or as part of other programs are included to further indicate variations through time. A statistical summary of ground-water levels and median annual ground-water withdrawals in Jackass Flats is presented. The statistical summary includes the number of measurements; the maximum, minimum, and median water-level altitudes; and the average deviation of all water-level altitudes for selected baseline periods and for calendar year 1992. Data on ground-water quality are compared to established, proposed, or tentative primary and secondary drinking-water standards, and measures which exceeded those standards are listed for 18 sites. Detected organic compounds for which established, proposed, or tentative drinking-water standards exist also are listed.

  4. Standard deviation and standard error of the mean.

    PubMed

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage of the SD and the SEM in the medical literature. Because the processes of calculating the SD and the SEM involve different statistical inferences, each has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. The meaning of SEM, by contrast, involves statistical inference based on the sampling distribution: SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of the appropriate contexts in which to use each. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
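
    The SD/SEM distinction is easy to make concrete. Below is a tiny, self-contained Python illustration; the data are fabricated solely to show the two formulas side by side.

    ```python
    import numpy as np

    x = np.array([4.8, 5.1, 5.6, 4.9, 5.3, 5.0])  # illustrative sample data

    sd = x.std(ddof=1)            # dispersion of the observations themselves
    sem = sd / np.sqrt(x.size)    # SD of the sampling distribution of the mean

    print(f"mean = {x.mean():.3f}, SD = {sd:.3f}, SEM = {sem:.3f}")
    # SD describes the spread of individual values; SEM describes how precisely
    # the sample mean estimates the population mean, shrinking as n grows.
    ```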

  5. Standard deviation and standard error of the mean

    PubMed Central

    In, Junyong; Lee, Sangseok

    2015-01-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage of the SD and the SEM in the medical literature. Because the processes of calculating the SD and the SEM involve different statistical inferences, each has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. The meaning of SEM, by contrast, involves statistical inference based on the sampling distribution: SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of the appropriate contexts in which to use each. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results. PMID:26045923

  6. Measurement of Impact Acceleration: Mouthpiece Accelerometer Versus Helmet Accelerometer

    PubMed Central

    Higgins, Michael; Halstead, P. David; Snyder-Mackler, Lynn; Barlow, David

    2007-01-01

    Context: Instrumented helmets have been used to estimate impact acceleration imparted to the head during helmet impacts. These instrumented helmets may not accurately measure the actual amount of acceleration experienced by the head due to factors such as helmet-to-head fit. Objective: To determine if an accelerometer attached to a mouthpiece (MP) provides a more accurate representation of headform center of gravity (HFCOG) acceleration during impact than does an accelerometer attached to a helmet fitted on the headform. Design: Single-factor research design in which the independent variable was accelerometer position (HFCOG, helmet, MP) and the dependent variables were g and Severity Index (SI). Setting: Independent impact research laboratory. Intervention(s): The helmeted headform was dropped (n = 168) using a National Operating Committee on Standards for Athletic Equipment (NOCSAE) drop system from the standard heights and impact sites according to NOCSAE test standards. Peak g and SI were measured for each accelerometer position during impact. Main Outcome Measures: Upon impact, the peak g and SI were recorded for each accelerometer location. Results: Strong relationships were noted for HFCOG and MP measures, and significant differences were seen between HFCOG and helmet g measures and HFCOG and helmet SI measures. No statistically significant differences were noted between HFCOG and MP g and SI measures. Regression analyses showed a significant relationship between HFCOG and MP measures but not between HFCOG and helmet measures. Conclusions: Upon impact, MP acceleration (g) and SI measurements were closely related to and more accurate in measuring HFCOG g and SI than helmet measurements. The MP accelerometer is a valid method for measuring head acceleration. PMID:17597937

  7. Characterizing and locating air pollution sources in a complex industrial district using optical remote sensing technology and multivariate statistical modeling.

    PubMed

    Chang, Pao-Erh Paul; Yang, Jen-Chih Rena; Den, Walter; Wu, Chang-Fu

    2014-09-01

    Emissions of volatile organic compounds (VOCs) are among the most frequent causes of environmental nuisance complaints in urban areas, especially where industrial districts are nearby. Unfortunately, identifying the emission sources responsible for VOCs is an inherently difficult task. In this study, we proposed a dynamic approach to gradually confine the location of potential VOC emission sources in an industrial complex by combining multi-path open-path Fourier transform infrared spectrometry (OP-FTIR) measurement and the statistical method of principal component analysis (PCA). Closed-cell FTIR was further used to verify the VOC emission sources by measuring emitted VOCs from selected exhaust stacks at factories in the confined areas. Multiple open-path monitoring lines were deployed during a 3-month monitoring campaign in a complex industrial district. The emission patterns were identified and the locations of emissions were confined by the wind data collected simultaneously. N,N-dimethylformamide (DMF), 2-butanone, toluene, and ethyl acetate, with mean concentrations of 80.0 ± 1.8, 34.5 ± 0.8, 103.7 ± 2.8, and 26.6 ± 0.7 ppbv, respectively, were identified as the major VOC mixture at all times of the day around the receptor site. DMF, a toxic air pollutant, was found in air samples at concentrations exceeding the ambient standard despite the path-averaging effect of OP-FTIR on concentration levels. The PCA identified three major emission sources, including PU coating, chemical packaging, and lithographic printing industries. Applying instrumental measurement and statistical modeling, this study has established a systematic approach for locating emission sources. Statistical modeling (PCA) plays an important role in reducing the dimensionality of a large measured dataset and identifying underlying emission sources. Instrumental measurement, in turn, helps verify the outcomes of the statistical modeling. The field study has demonstrated the feasibility of using multi-path OP-FTIR measurement; wind data incorporated into the statistical modeling (PCA) can successfully identify the major emission sources in a complex industrial district.
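
    As a rough sketch of the PCA step described above (not the study's code or data), the following fragment reduces a matrix of path-averaged concentrations to a few components whose loadings group co-varying compounds. The array shapes, random data, and compound list are assumptions for illustration only.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    compounds = ["DMF", "2-butanone", "toluene", "ethyl acetate"]
    # Hypothetical time series: rows = time points, columns = compounds.
    X = rng.lognormal(mean=3.0, sigma=0.5, size=(200, len(compounds)))

    Z = StandardScaler().fit_transform(X)    # PCA on standardized concentrations
    pca = PCA(n_components=2).fit(Z)

    for k, (var, load) in enumerate(zip(pca.explained_variance_ratio_,
                                        pca.components_)):
        print(f"PC{k + 1}: {var:.1%} of variance, loadings "
              + ", ".join(f"{c}={w:+.2f}" for c, w in zip(compounds, load)))
    # Compounds loading strongly on the same component co-vary in time,
    # hinting at a common emission source.
    ```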

  8. Mentors Offering Maternal Support (M.O.M.S.)

    DTIC Science & Technology

    2011-08-02

    at Sessions 1, 5, and 8. Table 1. Pretest-posttest, randomized, controlled, repeated-measures design: Experimental Intervention Sessions ... theoretical mediators of self-esteem and emotional support (0.6 standard deviation change from pretest to posttest) with reduction of effect to 0.4 ... always brought back to the designated topic. In order to have statistically significant results for the outcome variables, the study sessions must

  9. Fisher information as a generalized measure of coherence in classical and quantum optics.

    PubMed

    Luis, Alfredo

    2012-10-22

    We show that metrological resolution in the detection of small phase shifts provides a suitable generalization of the degrees of coherence and polarization. Resolution is estimated via Fisher information. Besides the standard two-beam Gaussian case, this approach also provides good results for multiple field components and non-Gaussian statistics. This works equally well in quantum and classical optics.
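
    For the simplest Gaussian case the connection between Fisher information and phase resolution can be shown numerically. The sketch below assumes a Gaussian-distributed intensity with mean mu(theta) and constant variance sigma^2, for which F(theta) = (dmu/dtheta)^2 / sigma^2 and the Cramer-Rao bound gives delta-theta >= 1/sqrt(F); the fringe model and noise level are hypothetical.

    ```python
    import numpy as np

    sigma = 0.05                       # measurement noise (assumed)
    mu = lambda theta: np.cos(theta)   # interferometric fringe model (assumed)

    def fisher(theta, h=1e-6):
        dmu = (mu(theta + h) - mu(theta - h)) / (2 * h)  # finite-difference slope
        return dmu**2 / sigma**2

    theta0 = 1.0
    print(f"F = {fisher(theta0):.1f}, phase resolution >= "
          f"{1 / np.sqrt(fisher(theta0)):.4f} rad")
    ```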

  10. New Standards Require Teaching More Statistics: Are Preservice Secondary Mathematics Teachers Ready?

    ERIC Educational Resources Information Center

    Lovett, Jennifer N.; Lee, Hollylynne S.

    2017-01-01

    Mathematics teacher education programs often need to respond to changing expectations and standards for K-12 curriculum and accreditation. New standards for high school mathematics in the United States include a strong emphasis in statistics. This article reports results from a mixed methods cross-institutional study examining the preparedness of…

  11. Standardized Effect Sizes for Moderated Conditional Fixed Effects with Continuous Moderator Variables

    PubMed Central

    Bodner, Todd E.

    2017-01-01

    Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404
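
    The core computation behind such standardized conditional effects can be sketched briefly. The snippet below fits an interaction model by ordinary least squares and scales the conditional slope of the focal predictor by the ratio of sample standard deviations; this is a simplified stand-in for the article's standardized-mean-difference and semi-partial-correlation metrics, run on fabricated data.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 300
    x = rng.normal(size=n)                      # focal predictor (hypothetical)
    z = rng.normal(size=n)                      # continuous moderator
    y = 0.4 * x + 0.2 * z + 0.3 * x * z + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x, z, x * z])
    b = np.linalg.lstsq(X, y, rcond=None)[0]    # OLS fit: b0, b1, b2, b3

    for zv in (-1.0, 0.0, 1.0):                 # moderator at -1 SD, mean, +1 SD
        slope = b[1] + b[3] * zv                # conditional effect of x given z
        std_effect = slope * x.std(ddof=1) / y.std(ddof=1)
        print(f"z = {zv:+.0f} SD: conditional slope = {slope:.3f}, "
              f"standardized = {std_effect:.3f}")
    ```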

  12. Uncovering the single top: observation of electroweak top quark production

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benitez, Jorge Armando

    2009-01-01

    The top quark is generally produced in quark and anti-quark pairs. However, the Standard Model also predicts the production of single top quarks mediated by the electroweak interaction, known as 'Single Top'. Single Top quark production is important because it provides a unique and direct way to measure the CKM matrix element Vtb, and can be used to explore physics possibilities beyond the Standard Model predictions. This dissertation presents the results of the observation of Single Top using 2.3 fb⁻¹ of data collected with the D0 detector at the Fermilab Tevatron collider. The analysis includes the Single Top muon+jets and electron+jets final states and employs Boosted Decision Trees as a method to separate the signal from the background. The resulting Single Top cross section measurement is: (1) σ(pp̄ → tb + X, tqb + X) = 3.74 +0.95/-0.74 pb, where the errors include both statistical and systematic uncertainties. The probability to measure a cross section at this value or higher in the absence of signal is p = 1.9 × 10⁻⁶, corresponding to a Gaussian significance of 4.6 standard deviations. When combining this result with two other analysis methods, the resulting cross section measurement is: (2) σ(pp̄ → tb + X, tqb + X) = 3.94 ± 0.88 pb, and the corresponding measurement significance is 5.0 standard deviations.

  13. Certification of NIST standard reference material 2389a, amino acids in 0.1 mol/L HCl--quantification by ID LC-MS/MS.

    PubMed

    Lowenthal, Mark S; Yen, James; Bunk, David M; Phinney, Karen W

    2010-05-01

    An isotope-dilution liquid chromatography-tandem mass spectrometry (ID LC-MS/MS) measurement procedure was developed to accurately quantify amino acid concentrations in National Institute of Standards and Technology (NIST) Standard Reference Material (SRM) 2389a-amino acids in 0.1 mol/L hydrochloric acid. Seventeen amino acids were quantified using selected reaction monitoring on a triple quadrupole mass spectrometer. LC-MS/MS results were compared to gravimetric measurements from the preparation of SRM 2389a-a reference material developed at NIST and intended for use in intra-laboratory calibrations and quality control. Quantitative mass spectrometry results and gravimetric values were statistically combined into NIST-certified mass fraction values with associated uncertainty estimates. Coefficients of variation (CV) for the repeatability of the LC-MS/MS measurements among amino acids ranged from 0.33% to 2.7% with an average CV of 1.2%. Average relative expanded uncertainty of the certified values including Types A and B uncertainties was 3.5%. Mean accuracy of the LC-MS/MS measurements with gravimetric preparation values agreed to within |1.1|% for all amino acids. NIST SRM 2389a will be available for characterization of routine methods for amino acid analysis and serves as a standard for higher-order measurement traceability. This is the first time an ID LC-MS/MS methodology has been applied for quantifying amino acids in a NIST SRM material.

  14. Precision, Reliability, and Effect Size of Slope Variance in Latent Growth Curve Models: Implications for Statistical Power Analysis

    PubMed Central

    Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher

    2018-01-01

    Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377

  15. MQSA National Statistics

    MedlinePlus

    ... Standards Act and Program | MQSA Insights | MQSA National Statistics ... but should level off with time. Archived Scorecard Statistics: 2018, 2017, 2016 ...

  16. Can Ultrasound Accurately Assess Ischiofemoral Space Dimensions? A Validation Study.

    PubMed

    Finnoff, Jonathan T; Johnson, Adam C; Hollman, John H

    2017-04-01

    Ischiofemoral impingement is a potential cause of hip and buttock pain. It is commonly evaluated with magnetic resonance imaging (MRI). To our knowledge, no study has previously evaluated the ability of ultrasound to measure the ischiofemoral space (IFS) dimensions reliably. To determine whether ultrasound could accurately measure the IFS dimensions when compared with the gold standard imaging modality of MRI. A methods comparison study. Sports medicine center within a tertiary-care institution. A total of 5 male and 5 female asymptomatic adult subjects (age mean = 29.2 years, range = 23-35 years; body mass index mean = 23.5, range = 19.5-26.6) were recruited to participate in the study. Subjects were secured in a prone position on an MRI table with their hips in a neutral position. Their IFS dimensions were then acquired in a randomized order using diagnostic ultrasound and MRI. The main outcome measurements were the IFS dimensions acquired with ultrasound and MRI. The mean IFS dimension measured with ultrasound was 29.5 mm (standard deviation [SD] 4.99 mm, standard error of the mean 1.12 mm), whereas that obtained with MRI was 28.25 mm (SD 5.91 mm, standard error of the mean 1.32 mm). The mean difference between the ultrasound and MRI measurements was 1.25 mm, which was not statistically significant (SD 3.71 mm, 95% confidence interval -0.49 mm to 2.98 mm, t(19) = 1.506, P = .15). The Bland-Altman analysis indicated that the 95% limits of agreement between the 2 measurements were -6.0 to 8.5 mm, indicating that there was no systematic bias between the ultrasound and MRI measurements. Our findings suggest that the IFS measurements obtained with ultrasound are very similar to those obtained with MRI. Therefore, when evaluating individuals with suspected ischiofemoral impingement, one could consider using ultrasound to measure their IFS dimensions. III. Copyright © 2017 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
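
    The paired comparison reported above follows a standard recipe: a paired t-test of the mean bias plus Bland-Altman 95% limits of agreement. A small sketch, with fabricated stand-in measurements rather than the study's data:

    ```python
    import numpy as np
    from scipy import stats

    us = np.array([28.1, 31.4, 25.9, 33.0, 29.7, 27.5, 30.2, 26.8])   # ultrasound
    mri = np.array([27.3, 30.1, 26.5, 31.2, 28.9, 26.1, 29.5, 27.4])  # MRI

    t, p = stats.ttest_rel(us, mri)            # paired t-test of the mean bias
    diff = us - mri
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)              # Bland-Altman limits of agreement

    print(f"t = {t:.3f}, p = {p:.3f}")
    print(f"bias = {bias:.2f} mm, 95% LoA = [{bias - loa:.2f}, {bias + loa:.2f}] mm")
    ```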

  17. Histogram Curve Matching Approaches for Object-based Image Classification of Land Cover and Land Use

    PubMed Central

    Toure, Sory I.; Stow, Douglas A.; Weeks, John R.; Kumar, Sunil

    2013-01-01

    The classification of image-objects is usually done using parametric statistical measures of central tendency and/or dispersion (e.g., mean or standard deviation). The objectives of this study were to analyze digital number histograms of image objects and to evaluate classification measures exploiting characteristic signatures of such histograms. Two histogram matching classifiers were evaluated and compared to the standard nearest-neighbor-to-mean classifier. An ADS40 airborne multispectral image of San Diego, California, was used for assessing the utility of curve matching classifiers in a geographic object-based image analysis (GEOBIA) approach. The classifications were performed with data sets having 0.5 m, 2.5 m, and 5 m spatial resolutions. Results show that histograms are reliable features for characterizing classes. Also, both histogram matching classifiers consistently performed better than the one based on the standard nearest-neighbor-to-mean rule. The highest classification accuracies were produced with images having 2.5 m spatial resolution. PMID:24403648
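
    The contrast between matching whole histograms and matching means only can be sketched in a few lines. The snippet below classifies an object's pixel histogram with a chi-square distance (one plausible matching rule; the paper's exact distance is not assumed) against a nearest-to-mean rule, on synthetic classes that share a mean but differ in spread, which is exactly where histogram matching can win.

    ```python
    import numpy as np

    def chi2(h1, h2, eps=1e-9):
        """Chi-square distance between two normalized histograms."""
        return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    def classify(pixels, class_hists, class_means, bins):
        h = np.histogram(pixels, bins=bins)[0].astype(float)
        h /= h.sum()
        by_hist = min(class_hists, key=lambda c: chi2(h, class_hists[c]))
        by_mean = min(class_means, key=lambda c: abs(pixels.mean() - class_means[c]))
        return by_hist, by_mean

    rng = np.random.default_rng(2)
    bins = np.linspace(0, 255, 33)
    grass = rng.normal(120, 10, 5000)    # narrow histogram (hypothetical class)
    roof = rng.normal(120, 40, 5000)     # wide histogram, same mean
    hists = {c: np.histogram(v, bins=bins)[0] / v.size
             for c, v in (("grass", grass), ("roof", roof))}
    means = {"grass": grass.mean(), "roof": roof.mean()}
    print(classify(rng.normal(120, 38, 400), hists, means, bins))
    ```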

  18. Measuring and monitoring biological diversity: Standard methods for mammals

    USGS Publications Warehouse

    Wilson, Don E.; Cole, F. Russell; Nichols, James D.; Rudran, Rasanayagam; Foster, Mercedes S.

    1996-01-01

    Measuring and Monitoring Biological Diversity: Standard Methods for Mammals provides a comprehensive manual for designing and implementing inventories of mammalian biodiversity anywhere in the world and for any group, from rodents to open-country grazers. The book emphasizes formal estimation approaches, which supply data that can be compared across habitats and over time. Beginning with brief natural histories of the twenty-six orders of living mammals, the book details the field techniques—observation, capture, and sign interpretation—appropriate to different species. The contributors provide guidelines for study design, discuss survey planning, describe statistical techniques, and outline methods of translating field data into electronic formats. Extensive appendixes address such issues as the ethical treatment of animals in research, human health concerns, preserving voucher specimens, and assessing age, sex, and reproductive condition in mammals. Useful in both developed and developing countries, this volume and the Biological Diversity Handbook Series as a whole establish essential standards for a key aspect of conservation biology and resource management.

  19. Calibrated Noise Measurements with Induced Receiver Gain Fluctuations

    NASA Technical Reports Server (NTRS)

    Racette, Paul; Walker, David; Gu, Dazhen; Rajola, Marco; Spevacek, Ashly

    2011-01-01

    The lack of well-developed techniques for modeling changing statistical moments in our observations has stymied the application of stochastic process theory in science and engineering. These limitations were encountered when modeling the performance of radiometer calibration architectures and algorithms in the presence of nonstationary receiver fluctuations. Analyses of measured signals have traditionally been limited to a single measurement series, whereas in a radiometer that samples a set of noise references, the data collection can be treated as an ensemble set of measurements of the receiver state. Noise Assisted Data Analysis (NADA) is a growing field of study with significant potential for aiding the understanding and modeling of nonstationary processes. Typically, NADA entails adding noise to a signal to produce an ensemble set on which statistical analysis is performed. Alternatively, as in radiometric measurements, mixing a signal with calibrated noise provides, through the calibration process, the means to detect deviations from the stationarity assumption and thereby a measurement tool to characterize the signal's nonstationary properties. Data sets comprised of calibrated noise measurements have been limited to those collected with naturally occurring fluctuations in the radiometer receiver. To examine the application of NADA using calibrated noise, a Receiver Gain Modulation Circuit (RGMC) was designed and built to modulate the gain of a radiometer receiver using an external signal. In 2010, an RGMC was installed and operated at the National Institute of Standards and Technology (NIST) using their Noise Figure Radiometer (NFRad) and national standard noise references. The data collected are the first known set of calibrated noise measurements from a receiver with an externally modulated gain. As an initial step, sinusoidal and step-function signals were used to modulate the receiver gain, to evaluate the circuit characteristics and to study the performance of a variety of calibration algorithms. The receiver noise temperature and time-bandwidth product of the NFRad are calculated from the data. Statistical analysis using temporal-dependent calibration algorithms reveals that the naturally occurring fluctuations in the receiver are stationary over long intervals (100s of seconds); however, the receiver exhibits local nonstationarity over the interval in which one set of reference measurements is collected. A variety of calibration algorithms have been applied to the data to assess the algorithms' performance with the gain fluctuation signals. This presentation will describe the RGMC, the experiment design, and a comparative analysis of calibration algorithms.

  20. Lumbar lordosis and sacral slope in lumbar spinal stenosis: standard values and measurement accuracy.

    PubMed

    Bredow, J; Oppermann, J; Scheyerer, M J; Gundlfinger, K; Neiss, W F; Budde, S; Floerkemeier, T; Eysel, P; Beyer, F

    2015-05-01

    Radiological study. To assess standard values, intra- and interobserver reliability and reproducibility of sacral slope (SS) and lumbar lordosis (LL), and the correlation of these parameters in patients with lumbar spinal stenosis (LSS). Anteroposterior and lateral X-rays of the lumbar spine of 102 patients with LSS were included in this retrospective radiologic study. Measurements of SS and LL were carried out by five examiners. Intraobserver correlation and the correlation between LL and SS were calculated with Pearson's r linear correlation coefficient, and intraclass correlation coefficients (ICC) were calculated for inter- and intraobserver reliability. In addition, patients were examined in subgroups with respect to previous surgery and the current therapy. Lumbar lordosis averaged 45.6° (range 2.5°-74.9°; SD 14.2°); intraobserver correlation was between Pearson r = 0.93 and 0.98. SS averaged 35.3° (range 13.8°-66.9°; SD 9.6°); intraobserver correlation was between Pearson r = 0.89 and 0.96. Intraobserver reliability ranged from ICC 0.966 to 0.992 for LL measurements and from ICC 0.944 to 0.983 for SS measurements. Interobserver reliability was ICC 0.944 for LL and 0.990 for SS. The correlation between LL and SS averaged r = 0.79. No statistically significant differences were observed between the analyzed subgroups. Manual measurement of LL and SS in patients with LSS on lateral radiographs is easily performed with excellent intra- and interobserver reliability. The correlation between LL and SS is very high. Differences between patients with and without previous decompression were not statistically significant.
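
    Agreement indices like the ones reported here are commonly computed as a two-way random-effects ICC. A minimal sketch of ICC(2,1) from ANOVA mean squares (the Shrout-Fleiss formulation; whether the paper used exactly this variant is an assumption), on fabricated scores:

    ```python
    import numpy as np

    def icc_2_1(Y):
        """ICC(2,1): two-way random effects, absolute agreement, single rater.
        Y: (n subjects) x (k raters) matrix of scores."""
        n, k = Y.shape
        grand = Y.mean()
        msr = k * np.sum((Y.mean(axis=1) - grand) ** 2) / (n - 1)   # subjects
        msc = n * np.sum((Y.mean(axis=0) - grand) ** 2) / (k - 1)   # raters
        sse = np.sum((Y - grand) ** 2) - msr * (n - 1) - msc * (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(3)
    truth = rng.normal(45, 14, size=30)                  # 30 patients' true LL
    raters = truth[:, None] + rng.normal(0, 2, (30, 5))  # 5 examiners, small error
    print(f"ICC(2,1) = {icc_2_1(raters):.3f}")
    ```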

  1. Using a Standardized Clinical Quantitative Sensory Testing Battery to Judge the Clinical Relevance of Sensory Differences Between Adjacent Body Areas.

    PubMed

    Dimova, Violeta; Oertel, Bruno G; Lötsch, Jörn

    2017-01-01

    Skin sensitivity to sensory stimuli varies among different body areas. A standardized clinical quantitative sensory testing (QST) battery, established for the diagnosis of neuropathic pain, was used to assess whether the magnitude of differences between test sites reaches clinical significance. Ten different sensory QST measures derived from thermal and mechanical stimuli were obtained from 21 healthy volunteers (10 men) and used to create somatosensory profiles bilaterally from the dorsum of the hands (the standard area for the assessment of normative values for the upper extremities, as proposed by the German Research Network on Neuropathic Pain) and bilaterally at the volar forearms as a neighboring nonstandard area. The parameters obtained were statistically compared between test sites. Three of the 10 QST parameters differed significantly with respect to the "body area," that is, warmth detection, thermal sensory limen, and mechanical pain thresholds. After z-transformation and interpretation according to the QST battery's standard instructions, 22 abnormal values were obtained at the hand. Applying the same procedure to parameters assessed at the nonstandard site (forearm), that is, z-transforming them to the reference values for the hand, 24 measurement values emerged as abnormal, which was not significantly different compared with the hand (P=0.4185). Sensory differences between neighboring body areas are statistically significant, reproducing prior knowledge. This has to be considered in scientific assessments where a small variation of the tested body areas may not be an option. However, the magnitude of these differences was below the difference in sensory parameters that is judged as abnormal, indicating a robustness of the QST instrument against protocol deviations with respect to the test area when using the method of comparison with a 95% confidence interval of a reference dataset.

  2. Explorations in Statistics: Standard Deviations and Standard Errors

    ERIC Educational Resources Information Center

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…

  3. Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.

    ERIC Educational Resources Information Center

    Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas

    2002-01-01

    Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…

  4. Ion chamber absorbed dose calibration coefficients, N{sub D,w}, measured at ADCLs: Distribution analysis and stability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muir, B. R., E-mail: Bryan.Muir@nrc-cnrc.gc.ca

    2015-04-15

    Purpose: To analyze absorbed dose calibration coefficients, N_D,w, measured at accredited dosimetry calibration laboratories (ADCLs) for client ionization chambers to study (i) variability among N_D,w coefficients for chambers of the same type calibrated at each ADCL, to investigate ion chamber volume fluctuations and chamber manufacturing tolerances; (ii) equivalency of ion chamber calibration coefficients measured at different ADCLs, by intercomparing N_D,w coefficients for chambers of the same type; and (iii) the long-term stability of N_D,w coefficients for different chamber types, by investigating repeated chamber calibrations. Methods: Large samples of N_D,w coefficients for several chamber types measured over the time period between 1998 and 2014 were obtained from the three ADCLs operating in the United States. These are analyzed using various graphical and numerical statistical tests for the four chamber types with the largest samples of calibration coefficients to investigate (i) and (ii) above. Ratios of calibration coefficients for the same chamber, typically obtained two years apart, are calculated to investigate (iii) above, and chambers with standard deviations of old/new ratios less than 0.3% meet the stability requirements for accurate reference dosimetry recommended in dosimetry protocols. Results: It is found that N_D,w coefficients for a given chamber type compared among different ADCLs may arise from differing probability distributions, potentially due to slight differences in calibration procedures and/or the transfer of the primary standard. However, average N_D,w coefficients from different ADCLs for given chamber types are very close, with percent differences generally less than 0.2% for Farmer-type chambers, well within reported uncertainties. Conclusions: The close agreement among calibrations performed at different ADCLs reaffirms the Calibration Laboratory Accreditation Subcommittee process of ensuring ADCL conformance with National Institute of Standards and Technology standards. This study shows that N_D,w coefficients measured at different ADCLs are statistically equivalent, especially considering reasonable uncertainties. This analysis of N_D,w coefficients also allows identification of chamber types that can be considered stable enough for accurate reference dosimetry.
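
    The stability criterion mentioned above reduces to a one-line check: compute the old/new calibration ratios for a chamber type and compare their standard deviation with the 0.3% threshold. A toy sketch with fabricated coefficient values:

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    old = rng.normal(5.0e7, 5e4, size=40)            # hypothetical N_D,w [Gy/C]
    new = old * rng.normal(1.0, 0.002, size=40)      # recalibration ~2 years later

    ratio = old / new
    sd_pct = 100 * ratio.std(ddof=1)
    print(f"SD of old/new ratios = {sd_pct:.2f}% (stable if < 0.3%)")
    ```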

  5. Can anthropometry measure gender discrimination? An analysis using WHO standards to assess the growth of Bangladeshi children.

    PubMed

    Moestue, Helen

    2009-08-01

    To examine the potential of anthropometry as a tool to measure gender discrimination, with particular attention to the WHO growth standards. Surveillance data collected from 1990 to 1999 were analysed. Height-for-age Z-scores were calculated using three norms: the WHO standards, the 1978 National Center for Health Statistics (NCHS) reference, and the 1990 British growth reference (UK90). Bangladesh. Boys and girls aged 6-59 months (n = 504 358). The three sets of growth curves provided conflicting pictures of the relative growth of girls and boys by age and over time. Conclusions on sex differences in growth depended also on the method used to analyse the curves, be it according to the shape or the relative position of the sex-specific curves. The shapes of the WHO-generated curves uniquely implied that Bangladeshi girls faltered faster or caught up slower than boys throughout their pre-school years, a finding consistent with the literature. In contrast, analysis of the relative position of the curves suggested that girls had higher WHO Z-scores than boys below 24 months of age. Further research is needed to help establish whether and how the WHO international standards can measure gender discrimination in practice, which continues to be a serious problem in many parts of the world.
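
    Growth references like the WHO standards are published as age- and sex-specific LMS parameters (skewness L, median M, coefficient of variation S), from which a Z-score follows directly. A minimal sketch of the standard LMS formula; the parameter values below are hypothetical, not taken from any published table:

    ```python
    import math

    def lms_zscore(x, L, M, S):
        """Height-for-age Z-score from LMS parameters (WHO/CDC convention)."""
        if abs(L) < 1e-12:
            return math.log(x / M) / S
        return ((x / M) ** L - 1.0) / (L * S)

    # Hypothetical LMS values for one age/sex stratum:
    L, M, S = 1.0, 87.0, 0.035     # median height 87 cm, CV 3.5%
    print(f"Z = {lms_zscore(82.5, L, M, S):.2f}")   # a child measuring 82.5 cm
    ```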

  6. Standards of Scientific Conduct: Disciplinary differences

    PubMed Central

    Kalichman, Michael; Sweet, Monica; Plemmons, Dena

    2014-01-01

    Teaching of responsible conduct of research is largely predicated on the assumption that there are accepted standards of conduct that can be taught. However there is little evidence of consensus in the scientific community about such standards, at least for the practices of authorship, collaboration, and data management. To assess whether such differences in standards are based on disciplinary differences, a survey, described previously, addressing standards, practices, and perceptions about teaching and learning was distributed in November 2010 to U.S. faculty from 50 graduate programs for the biomedical disciplines of microbiology, neuroscience, nursing, and psychology. Despite evidence of statistically significant differences across the four disciplines, actual differences were quite small. Stricter measures of effect size indicated practically significant disciplinary differences for fewer than 10% of the questions. This suggests that the variation in individual standards of practice within each discipline is at least as great as variation due to differences among disciplines. Therefore, the need for discipline-specific training may not be as important as sometimes thought. PMID:25256408

  7. Floating Gate sensor for in-vivo dosimetry in radiation therapies. Design and first characterization.

    NASA Astrophysics Data System (ADS)

    Faigon, A.; Martinez Vazquez, I.; Carbonetto, S.; García Inza, M.; G

    2017-01-01

    A floating gate dosimeter was designed and fabricated in a standard CMOS technology. The design guidelines and characterization are presented. The characterization included controlled charging of the floating gate by tunneling, and its discharge under irradiation while measuring the transistor drain current, whose change is the measure of the absorbed dose. The resolution of the obtained device is close to 1 cGy, satisfying the requirements for most radiation therapy dosimetry. Pending statistical validation, the dosimeter is a potential candidate for widespread in-vivo control of radiotherapy treatments.

  8. Apparatus description and data analysis of a radiometric technique for measurements of spectral and total normal emittance

    NASA Technical Reports Server (NTRS)

    Edwards, S. F.; Kantsios, A. G.; Voros, J. P.; Stewart, W. F.

    1975-01-01

    The development of a radiometric technique for determining the spectral and total normal emittance of materials heated to temperatures of 800, 1100, and 1300 K by direct comparison with National Bureau of Standards (NBS) reference specimens is discussed. Emittances are measured over the spectral range of 1 to 15 microns and are statistically compared with NBS reference specimens. Results are included for NBS reference specimens, Rene 41, alundum, zirconia, AISI type 321 stainless steel, nickel 201, and a space-shuttle reusable surface insulation.

  9. A Study of Particle Beam Spin Dynamics for High Precision Experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fiedler, Andrew J.

    In the search for physics beyond the Standard Model, high precision experiments to measure fundamental properties of particles are an important frontier. One group of such measurements involves magnetic dipole moment (MDM) values as well as searching for an electric dipole moment (EDM), both of which could provide insights about how particles interact with their environment at the quantum level and whether there are undiscovered new particles. For these types of high precision experiments, minimizing statistical uncertainties in the measurements plays a critical role. This work leverages computer simulations to quantify the effects of statistical uncertainty for experiments investigating spin dynamics. In it, analysis of beam properties and lattice design effects on the polarization of the beam is performed. As a case study, the beam lines that will provide polarized muon beams to the Fermilab Muon g-2 experiment are analyzed to determine the effects of correlations between the phase space variables and the overall polarization of the muon beam.

  10. Body weight changes during the menstrual cycle among university students in Ahvaz, Iran.

    PubMed

    Haghighizadeh, Mohammad Hossein; Karandish, Majid; Ghoreishi, Mahdiye; Soroor, Farshad; Shirani, Fatemeh

    2014-07-01

    Weight changes during the menstrual cycle may be a cause of concern about body weight among most women. Limited data are available linking the menstrual cycle and body weight changes. The aim of this study was to examine the relationship between the menstrual cycle and body weight changes among university students in Ahvaz, Iran. This cross-sectional study was conducted on 50 Iranian female students aged 18-24 years. Anthropometric indices were measured according to standard protocols. During a complete menstrual cycle, participants' weights were measured each morning. Seventy-eight percent of participants had normal weight (body mass index: 18.5-24.9 kg m(-2)). Body weight increased only slightly during the three days before the beginning of menstruation. Using repeated-measures ANOVA, no statistically significant differences were found in weight during the menstrual cycle (p-value = 0.301). No statistically significant changes were found in body weight during the menstrual cycle in a group of healthy, non-obese, young Iranian women. Further studies on overweight and obese women are suggested.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marleau, Peter; Reyna, David

    In this work we investigate a method that confirms the operability of neutron detectors requiring neither radiological sources nor radiation-generating devices. This is desirable when radiological sources are not available, but confidence in the functionality of the instrument is required. The “source”, based on the production of neutrons in high-Z materials by muons, provides a tagged, low-background and consistent rate of neutrons that can be used to check the functionality of or calibrate a detector. Using a Monte Carlo guided optimization, an experimental apparatus was designed and built to evaluate the feasibility of this technique. Through a series of trial measurements in a variety of locations we show that gated muon-induced neutrons appear to provide a consistent source of neutrons (35.9 ± 2.3 measured neutrons/10,000 muons in the instrument) under normal environmental variability (less than one statistical standard deviation for 10,000 muons) with a combined environmental + statistical uncertainty of ~18% for 10,000 muons. This is achieved in a single 21-22 minute measurement at sea level.

  12. A statistical model to estimate refractivity turbulence structure constant C_n^2 in the free atmosphere

    NASA Technical Reports Server (NTRS)

    Warnock, J. M.; Vanzandt, T. E.

    1986-01-01

    A computer program has been tested and documented (Warnock and VanZandt, 1985) that estimates mean values of the refractivity turbulence structure constant in the stable free atmosphere from standard National Weather Service balloon data or an equivalent data set. The program is based on the statistical model for the occurrence of turbulence developed by VanZandt et al. (1981). Height profiles of the estimated refractivity turbulence structure constant agree well with profiles measured by the Sunset radar with a height resolution of about 1 km. The program also estimates the energy dissipation rate (epsilon), but because of the lack of suitable observations of epsilon, the model for epsilon has not yet been evaluated sufficiently to be used in routine applications. Vertical profiles of the refractivity turbulence structure constant were compared with profiles measured by both radar and optical remote sensors and good agreement was found. However, at times the scintillometer measurements were less than both the radar and model values.

  13. Estimating weak ratiometric signals in imaging data. I. Dual-channel data.

    PubMed

    Broder, Josef; Majumder, Anirban; Porter, Erika; Srinivasamoorthy, Ganesh; Keith, Charles; Lauderdale, James; Sornborger, Andrew

    2007-09-01

    Ratiometric fluorescent indicators are becoming increasingly prevalent in many areas of biology. They are used for making quantitative measurements of intracellular free calcium both in vitro and in vivo, as well as measuring membrane potentials, pH, and other important physiological variables of interest to researchers in many subfields. Often, functional changes in the fluorescent yield of ratiometric indicators are small, and the signal-to-noise ratio (SNR) is of order unity or less. In particular, variability in the denominator of the ratio can lead to very poor ratio estimates. We present a statistical optimization method for objectively detecting and estimating ratiometric signals in dual-wavelength measurements of fluorescent, ratiometric indicators that improves on standard methods. With the use of an appropriate statistical model for ratiometric signals and by taking the pixel-pixel covariance of an imaging dataset into account, we are able to extract user-independent spatiotemporal information that retains high resolution in both space and time.

  14. Application of multislice spiral CT for guidance of insertion of thoracic spine pedicle screws: an in vitro study.

    PubMed

    Wang, Juan; Zhou, Yicheng; Hu, Ning; Wang, Renfa

    2006-01-01

    To investigate the value of guidance by three-dimensional (3-D) reconstruction from multi-slice spiral CT (MSCT) for the placement of pedicle screws, the 3-D anatomical data of the thoracic pedicles were measured by MSCT in two embalmed human cadaveric thoracic spines (T1-T10) to guide the insertion of pedicle screws. After pulling the screws out, the pathways were filled with contrast media. The PW, PH, TSA, and SSA of the developed pathways were measured on the CT images, and they were also measured on the real objects by caliper and goniometer. Analysis of variance demonstrated that the difference between the CT scans and the real objects had no statistical significance (P > 0.05). Moreover, the difference between the pedicle axis and the developed pathway also had no statistical significance (P > 0.05). The data obtained from 3-D reconstruction of MSCT demonstrated that individualized standards are not only accurate but also helpful for the successful placement of pedicle screws.

  15. Statistical analysis of tire treadwear data

    DOT National Transportation Integrated Search

    1985-03-01

    This report describes the results of a statistical analysis of the treadwear variability of radial tires subjected to the Uniform Tire Quality Grading (UTQG) standard. Because unexplained variability in the treadwear portion of the standard cou...

  16. The effect of antenatal lifestyle advice for women who are overweight or obese on secondary measures of neonatal body composition: the LIMIT randomised trial

    PubMed Central

    Dodd, Jodie M; Deussen, Andrea R; Mohomad, Izyan; Rifas-Shiman, Sheryl L; Yelland, Lisa N; Louise, Jennie; McPhee, Andrew J; Grivell, Rosalie M; Owens, Julie A; Gillman, Matthew W; Robinson, Jeffrey S

    2016-01-01

    Objective: To evaluate the effect of providing antenatal dietary and lifestyle advice on neonatal anthropometry, and to determine the inter-observer variability in obtaining anthropometric measurements. Design: Randomised controlled trial. Setting: Public maternity hospitals across metropolitan Adelaide, South Australia. Population: Pregnant women with a singleton gestation between 10+0 and 20+0 weeks, and body mass index (BMI) ≥25 kg/m2. Methods: Women were randomised to either Lifestyle Advice (a comprehensive dietary and lifestyle intervention over the course of pregnancy, including dietary, exercise, and behavioral strategies, delivered by a research dietician and research assistants) or continued Standard Care. Analyses were conducted using intention-to-treat principles. Main Outcome Measures: Secondary outcome measures for the trial included assessment of infant body composition using body circumference and skinfold thickness measurements (SFTM), percentage body fat, and bio-impedance analysis of fat-free mass. Results: Anthropometric measurements were obtained from 970 neonates (488 Lifestyle Advice Group and 482 Standard Care Group). In 394 of these neonates (215 Lifestyle Advice Group and 179 Standard Care Group), bio-impedance analysis was also obtained. There were no statistically significant differences identified between those neonates born to women receiving Lifestyle Advice and those receiving Standard Care in terms of body circumference measures, SFTM, percentage body fat, fat mass, or fat-free mass. The intra-class correlation coefficient for SFTM was moderate to excellent (ICC 0.55 to 0.88). Conclusions: Among neonates born to women who are overweight or obese, anthropometric measures of body composition were not modified by an antenatal dietary and lifestyle intervention. PMID:26841217

  17. Quantification of heterogeneity observed in medical images.

    PubMed

    Brooks, Frank J; Grigsby, Perry W

    2013-03-02

    There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging modalities. In this work, we motivate and derive a statistical measure of image heterogeneity. This statistic measures the distance-dependent average deviation from the smoothest intensity gradation feasible. We show how this statistic may be used to automatically rank images of in vivo human tumors in order of increasing heterogeneity. We test this method against the current practice of ranking images via expert visual inspection. We find that this statistic provides a means of heterogeneity quantification beyond that given by other statistics traditionally used for the same purpose. We demonstrate the effect of tumor shape upon our ranking method and find the method applicable to a wide variety of clinically relevant tumor images. We find that the automated heterogeneity rankings agree very closely with those performed visually by experts. These results indicate that our automated method may be used reliably to rank, in order of increasing heterogeneity, tumor images whether or not object shape is considered to contribute to that heterogeneity. Automated heterogeneity ranking yields objective results which are more consistent than visual rankings. Reducing variability in image interpretation will enable more researchers to better study potential clinical implications of observed tumor heterogeneity.
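
    The flavor of a distance-dependent deviation statistic can be conveyed with a short, generic stand-in (a variogram-like curve, not the authors' exact measure): for each pixel separation, average the absolute intensity difference between pixel pairs, so that rougher images lift the whole curve.

    ```python
    import numpy as np

    def distance_deviation(img, max_lag=10):
        """Variogram-like curve: mean absolute intensity difference between
        pixels separated by a given lag along rows and columns."""
        curve = []
        for d in range(1, max_lag + 1):
            dx = np.abs(img[:, d:] - img[:, :-d]).mean()
            dy = np.abs(img[d:, :] - img[:-d, :]).mean()
            curve.append((dx + dy) / 2.0)
        return np.array(curve)

    rng = np.random.default_rng(4)
    smooth = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
    rough = smooth + rng.normal(0, 0.2, smooth.shape)
    print(distance_deviation(smooth, 3).round(3))   # low, slowly rising curve
    print(distance_deviation(rough, 3).round(3))    # elevated at all lags
    ```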

  18. Quantifying, displaying and accounting for heterogeneity in the meta-analysis of RCTs using standard and generalised Q statistics

    PubMed Central

    2011-01-01

    Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well known Q and I2 statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed, that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I2 statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
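
    The standard Q and I² statistics discussed above are a few lines of arithmetic. A textbook sketch (not the paper's generalised-Q machinery), with made-up trial effects and variances:

    ```python
    import numpy as np

    theta = np.array([0.12, 0.35, -0.05, 0.28, 0.40])   # trial effect estimates
    var = np.array([0.02, 0.05, 0.03, 0.04, 0.06])      # their variances

    w = 1.0 / var
    theta_fixed = np.sum(w * theta) / np.sum(w)          # fixed-effect estimate
    Q = np.sum(w * (theta - theta_fixed) ** 2)           # Cochran's Q
    df = theta.size - 1
    I2 = max(0.0, (Q - df) / Q) * 100.0                  # I^2 (% heterogeneity)

    print(f"fixed effect = {theta_fixed:.3f}, Q = {Q:.2f} (df = {df}), "
          f"I^2 = {I2:.0f}%")
    ```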

  19. Economic and outcomes consequences of TachoSil®: a systematic review.

    PubMed

    Colombo, Giorgio L; Bettoni, Daria; Di Matteo, Sergio; Grumi, Camilla; Molon, Cinzia; Spinelli, Daniela; Mauro, Gaetano; Tarozzo, Alessia; Bruno, Giacomo M

    2014-01-01

    TachoSil(®) is a medicated sponge coated with human fibrinogen and human thrombin. It is indicated as a support treatment in adult surgery to improve hemostasis, promote tissue sealing, and support sutures when standard surgical techniques are insufficient. This review systematically analyses the international scientific literature relating to the use of TachoSil in hemostasis and as a surgical sealant, from the point of view of its economic impact. We carried out a systematic review of the PubMed literature up to November 2013. Based on the selection criteria, papers were grouped according to the following outcomes: reduction of time to hemostasis; decrease in length of hospital stay; and decrease in postoperative complications. Twenty-four scientific papers were screened, 13 (54%) of which were randomized controlled trials and included a total of 2,116 patients, 1,055 of whom were treated with TachoSil. In the clinical studies carried out in patients undergoing hepatic, cardiac, or renal surgery, the time to hemostasis obtained with TachoSil was lower (1-4 minutes) than the time measured with other techniques and hemostatic drugs, with statistically significant differences. Moreover, in 13 of 15 studies, TachoSil showed a statistically significant reduction in postoperative complications in comparison with the standard surgical procedure. The range of the observed decrease in the length of hospital stay for TachoSil patients was 2.01-3.58 days versus standard techniques, with a statistically significant difference in favor of TachoSil in eight of 15 studies. This analysis shows that TachoSil has a role as a supportive treatment in surgery to improve hemostasis and promote tissue sealing when standard techniques are insufficient, with a consequent decrease in postoperative complications and hospital costs.

  20. A game-based platform for crowd-sourcing biomedical image diagnosis and standardized remote training and education of diagnosticians

    NASA Astrophysics Data System (ADS)

    Feng, Steve; Woo, Minjae; Chandramouli, Krithika; Ozcan, Aydogan

    2015-03-01

    Over the past decade, crowd-sourcing complex image analysis tasks to a human crowd has emerged as an alternative to energy-inefficient and difficult-to-implement computational approaches. Following this trend, we have developed a mathematical framework for statistically combining human crowd-sourcing of biomedical image analysis and diagnosis through games. Using a web-based smart game (BioGames), we demonstrated this platform's effectiveness for telediagnosis of malaria from microscopic images of individual red blood cells (RBCs). After public release in early 2012 (http://biogames.ee.ucla.edu), more than 3000 gamers (experts and non-experts) used this BioGames platform to diagnose over 2800 distinct RBC images, marking them as positive (infected) or negative (non-infected). Furthermore, we asked expert diagnosticians to tag the same set of cells with labels of positive, negative, or questionable (insufficient information for a reliable diagnosis) and statistically combined their decisions to generate a gold standard malaria image library. Our framework utilized minimally trained gamers' diagnoses to generate a set of statistical labels with an accuracy that is within 98% of our gold standard image library, demonstrating the "wisdom of the crowd". Using the same image library, we have recently launched a web-based malaria training and educational game allowing diagnosticians to compare their performance with their peers. After diagnosing a set of ~500 cells per game, diagnosticians can compare their quantified scores against a leaderboard and view their misdiagnosed cells. Using this platform, we aim to expand our gold standard library with new RBC images and provide a quantified digital tool for measuring and improving diagnostician training globally.
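
    One simple way to combine binary crowd diagnoses statistically is a log-odds-weighted vote, where each gamer's vote is weighted by the log-odds of their estimated accuracy. This is a plausible sketch in the spirit of the framework, not the published BioGames model; the accuracies and votes are simulated.

    ```python
    import numpy as np

    def combine_labels(votes, gamer_accuracy):
        """Weighted vote over binary labels (+1 infected / -1 clean).
        votes: (gamers x cells) matrix with entries +1 or -1."""
        w = np.log(gamer_accuracy / (1.0 - gamer_accuracy))  # log-odds weights
        score = w @ votes                                    # weighted sum per cell
        return np.where(score >= 0, 1, -1), score

    rng = np.random.default_rng(5)
    truth = rng.choice([1, -1], size=50)                      # 50 RBC images
    acc = rng.uniform(0.6, 0.95, size=200)                    # 200 gamers
    votes = np.where(rng.random((200, 50)) < acc[:, None],    # right w.p. acc
                     truth, -truth)
    labels, _ = combine_labels(votes, acc)
    print(f"crowd agreement with truth: {(labels == truth).mean():.1%}")
    ```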

  1. Effects of hypotensive anesthesia on blood transfusion rates in craniosynostosis corrections.

    PubMed

    Fearon, Jeffrey A; Cook, T Kevin; Herbert, Morley

    2014-05-01

    Hypotensive anesthesia is routinely used during craniosynostosis corrections to reduce blood loss. Noting that cerebral oxygenation levels often fell below recommended levels, the authors sought to measure the effects of hypotensive versus standard anesthesia on blood transfusion rates. One hundred children undergoing craniosynostosis corrections were randomized prospectively into two groups: a target mean arterial pressure of either 50 mm Hg or 60 mm Hg. Aside from the anesthesiologists, caregivers were blinded, and strict transfusion criteria were followed. Multiple variables were analyzed, and appropriate statistical testing was performed. The hypotensive and standard groups appeared similar, with no statistically significant differences in mean age (46.5 months versus 46.5 months), weight (19.25 kg versus 19.49 kg), procedure [anterior remodeling (34 versus 31) versus posterior (19 versus 16)], or preoperative hemoglobin level (13 g/dl versus 12.9 g/dl). Intraoperative mean arterial pressures differed significantly (56 mm Hg versus 66 mm Hg; p < 0.001). The captured cell saver amount was lower in the hypotensive group (163 cc versus 204 cc; p = 0.02), yet no significant differences were noted in postoperative hemoglobin levels (8.8 g/dl versus 9.3 g/dl). Fifteen of 100 patients (15 percent) received allogenic transfusions, but no statistically significant differences were noted in transfusion rates between the hypotensive [nine of 53 (17.0 percent)] and standard anesthesia [six of 47 (13 percent)] groups (p = 0.056). No significant difference in transfusion requirements was found between hypotensive and standard anesthesia during craniosynostosis corrections. Considering the potential benefits of improved cerebral blood flow and total body perfusion, surgeons might consider performing craniosynostosis corrections without hypotension. Therapeutic, II.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, R; Bai, W

    Purpose: Because of statistical noise in Monte Carlo dose calculations, effective point doses may not be accurate. Volume spheres are useful for evaluating dose in Monte Carlo plans, which have an inherent statistical uncertainty. We use a user-defined sphere volume instead of a point, sampling a sphere around the effective point and averaging the dose statistics to decrease the stochastic errors. Methods: Direct dose measurements were made using a 0.125 cc Semiflex ion chamber (IC) 31010 isocentrically placed in the center of a homogeneous cylindrical sliced RW3 phantom (PTW, Germany). In the scanned CT phantom series, the sensitive volume length of the IC (6.5 mm) was delineated and the isocenter was defined as the simulation effective point. All beams were simulated in Monaco in accordance with the measured model. Our simulations used a 2 mm voxel calculation grid, calculated dose to medium, and requested a relative standard deviation ≤0.5%. Three different assigned IC override densities (air electron density (ED) of 0.01 g/cm3, the default CT-scanned ED, and an esophageal lumen ED of 0.21 g/cm3) were tested at different sampling sphere radii (2.5, 2, 1.5, and 1 mm), and the statistical doses were compared with the measured dose. Results: The results show that in the Monaco TPS, for the IC using an esophageal lumen ED of 0.21 g/cm3 and a sampling sphere radius of 1.5 mm, the statistical value is in the best accordance with the measured value; the absolute average percentage deviation is 0.49%. When the IC uses an air ED of 0.01 g/cm3 or the default CT-scanned ED, the recommended statistical sampling sphere radius is 2.5 mm, with percentage deviations of 0.61% and 0.70%, respectively. Conclusion: In the Monaco treatment planning system, for the ionization chamber 31010 we recommend an air cavity using ED 0.21 g/cm3 and sampling a 1.5 mm sphere volume instead of a point dose to decrease the stochastic errors. Funding Support No. C201505006.
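
    Averaging a dose grid over a sphere around the effective point is straightforward to sketch. The fragment below masks all voxels whose centers lie within the sampling radius of a chosen point and averages them; the grid spacing (2 mm) and a 2.5 mm radius echo the abstract's values, while the dose array and geometry are fabricated.

    ```python
    import numpy as np

    def sphere_mean_dose(dose, center_mm, radius_mm, voxel_mm=2.0):
        """Average a 3-D dose grid over voxels whose centers lie within
        radius_mm of center_mm (coordinates in mm, uniform voxel_mm grid)."""
        idx = np.indices(dose.shape).astype(float)
        coords = (idx + 0.5) * voxel_mm                  # voxel-center positions
        r2 = sum((coords[a] - center_mm[a]) ** 2 for a in range(3))
        mask = r2 <= radius_mm ** 2
        return dose[mask].mean(), int(mask.sum())

    rng = np.random.default_rng(6)
    dose = 2.0 + 0.01 * rng.standard_normal((40, 40, 40))   # noisy MC dose [Gy]
    mean_dose, nvox = sphere_mean_dose(dose, center_mm=(39.0, 39.0, 39.0),
                                       radius_mm=2.5)
    print(f"sphere mean = {mean_dose:.4f} Gy over {nvox} voxels")
    ```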

  3. On Acoustic Source Specification for Rotor-Stator Interaction Noise Prediction

    NASA Technical Reports Server (NTRS)

    Nark, Douglas M.; Envia, Edmane; Burley, Casey L.

    2010-01-01

    This paper describes the use of measured source data to assess the effects of acoustic source specification on rotor-stator interaction noise predictions. Specifically, the acoustic propagation and radiation portions of a recently developed coupled computational approach are used to predict tonal rotor-stator interaction noise from a benchmark configuration. In addition to the use of full measured data, randomization of source mode relative phases is also considered for specification of the acoustic source within the computational approach. Comparisons with sideline noise measurements are performed to investigate the effects of various source descriptions on both inlet and exhaust predictions. The inclusion of additional modal source content is shown to have a much greater influence on the inlet results. Reasonable agreement between predicted and measured levels is achieved for the inlet, as well as the exhaust when shear layer effects are taken into account. For the number of trials considered, phase randomized predictions follow statistical distributions similar to those found in previous statistical source investigations. The shape of the predicted directivity pattern relative to measurements also improved with phase randomization, having predicted levels generally within one standard deviation of the measured levels.
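
    As a toy illustration of the phase-randomization step, the sketch below draws random relative phases for a handful of modal amplitudes, coherently sums them at a single observer location, and reports the spread of the resulting levels. The amplitudes and trial count are invented, and the sketch is not the coupled propagation code used in the paper.

      import numpy as np

      rng = np.random.default_rng(1)
      amps = np.array([1.0, 0.6, 0.4, 0.25])   # hypothetical mode magnitudes
      levels = []
      for _ in range(200):                      # trials with randomized relative phases
          phases = rng.uniform(0.0, 2.0 * np.pi, amps.size)
          p = np.sum(amps * np.exp(1j * phases))   # coherent modal sum at one observer
          levels.append(20.0 * np.log10(np.abs(p)))
      levels = np.asarray(levels)
      print(f"level: {levels.mean():.2f} dB (arbitrary re), std {levels.std(ddof=1):.2f} dB")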

  4. Inference of median difference based on the Box-Cox model in randomized clinical trials.

    PubMed

    Maruo, K; Isogawa, N; Gosho, M

    2015-05-10

    In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. The simulation study that focuses on randomized parallel group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 data in an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
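
    The core idea, transforming to near-normality, estimating the effect there, and back-transforming to a median difference on the original scale, can be sketched as follows. The simulated lognormal arms stand in for real trial data, and this is a simplified version of the paper's estimator (no covariance adjustment or standard error for the median difference).

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)
      y0 = rng.lognormal(mean=5.0, sigma=0.6, size=80)   # control arm (skewed)
      y1 = rng.lognormal(mean=5.3, sigma=0.6, size=80)   # treatment arm (skewed)

      # Estimate a common Box-Cox lambda from the pooled data, transform both arms.
      _, lam = stats.boxcox(np.concatenate([y0, y1]))
      z0 = stats.boxcox(y0, lmbda=lam)
      z1 = stats.boxcox(y1, lmbda=lam)

      def inv_boxcox(z, lam):
          # Inverse transform: x = (lam*z + 1)^(1/lam), or exp(z) when lam == 0.
          return np.exp(z) if lam == 0 else (lam * z + 1.0) ** (1.0 / lam)

      # On the transformed, approximately normal scale the mean equals the median,
      # so back-transformed group means estimate group medians on the original scale.
      med_diff = inv_boxcox(z1.mean(), lam) - inv_boxcox(z0.mean(), lam)
      t, p = stats.ttest_ind(z1, z0)
      print(f"lambda = {lam:.3f}, median difference = {med_diff:.1f}, p = {p:.4f}")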

  5. A two-point diagnostic for the H II galaxy Hubble diagram

    NASA Astrophysics Data System (ADS)

    Leaf, Kyle; Melia, Fulvio

    2018-03-01

    A previous analysis of starburst-dominated H II galaxies and H II regions has demonstrated a statistically significant preference for the Friedmann-Robertson-Walker cosmology with zero active mass, known as the Rh = ct universe, over Λcold dark matter (ΛCDM) and its related dark-matter parametrizations. In this paper, we employ a two-point diagnostic with these data to present a complementary statistical comparison of Rh = ct with Planck ΛCDM. Our two-point diagnostic compares, in a pairwise fashion, the difference between the distance modulus measured at two redshifts with that predicted by each cosmology. Our results support the conclusion drawn by a previous comparative analysis demonstrating that Rh = ct is statistically preferred over Planck ΛCDM. But we also find that the reported errors in the H II measurements may not be purely Gaussian, perhaps due to a partial contamination by non-Gaussian systematic effects. The use of H II galaxies and H II regions as standard candles may be improved even further with a better handling of the systematics in these sources.
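
    A stripped-down version of such a two-point diagnostic can be written in a few lines: for every pair of redshifts, compare the observed difference in distance modulus with each model's prediction, normalized by the combined measurement error. The numbers and the simplified luminosity-distance formulas below are illustrative only (flat models with H0 = 70 km/s/Mpc assumed), not the data or fitting machinery of the paper.

      import numpy as np
      from itertools import combinations
      from scipy.integrate import quad

      # Hypothetical distance moduli mu with errors sig at redshifts z.
      z = np.array([0.1, 0.5, 1.0, 1.5, 2.0])
      mu = np.array([38.3, 42.3, 44.1, 45.2, 45.9])
      sig = np.array([0.30, 0.30, 0.35, 0.40, 0.40])
      c, H0 = 299792.458, 70.0                     # km/s, km/s/Mpc (assumed)

      def mu_model(zz, model):
          if model == "Rh=ct":
              dL = (c / H0) * (1.0 + zz) * np.log(1.0 + zz)
          else:                                    # flat LCDM, Om = 0.3
              E = lambda x: 1.0 / np.sqrt(0.3 * (1.0 + x)**3 + 0.7)
              dL = (c / H0) * (1.0 + zz) * quad(E, 0.0, zz)[0]
          return 5.0 * np.log10(dL) + 25.0

      for model in ("Rh=ct", "LCDM"):
          # Pairwise differences of distance modulus, observed minus predicted,
          # normalized by the combined error of the two measurements.
          pairs = list(combinations(range(z.size), 2))
          devs = np.array([(mu[j] - mu[i]) - (mu_model(z[j], model) - mu_model(z[i], model))
                           for i, j in pairs])
          errs = np.array([np.hypot(sig[i], sig[j]) for i, j in pairs])
          print(model, "mean |normalized deviation| =", round(float(np.mean(np.abs(devs / errs))), 2))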

  6. A Primer on Observational Measurement.

    PubMed

    Girard, Jeffrey M; Cohn, Jeffrey F

    2016-08-01

    Observational measurement plays an integral role in a variety of scientific endeavors within biology, psychology, sociology, education, medicine, and marketing. The current article provides an interdisciplinary primer on observational measurement; in particular, it highlights recent advances in observational methodology and the challenges that accompany such growth. First, we detail the various types of instrument that can be used to standardize measurements across observers. Second, we argue for the importance of validity in observational measurement and provide several approaches to validation based on contemporary validity theory. Third, we outline the challenges currently faced by observational researchers pertaining to measurement drift, observer reactivity, reliability analysis, and time/expense. Fourth, we describe recent advances in computer-assisted measurement, fully automated measurement, and statistical data analysis. Finally, we identify several key directions for future observational research to explore.

  7. Appropriate Statistics for Determining Chance-Removed Interpractitioner Agreement.

    PubMed

    Popplewell, Michael; Reizes, John; Zaslawski, Chris

    2018-05-31

    Fleiss' Kappa (FK) has commonly, but incorrectly, been employed as the "standard" for evaluating chance-removed inter-rater agreement with ordinal data. This practice may lead to misleading conclusions in inter-rater agreement research. An example is presented that demonstrates the conditions where FK produces inappropriate results, compared with Gwet's AC2, which is proposed as a more appropriate statistic. A novel format for recording Chinese Medicine (CM) diagnoses, called the Diagnostic System of Oriental Medicine (DSOM), was used to record and compare patient diagnostic data; unlike the contemporary CM diagnostic format, it allows agreement by chance to be considered when evaluating patient data obtained with unrestricted diagnostic options available to diagnosticians. Five CM practitioners diagnosed 42 subjects drawn from an open population. Subjects' diagnoses were recorded using the DSOM format. All the available data were initially used to evaluate agreement; the subjects were then sorted into three groups to demonstrate the effects of differing data marginality on the calculated chance-removed agreement. Agreement between the practitioners for each subject was evaluated with linearly weighted simple agreement, FK, and Gwet's AC2. In all cases, overall agreement was much lower with FK than with Gwet's AC2, with larger differences occurring when the data were more free-marginal. Inter-rater agreement determined with the FK statistic is unlikely to be correct unless it can be shown that the data from which agreement is determined are, in fact, fixed-marginal; it follows that results obtained on agreement between practitioners with FK are probably incorrect. It is shown that inter-rater agreement evaluated with the AC2 statistic is an appropriate measure when fixed-marginal data are neither expected nor guaranteed. The AC2 statistic should be used as the standard statistical approach for determining agreement between practitioners.
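
    For readers who want to see the two families of statistics side by side, the sketch below computes Fleiss' kappa (via statsmodels) and a hand-rolled Gwet coefficient on a hypothetical subjects-by-category count table. Note that the function implements Gwet's AC1, the unweighted nominal-scale special case, rather than the ordinally weighted AC2 used in the study.

      import numpy as np
      from statsmodels.stats.inter_rater import fleiss_kappa

      # Hypothetical ratings: rows = subjects, columns = counts per category,
      # 5 raters, 3 categories (e.g., CM pattern absent / possible / present).
      table = np.array([
          [5, 0, 0], [4, 1, 0], [0, 5, 0], [0, 4, 1],
          [0, 0, 5], [1, 0, 4], [5, 0, 0], [0, 1, 4],
      ])

      def gwet_ac1(table):
          # Unweighted Gwet AC1 (AC2 reduces to this for nominal categories).
          n, q = table.shape
          r = table.sum(axis=1)                       # raters per subject
          pa = ((table * (table - 1)).sum(axis=1) / (r * (r - 1))).mean()
          pi = (table / r[:, None]).mean(axis=0)      # mean classification probabilities
          pe = (pi * (1 - pi)).sum() / (q - 1)        # AC1 chance agreement
          return (pa - pe) / (1 - pe)

      print("Fleiss kappa:", round(fleiss_kappa(table), 3))
      print("Gwet AC1:   ", round(gwet_ac1(table), 3))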

  8. The Dysexecutive Questionnaire advanced: item and test score characteristics, 4-factor solution, and severity classification.

    PubMed

    Bodenburg, Sebastian; Dopslaff, Nina

    2008-01-01

    The Dysexecutive Questionnaire (DEX; Behavioural Assessment of the Dysexecutive Syndrome, 1996) is a standardized instrument for measuring possible behavioral changes resulting from the dysexecutive syndrome. Although initially intended only as a qualitative instrument, the DEX has also increasingly been used to address quantitative problems. Until now there have been no more fundamental statistical analyses of the questionnaire's testing quality. The present study is based on an unselected sample of 191 patients with acquired brain injury and reports data on the quality of the items, the reliability, and the factorial structure of the DEX. Item 3 displayed too great an item difficulty, whereas item 11 was not sufficiently discriminating. The DEX's reliability in self-rating is r = 0.85. In addition to presenting the statistical values of the tests, a clinical severity classification of the overall scores of the 4 factors found and of the questionnaire as a whole is carried out on the basis of quartile standards.

  9. Mineral production and mining trends for selected non-fuel commodities in Idaho and Montana, 1905-2001

    USGS Publications Warehouse

    Larsen, Jeremy C.; Long, Keith R.; Assmus, Kenneth C.; Zientek, Michael L.

    2004-01-01

    Idaho and Montana state mining statistics were obtained from historical mineral production records and compiled into a continuous record from 1905 through 2001. To facilitate comparisons, the mineral production data were normalized by converting the units of measure to metric tons for all included commodities. These standardized statistical data include production rates for principal non-fuel mineral commodities from both Idaho and Montana, as well as the production rates of similar commodities for the U.S. and the world for contrast. Data are presented here in both tabular and bar chart format. Moreover, the tables of standardized mineral production data are also provided in digital format as commodity_production.xls. Some significant historical events pertaining to the mining industry are described as well. When taken into account with the historical production data, this combined information may help explain both specific fluctuations and general tendencies in the overall trends in the rates of mineral resource production over time.

  10. A New Approach of Juvenile Age Estimation using Measurements of the Ilium and Multivariate Adaptive Regression Splines (MARS) Models for Better Age Prediction.

    PubMed

    Corron, Louise; Marchal, François; Condemi, Silvana; Chaumoître, Kathia; Adalian, Pascal

    2017-01-01

    Juvenile age estimation methods used in forensic anthropology generally lack methodological consistency and/or statistical validity. Considering this, a standard approach using nonparametric Multivariate Adaptive Regression Splines (MARS) models was tested to predict age from iliac biometric variables of male and female juveniles from Marseilles, France, aged 0-12 years. Models using unidimensional (length and width) and bidimensional iliac data (module and surface) were constructed on a training sample of 176 individuals and validated on an independent test sample of 68 individuals. Results show that MARS prediction models using iliac width, module, and area give overall better and statistically valid age estimates. These models integrate punctual nonlinearities of the relationship between age and osteometric variables. By constructing valid prediction intervals whose size increases with age, MARS models take into account the normal increase of individual variability. MARS models can qualify as a practical and standardized approach for juvenile age estimation. © 2016 American Academy of Forensic Sciences.
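
    A minimal MARS sketch is shown below, assuming the availability of the third-party py-earth package (MARS is not part of scikit-learn itself, so this dependency is an assumption). The ilium-width/age data are simulated with a single hinge-like bend, and the variable names and coefficients are purely illustrative.

      import numpy as np
      from pyearth import Earth   # third-party MARS implementation (assumed installed)

      rng = np.random.default_rng(3)
      # Simulated training data: ilium width (mm) vs. age (years) with one bend,
      # the kind of punctual nonlinearity MARS hinge functions capture.
      width = rng.uniform(30.0, 120.0, size=176)
      age = np.clip(0.16 * (width - 30.0) - 0.08 * np.maximum(width - 90.0, 0.0)
                    + rng.normal(0.0, 0.8, size=176), 0.0, None)

      model = Earth(max_degree=1)                 # additive hinge functions only
      model.fit(width.reshape(-1, 1), age)
      print(model.summary())                      # selected knots and basis functions
      print("predicted age at width 75 mm:", model.predict(np.array([[75.0]]))[0])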

  11. Development of a statistical method to help evaluating the transparency/opacity of decorative thin films

    NASA Astrophysics Data System (ADS)

    da Silva Oliveira, C. I.; Martinez-Martinez, D.; Al-Rjoub, A.; Rebouta, L.; Menezes, R.; Cunha, L.

    2018-04-01

    In this paper, we present a statistical method that allows evaluation of the degree of transparency of a thin film. To do so, the color coordinates are measured on different substrates, and the standard deviation is evaluated. In the case of low values, the color depends on the film and not on the substrate, and intrinsic colors are obtained. In contrast, transparent films lead to high values of the standard deviation, since the color coordinates depend on the substrate. Between both extremes, colored films with a certain degree of transparency can be found. This method allows an objective and simple evaluation of the transparency of any film, improving on subjective visual inspection and avoiding the thickness problems related to optical spectroscopy evaluation. Zirconium oxynitride films deposited on three different substrates (Si, steel, and glass) are used to test the validity of this method, whose results have been validated with optical spectroscopy and agree with the visual impression of the samples.
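
    The decision rule reduces to a standard deviation across substrates, which the following sketch makes concrete. The CIELAB values and the 2.0 cutoff are invented for illustration and are not thresholds from the paper.

      import numpy as np

      # Hypothetical CIELAB coordinates of the same film measured on three
      # substrates; rows = substrates, columns = (L*, a*, b*).
      lab = np.array([
          [62.1, 4.3, 12.8],   # Si
          [61.7, 4.1, 13.1],   # steel
          [62.4, 4.6, 12.5],   # glass
      ])

      # Standard deviation of each color coordinate across substrates: low values
      # mean the color is intrinsic to the film (opaque); high values mean the
      # substrate shows through (transparent).
      std = lab.std(axis=0, ddof=1)
      print("std(L*, a*, b*) =", np.round(std, 2))
      print("film looks", "opaque" if std.max() < 2.0 else "semi-transparent/transparent")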

  12. [Comparison of the effect of different diagnostic criteria of subclinical hypothyroidism and positive TPO-Ab on pregnancy outcomes].

    PubMed

    He, Yiping; He, Tongqiang; Wang, Yanxia; Xu, Zhao; Xu, Yehong; Wu, Yiqing; Ji, Jing; Mi, Yang

    2014-11-01

    To explore the effect of different diagnostic criteria for subclinical hypothyroidism, based on thyroid stimulating hormone (TSH) levels and positive thyroid peroxidase antibodies (TPO-Ab), on pregnancy outcomes. A total of 3 244 pregnant women who had their antenatal care and delivered in the Child and Maternity Health Hospital of Shaanxi Province from August 2011 to February 2013 were recruited prospectively. According to the standard of the American Thyroid Association (ATA), pregnant women with normal serum free thyroxine (FT4) and a serum TSH level > 2.50 mU/L were diagnosed with subclinical hypothyroidism in pregnancy (foreign standard group). According to the Guideline for Diagnosis and Therapy of Prenatal and Postpartum Thyroid Disease issued by the Chinese Society of Endocrinology and the Chinese Society of Perinatal Medicine in 2012, pregnant women with a serum TSH level > 5.76 mU/L and normal FT4 were diagnosed with subclinical hypothyroidism in pregnancy (national standard group). Pregnant women with subclinical hypothyroidism whose serum TSH levels were between 2.50 and 5.76 mU/L formed the study observed group, and pregnant women with a serum TSH level < 2.50 mU/L and negative TPO-Ab served as the control group. Positive TPO-Ab results and pregnancy outcomes were analyzed. (1) There were 635 cases in the foreign standard group, an incidence of 19.57% (635/3 244), and 70 cases in the national standard group, an incidence of 2.16% (70/3 244); the difference between the two groups was statistically significant (P < 0.01). There were 565 cases in the study observed group, an incidence of 17.42% (565/3 244); the difference was statistically significant compared with the national standard group (P < 0.01) but not compared with the foreign standard group (P > 0.05). (2) Among the 3 244 cases, 402 had positive TPO-Ab. Of these, 318 were in the foreign standard group, giving an incidence of subclinical hypothyroidism of 79.10% (318/402) among TPO-Ab-positive women, versus 317 TPO-Ab-negative cases, an incidence of 11.15% (317/2 842); the difference was statistically significant (P < 0.01). In the national standard group, 46 cases had positive TPO-Ab, an incidence of 11.44% (46/402), versus 24 negative cases, an incidence of 0.84% (24/2 842), a statistically significant difference (P < 0.01). In the study observed group, 272 cases were TPO-Ab positive, an incidence of 67.66% (272/402), versus 293 negative cases, an incidence of 10.31% (293/2 842), a statistically significant difference (P < 0.01). (3) The incidences of miscarriage, premature delivery, gestational hypertension disease, and gestational diabetes mellitus (GDM) in the foreign standard group differed significantly from those in the control group (P < 0.05), whereas the incidences of placental abruption and fetal distress did not (P > 0.05). The same pattern held for the national standard group: miscarriage, premature delivery, gestational hypertension disease, and GDM differed significantly from the control group (P < 0.05), while placental abruption and fetal distress did not (P > 0.05).
In the study observed group, the incidences of miscarriage, gestational hypertension disease, and GDM differed significantly from those in the control group (P < 0.05), whereas preterm labor, placental abruption, and fetal distress did not (P > 0.05). (4) The incidences of miscarriage, premature delivery, gestational hypertension disease, GDM, placental abruption, and fetal distress in the TPO-Ab-positive cases of the national standard group showed an increasing trend compared with the TPO-Ab-negative cases, but without statistical significance (P > 0.05). In the study observed group, the incidences of gestational hypertension disease and GDM were significantly higher in TPO-Ab-positive than in TPO-Ab-negative cases (P < 0.05), while miscarriage, premature birth, placental abruption, and fetal distress showed no significant differences (P > 0.05). In the foreign standard group, the incidences of gestational hypertension disease and GDM also differed significantly between TPO-Ab-positive and TPO-Ab-negative cases (P < 0.05). Conclusions: (1) The incidence of subclinical hypothyroidism is rather high during early pregnancy and can lead to adverse pregnancy outcomes. (2) A positive TPO-Ab result has important predictive value for thyroid dysfunction and GDM. (3) The ATA diagnostic standard (serum TSH level > 2.50 mU/L) is relatively safer for antenatal care; the national standard (serum TSH level > 5.76 mU/L) is not conducive to pregnancy management.

  13. Comparison of air-kerma strength determinations for HDR (192)Ir sources.

    PubMed

    Rasmussen, Brian E; Davis, Stephen D; Schmidt, Cal R; Micka, John A; Dewerd, Larry A

    2011-12-01

    To perform a comparison of the interim air-kerma strength standard for high dose rate (HDR) (192)Ir brachytherapy sources maintained by the University of Wisconsin Accredited Dosimetry Calibration Laboratory (UWADCL) with measurements of the various source models using modified techniques from the literature. The current interim standard was established by Goetsch et al. in 1991 and has remained unchanged to date. The improved, laser-aligned seven-distance apparatus of the University of Wisconsin Medical Radiation Research Center (UWMRRC) was used to perform air-kerma strength measurements of five different HDR (192)Ir source models. The results of these measurements were compared with those from well chambers traceable to the original standard. Alternative methodologies for interpolating the (192)Ir air-kerma calibration coefficient from the NIST air-kerma standards at (137)Cs and 250 kVp x rays (M250) were investigated and intercompared. As part of the interpolation method comparison, the Monte Carlo code EGSnrc was used to calculate updated values of A(wall) for the Exradin A3 chamber used for air-kerma strength measurements. The effects of air attenuation and scatter, room scatter, as well as the solution method were investigated in detail. The average measurements when using the inverse N(K) interpolation method for the Classic Nucletron, Nucletron microSelectron, VariSource VS2000, GammaMed Plus, and Flexisource were found to be 0.47%, -0.10%, -1.13%, -0.20%, and 0.89% different from the existing standard, respectively. A further investigation of the differences observed between the sources was performed using MCNP5 Monte Carlo simulations of each source model inside a full model of an HDR 1000 Plus well chamber. Although the differences between the source models were found to be statistically significant, the equally weighted average difference between the seven-distance measurements and the well chambers was 0.01%, confirming that it is not necessary to update the current standard maintained at the UWADCL.

  14. Statistical considerations for grain-size analyses of tills

    USGS Publications Warehouse

    Jacobs, A.M.

    1971-01-01

    Relative percentages of sand, silt, and clay from samples of the same till unit are not identical because of different lithologies in the source areas, sorting in transport, random variation, and experimental error. Random variation and experimental error can be isolated from the other two as follows. For each particle-size class of each till unit, a standard population is determined by using a normally distributed, representative group of data. New measurements are compared with the standard population and, if they compare satisfactorily, the experimental error is not significant and random variation is within the expected range for the population. The outcome of the comparison depends on numerical criteria derived from a graphical method rather than on a more commonly used one-way analysis of variance with two treatments. If the number of samples and the standard deviation of the standard population are substituted in a t-test equation, a family of hyperbolas is generated, each of which corresponds to a specific number of subsamples taken from each new sample. The axes of the graphs of the hyperbolas are the standard deviation of new measurements (horizontal axis) and the difference between the means of the new measurements and the standard population (vertical axis). The area between the two branches of each hyperbola corresponds to a satisfactory comparison between the new measurements and the standard population. Measurements from a new sample can be tested by plotting their standard deviation vs. difference in means on axes containing a hyperbola corresponding to the specific number of subsamples used. If the point lies between the branches of the hyperbola, the measurements are considered reliable. But if the point lies outside this region, the measurements are repeated. Because the critical segment of the hyperbola is approximately a straight line parallel to the horizontal axis, the test is simplified to a comparison between the means of the standard population and the means of the subsample. The minimum number of subsamples required to prove significant variation between samples caused by different lithologies in the source areas and sorting in transport can be determined directly from the graphical method. The minimum number of subsamples required is the maximum number to be run for economy of effort. © 1971 Plenum Publishing Corporation.
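
    The numerical core of the method, comparing new subsample measurements against a standard population with a two-sample t-test, can be sketched as follows. The population parameters and subsample values are hypothetical.

      import numpy as np
      from scipy import stats

      # Standard population for one particle-size class of a till unit (% sand),
      # and subsample measurements from a new sample (all values hypothetical).
      mu0, sigma0, N = 42.0, 3.5, 60
      new = np.array([44.1, 45.0, 43.2, 46.3, 44.8])

      # Two-sample t-test of the new subsamples against the standard population;
      # acceptance corresponds to the region between the branches of the hyperbola.
      n = new.size
      sp2 = ((N - 1) * sigma0**2 + (n - 1) * new.var(ddof=1)) / (N + n - 2)
      t = (new.mean() - mu0) / np.sqrt(sp2 * (1.0 / N + 1.0 / n))
      t_crit = stats.t.ppf(0.975, df=N + n - 2)
      verdict = "comparison satisfactory" if abs(t) < t_crit else "repeat the measurements"
      print(f"t = {t:.2f}, critical = +/-{t_crit:.2f}: {verdict}")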

  15. Statistics and the Question of Standards

    PubMed Central

    Stigler, Stephen M.

    1996-01-01

    This is a written version of a memorial lecture given in honor of Churchill Eisenhart at the National Institute of Standards and Technology on May 5, 1995. The relationship and the interplay between statistics and standards over the past centuries are described. Historical examples are presented to illustrate mutual dependency and development in the two fields. PMID:27805077

  16. Bayesian correction for covariate measurement error: A frequentist evaluation and comparison with regression calibration.

    PubMed

    Bartlett, Jonathan W; Keogh, Ruth H

    2018-06-01

    Bayesian approaches for handling covariate measurement error are well established and yet arguably are still relatively little used by researchers. For some this is likely due to unfamiliarity or disagreement with the Bayesian inferential paradigm. For others a contributory factor is the inability of standard statistical packages to perform such Bayesian analyses. In this paper, we first give an overview of the Bayesian approach to handling covariate measurement error, and contrast it with regression calibration, arguably the most commonly adopted approach. We then argue why the Bayesian approach has a number of statistical advantages compared to regression calibration and demonstrate that implementing the Bayesian approach is usually quite feasible for the analyst. Next, we describe the closely related maximum likelihood and multiple imputation approaches and explain why we believe the Bayesian approach to generally be preferable. We then empirically compare the frequentist properties of regression calibration and the Bayesian approach through simulation studies. The flexibility of the Bayesian approach to handle both measurement error and missing data is then illustrated through an analysis of data from the Third National Health and Nutrition Examination Survey.
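
    Regression calibration, the comparator method discussed above, is simple enough to sketch in a few lines: regress the outcome on an estimate of E[X|W] rather than on the error-prone W itself. The error variance below is assumed known (in practice it would come from replicates or validation data), and the sketch does not implement the Bayesian approach the paper advocates.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      n = 500
      x = rng.normal(0.0, 1.0, n)            # true covariate (unobserved)
      w = x + rng.normal(0.0, 0.7, n)        # error-prone measurement
      y = 1.0 + 2.0 * x + rng.normal(0.0, 1.0, n)

      # Naive regression of y on w is attenuated toward zero.
      naive = sm.OLS(y, sm.add_constant(w)).fit()

      # Regression calibration: replace w by E[X|W], here from the moments of w
      # given a known measurement-error variance of 0.7^2.
      lam = (w.var(ddof=1) - 0.7**2) / w.var(ddof=1)   # reliability ratio
      x_hat = w.mean() + lam * (w - w.mean())
      rc = sm.OLS(y, sm.add_constant(x_hat)).fit()

      print("true slope 2.0 | naive:", round(naive.params[1], 2),
            "| regression calibration:", round(rc.params[1], 2))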

  17. Quality Space and Launch Requirements Addendum to AS9100C

    DTIC Science & Technology

    2015-03-05

    Fragmentary text extracted from the document (table-of-contents and acronym-list residue). Recoverable content: Section 8.9.1 covers Statistical Process Control (SPC), including out-of-control conditions (8.9.1.1); control limits are developed using standard statistical methods or other approved techniques and are based on individual data exceeding the control limits. Acronyms defined include SME (Subject Matter Expert), SOW (Statement of Work), SPC (Statistical Process Control), SPO (System Program Office), and SRP (Standard Repair...).

  18. [The evaluation of costs: standards of medical care and clinical statistic groups].

    PubMed

    Semenov, V Iu; Samorodskaia, I V

    2014-01-01

    The article presents a comparative analysis of techniques for evaluating the costs of hospital treatment using medical-economic standards of medical care and clinical statistical groups. The technique of cost evaluation on the basis of clinical statistical groups was developed almost fifty years ago and is widely applied in a number of countries. Nowadays, in Russia payment per completed case of treatment on the basis of medical-economic standards is the main mode of payment for hospital care; it is only loosely a Russian analogue of the internationally prevalent system of diagnosis-related groups. The tariffs for these cases of treatment, as opposed to clinical statistical groups, are calculated on the basis of standards of provision of medical care approved by the Minzdrav of Russia; information derived from generalizing the treatment of real patients is not used.

  19. Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Arihood, Leslie D.

    2010-01-01

    This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 5, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features.
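
    The two-step structure of the report's method, a logistic regression for whether the low-flow statistic is zero and a regression for its magnitude when it is not, can be sketched as follows on synthetic data. The basin characteristics, coefficients, and the use of ordinary rather than weighted least squares are all simplifications.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(5)
      n = 121
      area = rng.lognormal(4.0, 1.0, n)      # drainage area (hypothetical units)
      svi = rng.uniform(0.2, 1.0, n)         # streamflow-variability index

      # Synthetic truth: high-variability basins more often have a zero 7Q10.
      p_zero = 1.0 / (1.0 + np.exp(-(-4.0 + 6.0 * svi)))
      is_zero = rng.random(n) < p_zero
      q = np.where(is_zero, 0.0,
                   0.05 * area * np.exp(-2.0 * svi) * rng.lognormal(0.0, 0.3, n))

      X = sm.add_constant(np.column_stack([np.log(area), svi]))
      # Step 1: logistic regression for the probability the statistic is zero.
      logit = sm.Logit(is_zero.astype(float), X).fit(disp=0)
      # Step 2: regression for the magnitude of the nonzero flows (log space);
      # the report uses weighted least squares, plain OLS shown here.
      nz = ~is_zero
      ols = sm.OLS(np.log(q[nz]), X[nz]).fit()
      print("logit coefficients:    ", np.round(logit.params, 2))
      print("magnitude coefficients:", np.round(ols.params, 2))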

  20. Determination of the conversion gain and the accuracy of its measurement for detector elements and arrays

    NASA Astrophysics Data System (ADS)

    Beecken, B. P.; Fossum, E. R.

    1996-07-01

    Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine what dependence the accuracy of measured noise has on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
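
    The standard mean-variance (photon-transfer) estimate of conversion gain, and its dependence on sample size, can be illustrated with a short simulation. The gain value and signal level below are arbitrary, and the estimator shown is the textbook shot-noise-limited one rather than the paper's full derivation.

      import numpy as np

      rng = np.random.default_rng(6)
      # Shot-noise-limited flat-field signal with true conversion gain 2.0 e-/DN.
      K_true, mean_e = 2.0, 8000.0
      for n_samples in (10, 100, 1000):
          e = rng.poisson(mean_e, size=n_samples)      # collected electrons
          dn = e / K_true                               # digitized signal (DN)
          K_hat = dn.mean() / dn.var(ddof=1)            # mean/variance estimate
          print(f"n = {n_samples:5d}: K estimate = {K_hat:.3f} e-/DN")
      # The spread of the estimates shrinks roughly as 1/sqrt(n), the sample-size
      # dependence of measurement accuracy analyzed in the paper.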

  1. Measurement-device-independent entanglement-based quantum key distribution

    NASA Astrophysics Data System (ADS)

    Yang, Xiuqing; Wei, Kejin; Ma, Haiqiang; Sun, Shihai; Liu, Hongwei; Yin, Zhenqiang; Li, Zuohan; Lian, Shibin; Du, Yungang; Wu, Lingan

    2016-05-01

    We present a quantum key distribution protocol in a model in which the legitimate users gather statistics as in the measurement-device-independent entanglement witness to certify the sources and the measurement devices. We show that the task of measurement-device-independent quantum communication can be accomplished based on the monogamy of entanglement, and that it is fairly loss-tolerant with respect to source and detector flaws. We derive a tight bound for collective attacks on the Holevo information between the authorized parties and the eavesdropper. With this bound, the final secret key rate in the presence of source flaws can be obtained. The results show that long-distance quantum cryptography over 144 km can be made secure using only standard threshold detectors.

  2. Statistical properties of a utility measure of observer performance compared to area under the ROC curve

    NASA Astrophysics Data System (ADS)

    Abbey, Craig K.; Samuelson, Frank W.; Gallas, Brandon D.; Boone, John M.; Niklason, Loren T.

    2013-03-01

    The receiver operating characteristic (ROC) curve has become a common tool for evaluating diagnostic imaging technologies, and the primary endpoint of such evaluations is the area under the curve (AUC), which integrates sensitivity over the entire false-positive range. An alternative figure of merit for ROC studies is expected utility (EU), which focuses on the relevant region of the ROC curve as defined by disease prevalence and the relative utility of the task. However, if this measure is to be used, it must also have desirable statistical properties to keep the burden of observer performance studies as low as possible. Here, we evaluate effect size and variability for EU and AUC. We use two observer performance studies recently submitted to the FDA to compare the EU and AUC endpoints. The studies were conducted using the multi-reader multi-case methodology in which all readers score all cases in all modalities. ROC curves from the study were used to generate both the AUC and EU values for each reader and modality. The EU measure was computed assuming an iso-utility slope of 1.03. We find mean effect sizes, the reader-averaged difference between modalities, to be roughly 2.0 times as large for EU as for AUC. The standard deviation across readers is roughly 1.4 times as large, suggesting better statistical properties for the EU endpoint. In a simple power analysis of paired comparison across readers, the utility measure required 36% fewer readers on average to achieve 80% statistical power compared to AUC.
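
    To make the two endpoints concrete, the sketch below builds an empirical ROC curve from simulated reader scores and computes both AUC and an expected-utility-style figure of merit. The EU formula used here, the best achievable TPF − 1.03·FPF over the curve, is one common formulation and is assumed, not taken from the paper.

      import numpy as np

      rng = np.random.default_rng(7)
      neg = rng.normal(0.0, 1.0, 500)        # scores, disease-absent cases
      pos = rng.normal(1.2, 1.0, 500)        # scores, disease-present cases

      # AUC via the Mann-Whitney identity: P(score_pos > score_neg).
      auc = (pos[:, None] > neg[None, :]).mean()

      # Empirical ROC operating points over all decision thresholds.
      thresholds = np.unique(np.concatenate([neg, pos]))
      fpf = np.array([(neg >= t).mean() for t in thresholds])
      tpf = np.array([(pos >= t).mean() for t in thresholds])

      # Expected-utility-style figure of merit at iso-utility slope 1.03.
      eu = np.max(tpf - 1.03 * fpf)
      print(f"AUC = {auc:.3f}, EU(slope 1.03) = {eu:.3f}")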

  3. Measurements of Γ(Z⁰ → bb̄)/Γ(Z⁰ → hadrons) using the SLD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neal, H.A. Jr. II

    1995-07-01

    The quantity R_b = Γ(Z⁰ → bb̄)/Γ(Z⁰ → hadrons) is a sensitive measure of corrections to the Zbb̄ vertex. The precision necessary to observe the top quark mass dependent corrections is close to being achieved; LEP is already observing a 1.8σ deviation from the Standard Model prediction. Knowledge of the top quark mass combined with the observation of deviations from the Standard Model prediction would indicate new physics. Models which include charged Higgs or light SUSY particles yield predictions for R_b appreciably different from the Standard Model. In this thesis two independent methods are used to measure R_b. One uses a general event tag which determines R_b from the rate at which events are tagged as Z⁰ → bb̄ in data and the estimated rates at which various flavors of events are tagged from the Monte Carlo. The second method reduces the reliance on the Monte Carlo by separately tagging each hemisphere as containing a b-decay. The rates of single-hemisphere-tagged events and both-hemisphere-tagged events are used to determine the tagging efficiency for b-quarks directly from the data, thus eliminating the main sources of systematic error present in the event tag. Both measurements take advantage of the unique environment provided by the SLAC Linear Collider (SLC) and the SLAC Large Detector (SLD). From the event tag a result of R_b = 0.230 ± 0.004 (stat) ± 0.013 (syst) is obtained. The higher-precision hemisphere tag result obtained is R_b = 0.218 ± 0.004 (stat) ± 0.004 (syst) ± 0.003 (R_c).

  4. Experience with the use of a partial ossicular replacement prosthesis with a ball-and-socket joint between the plate and the shaft.

    PubMed

    Birk, Stephanie; Brase, Christoph; Hornung, Joachim

    2014-08-01

    In the further development of alloplastic prostheses for use in middle ear surgery, the Dresden and Cologne University Hospitals, working together with a company, introduced a new partial ossicular replacement prosthesis in 2011. The ball-and-socket joint between the plate and the shaft mimics the natural articulations between the malleus and incus and between the incus and stapes, allowing the prosthesis to react to movements of the tympanic membrane graft, particularly during the healing process. Retrospective evaluation. To reconstruct sound conduction as part of a type III tympanoplasty, the partial ossicular replacement prosthesis with a ball-and-socket joint between the plate and the shaft was implanted in 60 patients; two other standard partial ossicular replacement prostheses were implanted in 40 and 64 patients, respectively. Pure-tone audiometry was carried out, on average, 19 and 213 days after surgery. Results of the partial ossicular replacement prosthesis with a ball-and-socket joint were compared with those of the standard prostheses. Early measurements showed a mean improvement of 3.3 dB in the air-bone gap (ABG) with the ball-and-socket prosthesis, against 6.6 and 6.0 dB, respectively, for the standard implants; the differences were not statistically significant. Later measurements showed a statistically significant improvement in the mean ABG of 11.5 dB, compared with 4.4 dB for one of the standard partial ossicular replacement prostheses, and a tendency toward better results compared with the 6.9 dB of the other standard prosthesis. In our patients, we achieved audiometric results similarly good to those already published for the partial ossicular replacement prosthesis with a ball-and-socket joint between the plate and the shaft. Intraoperative fixation posed no problems, and the postoperative complication rate was low.

  5. Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus

    PubMed Central

    Giacomini, Guilherme; Pavan, Ana Luiza Menegatti; Altemani, João Mauricio Carrasco; Duarte, Sergio Barbosa; Fortaleza, Carlos Magno Castelo Branco; Miranda, José Ricardo de Arruda

    2018-01-01

    Volume measurements of maxillary sinus may be useful to identify diseases affecting paranasal sinuses. However, literature shows a lack of consensus in studies measuring the volume. This may be attributable to different computed tomography data acquisition techniques, segmentation methods, focuses of investigation, among other reasons. Furthermore, methods for volumetrically quantifying the maxillary sinus are commonly manual or semiautomated, which require substantial user expertise and are time-consuming. The purpose of the present study was to develop an automated tool for quantifying the total and air-free volume of the maxillary sinus based on computed tomography images. The quantification tool seeks to standardize maxillary sinus volume measurements, thus allowing better comparisons and determinations of factors that influence maxillary sinus size. The automated tool utilized image processing techniques (watershed, threshold, and morphological operators). The maxillary sinus volume was quantified in 30 patients. To evaluate the accuracy of the automated tool, the results were compared with manual segmentation that was performed by an experienced radiologist using a standard procedure. The mean percent differences between the automated and manual methods were 7.19% ± 5.83% and 6.93% ± 4.29% for total and air-free maxillary sinus volume, respectively. Linear regression and Bland-Altman statistics showed good agreement and low dispersion between both methods. The present automated tool for maxillary sinus volume assessment was rapid, reliable, robust, accurate, and reproducible and may be applied in clinical practice. The tool may be used to standardize measurements of maxillary volume. Such standardization is extremely important for allowing comparisons between studies, providing a better understanding of the role of the maxillary sinus, and determining the factors that influence maxillary sinus size under normal and pathological conditions. PMID:29304130
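
    The named image-processing steps (threshold, morphological operators, watershed) can be strung together with scipy/scikit-image as in the following sketch. The synthetic slice, the Otsu threshold, and the marker-finding parameters are stand-ins, not the tool's actual pipeline or settings.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.filters import threshold_otsu
      from skimage.feature import peak_local_max
      from skimage.segmentation import watershed

      rng = np.random.default_rng(8)
      # Synthetic 2-D "CT slice" in HU: a tissue block containing an air cavity.
      ct = -1000.0 + rng.normal(0.0, 20.0, (128, 128))
      ct[30:90, 20:60] = 50.0 + rng.normal(0.0, 20.0, (60, 40))   # soft tissue
      ct[45:75, 30:50] = -950.0                                   # air-filled "sinus"

      air = ct < threshold_otsu(ct)                 # threshold air vs. tissue
      air = ndi.binary_opening(air, iterations=2)   # morphological cleanup
      dist = ndi.distance_transform_edt(air)
      peaks = peak_local_max(dist, labels=air.astype(int), min_distance=10)
      markers = np.zeros(ct.shape, dtype=int)
      markers[tuple(peaks.T)] = np.arange(1, peaks.shape[0] + 1)
      labels = watershed(-dist, markers, mask=air)  # split touching air regions
      # Note: in this toy slice the air outside the "body" is segmented too.
      print("air regions:", labels.max(), "pixel counts:", np.bincount(labels.ravel())[1:])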

  6. Multifractal Properties of Process Control Variables

    NASA Astrophysics Data System (ADS)

    Domański, Paweł D.

    2017-06-01

    A control system is an inevitable element of any industrial installation, and its quality significantly affects overall process performance. Assessing whether a control system needs improvement requires relevant and constructive measures. Various methods exist, such as time-domain indices, minimum-variance measures, Gaussian and non-Gaussian statistical factors, and fractal and entropy indexes. The majority of approaches use time series of control variables and are able to cover many phenomena, but process complexities and human interventions cause effects that are hardly visible to standard measures. It is shown that signals originating from industrial installations have multifractal properties, and such an analysis may extend the standard approach to further observations. The work is based on industrial and simulation data. The analysis delivers additional insight into the properties of the control system and the process, and helps to discover internal dependencies and human factors that are otherwise hardly detectable.

  7. Development and status of data quality assurance program at NASA Langley research center: Toward national standards

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    1996-01-01

    As part of a continuing effort to re-engineer the wind tunnel testing process, a comprehensive data quality assurance program is being established at NASA Langley Research Center (LaRC). The ultimate goal of the program is the routine provision of tunnel-to-tunnel reproducibility with total uncertainty levels acceptable for test and evaluation of civilian transports. The operational elements for reaching such levels of reproducibility are: (1) statistical control, which provides long-term measurement uncertainty predictability and a base for continuous improvement, (2) measurement uncertainty prediction, which provides test designs that can meet data quality expectations within the system's predictable variation, and (3) national standards, which provide a means for resolving tunnel-to-tunnel differences. The paper presents the LaRC design for the program and discusses the process of implementation.

  8. Vectorcardiographic changes during extended space flight (M093): Observations at rest and during exercise

    NASA Technical Reports Server (NTRS)

    Smith, R. F.; Stanton, K.; Stoop, D.; Brown, D.; Janusz, W.; King, P.

    1977-01-01

    The objectives of Skylab Experiment M093 were to measure electrocardiographic signals during space flight, to elucidate the electrophysiological basis for the changes observed, and to assess the effect of the change on the human cardiovascular system. Vectorcardiographic methods were used to quantitate changes, standardize data collection, and facilitate reduction and statistical analysis of data. Since the Skylab missions provided a unique opportunity to study the effects of prolonged weightlessness on human subjects, an effort was made to construct a data base that contained measurements taken with precision and in adequate number to enable conclusions to be made with a high degree of confidence. Standardized exercise loads were incorporated into the experiment protocol to increase the sensitivity of the electrocardiogram to effects of deconditioning and to detect susceptibility to arrhythmias.

  9. Hospital performance measures and 30-day readmission rates.

    PubMed

    Stefan, Mihaela S; Pekow, Penelope S; Nsa, Wato; Priya, Aruna; Miller, Lauren E; Bratzler, Dale W; Rothberg, Michael B; Goldberg, Robert J; Baus, Kristie; Lindenauer, Peter K

    2013-03-01

    Lowering hospital readmission rates has become a primary target for the Centers for Medicare & Medicaid Services, but studies of the relationship between adherence to the recommended hospital care processes and readmission rates have provided inconsistent and inconclusive results. To examine the association between hospital performance on Medicare's Hospital Compare process quality measures and 30-day readmission rates for patients with acute myocardial infarction (AMI), heart failure and pneumonia, and for those undergoing major surgery. We assessed hospital performance on process measures using the 2007 Hospital Inpatient Quality Reporting Program. The process measures for each condition were aggregated in two separate measures: Overall Measure (OM) and Appropriate Care Measure (ACM) scores. Readmission rates were calculated using Medicare claims. Risk-standardized 30-day all-cause readmission rate was calculated as the ratio of predicted to expected rate standardized by the overall mean readmission rate. We calculated predicted readmission rate using hierarchical generalized linear models and adjusting for patient-level factors. Among patients aged ≥ 66 years, the median OM score ranged from 79.4 % for abdominal surgery to 95.7 % for AMI, and the median ACM scores ranged from 45.8 % for abdominal surgery to 87.9 % for AMI. We observed a statistically significant, but weak, correlation between performance scores and readmission rates for pneumonia (correlation coefficient R = 0.07), AMI (R = 0.10), and orthopedic surgery (R = 0.06). The difference in the mean readmission rate between hospitals in the 1st and 4th quartiles of process measure performance was statistically significant only for AMI (0.25 percentage points) and pneumonia (0.31 percentage points). Performance on process measures explained less than 1 % of hospital-level variation in readmission rates. Hospitals with greater adherence to recommended care processes did not achieve meaningfully better 30-day hospital readmission rates compared to those with lower levels of performance.

  10. In vivo evaluation of the effect of stimulus distribution on FIR statistical efficiency in event-related fMRI.

    PubMed

    Jansma, J Martijn; de Zwart, Jacco A; van Gelderen, Peter; Duyn, Jeff H; Drevets, Wayne C; Furey, Maura L

    2013-05-15

    Technical developments in MRI have improved signal to noise, allowing the use of analysis methods such as finite impulse response (FIR) analysis of rapid event-related functional MRI (er-fMRI). FIR is one of the most informative analysis methods, as it determines the onset and full shape of the hemodynamic response function (HRF) without any a priori assumptions. FIR is, however, vulnerable to multicollinearity, which is directly related to the distribution of stimuli over time. Efficiency can be optimized by simplifying a design and restricting the stimulus distribution to specific sequences, while more design flexibility necessarily reduces efficiency. However, the actual effect of efficiency on fMRI results has never been tested in vivo, so it is currently difficult to make an informed choice between protocol flexibility and statistical efficiency. The main goal of this study was to assign concrete fMRI signal-to-noise values to the abstract scale of FIR statistical efficiency. Ten subjects repeated a perception task with five random and m-sequence-based protocols with varying but, according to the literature, acceptable levels of multicollinearity. Results indicated substantial differences in signal standard deviation, whose level was a function of multicollinearity; experiment protocols varied by up to 55.4% in standard deviation. The results confirm that the quality of fMRI in an FIR analysis can vary significantly and substantially with statistical efficiency. Our in vivo measurements can be used to aid in making an informed decision between freedom in protocol design and statistical efficiency. Published by Elsevier B.V.
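
    FIR efficiency can be quantified directly from the design matrix, as in the sketch below: a perfectly periodic stimulus train makes the lagged regressors nearly collinear and the estimation efficiency collapses, while a randomized train does much better. The scan counts and event numbers are arbitrary.

      import numpy as np

      def fir_design_matrix(onsets, n_scans, n_lags):
          # FIR design matrix: one column per post-stimulus lag (in TR units).
          X = np.zeros((n_scans, n_lags))
          for t in onsets:
              for lag in range(n_lags):
                  if t + lag < n_scans:
                      X[t + lag, lag] = 1.0
          return X

      def efficiency(X):
          # Classic estimation efficiency: inverse total variance of the FIR
          # parameter estimates (up to the noise variance).
          return 1.0 / np.trace(np.linalg.inv(X.T @ X))

      rng = np.random.default_rng(9)
      n_scans, n_lags, n_events = 400, 12, 40
      fixed = np.arange(10, 10 + 10 * n_events, 10)          # rigid 10-TR spacing
      random_onsets = np.sort(rng.choice(np.arange(n_scans - n_lags),
                                         n_events, replace=False))
      print("fixed spacing :", round(efficiency(fir_design_matrix(fixed, n_scans, n_lags)), 4))
      print("random spacing:", round(efficiency(fir_design_matrix(random_onsets, n_scans, n_lags)), 4))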

  11. Ultrasound Metrology in Mexico: a round robin test for medical diagnostics

    NASA Astrophysics Data System (ADS)

    Amezola Luna, R.; López Sánchez, A. L.; Elías Juárez, A. A.

    2011-02-01

    This paper presents preliminary statistical results from an on-going imaging medical ultrasound study, of particular relevance for the gynecology and obstetrics areas. Its scope is twofold: first, to compile the medical ultrasound infrastructure available in cities of Queretaro, Mexico, and second, to promote the use of traceable measurement standards as a key aspect of assuring the quality of ultrasound examinations performed by medical specialists. The experimental methodology is based on a round robin test using an ultrasound phantom for medical imaging. The physicians, using their own ultrasound machines, couplant, and facilities, measure the size and depth of a set of pre-defined reflecting and absorbing targets of the reference phantom, which simulate human illnesses. The measurements performed give the medical specialists objective feedback regarding some performance characteristics of their ultrasound examination systems, such as measurement system accuracy, dead zone, axial resolution, depth of penetration, and anechoic target detection. By the end of March 2010, 66 entities with medical ultrasound facilities, from both public and private institutions, had performed measurements. A network of medical ultrasound calibration laboratories in Mexico, with traceability to the International System of Units via national measurement standards, may indeed contribute to reducing measurement deviations and thus attaining better diagnostics.

  12. Optimization of Dissolution Compartments in a Biorelevant Dissolution Apparatus Golem v2, Supported by Multivariate Analysis.

    PubMed

    Stupák, Ivan; Pavloková, Sylvie; Vysloužil, Jakub; Dohnal, Jiří; Čulen, Martin

    2017-11-23

    Biorelevant dissolution instruments represent an important tool for pharmaceutical research and development. These instruments are designed to simulate the dissolution of drug formulations under conditions most closely mimicking the gastrointestinal tract. In this work, we focused on the optimization of dissolution compartments/vessels for an updated version of the biorelevant dissolution apparatus, Golem v2. We designed eight compartments of uniform size but different inner geometry. The dissolution performance of the compartments was tested using immediate-release caffeine tablets and evaluated by standard statistical methods and principal component analysis. Based on two phases of dissolution testing (using 250 and 100 mL of dissolution medium), we selected two compartment types yielding the highest measurement reproducibility. We also confirmed a statistically significant effect of agitation rate and dissolution volume on the extent of drug dissolved and on measurement reproducibility.

  13. Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step

    NASA Technical Reports Server (NTRS)

    Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.

    1990-01-01

    The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the calculation of the velocity:data rate correlation coefficient as a measure of statistical dependence and thus the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Full classic correction techniques over-compensated the data since the dependence was weak, even in the boundary layer and shear regions. The paper emphasizes the necessity to determine the degree of particle sampling bias for each measurement ensemble and not use generalized assumptions to correct the data. Further, it recommends the calculation of the velocity:data rate correlation coefficient become a standard statistical calculation in the analysis of all laser velocimeter data.

  14. Multi-classification of cell deformation based on object alignment and run length statistic.

    PubMed

    Li, Heng; Liu, Zhiwen; An, Xing; Shi, Yonggang

    2014-01-01

    Cellular morphology is widely applied in digital pathology and is essential for improving our understanding of the basic physiological processes of organisms. One of the main issues of application is to develop efficient methods for cell deformation measurement. We propose an innovative indirect approach to analyze dynamic cell morphology in image sequences. The proposed approach considers both the cellular shape change and cytoplasm variation, and takes each frame in the image sequence into account. The cell deformation is measured by the minimum energy function of object alignment, which is invariant to object pose. Then an indirect analysis strategy is employed to overcome the limitation of gradual deformation by run length statistic. We demonstrate the power of the proposed approach with one application: multi-classification of cell deformation. Experimental results show that the proposed method is sensitive to the morphology variation and performs better than standard shape representation methods.

  15. An Extension of the Chi-Square Procedure for Non-NORMAL Statistics, with Application to Solar Neutrino Data

    NASA Astrophysics Data System (ADS)

    Sturrock, P. A.

    2008-01-01

    Using the chi-square statistic, one may conveniently test whether a series of measurements of a variable are consistent with a constant value. However, that test is predicated on the assumption that the appropriate probability distribution function (pdf) is normal in form. This requirement is usually not satisfied by experimental measurements of the solar neutrino flux. This article presents an extension of the chi-square procedure that is valid for any form of the pdf. This procedure is applied to the GALLEX-GNO dataset, and it is shown that the results are in good agreement with the results of Monte Carlo simulations. Whereas application of the standard chi-square test to symmetrized data yields evidence significant at the 1% level for variability of the solar neutrino flux, application of the extended chi-square test to the unsymmetrized data yields only weak evidence (significant at the 4% level) of variability.
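
    The procedure can be emulated by calibrating the chi-square statistic against a simulated, rather than assumed-normal, measurement pdf. In the sketch below a skewed gamma stands in for the true error distribution, and all numbers are illustrative.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(10)

      def chi2_stat(x, sigma):
          # Chi-square statistic for consistency of x with a constant value.
          w = 1.0 / sigma**2
          xbar = np.average(x, weights=w)
          return float(np.sum(w * (x - xbar)**2))

      obs = np.array([72.0, 64.5, 80.2, 69.8, 61.3, 77.6, 70.4, 66.1])  # fluxes
      sigma = np.full(obs.size, 5.0)
      observed = chi2_stat(obs, sigma)

      # Standard test: chi-square with n-1 dof, which assumes a normal pdf.
      p_normal = stats.chi2.sf(observed, df=obs.size - 1)

      # Extension: simulate the null distribution from the actual measurement pdf;
      # here a skewed gamma with mean 70 and sd 5 plays that role.
      k = (70.0 / 5.0)**2
      null = np.array([chi2_stat(rng.gamma(k, 70.0 / k, size=obs.size), sigma)
                       for _ in range(20000)])
      p_mc = float((null >= observed).mean())
      print(f"chi2 = {observed:.1f}, p(normal) = {p_normal:.3f}, p(simulated) = {p_mc:.3f}")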

  16. The Nonsubsampled Contourlet Transform Based Statistical Medical Image Fusion Using Generalized Gaussian Density

    PubMed Central

    Yang, Guocheng; Li, Meiling; Chen, Leiting; Yu, Jie

    2015-01-01

    We propose a novel medical image fusion scheme based on the statistical dependencies between coefficients in the nonsubsampled contourlet transform (NSCT) domain, in which the probability density function of the NSCT coefficients is concisely fitted using generalized Gaussian density (GGD), as well as the similarity measurement of two subbands is accurately computed by Jensen-Shannon divergence of two GGDs. To preserve more useful information from source images, the new fusion rules are developed to combine the subbands with the varied frequencies. That is, the low frequency subbands are fused by utilizing two activity measures based on the regional standard deviation and Shannon entropy and the high frequency subbands are merged together via weight maps which are determined by the saliency values of pixels. The experimental results demonstrate that the proposed method significantly outperforms the conventional NSCT based medical image fusion approaches in both visual perception and evaluation indices. PMID:26557871
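
    The two statistical ingredients, fitting a GGD to subband coefficients and comparing two fits with Jensen-Shannon divergence, can be sketched with scipy's generalized normal distribution. The simulated coefficients and the grid-based divergence below are illustrative, not the paper's NSCT pipeline.

      import numpy as np
      from scipy import stats
      from scipy.integrate import trapezoid

      rng = np.random.default_rng(11)
      # Simulated detail-subband coefficients from two source images.
      a = stats.gennorm.rvs(0.8, scale=1.0, size=5000, random_state=rng)
      b = stats.gennorm.rvs(1.5, scale=1.4, size=5000, random_state=rng)

      # Fit a generalized Gaussian density to each subband (location fixed at 0,
      # as is usual for zero-mean detail coefficients).
      beta_a, _, s_a = stats.gennorm.fit(a, floc=0)
      beta_b, _, s_b = stats.gennorm.fit(b, floc=0)

      # Jensen-Shannon divergence between the two fitted GGDs on a dense grid.
      x = np.linspace(-12.0, 12.0, 4001)
      p = stats.gennorm.pdf(x, beta_a, scale=s_a)
      q = stats.gennorm.pdf(x, beta_b, scale=s_b)
      m = 0.5 * (p + q)
      kl = lambda f, g: trapezoid(f * np.log((f + 1e-300) / (g + 1e-300)), x)
      js = 0.5 * kl(p, m) + 0.5 * kl(q, m)
      print(f"fitted shapes {beta_a:.2f}, {beta_b:.2f}; JS divergence {js:.4f}")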

  17. 25 CFR 542.19 - What are the minimum internal control standards for accounting?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...

  18. 25 CFR 542.19 - What are the minimum internal control standards for accounting?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...

  19. 25 CFR 542.19 - What are the minimum internal control standards for accounting?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...

  20. 25 CFR 542.19 - What are the minimum internal control standards for accounting?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...

  1. 25 CFR 542.19 - What are the minimum internal control standards for accounting?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...

  2. How Historical Information Can Improve Extreme Value Analysis of Coastal Water Levels

    NASA Astrophysics Data System (ADS)

    Le Cozannet, G.; Bulteau, T.; Idier, D.; Lambert, J.; Garcin, M.

    2016-12-01

    The knowledge of extreme coastal water levels is useful for coastal flooding studies or the design of coastal defences. While deriving such extremes with standard analyses using tide gauge measurements, one often needs to deal with limited effective duration of observation which can result in large statistical uncertainties. This is even truer when one faces outliers, those particularly extreme values distant from the others. In a recent work (Bulteau et al., 2015), we investigated how historical information of past events reported in archives can reduce statistical uncertainties and relativize such outlying observations. We adapted a Bayesian Markov Chain Monte Carlo method, initially developed in the hydrology field (Reis and Stedinger, 2005), to the specific case of coastal water levels. We applied this method to the site of La Rochelle (France), where the storm Xynthia in 2010 generated a water level considered so far as an outlier. Based on 30 years of tide gauge measurements and 8 historical events since 1890, the results showed a significant decrease in statistical uncertainties on return levels when historical information is used. Also, Xynthia's water level no longer appeared as an outlier and we could have reasonably predicted the annual exceedance probability of that level beforehand (predictive probability for 2010 based on data until the end of 2009 of the same order of magnitude as the standard estimative probability using data until the end of 2010). Such results illustrate the usefulness of historical information in extreme value analyses of coastal water levels, as well as the relevance of the proposed method to integrate heterogeneous data in such analyses.

  3. Final Report on the Key Comparison CCM.P-K4.2012 in Absolute Pressure from 1 Pa to 10 kPa

    PubMed Central

    Ricker, Jacob; Hendricks, Jay; Bock, Thomas; Dominik, Pražák; Kobata, Tokihiko; Torres, Jorge; Sadkovskaya, Irina

    2017-01-01

    The report summarizes the Consultative Committee for Mass (CCM) key comparison CCM.P-K4.2012 for absolute pressure spanning the range of 1 Pa to 10 000 Pa. The comparison was carried out at six National Metrology Institutes (NMIs), including National Institute of Standards and Technology (NIST), Physikalisch-Technische Bundesanstalt (PTB), Czech Metrology Institute (CMI), National Metrology Institute of Japan (NMIJ), Centro Nacional de Metrología (CENAM), and DI Mendeleyev Institute for Metrology (VNIIM). The comparison was made via a calibrated transfer standard measured at each of the NMIs' facilities using their laboratory standard during the period May 2012 to September 2013. The transfer package constructed for this comparison performed as designed and provided a stable artifact to compare laboratory standards. Overall, the participants were found to be statistically equivalent to the key comparison reference value. PMID:28216793

  4. Further statistics in dentistry, Part 5: Diagnostic tests for oral conditions.

    PubMed

    Petrie, A; Bulman, J S; Osborn, J F

    2002-12-07

    A diagnostic test is a simple test, sometimes based on a clinical measurement, which is used when the gold-standard test providing a definitive diagnosis of a given condition is too expensive, invasive or time-consuming to perform. The diagnostic test can be used to diagnose a dental condition in an individual patient or as a screening device in a population of apparently healthy individuals.

  5. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...

  6. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...

  7. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...

  8. 14 CFR Appendix C to Part 91 - Operations in the North Atlantic (NAT) Minimum Navigation Performance Specifications (MNPS) Airspace

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... OPERATING RULES GENERAL OPERATING AND FLIGHT RULES Pt. 91, App. C Appendix C to Part 91—Operations in the... Oceanic Control Area, excluding the areas west of 60 degrees west and south of 38 degrees 30 minutes north... shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean...

  9. 2010 Anthropometric Survey of U.S. Marine Corps Personnel: Methods and Summary Statistics

    DTIC Science & Technology

    2013-06-01

    models for the ergonomic design of working environments. Today, the entire production chain for a piece of clothing, beginning with the design and...Corps 382 crewstations and workstations. Digital models are increasingly used in the design process for seated and standing workstations, as well...International Standards for Ergonomic Design : These dimensions are useful for comparing data sets between nations, and are measured according to

  10. Estimations of ABL fluxes and other turbulence parameters from Doppler lidar data

    NASA Technical Reports Server (NTRS)

    Gal-Chen, Tzvi; Xu, Mei; Eberhard, Wynn

    1989-01-01

    Techniques for extracting boundary layer parameters from measurements of a short-pulse CO2 Doppler lidar are described. The measurements are those collected during the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE). By continuously operating the lidar for about an hour, stable statistics of the radial velocities can be extracted. Assuming that the turbulence is horizontally homogeneous, the mean wind, its standard deviations, and the momentum fluxes were estimated. Spectral analysis of the radial velocities is also performed, from which, by examining the amplitude of the power spectrum in the inertial range, the kinetic energy dissipation was deduced. Finally, using the statistical form of the Navier-Stokes equations, the surface heat flux is derived as the residual balance between the vertical gradient of the third moment of the vertical velocity and the kinetic energy dissipation. Combining many measurements would normally reduce the error, provided that it is unbiased and uncorrelated. The nature of some of the algorithms, however, is such that biased and correlated errors may be generated even when the raw measurements are free of them. Data processing procedures were developed that eliminate bias and minimize error correlation. Once bias and error correlations are accounted for, the large sample size is shown to reduce the errors substantially. The principal features of the derived turbulence statistics for two cases studied are presented.

  11. Identifying the impact of social determinants of health on disease rates using correlation analysis of area-based summary information.

    PubMed

    Song, Ruiguang; Hall, H Irene; Harrison, Kathleen McDavid; Sharpe, Tanya Telfair; Lin, Lillian S; Dean, Hazel D

    2011-01-01

    We developed a statistical tool that brings together standard, accessible, and well-understood analytic approaches and uses area-based information and other publicly available data to identify social determinants of health (SDH) that significantly affect the morbidity of a specific disease. We specified AIDS as the disease of interest and used data from the American Community Survey and the National HIV Surveillance System. Morbidity and socioeconomic variables in the two data systems were linked through geographic areas that can be identified in both systems. Correlation and partial correlation coefficients were used to measure the impact of socioeconomic factors on AIDS diagnosis rates in certain geographic areas. We developed an easily explained approach that can be used by a data analyst with access to publicly available datasets and standard statistical software to identify the impact of SDH. We found that the AIDS diagnosis rate was highly correlated with the distribution of race/ethnicity, population density, and marital status in an area. The impact of poverty, education level, and unemployment depended on other SDH variables. Area-based measures of socioeconomic variables can be used to identify risk factors associated with a disease of interest. When correlation analysis is used to identify risk factors, potential confounding from other variables must be taken into account.
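
    As a sketch of the kind of analysis the record describes, the snippet below computes a plain correlation and a partial correlation (controlling for a second socioeconomic variable) with standard tools. The variable names and random data are hypothetical stand-ins for area-based measures, not the study's data.

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        df = pd.DataFrame({
            "aids_rate": rng.random(50),   # hypothetical area-level disease rates
            "poverty":   rng.random(50),   # hypothetical SDH variables
            "education": rng.random(50),
        })

        # simple correlation between the disease rate and one SDH variable
        r = df["aids_rate"].corr(df["poverty"])

        def partial_corr(df, x, y, controls):
            """Correlation of x and y after regressing out the control variables."""
            def residuals(v):
                A = np.column_stack([np.ones(len(df))] + [df[c] for c in controls])
                coef, *_ = np.linalg.lstsq(A, df[v], rcond=None)
                return df[v] - A @ coef
            return np.corrcoef(residuals(x), residuals(y))[0, 1]

        print(r, partial_corr(df, "aids_rate", "poverty", ["education"]))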

  12. Rapid Prototyping for In Vitro Knee Rig Investigations of Prosthetized Knee Biomechanics: Comparison with Cobalt-Chromium Alloy Implant Material

    PubMed Central

    Schröder, Christian; Steinbrück, Arnd; Müller, Tatjana; Woiczinski, Matthias; Chevalier, Yan; Müller, Peter E.; Jansson, Volkmar

    2015-01-01

    Retropatellar complications after total knee arthroplasty (TKA) such as anterior knee pain and subluxations might be related to altered patellofemoral biomechanics, in particular to trochlear design and femorotibial joint positioning. A method was developed to test femorotibial and patellofemoral joint modifications separately with 3D-rapid prototyped components for in vitro tests, but material differences may further influence results. This pilot study aims at validating the use of prostheses made of photopolymerized rapid prototype material (RPM) by measuring the sliding friction with a ring-on-disc setup as well as knee kinematics and retropatellar pressure on a knee rig. Cobalt-chromium alloy (standard prosthesis material, SPM) prostheses served as the validation standard. Friction coefficients between these materials and polytetrafluoroethylene (PTFE) were additionally tested, as this latter material is commonly used to protect pressure sensors in experiments. No statistical differences were found between the friction coefficients of either material against PTFE. UHMWPE showed a higher friction coefficient against RPM at low axial loads, a difference that disappeared at higher loads. No statistically measurable differences were found in knee kinematics or retropatellar pressure distribution. This suggests that using polymer prototypes may be a valid alternative to original components for in vitro TKA studies and future investigations on knee biomechanics. PMID:25879019

  13. School furniture and work surface lighting impacts on the body posture of Paraíba's public school students.

    PubMed

    da Silva, Luiz Bueno; Coutinho, Antonio Souto; da Costa Eulálio, Eliza Juliana; Soares, Elaine Victor Gonçalves

    2012-01-01

    The main objective of this study is to evaluate the impact of school furniture and work surface lighting on the body posture of public Middle School students from Paraíba (Brazil). The survey was carried out in two public schools, and the target population included 8th grade groups totalling 31 students. Brazilian standards for lighting levels, the CEBRACE standards for furniture measurements, and the Postural Assessment Software (SAPO) for the postural misalignment assay were adopted for comparison of the measurements. The statistical analysis included parametric and non-parametric correlation analyses. The results show that the students' most affected parts of the body were the spine, the region of the knees, and the head and neck, with 90% of the students presenting postural misalignment. The lighting levels were usually below 300 lux, under the recommended level. The statistical analysis shows that the more adequate the furniture is to the user, the less the user complains of pain. Such results indicate the need for investment in more suitable school furniture and for structural reforms aimed at improving classroom lighting, which would better fit the students' profiles and reduce their complaints.

  14. Cocaine profiling for strategic intelligence, a cross-border project between France and Switzerland: part II. Validation of the statistical methodology for the profiling of cocaine.

    PubMed

    Lociciro, S; Esseiva, P; Hayoz, P; Dujourdy, L; Besacier, F; Margot, P

    2008-05-20

    Harmonisation and optimization of analytical and statistical methodologies were carried out between two forensic laboratories (Lausanne, Switzerland and Lyon, France) in order to provide drug intelligence for cross-border cocaine seizures. Part I dealt with the optimization of the analytical method and its robustness. This second part investigates statistical methodologies that provide reliable comparison of cocaine seizures analysed on two different gas chromatographs interfaced with flame ionisation detectors (GC-FIDs) in two distinct laboratories. Sixty-six statistical combinations (ten data pre-treatments followed by six different distance measurements and correlation coefficients) were applied. One pre-treatment (N+S: the area of each peak is divided by its standard deviation calculated from the whole data set) followed by the Cosine or Pearson correlation coefficient was found to be the best statistical compromise for optimal discrimination of linked and non-linked samples. Centralising the analyses in a single laboratory is therefore no longer a prerequisite for comparing samples seized in different countries. This allows collaboration, but also jurisdictional control over data.

  15. Meta- and statistical analysis of single-case intervention research data: quantitative gifts and a wish list.

    PubMed

    Kratochwill, Thomas R; Levin, Joel R

    2014-04-01

    In this commentary, we add to the spirit of the articles appearing in the special series devoted to meta- and statistical analysis of single-case intervention-design data. Following a brief discussion of historical factors leading to our initial involvement in statistical analysis of such data, we discuss: (a) the value added by including statistical-analysis recommendations in the What Works Clearinghouse Standards for single-case intervention designs; (b) the importance of visual analysis in single-case intervention research, along with the distinctive role that could be played by single-case effect-size measures; and (c) the elevated internal validity and statistical-conclusion validity afforded by the incorporation of various forms of randomization into basic single-case design structures. For the future, we envision more widespread application of quantitative analyses, as critical adjuncts to visual analysis, in both primary single-case intervention research studies and literature reviews in the behavioral, educational, and health sciences. Copyright © 2014 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.

  16. Measuring Retention in HIV Care: The Elusive Gold Standard

    PubMed Central

    Mugavero, Michael J.; Westfall, Andrew O.; Zinski, Anne; Davila, Jessica; Drainoni, Mari-Lynn; Gardner, Lytt I.; Keruly, Jeanne C.; Malitz, Faye; Marks, Gary; Metsch, Lisa; Wilson, Tracey E.; Giordano, Thomas P.

    2012-01-01

    Background Measuring retention in HIV primary care is complex as care includes multiple visits scheduled at varying intervals over time. We evaluated six commonly used retention measures in predicting viral load (VL) suppression and the correlation among measures. Methods Clinic-wide patient-level data from six academic HIV clinics were used for 12-months preceding implementation of the CDC/HRSA Retention in Care intervention. Six retention measures were calculated for each patient based upon scheduled primary HIV provider visits: count and dichotomous missed visits, visit adherence, 6-month gap, 4-month visit constancy, and the HRSA HAB retention measure. Spearman correlation coefficients and separate unadjusted logistic regression models compared retention measures to one another and with 12-month VL suppression, respectively. The discriminatory capacity of each measure was assessed with the c-statistic. Results Among 10,053 patients, 8,235 (82%) had 12-month VL measures, with 6,304 (77%) achieving suppression (VL<400 c/mL). All six retention measures were significantly associated (P<0.0001) with VL suppression (OR;95%CI, c-statistic): missed visit count (0.73;0.71–0.75,0.67), missed visit dichotomous (3.2;2.8–3.6,0.62), visit adherence (3.9;3.5–4.3,0.69), gap (3.0;2.6–3.3,0.61), visit constancy (2.8;2.5–3.0,0.63), HRSA HAB (3.8;3.3–4.4,0.59). Measures incorporating “no show” visits were highly correlated (Spearman coefficient=0.83–0.85), as were measures based solely upon kept visits (Spearman coefficient=0.72–0.77). Correlation coefficients were lower across these two groups of measures (Range=0.16–0.57). Conclusions Six retention measures displayed a wide range of correlation with one another, yet each measure had significant association and modest discrimination for VL suppression. These data suggest there is no clear gold standard, and that selection of a retention measure may be tailored to context. PMID:23011397

  17. Thermal infrared imaging of the variability of canopy-air temperature difference distribution for heavy metal stress levels discrimination in rice

    NASA Astrophysics Data System (ADS)

    Zhang, Biyao; Liu, Xiangnan; Liu, Meiling; Wang, Dongmin

    2017-04-01

    This paper addresses the assessment and interpretation of the canopy-air temperature difference (Tc-Ta) distribution as an indicator for discriminating between heavy metal stress levels. The Tc-Ta distribution is simulated by coupling the energy balance equation with a modified leaf angle distribution. Statistical indices including the average value (AVG), standard deviation (SD), median, and span of Tc-Ta in the field of view of a digital thermal imager are calculated to describe the Tc-Ta distribution quantitatively and, consequently, serve as the stress indicators. In the application, two rice-growing sites under "mild" and "severe" stress levels were selected as study areas. A total of 96 thermal images obtained from field measurements in three growth stages were used to test the theoretical variation of the Tc-Ta distribution. The results demonstrated that the statistical indices calculated from both simulated and measured data exhibited an upward trend as the stress level became more severe, because heavy metal stress raises the temperature of only a portion of the leaves in the canopy. Meteorological factors could barely affect the sensitivity of the statistical indices, with the exception of wind speed. Among the statistical indices, AVG and SD were demonstrated to be the better indicators for stress level discrimination.
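
    The four statistical indices are simple to compute over a thermal image; a minimal sketch, assuming a hypothetical array of canopy temperatures and a single air temperature reading:

        import numpy as np

        rng = np.random.default_rng(1)
        tc = rng.normal(26.0, 0.8, size=(240, 320))  # canopy temperatures (degC), hypothetical
        ta = 24.5                                    # air temperature (degC), hypothetical
        dT = tc - ta                                 # Tc-Ta over the imager's field of view

        avg, sd = dT.mean(), dT.std()                # AVG and SD
        median = np.median(dT)                       # median
        span = dT.max() - dT.min()                   # span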

  18. Performance evaluation of spectral vegetation indices using a statistical sensitivity function

    USGS Publications Warehouse

    Ji, Lei; Peters, Albert J.

    2007-01-01

    A great number of spectral vegetation indices (VIs) have been developed to estimate biophysical parameters of vegetation. Traditional techniques for evaluating the performance of VIs are regression-based statistics, such as the coefficient of determination and root mean square error. These statistics, however, are not capable of quantifying the detailed relationship between VIs and biophysical parameters because the sensitivity of a VI is usually a function of the biophysical parameter instead of a constant. To better quantify this relationship, we developed a “sensitivity function” for measuring the sensitivity of a VI to biophysical parameters. The sensitivity function is defined as the first derivative of the regression function, divided by the standard error of the dependent variable prediction. The function elucidates the change in sensitivity over the range of the biophysical parameter. The Student's t- or z-statistic can be used to test the significance of VI sensitivity. Additionally, we developed a “relative sensitivity function” that compares the sensitivities of two VIs when the biophysical parameters are unavailable.
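
    As an illustration only, the sketch below evaluates a sensitivity function of this form for a hypothetical VI-versus-LAI relationship, using a cubic regression and a crude, constant approximation of the standard error of prediction (the definition above allows the SE to vary with the parameter):

        import numpy as np

        rng = np.random.default_rng(2)
        lai = np.linspace(0.2, 6.0, 80)                        # biophysical parameter (hypothetical)
        vi = 0.9 * (1 - np.exp(-0.6 * lai)) + rng.normal(0, 0.02, lai.size)

        coef = np.polyfit(lai, vi, deg=3)                      # regression f(LAI)
        f = np.poly1d(coef)
        fprime = f.deriv()                                     # first derivative of the fit
        resid_se = np.std(vi - f(lai), ddof=coef.size)         # crude SE of prediction

        sensitivity = fprime(lai) / resid_se                   # large values: VI responsive here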

  19. Intra and interrater reliability of spinal sagittal curves and mobility using pocket goniometer IncliMed® in healthy subjects.

    PubMed

    Alderighi, Marzia; Ferrari, Raffaello; Maghini, Irene; Del Felice, Alessandra; Masiero, Stefano

    2016-11-21

    Radiographic examination is the gold standard to evaluate spine curves, but ionising radiations limit routine use. Non-invasive methods, such as skin-surface goniometer (IncliMed®) should be used instead. To evaluate intra- and interrater reliability to assess sagittal curves and mobility of the spine with IncliMed®. a reliability study on agonistic football players. Thoracic kyphosis, lumbar lordosis and mobility of the spine were assessed by IncliMed®. Measurements were repeated twice by each examiner during the same session with between-rater blinding. Intrarater and interrater reliability were measured by Intraclass Correlation Coefficient (ICC), 95% Confidence Interval (CI 95%) and Standard Error of Measurement (SEM). Thirty-four healthy female football players (19.17 ± 4.52 years) were enrolled. Statistical results showed high intrarater (0.805-0.923) and interrater (0.701-0.886) reliability (ICC > 0.8). The obtained intra- and interrater SEM were low, with overall absolute intrarater values between 1.39° and 2.76° and overall interrater values between 1.71° and 4.25°. IncliMed® provides high intra- and interrater reliability in healthy subjects, with limited Standard Error of Measurement. These results encourage its use in clinical practice and scientific research.

  20. A New Approach to Extract Forest Water Use Efficiency from Eddy Covariance Data

    NASA Astrophysics Data System (ADS)

    Scanlon, T. M.; Sulman, B. N.

    2016-12-01

    Determination of forest water use efficiency (WUE) from eddy covariance data typically involves the following steps: (a) estimating gross primary productivity (GPP) from direct measurements of net ecosystem exchange (NEE) by extrapolating nighttime ecosystem respiration (ER) to daytime conditions, and (b) assuming direct evaporation (E) is minimal several days after rainfall, meaning that direct measurements of evapotranspiration (ET) are identical to transpiration (T). Both of these steps could lead to errors in the estimation of forest WUE. Here, we present a theoretical approach for estimating WUE through the analysis of standard eddy covariance data, which circumvents these steps. Only five statistics are needed from the high-frequency time series to extract WUE: CO2 flux, water vapor flux, standard deviation in CO2 concentration, standard deviation in water vapor concentration, and the correlation coefficient between CO2 and water vapor concentration for each half-hour period. The approach is based on the assumption that stomatal fluxes (i.e. photosynthesis and transpiration) lead to perfectly negative correlations and non-stomatal fluxes (i.e. ecosystem respiration and direct evaporation) lead to perfectly positive correlations within the CO2 and water vapor high frequency time series measured above forest canopies. A mathematical framework is presented, followed by a proof of concept using eddy covariance data and leaf-level measurements of WUE.
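
    The five half-hourly statistics named above are straightforward to obtain from the high-frequency series; the sketch below computes them for hypothetical 10 Hz fluctuations (w: vertical wind, c: CO2, q: water vapor), without implementing the flux-partitioning mathematics itself.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 18000                                  # 30 min at 10 Hz
        w = rng.normal(0, 0.3, n)                  # hypothetical fluctuation series
        c = rng.normal(0, 1.0, n)
        q = rng.normal(0, 0.5, n)

        Fc = np.mean((w - w.mean()) * (c - c.mean()))   # CO2 flux (eddy covariance)
        Fq = np.mean((w - w.mean()) * (q - q.mean()))   # water vapor flux
        sc = c.std()                                    # std of CO2 concentration
        sq = q.std()                                    # std of water vapor concentration
        rcq = np.corrcoef(c, q)[0, 1]                   # CO2-water vapor correlation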

  1. Investigation of interpolation techniques for the reconstruction of the first dimension of comprehensive two-dimensional liquid chromatography-diode array detector data.

    PubMed

    Allen, Robert C; Rutan, Sarah C

    2011-10-31

    Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
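
    For illustration, the sketch below applies three of the five schemes to an undersampled, hypothetical first-dimension peak using standard SciPy interpolators; the cross-correlation alignment step and Fourier zero-filling are omitted.

        import numpy as np
        from scipy.interpolate import CubicSpline, PchipInterpolator

        t = np.arange(0, 10, 1.5)                        # sparse 1st-dimension sampling (hypothetical)
        peak = np.exp(-0.5 * ((t - 5.0) / 1.2) ** 2)     # Gaussian-like elution profile
        t_fine = np.linspace(0, t[-1], 200)

        linear = np.interp(t_fine, t, peak)              # linear interpolation
        spline = CubicSpline(t, peak)(t_fine)            # cubic spline
        pchip = PchipInterpolator(t, peak)(t_fine)       # piecewise cubic Hermite

        # Gaussian fitting, the best performer on the simulated data, instead fits
        # a peak model directly (e.g., with scipy.optimize.curve_fit).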

  2. Low typing endurance in keyboard workers with work-related upper limb disorder

    PubMed Central

    Povlsen, Bo

    2011-01-01

    Objective To compare results of typing endurance and pain before and after a standardized functional test. Design A standardized, previously published typing test on a standard QWERTY keyboard. Setting An outpatient hospital environment. Participants Sixty-one keyboard- and mouse-operating patients with WRULD and six normal controls. Main outcome measure Pain severity before and after the test, typing endurance and speed were recorded. Results Thirty-two patients could not complete the test before pain reached VAS 5, and this group typed for a mean of only 11 minutes. The control group and the remaining group of 29 patients completed the test. A two-tailed Student's t-test was used for evaluation. The endurance was significantly shorter in the patient group that could not complete the test (P < 0.00001), and the pain levels were also higher in this group both before (P = 0.01) and after the test (P = 0.0003). Both patient groups had more pain in the right than the left hand, both before and after typing. Conclusions Low typing endurance correlates statistically with more resting pain in keyboard and mouse operators with work-related upper limb disorder, and with statistically more pain after a standardized typing test. As the right hands had higher pain levels, typing alone may not be the cause of the pain, since the left hand on a QWERTY keyboard performs relatively more keystrokes than the right hand. PMID:21637395

  3. Photogrammetric Correlation of Face with Frontal Radiographs and Direct Measurements.

    PubMed

    Negi, Gunjan; Ponnada, Swaroopa; Aravind, N K S; Chitra, Prasad

    2017-05-01

    Photogrammetry is the science of making measurements from photographs. As cephalometric analysis to date has focused mainly on skeletal relationships, photogrammetry may provide a means to reliably assess and compare soft tissue and hard tissue measurements. The aims were to compare and correlate linear measurements taken directly from subjects' faces and from standardized frontal cephalometric radiographs, to correlate them with standardized frontal facial photographs of an Indian population, and to obtain mean values. A cross-sectional study was conducted on 30 subjects of Indian origin. Frontal cephalograms and standardized frontal photographs were obtained from subjects in the age group of 18-25 years. Vernier calipers were used to obtain facial measurements directly. Photographs and radiographs were uploaded and measured using Nemoceph software. Analogous cephalometric, photographic and direct measurements were compared by one-way ANOVA, and Pearson correlation coefficients were assessed for 12 linear measurements (6 vertical, 6 horizontal). The Bonferroni post-hoc test was used for pairwise comparison. Among all measurements used, OR-OL (orbitale right-orbitale left) showed a high correlation, r = 0.76, 0.70, 0.71. There was moderate correlation with EnR-EnL (endocanthion right-endocanthion left), r2 = 0.62, 0.68, 0.68. Highly significant correlation was evident for N-Sn, EnR-EnL and AgR-AgL with p<0.001. A statistically significant correlation was found between photographic, radiographic and direct measurements. Therefore, photogrammetry has proven to be an alternative diagnostic tool that can be used in epidemiologic studies when there is a need for a simple, basic, non-invasive and cost-effective method.

  4. Psychoeducational Characteristics of Children with Hypohidrotic Ectodermal Dysplasia

    PubMed Central

    Maxim, Rolanda A.; Zinner, Samuel H.; Matsuo, Hisako; Prosser, Theresa M.; Fete, Mary; Leet, Terry L.; Fete, Timothy J.

    2012-01-01

    Objective. Hypohidrotic ectodermal dysplasia (HED) is an X-linked hereditary disorder characterized by hypohidrosis, hypotrichosis, and anomalous dentition. Estimates of up to 50% of affected children having intellectual disability are controversial. Method. In a cross-sectional study, 45 youth with HED (77% males, mean age 9.75 years) and 59 matched unaffected controls (70% males, mean age 9.79 years) were administered the Kaufman Brief Intelligence Test and the Kaufman Test of Educational Achievement, and their parents completed standardized neurodevelopmental and behavioral measures, educational, and health-related information regarding their child, as well as standardized and nonstandardized data regarding socioeconomic information for their family. Results. There were no statistically significant differences between the two groups in intelligence quotient composite and educational achievement scores, suggesting absence of learning disability in either group. No gender differences within or between groups were found on any performance measures. Among affected youth, parental education level correlated positively with (1) cognitive vocabulary scores and cognitive composite scores; (2) educational achievement for mathematics, reading, and composite scores. Conclusion. Youth affected with HED and unaffected matched peers have similar profiles on standardized measures of cognition, educational achievement, and adaptive functioning although children with HED may be at increased risk for ADHD. PMID:22536143

  5. Assessment of the beryllium lymphocyte proliferation test using statistical process control.

    PubMed

    Cher, Daniel J; Deubner, David C; Kelsh, Michael A; Chapman, Pamela S; Ray, Rose M

    2006-10-01

    Despite more than 20 years of surveillance and epidemiologic studies using the beryllium blood lymphocyte proliferation test (BeBLPT) as a measure of beryllium sensitization (BeS) and as an aid for diagnosing subclinical chronic beryllium disease (CBD), improvements in specific understanding of the inhalation toxicology of CBD have been limited. Although epidemiologic data suggest that BeS and CBD risks vary by process/work activity, it has proven difficult to reach specific conclusions regarding the dose-response relationship between workplace beryllium exposure and BeS or subclinical CBD. One possible reason for this uncertainty could be misclassification of BeS resulting from variation in BeBLPT testing performance. The reliability of the BeBLPT, a biological assay that measures beryllium sensitization, is unknown. To assess the performance of four laboratories that conducted this test, we used data from a medical surveillance program that offered testing for beryllium sensitization with the BeBLPT. The study population was workers exposed to beryllium at various facilities over a 10-year period (1992-2001). Workers with abnormal results were offered diagnostic workups for CBD. Our analyses used a standard statistical technique, statistical process control (SPC), to evaluate test reliability. The study design involved a repeated measures analysis of BeBLPT results generated from the company-wide, longitudinal testing. Analytical methods included use of (1) statistical process control charts that examined temporal patterns of variation for the stimulation index, a measure of cell reactivity to beryllium; (2) correlation analysis that compared prior perceptions of BeBLPT instability to the statistical measures of test variation; and (3) assessment of the variation in the proportion of missing test results and how time periods with more missing data influenced SPC findings. During the period of this study, all laboratories displayed variation in test results that were beyond what would be expected due to chance alone. Patterns of test results suggested that variations were systematic. We conclude that laboratories performing the BeBLPT or other similar biological assays of immunological response could benefit from a statistical approach such as SPC to improve quality management.
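
    A minimal sketch of the SPC technique referred to here: an individuals control chart for a stimulation-index series, with 3-sigma limits derived from the average moving range (using the d2 = 1.128 constant for subgroups of two). The data and variable names are hypothetical.

        import numpy as np

        si = np.array([2.1, 1.9, 2.3, 2.0, 2.2, 3.8, 2.1, 1.8, 2.4])  # stimulation index, hypothetical

        mr = np.abs(np.diff(si))                 # moving ranges of consecutive tests
        sigma = mr.mean() / 1.128                # process sigma estimate (d2 for n = 2)
        center = si.mean()
        ucl, lcl = center + 3 * sigma, center - 3 * sigma

        out_of_control = np.where((si > ucl) | (si < lcl))[0]  # points beyond the limits
        print(center, lcl, ucl, out_of_control)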

  6. Exposure to air pollution near a steel plant is associated with reduced heart rate variability: a randomised crossover study.

    PubMed

    Shutt, Robin H; Kauri, Lisa Marie; Weichenthal, Scott; Kumarathasan, Premkumari; Vincent, Renaud; Thomson, Errol M; Liu, Ling; Mahmud, Mamun; Cakmak, Sabit; Dales, Robert

    2017-01-28

    Epidemiological studies have shown that as ambient air pollution (AP) increases, the risk of cardiovascular mortality also increases. The mechanisms of this effect may be linked to alterations in autonomic nervous system function. We wished to examine the effects of industrial AP on heart rate variability (HRV), a measure of subtle changes in heart rate and rhythm representing autonomic input to the heart. Sixty healthy adults were randomized to spend five consecutive 8-h days outdoors in one of two locations: (1) adjacent to a steel plant in the Bayview neighbourhood of Sault Ste Marie, Ontario, or (2) at a College campus several kilometers from the plant. Following a 9-16 day washout period, participants spent five consecutive days at the other site. Ambient AP levels and ambulatory electrocardiogram recordings were collected daily. HRV analysis was undertaken on a segment of the ambulatory ECG recording during a 15-min rest period near the end of the 8-h on-site day. Standard HRV parameters from both time and frequency domains were measured. Ambient AP was measured with fixed site monitors at both sites. Statistical analysis was completed using mixed-effects models. Compared to the College site, HRV was statistically significantly reduced at the Bayview site: by 13% (95%CI 3.6, 19.2) for the standard deviation of normal-to-normal intervals, by 8% (95%CI 0.1, 4.9) for the percentage of normal-to-normal intervals differing by more than 50 ms, and by 15% (95%CI 74.9, 571.2) for low frequency power. Levels of carbon monoxide, sulphur dioxide, nitrogen dioxide, and fine and ultrafine particulates were slightly, but statistically significantly, elevated at Bayview when compared to College. Interquartile range changes in individual air pollutants were significantly associated with reductions in HRV measured on the same day. The patterns of effect showed a high degree of consistency, with nearly all pollutants significantly inversely associated with at least one measure of HRV. The significant associations between AP and changes in HRV suggest that ambient AP near a steel plant may impact autonomic nervous system control of the heart.

  7. A model for predicting sulcus-to-sulcus diameter in posterior chamber phakic intraocular lens candidates: correlation between ocular biometric parameters.

    PubMed

    Ghoreishi, Mohammad; Abdi-Shahshahani, Mehdi; Peyman, Alireza; Pourazizi, Mohsen

    2018-02-21

    The aim of this study was to determine the correlation between ocular biometric parameters and sulcus-to-sulcus (STS) diameter. This was a cross-sectional study of preoperative ocular biometry data of patients who were candidates for phakic intraocular lens (IOL) surgery. Subjects underwent ocular biometry analysis, including refraction error evaluation using an autorefractor and Orbscan topography for white-to-white (WTW) corneal diameter measurement. Pentacam was used to measure WTW corneal diameter and minimum and maximum keratometry (K). Measurements of STS and angle-to-angle (ATA) were obtained using a 50-MHz B-mode ultrasound device. Anterior optical coherence tomography was performed for anterior chamber depth measurement. Pearson's correlation test and stepwise linear regression analysis were used to find a model to predict STS. Fifty-eight eyes of 58 patients were enrolled. The mean age ± standard deviation of the sample was 28.95 ± 6.04 years. The Pearson's correlation coefficients between STS and WTW, ATA, and mean K were 0.383, 0.492, and -0.353, respectively, all statistically significant (P < 0.001). Using stepwise linear regression analysis, there was a statistically significant association of STS with WTW (P = 0.011) and mean K (P = 0.025). The standardized coefficients were 0.323 and -0.284 for WTW and mean K, respectively. The stepwise linear regression equation was: STS = 9.549 + 0.518 WTW - 0.083 mean K. Based on our results, given the correlation of STS with WTW and mean K and the potential for direct and easy measurement of WTW and mean K, it seems that STS could be estimated from WTW and mean K in current IOL sizing protocols.
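
    The reported regression is easy to apply directly; the helper below (a hypothetical wrapper, not from the article) simply encodes the published equation, assuming WTW in millimeters and mean K in diopters (units inferred, not stated in this record):

        def predict_sts(wtw_mm, mean_k_d):
            """Sulcus-to-sulcus diameter from white-to-white and mean keratometry."""
            return 9.549 + 0.518 * wtw_mm - 0.083 * mean_k_d

        print(predict_sts(11.8, 43.5))  # hypothetical eye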

  8. Power-law statistics of neurophysiological processes analyzed using short signals

    NASA Astrophysics Data System (ADS)

    Pavlova, Olga N.; Runnova, Anastasiya E.; Pavlov, Alexey N.

    2018-04-01

    We discuss the problem of quantifying power-law statistics of complex processes from short signals. Based on the analysis of electroencephalograms (EEG), we compare three interrelated approaches which enable characterization of the power spectral density (PSD) and show that application of detrended fluctuation analysis (DFA) or the wavelet-transform modulus maxima (WTMM) method represents a useful way of indirectly characterizing PSD features from short data sets. We conclude that although DFA- and WTMM-based measures can be obtained from the estimated PSD, these tools outperform standard spectral analysis when the analyzed regime must be characterized from a very limited amount of data.
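
    For readers unfamiliar with DFA, a compact sketch is given below: integrate the mean-removed signal, detrend it in windows, and take the log-log slope of the fluctuation function. For stationary (fGn-like) signals the DFA exponent alpha relates to the PSD power-law exponent beta via beta = 2*alpha - 1. The window sizes and test signal are arbitrary choices, not the paper's settings.

        import numpy as np

        def dfa(x, scales):
            y = np.cumsum(x - x.mean())                    # integrated profile
            F = []
            for s in scales:
                rms = []
                for i in range(len(y) // s):
                    seg = y[i * s:(i + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
                    rms.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(rms)))            # fluctuation at scale s
            # scaling exponent alpha = slope of log F versus log s
            return np.polyfit(np.log(scales), np.log(F), 1)[0]

        print(dfa(np.random.default_rng(4).standard_normal(2048),
                  scales=[16, 32, 64, 128, 256]))          # ~0.5 for white noise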

  9. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Öztürk, Hande; Noyan, I. Cevdet

    A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.

  10. Expected values and variances of Bragg peak intensities measured in a nanocrystalline powder diffraction experiment

    DOE PAGES

    Öztürk, Hande; Noyan, I. Cevdet

    2017-08-24

    A rigorous study of sampling and intensity statistics applicable for a powder diffraction experiment as a function of crystallite size is presented. Our analysis yields approximate equations for the expected value, variance and standard deviations for both the number of diffracting grains and the corresponding diffracted intensity for a given Bragg peak. The classical formalism published in 1948 by Alexander, Klug & Kummer [J. Appl. Phys. (1948), 19, 742–753] appears as a special case, limited to large crystallite sizes, here. It is observed that both the Lorentz probability expression and the statistics equations used in the classical formalism are inapplicable for nanocrystalline powder samples.

  11. Characterizing and Addressing the Need for Statistical Adjustment of Global Climate Model Data

    NASA Astrophysics Data System (ADS)

    White, K. D.; Baker, B.; Mueller, C.; Villarini, G.; Foley, P.; Friedman, D.

    2017-12-01

    As part of its mission to research and measure the effects of the changing climate, the U.S. Army Corps of Engineers (USACE) regularly uses the World Climate Research Programme's Coupled Model Intercomparison Project Phase 5 (CMIP5) multi-model dataset. However, these data are generated at a global level and are not fine-tuned for specific watersheds. This often causes CMIP5 output to vary from locally observed patterns in the climate. Several downscaling methods have been developed to increase the resolution of the CMIP5 data and decrease systemic differences to support decision-makers as they evaluate results at the watershed scale. Evaluating preliminary comparisons of observed and projected flow frequency curves over the US revealed a simple framework for water resources decision-makers to plan and design water resources management measures under changing conditions using standard tools. Using this framework as a basis, USACE has begun to explore the use of statistical adjustment to alter global climate model data to better match the locally observed patterns while preserving the general structure and behavior of the model data. When paired with careful measurement and hypothesis testing, statistical adjustment can be particularly effective at navigating the compromise between the locally observed patterns and the global climate model structures for decision makers.
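
    The record does not publish the USACE procedure; as a generic illustration of statistical adjustment, the sketch below applies empirical quantile mapping, which replaces each model value by the observed value at the same quantile. The gamma-distributed "flows" are hypothetical.

        import numpy as np

        rng = np.random.default_rng(5)
        obs = np.sort(rng.gamma(2.0, 10.0, 1000))     # observed flows (hypothetical)
        model = np.sort(rng.gamma(2.5, 8.0, 1000))    # GCM-driven flows (hypothetical)

        def quantile_map(x, model_ref, obs_ref):
            """Map model values onto the observed distribution, quantile by quantile."""
            q = np.searchsorted(model_ref, x) / len(model_ref)  # empirical quantile of x
            return np.quantile(obs_ref, q)                      # observed value at that quantile

        adjusted = quantile_map(model, model, obs)
        print(model.mean(), obs.mean(), adjusted.mean())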

  12. Multidimensional assessment of self-regulated learning with middle school math students.

    PubMed

    Callan, Gregory L; Cleary, Timothy J

    2018-03-01

    This study examined the convergent and predictive validity of self-regulated learning (SRL) measures situated in mathematics. The sample included 100 eighth graders from a diverse, urban school district. Four measurement formats were examined including, 2 broad-based (i.e., self-report questionnaire and teacher ratings) and 2 task-specific measures (i.e., SRL microanalysis and behavioral traces). Convergent validity was examined across task-difficulty, and the predictive validity was examined across 3 mathematics outcomes: 2 measures of mathematical problem solving skill (i.e., practice session math problems, posttest math problems) and a global measure of mathematical skill (i.e., standardized math test). Correlation analyses were used to examine convergent validity and revealed medium correlations between measures within the same category (i.e., broad-based or task-specific). Relations between measurement classes were not statistically significant. Separate regressions examined the predictive validity of the SRL measures. While controlling all other predictors, a SRL microanalysis metacognitive-monitoring measure emerged as a significant predictor of all 3 outcomes and teacher ratings accounted for unique variance on 2 of the outcomes (i.e., posttest math problems and standardized math test). Results suggest that a multidimensional assessment approach should be considered by school psychologists interested in measuring SRL. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. Approach for Self-Calibrating CO2 Measurements with Linear Membrane-Based Gas Sensors

    PubMed Central

    Lazik, Detlef; Sood, Pramit

    2016-01-01

    Linear membrane-based gas sensors, which can be advantageously applied for the measurement of a single gas component in large heterogeneous systems, e.g., for representative determination of CO2 in the subsurface, can be designed depending on the properties of the observation object. A resulting disadvantage is that the permeation-based sensor response depends on operating conditions, the individual site-adapted sensor geometry, the membrane material, and the target gas component. Therefore, calibration is needed, especially of the slope, which can change over several orders of magnitude. A calibration-free approach based on an internal gas standard is developed to overcome the multi-criterial slope dependency. This results in a normalization of the sensor response and enables the sensor to assess the significance of a measurement. The approach was tested on the example of CO2 analysis in dry air with tubular PDMS membranes for various CO2 concentrations of an internal standard. Negligible temperature dependency was found within an 18 K range. The transformation behavior of the measurement signal and the influence of concentration variations of the internal standard on the measurement signal were shown. Offsets that were adjusted based on the stated theory for the given measurement conditions and material data from the literature were in agreement with the experimentally determined offsets. A measurement comparison with an NDIR reference sensor showed an unexpectedly low bias (<1%) of the non-calibrated sensor response, and comparable statistical uncertainty. PMID:27869656

  14. A framework for the meta-analysis of Bland-Altman studies based on a limits of agreement approach.

    PubMed

    Tipton, Elizabeth; Shuster, Jonathan

    2017-10-15

    Bland-Altman method comparison studies are common in the medical sciences and are used to compare a new measure to a gold-standard (often costlier or more invasive) measure. The distribution of these differences is summarized by two statistics, the 'bias' and standard deviation, and these measures are combined to provide estimates of the limits of agreement (LoA). When these LoA are within the bounds of clinically insignificant differences, the new non-invasive measure is preferred. Very often, multiple Bland-Altman studies have been conducted comparing the same two measures, and random-effects meta-analysis provides a means to pool these estimates. We provide a framework for the meta-analysis of Bland-Altman studies, including methods for estimating the LoA and measures of uncertainty (i.e., confidence intervals). Importantly, these LoA are likely to be wider than those typically reported in Bland-Altman meta-analyses. Frequently, Bland-Altman studies report results based on repeated measures designs but do not properly adjust for this design in the analysis. Meta-analyses of Bland-Altman studies frequently exclude these studies for this reason. We provide a meta-analytic approach that allows inclusion of estimates from these studies. This includes adjustments to the estimate of the standard deviation and a method for pooling the estimates based upon robust variance estimation. An example is included based on a previously published meta-analysis. Copyright © 2017 John Wiley & Sons, Ltd.
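
    The single-study quantities that such a meta-analysis pools are the bias, the standard deviation of the paired differences, and the LoA (bias ± 1.96 SD); a minimal sketch with hypothetical paired measurements:

        import numpy as np

        new_method = np.array([5.1, 4.8, 6.0, 5.5, 4.9, 5.7])  # hypothetical pairs
        gold_std = np.array([5.0, 5.0, 5.8, 5.6, 4.7, 5.9])

        d = new_method - gold_std
        bias, sd = d.mean(), d.std(ddof=1)
        loa_lower, loa_upper = bias - 1.96 * sd, bias + 1.96 * sd
        print(bias, loa_lower, loa_upper)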

  15. Evaluation of the Eclipse eMC algorithm for bolus electron conformal therapy using a standard verification dataset.

    PubMed

    Carver, Robert L; Sprunger, Conrad P; Hogstrom, Kenneth R; Popple, Richard A; Antolak, John A

    2016-05-08

    The purpose of this study was to evaluate the accuracy and calculation speed of electron dose distributions calculated by the Eclipse electron Monte Carlo (eMC) algorithm for use with bolus electron conformal therapy (ECT). The recent commercial availability of bolus ECT technology requires further validation of the eMC dose calculation algorithm. eMC-calculated electron dose distributions for bolus ECT have been compared to previously measured TLD-dose points throughout patient-based cylindrical phantoms (retromolar trigone and nose), whose axial cross sections were based on the mid-PTV (planning treatment volume) CT anatomy. The phantoms consisted of SR4 muscle substitute, SR4 bone substitute, and air. The treatment plans were imported into the Eclipse treatment planning system, and electron dose distributions calculated using 1% and < 0.2% statistical uncertainties. The accuracy of the dose calculations using moderate smoothing and no smoothing was evaluated. Dose differences (eMC-calculated less measured dose) were evaluated in terms of absolute dose difference, where 100% equals the given dose, as well as distance to agreement (DTA). Dose calculations were also evaluated for calculation speed. Results from the eMC for the retromolar trigone phantom using 1% statistical uncertainty without smoothing showed calculated dose at 89% (41/46) of the measured TLD-dose points was within 3% dose difference or 3 mm DTA of the measured value. The average dose difference was -0.21%, and the net standard deviation was 2.32%. Differences as large as 3.7% occurred immediately distal to the mandible bone. Results for the nose phantom, using 1% statistical uncertainty without smoothing, showed calculated dose at 93% (53/57) of the measured TLD-dose points within 3% dose difference or 3 mm DTA. The average dose difference was 1.08%, and the net standard deviation was 3.17%. Differences as large as 10% occurred lateral to the nasal air cavities. Including smoothing had insignificant effects on the accuracy of the retromolar trigone phantom calculations, but reduced the accuracy of the nose phantom calculations in the high-gradient dose areas. Dose calculation times with 1% statistical uncertainty for the retromolar trigone and nose treatment plans were 30 s and 24 s, respectively, using 16 processors (Intel Xeon E5-2690, 2.9 GHz) on a framework agent server (FAS). In comparison, the eMC was significantly more accurate than the pencil beam algorithm (PBA). The eMC has comparable accuracy to the pencil beam redefinition algorithm (PBRA) used for bolus ECT planning and has acceptably low dose calculation times. The eMC accuracy decreased when smoothing was used in high-gradient dose regions. The eMC accuracy was consistent with that previously reported for the eMC electron dose algorithm and shows that the algorithm is suitable for clinical implementation of bolus ECT.

  16. Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.

    PubMed

    Satorra, Albert; Bentler, Peter M

    2010-06-01

    A scaled difference test statistic T̃d that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic T̃d is asymptotically equivalent to the scaled difference test statistic T̄d introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃d has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄d that avoids negative chi-square values.
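
    The Satorra and Bentler (2001) hand calculation that this note builds on fits in a few lines: with unscaled chi-squares T0 and T1, scaling corrections c0 and c1, and degrees of freedom d0 and d1 for the nested and comparison models, the scaled difference is (T0 - T1)/cd with cd = (d0*c0 - d1*c1)/(d0 - d1). The sketch below, with made-up inputs, shows how the statistic can turn negative when cd < 0, the problem the improved correction addresses.

        def scaled_difference(T0, T1, c0, c1, d0, d1):
            """Scaled chi-square difference test (Satorra & Bentler, 2001)."""
            cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # scaling correction of the difference
            return (T0 - T1) / cd                  # can be negative when cd < 0

        print(scaled_difference(T0=95.2, T1=80.1, c0=1.20, c1=1.15, d0=40, d1=35))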

  17. Calibration of equipment for analysis of drinking water fluoride: a comparison study.

    PubMed

    Quock, Ryan L; Chan, Jarvis T

    2012-03-01

    Current American Dental Association evidence-based recommendations for prescription of dietary fluoride supplements are based in part on the fluoride concentration of a pediatric patient's drinking water. With these recommendations in mind, this study compared the relative accuracy of fluoride concentration analysis when a common apparatus is calibrated with different combinations of standard values. Fluoride solutions in increments of 0.1 ppm, from a range of 0.1 to 1.0 ppm fluoride, as well as 2.0 and 4.0 ppm, were gravimetrically prepared and fluoride concentration measured in pentad, using a fluoride ion-specific electrode and millivolt meter. Fluoride concentrations of these solutions were recorded after calibration with the following 3 different combinations of standard fluoride solutions: 0.1 ppm and 0.5 ppm, 0.1 ppm and 1.0 ppm, 0.5 ppm and 1.0 ppm. Statistical analysis showed significant differences in the fluoride content of water samples obtained with different two-standard fluoride solutions. Among the two-standard fluoride solutions tested, using 0.5 ppm and 1.0 ppm as two-standard fluoride solutions provided the most accurate fluoride measurement of water samples containing fluoride in the range of 0.1 ppm to 4.0 ppm. This information should be valuable to dental clinics or laboratories in fluoride analysis of drinking water samples.
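
    As an illustration of the two-standard calibration being compared, the sketch below fits the Nernstian response E = E0 - S*log10(C) through two standards and inverts it for an unknown sample; the millivolt readings and function names are hypothetical.

        import math

        def calibrate(c1, e1, c2, e2):
            """Two-point calibration: slope S (mV/decade) and intercept E0."""
            S = (e1 - e2) / (math.log10(c2) - math.log10(c1))
            E0 = e1 + S * math.log10(c1)
            return E0, S

        def concentration(E, E0, S):
            """Invert E = E0 - S*log10(C) for the sample concentration."""
            return 10 ** ((E0 - E) / S)

        E0, S = calibrate(0.5, 120.3, 1.0, 103.1)  # 0.5 and 1.0 ppm standards (hypothetical mV)
        print(concentration(110.0, E0, S))          # unknown sample, ppm fluoride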

  18. An Independent Filter for Gene Set Testing Based on Spectral Enrichment.

    PubMed

    Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H

    2015-01-01

    Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.

  19. Therapeutic effect of acupuncture combining standard swallowing training for post-stroke dysphagia: A prospective cohort study.

    PubMed

    Mao, Li-Ya; Li, Li-Li; Mao, Zhong-Nan; Han, Yan-Ping; Zhang, Xiao-Ling; Yao, Jun-Xiao; Li, Ming

    2016-07-01

    To assess the therapeutic effect of acupuncture combined with standard swallowing training for patients with dysphagia after stroke. A total of 105 consecutively admitted patients with post-stroke dysphagia in the Affiliated Hospital of Gansu University of Chinese Medicine were included: 50 patients from the Department of Neurology and Rehabilitation received standard swallowing training and acupuncture treatment (acupuncture group); 55 patients from the Department of Neurology received standard swallowing training only (control group). Participants in both groups received 5-day therapy per week for a 4-week period. The primary outcome measures included the scores of the Videofluoroscopic Swallow Study (VFSS) and the Standardized Swallowing Assessment (SSA); the secondary outcome measure was the Royal Brisbane Hospital Outcome Measure for Swallowing (RBHOMS), all of which were assessed before and after the 4-week treatment. A total of 98 subjects completed the study (45 in the acupuncture group and 53 in the control group). Significant differences were seen in VFSS, SSA and RBHOMS scores in each group after 4-week treatment as compared with before treatment (P<0.01). Comparison between the groups after 4-week treatment showed that the VFSS (P=0.007) and SSA (P=0.000) scores were more significantly improved in the acupuncture group than in the control group. However, there was no statistical difference (P=0.710) between the acupuncture and control groups in RBHOMS scores. Acupuncture combined with standard swallowing training was an effective therapy for post-stroke dysphagia, and acupuncture therapy is worth further investigation in the treatment of post-stroke dysphagia.

  20. Cardiac arrest risk standardization using administrative data compared to registry data.

    PubMed

    Grossestreuer, Anne V; Gaieski, David F; Donnino, Michael W; Nelson, Joshua I M; Mutter, Eric L; Carr, Brendan G; Abella, Benjamin S; Wiebe, Douglas J

    2017-01-01

    Methods for comparing hospitals regarding cardiac arrest (CA) outcomes, vital for improving resuscitation performance, rely on data collected by cardiac arrest registries. However, most CA patients are treated at hospitals that do not participate in such registries. This study aimed to determine whether CA risk standardization modeling based on administrative data could perform as well as that based on registry data. Two risk standardization logistic regression models were developed using 2453 patients treated from 2000-2015 at three hospitals in an academic health system. Registry and administrative data were accessed for all patients. The outcome was death at hospital discharge. The registry model was considered the "gold standard" with which to compare the administrative model, using metrics including comparing areas under the curve, calibration curves, and Bland-Altman plots. The administrative risk standardization model had a c-statistic of 0.891 (95% CI: 0.876-0.905) compared to a registry c-statistic of 0.907 (95% CI: 0.895-0.919). When limited to only non-modifiable factors, the administrative model had a c-statistic of 0.818 (95% CI: 0.799-0.838) compared to a registry c-statistic of 0.810 (95% CI: 0.788-0.831). All models were well-calibrated. There was no significant difference between c-statistics of the models, providing evidence that valid risk standardization can be performed using administrative data. Risk standardization using administrative data performs comparably to standardization using registry data. This methodology represents a new tool that can enable opportunities to compare hospital performance in specific hospital systems or across the entire US in terms of survival after CA.
