ERIC Educational Resources Information Center
Henry, Gary T.; And Others
1992-01-01
A statistical technique is presented for developing performance standards based on benchmark groups. The benchmark groups are selected using a multivariate technique that relies on a squared Euclidean distance method. For each observation unit (a school district in the example), a unique comparison group is selected. (SLD)
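As a minimal sketch of this kind of benchmarking (not the authors' implementation), each observation unit can be matched to the k other units with the smallest squared Euclidean distance in a standardized covariate space; the covariates, value of k, and data below are illustrative assumptions.

```python
import numpy as np

def benchmark_groups(X, k=5):
    """For each row of X (districts x covariates), return the indices of the
    k nearest other rows under squared Euclidean distance on z-scored data."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)          # standardize covariates
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)   # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                              # exclude each unit from its own group
    return np.argsort(d2, axis=1)[:, :k]                      # k closest units per district

# Toy example: 100 districts described by 4 covariates (e.g., enrollment, spending, ...)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
groups = benchmark_groups(X, k=5)
print(groups[0])  # unique comparison group for district 0
```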
[The evaluation of costs: standards of medical care and clinical statistical groups].
Semenov, V Iu; Samorodskaia, I V
2014-01-01
The article presents a comparative analysis of techniques for evaluating the costs of hospital treatment using medical economic standards of medical care and clinical statistical groups. The technique of cost evaluation based on clinical statistical groups was developed almost fifty years ago and is widely applied in a number of countries. At present, payment for a completed case of treatment on the basis of medical economic standards is the main mode of payment for hospital care in Russia. It is only loosely a Russian analogue of the internationally prevalent system of diagnosis-related groups. The tariffs for these cases of treatment, as opposed to clinical statistical groups, are calculated on the basis of standards of provision of medical care approved by the Minzdrav of Russia; information derived from generalizing the cases of treatment of real patients is not used.
Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science
ERIC Educational Resources Information Center
Ju, Boryung; Jin, Tao
2013-01-01
Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…
Quality Space and Launch Requirements Addendum to AS9100C
2015-03-05
Table-of-contents and acronym fragments: 8.9.1 Statistical Process Control (SPC); 8.9.1.1 Out of Control...; SME, Subject Matter Expert; SOW, Statement of Work; SPC, Statistical Process Control; SPO, System Program Office; SRP, Standard Repair... Excerpt: "...individual data exceeding the control limits. Control limits are developed using standard statistical methods or other approved techniques and are based on..."
Code of Federal Regulations, 2010 CFR
2010-07-01
..., other techniques, such as the use of statistical models or the use of historical data could be..., mathematical techniques should be applied to account for the trends to ensure that the expected annual values... emission patterns, either the most recent representative year(s) could be used or statistical techniques or...
Gene Identification Algorithms Using Exploratory Statistical Analysis of Periodicity
NASA Astrophysics Data System (ADS)
Mukherjee, Shashi Bajaj; Sen, Pradip Kumar
2010-10-01
Studying periodic patterns is expected to be a standard line of attack for recognizing DNA sequences in gene identification and similar problems, yet surprisingly little significant work has been done in this direction. This paper studies statistical properties of complete-genome DNA sequences using a new technique. A DNA sequence is converted to a numeric sequence using various types of mappings, and the standard Fourier technique is applied to study the periodicity. Distinct statistical behaviour of the periodicity parameters is found in coding and non-coding sequences, which can be used to distinguish between these parts. Here DNA sequences of Drosophila melanogaster were analyzed with significant accuracy.
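A hedged sketch of the general approach described (indicator mapping of the bases followed by a standard Fourier transform), using an artificial period-3 sequence rather than the authors' Drosophila data or their exact mappings. In coding regions the well-known 3-base periodicity appears as excess power at the period-3 frequency, which is the kind of periodicity parameter that can separate coding from non-coding stretches.

```python
import numpy as np

def spectral_content(seq):
    """Map a DNA string to four binary indicator sequences and return the
    summed Fourier power spectrum (up to the Nyquist frequency)."""
    seq = seq.upper()
    n = len(seq)
    power = np.zeros(n // 2)
    for base in "ACGT":
        x = np.array([1.0 if c == base else 0.0 for c in seq])
        spec = np.abs(np.fft.fft(x - x.mean()))[: n // 2] ** 2
        power += spec / n
    return power

rng = np.random.default_rng(1)
# Toy "coding-like" sequence with an artificial 3-base repeat vs. a random sequence
coding_like = "ATG" * 300
random_seq = "".join(rng.choice(list("ACGT"), size=900))
for label, s in [("periodic", coding_like), ("random", random_seq)]:
    p = spectral_content(s)
    k3 = len(s) // 3          # index of the period-3 frequency component
    print(label, "power at period 3:", round(p[k3], 2))
```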
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on a standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
ERIC Educational Resources Information Center
Vivo, Juana-Maria; Franco, Manuel
2008-01-01
This article attempts to present a novel application of a method of measuring accuracy for academic success predictors that could be used as a standard. This procedure is known as the receiver operating characteristic (ROC) curve, which comes from statistical decision techniques. The statistical prediction techniques provide predictor models and…
The Effect of Student-Driven Projects on the Development of Statistical Reasoning
ERIC Educational Resources Information Center
Sovak, Melissa M.
2010-01-01
Research has shown that even if students pass a standard introductory statistics course, they often still lack the ability to reason statistically. Many instructional techniques for enhancing the development of statistical reasoning have been discussed, although there is often little to no experimental evidence that they produce effective results…
An Application of Indian Health Service Standards for Alcoholism Programs.
ERIC Educational Resources Information Center
Burns, Thomas R.
1984-01-01
Discusses Phoenix-area applications of 1981 Indian Health Service standards for alcoholism programs. Results of standard statistical techniques note areas of deficiency through application of a one-tailed z test at .05 level of significance. Factor analysis sheds further light on design of standards. Implications for revisions are suggested.…
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
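A hedged sketch comparing three of the interpolation methods named above (linear, piecewise cubic Hermite, cubic spline) plus Gaussian fitting on a single undersampled peak; the peak shape and sampling grid are illustrative assumptions, and the cross-correlation and Fourier zero-filling steps are omitted.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline
from scipy.optimize import curve_fit

def gaussian(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

# A coarsely sampled Gaussian peak standing in for one first-dimension peak
t_coarse = np.arange(0.0, 10.0, 1.0)
y_coarse = gaussian(t_coarse, 1.0, 5.2, 1.3)
t_fine = np.linspace(0.0, 9.0, 901)
truth = gaussian(t_fine, 1.0, 5.2, 1.3)

estimates = {
    "linear": np.interp(t_fine, t_coarse, y_coarse),
    "pchip": PchipInterpolator(t_coarse, y_coarse)(t_fine),
    "cubic spline": CubicSpline(t_coarse, y_coarse)(t_fine),
}
# Gaussian fitting: estimate the peak parameters, then evaluate on the fine grid
popt, _ = curve_fit(gaussian, t_coarse, y_coarse, p0=[1.0, 5.0, 1.0])
estimates["gaussian fit"] = gaussian(t_fine, *popt)

for name, est in estimates.items():
    rmse = np.sqrt(np.mean((est - truth) ** 2))
    print(f"{name:>12s}  RMSE vs. true peak: {rmse:.4f}")
```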
NCES Handbook of Survey Methods. NCES 2011-609
ERIC Educational Resources Information Center
Burns, Shelley, Ed.; Wang, Xiaolei, Ed.; Henning, Alexandra, Ed.
2011-01-01
Since its inception, the National Center for Education Statistics (NCES) has been committed to the practice of documenting its statistical methods for its customers and of seeking to avoid misinterpretation of its published data. The reason for this policy is to assure customers that proper statistical standards and techniques have been observed,…
Molenaar, Peter C M
2008-01-01
It is argued that general mathematical-statistical theorems imply that standard statistical analysis techniques of inter-individual variation are invalid to investigate developmental processes. Developmental processes have to be analyzed at the level of individual subjects, using time series data characterizing the patterns of intra-individual variation. It is shown that standard statistical techniques based on the analysis of inter-individual variation appear to be insensitive to the presence of arbitrary large degrees of inter-individual heterogeneity in the population. An important class of nonlinear epigenetic models of neural growth is described which can explain the occurrence of such heterogeneity in brain structures and behavior. Links with models of developmental instability are discussed. A simulation study based on a chaotic growth model illustrates the invalidity of standard analysis of inter-individual variation, whereas time series analysis of intra-individual variation is able to recover the true state of affairs. (c) 2007 Wiley Periodicals, Inc.
Statistical Evaluation of Time Series Analysis Techniques
NASA Technical Reports Server (NTRS)
Benignus, V. A.
1973-01-01
The performance of a modified version of NASA's multivariate spectrum analysis program is discussed. A multiple regression model was used to make the revisions. Performance improvements were documented and compared to the standard fast Fourier transform by Monte Carlo techniques.
Methods for trend analysis: Examples with problem/failure data
NASA Technical Reports Server (NTRS)
Church, Curtis K.
1989-01-01
Statistics play an important role in quality control and reliability. Consequently, the NASA standard Trend Analysis Techniques recommends a variety of statistical methodologies that can be applied to time series data. The major goal of this working handbook, using data from the MSFC Problem Assessment System, is to illustrate some of the techniques in the NASA standard, along with some additional techniques, and to highlight patterns in the data. The techniques used for trend estimation are regression (exponential, power, reciprocal, straight line) and Kendall's rank correlation coefficient. The important details of a statistical strategy for estimating a trend component are covered in the examples. However, careful analysis and interpretation are necessary because of small samples and frequent zero problem reports in a given time period. Further investigations to deal with these issues are being conducted.
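A minimal sketch of two of the trend-estimation techniques named (straight-line regression and Kendall's rank correlation coefficient) applied to synthetic monthly problem-report counts; the data are invented, not MSFC records.

```python
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(2)
months = np.arange(36)
# Synthetic monthly problem-report counts with a mild downward trend and many small/zero values
reports = rng.poisson(lam=np.clip(4.0 - 0.08 * months, 0.3, None))

# Straight-line trend by least squares
slope, intercept = np.polyfit(months, reports, deg=1)
print(f"fitted trend: {intercept:.2f} {slope:+.3f} reports per month")

# Kendall's rank correlation as a nonparametric check on a monotonic trend
tau, p_value = kendalltau(months, reports)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")
```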
Optimisation of radiation dose and image quality in mobile neonatal chest radiography.
Hinojos-Armendáriz, V I; Mejía-Rosales, S J; Franco-Cabrera, M C
2018-05-01
To optimise the radiation dose and image quality for chest radiography in the neonatal intensive care unit (NICU) by increasing the mean beam energy. Two techniques for the acquisition of NICU AP chest X-ray images were compared for image quality and radiation dose. 73 images were acquired using a standard technique (56 kV, 3.2 mAs and no additional filtration) and 90 images with a new technique (62 kV, 2 mAs and 2 mm Al filtration). The entrance surface air kerma (ESAK) was measured using a phantom and compared between the techniques and against established diagnostic reference levels (DRL). Images were evaluated using seven image quality criteria independently by three radiologists. Image quality and radiation dose were compared statistically between the standard and new techniques. The maximum ESAK for the new technique was 40.20 μGy, 43.7% of the ESAK of the standard technique. Statistical evaluation demonstrated no significant differences in image quality between the two acquisition techniques. Based on the techniques and acquisition factors investigated within this study, it is possible to lower the radiation dose without any significant effects on image quality by adding filtration (2 mm Al) and increasing the tube potential. Such steps are relatively simple to undertake and as such, other departments should consider testing and implementing this dose reduction strategy within clinical practice where appropriate. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
Safakish, Ramin
2017-01-01
Lower back pain (LBP) is a global public health issue associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following estimate is the closest for our patient population: sacroiliac (SI) joint pain is responsible for LBP in 18%-30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter; smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared. In addition, we compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques.
NASA Technical Reports Server (NTRS)
Navard, Sharon E.
1989-01-01
In recent years there has been a push within NASA to use statistical techniques to improve the quality of production. Two areas where statistics are used are in establishing product and process quality control of flight hardware and in evaluating the uncertainty of calibration of instruments. The Flight Systems Quality Engineering branch is responsible for developing and assuring the quality of all flight hardware; the statistical process control methods employed are reviewed and evaluated. The Measurement Standards and Calibration Laboratory performs the calibration of all instruments used on-site at JSC as well as those used by all off-site contractors. These calibrations must be performed in such a way as to be traceable to national standards maintained by the National Institute of Standards and Technology, and they must meet a four-to-one ratio of the instrument specifications to calibrating standard uncertainty. In some instances this ratio is not met, and in these cases it is desirable to compute the exact uncertainty of the calibration and determine ways of reducing it. A particular example where this problem is encountered is with a machine which does automatic calibrations of force. The process of force calibration using the United Force Machine is described in detail. The sources of error are identified and quantified when possible. Suggestions for improvement are made.
Shaikh, Muhammad Mujtaba; Memon, Abdul Jabbar; Hussain, Manzoor
2016-09-01
In this article, we describe details of the data used in the research paper "Confidence bounds for energy conservation in electric motors: An economical solution using statistical techniques" [1]. The data presented in this paper is intended to show benefits of high efficiency electric motors over the standard efficiency motors of similar rating in the industrial sector of Pakistan. We explain how the data was collected and then processed by means of formulas to show cost effectiveness of energy efficient motors in terms of three important parameters: annual energy saving, cost saving and payback periods. This data can be further used to construct confidence bounds for the parameters using statistical techniques as described in [1].
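A hedged sketch of how the three parameters mentioned (annual energy saving, cost saving, and payback period) can be computed for a high-efficiency versus standard-efficiency motor; every rating, tariff, and price below is a placeholder assumption, not the data of [1].

```python
def motor_savings(p_out_kw, eff_std, eff_high, hours_per_year,
                  tariff_per_kwh, price_premium):
    """Return annual energy saving (kWh), annual cost saving, and simple payback (years)."""
    energy_std = p_out_kw / eff_std * hours_per_year     # input energy, standard-efficiency motor
    energy_high = p_out_kw / eff_high * hours_per_year   # input energy, high-efficiency motor
    energy_saving = energy_std - energy_high
    cost_saving = energy_saving * tariff_per_kwh
    payback_years = price_premium / cost_saving
    return energy_saving, cost_saving, payback_years

# Placeholder figures for a 15 kW motor running 6000 h/year
saving_kwh, saving_cost, payback = motor_savings(
    p_out_kw=15.0, eff_std=0.89, eff_high=0.93,
    hours_per_year=6000, tariff_per_kwh=0.12, price_premium=400.0)
print(f"annual energy saving: {saving_kwh:.0f} kWh")
print(f"annual cost saving:   {saving_cost:.0f} (currency units)")
print(f"simple payback:       {payback:.1f} years")
```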
Economic and outcomes consequences of TachoSil®: a systematic review.
Colombo, Giorgio L; Bettoni, Daria; Di Matteo, Sergio; Grumi, Camilla; Molon, Cinzia; Spinelli, Daniela; Mauro, Gaetano; Tarozzo, Alessia; Bruno, Giacomo M
2014-01-01
TachoSil(®) is a medicated sponge coated with human fibrinogen and human thrombin. It is indicated as a support treatment in adult surgery to improve hemostasis, promote tissue sealing, and support sutures when standard surgical techniques are insufficient. This review systematically analyses the international scientific literature relating to the use of TachoSil in hemostasis and as a surgical sealant, from the point of view of its economic impact. We carried out a systematic review of the PubMed literature up to November 2013. Based on the selection criteria, papers were grouped according to the following outcomes: reduction of time to hemostasis; decrease in length of hospital stay; and decrease in postoperative complications. Twenty-four scientific papers were screened, 13 (54%) of which were randomized controlled trials and included a total of 2,116 patients, 1,055 of whom were treated with TachoSil. In the clinical studies carried out in patients undergoing hepatic, cardiac, or renal surgery, the time to hemostasis obtained with TachoSil was lower (1-4 minutes) than the time measured with other techniques and hemostatic drugs, with statistically significant differences. Moreover, in 13 of 15 studies, TachoSil showed a statistically significant reduction in postoperative complications in comparison with the standard surgical procedure. The range of the observed decrease in the length of hospital stay for TachoSil patients was 2.01-3.58 days versus standard techniques, with a statistically significant difference in favor of TachoSil in eight of 15 studies. This analysis shows that TachoSil has a role as a supportive treatment in surgery to improve hemostasis and promote tissue sealing when standard techniques are insufficient, with a consequent decrease in postoperative complications and hospital costs.
Belfort, Michael A; Whitehead, William E; Shamshirsaz, Alireza A; Bateni, Zhoobin H; Olutoye, Oluyinka O; Olutoye, Olutoyin A; Mann, David G; Espinoza, Jimmy; Williams, Erin; Lee, Timothy C; Keswani, Sundeep G; Ayres, Nancy; Cassady, Christopher I; Mehollin-Ray, Amy R; Sanz Cortes, Magdalena; Carreras, Elena; Peiro, Jose L; Ruano, Rodrigo; Cass, Darrell L
2017-04-01
To describe development of a two-port fetoscopic technique for spina bifida repair in the exteriorized, carbon dioxide-filled uterus and report early results of two cohorts of patients: the first 15 treated with an iterative technique and the latter 13 with a standardized technique. This was a retrospective cohort study (2014-2016). All patients met Management of Myelomeningocele Study selection criteria. The intraoperative approach was iterative in the first 15 patients and was then standardized. Obstetric, maternal, fetal, and early neonatal outcomes were compared. Standard parametric and nonparametric tests were used as appropriate. Data for 28 patients (22 endoscopic only, four hybrid, two abandoned) are reported, but only those with a complete fetoscopic repair were analyzed (iterative technique [n=10] compared with standardized technique [n=12]). Maternal demographics and gestational age (median [range]) at fetal surgery (25.4 [22.9-25.9] compared with 24.8 [24-25.6] weeks) were similar, but delivery occurred at 35.9 (26-39) weeks of gestation with the iterative technique compared with 39 (35.9-40) weeks of gestation with the standardized technique (P<.01). Duration of surgery (267 [107-434] compared with 246 [206-333] minutes), complication rates, preterm prelabor rupture of membranes rates (4/12 [33%] compared with 1/10 [10%]), and vaginal delivery rates (5/12 [42%] compared with 6/10 [60%]) were not statistically different in the iterative and standardized techniques, respectively. In 6 of 12 (50%) compared with 1 of 10 (10%), respectively (P=.07), there was leakage of cerebrospinal fluid from the repair site at birth. Management of Myelomeningocele Study criteria for hydrocephalus-death at discharge were met in 9 of 12 (75%) and 3 of 10 (30%), respectively, and 7 of 12 (58%) compared with 2 of 10 (20%) have been treated for hydrocephalus to date. These latter differences were not statistically significant. Fetoscopic open neural tube defect repair does not appear to increase maternal-fetal complications as compared with repair by hysterotomy, allows for vaginal delivery, and may reduce long-term maternal risks. ClinicalTrials.gov, https://clinicaltrials.gov, NCT02230072.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1988-01-01
This Standard presents descriptive and analytical techniques for NASA trend analysis applications. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. Use of this Standard is not mandatory; however, it should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend Analysis is neither a precise term nor a circumscribed methodology, but rather connotes, generally, quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this Standard. The document presents the basic ideas needed for qualitative and quantitative assessment of trends, together with relevant examples. A list of references provides additional sources of information.
Development of a technique for estimating noise covariances using multiple observers
NASA Technical Reports Server (NTRS)
Bundick, W. Thomas
1988-01-01
Friedland's technique for estimating the unknown noise variances of a linear system using multiple observers has been extended by developing a general solution for the estimates of the variances, developing the statistics (mean and standard deviation) of these estimates, and demonstrating the solution on two examples.
Weak value amplification considered harmful
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-03-01
We show using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of parameter estimation and signal detection. We show that using all data and considering the joint distribution of all measurement outcomes yields the optimal estimator. Moreover, we show that estimation using the maximum likelihood technique with weak values as small as possible produces better performance for quantum metrology. In doing so, we identify the optimal experimental arrangement to be the one which reveals the maximal eigenvalue of the square of system observables. We also show that these conclusions do not change in the presence of technical noise.
A study of the feasibility of statistical analysis of airport performance simulation
NASA Technical Reports Server (NTRS)
Myers, R. H.
1982-01-01
The feasibility of conducting a statistical analysis of simulation experiments to study airport capacity is investigated. First, the form of the distribution of airport capacity is studied. Since the distribution is non-Gaussian, it is important to determine the effect of this distribution on standard analysis of variance techniques and power calculations. Next, power computations are made in order to determine how economical simulation experiments would be if they are designed to detect capacity changes from condition to condition. Many of the conclusions drawn are results of Monte-Carlo techniques.
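A minimal Monte Carlo power sketch in the spirit of the study: simulate skewed (non-Gaussian) capacity samples under two conditions and estimate the power of a standard two-sample t test to detect a shift; the distribution, shift sizes, and sample size are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind

def power_estimate(shift, n=30, reps=5000, alpha=0.05, seed=3):
    """Fraction of Monte Carlo replications in which a t test detects a
    capacity shift between two skewed (gamma-distributed) conditions."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        a = rng.gamma(shape=4.0, scale=10.0, size=n)           # condition 1
        b = rng.gamma(shape=4.0, scale=10.0, size=n) + shift   # condition 2, shifted capacity
        if ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / reps

for shift in (0.0, 5.0, 10.0):
    print(f"shift = {shift:4.1f}  estimated power = {power_estimate(shift):.3f}")
```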
Analytics for Cyber Network Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Plantenga, Todd.; Kolda, Tamara Gibson
2011-06-01
This report provides a brief survey of analytics tools considered relevant to cyber network defense (CND). Ideas and tools come from fields such as statistics, data mining, and knowledge discovery. Some analytics are considered standard mathematical or statistical techniques, while others reflect current research directions. In all cases the report attempts to explain the relevance to CND with brief examples.
Probability sampling in legal cases: Kansas cellphone users
NASA Astrophysics Data System (ADS)
Kadane, Joseph B.
2012-10-01
Probability sampling is a standard statistical technique. This article introduces the basic ideas of probability sampling, and shows in detail how probability sampling was used in a particular legal case.
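A minimal sketch of simple random (probability) sampling and the resulting estimate with its standard error; the population size and true proportion are invented, not figures from the legal case.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical population of cellphone accounts, 37% having some attribute of interest
N = 100_000
population = rng.random(N) < 0.37

# Simple random sample without replacement: every account has equal inclusion probability
n = 1_000
sample = rng.choice(population, size=n, replace=False)

p_hat = sample.mean()
se = np.sqrt(p_hat * (1 - p_hat) / n) * np.sqrt(1 - n / N)  # with finite-population correction
print(f"estimated proportion: {p_hat:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```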
Sun, Zong-ke; Wu, Rong; Ding, Pei; Xue, Jin-Rong
2006-07-01
To compare the rapid enzyme substrate technique with the multiple-tube fermentation technique for the detection of coliform bacteria in water. Inoculated and real water samples were used to compare the equivalence and false-positive rates of the two methods. The results demonstrate that the enzyme substrate technique is equivalent to the multiple-tube fermentation technique (P = 0.059), and the false-positive rates of the two methods show no statistically significant difference. It is suggested that the enzyme substrate technique can be used as a standard method for evaluating the microbiological safety of water.
Han, Kihwan; Kwon, Hyuk Joon; Choi, Tae Hyun; Kim, Jun Hyung; Son, Daegu
2010-03-01
The aim of this study was to standardize clinical photogrammetric techniques, and to compare anthropometry with photogrammetry. To standardize clinical photography, we have developed a photographic cephalostat and chair. We investigated the repeatability of the standardized clinical photogrammetric technique. Then, with 40 landmarks, a total of 96 anthropometric measurement items was obtained from 100 Koreans. Ninety six photogrammetric measurements from the same subjects were also obtained from standardized clinical photographs using Adobe Photoshop version 7.0 (Adobe Systems Corporation, San Jose, CA, USA). The photogrammetric and anthropometric measurement data (mm, degree) were then compared. A coefficient was obtained by dividing the anthropometric measurements by the photogrammetric measurements. The repeatability of the standardized photography was statistically significantly high (p=0.463). Among the 96 measurement items, 44 items were reliable; for these items the photogrammetric measurements were not different to the anthropometric measurements. The remaining 52 items must be classified as unreliable. By developing a photographic cephalostat and chair, we have standardized clinical photogrammetric techniques. The reliable set of measurement items can be used as anthropometric measurements. For unreliable measurement items, applying a suitable coefficient to the photogrammetric measurement allows the anthropometric measurement to be obtained indirectly.
Statistical segmentation of multidimensional brain datasets
NASA Astrophysics Data System (ADS)
Desco, Manuel; Gispert, Juan D.; Reig, Santiago; Santos, Andres; Pascau, Javier; Malpica, Norberto; Garcia-Barreno, Pedro
2001-07-01
This paper presents an automatic segmentation procedure for MRI neuroimages that overcomes some of the problems involved in multidimensional clustering techniques, such as partial volume effects (PVE), processing speed, and the difficulty of incorporating a priori knowledge. The method is a three-stage procedure: 1) Exclusion of background and skull voxels using threshold-based region growing techniques with fully automated seed selection. 2) Expectation Maximization algorithms are used to estimate the probability density function (PDF) of the remaining pixels, which are assumed to be mixtures of Gaussians. These pixels can then be classified into cerebrospinal fluid (CSF), white matter and grey matter. Using this procedure, our method takes advantage of using the full covariance matrix (instead of the diagonal) for the joint PDF estimation. On the other hand, logistic discrimination techniques are more robust against violation of multi-Gaussian assumptions. 3) A priori knowledge is added using Markov Random Field techniques. The algorithm has been tested with a dataset of 30 brain MRI studies (co-registered T1 and T2 MRI). Our method was compared with clustering techniques and with template-based statistical segmentation, using manual segmentation as a gold-standard. Our results were more robust and closer to the gold-standard.
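A hedged illustration of stage 2 only (Expectation Maximization fitting of a Gaussian mixture to voxel intensities), using scikit-learn and synthetic one-dimensional intensities in place of co-registered T1/T2 data:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Synthetic intensities for three tissue classes (stand-ins for CSF, grey, and white matter)
csf = rng.normal(30, 5, 2000)
grey = rng.normal(70, 8, 5000)
white = rng.normal(110, 6, 4000)
intensities = np.concatenate([csf, grey, white]).reshape(-1, 1)

# EM fit of a 3-component Gaussian mixture; full covariance is trivial in 1-D, but the
# same call handles joint multichannel features (e.g., stacked T1/T2 intensities).
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
labels = gmm.fit_predict(intensities)

for k in range(3):
    print(f"class {k}: mean intensity {gmm.means_[k, 0]:6.1f}, weight {gmm.weights_[k]:.2f}")
```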
Lu, Z. Q. J.; Lowhorn, N. D.; Wong-Ng, W.; Zhang, W.; Thomas, E. L.; Otani, M.; Green, M. L.; Tran, T. N.; Caylor, C.; Dilley, N. R.; Downey, A.; Edwards, B.; Elsner, N.; Ghamaty, S.; Hogan, T.; Jie, Q.; Li, Q.; Martin, J.; Nolas, G.; Obara, H.; Sharp, J.; Venkatasubramanian, R.; Willigan, R.; Yang, J.; Tritt, T.
2009-01-01
In an effort to develop a Standard Reference Material (SRM™) for Seebeck coefficient, we have conducted a round-robin measurement survey of two candidate materials—undoped Bi2Te3 and Constantan (55 % Cu and 45 % Ni alloy). Measurements were performed in two rounds by twelve laboratories involved in active thermoelectric research using a number of different commercial and custom-built measurement systems and techniques. In this paper we report the detailed statistical analyses on the interlaboratory measurement results and the statistical methodology for analysis of irregularly sampled measurement curves in the interlaboratory study setting. Based on these results, we have selected Bi2Te3 as the prototype standard material. Once available, this SRM will be useful for future interlaboratory data comparison and instrument calibrations. PMID:27504212
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
UNCERTAINTY ON RADIATION DOSES ESTIMATED BY BIOLOGICAL AND RETROSPECTIVE PHYSICAL METHODS.
Ainsbury, Elizabeth A; Samaga, Daniel; Della Monaca, Sara; Marrale, Maurizio; Bassinet, Celine; Burbidge, Christopher I; Correcher, Virgilio; Discher, Michael; Eakins, Jon; Fattibene, Paola; Güçlü, Inci; Higueras, Manuel; Lund, Eva; Maltar-Strmecki, Nadica; McKeever, Stephen; Rääf, Christopher L; Sholom, Sergey; Veronese, Ivan; Wieser, Albrecht; Woda, Clemens; Trompier, Francois
2018-03-01
Biological and physical retrospective dosimetry are recognised as key techniques to provide individual estimates of dose following unplanned exposures to ionising radiation. Whilst there has been a relatively large amount of recent development in the biological and physical procedures, development of statistical analysis techniques has failed to keep pace. The aim of this paper is to review the current state of the art in uncertainty analysis techniques across the 'EURADOS Working Group 10-Retrospective dosimetry' members, to give concrete examples of implementation of the techniques recommended in the international standards, and to further promote the use of Monte Carlo techniques to support characterisation of uncertainties. It is concluded that sufficient techniques are available and in use by most laboratories for acute, whole body exposures to highly penetrating radiation, but further work will be required to ensure that statistical analysis is always wholly sufficient for the more complex exposure scenarios.
Summary Report on NRL Participation in the Microwave Landing System Program.
1980-08-19
...shifters were measured and statistically analyzed. Several research contracts for promising phased array techniques were awarded to industrial contractors... A program was written for compiling statistical data on the measurements, which reads out insertion phase characteristics and standard deviation... Glossary of terms (fragment): ALPA, Airline Pilots' Association; ATA, Air Transport Association; AWA, Australiasian Wireless Amalgamated; AWOP, All-weather Operations...
ERIC Educational Resources Information Center
Fouladi, Rachel T.
2000-01-01
Provides an overview of standard and modified normal theory and asymptotically distribution-free covariance and correlation structure analysis techniques and details Monte Carlo simulation results on Type I and Type II error control. Demonstrates through the simulation that robustness and nonrobustness of structure analysis techniques vary as a…
Code of Federal Regulations, 2011 CFR
2011-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2010 CFR
2010-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2012 CFR
2012-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2013 CFR
2013-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
Code of Federal Regulations, 2014 CFR
2014-01-01
.... agricultural and rural economy. (2) Administering a methodological research program to improve agricultural... design and data collection methodologies to the agricultural statistics program. Major functions include...) Designing, testing, and establishing survey techniques and standards, including sample design, sample...
NASA Technical Reports Server (NTRS)
Bollman, W. E.; Chadwick, C.
1982-01-01
A number of interplanetary missions now being planned involve placing deterministic maneuvers along the flight path to alter the trajectory. Lee and Boain (1973) examined the statistics of trajectory correction maneuver (TCM) magnitude with no deterministic ('bias') component. The Delta v vector magnitude statistics were generated for several values of random Delta v standard deviations using expansions in terms of infinite hypergeometric series. The present investigation uses a different technique (Monte Carlo simulation) to generate Delta v magnitude statistics for a wider selection of random Delta v standard deviations and also extends the analysis to the case of nonzero deterministic Delta v's. These Delta v magnitude statistics are plotted parametrically. The plots are useful in assisting the analyst in quickly answering questions about the statistics of Delta v magnitude for single TCM's consisting of both a deterministic and a random component. The plots provide quick insight into the nature of the Delta v magnitude distribution for the TCM.
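A minimal Monte Carlo sketch of the Delta v magnitude statistics described: a fixed deterministic Delta v vector plus a zero-mean Gaussian random component on each axis, summarized by percentiles of the resulting magnitude; the numbers are illustrative, not mission values.

```python
import numpy as np

def dv_magnitude_stats(dv_det, sigma, n=200_000, seed=6):
    """Monte Carlo statistics of |dv_det + random|, with random ~ N(0, sigma^2 I)."""
    rng = np.random.default_rng(seed)
    random_part = rng.normal(0.0, sigma, size=(n, 3))
    total = np.asarray(dv_det) + random_part
    mag = np.linalg.norm(total, axis=1)
    return np.mean(mag), np.percentile(mag, [50, 95, 99])

# Example: 2 m/s deterministic maneuver along x, 0.5 m/s per-axis random execution error
mean_mag, (p50, p95, p99) = dv_magnitude_stats([2.0, 0.0, 0.0], sigma=0.5)
print(f"mean |dv| = {mean_mag:.2f} m/s; median {p50:.2f}, 95th {p95:.2f}, 99th {p99:.2f}")
```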
Dark matter constraints from a joint analysis of dwarf Spheroidal galaxy observations with VERITAS
Archambault, S.; Archer, A.; Benbow, W.; ...
2017-04-05
We present constraints on the annihilation cross section of weakly interacting massive particle (WIMP) dark matter based on the joint statistical analysis of four dwarf galaxies with VERITAS. These results are derived from an optimized photon-weighting statistical technique that improves on standard imaging atmospheric Cherenkov telescope (IACT) analyses by utilizing the spectral and spatial properties of individual photon events.
The Importance of Practice in the Development of Statistics.
1983-01-01
NRC Technical Summary Report #2471: The Importance of Practice in the Development of Statistics. ...component analysis, bioassay, limits for a ratio, quality control, sampling inspection, non-parametric tests, transformation theory, ARIMA time series models, sequential tests, cumulative sum charts, data analysis plotting techniques, and a resolution of the Bayes-frequentist controversy. It appears...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solaimani, Mohiuddin; Iftekhar, Mohammed; Khan, Latifur
Anomaly detection refers to the identification of an irregular or unusual pattern which deviates from what is standard, normal, or expected. Such deviated patterns typically correspond to samples of interest and are assigned different labels in different domains, such as outliers, anomalies, exceptions, or malware. Detecting anomalies in fast, voluminous streams of data is a formidable challenge. This paper presents a novel, generic, real-time distributed anomaly detection framework for heterogeneous streaming data where anomalies appear as a group. We have developed a distributed statistical approach to build a model and later use it to detect anomalies. As a case study, we investigate group anomaly detection for a VMware-based cloud data center, which maintains a large number of virtual machines (VMs). We have built our framework using Apache Spark to get higher throughput and lower data processing time on streaming data. We have developed a window-based statistical anomaly detection technique to detect anomalies that appear sporadically. We then relaxed this constraint with higher accuracy by implementing a cluster-based technique to detect sporadic and continuous anomalies. We conclude that our cluster-based technique outperforms other statistical techniques with higher accuracy and lower processing time.
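A minimal sketch of a window-based statistical detector of the kind described (plain Python, not the authors' Spark implementation): keep a sliding window of recent values and flag points far from the window mean in units of the window standard deviation. The window length, threshold, and metric stream are illustrative assumptions.

```python
import numpy as np
from collections import deque

def window_anomalies(stream, window=50, threshold=3.0):
    """Yield (index, value) for points more than `threshold` window standard
    deviations away from the sliding-window mean."""
    buf = deque(maxlen=window)
    for i, x in enumerate(stream):
        if len(buf) == window:
            mu, sd = np.mean(buf), np.std(buf)
            if sd > 0 and abs(x - mu) > threshold * sd:
                yield i, x
        buf.append(x)

rng = np.random.default_rng(7)
cpu_usage = rng.normal(40, 5, size=2000)       # synthetic VM metric stream
cpu_usage[700:705] += 60                       # inject a short anomalous burst
for idx, val in window_anomalies(cpu_usage):
    print(f"anomaly at t={idx}: value {val:.1f}")
```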
On statistical inference in time series analysis of the evolution of road safety.
Commandeur, Jacques J F; Bijleveld, Frits D; Bergel-Hayat, Ruth; Antoniou, Constantinos; Yannis, George; Papadimitriou, Eleonora
2013-11-01
Data collected for building a road safety observatory usually include observations made sequentially through time. Examples of such data, called time series data, include annual (or monthly) number of road traffic accidents, traffic fatalities or vehicle kilometers driven in a country, as well as the corresponding values of safety performance indicators (e.g., data on speeding, seat belt use, alcohol use, etc.). Some commonly used statistical techniques imply assumptions that are often violated by the special properties of time series data, namely serial dependency among disturbances associated with the observations. The first objective of this paper is to demonstrate the impact of such violations to the applicability of standard methods of statistical inference, which leads to an under or overestimation of the standard error and consequently may produce erroneous inferences. Moreover, having established the adverse consequences of ignoring serial dependency issues, the paper aims to describe rigorous statistical techniques used to overcome them. In particular, appropriate time series analysis techniques of varying complexity are employed to describe the development over time, relating the accident-occurrences to explanatory factors such as exposure measures or safety performance indicators, and forecasting the development into the near future. Traditional regression models (whether they are linear, generalized linear or nonlinear) are shown not to naturally capture the inherent dependencies in time series data. Dedicated time series analysis techniques, such as the ARMA-type and DRAG approaches are discussed next, followed by structural time series models, which are a subclass of state space methods. The paper concludes with general recommendations and practice guidelines for the use of time series models in road safety research. Copyright © 2012 Elsevier Ltd. All rights reserved.
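A small simulation of the paper's first point: for positively autocorrelated (AR(1)) observations, the textbook i.i.d. standard error of the mean understates the true sampling variability. The AR(1) parameters are illustrative.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(8)
phi, n, reps = 0.7, 40, 20_000                 # AR(1) coefficient, series length, replications

e = rng.normal(0.0, 1.0, size=(reps, n))
x = lfilter([1.0], [1.0, -phi], e, axis=1)     # x[t] = phi*x[t-1] + e[t]: serially dependent data

means = x.mean(axis=1)
naive_se = (x.std(axis=1, ddof=1) / np.sqrt(n)).mean()   # standard error assuming independence

print(f"true sd of the sample mean  : {means.std():.3f}")
print(f"average naive standard error: {naive_se:.3f}")
# The naive value is much smaller: ignoring serial dependency understates the uncertainty.
```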
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimation of both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
Niglas, Mark; McCann, Claire; Keller, Brian M; Makhani, Nadiya; Presutti, Joseph; Vesprini, Danny; Rakovitch, Eileen; Elzibak, Alyaa; Mashouf, Shahram; Lee, Justin
2016-01-01
Breath-hold techniques can reduce cardiac dose in breast radiotherapy. The reverse semi-decubitus (RSD) technique is an alternative free-breathing method used at our centre. This study compares the dosimetry of free-breathing supine, RSD and moderate deep inspiration breath-hold (mDIBH) techniques. Twelve patients with left-sided breast cancer who were simulated using standard supine, RSD and mDIBH techniques were identified retrospectively. New plans using standard breast tangents and techniques for internal mammary chain (IMC) nodal coverage were assessed. Using standard tangents, mean heart dose, heart V25Gy and mean left anterior descending artery (LAD) dose were found to be significantly lower for RSD and mDIBH when compared to free-breathing supine (p ⩽ 0.03). Using wide-tangents, the maximum LAD point dose was also lower for RSD and mDIBH (p ⩽ 0.02). There were no statistically significant dosimetric differences found between the RSD and mDIBH simulation techniques for standard breast-tangent plans, though organ-at-risk doses were lower for mDIBH in wide-tangent plans. There was no improvement in cardiac dosimetry between RSD and free-breathing supine when using an electron field IMC plan. For patients unable to tolerate breath-hold, the RSD technique is an alternative approach that can reduce cardiac dose. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Incorporating principal component analysis into air quality model evaluation
The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis...
Study of statistical coding for digital TV
NASA Technical Reports Server (NTRS)
Gardenhire, L. W.
1972-01-01
The results are presented for a detailed study to determine a pseudo-optimum statistical code to be installed in a digital TV demonstration test set. Studies of source encoding were undertaken, using redundancy removal techniques in which the picture is reproduced within a preset tolerance. A method of source encoding, which preliminary studies show to be encouraging, is statistical encoding. A pseudo-optimum code was defined and the associated performance of the code was determined. The format was fixed at 525 lines per frame, 30 frames per second, as per commercial standards.
NASA standard: Trend analysis techniques
NASA Technical Reports Server (NTRS)
1990-01-01
Descriptive and analytical techniques for NASA trend analysis applications are presented in this standard. Trend analysis is applicable in all organizational elements of NASA connected with, or supporting, developmental/operational programs. This document should be consulted for any data analysis activity requiring the identification or interpretation of trends. Trend analysis is neither a precise term nor a circumscribed methodology: it generally connotes quantitative analysis of time-series data. For NASA activities, the appropriate and applicable techniques include descriptive and graphical statistics, and the fitting or modeling of data by linear, quadratic, and exponential models. Usually, but not always, the data is time-series in nature. Concepts such as autocorrelation and techniques such as Box-Jenkins time-series analysis would only rarely apply and are not included in this document. The basic ideas needed for qualitative and quantitative assessment of trends along with relevant examples are presented.
Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons
2014-01-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829
NASA Astrophysics Data System (ADS)
Machicoane, Nathanaël; López-Caballero, Miguel; Bourgoin, Mickael; Aliseda, Alberto; Volk, Romain
2017-10-01
We present a method to improve the accuracy of velocity measurements for fluid flow or particles immersed in it, based on a multi-time-step approach that allows for cancellation of noise in the velocity measurements. Improved velocity statistics, a critical element in turbulent flow measurements, can be computed from the combination of the velocity moments computed using standard particle tracking velocimetry (PTV) or particle image velocimetry (PIV) techniques for data sets that have been collected over different values of time intervals between images. This method produces Eulerian velocity fields and Lagrangian velocity statistics with much lower noise levels compared to standard PIV or PTV measurements, without the need of filtering and/or windowing. Particle displacement between two frames is computed for multiple different time-step values between frames in a canonical experiment of homogeneous isotropic turbulence. The second order velocity structure function of the flow is computed with the new method and compared to results from traditional measurement techniques in the literature. Increased accuracy is also demonstrated by comparing the dissipation rate of turbulent kinetic energy measured from this function against previously validated measurements.
Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Lauzon, N.; Lence, B. J.
2002-12-01
This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. Indeed, for the same database, different sets of clusters can be established with these calibration procedures. A simple method for analyzing uncertainties associated with the Kohonen neural network and fuzzy c-means is developed here. The method combines the results from several sets of clusters, either from the Kohonen neural network or fuzzy c-means, so as to provide an overall diagnosis as to the identification of outliers, shifts and trends. The results indicate an improvement in the performance for identifying anomalies when the method of combining cluster sets is used, compared with when only one cluster set is used.
NASA Astrophysics Data System (ADS)
Wang, Hao; Wang, Qunwei; He, Ming
2018-05-01
In order to investigate and improve the level of detection technology for water content in liquid chemical reagents in domestic laboratories, proficiency testing provider PT0031 (CNAS) organized a proficiency testing program for water content in toluene; 48 laboratories from 18 provinces, cities, and municipalities took part in the PT. This paper introduces the implementation process of the proficiency testing for determination of water content in toluene, including sample preparation, homogeneity and stability testing, and the statistical analysis of results using the iterative robust statistics technique. It summarizes and analyzes the results obtained with the different test standards widely used in the laboratories and puts forward technical suggestions for improving the quality of water content testing. Satisfactory results were obtained by 43 laboratories, amounting to 89.6% of the total participating laboratories.
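A hedged sketch of an iterative Huber-type robust mean and standard deviation in the spirit of the robust statistics commonly used in proficiency testing (e.g., ISO 13528 Algorithm A); this is an illustration with invented laboratory results, not the provider's exact procedure or data.

```python
import numpy as np

def robust_mean_sd(x, tol=1e-6, max_iter=100):
    """Iterative Huber-type robust mean and sd (in the spirit of ISO 13528
    Algorithm A): winsorize at +/- 1.5 s* and update until convergence."""
    x = np.asarray(x, dtype=float)
    x_star = np.median(x)
    s_star = 1.483 * np.median(np.abs(x - x_star))
    for _ in range(max_iter):
        delta = 1.5 * s_star
        w = np.clip(x, x_star - delta, x_star + delta)   # pull in extreme results
        new_x, new_s = w.mean(), 1.134 * w.std(ddof=1)
        if abs(new_x - x_star) < tol and abs(new_s - s_star) < tol:
            break
        x_star, s_star = new_x, new_s
    return x_star, s_star

# Invented water-content results (mg/kg) from participating laboratories, with one outlier
results = [152.1, 149.8, 151.0, 150.4, 153.2, 148.9, 150.7, 171.5, 149.5, 151.9]
assigned, sd = robust_mean_sd(results)
z_scores = (np.array(results) - assigned) / sd
print(f"assigned value {assigned:.1f}, robust sd {sd:.1f}")
print("z-scores:", np.round(z_scores, 1))
```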
Statistical methodology: II. Reliability and validity assessment in study design, Part B.
Karras, D J
1997-02-01
Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.
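A minimal illustration of the kappa statistic mentioned above for categorical agreement, using invented ratings and scikit-learn:

```python
from sklearn.metrics import cohen_kappa_score

# Invented categorical results for the same 12 cases from a new test and a reference test
new_test  = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
reference = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg", "pos", "neg"]

kappa = cohen_kappa_score(new_test, reference)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level agreement
```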
Sci-Thur PM - Colourful Interactions: Highlights 08: ARC TBI using Single-Step Optimized VMAT Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hudson, Alana; Gordon, Deborah; Moore, Roseanne
Purpose: This work outlines a new TBI delivery technique to replace a lateral POP full bolus technique. The new technique is done with VMAT arc delivery, without bolus, treating the patient prone and supine. The benefits of the arc technique include: improved patient experience and safety, better dose conformity, better organ-at-risk sparing, decreased therapist time and reduction of therapist injuries. Methods: In this work we build on a technique developed by Jahnke et al. We use standard arc fields with gantry speeds corrected for varying distance to the patient, followed by a single-step VMAT optimization on a patient CT to improve dose homogeneity and to reduce dose to the lungs (vs. blocks). To compare the arc TBI technique to our full bolus technique, we produced plans on patient CTs for both techniques and evaluated several dosimetric parameters using an ANOVA test. Results and Conclusions: The arc technique is able to reduce both the hot areas to the body (D2% reduced from 122.2% to 111.8%, p<0.01) and the lungs (mean lung dose reduced from 107.5% to 99.1%, p<0.01), both statistically significant, while maintaining coverage (D98% = 97.8% vs. 94.6%, p=0.313, not statistically significant). We developed a more patient- and therapist-friendly TBI treatment technique that utilizes single-step optimized VMAT plans. It was found that this technique was dosimetrically equivalent to our previous lateral technique in terms of coverage and statistically superior in terms of reduced lung dose.
Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.
2017-01-01
Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from 18F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883
Fujioka, Kouki; Shimizu, Nobuo; Manome, Yoshinobu; Ikeda, Keiichi; Yamamoto, Kenji; Tomizawa, Yasuko
2013-01-01
Electronic noses have the benefit of obtaining smell information in a simple and objective manner; therefore, many applications have been developed for broad analysis areas such as food, drinks, cosmetics, medicine, and agriculture. However, measurement values from electronic noses have a tendency to vary under humidity or alcohol exposure conditions, since several types of sensors in the devices are affected by such variables. Consequently, we show three techniques for reducing the variation of sensor values: (1) using a trapping system to reduce the interfering components; (2) performing statistical standardization (calculation of z-scores); and (3) selecting suitable sensors. With these techniques, we discriminated the volatiles of four types of fresh mushrooms: golden needle (Flammulina velutipes), white mushroom (Agaricus bisporus), shiitake (Lentinus edodes), and eryngii (Pleurotus eryngii) among six fresh mushrooms (hen of the woods (Grifola frondosa) and shimeji (Hypsizygus marmoreus) plus the above mushrooms). Additionally, we succeeded in discriminating white mushroom when comparing it only with artificial mushroom flavors, such as champignon flavor and truffle flavor. In conclusion, our techniques will expand the options to reduce variations in sensor values. PMID:24233028
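The statistical standardization in step (2) is simply a per-sensor z-score. A minimal sketch, assuming the raw responses are arranged as a samples-by-sensors array with invented values, might look like this:

```python
import numpy as np

# Hypothetical raw responses from an electronic nose: rows are samples, columns are sensors.
raw = np.array([
    [812.0, 1.95, 430.0],
    [798.0, 2.10, 455.0],
    [905.0, 1.80, 520.0],
    [870.0, 2.30, 498.0],
])

# Statistical standardization (z-score) per sensor: subtract the column mean and divide
# by the column standard deviation, so sensors with different scales and drifting
# baselines become comparable.
z = (raw - raw.mean(axis=0)) / raw.std(axis=0, ddof=1)
print(z.round(2))
```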
Eng, Kevin H; Schiller, Emily; Morrell, Kayla
2015-11-03
Researchers developing biomarkers for cancer prognosis from quantitative gene expression data are often faced with an odd methodological discrepancy: while Cox's proportional hazards model, the appropriate and popular technique, produces a continuous and relative risk score, it is hard to cast the estimate in clear clinical terms like median months of survival and percent of patients affected. To produce a familiar Kaplan-Meier plot, researchers commonly make the decision to dichotomize a continuous (often unimodal and symmetric) score. It is well known in the statistical literature that this procedure induces significant bias. We illustrate the liabilities of common techniques for categorizing a risk score and discuss alternative approaches. We promote the use of the restricted mean survival (RMS) and the corresponding RMS curve that may be thought of as an analog to the best fit line from simple linear regression. Continuous biomarker workflows should be modified to include the more rigorous statistical techniques and descriptive plots described in this article. All statistics discussed can be computed via standard functions in the Survival package of the R statistical programming language. Example R language code for the RMS curve is presented in the appendix.
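The restricted mean survival promoted above is the area under the survival curve up to a chosen horizon. The article points to the R Survival package; as a language-neutral sketch with invented follow-up data, the same quantity can be computed by estimating a Kaplan-Meier curve and integrating the step function up to a time tau:

```python
import numpy as np

def km_curve(time, event):
    """Kaplan-Meier estimate: returns the distinct event times and survival probabilities."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event, dtype=int)
    distinct = np.unique(time[event == 1])
    surv, s = [], 1.0
    for t in distinct:
        at_risk = np.sum(time >= t)
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk
        surv.append(s)
    return distinct, np.array(surv)

def restricted_mean_survival(time, event, tau):
    """Area under the Kaplan-Meier step function from 0 to tau."""
    t, s = km_curve(time, event)
    knots = np.concatenate(([0.0], t[t < tau], [tau]))
    heights = np.concatenate(([1.0], s[t < tau]))   # survival level on each interval
    return float(np.sum(heights * np.diff(knots)))

# Invented follow-up times (months) and event indicators (1 = death, 0 = censored)
months = [5, 8, 12, 12, 20, 25, 31, 40, 44, 60]
events = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]
print(restricted_mean_survival(months, events, tau=36))
```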
Visually guided tube thoracostomy insertion comparison to standard of care in a large animal model.
Hernandez, Matthew C; Vogelsang, David; Anderson, Jeff R; Thiels, Cornelius A; Beilman, Gregory; Zielinski, Martin D; Aho, Johnathon M
2017-04-01
Tube thoracostomy (TT) is a lifesaving procedure for a variety of thoracic pathologies. The most commonly utilized method for placement involves open dissection and blind insertion. Image-guided placement is commonly utilized but is limited by an inability to see the distal placement location. Unfortunately, TT is not without complications. We aim to demonstrate the feasibility of a disposable device that allows visually directed TT placement, compared to the standard of care, in a large animal model. Three swine were sequentially orotracheally intubated and anesthetized. TT was conducted utilizing a novel visualization device, the tube thoracostomy visual trocar (TTVT), and the standard of care (open technique). The position of the TT in the chest cavity was recorded using direct thoracoscopic inspection and radiographic imaging, with the operator blinded to the results. Complications were evaluated using a validated complication grading system. Standard descriptive statistical analyses were performed. Thirty TTs were placed, 15 using the TTVT technique and 15 using the standard-of-care open technique. All of the TTs placed using TTVT were without complication and in optimal position. Conversely, 27% of TTs placed using the standard-of-care open technique resulted in complications. Necropsy revealed no injury to intrathoracic organs. Visually directed TT placement using TTVT is feasible and non-inferior to the standard of care in a large animal model. This improvement in instrumentation has the potential to greatly improve the safety of TT. Further study in humans is required. Therapeutic Level II. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Consolidation/Transition Model in Moral Reasoning Development.
ERIC Educational Resources Information Center
Walker, Lawrence J.; Gustafson, Paul; Hennig, Karl H.
2001-01-01
This longitudinal study with 62 children and adolescents examined the validity of the consolidation/transition model in the context of moral reasoning development. Results of standard statistical and Bayesian techniques supported the hypotheses regarding cyclical patterns of change and predictors of stage transition, and demonstrated the utility…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Callister, Stephen J.; Barry, Richard C.; Adkins, Joshua N.
2006-02-01
Central tendency, linear regression, locally weighted regression, and quantile techniques were investigated for normalization of peptide abundance measurements obtained from high-throughput liquid chromatography-Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR MS). Arbitrary abundances of peptides were obtained from three sample sets, including a standard protein sample, two Deinococcus radiodurans samples taken from different growth phases, and two mouse striatum samples from control and methamphetamine-stressed mice (strain C57BL/6). The selected normalization techniques were evaluated in both the absence and presence of biological variability by estimating extraneous variability prior to and following normalization. Prior to normalization, replicate runs from each sample set were observed to be statistically different, while, following normalization, replicate runs were no longer statistically different. Although all techniques reduced systematic bias, assigned ranks among the techniques revealed significant trends. For most LC-FTICR MS analyses, linear regression normalization ranked either first or second among the four techniques, suggesting that this technique was more generally suitable for reducing systematic biases.
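As a rough sketch of regression-based normalization in the spirit described above (not the authors' implementation), one can regress a replicate run's log abundances on a reference run and remove the fitted systematic component; the abundances below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented log10 peptide abundances: a reference run and a replicate run carrying
# a systematic (roughly linear) bias plus random noise.
reference = rng.normal(6.0, 0.8, size=500)
replicate = 0.2 + 1.05 * reference + rng.normal(0.0, 0.05, size=500)

# Fit replicate = a * reference + b and remove the fitted systematic component,
# leaving only the residual variation around the reference scale.
a, b = np.polyfit(reference, replicate, deg=1)
normalized = reference + (replicate - (a * reference + b))

print("mean bias before:", round(float(np.mean(replicate - reference)), 3),
      "after:", round(float(np.mean(normalized - reference)), 3))
```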
Suppaphol, Sorasak; Worathanarat, Patarawan; Kawinwongkovit, Viroj; Pittayawutwinit, Preecha
2012-04-01
To compare the operative outcomes of limited open carpal tunnel release using a direct-vision and tunneling technique (group A) with standard open carpal tunnel release (group B). Twenty-eight patients were enrolled in the present study. A single-blind randomized controlled trial was conducted to compare the postoperative results between groups A and B. The study parameters were Levine's symptom severity and functional scores, grip and pinch strength, and average two-point discrimination. The postoperative results of the two groups were comparable, with no statistically significant differences. Only grip strength at three months of follow-up was significantly greater in group A than in group B. The limited open carpal tunnel release in the present study is comparably effective to the standard open carpal tunnel release. The other advantages of this technique are better cosmesis and improved grip strength in the three-month postoperative period.
A Deterministic Annealing Approach to Clustering AIRS Data
NASA Technical Reports Server (NTRS)
Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander
2012-01-01
We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.
Multiple Contact Dates and SARS Incubation Periods
2004-01-01
Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
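The spreadsheet method itself is not reproduced here; as a minimal illustration of the underlying idea, the sketch below derives the shortest and longest possible incubation periods for each patient from the earliest and latest possible contact dates (all dates invented).

```python
from datetime import date

# Each record: (earliest possible contact, latest possible contact, symptom onset)
patients = [
    (date(2003, 3, 1), date(2003, 3, 5), date(2003, 3, 9)),
    (date(2003, 3, 10), date(2003, 3, 12), date(2003, 3, 15)),
    (date(2003, 3, 2), date(2003, 3, 2), date(2003, 3, 8)),   # single known contact date
]

for earliest, latest, onset in patients:
    shortest = (onset - latest).days    # exposure occurred at the last possible contact
    longest = (onset - earliest).days   # exposure occurred at the first possible contact
    print(f"possible incubation period: {shortest} to {longest} days")
```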
de Jong, Marjan; Lucas, Cees; Bredero, Hansje; van Adrichem, Leon; Tibboel, Dick; van Dijk, Monique
2012-08-01
This article is a report of a randomized controlled trial of the effects of 'M' technique massage with or without mandarin oil, compared to standard postoperative care, on infants' levels of pain and distress, heart rate and mean arterial pressure after major craniofacial surgery. There is growing interest in non-pharmacological interventions such as aromatherapy massage in hospitalized children to relieve pain and distress, but well-performed studies are lacking. This randomized controlled trial allocated 60 children aged 3-36 months after craniofacial surgery from January 2008 to August 2009 to one of three conditions: 'M' technique massage with carrier oil, 'M' technique massage with mandarin oil, or standard postoperative care. Primary outcome measures were changes in COMFORT behaviour scores, Numeric Rating Scale pain and Numeric Rating Scale distress scores assessed from videotape by an observer blinded to the condition. In all three groups, the mean postintervention COMFORT behaviour scores were higher than the baseline scores, but the differences were not statistically significant. Heart rate and mean arterial pressure showed a statistically significant change across the three assessment periods in all three groups. These changes were not related to the intervention. The results do not support a benefit of 'M' technique massage with or without mandarin oil in these young postoperative patients. Several reasons may account for this: massage given too soon after general anaesthesia, young patients' fear of strangers touching them, and patients not being used to massage. © 2011 Blackwell Publishing Ltd.
Descriptive and inferential statistical methods used in burns research.
Al-Benna, Sammy; Al-Ajam, Yazan; Way, Benjamin; Steinstraesser, Lars
2010-05-01
Burns research articles utilise a variety of descriptive and inferential methods to present and analyse data. The aim of this study was to determine the descriptive methods (e.g. mean, median, SD, range, etc.) and survey the use of inferential methods (statistical tests) used in articles in the journal Burns. This study defined its population as all original articles published in the journal Burns in 2007. Letters to the editor, brief reports, reviews, and case reports were excluded. Study characteristics, use of descriptive statistics and the number and types of statistical methods employed were evaluated. Of the 51 articles analysed, 11 (22%) were randomised controlled trials, 18 (35%) were cohort studies, 11 (22%) were case control studies and 11 (22%) were case series. The study design and objectives were defined in all articles. All articles made use of continuous and descriptive data. Inferential statistics were used in 49 (96%) articles. Data dispersion was calculated by standard deviation in 30 (59%). Standard error of the mean was quoted in 19 (37%). The statistical software product was named in 33 (65%). Of the 49 articles that used inferential statistics, the tests were named in 47 (96%). The 6 most common tests used (Student's t-test (53%), analysis of variance/co-variance (33%), chi-squared test (27%), Wilcoxon & Mann-Whitney tests (22%), Fisher's exact test (12%)) accounted for the majority (72%) of statistical methods employed. A specified significance level was named in 43 (88%) and the exact significance levels were reported in 28 (57%). Descriptive analysis and basic statistical techniques account for most of the statistical tests reported. This information should prove useful in deciding which tests should be emphasised in educating burn care professionals. These results highlight the need for burn care professionals to have a sound understanding of basic statistics, which is crucial in interpreting and reporting data. Advice should be sought from professionals in the fields of biostatistics and epidemiology when using more advanced statistical techniques. Copyright 2009 Elsevier Ltd and ISBI. All rights reserved.
Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.
Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C
2015-02-01
Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
Improving cerebellar segmentation with statistical fusion
NASA Astrophysics Data System (ADS)
Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.
2016-03-01
The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination and with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severe cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.
MacLean, Adam L; Harrington, Heather A; Stumpf, Michael P H; Byrne, Helen M
2016-01-01
The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non-exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine.
Rathi, Monika; Ahrenkiel, S P; Carapella, J J; Wanlass, M W
2013-02-01
Given an unknown multicomponent alloy, and a set of standard compounds or alloys of known composition, can one improve upon popular standards-based methods for energy dispersive X-ray (EDX) spectrometry to quantify the elemental composition of the unknown specimen? A method is presented here for determining elemental composition of alloys using transmission electron microscopy-based EDX with appropriate standards. The method begins with a discrete set of related reference standards of known composition, applies multivariate statistical analysis to those spectra, and evaluates the compositions with a linear matrix algebra method to relate the spectra to elemental composition. By using associated standards, only limited assumptions about the physical origins of the EDX spectra are needed. Spectral absorption corrections can be performed by providing an estimate of the foil thickness of one or more reference standards. The technique was applied to III-V multicomponent alloy thin films: composition and foil thickness were determined for various III-V alloys. The results were then validated by comparing with X-ray diffraction and photoluminescence analysis, demonstrating accuracy of approximately 1% in atomic fraction.
Wilcox, Rand; Carlson, Mike; Azen, Stan; Clark, Florence
2013-03-01
Recently, there have been major advances in statistical techniques for assessing central tendency and measures of association. The practical utility of modern methods has been documented extensively in the statistics literature, but they remain underused and relatively unknown in clinical trials. Our objective was to address this issue. STUDY DESIGN AND PURPOSE: The first purpose was to review common problems associated with standard methodologies (low power, lack of control over type I errors, and incorrect assessments of the strength of the association). The second purpose was to summarize some modern methods that can be used to circumvent such problems. The third purpose was to illustrate the practical utility of modern robust methods using data from the Well Elderly 2 randomized controlled trial. In multiple instances, robust methods uncovered differences among groups and associations among variables that were not detected by classic techniques. In particular, the results demonstrated that details of the nature and strength of the association were sometimes overlooked when using ordinary least squares regression and Pearson correlation. Modern robust methods can make a practical difference in detecting and describing differences between groups and associations between variables. Such procedures should be applied more frequently when analyzing trial-based data. Copyright © 2013 Elsevier Inc. All rights reserved.
De Los Ríos, F. A.; Paluszny, M.
2015-01-01
We consider some methods to extract information about the rotator cuff based on magnetic resonance images; the study aims to define an alternative method of display that might facilitate the detection of partial tears in the supraspinatus tendon. Specifically, we are going to use families of ellipsoidal triangular patches to cover the humerus head near the affected area. These patches are going to be textured and displayed with the information of the magnetic resonance images using the trilinear interpolation technique. For the generation of points to texture each patch, we propose a new method that guarantees the uniform distribution of its points using a random statistical method. Its computational cost, defined as the average computing time to generate a fixed number of points, is significantly lower as compared with deterministic and other standard statistical techniques. PMID:25650281
Impact of Standardized Communication Techniques on Errors during Simulated Neonatal Resuscitation.
Yamada, Nicole K; Fuerch, Janene H; Halamek, Louis P
2016-03-01
Current patterns of communication in high-risk clinical situations, such as resuscitation, are imprecise and prone to error. We hypothesized that the use of standardized communication techniques would decrease the errors committed by resuscitation teams during neonatal resuscitation. In a prospective, single-blinded, matched pairs design with block randomization, 13 subjects performed as a lead resuscitator in two simulated complex neonatal resuscitations. Two nurses assisted each subject during the simulated resuscitation scenarios. In one scenario, the nurses used nonstandard communication; in the other, they used standardized communication techniques. The performance of the subjects was scored to determine errors committed (defined relative to the Neonatal Resuscitation Program algorithm), time to initiation of positive pressure ventilation (PPV), and time to initiation of chest compressions (CC). In scenarios in which subjects were exposed to standardized communication techniques, there was a trend toward decreased error rate, time to initiation of PPV, and time to initiation of CC. While not statistically significant, there was a 1.7-second improvement in time to initiation of PPV and a 7.9-second improvement in time to initiation of CC. Should these improvements in human performance be replicated in the care of real newborn infants, they could improve patient outcomes and enhance patient safety. Thieme Medical Publishers 333 Seventh Avenue, New York, NY 10001, USA.
Explorations in statistics: the log transformation.
Curran-Everett, Douglas
2018-06-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This thirteenth installment of Explorations in Statistics explores the log transformation, an established technique that rescales the actual observations from an experiment so that the assumptions of some statistical analysis are better met. A general assumption in statistics is that the variability of some response Y is homogeneous across groups or across some predictor variable X. If the variability-the standard deviation-varies in rough proportion to the mean value of Y, a log transformation can equalize the standard deviations. Moreover, if the actual observations from an experiment conform to a skewed distribution, then a log transformation can make the theoretical distribution of the sample mean more consistent with a normal distribution. This is important: the results of a one-sample t test are meaningful only if the theoretical distribution of the sample mean is roughly normal. If we log-transform our observations, then we want to confirm the transformation was useful. We can do this if we use the Box-Cox method, if we bootstrap the sample mean and the statistic t itself, and if we assess the residual plots from the statistical model of the actual and transformed sample observations.
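To make these ideas concrete, the following sketch (with simulated skewed data, not data from the article) estimates a Box-Cox transformation with SciPy and bootstraps the sample mean before and after a log transformation to show the improvement in symmetry.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.lognormal(mean=1.0, sigma=0.8, size=40)   # skewed observations

# Box-Cox estimates a power transform; a fitted lambda near 0 corresponds to
# the log transformation discussed above.
y_bc, lam = stats.boxcox(y)
print("estimated Box-Cox lambda:", round(lam, 2))

def bootstrap_means(x, n_boot=2000, rng=rng):
    """Bootstrap distribution of the sample mean."""
    idx = rng.integers(0, len(x), size=(n_boot, len(x)))
    return x[idx].mean(axis=1)

for label, data in [("raw", y), ("log-transformed", np.log(y))]:
    boot = bootstrap_means(np.asarray(data))
    print(label, "bootstrap skewness of the mean:", round(float(stats.skew(boot)), 2))
```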
Statistical crystallography of surface micelle spacing
NASA Technical Reports Server (NTRS)
Noever, David A.
1992-01-01
The aggregation of the recently reported surface micelles of block polyelectrolytes is analyzed using techniques of statistical crystallography. A polygonal lattice (Voronoi mosaic) connects center-to-center points, yielding statistical agreement with crystallographic predictions; Aboav-Weaire's law and Lewis's law are verified. This protocol supplements the standard analysis of surface micelles leading to aggregation number determination and, when compared to numerical simulations, allows further insight into the random partitioning of surface films. In particular, agreement with Lewis's law has been linked to the geometric packing requirements of filling two-dimensional space which compete with (or balance) physical forces such as interfacial tension, electrostatic repulsion, and van der Waals attraction.
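A schematic version of this crystallographic analysis, with random points standing in for micelle centres, can be run with SciPy's Voronoi routine together with a check of Lewis's law (mean cell area increasing roughly linearly with the number of cell sides); the point density and boundary guard below are arbitrary choices, not values from the study.

```python
import numpy as np
from scipy.spatial import ConvexHull, Voronoi

rng = np.random.default_rng(2)
points = rng.uniform(0.0, 1.0, size=(2000, 2))    # surrogate micelle centres
vor = Voronoi(points)

areas_by_sides = {}
for region_index in vor.point_region:
    region = vor.regions[region_index]
    if len(region) == 0 or -1 in region:           # skip unbounded cells at the border
        continue
    verts = vor.vertices[region]
    if verts.min() < 0.05 or verts.max() > 0.95:   # crude guard against edge effects
        continue
    # Voronoi cells are convex, so the convex hull area equals the cell area
    # (ConvexHull.volume is the enclosed area in two dimensions).
    areas_by_sides.setdefault(len(region), []).append(ConvexHull(verts).volume)

# Lewis's law: mean cell area should grow roughly linearly with the side number n.
for n in sorted(areas_by_sides):
    print(n, len(areas_by_sides[n]), round(float(np.mean(areas_by_sides[n])), 5))
```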
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Sedlmair, Martin; Choudhury, Kingshuk Roy; Pezeshk, Aria; Sahiner, Berkman; Samei, Ehsan
2017-01-01
Virtual nodule insertion paves the way towards the development of standardized databases of hybrid CT images with known lesions. The purpose of this study was to assess three methods (an established and two newly developed techniques) for inserting virtual lung nodules into CT images. Assessment was done by comparing virtual nodule volume and shape to the CT-derived volume and shape of synthetic nodules. 24 synthetic nodules (three sizes, four morphologies, two repeats) were physically inserted into the lung cavity of an anthropomorphic chest phantom (KYOTO KAGAKU). The phantom was imaged with and without nodules on a commercial CT scanner (SOMATOM Definition Flash, Siemens) using a standard thoracic CT protocol at two dose levels (1.4 and 22 mGy CTDIvol). Raw projection data were saved and reconstructed with filtered back-projection and sinogram affirmed iterative reconstruction (SAFIRE, strength 5) at 0.6 mm slice thickness. Corresponding 3D idealized, virtual nodule models were co-registered with the CT images to determine each nodule’s location and orientation. Virtual nodules were voxelized, partial volume corrected, and inserted into nodule-free CT data (accounting for system imaging physics) using two methods: projection-based Technique A, and image-based Technique B. Also a third Technique C, based on cropping a region of interest from the acquired image of the real nodule and blending it into the nodule-free image, was tested. Nodule volumes were measured using a commercial segmentation tool (iNtuition, TeraRecon, Inc.) and deformation was assessed using the Hausdorff distance. Nodule volumes and deformations were compared between the idealized, CT-derived and virtual nodules using a linear mixed effects regression model which utilized the mean, standard deviation, and coefficient of variation (MeanRHD, STDRHD, and CVRHD) of the regional Hausdorff distance. Overall, there was a close concordance between the volumes of the CT-derived and virtual nodules. Percent differences between them were less than 3% for all insertion techniques and were not statistically significant in most cases. Correlation coefficient values were greater than 0.97. The deformation according to the Hausdorff distance was also similar between the CT-derived and virtual nodules, with minimal statistical significance in the CVRHD for Techniques A, B, and C. This study shows that both projection-based and image-based nodule insertion techniques yield realistic nodule renderings with statistical similarity to the synthetic nodules with respect to nodule volume and deformation. These techniques could be used to create a database of hybrid CT images containing nodules of known size, location and morphology. PMID:28786399
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Costa, S. R. X.; Paiao, L. B. F.; Mendonca, F. J.; Shimabukuro, Y. E.; Duarte, V.
1983-01-01
The two-phase sampling technique was applied to estimate the area cultivated with sugar cane in an approximately 984 sq km pilot region of Campos. Correlation between existing aerial photography and LANDSAT data was used. The estimate from the two-phase sampling technique corresponded to 99.6% of the result obtained by aerial photography, taken as ground truth. This estimate has a standard deviation of 225 ha, which constitutes a coefficient of variation of 0.6%.
An analytic technique for statistically modeling random atomic clock errors in estimation
NASA Technical Reports Server (NTRS)
Fell, P. J.
1981-01-01
Minimum variance estimation requires that the statistics of random observation errors be modeled properly. If measurements are derived through the use of atomic frequency standards, then one source of error affecting the observable is random fluctuation in frequency. This is the case, for example, with range and integrated Doppler measurements from satellites of the Global Positioning System and with baseline determination for geodynamic applications. An analytic method is presented which approximates the statistics of this random process. The procedure starts with a model of the Allan variance for a particular oscillator and develops the statistics of range and integrated Doppler measurements. A series of five first-order Markov processes is used to approximate the power spectral density obtained from the Allan variance.
Ries(compiler), Kernell G.; With sections by Atkins, J. B.; Hummel, P.R.; Gray, Matthew J.; Dusenbury, R.; Jennings, M.E.; Kirby, W.H.; Riggs, H.C.; Sauer, V.B.; Thomas, W.O.
2007-01-01
The National Streamflow Statistics (NSS) Program is a computer program that should be useful to engineers, hydrologists, and others for planning, management, and design applications. NSS compiles all current U.S. Geological Survey (USGS) regional regression equations for estimating streamflow statistics at ungaged sites in an easy-to-use interface that operates on computers with Microsoft Windows operating systems. NSS expands on the functionality of the USGS National Flood Frequency Program, and replaces it. The regression equations included in NSS are used to transfer streamflow statistics from gaged to ungaged sites through the use of watershed and climatic characteristics as explanatory or predictor variables. Generally, the equations were developed on a statewide or metropolitan-area basis as part of cooperative study programs. Equations are available for estimating rural and urban flood-frequency statistics, such as the 100-year flood, for every state, for Puerto Rico, and for the island of Tutuila, American Samoa. Equations are available for estimating other statistics, such as the mean annual flow, monthly mean flows, flow-duration percentiles, and low-flow frequencies (such as the 7-day, 10-year low flow) for less than half of the states. All equations available for estimating streamflow statistics other than flood-frequency statistics assume rural (non-regulated, non-urbanized) conditions. The NSS output provides indicators of the accuracy of the estimated streamflow statistics. The indicators may include any combination of the standard error of estimate, the standard error of prediction, the equivalent years of record, or 90 percent prediction intervals, depending on what was provided by the authors of the equations. The program includes several other features that can be used only for flood-frequency estimation. These include the ability to generate flood-frequency plots, and plots of typical flood hydrographs for selected recurrence intervals, estimates of the probable maximum flood, extrapolation of the 500-year flood when an equation for estimating it is not available, and weighting techniques to improve flood-frequency estimates for gaging stations and ungaged sites on gaged streams. This report describes the regionalization techniques used to develop the equations in NSS and provides guidance on the applicability and limitations of the techniques. The report also includes a users manual and a summary of equations available for estimating basin lagtime, which is needed by the program to generate flood hydrographs. The NSS software and accompanying database, and the documentation for the regression equations included in NSS, are available on the Web at http://water.usgs.gov/software/.
Three-dimensional accuracy of different correction methods for cast implant bars
Kwon, Ji-Yung; Kim, Chang-Whe; Lim, Young-Jun; Kwon, Ho-Beom
2014-01-01
PURPOSE The aim of the present study was to evaluate the accuracy of three techniques for correction of cast implant bars. MATERIALS AND METHODS Thirty cast implant bars were fabricated on a metal master model. All cast implant bars were sectioned at 5 mm from the left gold cylinder using a disk of 0.3 mm thickness, and then each group of ten specimens was corrected by gas-air torch soldering, laser welding, or the additional casting technique. Three-dimensional evaluation, including horizontal, vertical, and twisting measurements, was based on measurement and comparison of (1) gap distances at the right abutment replica-gold cylinder interface at the buccal, distal, and lingual sides, (2) changes of bar length, and (3) axis angle changes of the right gold cylinders at the post-correction measurement step in the three groups, using contact and non-contact coordinate measuring machines. One-way analysis of variance (ANOVA) and paired t-tests were performed at the significance level of 5%. RESULTS Gap distances of the cast implant bars after the correction procedure showed no statistically significant differences among groups. Changes in bar length between the pre-casting and post-correction measurements were statistically significant among groups. Axis angle changes of the right gold cylinders were not statistically significant among groups. CONCLUSION There were no statistically significant differences among the three techniques in horizontal, vertical, and axial errors. However, the gas-air torch soldering technique showed the most consistent and accurate trend in the correction of implant bar error, whereas the laser welding technique showed a large mean and standard deviation in the vertical and twisting measurements and might be a technique-sensitive method. PMID:24605205
Visualizing statistical significance of disease clusters using cartograms.
Kronenfeld, Barry J; Wong, David W S
2017-05-15
Health officials and epidemiological researchers often use maps of disease rates to identify potential disease clusters. Because these maps exaggerate the prominence of low-density districts and hide potential clusters in urban (high-density) areas, many researchers have used density-equalizing maps (cartograms) as a basis for epidemiological mapping. However, we do not have existing guidelines for visual assessment of statistical uncertainty. To address this shortcoming, we develop techniques for visual determination of statistical significance of clusters spanning one or more districts on a cartogram. We developed the techniques within a geovisual analytics framework that does not rely on automated significance testing, and can therefore facilitate visual analysis to detect clusters that automated techniques might miss. On a cartogram of the at-risk population, the statistical significance of a disease cluster can be determined from the rate, area and shape of the cluster under standard hypothesis testing scenarios. We develop formulae to determine, for a given rate, the area required for statistical significance of a priori and a posteriori designated regions under certain test assumptions. Uniquely, our approach enables dynamic inference of aggregate regions formed by combining individual districts. The method is implemented in interactive tools that provide choropleth mapping, automated legend construction and dynamic search tools to facilitate cluster detection and assessment of the validity of tested assumptions. A case study of leukemia incidence analysis in California demonstrates the ability to visually distinguish between statistically significant and insignificant regions. The proposed geovisual analytics approach enables intuitive visual assessment of statistical significance of arbitrarily defined regions on a cartogram. Our research prompts a broader discussion of the role of geovisual exploratory analyses in disease mapping and the appropriate framework for visually assessing the statistical significance of spatial clusters.
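The hypothesis-testing scenario can be illustrated with a simple one-sided Poisson test for an a priori designated region: given the region's at-risk population and a baseline rate, how unlikely is the observed case count? The numbers below are invented, and the test is a generic sketch rather than the authors' formulae.

```python
from scipy.stats import poisson

baseline_rate = 4.2e-5        # cases per person-year, assumed regional background rate
population_at_risk = 180_000  # at-risk population of the designated cluster region
observed_cases = 14

expected_cases = baseline_rate * population_at_risk
# One-sided p-value: probability of seeing at least the observed count if cases
# follow a Poisson distribution with the expected mean.
p_value = poisson.sf(observed_cases - 1, expected_cases)
print(round(expected_cases, 2), round(p_value, 4))
```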
Handling nonnormality and variance heterogeneity for quantitative sublethal toxicity tests.
Ritz, Christian; Van der Vliet, Leana
2009-09-01
The advantages of using regression-based techniques to derive endpoints from environmental toxicity data are clear, and slowly, this superior analytical technique is gaining acceptance. As use of regression-based analysis becomes more widespread, some of the associated nuances and potential problems come into sharper focus. Looking at data sets that cover a broad spectrum of standard test species, we noticed that some model fits to data failed to meet two key assumptions (variance homogeneity and normality) that are necessary for correct statistical analysis via regression-based techniques. Failure to meet these assumptions often is caused by reduced variance at the concentrations showing severe adverse effects. Although commonly used with linear regression analysis, transformation of the response variable alone is not appropriate when fitting data using nonlinear regression techniques. Through analysis of sample data sets, including Lemna minor, Eisenia andrei (terrestrial earthworm), and algae, we show that both the so-called Box-Cox transformation and use of the Poisson distribution can help to correct variance heterogeneity and nonnormality and so allow nonlinear regression analysis to be implemented. Both the Box-Cox transformation and the Poisson distribution can be readily implemented into existing protocols for statistical analysis. By correcting for nonnormality and variance heterogeneity, these two statistical tools can be used to encourage the transition to regression-based analysis and the deprecation of less-desirable and less-flexible analytical techniques, such as linear interpolation.
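As one concrete way to act on this advice for count-type sublethal endpoints (for example, offspring counts), a Poisson-family generalized linear model lets the variance grow with the mean instead of assuming homogeneity; the sketch below uses statsmodels with invented test data and is not the protocol-specific analysis.

```python
import numpy as np
import statsmodels.api as sm

# Invented sublethal toxicity data: exposure concentration (mg/L) and offspring counts.
concentration = np.array([0.0, 0.0, 1.0, 1.0, 3.2, 3.2, 10.0, 10.0, 32.0, 32.0])
offspring = np.array([28, 31, 27, 25, 22, 20, 14, 12, 5, 4])

# A Poisson GLM with a log link models the counts directly, so the variance is
# allowed to change with the mean rather than being assumed homogeneous.
dose = np.log10(concentration + 0.1)   # small offset so the controls can be included
X = sm.add_constant(dose)
fit = sm.GLM(offspring, X, family=sm.families.Poisson()).fit()
print(fit.params, fit.deviance)
```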
Toward Robust and Efficient Climate Downscaling for Wind Energy
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Rife, D.; Pinto, J. O.; Monaghan, A. J.; Davis, C. A.
2011-12-01
This presentation describes a more accurate and economical (less time, money and effort) wind resource assessment technique for the renewable energy industry that incorporates innovative statistical techniques and new global mesoscale reanalyses. The technique judiciously selects a collection of "case days" that accurately represent the full range of wind conditions observed at a given site over a 10-year period, in order to estimate the long-term energy yield. We will demonstrate that this new technique provides a very accurate and statistically reliable estimate of the 10-year record of the wind resource by intelligently choosing a sample of approximately 120 case days. This means that the expense of downscaling to quantify the wind resource at a prospective wind farm can be cut by two thirds from the current industry practice of downscaling a randomly chosen 365-day sample to represent winds over a "typical" year. This new estimate of the long-term energy yield at a prospective wind farm also has far less statistical uncertainty than the current industry standard approach. This key finding has the potential to reduce significantly market barriers to both onshore and offshore wind farm development, since insurers and financiers charge prohibitive premiums on investments that are deemed to be high risk. Lower uncertainty directly translates to lower perceived risk, and therefore far more attractive financing terms could be offered to wind farm developers who employ this new technique.
The Timeseries Toolbox - A Web Application to Enable Accessible, Reproducible Time Series Analysis
NASA Astrophysics Data System (ADS)
Veatch, W.; Friedman, D.; Baker, B.; Mueller, C.
2017-12-01
The vast majority of data analyzed by climate researchers are repeated observations of physical processes, i.e., time series data. These data lend themselves to a common set of statistical techniques and models designed to determine trends and variability (e.g., seasonality) in the repeated observations. Often, these same techniques and models can be applied to a wide variety of different time series data. The Timeseries Toolbox is a web application designed to standardize and streamline these common approaches to time series analysis and modeling, with particular attention to hydrologic time series used in climate preparedness and resilience planning and design by the U. S. Army Corps of Engineers. The application performs much of the pre-processing of time series data necessary for more complex techniques (e.g. interpolation, aggregation). With this tool, users can upload any dataset that conforms to a standard template and immediately begin applying these techniques to analyze their time series data.
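A minimal example of the kind of pre-processing the application standardizes (interpolation onto a regular grid and aggregation) might look like the pandas sketch below; the dates and stage values are invented, and this is not the Timeseries Toolbox code itself.

```python
import pandas as pd

# Irregularly observed river stage (m), as might be uploaded in a standard template.
stage = pd.Series(
    [2.31, 2.40, 2.55, 2.48, 2.60],
    index=pd.to_datetime(["2016-01-01", "2016-01-02", "2016-01-05",
                          "2016-01-06", "2016-01-09"]),
)

daily = stage.resample("D").mean().interpolate("time")  # fill gaps on a daily grid
monthly_mean = daily.resample("MS").mean()              # aggregate to monthly means
print(daily.round(2))
print(monthly_mean.round(2))
```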
Nguyen, Phung Anh; Yang, Hsuan-Chia; Xu, Rong; Li, Yu-Chuan Jack
2018-01-01
Traditional Chinese Medicine (TCM) utilization has rapidly increased worldwide. However, few databases provide information linking TCM herbs and diseases. This study aims to identify and evaluate meaningful associations between TCM herbs and breast cancer by using association rule mining (ARM) techniques. We applied ARM techniques to 19.9 million TCM prescriptions from the Taiwan National Health Insurance claims database from 1999 to 2013. From those prescriptions, 364 TCM herb-breast cancer associations were derived and then filtered by a minimum support of 20. The resulting 296 associations were evaluated against a gold standard of information curated from Chinese Wikipedia using the terms cancer, tumor, and malignant. All 14 TCM herb-breast cancer associations with a confidence of 1% were valid when compared to the gold standard. For other confidence thresholds, the statistical results were consistently of high precision. We thus succeeded in identifying and evaluating TCM herb-breast cancer associations with these useful techniques.
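The two quantities at the heart of the association rule mining used here, support and confidence, can be computed directly; the toy prescription records below are invented stand-ins for the insurance-claim data.

```python
# Toy prescription records: each entry holds the set of TCM herbs co-prescribed at a
# visit, plus a marker for whether the visit carried a breast cancer diagnosis.
prescriptions = [
    ({"herb_A", "herb_B"}, True),
    ({"herb_A"}, True),
    ({"herb_B", "herb_C"}, False),
    ({"herb_A", "herb_C"}, True),
    ({"herb_C"}, False),
]

def support_and_confidence(herb, records):
    """Support = number of records containing both the herb and the diagnosis;
    confidence = that number divided by the number of records containing the herb."""
    with_herb = [diagnosed for herbs, diagnosed in records if herb in herbs]
    support = sum(with_herb)
    confidence = support / len(with_herb) if with_herb else 0.0
    return support, confidence

print(support_and_confidence("herb_A", prescriptions))   # (3, 1.0)
```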
Rothfeld, Alex; Pawlak, Amanda; Liebler, Stephenie A H; Morris, Michael; Paci, James M
2018-04-01
Patellar tendon repair with braided polyethylene suture alone is subject to knot slippage and failure. Several techniques to augment the primary repair have been described. Purpose/Hypothesis: The purpose was to evaluate a novel patellar tendon repair technique augmented with a knotless suture anchor internal brace with suture tape (SAIB). The hypothesis was that this technique would be biomechanically superior to a nonaugmented repair and equivalent to a standard augmentation with an 18-gauge steel wire. Controlled laboratory study. Midsubstance patellar tendon tears were created in 32 human cadaveric knees. Two comparison groups were created. Group 1 compared #2 supersuture repair without augmentation to #2 supersuture repair with SAIB augmentation. Group 2 compared #2 supersuture repair with an 18-gauge stainless steel cerclage wire augmentation to #2 supersuture repair with SAIB augmentation. The specimens were potted and biomechanically loaded on a materials testing machine. Yield load, maximum load, mode of failure, plastic displacement, elastic displacement, and total displacement were calculated for each sample. Standard statistical analysis was performed. There was a statistically significant increase in the mean ± SD yield load and maximum load in the SAIB augmentation group compared with supersuture alone (mean yield load: 646 ± 202 N vs 229 ± 60 N; mean maximum load: 868 ± 162 N vs 365 ± 54 N; P < .001). Group 2 showed no statistically significant differences between the augmented repairs (mean yield load: 495 ± 213 N vs 566 ± 172 N; P = .476; mean maximum load: 737 ± 210 N vs 697 ± 130 N; P = .721). Patellar tendon repair augmented with SAIB is biomechanically superior to repair without augmentation and is equivalent to repair with augmentation with an 18-gauge stainless steel cerclage wire. This novel patellar tendon repair augmentation is equivalent to standard 18-gauge wire augmentation at time zero. It does not require a second surgery for removal, and it is biomechanically superior to primary repair alone.
Al-Ekrish, Asma'a A; Al-Shawaf, Reema; Schullian, Peter; Al-Sadhan, Ra'ed; Hörmann, Romed; Widmann, Gerlig
2016-10-01
To assess the comparability of linear measurements of dental implant sites recorded from multidetector computed tomography (MDCT) images obtained using standard-dose filtered backprojection (FBP) technique with those from various ultralow doses combined with FBP, adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. The results of the study may contribute to MDCT dose optimization for dental implant site imaging. MDCT scans of two cadavers were acquired using a standard reference protocol and four ultralow-dose test protocols (TP). The volume CT dose index of the different dose protocols ranged from a maximum of 30.48-36.71 mGy to a minimum of 0.44-0.53 mGy. All scans were reconstructed using FBP, ASIR-50, ASIR-100, and MBIR, and either a bone or standard reconstruction kernel. Linear measurements were recorded from standardized images of the jaws by two examiners. Intra- and inter-examiner reliability of the measurements were analyzed using Cronbach's alpha and inter-item correlation. Agreement between the measurements obtained with the reference-dose/FBP protocol and each of the test protocols was determined with Bland-Altman plots and linear regression. Statistical significance was set at a P-value of 0.05. No systematic variation was found between the linear measurements obtained with the reference protocol and the other imaging protocols. The only exceptions were TP3/ASIR-50 (bone kernel) and TP4/ASIR-100 (bone and standard kernels). The mean measurement differences between these three protocols and the reference protocol were within ±0.1 mm, with the 95 % confidence interval limits being within the range of ±1.15 mm. A nearly 97.5 % reduction in dose did not significantly affect the height and width measurements of edentulous jaws regardless of the reconstruction algorithm used.
First uncertainty evaluation of the FoCS-2 primary frequency standard
NASA Astrophysics Data System (ADS)
Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L. G.; Schenker, D.; Thomann, P.; Südmeyer, T.
2018-06-01
We report the uncertainty evaluation of the Swiss continuous primary frequency standard FoCS-2 (Fontaine Continue Suisse). Unlike other primary frequency standards which are working with clouds of cold atoms, this fountain uses a continuous beam of cold caesium atoms bringing a series of metrological advantages and specific techniques for the evaluation of the uncertainty budget. Recent improvements of FoCS-2 have made possible the evaluation of the frequency shifts and of their uncertainties in the order of . When operating in an optimal regime a relative frequency instability of is obtained. The relative standard uncertainty reported in this article, , is strongly dominated by the statistics of the frequency measurements.
Baldewijns, Greet; Luca, Stijn; Nagels, William; Vanrumste, Bart; Croonenborghs, Tom
2015-01-01
It has been shown that gait speed and transfer times are good measures of functional ability in the elderly. However, data currently acquired by systems that measure either gait speed or transfer times in the homes of elderly people require manual reviewing by healthcare workers. This reviewing process is time-consuming. To alleviate this burden, this paper proposes the use of statistical process control methods to automatically detect both positive and negative changes in transfer times. Three SPC techniques (tabular CUSUM, standardized CUSUM, and EWMA), known for their ability to detect small shifts in the data, are evaluated on simulated transfer times. This analysis shows that EWMA is the best-suited method, with a detection accuracy of 82% and an average detection time of 9.64 days.
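A bare-bones version of an EWMA control chart of the kind evaluated above might look like the following sketch; the smoothing weight, control-limit width, baseline window, and simulated transfer times are illustrative choices rather than the study's parameters.

```python
import numpy as np

def ewma_alarms(x, lam=0.2, L=3.0, baseline_n=20):
    """EWMA control chart: returns the indices where the EWMA statistic crosses
    the control limits (a possible shift in transfer times)."""
    x = np.asarray(x, dtype=float)
    mu0 = x[:baseline_n].mean()                    # baseline from an in-control period
    sigma0 = x[:baseline_n].std(ddof=1)
    limit = L * sigma0 * np.sqrt(lam / (2.0 - lam))  # steady-state EWMA control limit
    z, alarms = mu0, []
    for i, value in enumerate(x):
        z = lam * value + (1.0 - lam) * z          # exponentially weighted moving average
        if abs(z - mu0) > limit:
            alarms.append(i)
    return alarms

rng = np.random.default_rng(3)
baseline = rng.normal(12.0, 1.5, size=40)   # simulated sit-to-stand transfer times (s)
slowed = rng.normal(14.0, 1.5, size=20)     # simulated functional decline
print(ewma_alarms(np.concatenate([baseline, slowed])))
```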
Advanced techniques and technology for efficient data storage, access, and transfer
NASA Technical Reports Server (NTRS)
Rice, Robert F.; Miller, Warner
1991-01-01
Advanced techniques for efficiently representing most forms of data are being implemented in practical hardware and software form through the joint efforts of three NASA centers. These techniques adapt to local statistical variations to continually provide near optimum code efficiency when representing data without error. Demonstrated in several earlier space applications, these techniques are the basis of initial NASA data compression standards specifications. Since the techniques clearly apply to most NASA science data, NASA invested in the development of both hardware and software implementations for general use. This investment includes high-speed single-chip very large scale integration (VLSI) coding and decoding modules as well as machine-transferrable software routines. The hardware chips were tested in the laboratory at data rates as high as 700 Mbits/s. A coding module's definition includes a predictive preprocessing stage and a powerful adaptive coding stage. The function of the preprocessor is to optimally process incoming data into a standard form data source that the second stage can handle. The built-in preprocessor of the VLSI coder chips is ideal for high-speed sampled data applications such as imaging and high-quality audio, but additionally, the second stage adaptive coder can be used separately with any source that can be externally preprocessed into the 'standard form'. This generic functionality assures that the applicability of these techniques and their recent high-speed implementations should be equally broad outside of NASA.
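The 'standard form' preprocessing described here is essentially a predictor followed by a mapping of signed prediction errors to small non-negative integers; the sketch below illustrates that idea with a simple previous-sample predictor and a zigzag mapping, which is only an approximation of the flight implementation.

```python
def preprocess(samples):
    """Previous-sample prediction plus zigzag mapping: small prediction errors become
    small non-negative integers, the kind of 'standard form' source an adaptive
    entropy coder expects."""
    predicted = samples[0]
    out = []
    for s in samples:
        error = s - predicted                                     # signed residual near zero
        out.append(2 * error if error >= 0 else -2 * error - 1)   # zigzag map to >= 0
        predicted = s                                             # next prediction is the current sample
    return out

# Smoothly varying sensor samples compress well after this preprocessing.
print(preprocess([100, 101, 103, 102, 102, 105, 104]))
```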
Rescaled earthquake recurrence time statistics: application to microrepeaters
NASA Astrophysics Data System (ADS)
Goltz, Christian; Turcotte, Donald L.; Abaimov, Sergey G.; Nadeau, Robert M.; Uchida, Naoki; Matsuzawa, Toru
2009-01-01
Slip on major faults primarily occurs during 'characteristic' earthquakes. The recurrence statistics of characteristic earthquakes play an important role in seismic hazard assessment. A major problem in determining applicable statistics is the short sequences of characteristic earthquakes that are available worldwide. In this paper, we introduce a rescaling technique in which sequences can be superimposed to establish larger numbers of data points. We consider the Weibull and log-normal distributions; in both cases we rescale the data using means and standard deviations. We test our approach utilizing sequences of microrepeaters, micro-earthquakes which recur in the same location on a fault. It seems plausible to regard these earthquakes as a miniature version of the classic characteristic earthquakes. Microrepeaters are much more frequent than major earthquakes, leading to longer sequences for analysis. In this paper, we present results for the analysis of recurrence times for several microrepeater sequences from Parkfield, CA as well as NE Japan. We find that, once the respective sequence can be considered to be of sufficient stationarity, the statistics can be well fitted by either a Weibull or a log-normal distribution. We clearly demonstrate this fact by our technique of rescaled combination. We conclude that the recurrence statistics of the microrepeater sequences we consider are similar to the recurrence statistics of characteristic earthquakes on major faults.
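A minimal version of the rescaled-combination idea, with synthetic recurrence times standing in for the microrepeater catalogues and each sequence rescaled by its own mean (a simplification of the mean and standard deviation rescaling described above), can be fitted with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Synthetic recurrence-time sequences (days) for three microrepeater sequences
# with different mean recurrence intervals.
sequences = [rng.weibull(1.8, size=25) * scale for scale in (120.0, 300.0, 75.0)]

# Rescale each short sequence by its own mean so all sequences share unit mean,
# then pool the rescaled values into one larger sample.
pooled = np.concatenate([seq / seq.mean() for seq in sequences])

# Fit a two-parameter Weibull distribution (location fixed at zero) to the pooled sample.
shape, loc, scale = stats.weibull_min.fit(pooled, floc=0)
print(round(shape, 2), round(scale, 2))
```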
Microscopic assessment of the sealing ability of three endodontic filling techniques
Cueva-Goig, Roger; Llena-Puy, Mª Carmen
2016-01-01
Background: Several techniques have been proposed for root canal filling. New rotary files with non-standardized tapers are appearing, so points adapted to the taper of the last instrument used to prepare the canal can help in the obturation process. The aim of this study is to assess the sealing ability of different root canal filling techniques. Material and Methods: Root canals from 30 teeth were shaped with Mtwo and divided into three groups: A, standard lateral condensation with size 35 and 20 gutta-percha points; B, standard lateral condensation and injected gutta-percha; C, single gutta-percha point (standardized 35 Mtwo), continuous wave technique and injected gutta-percha. Root surfaces were covered with nail varnish, except for the apical 2 mm, and submerged in a NO3Ag2 solution; apical stain penetration was measured in mm. Data were compared using the Kruskal-Wallis test with a 90% confidence interval. Results: Groups A and B showed stain leakage in 90% of the cases, whereas it was 80% for group C. Stain leakage intervals were 1-5 mm for groups A and B and 1-3 mm for group C. There were no statistically significant differences between the three studied groups (p>.05). Conclusions: All the analyzed root canal filling techniques showed some apical stain leakage, without significant differences among them. Key words: Gutta-percha filling, microleakage, single cone, injected gutta-percha, warm gutta-percha. PMID:26855702
Dabarakis, Nikolaos N; Alexander, Veis; Tsirlis, Anastasios T; Parissis, Nikolaos A; Nikolaos, Maroufidis
2007-01-01
To clinically evaluate the jet injection Injex (Rösch AG Medizintechnik) using 2 different anesthetic solutions, and to compare the jet injection and the standard needle injection techniques. Of the 32 patients in the study, 10 received mepivacaine 3% anesthetic solution by means of the jet injection technique, while the remaining 22 patients received lidocaine 2% with epinephrine 1:80,000 by the same method. The 14 patients in whom pulp anesthesia was achieved were selected for an additional evaluation of the pulp reaction using standard needle injection anesthesia. The differences between the 2 compounds with Injex were statistically evaluated by means of independent-samples t test analysis. The differences between subgroups receiving both jet injection and needle injection anesthesia were evaluated by means of paired t test analysis. The administration of mepivacaine 3% using Injex did not achieve pulp anesthesia in any of the 10 patients, although the soft tissue anesthesia was successful. The administration of lidocaine with epinephrine using Injex resulted in pulp anesthesia in only 14 patients; soft tissue anesthesia was observed in all patients of this group. There was no statistically significant difference between Injex and the needle injection technique in onset of anesthesia. However, the duration of anesthesia was significantly longer for the needle infiltration group than for the Injex injection group. The anesthetic solution should be combined with a vasoconstriction agent when the Injex technique is implemented.
Unbiased, scalable sampling of protein loop conformations from probabilistic priors.
Zhang, Yajia; Hauser, Kris
2013-01-01
Protein loops are flexible structures that are intimately tied to function, but understanding loop motion and generating loop conformation ensembles remain significant computational challenges. Discrete search techniques scale poorly to large loops, optimization and molecular dynamics techniques are prone to local minima, and inverse kinematics techniques can only incorporate structural preferences in an ad hoc fashion. This paper presents Sub-Loop Inverse Kinematics Monte Carlo (SLIKMC), a new Markov chain Monte Carlo algorithm for generating conformations of closed loops according to experimentally available, heterogeneous structural preferences. Our simulation experiments demonstrate that the method computes high-scoring conformations of large loops (>10 residues) orders of magnitude faster than standard Monte Carlo and discrete search techniques. Two new developments contribute to the scalability of the new method. First, structural preferences are specified via a probabilistic graphical model (PGM) that links conformation variables, spatial variables (e.g., atom positions), constraints and prior information in a unified framework. The method uses a sparse PGM that exploits locality of interactions between atoms and residues. Second, a novel method for sampling sub-loops is developed to generate statistically unbiased samples of probability densities restricted by loop-closure constraints. Numerical experiments confirm that SLIKMC generates conformation ensembles that are statistically consistent with specified structural preferences. Protein conformations with 100+ residues are sampled on standard PC hardware in seconds. Application to proteins involved in ion-binding demonstrates its potential as a tool for loop ensemble generation and missing structure completion.
Hauptmann, C; Roulet, J-C; Niederhauser, J J; Döll, W; Kirlangic, M E; Lysyansky, B; Krachkovskyi, V; Bhatti, M A; Barnikol, U B; Sasse, L; Bührle, C P; Speckmann, E-J; Götz, M; Sturm, V; Freund, H-J; Schnell, U; Tass, P A
2009-12-01
In the past decade deep brain stimulation (DBS), the application of electrical stimulation to specific target structures via implanted depth electrodes, has become the standard treatment for medically refractory Parkinson's disease and essential tremor. These diseases are characterized by pathological synchronized neuronal activity in particular brain areas. We present an external trial DBS device capable of administering effectively desynchronizing stimulation techniques developed with methods from nonlinear dynamics and statistical physics according to a model-based approach. These techniques exploit either stochastic phase resetting principles or complex delayed-feedback mechanisms. We explain how these methods are implemented into a safe and user-friendly device.
Characterization of controlled bone defects using 2D and 3D ultrasound imaging techniques.
Parmar, Biren J; Longsine, Whitney; Sabonghy, Eric P; Han, Arum; Tasciotti, Ennio; Weiner, Bradley K; Ferrari, Mauro; Righetti, Raffaella
2010-08-21
Ultrasound is emerging as an attractive alternative modality to standard x-ray and CT methods for bone assessment applications. As of today, however, there is a lack of systematic studies that investigate the performance of diagnostic ultrasound techniques in bone imaging applications. This study aims at understanding the performance limitations of new ultrasound techniques for imaging bones in controlled experiments in vitro. Experiments are performed on samples of mammalian and non-mammalian bones with controlled defects with size ranging from 400 µm to 5 mm. Ultrasound findings are statistically compared with those obtained from the same samples using standard x-ray imaging modalities and optical microscopy. The results of this study demonstrate that it is feasible to use diagnostic ultrasound imaging techniques to assess sub-millimeter bone defects in real time and with high accuracy and precision. These results also demonstrate that ultrasound imaging techniques perform comparably to or better than x-ray imaging and optical imaging methods in the assessment of a wide range of controlled defects in both mammalian and non-mammalian bones. In the future, ultrasound imaging techniques might provide a cost-effective, real-time, safe and portable diagnostic tool for bone imaging applications.
Densely calculated facial soft tissue thickness for craniofacial reconstruction in Chinese adults.
Shui, Wuyang; Zhou, Mingquan; Deng, Qingqiong; Wu, Zhongke; Ji, Yuan; Li, Kang; He, Taiping; Jiang, Haiyan
2016-09-01
Craniofacial reconstruction (CFR) is used to recreate a likeness of the original facial appearance for an unidentified skull; this technique has been applied in both forensics and archeology. Many CFR techniques rely on the average facial soft tissue thickness (FSTT) of anatomical landmarks, related to ethnicity, age, sex, body mass index (BMI), etc. Previous studies typically employed FSTT at sparsely distributed anatomical landmarks, where differing landmark definitions can make results difficult to compare. In the present study, a total of 90,198 one-to-one correspondence skull vertices are established on 171 head CT-scans and the FSTT of each corresponding vertex is calculated (hereafter referred to as densely calculated FSTT) for statistical analysis and CFR. Basic descriptive statistics (i.e., mean and standard deviation) for densely calculated FSTT are reported separately according to sex and age. Results show that 76.12% of overall vertices indicate that the FSTT is greater in males than females, with the exception of vertices around the zygoma, zygomatic arch and mid-lateral orbit. These sex-related differences are statistically significant at 55.12% of all vertices, and statistically significant age-related differences are found between the three age groups at a majority of all vertices (73.31% for males and 63.43% for females). Five non-overlapping categories are given and the descriptive statistics (i.e., mean, standard deviation, local standard deviation and percentage) are reported. Multiple appearances are produced using the densely calculated FSTT of various age and sex groups, and a quantitative assessment is provided to examine how relevant the choice of FSTT is to increasing the accuracy of CFR. In conclusion, this study provides a new perspective in understanding the distribution of FSTT and constructs a new densely calculated FSTT database for craniofacial reconstruction. Copyright © 2016. Published by Elsevier Ireland Ltd.
Discrete disorder models for many-body localization
NASA Astrophysics Data System (ADS)
Janarek, Jakub; Delande, Dominique; Zakrzewski, Jakub
2018-04-01
Using the exact diagonalization technique, we investigate the many-body localization phenomenon in the 1D Heisenberg chain comparing several disorder models. In particular we consider a family of discrete distributions of disorder strengths and compare the results with the standard uniform distribution. Both statistical properties of energy levels and the long time nonergodic behavior are discussed. The results for different discrete distributions are essentially identical to those obtained for the continuous distribution, provided the disorder strength is rescaled by the standard deviation of the random distribution. Significant deviations are observed only for the binary distribution.
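The comparison described above can be sketched at toy scale: the snippet below builds a random-field Heisenberg chain with numpy, diagonalizes it exactly, and compares the mean adjacent-gap ratio for uniform disorder against a discrete distribution rescaled to the same standard deviation. The chain length, disorder strength, and the particular 5-valued discrete distribution are illustrative assumptions, not the parameters of the published study.

```python
import numpy as np

# spin-1/2 operators
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
id2 = np.eye(2, dtype=complex)

def site_op(op, i, L):
    """Embed a single-site operator at site i of an L-site chain via Kronecker products."""
    out = np.array([[1.0 + 0j]])
    for j in range(L):
        out = np.kron(out, op if j == i else id2)
    return out

def heisenberg_random_field(L, fields, J=1.0):
    """Open-boundary Heisenberg chain with random on-site fields along z."""
    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L - 1):
        for op in (sx, sy, sz):
            H += J * site_op(op, i, L) @ site_op(op, i + 1, L)
    for i in range(L):
        H += fields[i] * site_op(sz, i, L)
    return H

def mean_gap_ratio(energies):
    """Adjacent-gap ratio r: ~0.39 for Poisson (localized), ~0.53 for GOE (ergodic)."""
    s = np.diff(np.sort(energies))
    s = s[s > 1e-12]
    return float(np.mean(np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])))

rng = np.random.default_rng(0)
L, W, n_realizations = 8, 8.0, 20
values = np.linspace(-1, 1, 5)                 # discrete 5-valued disorder
scale = (W / np.sqrt(3)) / values.std()        # match the uniform distribution's SD
draws = {
    "uniform": lambda: rng.uniform(-W, W, L),
    "discrete (rescaled)": lambda: scale * rng.choice(values, L),
}
for name, draw in draws.items():
    r = [mean_gap_ratio(np.linalg.eigvalsh(heisenberg_random_field(L, draw())))
         for _ in range(n_realizations)]
    print(name, round(float(np.mean(r)), 3))
```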
Application of survival analysis methodology to the quantitative analysis of LC-MS proteomics data.
Tekwe, Carmen D; Carroll, Raymond J; Dabney, Alan R
2012-08-01
Protein abundance in quantitative proteomics is often based on observed spectral features derived from liquid chromatography mass spectrometry (LC-MS) or LC-MS/MS experiments. Peak intensities are largely non-normal in distribution. Furthermore, LC-MS-based proteomics data frequently have large proportions of missing peak intensities due to censoring mechanisms on low-abundance spectral features. Recognizing that the observed peak intensities detected with the LC-MS method are all positive, skewed and often left-censored, we propose using survival methodology to carry out differential expression analysis of proteins. Various standard statistical techniques including non-parametric tests such as the Kolmogorov-Smirnov and Wilcoxon-Mann-Whitney rank sum tests, and the parametric survival model and accelerated failure time model with log-normal, log-logistic and Weibull distributions were used to detect any differentially expressed proteins. The statistical operating characteristics of each method are explored using both real and simulated datasets. Survival methods generally have greater statistical power than standard differential expression methods when the proportion of missing protein level data is 5% or more. In particular, the AFT models we consider consistently achieve greater statistical power than standard testing procedures, with the discrepancy widening with increasing proportions of missing values. The testing procedures discussed in this article can all be performed using readily available software such as R. The R codes are provided as supplemental materials. ctekwe@stat.tamu.edu.
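The censoring idea can be illustrated with a small likelihood sketch: detected peak intensities contribute a lognormal density, while intensities below an assumed detection limit contribute the corresponding cumulative probability. This is a hedged stand-in for the survival/AFT models used in the paper (which were fitted in R); the simulated intensities and detection limit are illustrative assumptions.

```python
import numpy as np
from scipy import stats, optimize

def fit_left_censored_lognormal(intensity, detected, lod):
    """Fit a lognormal to peak intensities with left-censoring at a detection limit:
    detected points contribute the normal log-density of log-intensity, censored
    points contribute the log-CDF at log(LOD)."""
    y = np.log(np.asarray(intensity, float)[np.asarray(detected, bool)])
    n_cens = int((~np.asarray(detected, bool)).sum())
    def nll(theta):
        mu, log_sig = theta
        sig = np.exp(log_sig)
        ll = stats.norm.logpdf(y, mu, sig).sum()
        ll += n_cens * stats.norm.logcdf((np.log(lod) - mu) / sig)
        return -ll
    res = optimize.minimize(nll, x0=[y.mean(), 0.0], method="Nelder-Mead")
    return res.x[0], float(np.exp(res.x[1]))

# hypothetical two-group comparison of one protein's peak intensities
rng = np.random.default_rng(7)
lod = 1.0
g1 = np.exp(rng.normal(0.3, 0.8, 40))
g2 = np.exp(rng.normal(1.0, 0.8, 40))
mu1, _ = fit_left_censored_lognormal(g1, g1 > lod, lod)
mu2, _ = fit_left_censored_lognormal(g2, g2 > lod, lod)
print("estimated log-scale group difference:", round(mu2 - mu1, 2))
```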
NASA Technical Reports Server (NTRS)
Edwards, S. F.; Kantsios, A. G.; Voros, J. P.; Stewart, W. F.
1975-01-01
The development of a radiometric technique for determining the spectral and total normal emittance of materials heated to temperatures of 800, 1100, and 1300 K by direct comparison with National Bureau of Standards (NBS) reference specimens is discussed. Emittances are measured over the spectral range of 1 to 15 microns and are statistically compared with NBS reference specimens. Results are included for NBS reference specimens, Rene 41, alundum, zirconia, AISI type 321 stainless steel, nickel 201, and a space-shuttle reusable surface insulation.
Earthquake prediction evaluation standards applied to the VAN Method
NASA Astrophysics Data System (ADS)
Jackson, David D.
Earthquake prediction research must meet certain standards before it can be suitably evaluated for potential application in decision making. For methods that result in a binary (on or off) alarm condition, requirements include (1) a quantitative description of observables that trigger an alarm, (2) a quantitative description, including ranges of time, location, and magnitude, of the predicted earthquakes, (3) documented evidence of all previous alarms, (4) a complete list of predicted earthquakes, (5) a complete list of unpredicted earthquakes. The VAN technique [Varotsos and Lazaridou, 1991; Varotsos et al., 1996] has not yet been stated as a testable hypothesis. It fails criteria (1) and (2) so it is not ready to be evaluated properly. Although telegrams were transmitted in advance of claimed successes, these telegrams did not fully specify the predicted events, and all of the published statistical evaluations involve many subjective ex post facto decisions. Lacking a statistically demonstrated relationship to earthquakes, a candidate prediction technique should satisfy several plausibility criteria, including: (1) a reasonable relationship between the location of the candidate precursor and that of the predicted earthquake, (2) some demonstration that the candidate precursory observations are related to stress, strain, or other quantities related to earthquakes, and (3) the existence of co-seismic as well as pre-seismic variations of the candidate precursor. The VAN technique meets none of these criteria.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
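A toy version of the ABC idea, assuming a normal generating model, flat proposal ranges, and a simple squared distance on the reported summaries (median, minimum, maximum); the paper's priors, summary sets, and acceptance rule may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_mean_sd(median, minimum, maximum, n, n_draws=50_000, keep=500):
    """ABC rejection sketch: propose (mu, sigma), simulate normal samples of size n,
    keep the proposals whose simulated summaries lie closest to the reported ones,
    and report the means of the kept parameters."""
    scale0 = (maximum - minimum) / 4.0
    mu = rng.uniform(minimum, maximum, n_draws)
    sigma = rng.uniform(1e-3, 2.0 * scale0, n_draws)
    sims = rng.normal(mu[:, None], sigma[:, None], (n_draws, n))
    d = ((np.median(sims, axis=1) - median) ** 2
         + (sims.min(axis=1) - minimum) ** 2
         + (sims.max(axis=1) - maximum) ** 2)
    best = np.argsort(d)[:keep]
    return mu[best].mean(), sigma[best].mean()

# reported summaries from a hypothetical study of size n = 50
print(abc_mean_sd(median=10.0, minimum=4.0, maximum=18.0, n=50))
```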
Allen, Robert C; John, Mallory G; Rutan, Sarah C; Filgueira, Marcelo R; Carr, Peter W
2012-09-07
A singular value decomposition-based background correction (SVD-BC) technique is proposed for the reduction of background contributions in online comprehensive two-dimensional liquid chromatography (LC×LC) data. The SVD-BC technique was compared to simply subtracting a blank chromatogram from a sample chromatogram and to a previously reported background correction technique for one dimensional chromatography, which uses an asymmetric weighted least squares (AWLS) approach. AWLS was the only background correction technique to completely remove the background artifacts from the samples as evaluated by visual inspection. However, the SVD-BC technique greatly reduced or eliminated the background artifacts as well and preserved the peak intensity better than AWLS. The loss in peak intensity by AWLS resulted in lower peak counts at the detection thresholds established using standards samples. However, the SVD-BC technique was found to introduce noise which led to detection of false peaks at the lower detection thresholds. As a result, the AWLS technique gave more precise peak counts than the SVD-BC technique, particularly at the lower detection thresholds. While the AWLS technique resulted in more consistent percent residual standard deviation values, a statistical improvement in peak quantification after background correction was not found regardless of the background correction technique used. Copyright © 2012 Elsevier B.V. All rights reserved.
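One way to sketch an SVD-based background correction (not necessarily the authors' exact SVD-BC algorithm) is to project each sample chromatogram onto the leading singular vectors of a blank run and subtract that low-rank background estimate; the synthetic drift, peak, and rank below are assumptions for illustration.

```python
import numpy as np

def svd_background_correction(sample, blank, rank=3):
    """Estimate the background subspace from a blank run's top right singular vectors,
    project each row of the sample chromatogram onto it, and subtract the projection."""
    _, _, vt = np.linalg.svd(blank, full_matrices=False)
    v = vt[:rank]
    return sample - sample @ v.T @ v

# synthetic demo: smooth drift/solvent background plus one Gaussian peak
rng = np.random.default_rng(4)
t1 = np.arange(60)[:, None]
t2 = np.arange(200)[None, :]
blank = 0.02 * t1 + 0.5 * np.exp(-t2 / 80.0) + 0.01 * rng.normal(size=(60, 200))
peak = 2.0 * np.exp(-((t1 - 30) ** 2) / 8.0 - ((t2 - 100) ** 2) / 50.0)
corrected = svd_background_correction(blank + peak, blank, rank=2)
print(round(float(corrected.max()), 2), "vs. true peak height 2.0")
```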
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, estimates the entire density, and provides an explicit formula for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
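A stripped-down sketch of the regression idea: model the mean with a polynomial in age, model the SD from scaled absolute residuals, and form centile curves as mean ± z·SD. The skewness modelling of the full method is omitted, and the polynomial degree and simulated data are assumptions.

```python
import numpy as np

def age_specific_reference_interval(age, y, z=1.645, degree=2):
    """Regress y on a polynomial in age for the mean, regress scaled absolute residuals
    (E|r| = sigma*sqrt(2/pi) under normality) for the SD, and return a function giving
    (lower, mean, upper) centile curves."""
    age, y = np.asarray(age, float), np.asarray(y, float)
    X = np.vander(age, degree + 1, increasing=True)
    beta_mean, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta_mean
    beta_sd, *_ = np.linalg.lstsq(X, np.abs(resid) * np.sqrt(np.pi / 2.0), rcond=None)
    def centiles(a):
        Xa = np.vander(np.atleast_1d(np.asarray(a, float)), degree + 1, increasing=True)
        m, s = Xa @ beta_mean, Xa @ beta_sd
        return m - z * s, m, m + z * s
    return centiles

rng = np.random.default_rng(11)
age = rng.uniform(20, 80, 400)
y = 3.5 + 0.03 * age + rng.normal(0.0, 0.3 + 0.01 * age)   # SD grows with age
lower, mid, upper = age_specific_reference_interval(age, y)(np.array([30.0, 50.0, 70.0]))
print(np.round(lower, 2), np.round(upper, 2))
```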
Weak Value Amplification is Suboptimal for Estimation and Detection
NASA Astrophysics Data System (ADS)
Ferrie, Christopher; Combes, Joshua
2014-01-01
We show by using statistically rigorous arguments that the technique of weak value amplification does not perform better than standard statistical techniques for the tasks of single parameter estimation and signal detection. Specifically, we prove that postselection, a necessary ingredient for weak value amplification, decreases estimation accuracy and, moreover, arranging for anomalously large weak values is a suboptimal strategy. In doing so, we explicitly provide the optimal estimator, which in turn allows us to identify the optimal experimental arrangement to be the one in which all outcomes have equal weak values (all as small as possible) and the initial state of the meter is the maximal eigenvalue of the square of the system observable. Finally, we give precise quantitative conditions for when weak measurement (measurements without postselection or anomalously large weak values) can mitigate the effect of uncharacterized technical noise in estimation.
How to compare cross-lagged associations in a multilevel autoregressive model.
Schuurman, Noémi K; Ferrer, Emilio; de Boer-Sonnenschein, Mieke; Hamaker, Ellen L
2016-06-01
By modeling variables over time it is possible to investigate the Granger-causal cross-lagged associations between variables. By comparing the standardized cross-lagged coefficients, the relative strength of these associations can be evaluated in order to determine important driving forces in the dynamic system. The aim of this study was twofold: first, to illustrate the added value of a multilevel multivariate autoregressive modeling approach for investigating these associations over more traditional techniques; and second, to discuss how the coefficients of the multilevel autoregressive model should be standardized for comparing the strength of the cross-lagged associations. The hierarchical structure of multilevel multivariate autoregressive models complicates standardization, because subject-based statistics or group-based statistics can be used to standardize the coefficients, and each method may result in different conclusions. We argue that in order to make a meaningful comparison of the strength of the cross-lagged associations, the coefficients should be standardized within persons. We further illustrate the bivariate multilevel autoregressive model and the standardization of the coefficients, and we show that disregarding individual differences in dynamics can prove misleading, by means of an empirical example on experienced competence and exhaustion in persons diagnosed with burnout. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
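The within-person standardization itself is a small calculation once person-specific coefficients and within-person standard deviations are available from the fitted multilevel autoregressive model; the coefficients and SDs below are hypothetical placeholders for such estimates.

```python
import numpy as np

def within_person_standardized(phi_xy, sd_x, sd_y):
    """Standardize person-specific cross-lagged coefficients (y_t on x_{t-1}) using each
    person's within-person SDs rather than grand or group SDs, then summarize."""
    phi_xy, sd_x, sd_y = (np.asarray(a, float) for a in (phi_xy, sd_x, sd_y))
    per_person = phi_xy * sd_x / sd_y
    return per_person, float(per_person.mean())

# hypothetical person-specific estimates from a fitted bivariate multilevel AR model
phi_exh_on_comp = np.array([0.12, 0.30, 0.05])   # exhaustion_t regressed on competence_{t-1}
sd_comp = np.array([1.4, 0.6, 2.0])
sd_exh = np.array([0.9, 1.1, 1.5])
per_person, average = within_person_standardized(phi_exh_on_comp, sd_comp, sd_exh)
print(np.round(per_person, 2), round(average, 2))
```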
Du, Yiping P; Jin, Zhaoyang
2009-10-01
To develop a robust algorithm for tissue-air segmentation in magnetic resonance imaging (MRI) using the statistics of phase and magnitude of the images. A multivariate measure based on the statistics of phase and magnitude was constructed for tissue-air volume segmentation. The standard deviation of first-order phase difference and the standard deviation of magnitude were calculated in a 3 x 3 x 3 kernel in the image domain. To improve differentiation accuracy, the uniformity of phase distribution in the kernel was also calculated and linear background phase introduced by field inhomogeneity was corrected. The effectiveness of the proposed volume segmentation technique was compared to a conventional approach that uses the magnitude data alone. The proposed algorithm was shown to be more effective and robust in volume segmentation in both synthetic phantom and susceptibility-weighted images of human brain. Using our proposed volume segmentation method, veins in the peripheral regions of the brain were well depicted in the minimum-intensity projection of the susceptibility-weighted images. Using the additional statistics of phase, tissue-air volume segmentation can be substantially improved compared to that using the statistics of magnitude data alone. (c) 2009 Wiley-Liss, Inc.
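The two local statistics are straightforward to compute with running means; the sketch below wraps first-order phase differences, computes local standard deviations in a 3×3×3 kernel, and masks tissue voxels. The thresholds and the simple combination rule are illustrative assumptions, not the published algorithm.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(x, size=3):
    """Standard deviation in a size^3 neighbourhood via running means."""
    m = uniform_filter(x, size)
    m2 = uniform_filter(x * x, size)
    return np.sqrt(np.clip(m2 - m * m, 0.0, None))

def tissue_air_mask(complex_img, mag_frac=0.1, phase_sd_max=0.8):
    """Tissue voxels have non-trivial magnitude and smooth phase (small local SD of the
    first-order phase difference), while air has noise-like phase. The local SD of
    magnitude is returned too, since a multivariate rule could also use it."""
    mag = np.abs(complex_img).astype(float)
    phase = np.angle(complex_img)
    dphase = np.diff(phase, axis=-1, prepend=phase[..., :1])
    dphase = np.angle(np.exp(1j * dphase))          # wrap differences into (-pi, pi]
    sd_dphase = local_std(dphase)
    sd_mag = local_std(mag)
    mask = (mag > mag_frac * mag.max()) & (sd_dphase < phase_sd_max)
    return mask, sd_mag, sd_dphase

# synthetic test volume: a bright, smooth-phase cube of 'tissue' inside noisy 'air'
rng = np.random.default_rng(2)
shape = (32, 32, 32)
tissue = np.zeros(shape, bool)
tissue[8:24, 8:24, 8:24] = True
mag = np.where(tissue, 1.0, 0.05) + 0.02 * rng.normal(size=shape)
phase = np.where(tissue, 0.1, rng.uniform(-np.pi, np.pi, shape))
mask, _, _ = tissue_air_mask(mag * np.exp(1j * phase))
print("voxel agreement with ground truth:", round(float((mask == tissue).mean()), 3))
```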
Douglas-fir survival and growth in response to spring planting date and depth
R. O. Strothmann
1971-01-01
Douglas-fir seedlings were planted by four methods in late February and late March on hot, south-facing slopes in northwestern California. Besides standard planting, the techniques used included deep planting at two different depths, and shading the lower stem. The differences in survival after 3 growing seasons were not statistically significant, but deep planting had...
Singla, Sanjeev; Mittal, Geeta; Raghav; Mittal, Rajinder K
2014-01-01
Background: Abdominal pain and shoulder tip pain after laparoscopic cholecystectomy are distressing for the patient. Various causes of this pain are peritoneal stretching and diaphragmatic irritation by high intra-abdominal pressure caused by pneumoperitoneum. We designed a study to compare the postoperative pain after laparoscopic cholecystectomy at low pressure (7-8 mm of Hg) and standard pressure technique (12-14 mm of Hg). Aim: To compare the effect of low pressure and standard pressure pneumoperitoneum on pain after laparoscopic cholecystectomy, and further to study the safety of low pressure pneumoperitoneum in laparoscopic cholecystectomy. Settings and Design: A prospective randomised double blind study. Materials and Methods: A prospective randomised double blind study was done in 100 ASA grade I & II patients. They were divided into two groups of 50 each. Group A patients underwent laparoscopic cholecystectomy with low pressure pneumoperitoneum (7-8 mm Hg) while group B underwent laparoscopic cholecystectomy with standard pressure pneumoperitoneum (12-13 mm Hg). Both the groups were compared for pain intensity, analgesic requirement and complications. Statistical Analysis: Demographic data and intraoperative complications were analysed using the chi-square test. Frequency of pain, intensity of pain and analgesic consumption were compared by applying the ANOVA test. Results: Postoperative pain score was significantly less in the low pressure group as compared to the standard pressure group. The number of patients requiring rescue analgesic doses was higher in the standard pressure group; this was statistically significant. Total analgesic consumption was also higher in the standard pressure group. There was no difference in intraoperative complications. Conclusion: This study demonstrates that the simple expedient of reducing the pressure of pneumoperitoneum to 8 mm Hg results in a reduction in both intensity and frequency of postoperative pain and hence early recovery and a better outcome. This study also shows that the low pressure technique is safe, with a comparable rate of intraoperative complications. PMID:24701492
[The development of hospital medical supplies information management system].
Cao, Shaoping; Gu, Hongqing; Zhang, Peng; Wang, Qiang
2010-05-01
To computerize the information management of medical materials in order to improve the efficiency of medical supplies consumption and to develop a new technical approach to hospital material support. C#.NET and Java were used to develop the hospital material management information system, establish its various management modules, produce the various statistical reports, and standardize operating procedures. The system is convenient, functionally robust, and has fluent statistical functions. It provides timely and complete visibility of the dynamic state of hospital-wide supply operations and serves as a modern and effective tool for hospital materials management.
Fukuda, Haruhisa; Kuroki, Manabu
2016-03-01
To develop and internally validate a surgical site infection (SSI) prediction model for Japan. Retrospective observational cohort study. We analyzed surveillance data submitted to the Japan Nosocomial Infections Surveillance system for patients who had undergone target surgical procedures from January 1, 2010, through December 31, 2012. Logistic regression analyses were used to develop statistical models for predicting SSIs. An SSI prediction model was constructed for each of the procedure categories by statistically selecting the appropriate risk factors from among the collected surveillance data and determining their optimal categorization. Standard bootstrapping techniques were applied to assess potential overfitting. The C-index was used to compare the predictive performances of the new statistical models with those of models based on conventional risk index variables. The study sample comprised 349,987 cases from 428 participant hospitals throughout Japan, and the overall SSI incidence was 7.0%. The C-indices of the new statistical models were significantly higher than those of the conventional risk index models in 21 (67.7%) of the 31 procedure categories (P<.05). No significant overfitting was detected. Japan-specific SSI prediction models were shown to generally have higher accuracy than conventional risk index models. These new models may have applications in assessing hospital performance and identifying high-risk patients in specific procedure categories.
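A generic sketch of this kind of workflow, fitting a logistic model and bootstrap-correcting its C-index (AUC) for optimism; the simulated predictors, outcome model, and scikit-learn implementation are assumptions standing in for the surveillance variables and software actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def optimism_corrected_cindex(X, y, n_boot=200, seed=0):
    """Optimism bootstrap: for each resample, refit the model, then average
    (AUC on the resample - AUC of that refit on the original data); the
    corrected C-index is the apparent AUC minus this optimism."""
    rng = np.random.default_rng(seed)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if len(np.unique(y[idx])) < 2:
            continue                      # need both outcome classes to fit and score
        m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
        auc_boot = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
        auc_orig = roc_auc_score(y, m.predict_proba(X)[:, 1])
        optimism.append(auc_boot - auc_orig)
    return apparent, apparent - float(np.mean(optimism))

# hypothetical risk factors standing in for the surveillance variables
rng = np.random.default_rng(42)
X = rng.normal(size=(800, 4))
logit = X @ np.array([0.5, 0.8, 0.3, 0.6]) - 2.6
y = (rng.random(800) < 1.0 / (1.0 + np.exp(-logit))).astype(int)
apparent, corrected = optimism_corrected_cindex(X, y, n_boot=100)
print(round(apparent, 3), round(corrected, 3))
```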
21 CFR 820.250 - Statistical techniques.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Statistical techniques. 820.250 Section 820.250...) MEDICAL DEVICES QUALITY SYSTEM REGULATION Statistical Techniques § 820.250 Statistical techniques. (a... statistical techniques required for establishing, controlling, and verifying the acceptability of process...
21 CFR 820.250 - Statistical techniques.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Statistical techniques. 820.250 Section 820.250...) MEDICAL DEVICES QUALITY SYSTEM REGULATION Statistical Techniques § 820.250 Statistical techniques. (a... statistical techniques required for establishing, controlling, and verifying the acceptability of process...
20 CFR 634.4 - Statistical standards.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 20 Employees' Benefits 3 2011-04-01 2011-04-01 false Statistical standards. 634.4 Section 634.4... System § 634.4 Statistical standards. Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs. ...
20 CFR 634.4 - Statistical standards.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 20 Employees' Benefits 3 2010-04-01 2010-04-01 false Statistical standards. 634.4 Section 634.4... System § 634.4 Statistical standards. Recipients shall agree to provide required data following the statistical standards prescribed by the Bureau of Labor Statistics for cooperative statistical programs. ...
Haile, Tariku Gebre
2017-01-01
Background. In many studies, compliance with standard precautions among healthcare workers was reported to be inadequate. Objective. The aim of this study was to assess compliance with standard precautions and associated factors among healthcare workers in northwest Ethiopia. Methods. An institution-based cross-sectional study was conducted from March 01 to April 30, 2014. Simple random sampling technique was used to select participants. Data were entered into Epi info 3.5.1 and were exported to SPSS version 20.0 for statistical analysis. Multivariate logistic regression analyses were computed and adjusted odds ratio with 95% confidence interval was calculated to identify associated factors. Results. The proportion of healthcare workers who always comply with standard precautions was found to be 12%. Being a female healthcare worker (AOR [95% CI] 2.18 [1.12–4.23]), higher infection risk perception (AOR [95% CI] 3.46 [1.67–7.18]), training on standard precautions (AOR [95% CI] 2.90 [1.20–7.02]), accessibility of personal protective equipment (AOR [95% CI] 2.87 [1.41–5.86]), and management support (AOR [95% CI] 2.23 [1.11–4.53]) were found to be statistically significant. Conclusion and Recommendation. Compliance with standard precautions among the healthcare workers is very low. Interventions which include training of healthcare workers on standard precautions and consistent management support are recommended. PMID:28191020
Hayat, Matthew J.; Powell, Amanda; Johnson, Tessa; Cadwell, Betsy L.
2017-01-01
Statistical literacy and knowledge is needed to read and understand the public health literature. The purpose of this study was to quantify basic and advanced statistical methods used in public health research. We randomly sampled 216 published articles from seven top tier general public health journals. Studies were reviewed by two readers and a standardized data collection form completed for each article. Data were analyzed with descriptive statistics and frequency distributions. Results were summarized for statistical methods used in the literature, including descriptive and inferential statistics, modeling, advanced statistical techniques, and statistical software used. Approximately 81.9% of articles reported an observational study design and 93.1% of articles were substantively focused. Descriptive statistics in table or graphical form were reported in more than 95% of the articles, and statistical inference reported in more than 76% of the studies reviewed. These results reveal the types of statistical methods currently used in the public health literature. Although this study did not obtain information on what should be taught, information on statistical methods being used is useful for curriculum development in graduate health sciences education, as well as making informed decisions about continuing education for public health professionals. PMID:28591190
Abu-Tahun, Ibrahim; El-Ma'aita, Ahmad; Khraisat, Ameen
2016-08-01
The aim of this study was to report the satisfaction of fifth-year undergraduate students with the clinical use of rotary endodontic preparation compared with the standard stainless steel technique, and to evaluate the impact of rotary nickel-titanium instruments on undergraduate teaching. This study was carried out by the fifth-year undergraduate students attending peer review sessions as a part of their training program, using a questionnaire to assess their satisfaction with these two techniques. The overall results indicated a statistically significant preference of the undergraduate students for the nickel-titanium system (P < 0.001) compared to the standard stainless steel technique. Under the conditions of this study, the results showed a positive acceptance and consensus among novice dental students regarding the use of ProTaper rotary files and the need for undergraduate teaching of rotary nickel-titanium systems in Jordan. © 2015 Australian Society of Endodontology Inc.
Hoch, Jeffrey S; Briggs, Andrew H; Willan, Andrew R
2002-07-01
Economic evaluation is often seen as a branch of health economics divorced from mainstream econometric techniques. Instead, it is perceived as relying on statistical methods for clinical trials. Furthermore, the statistic of interest in cost-effectiveness analysis, the incremental cost-effectiveness ratio, is not amenable to regression-based methods, hence the traditional reliance on comparing aggregate measures across the arms of a clinical trial. In this paper, we explore the potential for health economists undertaking cost-effectiveness analysis to exploit the plethora of established econometric techniques through the use of the net-benefit framework, a recently suggested reformulation of the cost-effectiveness problem that avoids the reliance on cost-effectiveness ratios and their associated statistical problems. This allows the formulation of the cost-effectiveness problem within a standard regression type framework. We provide an example with empirical data to illustrate how a regression type framework can enhance the net-benefit method. We go on to suggest that practical advantages of the net-benefit regression approach include being able to use established econometric techniques, adjust for imperfect randomisation, and identify important subgroups in order to estimate the marginal cost-effectiveness of an intervention. Copyright 2002 John Wiley & Sons, Ltd.
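A minimal net-benefit regression sketch, assuming individual-level costs and effects from a two-arm trial and a chosen willingness-to-pay value; covariates for imperfect randomisation or subgroup interaction terms could be added to the design matrix in the same way. The trial data are simulated for illustration.

```python
import numpy as np
import statsmodels.api as sm

def net_benefit_regression(cost, effect, treat, wtp):
    """Regress individual net benefit NB_i = wtp*E_i - C_i on a treatment indicator;
    the treatment coefficient estimates the incremental net benefit at that
    willingness-to-pay value."""
    nb = wtp * np.asarray(effect, float) - np.asarray(cost, float)
    X = sm.add_constant(np.asarray(treat, float))
    return sm.OLS(nb, X).fit()

# hypothetical trial data (costs in dollars, effects in QALYs)
rng = np.random.default_rng(5)
treat = np.repeat([0, 1], 100)
cost = rng.normal(9000, 1500, 200) + 1200 * treat
effect = rng.normal(0.60, 0.10, 200) + 0.05 * treat
res = net_benefit_regression(cost, effect, treat, wtp=50_000)
print(round(res.params[1], 0), round(res.bse[1], 0))   # incremental net benefit and its SE
```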
Stavileci, Miranda; Hoxha, Veton; Görduysus, Ömer; Tatar, Ilkan; Laperre, Kjell; Hostens, Jeroen; Küçükkaya, Selen; Muhaxheri, Edmond
2015-01-01
Background Complete mechanical preparation of the root canal system is rarely achieved. Therefore, the purpose of this study was to evaluate and compare the root canal shaping efficacy of ProTaper rotary files and standard stainless steel K-files using micro-computed tomography. Material/Methods Sixty extracted upper second premolars were selected and divided into 2 groups of 30 teeth each. Before preparation, all samples were scanned by micro-computed tomography. Thirty teeth were prepared with the ProTaper system and the other 30 with stainless steel files. After preparation, the untouched surface and root canal straightening were evaluated with micro-computed tomography. The percentage of untouched root canal surface was calculated in the coronal, middle, and apical parts of the canal. We also calculated straightening of the canal after root canal preparation. Results from the 2 groups were statistically compared using the Minitab statistical package. Results ProTaper rotary files left less untouched root canal surface compared with manual preparation in the coronal, middle, and apical sectors (p<0.001). Similarly, there was a statistically significant difference in root canal straightening after preparation between the techniques (p<0.001). Conclusions Neither manual nor rotary techniques completely prepared the root canal, and both techniques caused slight straightening of the root canal. PMID:26092929
A Monte Carlo Simulation Study of the Reliability of Intraindividual Variability
Estabrook, Ryne; Grimm, Kevin J.; Bowles, Ryan P.
2012-01-01
Recent research has seen intraindividual variability (IIV) become a useful technique to incorporate trial-to-trial variability into many types of psychological studies. IIV as measured by individual standard deviations (ISDs) has shown unique prediction to several types of positive and negative outcomes (Ram, Rabbit, Stollery, & Nesselroade, 2005). One unanswered question regarding measuring intraindividual variability is its reliability and the conditions under which optimal reliability is achieved. Monte Carlo simulation studies were conducted to determine the reliability of the ISD compared to the intraindividual mean. The results indicate that ISDs generally have poor reliability and are sensitive to insufficient measurement occasions, poor test reliability, and unfavorable amounts and distributions of variability in the population. Secondary analysis of psychological data shows that use of individual standard deviations in unfavorable conditions leads to a marked reduction in statistical power, although careful adherence to underlying statistical assumptions allows their use as a basic research tool. PMID:22268793
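A compact version of such a simulation: each simulated person gets a true within-person SD, the ISD is computed separately on odd and even trials, and the Spearman-Brown-corrected split-half correlation serves as a reliability estimate. The population values and design are illustrative, not those of the published study.

```python
import numpy as np

def isd_splithalf_reliability(n_persons=500, n_trials=20, sd_between=1.0,
                              mean_within_sd=1.0, sd_of_within_sd=0.3, seed=0):
    """Simulate trial-level scores, compute individual standard deviations (ISDs) on odd
    and even trials, and return the Spearman-Brown-corrected split-half correlation."""
    rng = np.random.default_rng(seed)
    true_sd = np.abs(rng.normal(mean_within_sd, sd_of_within_sd, n_persons))
    means = rng.normal(0.0, sd_between, n_persons)
    trials = rng.normal(means[:, None], true_sd[:, None], (n_persons, n_trials))
    isd_odd = trials[:, 0::2].std(axis=1, ddof=1)
    isd_even = trials[:, 1::2].std(axis=1, ddof=1)
    r = np.corrcoef(isd_odd, isd_even)[0, 1]
    return 2 * r / (1 + r)

# reliability improves markedly with more measurement occasions
print(round(isd_splithalf_reliability(n_trials=10), 2),
      round(isd_splithalf_reliability(n_trials=60), 2))
```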
ERIC Educational Resources Information Center
Vinson, R. B.
In this report, the author suggests changes in the treatment of overhead costs by hypothesizing that "the effectiveness of standard costing in planning and controlling overhead costs can be increased through the use of probability theory and associated statistical techniques." To test the hypothesis, the author (1) presents an overview of the…
Randall, Sean M; Ferrante, Anna M; Boyd, James H; Brown, Adrian P; Semmens, James B
2016-08-01
The statistical linkage key (SLK-581) is a common tool for record linkage in Australia, due to its ability to provide some privacy protection. However, newer privacy-preserving approaches may provide greater privacy protection, while allowing high-quality linkage. To evaluate the standard SLK-581, encrypted SLK-581 and a newer privacy-preserving approach using Bloom filters, in terms of both privacy and linkage quality. Linkage quality was compared by conducting linkages on Australian health datasets using these three techniques and examining results. Privacy was compared qualitatively in relation to a series of scenarios where privacy breaches may occur. The Bloom filter technique offered greater privacy protection and linkage quality compared to the SLK-based method commonly used in Australia. The adoption of new privacy-preserving methods would allow both greater confidence in research results, while significantly improving privacy protection. © The Author(s) 2016.
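The Bloom filter approach can be sketched as follows: character bigrams of an identifier are hashed into k positions of an m-bit filter, and encoded records are compared with a Dice coefficient. The parameters and double-hashing scheme are typical choices for privacy-preserving record linkage, not necessarily those of the evaluated implementation.

```python
import hashlib

def bigrams(s):
    s = " " + s.lower().strip() + " "            # pad so edge characters contribute
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(value, m=256, k=10):
    """Hash each character bigram into k positions of an m-bit Bloom filter,
    using the common double-hashing trick with MD5 and SHA-1."""
    bits = set()
    for g in bigrams(value):
        h1 = int(hashlib.md5(g.encode()).hexdigest(), 16)
        h2 = int(hashlib.sha1(g.encode()).hexdigest(), 16)
        for i in range(k):
            bits.add((h1 + i * h2) % m)
    return bits

def dice(a, b):
    """Similarity of two encoded records; tolerant of small spelling variations."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

print(round(dice(bloom_encode("robert smith 19700101"),
                 bloom_encode("robert smyth 19700101")), 2))
```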
Probabilistic registration of an unbiased statistical shape model to ultrasound images of the spine
NASA Astrophysics Data System (ADS)
Rasoulian, Abtin; Rohling, Robert N.; Abolmaesumi, Purang
2012-02-01
The placement of an epidural needle is among the most difficult regional anesthetic techniques. Ultrasound has been proposed to improve success of placement. However, it has not become the standard-of-care because of limitations in the depictions and interpretation of the key anatomical features. We propose to augment the ultrasound images with a registered statistical shape model of the spine to aid interpretation. The model is created with a novel deformable group-wise registration method which utilizes a probabilistic approach to register groups of point sets. The method is compared to a volume-based model building technique and it demonstrates better generalization and compactness. We instantiate and register the shape model to a spine surface probability map extracted from the ultrasound images. Validation is performed on human subjects. The achieved registration accuracy (2-4 mm) is sufficient to guide the choice of puncture site and trajectory of an epidural needle.
The method of expected number of deaths, 1786-1886-1986.
Keiding, N
1987-04-01
"The method of expected number of deaths is an integral part of standardization of vital rates, which is one of the oldest statistical techniques. The expected number of deaths was calculated in 18th century actuarial mathematics...but the method seems to have been forgotten, and was reinvented in connection with 19th century studies of geographical and occupational variations of mortality.... It is noted that standardization of rates is intimately connected to the study of relative mortality, and a short description of very recent developments in the methodology of that area is included." (SUMMARY IN FRE) excerpt
On evaluating compliance with air pollution levels 'not to be exceeded more than once per year'
NASA Technical Reports Server (NTRS)
Neustadter, H. E.; Sidik, S. M.
1974-01-01
The point of view taken is that the Environmental Protection Agency (EPA) Air Quality Standards (AQS) represent conditions which must be made to exist in the ambient environment. The statistical techniques developed should serve as tools for measuring the closeness to achieving the desired quality of air. It is shown that the sampling frequency recommended by EPA is inadequate to meet these objectives when the standard is expressed as a level not to be exceeded more than once per year and sampling frequency is once every three days or less frequent.
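The sampling-frequency concern can be illustrated with a simple probability calculation: if exceedances actually occur on d of 365 days and a monitor reports only 122 of those days (treated here as a random subset, a simplifying assumption relative to a fixed every-third-day schedule), the chance that the record shows at most one exceedance, i.e., apparent compliance, follows a hypergeometric distribution. The values of d are illustrative.

```python
from scipy.stats import hypergeom

# P(at most 1 exceedance observed) when sampling 122 of 365 days at random
for d in (2, 4, 8, 12):                       # true number of exceedance days
    p_apparent_compliance = hypergeom.cdf(1, 365, d, 122)
    print(d, round(float(p_apparent_compliance), 3))
```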
Precise measurement of the half-life of the Fermi β decay of ²⁶Alᵐ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scott, Rebecca J.; Thompson, Maxwell N.; Rassool, Roger P.
2011-08-15
State-of-the-art signal digitization and analysis techniques have been used to measure the half-life of the Fermi β decay of ²⁶Alᵐ. The half-life was determined to be 6347.8 ± 2.5 ms. This new datum contributes to the experimental testing of the conserved-vector-current hypothesis and the required unitarity of the Cabibbo-Kobayashi-Maskawa matrix: two essential components of the standard model. Detailed discussion of the experimental techniques and data analysis and a thorough investigation of the statistical and systematic uncertainties are presented.
Implementing statistical equating for MRCP(UK) Parts 1 and 2.
McManus, I C; Chis, Liliana; Fox, Ray; Waller, Derek; Tang, Peter
2014-09-26
The MRCP(UK) exam, in 2008 and 2010, changed the standard-setting of its Part 1 and Part 2 examinations from a hybrid Angoff/Hofstee method to statistical equating using Item Response Theory, the reference group being UK graduates. The present paper considers the implementation of the change, the question of whether the pass rate increased amongst non-UK candidates, any possible role of Differential Item Functioning (DIF), and changes in examination predictive validity after the change. Analysis of data from the MRCP(UK) Part 1 exam from 2003 to 2013 and the Part 2 exam from 2005 to 2013. Inspection suggested that Part 1 pass rates were stable after the introduction of statistical equating, but showed greater annual variation probably due to stronger candidates taking the examination earlier. Pass rates seemed to have increased in non-UK graduates after equating was introduced, but this was not associated with any changes in DIF after statistical equating. Statistical modelling of the pass rates for non-UK graduates found that pass rates, in both Part 1 and Part 2, were increasing year on year, with the changes probably beginning before the introduction of equating. The predictive validity of Part 1 for Part 2 was higher with statistical equating than with the previous hybrid Angoff/Hofstee method, confirming the utility of IRT-based statistical equating. Statistical equating was successfully introduced into the MRCP(UK) Part 1 and Part 2 written examinations, resulting in higher predictive validity than the previous Angoff/Hofstee standard setting. Concerns about an artefactual increase in pass rates for non-UK candidates after equating were shown not to be well-founded. Most likely the changes resulted from a genuine increase in candidate ability, albeit for reasons which remain unclear, coupled with a cognitive illusion giving the impression of a step-change immediately after equating began. Statistical equating provides a robust standard-setting method, with a better theoretical foundation than judgemental techniques such as Angoff, and is more straightforward and requires far less examiner time to provide a more valid result. The present study provides a detailed case study of introducing statistical equating, and issues which may need to be considered with its introduction.
Sutlive, Thomas G; Mabry, Lance M; Easterling, Emmanuel J; Durbin, Jose D; Hanson, Stephen L; Wainner, Robert S; Childs, John D
2009-07-01
To determine whether military health care beneficiaries with low back pain (LBP) who are likely to respond successfully to spinal manipulation experience a difference in short-term clinical outcomes based on the manipulation technique that is used. Sixty patients with LBP identified as likely responders to manipulation underwent a standardized clinical examination and were randomized to receive a lumbopelvic (LP) or lumbar neutral gap (NG) manipulation technique. Outcome measures were a numeric pain rating scale and the modified Oswestry Disability Questionnaire. Both the LP and NG groups experienced statistically significant reductions in pain and disability at 48 hours postmanipulation. The improvements seen in each group were small because of the short follow-up. There were no statistically significant or clinically meaningful differences in pain or disability between the two groups. The two manipulation techniques used in this study were equally effective at reducing pain and disability when compared at 48 hours posttreatment. Clinicians may employ either technique for the treatment of LBP and can expect similar outcomes in those who satisfy the clinical prediction rule (CPR). Further research is required to determine whether differences exist at longer-term follow-up periods, after multiple treatment sessions, or in different clinical populations.
Mixing of thawed coagulation samples prior to testing: Is any technique better than another?
Lima-Oliveira, Gabriel; Adcock, Dorothy M; Salvagno, Gian Luca; Favaloro, Emmanuel J; Lippi, Giuseppe
2016-12-01
This study aimed to investigate whether the mixing technique could influence the results of routine and specialized clotting tests on post-thawed specimens. The sample population consisted of 13 healthy volunteers. Venous blood was collected by an evacuated system into three 3.5 mL tubes containing 0.109 mol/L buffered sodium citrate. The three blood tubes of each subject were pooled immediately after collection inside a Falcon 15 mL tube, then mixed by 6 gentle end-over-end inversions, and centrifuged at 1500 g for 15 min. The plasma pool of each subject was then divided into 4 identical aliquots. All aliquots were thawed after 2 days of freezing at -70°C. Immediately afterwards, the plasma of the four paired aliquots was treated using four different techniques: (a) reference procedure, entailing 6 gentle end-over-end inversions; (b) placing the sample on a blood tube rocker (i.e., rotor mixing) for 5 min to induce agitation and mixing; (c) use of a vortex mixer for 20 s to induce agitation and mixing; and (d) no mixing. The significance of differences against the reference technique for mixing thawed plasma specimens (i.e., 6 gentle end-over-end inversions) was assessed with a paired Student's t-test. Statistical significance was set at p<0.05. As compared to the reference 6-time gentle inversion technique, statistically significant differences were only observed for fibrinogen and factor VIII in plasma mixed on the tube rocker. Some trends were observed in the remaining cases, but the bias did not achieve statistical significance. We hence suggest that each laboratory should standardize the procedures for mixing of thawed plasma according to a single technique. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Ostrander, Roger V; McKinney, Bart I
2012-10-01
Studies suggest that arthroscopic repair techniques may have high recurrence rates for larger rotator cuff tears. A more anatomic repair may improve the success rate when performing arthroscopic rotator cuff repair. We hypothesized that a triple-row modification of the suture-bridge technique for rotator cuff repair would result in significantly more footprint contact area and pressure between the rotator cuff and the humeral tuberosity. Eighteen ovine infraspinatus tendons were repaired using 1 of 3 simulated arthroscopic techniques: a double-row repair, the suture-bridge technique, and a triple-row repair. The triple-row repair technique is a modification of the suture-bridge technique that uses an additional reducing anchor between the medial and lateral rows. Six samples were tested per group. Pressure-indicating film was used to measure the footprint contact area and pressure after each repair. The triple-row repair resulted in significantly more rotator cuff footprint contact area and contact pressure compared with the double-row technique and the standard suture-bridge technique. No statistical difference in contact area or contact pressure was found between the double-row technique and the suture-bridge technique. The triple-row technique for rotator cuff repair results in significantly more footprint contact area and contact pressure compared with the double-row and standard suture-bridge techniques. This more anatomic repair may improve the healing rate when performing arthroscopic rotator cuff repair. Copyright © 2012 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Mosby, Inc. All rights reserved.
Attenberger, Ulrike I; Runge, Val M; Williams, Kenneth D; Stemmer, Alto; Michaely, Henrik J; Schoenberg, Stefan O; Reiser, Maximilian F; Wintersperger, Bernd J
2009-03-01
Motion artifacts often markedly degrade image quality in clinical scans. The BLADE technique offers an alternative k-space sampling scheme reducing the effect of patient related motion on image quality. The purpose of this study is the comparison of imaging artifacts, signal-to-noise (SNR), and contrast-to-noise ratio (CNR) of a new turboFLASH BLADE k-space trajectory with the standard Cartesian k-space sampling for brain imaging, using a 32-channel coil at 3T. The results from 32 patients included after informed consent are reported. This study was performed with a 32-channel head coil on a 3T scanner. Sagittal and axial T1-weighted FLASH sequences (TR/TE 250/2.46 milliseconds, flip angle 70-degree), acquired with Cartesian k-space sampling and T1-weighted turboFLASH sequences (TR/TE/TIsag/TIax 3200/2.77/1144/1056 milliseconds, flip angle 20-degree), using PROPELLER (BLADE) k-space trajectory, were compared. SNR and CNR were evaluated using a paired student t test. The frequency of motion artifacts was assessed in a blinded read. To analyze the differences between both techniques a McNemar test was performed. A P value <0.05 was considered statistically significant. From the blinded read, the overall preference in terms of diagnostic image quality was statistically significant in favor of the BLADE turboFLASH data sets, compared with standard FLASH for both sagittal (P < 0.0001) and axial (P < 0.0001) planes. The frequency of motion artifacts from the scalp was higher for standard FLASH sequences than for BLADE sequences on both axial (47%, P < 0.0003) and sagittal (69%, P < 0.0001) planes. BLADE was preferred in 100% (sagittal plane) and 80% (axial plane) of in-patient data sets and in 68% (sagittal plane) and 73% (axial plane) of out-patient data sets. The BLADE T1 scan did have lower SNRmean (BLADEax 179 ± 98, Cartesianax 475 ± 145, BLADEsag 171 ± 51, and Cartesiansag 697 ± 129) with P values indicating accordingly a statistically significant difference (Pax < 0.0001, Psag < 0.0001), because of the fundamental difference in imaging approach (FLASH vs. turboFLASH). Differences for CNR were also statistically significant, independent of imaging plane (Pax = 0.001, Psag = 0.02). Results demonstrate that turboFLASH BLADE is applicable at 3T with a 32-channel head coil for T1-weighted imaging, with reduced ghost artifacts. This approach offers the first truly clinically applicable T1-weighted BLADE technique for brain imaging at 3T, with consistent excellent image quality.
The Statistical Loop Analyzer (SLA)
NASA Technical Reports Server (NTRS)
Lindsey, W. C.
1985-01-01
The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements, the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal dropout tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.
Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
This paper describes a novel approach based on the use of coherence functions and statistical theory for sensor validation in a harsh environment. By the use of aligned and unaligned coherence functions and statistical theory one can test for sensor degradation, total sensor failure or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provides a single number that conveys this information. This number as calculated with standard statistical procedures for comparing the means of two distributions is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor using spectrum analysis methods on aligned and unaligned time histories has verified the effectiveness of the proposed method. All the procedures produce good results which demonstrates how robust the technique is.
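As a rough illustration of the aligned/unaligned idea (not the authors' exact processing chain), magnitude-squared coherence can be computed between two channels on time-aligned segments and on deliberately shifted segments, and the two sets of values compared with a standard test of means. The signals, sampling rate, and shift below are assumptions for the sketch.

```python
# Minimal sketch: aligned vs. unaligned coherence between two pressure channels.
import numpy as np
from scipy.signal import coherence
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
fs = 2048.0
n = 16 * 2048
common = rng.normal(size=n)                      # shared combustor-like signal
ch1 = common + 0.5 * rng.normal(size=n)          # sensor 1
ch2 = common + 0.5 * rng.normal(size=n)          # sensor 2

# Aligned coherence: both channels use the same time window.
f, coh_aligned = coherence(ch1, ch2, fs=fs, nperseg=1024)

# Unaligned coherence: one channel shifted so shared content no longer lines up.
shift = 4096
f, coh_unaligned = coherence(ch1[:-shift], ch2[shift:], fs=fs, nperseg=1024)

# Compare the mean coherence of the two conditions (illustrative test of means).
t_stat, p_value = ttest_ind(coh_aligned, coh_unaligned, equal_var=False)
print(f"mean aligned = {coh_aligned.mean():.3f}, "
      f"mean unaligned = {coh_unaligned.mean():.3f}, P = {p_value:.3g}")
```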
Measurement of the relationship between perceived and computed color differences
NASA Astrophysics Data System (ADS)
García, Pedro A.; Huertas, Rafael; Melgosa, Manuel; Cui, Guihua
2007-07-01
Using simulated data sets, we have analyzed some mathematical properties of different statistical measurements that have been employed in previous literature to test the performance of different color-difference formulas. Specifically, the properties of the combined index PF/3 (performance factor obtained as average of three terms), widely employed in current literature, have been considered. A new index named standardized residual sum of squares (STRESS), employed in multidimensional scaling techniques, is recommended. The main difference between PF/3 and STRESS is that the latter is simpler and allows inferences on the statistical significance of two color-difference formulas with respect to a given set of visual data.
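A commonly quoted form of the STRESS index between computed color differences ΔE_i and visual differences ΔV_i is STRESS = 100·[Σ(ΔE_i − F1·ΔV_i)² / Σ(F1·ΔV_i)²]^(1/2) with F1 = Σ(ΔE_i·ΔV_i)/Σ(ΔV_i²). The sketch below implements that form; it should be checked against the paper before being relied on.

```python
# Sketch of the STRESS index for a color-difference formula vs. visual data.
# Formula as commonly quoted in the color-difference literature; verify against
# the original paper before relying on it.
import numpy as np

def stress(delta_e, delta_v):
    delta_e = np.asarray(delta_e, dtype=float)
    delta_v = np.asarray(delta_v, dtype=float)
    f1 = np.sum(delta_e * delta_v) / np.sum(delta_v ** 2)   # optimal scaling factor
    num = np.sum((delta_e - f1 * delta_v) ** 2)
    den = np.sum((f1 * delta_v) ** 2)
    return 100.0 * np.sqrt(num / den)

# Illustrative data: perfect proportionality gives STRESS = 0.
dv = np.array([1.0, 2.0, 3.0, 4.0])
print(stress(2.0 * dv, dv))          # 0.0
print(stress(2.0 * dv + 0.3, dv))    # small positive value
```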
Statistical mechanics of broadcast channels using low-density parity-check codes.
Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David
2003-03-01
We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.
NASA Astrophysics Data System (ADS)
Kanniyappan, Udayakumar; Gnanatheepaminstein, Einstein; Prakasarao, Aruna; Dornadula, Koteeswaran; Singaravelu, Ganesan
2017-02-01
Cancer is one of the most common human threats around the world, and diagnosis based on optical spectroscopy, especially the fluorescence technique, has been established as a standard approach among scientists for exploring the biochemical and morphological changes in tissues. In this regard, the present work aims to extract spectral signatures of the various fluorophores present in oral tissues using parallel factor analysis (PARAFAC). Subsequently, statistical analysis will also be performed to show its diagnostic potential in distinguishing malignant and premalignant from normal oral tissues. Hence, the present study may lead to a possible and/or alternative tool for oral cancer diagnosis.
Agundu, Prince Umor C
2003-01-01
Public health dispensaries in Nigeria in recent times have demonstrated the poise to boost corporate productivity in the new millennium and to drive the nation closer to concretising the lofty goal of health-for-all. This is very pronounced considering the face-lift given to the physical environment, the increase in the recruitment and development of professionals, and the upward review of financial subventions. However, there is little or no emphasis on basic statistical appreciation/application, which enhances the decision-making ability of corporate executives. This study used the responses from 120 senior public health officials in Nigeria and analyzed them with the chi-square statistical technique. The results established low statistical aptitude, inadequate statistical training programmes, and little/no emphasis on statistical literacy compared to computer literacy, amongst others. Consequently, it was recommended that these lapses be promptly addressed to enhance official executive performance in the establishments. Basic statistical data presentation typologies have been articulated in this study to serve as first-aid instructions to the target group, as they represent the contributions of eminent scholars in this area of intellectualism.
Statistical analysis of fNIRS data: a comprehensive review.
Tak, Sungho; Ye, Jong Chul
2014-01-15
Functional near-infrared spectroscopy (fNIRS) is a non-invasive method to measure brain activities using the changes of optical absorption in the brain through the intact skull. fNIRS has many advantages over other neuroimaging modalities such as positron emission tomography (PET), functional magnetic resonance imaging (fMRI), or magnetoencephalography (MEG), since it can directly measure blood oxygenation level changes related to neural activation with high temporal resolution. However, fNIRS signals are highly corrupted by measurement noises and physiology-based systemic interference. Careful statistical analyses are therefore required to extract neuronal activity-related signals from fNIRS data. In this paper, we provide an extensive review of historical developments of statistical analyses of fNIRS signal, which include motion artifact correction, short source-detector separation correction, principal component analysis (PCA)/independent component analysis (ICA), false discovery rate (FDR), serially-correlated errors, as well as inference techniques such as the standard t-test, F-test, analysis of variance (ANOVA), and statistical parameter mapping (SPM) framework. In addition, to provide a unified view of various existing inference techniques, we explain a linear mixed effect model with restricted maximum likelihood (ReML) variance estimation, and show that most of the existing inference methods for fNIRS analysis can be derived as special cases. Some of the open issues in statistical analysis are also described. Copyright © 2013 Elsevier Inc. All rights reserved.
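Among the inference techniques listed above, the simplest building block is a single-channel GLM t test: convolve the stimulus timing with a canonical hemodynamic response, fit by ordinary least squares, and test the task regressor. A stripped-down sketch follows; the HRF parameters, sampling rate, and block design are assumptions, and motion correction, prewhitening, and the ReML mixed-model machinery are omitted.

```python
# Minimal single-channel GLM t-test sketch for an fNIRS-like time series.
import numpy as np
from scipy.stats import gamma, t as t_dist

fs = 10.0                                   # sampling rate (Hz), assumed
n = 3000
time = np.arange(n) / fs

# Canonical double-gamma HRF (SPM-like parameters, assumed).
hrf_t = np.arange(0, 32, 1 / fs)
hrf = gamma.pdf(hrf_t, 6) - gamma.pdf(hrf_t, 16) / 6.0
hrf /= hrf.sum()

# Boxcar task regressor convolved with the HRF.
boxcar = ((time % 60) < 20).astype(float)   # 20 s on / 40 s off blocks
task = np.convolve(boxcar, hrf)[:n]

# Synthetic oxy-Hb signal: task response + drift + noise.
rng = np.random.default_rng(2)
y = 0.5 * task + 0.001 * time + rng.normal(0, 0.3, n)

# Design matrix: task regressor, linear drift, intercept; OLS fit and t-test.
X = np.column_stack([task, time, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof
cov = sigma2 * np.linalg.inv(X.T @ X)
t_stat = beta[0] / np.sqrt(cov[0, 0])
p_value = 2 * t_dist.sf(abs(t_stat), dof)
print(f"task beta = {beta[0]:.3f}, t = {t_stat:.1f}, P = {p_value:.2g}")
```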
NASA Technical Reports Server (NTRS)
Currit, P. A.
1983-01-01
The Cleanroom software development methodology is designed to take the gamble out of product releases for both suppliers and receivers of the software. The ingredients of this procedure are a life cycle of executable product increments, representative statistical testing, and a standard estimate of the MTTF (Mean Time To Failure) of the product at the time of its release. A statistical approach to software product testing using randomly selected samples of test cases is considered. A statistical model is defined for the certification process which uses the timing data recorded during test. A reasonableness argument for this model is provided that uses previously published data on software product execution. Also included is a derivation of the certification model estimators and a comparison of the proposed least squares technique with the more commonly used maximum likelihood estimators.
Virtual lab demonstrations improve students' mastery of basic biology laboratory techniques.
Maldarelli, Grace A; Hartmann, Erica M; Cummings, Patrick J; Horner, Robert D; Obom, Kristina M; Shingles, Richard; Pearlman, Rebecca S
2009-01-01
Biology laboratory classes are designed to teach concepts and techniques through experiential learning. Students who have never performed a technique must be guided through the process, which is often difficult to standardize across multiple lab sections. Visual demonstration of laboratory procedures is a key element in teaching pedagogy. The main goals of the study were to create videos explaining and demonstrating a variety of lab techniques that would serve as teaching tools for undergraduate and graduate lab courses and to assess the impact of these videos on student learning. Demonstrations of individual laboratory procedures were videotaped and then edited with iMovie. Narration for the videos was edited with Audacity. Undergraduate students were surveyed anonymously prior to and following screening to assess the impact of the videos on student lab performance by completion of two Participant Perception Indicator surveys. A total of 203 and 171 students completed the pre- and posttesting surveys, respectively. Statistical analyses were performed to compare student perceptions of knowledge of, confidence in, and experience with the lab techniques before and after viewing the videos. Eleven demonstrations were recorded. Chi-square analysis revealed a significant increase in the number of students reporting increased knowledge of, confidence in, and experience with the lab techniques after viewing the videos. Incorporation of instructional videos as prelaboratory exercises has the potential to standardize techniques and to promote successful experimental outcomes.
Impervious surfaces mapping using high resolution satellite imagery
NASA Astrophysics Data System (ADS)
Shirmeen, Tahmina
In recent years, impervious surfaces have emerged not only as an indicator of the degree of urbanization, but also as an indicator of environmental quality. As impervious surface area increases, storm water runoff increases in velocity, quantity, temperature and pollution load. Any of these attributes can contribute to the degradation of natural hydrology and water quality. Various image processing techniques have been used to identify the impervious surfaces, however, most of the existing impervious surface mapping tools used moderate resolution imagery. In this project, the potential of standard image processing techniques to generate impervious surface data for change detection analysis using high-resolution satellite imagery was evaluated. The city of Oxford, MS was selected as the study site for this project. Standard image processing techniques, including Normalized Difference Vegetation Index (NDVI), Principal Component Analysis (PCA), a combination of NDVI and PCA, and image classification algorithms, were used to generate impervious surfaces from multispectral IKONOS and QuickBird imagery acquired in both leaf-on and leaf-off conditions. Accuracy assessments were performed, using truth data generated by manual classification, with Kappa statistics and Zonal statistics to select the most appropriate image processing techniques for impervious surface mapping. The performance of selected image processing techniques was enhanced by incorporating Soil Brightness Index (SBI) and Greenness Index (GI) derived from Tasseled Cap Transformed (TCT) IKONOS and QuickBird imagery. A time series of impervious surfaces for the time frame between 2001 and 2007 was made using the refined image processing techniques to analyze the changes in IS in Oxford. It was found that NDVI and the combined NDVI--PCA methods are the most suitable image processing techniques for mapping impervious surfaces in leaf-off and leaf-on conditions respectively, using high resolution multispectral imagery. It was also found that IS data generated by these techniques can be refined by removing the conflicting dry soil patches using SBI and GI obtained from TCT of the same imagery used for IS data generation. The change detection analysis of the IS time series shows that Oxford experienced the major changes in IS from the year 2001 to 2004 and 2006 to 2007.
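Two of the building blocks mentioned above, the NDVI step and the Kappa accuracy assessment, can be sketched as follows; the band arrays, NDVI threshold, and truth map are illustrative assumptions rather than values from the project.

```python
# Sketch of NDVI-based impervious-surface masking and Kappa accuracy assessment.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)

rng = np.random.default_rng(3)
red = rng.uniform(0.05, 0.4, size=(200, 200))
nir = rng.uniform(0.05, 0.6, size=(200, 200))

# Low NDVI as a crude proxy for impervious cover (threshold is illustrative).
impervious_pred = (ndvi(nir, red) < 0.2).astype(int)

# Compare against manually classified truth pixels with Cohen's kappa.
truth = rng.integers(0, 2, size=(200, 200))      # placeholder truth map
kappa = cohen_kappa_score(truth.ravel(), impervious_pred.ravel())
print(f"kappa = {kappa:.3f}")
```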
An Automated Statistical Technique for Counting Distinct Multiple Sclerosis Lesions.
Dworkin, J D; Linn, K A; Oguz, I; Fleishman, G M; Bakshi, R; Nair, G; Calabresi, P A; Henry, R G; Oh, J; Papinutto, N; Pelletier, D; Rooney, W; Stern, W; Sicotte, N L; Reich, D S; Shinohara, R T
2018-04-01
Lesion load is a common biomarker in multiple sclerosis, yet it has historically shown modest association with clinical outcome. Lesion count, which encapsulates the natural history of lesion formation and is thought to provide complementary information, is difficult to assess in patients with confluent (ie, spatially overlapping) lesions. We introduce a statistical technique for cross-sectionally counting pathologically distinct lesions. MR imaging was used to assess the probability of a lesion at each location. The texture of this map was quantified using a novel technique, and clusters resembling the center of a lesion were counted. Validity compared with a criterion standard count was demonstrated in 60 subjects observed longitudinally, and reliability was determined using 14 scans of a clinically stable subject acquired at 7 sites. The proposed count and the criterion standard count were highly correlated (r = 0.97, P < .001) and not significantly different (t59 = -0.83, P = .41), and the variability of the proposed count across repeat scans was equivalent to that of lesion load. After accounting for lesion load and age, lesion count was negatively associated (t58 = -2.73, P < .01) with the Expanded Disability Status Scale. Average lesion size had a higher association with the Expanded Disability Status Scale (r = 0.35, P < .01) than lesion load (r = 0.10, P = .44) or lesion count (r = -0.12, P = .36) alone. This study introduces a novel technique for counting pathologically distinct lesions using cross-sectional data and demonstrates its ability to recover obscured longitudinal information. The proposed count allows more accurate estimation of lesion size, which correlated more closely with disability scores than either lesion load or lesion count alone. © 2018 by American Journal of Neuroradiology.
Nevada Applied Ecology Group procedures handbook for environmental transuranics
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M.G.; Dunaway, P.B.
The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and others. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications perhaps more complex than the routine standard sampling procedures utilized.
Nevada Applied Ecology Group procedures handbook for environmental transuranics
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, M.G.; Dunaway, P.B.
The activities of the Nevada Applied Ecology Group (NAEG) integrated research studies of environmental plutonium and other transuranics at the Nevada Test Site have required many standardized field and laboratory procedures. These include sampling techniques, collection and preparation, radiochemical and wet chemistry analysis, data bank storage and reporting, and statistical considerations for environmental samples of soil, vegetation, resuspended particles, animals, and other biological material. This document, printed in two volumes, includes most of the Nevada Applied Ecology Group standard procedures, with explanations as to the specific applications involved in the environmental studies. Where there is more than one document concerning a procedure, it has been included to indicate special studies or applications more complex than the routine standard sampling procedures utilized.
Reduction of lithologic-log data to numbers for use in the digital computer
Morgan, C.O.; McNellis, J.M.
1971-01-01
The development of a standardized system for conveniently coding lithologic-log data for use in the digital computer has long been needed. The technique suggested involves a reduction of the original written alphanumeric log to a numeric log by use of computer programs. This numeric log can then be retrieved as a written log, interrogated for pertinent information, or analyzed statistically. © 1971 Plenum Publishing Corporation.
Integrating Formal Methods and Testing 2002
NASA Technical Reports Server (NTRS)
Cukic, Bojan
2002-01-01
Traditionally, qualitative program verification methodologies and program testing are studied in separate research communities. None of them alone is powerful and practical enough to provide sufficient confidence in ultra-high reliability assessment when used exclusively. Significant advances can be made by accounting not only for formal verification and program testing, but also for the impact of many other standard V&V techniques, in a unified software reliability assessment framework. The first year of this research resulted in a statistical framework that, given the assumptions on the success of the qualitative V&V and QA procedures, significantly reduces the amount of testing needed to confidently assess reliability at so-called high and ultra-high levels (10^-4 or higher). The coming years shall address methodologies to realistically estimate the impacts of various V&V techniques on system reliability and include the impact of operational risk in reliability assessment. The objectives are to: A) Combine formal correctness verification, process and product metrics, and other standard qualitative software assurance methods with statistical testing, with the aim of gaining higher confidence in software reliability assessment for high-assurance applications. B) Quantify the impact of these methods on software reliability. C) Demonstrate that accounting for the effectiveness of these methods reduces the number of tests needed to attain a certain confidence level. D) Quantify and justify the reliability estimate for systems developed using various methods.
Territories typification technique with use of statistical models
NASA Astrophysics Data System (ADS)
Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.
2018-05-01
Typification of territories is required for the solution of many problems. The results of geological zoning obtained by various methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. In the course of the research, a probabilistic approach was used. In order to increase the reliability of geological information classification, the authors suggest using the complex multidimensional probabilistic indicator P_K as a criterion of the classification. The second criterion chosen is the multidimensional standard classified indicator Z. These can serve as characteristics of classification in geological-engineering zoning. The above-mentioned indicators P_K and Z correlate well: correlation coefficient values for the entire territory, regardless of structural solidity, equal r = 0.95, so either indicator can be used in geological-engineering zoning. The method suggested has been tested and a schematic map of zoning has been drawn.
Redmond, Tony; O'Leary, Neil; Hutchison, Donna M; Nicolela, Marcelo T; Artes, Paul H; Chauhan, Balwantray C
2013-12-01
A new analysis method called permutation of pointwise linear regression measures the significance of deterioration over time at each visual field location, combines the significance values into an overall statistic, and then determines the likelihood of change in the visual field. Because the outcome is a single P value, individualized to that specific visual field and independent of the scale of the original measurement, the method is well suited for comparing techniques with different stimuli and scales. To test the hypothesis that frequency-doubling matrix perimetry (FDT2) is more sensitive than standard automated perimetry (SAP) in identifying visual field progression in glaucoma. Patients with open-angle glaucoma and healthy controls were examined by FDT2 and SAP, both with the 24-2 test pattern, on the same day at 6-month intervals in a longitudinal prospective study conducted in a hospital-based setting. Only participants with at least 5 examinations were included. Data were analyzed with permutation of pointwise linear regression. Permutation of pointwise linear regression is individualized to each participant, in contrast to current analyses in which the statistical significance is inferred from population-based approaches. Analyses were performed with both total deviation and pattern deviation. Sixty-four patients and 36 controls were included in the study. The median age, SAP mean deviation, and follow-up period were 65 years, -2.6 dB, and 5.4 years, respectively, in patients and 62 years, +0.4 dB, and 5.2 years, respectively, in controls. Using total deviation analyses, statistically significant deterioration was identified in 17% of patients with FDT2, in 34% of patients with SAP, and in 14% of patients with both techniques; in controls these percentages were 8% with FDT2, 31% with SAP, and 8% with both. Using pattern deviation analyses, statistically significant deterioration was identified in 16% of patients with FDT2, in 17% of patients with SAP, and in 3% of patients with both techniques; in controls these values were 3% with FDT2 and none with SAP. No evidence was found that FDT2 is more sensitive than SAP in identifying visual field deterioration. In about one-third of healthy controls, age-related deterioration with SAP reached statistical significance.
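A skeleton of the permutation-of-pointwise-linear-regression idea is sketched below; the combining statistic, permutation count, and data layout are simplifying assumptions rather than the authors' exact implementation.

```python
# Simplified sketch of permutation of pointwise linear regression (PoPLR-like).
# series: (n_visits, n_locations) sensitivities; times: visit times in years.
import numpy as np
from scipy.stats import linregress

def combined_stat(series, times):
    """Combine per-location evidence of deterioration into one statistic."""
    s = 0.0
    for loc in range(series.shape[1]):
        res = linregress(times, series[:, loc])
        # one-sided p for a negative slope (deterioration)
        p_one = res.pvalue / 2 if res.slope < 0 else 1 - res.pvalue / 2
        s += -np.log(max(p_one, 1e-12))          # Fisher-style combination
    return s

def poplr_p_value(series, times, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    observed = combined_stat(series, times)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(times))       # reorder visits
        if combined_stat(series[perm], times) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Illustrative use: 8 visits, 52 locations, mild global decline plus noise.
rng = np.random.default_rng(4)
times = np.arange(8) * 0.5
series = 30 - 0.8 * times[:, None] + rng.normal(0, 1.5, (8, 52))
print(f"P = {poplr_p_value(series, times):.3f}")
```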
BTS statistical standards manual
DOT National Transportation Integrated Search
2005-10-01
The Bureau of Transportation Statistics (BTS), like other federal statistical agencies, establishes professional standards to guide the methods and procedures for the collection, processing, storage, and presentation of statistical data. Standards an...
Rickmann, Annekatrin; Opitz, Natalia; Szurman, Peter; Boden, Karl Thomas; Jung, Sascha; Wahl, Silke; Haus, Arno; Damm, Lara-Jil; Januschowski, Kai
2018-01-01
Descemet membrane endothelial keratoplasty (DMEK) has been improved over the last decade. The aim of this study was to compare the clinical outcome of the recently introduced liquid bubble method with that of the standard manual preparation. This retrospective study evaluated the outcome of 200 patients after DMEK surgery using two different graft preparation techniques. Ninety-six DMEK grafts were prepared by manual dissection and 104 by the novel liquid bubble technique. The mean follow-up time was 13.7 months (SD ± 8, range 6-36 months). Mean best corrected visual acuity (BCVA) increased significantly for all patients from baseline 0.85 logMAR (SD ± 0.5) to 0.26 logMAR (SD ± 0.27) at the final follow-up (Wilcoxon, p = 0.001). Subgroup analyses of BCVA at the final follow-up between manual dissection and liquid bubble preparation showed no statistically significant difference (Mann-Whitney U test, p = 0.64). The mean central corneal thickness was not statistically different (manual dissection: 539 µm, SD ± 68 µm; liquid bubble technique: 534 µm, SD ± 52 µm) between the two groups (Mann-Whitney U test, p = 0.64). At the final follow-up, the mean endothelial cell count of donor grafts did not differ significantly, with 1761 cells/mm² (-30.7%, SD ± 352) for manual dissection compared to 1749 cells/mm² (-29.9%, SD ± 501) for the liquid bubble technique (Mann-Whitney U test, p = 0.73). The re-DMEK rate was comparable, with 8 cases (8.3%) for manual dissection and 7 cases (6.7%) for liquid bubble dissection (p = 0.69, chi-square test). Regarding the clinical outcome, we did not find a statistically significant difference between manual dissection and liquid bubble graft preparation. Both preparation techniques lead to an equivalent clinical outcome after DMEK surgery.
NASA Technical Reports Server (NTRS)
Yijun, Huang; Guochen, Yu; Hong, Sun
1996-01-01
It is well known that the quantum Yang-Baxter equations (QYBE) play an important role in various areas of theoretical and mathematical physics, such as completely integrable systems in (1 + 1) dimensions, exactly solvable models in statistical mechanics, the quantum inverse scattering method, and conformal field theories in 2 dimensions. Recently, remarkable progress has been made in constructing solutions of the QYBE associated with representations of Lie algebras. It has been shown that in some cases new solutions exist in addition to the standard ones, whereas in other cases no non-standard solutions exist. In this paper, by employing weight conservation and diagrammatic techniques, we show that the solutions associated with the 10-D representations of SU(4) are the standard ones alone.
On prognostic models, artificial intelligence and censored observations.
Anand, S S; Hamilton, P W; Hughes, J G; Bell, D A
2001-03-01
The development of prognostic models for assisting medical practitioners with decision making is not a trivial task. Models need to possess a number of desirable characteristics, and few, if any, current modelling approaches based on statistics or artificial intelligence can produce models that display all these characteristics. The inability of modelling techniques to provide truly useful models has meant that interest in these models has remained largely academic. This in turn has resulted in only a very small percentage of the models that have been developed being deployed in practice. On the other hand, new modelling paradigms are being proposed continuously within the machine learning and statistical communities, and claims, often based on inadequate evaluation, are being made about their superiority over traditional modelling methods. We believe that for new modelling approaches to deliver true net benefits over traditional techniques, an evaluation-centric approach to their development is essential. In this paper we present such an evaluation-centric approach to developing extensions to the basic k-nearest neighbour (k-NN) paradigm. We use standard statistical techniques to enhance the distance metric used and a framework based on evidence theory to obtain a prediction for the target example from the outcomes of the retrieved exemplars. We refer to this new k-NN algorithm as Censored k-NN (Ck-NN). This reflects the enhancements made to k-NN that are aimed at providing a means for handling censored observations within k-NN.
Bayesian inference for the spatio-temporal invasion of alien species.
Cook, Alex; Marion, Glenn; Butler, Adam; Gibson, Gavin
2007-08-01
In this paper we develop a Bayesian approach to parameter estimation in a stochastic spatio-temporal model of the spread of invasive species across a landscape. To date, statistical techniques, such as logistic and autologistic regression, have outstripped stochastic spatio-temporal models in their ability to handle large numbers of covariates. Here we seek to address this problem by making use of a range of covariates describing the bio-geographical features of the landscape. Relative to regression techniques, stochastic spatio-temporal models are more transparent in their representation of biological processes. They also explicitly model temporal change, and therefore do not require the assumption that the species' distribution (or other spatial pattern) has already reached equilibrium as is often the case with standard statistical approaches. In order to illustrate the use of such techniques we apply them to the analysis of data detailing the spread of an invasive plant, Heracleum mantegazzianum, across Britain in the 20th Century using geo-referenced covariate information describing local temperature, elevation and habitat type. The use of Markov chain Monte Carlo sampling within a Bayesian framework facilitates statistical assessments of differences in the suitability of different habitat classes for H. mantegazzianum, and enables predictions of future spread to account for parametric uncertainty and system variability. Our results show that ignoring such covariate information may lead to biased estimates of key processes and implausible predictions of future distributions.
High-Accuracy Surface Figure Measurement of Silicon Mirrors at 80 K
NASA Technical Reports Server (NTRS)
Blake, Peter; Mink, Ronald G.; Chambers, John; Davila, Pamela; Robinson, F. David
2004-01-01
This report describes the equipment, experimental methods, and first results at a new facility for interferometric measurement of cryogenically-cooled spherical mirrors at the Goddard Space Flight Center Optics Branch. The procedure, using standard phase-shifting interferometry, has a standard combined uncertainty of 3.6 nm rms in its representation of the two-dimensional surface figure error at 80 K, and an uncertainty of plus or minus 1 nm in the rms statistic itself. The first mirror tested was a concave spherical silicon foam-core mirror, with a clear aperture of 120 mm. The optic surface was measured at room temperature using standard absolute techniques, and then the change in surface figure error from room temperature to 80 K was measured. The mirror was cooled within a cryostat, and its surface figure error was measured through a fused-silica window. The facility and techniques will be used to measure the surface figure error at 20 K of prototype lightweight silicon carbide and Cesic mirrors developed by Galileo Avionica (Italy) for the European Space Agency (ESA).
Growth rate measurement in free jet experiments
NASA Astrophysics Data System (ADS)
Charpentier, Jean-Baptiste; Renoult, Marie-Charlotte; Crumeyrolle, Olivier; Mutabazi, Innocent
2017-07-01
An experimental method was developed to measure the growth rate of the capillary instability for free liquid jets. The method uses a standard shadow-graph imaging technique to visualize a jet, produced by extruding a liquid through a circular orifice, and a statistical analysis of the entire jet. The analysis relies on the computation of the standard deviation of a set of jet profiles, obtained in the same experimental conditions. The principle and robustness of the method are illustrated with a set of emulated jet profiles. The method is also applied to free falling jet experiments conducted for various Weber numbers and two low-viscosity solutions: a Newtonian and a viscoelastic one. Growth rate measurements are found in good agreement with linear stability theory in the Rayleigh's regime, as expected from previous studies. In addition, the standard deviation curve is used to obtain an indirect measurement of the initial perturbation amplitude and to identify beads on a string structure on the jet. This last result serves to demonstrate the capability of the present technique to explore in the future the dynamics of viscoelastic liquid jets.
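The core computation, a standard deviation taken over repeated jet profiles followed by a fit of its exponential growth, can be sketched as follows; the emulated profiles, jet velocity, and fit window are assumptions for illustration.

```python
# Sketch: capillary growth rate from the standard deviation of repeated jet profiles.
import numpy as np

rng = np.random.default_rng(5)
z = np.linspace(0.0, 0.1, 500)          # axial position (m)
omega_true = 80.0                        # growth rate (1/s), illustrative
u_jet = 1.0                              # jet velocity (m/s) -> time = z / u_jet

# Emulate many profiles: radius perturbation grows downstream, random phase per run.
profiles = []
for _ in range(200):
    phase = rng.uniform(0, 2 * np.pi)
    eps = 1e-6 * np.exp(omega_true * z / u_jet) * np.sin(500 * z + phase)
    profiles.append(1e-3 + eps)          # nominal radius 1 mm plus perturbation
profiles = np.array(profiles)

# Standard deviation across realizations at each axial position.
sigma = profiles.std(axis=0)

# Exponential growth -> linear fit of log(sigma) against convective time z/u.
fit_mask = sigma > 2e-6                  # keep the region above the noise floor
slope, intercept = np.polyfit(z[fit_mask] / u_jet, np.log(sigma[fit_mask]), 1)
print(f"measured growth rate ~ {slope:.1f} 1/s (true {omega_true})")
```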
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity.
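A heavily simplified single-changepoint sketch conveys the flavor of the Bayesian approach: score every candidate onset index by a two-segment Gaussian likelihood and normalize to a posterior. The plug-in variance estimates and flat prior below are simplifications of the published algorithms, which use proper priors and posterior-probability thresholds (e.g. the 60-90% range mentioned above).

```python
# Simplified single-changepoint sketch for EMG onset (plug-in Gaussian likelihoods).
import numpy as np

def changepoint_posterior(x, min_seg=20):
    """Posterior over onset index, flat prior, per-segment Gaussian plug-in fit."""
    n = len(x)
    log_lik = np.full(n, -np.inf)
    for k in range(min_seg, n - min_seg):
        pre, post = x[:k], x[k:]
        ll = 0.0
        for seg in (pre, post):
            var = seg.var() + 1e-12
            # Gaussian log-likelihood at the segment's own MLE mean and variance.
            ll += -0.5 * len(seg) * (np.log(2 * np.pi * var) + 1.0)
        log_lik[k] = ll
    log_post = log_lik - log_lik.max()
    post = np.exp(log_post)
    return post / post.sum()

# Illustrative rectified-EMG-like series: quiet baseline, burst starting at 600.
rng = np.random.default_rng(6)
signal = np.concatenate([rng.normal(0, 0.05, 600), rng.normal(0, 0.5, 400)])
posterior = changepoint_posterior(np.abs(signal))
print("estimated onset sample:", int(posterior.argmax()))
```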
Electromigration kinetics and critical current of Pb-free interconnects
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Minhua; Rosenberg, Robert
2014-04-07
Electromigration kinetics of Pb-free solder bump interconnects have been studied using a single bump parameter sweep technique. By removing bump-to-bump variations in structure, texture, and composition, the single bump sweep technique has provided both activation energy and power exponents that reflect atomic migration and interface reactions with fewer samples, shorter stress time, and better statistics than standard failure testing procedures. Contact metallurgies based on Cu and Ni have been studied. Critical current, which corresponds to the Blech limit, was found to exist in the Ni metallurgy, but not in the Cu metallurgy. A temperature dependence of critical current was also observed.
Optical measurement of isolated canine lung filtration coefficients after alloxan infusion.
Klaesner, J W; Pou, N A; Parker, R E; Finney, C; Roselli, R J
1998-04-01
In this study, lung filtration coefficient (Kfc) was measured in eight isolated canine lung preparations by using three methods: standard gravimetric (Std), blood-corrected gravimetric (BC), and optical. The lungs were held in zone III conditions and were subjected to an average venous pressure increase of 8.79 +/- 0.93 (mean +/- SD) cmH2O. The permeability of the lungs was increased with an infusion of alloxan (75 mg/kg). The resulting Kfc values (in milliliters · min^-1 · cmH2O^-1 · 100 g dry lung weight^-1) measured by using Std and BC gravimetric techniques before vs. after alloxan infusion were statistically different: Std, 0.527 +/- 0.290 vs. 1.966 +/- 0.283; BC, 0.313 +/- 0.290 vs. 1.384 +/- 0.290. However, the optical technique did not show any statistical difference between pre- and postinjury with alloxan, 0.280 +/- 0.305 vs. 0.483 +/- 0.297, respectively. The alloxan injury, quantified by using multiple-indicator techniques, showed an increase in permeability and a corresponding decrease in reflection coefficient for albumin (sigmaf). Because the optical method measures the product of Kfc and sigmaf, this study shows that albumin should not be used as an intravascular optical filtration marker when permeability is elevated. However, the optical technique, along with another means of measuring Kfc (such as BC), can be used to calculate the sigmaf of a tracer (in this study, sigmaf of 0.894 at baseline and 0.348 after injury). Another important finding of this study was that the ratio of baseline-to-injury Kfc values was not statistically different for Std and BC techniques, indicating that the percent contribution of slow blood-volume increases does not change because of injury.
Berger, Cezar Augusto Sarraf; Freitas, Renato da Silva; Malafaia, Osvaldo; Pinto, José Simão de Paula; Macedo Filho, Evaldo Dacheux; Mocellin, Marcos; Fagundes, Marina Serrato Coelho
2014-01-01
Introduction The knowledge and study of surgical techniques and anthropometric measurements of the nose make possible a qualitative and quantitative analysis of surgical results. Objective Study the main technique used in rhinoplasty on Caucasian noses and compare preoperative and postoperative anthropometric measurements of the nose. Methods A prospective study with 170 patients was performed at a private hospital. Data were collected using the Electronic System Integrated of Protocols software (Sistema Integrado de Protocolos Eletrônicos, SINPE©). The surgical techniques used in the nasal dorsum and tip were evaluated. Preoperative and 12-month follow-up photos as well as the measurements compared with the ideal aesthetic standard of a Caucasian nose were analyzed objectively. Student t test and standard deviation test were applied. Results There was a predominance of endonasal access (94.4%). The most common dorsum technique was hump removal (33.33%), and the predominance of sutures (24.76%) was observed on the nasal tip, with the lateral intercrural the most frequent (32.39%). Comparison between preoperative and postoperative photos found statistically significant alterations on the anthropometric measurements of the noses. Conclusion The main surgical techniques on Caucasian noses were evaluated, and a great variety was found. The evaluation of anthropometric measurements of the nose proved the efficiency of the performed procedures. PMID:25992149
NASA Astrophysics Data System (ADS)
Profe, Jörn; Ohlendorf, Christian
2017-04-01
XRF scanning has been the state-of-the-art technique for geochemical analyses in marine and lacustrine sedimentology for more than a decade. However, little attention has been paid to data precision and technical limitations so far. Using homogenized, dried and powdered samples (certified geochemical reference standards and samples from a lithologically contrasting loess-paleosol sequence) minimizes many adverse effects that influence the XRF signal when analyzing wet sediment cores. This allows the investigation of data precision under ideal conditions and at the same time documents a new application of the XRF core-scanner technology. Reliable interpretation of XRF results requires evaluation of the data precision of single elements as a function of X-ray tube, measurement time, sample compaction and quality of peak fitting. Data precision was determined from ten-fold measurement of each sample. Data precision of XRF measurements theoretically obeys Poisson statistics. Fe and Ca exhibit the largest deviations from Poisson statistics. The same elements show the lowest mean relative standard deviations, in the range from 0.5% to 1%. This represents the technical limit of data precision achievable by the installed detector. Measurement times ≥ 30 s yield mean relative standard deviations below 4% for most elements. The quality of peak fitting is only relevant for elements with overlapping fluorescence lines, such as Ba, Ti and Mn, or for elements with low concentrations, such as Y. Differences in sample compaction are marginal and do not change the mean relative standard deviation considerably. Data precision is in the range reported for geochemical reference standards measured by conventional techniques. Therefore, XRF scanning of discrete samples provides a cost- and time-efficient alternative to conventional multi-element analyses. As the best trade-off between economical operation and data quality, we recommend a measurement time of 30 s, resulting in a total scan time of 30 minutes for 30 samples.
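The Poisson benchmark invoked above is simply that the relative standard deviation of a fluorescence count N scales as 1/sqrt(N), so precision improves with count time; a short check with illustrative counts:

```python
# Poisson counting statistics: expected relative standard deviation vs. counts.
import numpy as np

counts = np.array([1e3, 1e4, 1e5, 1e6])          # illustrative peak counts
rel_sd_percent = 100.0 / np.sqrt(counts)          # 1/sqrt(N), in percent
for n, r in zip(counts, rel_sd_percent):
    print(f"N = {n:>9.0f}  ->  expected RSD = {r:.2f} %")
```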
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng Guoyan
2010-04-15
Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark-based initialization. Depending on the surface-based matching technique, the reconstruction errors were slightly different. When a surface-based iterative affine registration was used, an average reconstruction error of 1.6 mm was observed. This error increased to 1.9 mm when a surface-based iterative scaled rigid registration was used. Conclusions: It is feasible to reconstruct a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph using the present approach. The unknown scale of the reconstructed model can be estimated by performing a surface-based 3D/3D matching.
Zahed Zahedani, SM; Oshagh, M; Momeni Danaei, Sh; Roeinpeikar, SMM
2013-01-01
Statement of Problem: One of the major outcomes of orthodontic treatment is the apical root resorption of teeth moved during the treatment. Identifying the possible risk factors is necessary for every orthodontist. Purpose: The aim of this study was to compare the rate of apical root resorption after fixed orthodontic treatment with the standard edgewise and straight wire (MBT) methods, and also to evaluate other factors affecting the rate of root resorption in orthodontic treatments. Materials and Method: In this study, parallel periapical radiographs of 127 patients, imaging a total of 737 individual teeth, were collected. A total of 76 patients were treated by the standard edgewise method and 51 patients by the straight wire method. The periapical radiographs were scanned and then the percentage of root resorption was calculated with Photoshop software. The data were analyzed by the Paired-Samples t-test and the Generalized Linear Model using SPSS 15.0. Results: In patients treated with the straight wire method (MBT), mean root resorption was 18.26% compared to 14.82% in patients treated with the standard edgewise technique (p < .05). Male patients had a higher rate of root resorption, which was statistically significant (p < .05). Age at onset of treatment, duration of treatment, type of dental occlusion, premolar extractions and the use of intermaxillary elastics had no significant effect on root resorption in this study. Conclusion: Having more root resorption with the straight wire method and less with the standard edgewise technique can be attributed to more root movement in the pre-adjusted MBT technique due to the brackets employed in this method. PMID:24724131
Zahed Zahedani, Sm; Oshagh, M; Momeni Danaei, Sh; Roeinpeikar, Smm
2013-09-01
One of the major outcomes of orthodontic treatment is the apical root resorption of teeth moved during the treatment. Identifying the possible risk factors is necessary for every orthodontist. The aim of this study was to compare the rate of apical root resorption after fixed orthodontic treatment with the standard edgewise and straight wire (MBT) methods, and also to evaluate other factors affecting the rate of root resorption in orthodontic treatments. In this study, parallel periapical radiographs of 127 patients, imaging a total of 737 individual teeth, were collected. A total of 76 patients were treated by the standard edgewise method and 51 patients by the straight wire method. The periapical radiographs were scanned and then the percentage of root resorption was calculated with Photoshop software. The data were analyzed by the Paired-Samples t-test and the Generalized Linear Model using SPSS 15.0. In patients treated with the straight wire method (MBT), mean root resorption was 18.26% compared to 14.82% in patients treated with the standard edgewise technique (p < .05). Male patients had a higher rate of root resorption, which was statistically significant (p < .05). Age at onset of treatment, duration of treatment, type of dental occlusion, premolar extractions and the use of intermaxillary elastics had no significant effect on root resorption in this study. Having more root resorption with the straight wire method and less with the standard edgewise technique can be attributed to more root movement in the pre-adjusted MBT technique due to the brackets employed in this method.
Chemokine Prostate Cancer Biomarkers — EDRN Public Portal
STUDY DESIGN 1. The need for pre-validation studies. Preliminary data from our laboratory demonstrate a potential utility for CXCL5 and CXCL12 as biomarkers to distinguish between patients at high risk versus low risk for harboring prostate malignancies. However, this pilot and feasibility study utilized a very small sample size of 51 patients, which limited the ability of this study to adequately assess certain technical aspects of the ELISA technique and statistical aspects of the markers. We therefore propose studies designed to assess the robustness (Specific Aim 1) and predictive value (Specific Aim 2) of these markers in a larger study population. 2. ELISA Assays. Serum, plasma, or urine chemokine levels are assessed using 50 µl of frozen specimen per sandwich ELISA in duplicate, using the appropriate commercially available capture antibodies, detection antibodies, and standard ELISA reagents (R&D Systems), as we have described previously (15, 17, 18). Measures within each patient group are regarded as biological replicates and permit statistical comparisons between groups. For all ELISAs, a standard curve is generated with the provided standards and used to calculate the quantity of chemokine in the sample tested. These assays provide measures of protein concentration with excellent reproducibility, with replicate measures characterized by standard deviations from the mean on the order of <3%.
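The standard-curve step is typically a four-parameter logistic (4PL) fit to the kit standards, inverted to convert sample optical densities to concentrations. A hedged sketch follows; the standard concentrations, optical densities, and starting parameters are made-up illustrations, not values from this protocol.

```python
# Sketch: four-parameter logistic (4PL) ELISA standard curve and back-calculation.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """4PL model of optical density as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ec50) ** (-hill))

# Kit standards (pg/mL) and duplicate-averaged ODs -- illustrative numbers only.
std_conc = np.array([15.6, 31.2, 62.5, 125, 250, 500, 1000, 2000], dtype=float)
std_od = np.array([0.08, 0.14, 0.25, 0.45, 0.78, 1.25, 1.80, 2.30])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 2.5, 250.0, 1.0], maxfev=10000)
bottom, top, ec50, hill = params

def od_to_conc(od):
    """Invert the fitted 4PL curve to estimate concentration from OD."""
    ratio = (top - bottom) / (od - bottom) - 1.0
    return ec50 * ratio ** (-1.0 / hill)

print("estimated pg/mL for OD 0.60:", round(float(od_to_conc(0.60)), 1))
```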
Spatial Statistics of Large Astronomical Databases: An Algorithmic Approach
NASA Technical Reports Server (NTRS)
Szapudi, Istvan
2004-01-01
In this AISRP, we have demonstrated that the correlation function (i) can be calculated for MAP in minutes (about 45 minutes for Planck) on a modest 500 MHz workstation, and (ii) that the corresponding method, although theoretically suboptimal, produces nearly optimal results for realistic noise and cut sky. This trillion-fold improvement in speed over the standard maximum likelihood technique opens up tremendous new possibilities, which will be pursued in the follow-up.
Toth, Thomas L; Lee, Malinda S; Bendikson, Kristin A; Reindollar, Richard H
2017-04-01
To better understand practice patterns and opportunities for standardization of ET. Cross-sectional survey. Not applicable. Not applicable. An anonymous 82-question survey was emailed to the medical directors of 286 Society for Assisted Reproductive Technology member IVF practices. A follow-up survey composed of three questions specific to ET technique was emailed to the same medical directors. Descriptive statistics of the results were compiled. The survey assessed policies, protocols, restrictions, and specifics pertinent to the technique of ET. There were 117 (41%) responses; 32% practice in academic settings and 68% in private practice. Responders were experienced clinicians, half of whom had performed <10 procedures during training. Ninety-eight percent of practices allowed all practitioners to perform ET; half did not follow a standardized ET technique. Multiple steps in the ET process were identified as "highly conserved;" others demonstrated discordance. ET technique is divided among [1] trial transfer followed immediately with ET (40%); [2] afterload transfer (30%); and [3] direct transfer without prior trial or afterload (27%). Embryos are discharged in the upper (66%) and middle thirds (29%) of the endometrial cavity and not closer than 1-1.5 cm from fundus (87%). Details of each step were reported and allowed the development of a "common" practice ET procedure. ET training and practices vary widely. Improved training and standardization based on outcomes data and best practices are warranted. A common practice procedure is suggested for validation by a systematic literature review. Copyright © 2017 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Iterative metal artifact reduction: evaluation and optimization of technique.
Subhas, Naveen; Primak, Andrew N; Obuchowski, Nancy A; Gupta, Amit; Polster, Joshua M; Krauss, Andreas; Iannotti, Joseph P
2014-12-01
Iterative metal artifact reduction (IMAR) is a sinogram inpainting technique that incorporates high-frequency data from standard weighted filtered back projection (WFBP) reconstructions to reduce metal artifact on computed tomography (CT). This study was designed to compare the image quality of IMAR and WFBP in total shoulder arthroplasties (TSA); determine the optimal amount of WFBP high-frequency data needed for IMAR; and compare image quality of the standard 3D technique with that of a faster 2D technique. Eight patients with nine TSA underwent CT with standardized parameters: 140 kVp, 300 mAs, 0.6 mm collimation and slice thickness, and B30 kernel. WFBP, three 3D IMAR algorithms with different amounts of WFBP high-frequency data (IMARlo, lowest; IMARmod, moderate; IMARhi, highest), and one 2D IMAR algorithm were reconstructed. Differences in attenuation near hardware and away from hardware were measured and compared using repeated measures ANOVA. Five readers independently graded image quality; scores were compared using Friedman's test. Attenuation differences were smaller with all 3D IMAR techniques than with WFBP (p < 0.0063). With increasing high-frequency data, the attenuation difference increased slightly (differences not statistically significant). All readers ranked IMARmod and IMARhi more favorably than WFBP (p < 0.05), with IMARmod ranked highest for most structures. The attenuation difference was slightly higher with 2D than with 3D IMAR, with no significant reader preference for 3D over 2D. IMAR significantly decreases metal artifact compared to WFBP both objectively and subjectively in TSA. The incorporation of a moderate amount of WFBP high-frequency data and use of a 2D reconstruction technique optimize image quality and allow for relatively short reconstruction times.
Prasad, Rahul; Al-Keraif, Abdulaziz Abdullah; Kathuria, Nidhi; Gandhi, P V; Bhide, S V
2014-02-01
The purpose of this study was to determine whether the ringless casting and accelerated wax-elimination techniques can be combined to offer a cost-effective, clinically acceptable, and time-saving alternative for fabricating single unit castings in fixed prosthodontics. Sixty standardized wax copings were fabricated on a type IV stone replica of a stainless steel die. The wax patterns were divided into four groups. The first group was cast using the ringless investment technique and conventional wax-elimination method; the second group was cast using the ringless investment technique and accelerated wax-elimination method; the third group was cast using the conventional metal ring investment technique and conventional wax-elimination method; the fourth group was cast using the metal ring investment technique and accelerated wax-elimination method. The vertical marginal gap was measured at four sites per specimen, using a digital optical microscope at 100× magnification. The results were analyzed using two-way ANOVA to determine statistical significance. The vertical marginal gaps of castings fabricated using the ringless technique (76.98 ± 7.59 μm) were significantly less (p < 0.05) than those castings fabricated using the conventional metal ring technique (138.44 ± 28.59 μm); however, the vertical marginal gaps of the conventional (102.63 ± 36.12 μm) and accelerated wax-elimination (112.79 ± 38.34 μm) castings were not statistically significant (p > 0.05). The ringless investment technique can produce castings with higher accuracy and can be favorably combined with the accelerated wax-elimination method as a vital alternative to the time-consuming conventional technique of casting restorations in fixed prosthodontics. © 2013 by the American College of Prosthodontists.
Multivariate model of female black bear habitat use for a Geographic Information System
Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.
1993-01-01
Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
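A minimal version of the mapping step, the squared Mahalanobis distance of each grid cell's habitat vector from the mean vector at the radio locations, might look like the following; the variable set and synthetic values are stand-ins, not the study's data.

```python
# Sketch: Mahalanobis-distance habitat map from animal-location habitat variables.
import numpy as np

rng = np.random.default_rng(7)

# Habitat variables at radio locations: e.g. elevation, slope, road distance,
# stream distance, diversity score (synthetic stand-ins).
use = rng.normal([600, 15, 900, 300, 2.5], [80, 5, 300, 120, 0.6], size=(250, 5))

mean_vec = use.mean(axis=0)
cov = np.cov(use, rowvar=False)
cov_inv = np.linalg.inv(cov)

# Same variables for every cell of a 60 m grid (flattened to rows here).
grid = rng.normal([650, 20, 700, 350, 2.0], [150, 10, 500, 200, 1.0], size=(10000, 5))

diff = grid - mean_vec
d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis distance
print("cells with low D^2 rank as the most bear-like habitat")
print("min/median/max D^2:", d2.min().round(1), np.median(d2).round(1), d2.max().round(1))
```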
The Global Signature of Ocean Wave Spectra
NASA Astrophysics Data System (ADS)
Portilla-Yandún, Jesús
2018-01-01
A global atlas of ocean wave spectra is developed and presented. The development is based on a new technique for deriving wave spectral statistics, which is applied to the extensive ERA-Interim database from European Centre of Medium-Range Weather Forecasts. Spectral statistics is based on the idea of long-term wave systems, which are unique and distinct at every geographical point. The identification of those wave systems allows their separation from the overall spectrum using the partition technique. Their further characterization is made using standard integrated parameters, which turn out much more meaningful when applied to the individual components than to the total spectrum. The parameters developed include the density distribution of spectral partitions, which is the main descriptor; the identified wave systems; the individual distribution of the characteristic frequencies, directions, wave height, wave age, seasonal variability of wind and waves; return periods derived from extreme value analysis; and crossing-sea probabilities. This information is made available in web format for public use at http://www.modemat.epn.edu.ec/#/nereo. It is found that wave spectral statistics offers the possibility to synthesize data while providing a direct and comprehensive view of the local and regional wave conditions.
TOMS and SBUV Data: Comparison to 3D Chemical-Transport Model Results
NASA Technical Reports Server (NTRS)
Stolarski, Richard S.; Douglass, Anne R.; Steenrod, Steve; Frith, Stacey
2003-01-01
We have updated our merged ozone data (MOD) set using the TOMS data from the new version 8 algorithm. We then analyzed these data for contributions from solar cycle, volcanoes, QBO, and halogens using a standard statistical time series model. We have recently completed a hindcast run of our 3D chemical-transport model for the same years. This model uses off-line winds from the finite-volume GCM, a full stratospheric photochemistry package, and time-varying forcing due to halogens, solar uv, and volcanic aerosols. We will report on a parallel analysis of these model results using the same statistical time series technique as used for the MOD data.
Hernekamp, J F; Reinecke, A; Neubrech, F; Bickert, B; Kneser, U; Kremer, T
2016-04-01
Four-corner fusion is a standard procedure for advanced carpal collapse. Several operative techniques and numerous implants for osseous fixation have been described. Recently, a specially designed locking plate (Aptus©, Medartis, Basel, Switzerland) was introduced. The purpose of this study was to compare functional results after four-corner fusion with osseous fixation using K-wires (standard of care, SOC) versus locking plate fixation. Twenty-one patients who underwent four-corner fusion in our institution between 2008 and 2013 were included in a retrospective analysis. In 11 patients, osseous fixation was performed using locking plates, whereas ten patients underwent bone fixation with conventional K-wires. Outcome parameters were functional outcome, osseous consolidation, patient satisfaction (DASH and Krimmer scores), pain, perioperative morbidity, and the time until patients returned to daily work. Patients were divided into two groups and paired t-tests were performed for statistical analysis. No implant-related complications were observed. Osseous consolidation was achieved in all cases. Differences between groups were not significant regarding active range of motion (AROM), pain, and function. Overall patient satisfaction was acceptable in all cases; differences in the DASH questionnaire and the Krimmer questionnaire were not significant. One patient of the plate group required conversion to total wrist arthrodesis, without implant-related complications. Both techniques for four-corner fusion have similar healing rates. Using the more expensive locking implant avoids a second operation for K-wire removal, but no statistically significant differences in functional outcome or patient satisfaction were detected when compared with SOC.
Lee, Ji Won; Lee, Geewon; Lee, Nam Kyung; Moon, Jin Il; Ju, Yun Hye; Suh, Young Ju; Jeong, Yeon Joo
2016-01-01
The aim of the study was to assess the effectiveness of the adaptive statistical iterative reconstruction (ASIR) for dual-energy computed tomography pulmonary angiography (DE-CTPA) with a reduced iodine load. One hundred forty patients referred for chest CT were randomly divided into a DE-CTPA group with a reduced iodine load or a standard CTPA group. Quantitative and qualitative image qualities of virtual monochromatic spectral (VMS) images with filtered back projection (VMS-FBP) and those with 50% ASIR (VMS-ASIR) in the DE-CTPA group were compared. Image qualities of VMS-ASIR images in the DE-CTPA group and ASIR images in the standard CTPA group were also compared. All quantitative and qualitative indices, except attenuation value of pulmonary artery in the VMS-ASIR subgroup, were superior to those in the VMS-FBP subgroup (all P < 0.001). Noise and signal-to-noise ratio of VMS-ASIR images were superior to those of ASIR images in the standard CTPA group (P < 0.001 and P = 0.007, respectively). Regarding qualitative indices, noise was significantly lower in VMS-ASIR images of the DE-CTPA group than in ASIR images of the standard CTPA group (P = 0.001). The ASIR technique tends to improve the image quality of VMS imaging. Dual-energy computed tomography pulmonary angiography with ASIR can reduce contrast medium volume and produce images of comparable quality with those of standard CTPA.
NASA Astrophysics Data System (ADS)
Vanvyve, E.; Magontier, P.; Vandenberghe, F. C.; Delle Monache, L.; Dickinson, K.
2012-12-01
Wind energy is amongst the fastest growing sources of renewable energy in the U.S. and could supply up to 20% of U.S. power production by 2030. An accurate and reliable wind resource assessment for prospective wind farm sites is a challenging task, yet is crucial for evaluating the long-term profitability and feasibility of a potential development. We have developed an accurate and computationally efficient wind resource assessment technique for prospective wind farm sites, which incorporates innovative statistical techniques and the new NASA Earth science dataset MERRA. This technique produces a wind resource estimate that is more accurate than that obtained with the wind energy industry's standard technique, while providing a reliable quantification of its uncertainty. The focus now is on evaluating the socio-economic value of this new technique relative to the industry's standard technique. Would it yield lower financing costs? Could it result in lower electricity prices? Are there further down-the-line positive consequences, e.g., job creation, time savings, or reduced greenhouse gas emissions? Ultimately, we expect our results will inform efforts to refine and disseminate the new technique to support the development of the U.S. renewable energy infrastructure. In order to address the above questions, we are carrying out a cost-benefit analysis based on the net present worth of the technique. We will describe this approach, including the cash-flow process of wind farm financing and how the wind resource assessment factors in, and will present current results for various hypothetical candidate wind farm sites.
Pullout strength of standard vs. cement-augmented rotator cuff repair anchors in cadaveric bone.
Aziz, Keith T; Shi, Brendan Y; Okafor, Louis C; Smalley, Jeremy; Belkoff, Stephen M; Srikumaran, Uma
2018-05-01
We evaluate a novel method of rotator cuff repair that uses arthroscopic equipment to inject bone cement into placed suture anchors. A cadaver model was used to assess the pullout strength of this technique versus anchors without augmentation. Six fresh-frozen matched pairs of upper extremities were screened to exclude those with prior operative procedures, fractures, or neoplasms. One side from each pair was randomized to undergo standard anchor fixation with the contralateral side to undergo anchor fixation augmented with bone cement. After anchor fixation, specimens were mounted on a servohydraulic testing system and suture anchors were pulled at 90° to the insertion to simulate the anatomic pull of the rotator cuff. Sutures were pulled at 1 mm/s until failure. The mean pullout strength was 540 N (95% confidence interval, 389 to 690 N) for augmented anchors and 202 N (95% confidence interval, 100 to 305 N) for standard anchors. The difference in pullout strength was statistically significant (P < 0.05). This study shows superior pullout strength of a novel augmented rotator cuff anchor technique. The described technique, which is achieved by extruding polymethylmethacrylate cement through a cannulated in situ suture anchor with fenestrations, significantly increased the ultimate failure load in cadaveric human humeri. This novel augmented fixation technique was simple and can be implemented with existing instrumentation. In osteoporotic bone, it may substantially reduce the rate of anchor failure. Copyright © 2018 Elsevier Ltd. All rights reserved.
High-Throughput Nanoindentation for Statistical and Spatial Property Determination
NASA Astrophysics Data System (ADS)
Hintsala, Eric D.; Hangen, Ude; Stauffer, Douglas D.
2018-04-01
Standard nanoindentation tests are "high throughput" compared to nearly all other mechanical tests, such as tension or compression. However, the typical rates of tens of tests per hour can be significantly improved. These higher testing rates enable otherwise impractical studies requiring several thousands of indents, such as high-resolution property mapping and detailed statistical studies. However, care must be taken to avoid systematic errors in the measurement, including the choice of indentation depth/spacing to avoid overlap of plastic zones, pileup, and the influence of neighboring microstructural features in the material being tested. Furthermore, since fast loading rates are required, the strain rate sensitivity must also be considered. A review of these effects is given, with the emphasis placed on making complementary standard nanoindentation measurements to address these issues. Experimental applications of the technique, including mapping of welds, microstructures, and composites with varying length scales, along with studying the effect of surface roughness on nominally homogeneous specimens, will be presented.
NASA Astrophysics Data System (ADS)
Jacobson, Gloria; Rella, Chris; Farinas, Alejandro
2014-05-01
Technological advancement of instrumentation in atmospheric and other geoscience disciplines over the past decade has led to a shift from discrete sample analysis to continuous, in-situ monitoring. Standard error analysis used for discrete measurements is not sufficient to assess and compare the error contribution of noise and drift from continuous-measurement instruments, and a different statistical analysis approach should be applied. The Allan standard deviation analysis technique developed for atomic clock stability assessment by David W. Allan [1] can be effectively and gainfully applied to continuous measurement instruments. As an example, P. Werle et al. have applied these techniques to look at signal averaging for atmospheric monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS) [2]. This presentation will build on, and translate, prior foundational publications to provide contextual definitions and guidelines for the practical application of this analysis technique to continuous scientific measurements. The specific example of a Picarro G2401 Cavity Ringdown Spectroscopy (CRDS) analyzer used for continuous atmospheric monitoring of CO2, CH4 and CO will be used to define the basic features of the Allan deviation, assess factors affecting the analysis, and explore the time-series to Allan deviation plot translation for different types of instrument noise (white noise, linear drift, and interpolated data). In addition, the application of the Allan deviation to optimize and predict the performance of different calibration schemes will be presented. Even though this presentation will use the specific example of the Picarro G2401 CRDS analyzer for atmospheric monitoring, the objective is to present the information such that it can be successfully applied to other instrument sets and disciplines. [1] D.W. Allan, "Statistics of Atomic Frequency Standards," Proc. IEEE, vol. 54, pp. 221-230, Feb. 1966. [2] P. Werle, R. Mücke, F. Slemr, "The Limits of Signal Averaging in Atmospheric Trace-Gas Monitoring by Tunable Diode-Laser Absorption Spectroscopy (TDLAS)," Applied Physics B, 57, pp. 131-139, April 1993.
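A minimal sketch of a non-overlapping Allan deviation computed directly from its definition is given below; the synthetic series (white noise plus a slow linear drift) and the chosen averaging times are illustrative only, not values from the presentation.

# Non-overlapping Allan deviation for a 1 Hz time series (e.g. continuous CO2 data).
import numpy as np

def allan_deviation(y, taus, dt=1.0):
    """Non-overlapping Allan deviation of series y for averaging times taus (seconds)."""
    y = np.asarray(y, dtype=float)
    out = []
    for tau in taus:
        m = int(round(tau / dt))              # samples per averaging bin
        n_bins = len(y) // m
        if n_bins < 2:
            out.append(np.nan)
            continue
        means = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
        avar = 0.5 * np.mean(np.diff(means) ** 2)   # Allan variance
        out.append(np.sqrt(avar))
    return np.array(out)

rng = np.random.default_rng(0)
signal = rng.normal(400.0, 0.1, 86400) + 1e-5 * np.arange(86400)  # white noise + slow drift
print(allan_deviation(signal, taus=[1, 10, 100, 1000, 10000]))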
Tracking variations in the alpha activity in an electroencephalogram
NASA Technical Reports Server (NTRS)
Prabhu, K. S.
1971-01-01
The problem of tracking alpha voltage variations in an electroencephalogram is discussed. This problem is important in encephalographic studies of sleep and of the effects of different stimuli on the brain. Very often the alpha voltage is tracked by passing the EEG signal through a bandpass filter centered at the alpha frequency, which hopefully will filter out unwanted noise from the alpha activity. Some alternative digital techniques are suggested and their performance is compared with the standard technique. These digital techniques can be used in an environment where an electroencephalograph is interfaced with a small digital computer via an A/D converter. They have the advantage that statistical statements about their variability can sometimes be made, so that the effect sought can be assessed correctly in the presence of random fluctuations.
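The standard technique referred to above can be sketched as band-pass filtering the digitized EEG around the alpha band and tracking the envelope of the filtered signal; the sampling rate and the 8-13 Hz band edges below are assumptions, not values from the report.

# Sketch of the standard approach: band-pass the digitized EEG around the
# alpha band and track its envelope. Sampling rate and band edges are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256.0                                   # assumed sampling rate (Hz)
b, a = butter(4, [8.0, 13.0], btype="bandpass", fs=fs)

t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)  # synthetic channel

alpha = filtfilt(b, a, eeg)                  # zero-phase band-pass filtering
envelope = np.abs(hilbert(alpha))            # instantaneous alpha amplitude
print(envelope.mean())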
duVerle, David A; Yotsukura, Sohiya; Nomura, Seitaro; Aburatani, Hiroyuki; Tsuda, Koji
2016-09-13
Single-cell RNA sequencing is fast becoming one of the standard methods for gene expression measurement, providing unique insights into cellular processes. A number of methods, based on general dimensionality reduction techniques, have been suggested to help infer and visualise the underlying structure of cell populations from single-cell expression levels, yet their models generally lack proper biological grounding and struggle at identifying complex differentiation paths. Here we introduce cellTree: an R/Bioconductor package that uses a novel statistical approach, based on document analysis techniques, to produce tree structures outlining the hierarchical relationship between single-cell samples, while identifying latent groups of genes that can provide biological insights. With cellTree, we provide experimentalists with an easy-to-use tool, based on statistically and biologically sound algorithms, to efficiently explore and visualise single-cell RNA data. The cellTree package is publicly available in the online Bioconductor repository at: http://bioconductor.org/packages/cellTree/ .
Majid, Omer Waleed; Ahmed, Aws Mahmood
2018-04-01
The purpose of the present study was to evaluate the anesthetic adequacy of 4% articaine 1.8 mL versus 2% lidocaine 3.6 mL without palatal injection compared with the standard technique for the extraction of maxillary molar teeth. This randomized, double-blinded, placebo-controlled clinical trial included patients requiring extraction of 1 maxillary molar under local anesthesia. Patients were randomly distributed into 1 of 3 groups: group A received 4% articaine 1.8 mL as a buccal injection and 0.2 mL as a palatal injection, group B received 4% articaine 1.8 mL plus normal saline 0.2 mL as a palatal injection, and group C received 2% lidocaine 3.6 mL plus normal saline 0.2 mL as a palatal injection. Pain was measured during injection, 8 minutes afterward, and during extraction using a visual analog scale. Initial palatal anesthesia and patients' satisfaction were measured using a 5-score verbal rating scale. Statistical analyses included descriptive statistics, analysis of variance, and the Pearson χ2 test. Differences with a P value less than .05 were considered significant. Eighty-four patients were included in the study. The average pain of injection was comparable among all study groups (P = .933). Pain during extraction in the articaine group was significantly less than that experienced in the placebo groups (P < .001), although the differences between the placebo groups were not significant. Satisfaction scores were significantly higher in the articaine group compared with the placebo groups (P < .001), with comparable results between the placebo groups. Although the anesthetic effects of the single placebo-controlled buccal injections of 4% articaine and 2% lidocaine were comparable, the level of anesthetic adequacy was statistically lower than that achieved by 4% articaine given by the standard technique. These results do not justify buccal infiltration of articaine or lidocaine without palatal injection as an effective alternative to the standard technique in the extraction of maxillary molar teeth. Copyright © 2017 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Machine learning modelling for predicting soil liquefaction susceptibility
NASA Astrophysics Data System (ADS)
Samui, P.; Sitharam, T. G.
2011-01-01
This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second technique uses a Support Vector Machine (SVM), a classification method firmly grounded in statistical learning theory. ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters [(N1)60 and peak ground acceleration (amax/g)], for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.
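A hedged sketch of the two classifiers, using scikit-learn, is shown below; note that scikit-learn's MLP is trained with a different optimizer than the Levenberg-Marquardt algorithm used in the paper, and the data file and its column layout are hypothetical.

# Hypothetical sketch: MLP neural network and SVM trained on (N1)60 and CSR
# to predict liquefaction (1) vs. no liquefaction (0).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

data = np.loadtxt("chichi_spt.csv", delimiter=",", skiprows=1)  # assumed columns: (N1)60, CSR, liquefied
X, y = data[:, :2], data[:, 2]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_train, y_train)
svm = SVC(kernel="rbf", C=10.0).fit(X_train, y_train)

print("ANN accuracy:", ann.score(X_test, y_test))
print("SVM accuracy:", svm.score(X_test, y_test))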
Image-guided optimization of the ECG trace in cardiac MRI.
Barnwell, James D; Klein, J Larry; Stallings, Cliff; Sturm, Amanda; Gillespie, Michael; Fine, Jason; Hyslop, W Brian
2012-03-01
Improper electrocardiogram (ECG) lead placement resulting in suboptimal gating may lead to reduced image quality in cardiac magnetic resonance imaging (CMR). A patient-specific systematic technique for rapid optimization of lead placement may improve CMR image quality. A rapid three-dimensional image of the thorax was used to guide the realignment of ECG leads relative to the cardiac axis of the patient in forty consecutive adult patients. Using our novel approach and consensus reading of pre- and post-correction ECG traces, seventy-three percent of patients had a qualitative improvement in their ECG tracings, and no patient had a decrease in the quality of their ECG tracing following the correction technique. Statistically significant improvement was observed independent of gender, body mass index, and cardiac rhythm. This technique provides an efficient option to improve the quality of the ECG tracing in patients who have a poor-quality ECG with standard techniques.
In vitro evaluation of marginal adaptation in five ceramic restoration fabricating techniques.
Ural, Cağri; Burgaz, Yavuz; Saraç, Duygu
2010-01-01
To compare in vitro the marginal adaptation of crowns manufactured using different ceramic restoration fabricating techniques. Fifty standardized master steel dies simulating molars were produced and divided into five groups, each containing 10 specimens. Test specimens were fabricated with CAD/CAM, heat-press, glass-infiltration, and conventional lost-wax techniques according to manufacturer instructions. Marginal adaptation of the test specimens was measured vertically before and after cementation using SEM. Data were statistically analyzed by one-way ANOVA with Tukey HSD tests (α = .05). Marginal adaptation of ceramic crowns was affected by fabrication technique and cementation process (P < .001). The lowest marginal opening values were obtained with Cerec-3 crowns before and after cementation (P < .001). The highest marginal discrepancy values were obtained with PFM crowns before and after cementation. Marginal adaptation values obtained in the compared systems were within clinically acceptable limits. Cementation causes a significant increase in the vertical marginal discrepancies of the test specimens.
Guenter Tulip Filter Retrieval Experience: Predictors of Successful Retrieval
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turba, Ulku Cenk, E-mail: uct5d@virginia.edu; Arslan, Bulent, E-mail: ba6e@virginia.edu; Meuse, Michael, E-mail: mm5tz@virginia.edu
We report our experience with Guenter Tulip filter placement indications, retrievals, and procedural problems, with emphasis on alternative retrieval techniques. We have identified 92 consecutive patients in whom a Guenter Tulip filter was placed and filter removal attempted. We recorded patient demographic information, filter placement and retrieval indications, procedures, standard and nonstandard filter retrieval techniques, complications, and clinical outcomes. The mean time to retrieval for those who experienced filter strut penetration was statistically significant [F(1,90) = 8.55, p = 0.004]. Filter strut(s) IVC penetration and successful retrieval were found to be statistically significant (p = 0.043). The filter hook-IVC relationship correlated with successful retrieval. A modified guidewire loop technique was applied in 8 of 10 cases where the hook appeared to penetrate the IVC wall and could not be engaged with a loop snare catheter, providing additional technical success in 6 of 8 (75%). Therefore, the total filter retrieval success increased from 88 to 95%. In conclusion, the Guenter Tulip filter has high successful retrieval rates with low rates of complication. Additional maneuvers such as a guidewire loop method can be used to improve retrieval success rates when the filter hook is endothelialized.
Brandmaier, Andreas M.; von Oertzen, Timo; Ghisletta, Paolo; Lindenberger, Ulman; Hertzog, Christopher
2018-01-01
Latent Growth Curve Models (LGCM) have become a standard technique to model change over time. Prediction and explanation of inter-individual differences in change are major goals in lifespan research. The major determinants of statistical power to detect individual differences in change are the magnitude of true inter-individual differences in linear change (LGCM slope variance), design precision, alpha level, and sample size. Here, we show that design precision can be expressed as the inverse of effective error. Effective error is determined by instrument reliability and the temporal arrangement of measurement occasions. However, it also depends on another central LGCM component, the variance of the latent intercept and its covariance with the latent slope. We derive a new reliability index for LGCM slope variance—effective curve reliability (ECR)—by scaling slope variance against effective error. ECR is interpretable as a standardized effect size index. We demonstrate how effective error, ECR, and statistical power for a likelihood ratio test of zero slope variance formally relate to each other and how they function as indices of statistical power. We also provide a computational approach to derive ECR for arbitrary intercept-slope covariance. With practical use cases, we argue for the complementary utility of the proposed indices of a study's sensitivity to detect slope variance when making a priori longitudinal design decisions or communicating study designs. PMID:29755377
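One plausible way to formalize the scaling described above, under the assumption that effective error enters the reliability ratio like an error-variance term (our notation, not necessarily the authors' exact expression), is:

\[
\text{precision} \;=\; \frac{1}{\sigma^{2}_{\mathrm{eff}}},
\qquad
\mathrm{ECR} \;=\; \frac{\sigma^{2}_{S}}{\sigma^{2}_{S} + \sigma^{2}_{\mathrm{eff}}},
\]

where \(\sigma^{2}_{S}\) denotes the latent slope variance and \(\sigma^{2}_{\mathrm{eff}}\) the effective error, which depends on instrument reliability, the temporal arrangement of measurement occasions, the intercept variance, and the intercept-slope covariance.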
Performance evaluation of spectral vegetation indices using a statistical sensitivity function
Ji, Lei; Peters, Albert J.
2007-01-01
A great number of spectral vegetation indices (VIs) have been developed to estimate biophysical parameters of vegetation. Traditional techniques for evaluating the performance of VIs are regression-based statistics, such as the coefficient of determination and root mean square error. These statistics, however, are not capable of quantifying the detailed relationship between VIs and biophysical parameters because the sensitivity of a VI is usually a function of the biophysical parameter instead of a constant. To better quantify this relationship, we developed a “sensitivity function” for measuring the sensitivity of a VI to biophysical parameters. The sensitivity function is defined as the first derivative of the regression function, divided by the standard error of the dependent variable prediction. The function elucidates the change in sensitivity over the range of the biophysical parameter. The Student's t- or z-statistic can be used to test the significance of VI sensitivity. Additionally, we developed a “relative sensitivity function” that compares the sensitivities of two VIs when the biophysical parameters are unavailable.
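In symbols (our notation), if \(\hat{V} = f(x)\) is the fitted regression of a vegetation index on the biophysical parameter \(x\), the sensitivity function defined above is

\[
S(x) \;=\; \frac{f'(x)}{\mathrm{SE}\!\left[\hat{V}(x)\right]},
\]

so sensitivity varies over the range of \(x\) rather than being a single constant, and its significance at any given \(x\) can be assessed with a Student's t- or z-statistic.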
Comparison of Estimation Techniques for the Four Parameter Beta Distribution.
1981-12-01
Gnanadesikan, Pinkham, and Hughes in 1967 (Ref 11). It dealt with the standard beta, and performed the estimation using smallest order statistics. The ... convergence of the iterative scheme" (Ref 11:611). Beckman and Tietjen picked up on Gnanadesikan, et al., and developed a solution method which is "fast ... (Second Edition). Reading, Massachusetts: Addison-Wesley Publishing Company, 1978. 11. Gnanadesikan, R., R. S. Pinkham and L. P. Hughes. "Maximum
NASA Technical Reports Server (NTRS)
Merceret, Francis J.
1995-01-01
This document presents results of a field study of the effect of sheltering of wind sensors by nearby foliage on the validity of wind measurements at the Space Shuttle Landing Facility (SLF). Standard measurements are made at one-second intervals from 30-foot (9.1-m) towers located 500 feet (152 m) from the SLF centerline. The centerline winds are not exactly the same as those measured by the towers. A companion study, Merceret (1995), quantifies the differences as a function of statistics of the observed winds and distance between the measurements and points of interest. This work examines the effect of nearby foliage on the accuracy of the measurements made by any one sensor, and the effects of averaging on the interpretation of the measurements. The field program used logarithmically spaced portable wind towers to measure wind speed and direction over a range of conditions as a function of distance from the obstructing foliage. Appropriate statistics were computed. The results suggest that accurate measurements require foliage to be cut back to OFCM standards. Analysis of averaging techniques showed that there is no significant difference between vector and scalar averages. Longer averaging periods reduce measurement error but do not otherwise change the measurement in reasonably steady flow regimes. In rapidly changing conditions, shorter averaging periods may be required to capture trends.
Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens
NASA Astrophysics Data System (ADS)
Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl
2016-01-01
As samples of ever-decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10⁻¹¹ Am² the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
Wavelet-Bayesian inference of cosmic strings embedded in the cosmic microwave background
NASA Astrophysics Data System (ADS)
McEwen, J. D.; Feeney, S. M.; Peiris, H. V.; Wiaux, Y.; Ringeval, C.; Bouchet, F. R.
2017-12-01
Cosmic strings are a well-motivated extension to the standard cosmological model and could induce a subdominant component in the anisotropies of the cosmic microwave background (CMB), in addition to the standard inflationary component. The detection of strings, while observationally challenging, would provide a direct probe of physics at very high-energy scales. We develop a framework for cosmic string inference from observations of the CMB made over the celestial sphere, performing a Bayesian analysis in wavelet space where the string-induced CMB component has distinct statistical properties to the standard inflationary component. Our wavelet-Bayesian framework provides a principled approach to compute the posterior distribution of the string tension Gμ and the Bayesian evidence ratio comparing the string model to the standard inflationary model. Furthermore, we present a technique to recover an estimate of any string-induced CMB map embedded in observational data. Using Planck-like simulations, we demonstrate the application of our framework and evaluate its performance. The method is sensitive to Gμ ∼ 5 × 10⁻⁷ for Nambu-Goto string simulations that include an integrated Sachs-Wolfe contribution only and do not include any recombination effects, before any parameters of the analysis are optimized. The sensitivity of the method compares favourably with other techniques applied to the same simulations.
Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed
2017-11-01
There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards to measure NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently had the highest NA concentrations and greatest standard error. Total NA concentration was not statistically different between solid phase extraction and liquid-liquid extraction sample preparations. Calibration standards influenced quantitation results. This work provided a comprehensive understanding of the inherent differences among the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Bond strength and microleakage of current dentin adhesives.
Fortin, D; Swift, E J; Denehy, G E; Reinhardt, J W
1994-07-01
The purpose of this in vitro study was to evaluate shear bond strengths and microleakage of seven current-generation dentin adhesive systems. Standard box-type Class V cavity preparations were made at the cemento-enamel junction on the buccal surfaces of eighty extracted human molars. These preparations were restored using a microfill composite following application of either All-Bond 2 (Bisco), Clearfil Liner Bond (Kuraray), Gluma 2000 (Miles), Imperva Bond (Shofu), OptiBond (Kerr), Prisma Universal Bond 3 (Caulk), Scotchbond Multi-Purpose (3M), or Scotchbond Dual-Cure (3M) (control). Lingual dentin of these same teeth was exposed and polished to 600-grit. Adhesives were applied and composite was bonded to the dentin using a gelatin capsule technique. Specimens were thermocycled 500 times. Shear bond strengths were determined using a universal testing machine, and microleakage was evaluated using a standard silver nitrate staining technique. Clearfil Liner Bond and OptiBond, adhesive systems that include low-viscosity, low-modulus intermediate resins, had the highest shear bond strengths (13.3 +/- 2.3 MPa and 12.9 +/- 1.5 MPa, respectively). Along with Prisma Universal Bond 3, they also had the least microleakage at dentin margins of Class V restorations. No statistically significant correlation between shear bond strength and microleakage was observed in this study. Adhesive systems that include a low-viscosity intermediate resin produced the highest bond strengths and lowest microleakage. Similarly, two materials with bond strengths in the intermediate range had significantly increased microleakage, and one material with a bond strength at the low end of the spectrum exhibited microleakage that was statistically greater. Thus, despite the lack of statistical correlation, there were observable trends.
NASA Astrophysics Data System (ADS)
Karali, Anna; Giannakopoulos, Christos; Frias, Maria Dolores; Hatzaki, Maria; Roussos, Anargyros; Casanueva, Ana
2013-04-01
Forest fires have always been present in Mediterranean ecosystems, and thus they constitute a major ecological and socio-economic issue. Over the last few decades, however, the number of forest fires has significantly increased, as has their severity and impact on the environment. Local fire danger projections are often required when dealing with wildfire research. In the present study, statistical downscaling and spatial interpolation methods were applied to the Canadian Fire Weather Index (FWI) in order to assess forest fire risk in Greece. The FWI is used worldwide (including the Mediterranean basin) to estimate fire danger in a generalized fuel type, based solely on weather observations. The meteorological inputs to the FWI System are noon values of dry-bulb temperature, air relative humidity, 10 m wind speed, and precipitation during the previous 24 hours. Statistical downscaling methods are based on a statistical model that takes into account empirical relationships between large-scale variables (used as predictors) and local-scale variables. In the framework of the current study, the statistical downscaling portal developed by the Santander Meteorology Group (https://www.meteo.unican.es/downscaling) in the framework of the EU project CLIMRUN (www.climrun.eu) was used to downscale non-standard parameters related to forest fire risk. Two different approaches were adopted: first, the analogue downscaling technique was applied directly to the FWI index values, and second, the same downscaling technique was applied indirectly through the meteorological inputs of the index. In both cases, the statistical downscaling portal was used with the ERA-Interim reanalysis as predictands, due to the lack of observations at noon. Additionally, a three-dimensional (3D) interpolation method of position and elevation, based on Thin Plate Splines (TPS), was used to interpolate the ERA-Interim data used to calculate the index. Results from this method were compared with the statistical downscaling results obtained from the portal. Finally, the FWI was computed using weather observations obtained from the Hellenic National Meteorological Service, mainly in the southern continental part of Greece, and a comparison with the previous results was performed.
Design of surface-water data networks for regional information
Moss, Marshall E.; Gilroy, E.J.; Tasker, Gary D.; Karlinger, M.R.
1982-01-01
This report describes a technique, Network Analysis of Regional Information (NARI), and the existing computer procedures that have been developed for the specification of the regional information-cost relation for several statistical parameters of streamflow. The measure of information used is the true standard error of estimate of a regional logarithmic regression. The cost is a function of the number of stations at which hydrologic data are collected and the number of years for which the data are collected. The technique can be used to obtain either (1) a minimum cost network that will attain a prespecified accuracy and reliability or (2) a network that maximizes information given a set of budgetary and time constraints.
NASA Astrophysics Data System (ADS)
Martin, Jeffery
2016-09-01
The free neutron is an excellent laboratory for searches for physics beyond the standard model. Ultracold neutrons (UCN) are free neutrons that can be confined to material, magnetic, and gravitational traps. UCN are compelling for experiments requiring long observation times, high polarization, or low energies. The challenge of experiments has been to create enough UCN to reach the statistical precision required. Production techniques involving neutron interactions with condensed matter systems have resulted in some successes, and new UCN sources are being pursued worldwide to exploit the higher UCN densities offered by these techniques. I will review the physics of how the UCN sources work, along with the present status of the world's efforts. Research supported by NSERC, CFI, and CRC.
Tweedell, Andrew J.; Haynes, Courtney A.
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
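As an illustration of the general idea behind the top-performing class of algorithms (not the specific implementation or the p0/posterior-probability settings evaluated in the study), the sketch below computes a posterior over a single mean-shift changepoint in a rectified EMG envelope, with the segment means integrated out under flat priors and a known, common noise variance assumed.

import numpy as np

def log_marginal_segment(y, sigma2):
    # Log marginal likelihood of one segment under a Gaussian model with known
    # variance sigma2 and a flat (improper) prior on the segment mean.
    n = len(y)
    ss = np.sum((y - y.mean()) ** 2)
    return -0.5 * (n - 1) * np.log(2 * np.pi * sigma2) - 0.5 * np.log(n) - ss / (2 * sigma2)

def changepoint_posterior(y, sigma2):
    # Discrete posterior over the sample index at which the mean shifts,
    # with a uniform prior over admissible changepoint locations.
    n = len(y)
    logpost = np.full(n, -np.inf)
    for t in range(2, n - 1):  # at least two samples in each segment
        logpost[t] = log_marginal_segment(y[:t], sigma2) + log_marginal_segment(y[t:], sigma2)
    logpost -= logpost.max()
    post = np.exp(logpost)
    return post / post.sum()

# Simulated rectified EMG envelope: quiescent baseline followed by a burst.
rng = np.random.default_rng(1)
envelope = np.concatenate([rng.normal(0.05, 0.01, 500), rng.normal(0.30, 0.01, 500)])
post = changepoint_posterior(envelope, sigma2=0.01 ** 2)
print("most probable onset sample:", int(post.argmax()))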
Efforts to improve international migration statistics: a historical perspective.
Kraly, E P; Gnanasekaran, K S
1987-01-01
During the past decade, the international statistical community has made several efforts to develop standards for the definition, collection and publication of statistics on international migration. This article surveys the history of official initiatives to standardize international migration statistics by reviewing the recommendations of the International Statistical Institute, International Labor Organization, and the UN, and reports a recently proposed agenda for moving toward comparability among national statistical systems. Heightening awareness of the benefits of exchange and creating motivation to implement international standards requires a 3-pronged effort from the international statistical community. 1st, it is essential to continue discussion about the significance of improvement, specifically standardization, of international migration statistics. The move from theory to practice in this area requires ongoing focus by migration statisticians so that conformity to international standards itself becomes a criterion by which national statistical practices are examined and assessed. 2nd, the countries should be provided with technical documentation to support and facilitate the implementation of the recommended statistical systems. Documentation should be developed with an understanding that conformity to international standards for migration and travel statistics must be achieved within existing national statistical programs. 3rd, the call for statistical research in this area requires more efforts by the community of migration statisticians, beginning with the mobilization of bilateral and multilateral resources to undertake the preceding list of activities.
NASA Astrophysics Data System (ADS)
Majerek, Dariusz; Guz, Łukasz; Suchorab, Zbigniew; Łagód, Grzegorz; Sobczuk, Henryk
2017-07-01
Mold that develops on moistened building barriers is a major cause of the Sick Building Syndrome (SBS). Fungal contamination is normally evaluated using standard biological methods, which are time-consuming and require a lot of manual labor. Fungi emit Volatile Organic Compounds (VOCs) that can be detected in indoor air using several detection techniques, e.g. chromatography. VOCs can also be detected using gas sensor arrays. Each sensor in the array generates a particular voltage signal that ought to be analyzed using properly selected statistical methods of interpretation. This work focuses on the attempt to apply statistical classification models to the evaluation of signals from a gas sensor array used to analyze air sampled from the headspace of various types of building materials at different levels of contamination, as well as clean reference materials.
75 FR 37245 - 2010 Standards for Delineating Metropolitan and Micropolitan Statistical Areas
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
... Micropolitan Statistical Areas; Notice. Federal Register / Vol. 75, No. 123 / Monday, June 28, 2010 ... and Micropolitan Statistical Areas. AGENCY: Office of Information and Regulatory Affairs, Office of ... Statistical Areas. The 2010 standards replace and supersede the 2000 Standards for Defining Metropolitan and...
NASA Astrophysics Data System (ADS)
Collados-Lara, Antonio-Juan; Pulido-Velazquez, David; Pardo-Iguzquiza, Eulogio
2017-04-01
Assessing the impacts of potential future climate change scenarios in precipitation and temperature is essential for designing adaptive strategies in water resources systems. The objective of this work is to analyze the possibilities of different statistical downscaling methods to generate potential future scenarios in an Alpine catchment from historical data and the available climate model simulations performed in the frame of the CORDEX EU project. The initial information employed to define these downscaling approaches is the historical climatic data (taken from the Spain02 project for the period 1971-2000 with a spatial resolution of 12.5 km) and the future series provided by climatic models for the horizon period 2071-2100. We have used information coming from nine climate model simulations (obtained from five different Regional Climate Models (RCMs) nested within four different Global Climate Models (GCMs)) from the European CORDEX project. In our application we have focused on the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) by the Intergovernmental Panel on Climate Change (IPCC). For each RCM we have generated future climate series for the period 2071-2100 by applying two different approaches, bias correction and delta change, and five different transformation techniques (first moment correction, first and second moment correction, regression functions, quantile mapping using distribution-derived transformation, and quantile mapping using empirical quantiles) for both of them. Ensembles of the obtained series were proposed to obtain more representative potential future climate scenarios to be employed to study potential impacts. In this work we propose a non-equifeasible combination of the future series, giving more weight to those coming from models (delta change approaches) or combinations of models and techniques that provide a better approximation to the basic and drought statistics of the historical data. A multi-objective analysis using basic statistics (mean, standard deviation and asymmetry coefficient) and drought statistics (duration, magnitude and intensity) has been performed to identify which models are better in terms of goodness of fit to reproduce the historical series. The drought statistics have been obtained from the Standardized Precipitation Index (SPI) series using the theory of runs. This analysis allows us to discriminate the best RCM and the best combination of model and correction technique in the bias-correction method. We have also analyzed the possibilities of using different stochastic weather generators to approximate the basic and drought statistics of the historical series. These analyses have been performed in our case study in a lumped and in a distributed way in order to assess their sensitivity to the spatial scale. The statistics of the future temperature series obtained with the different ensemble options are quite homogeneous, but the precipitation shows a higher sensitivity to the adopted method and spatial scale. The global increments in the mean temperature values are 31.79%, 31.79%, 31.03% and 31.74% for the distributed bias-correction, distributed delta-change, lumped bias-correction and lumped delta-change ensembles, respectively, and in precipitation they are -25.48%, -28.49%, -26.42% and -27.35%, respectively. Acknowledgments: This research work has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02 and CORDEX projects for the data provided for this study, and the R package qmap.
NASA Technical Reports Server (NTRS)
Smedes, H. W.; Linnerud, H. J.; Woolaver, L. B.; Su, M. Y.; Jayroe, R. R.
1972-01-01
Two clustering techniques were used for computer terrain mapping of test sites in Yellowstone National Park. One test was made with multispectral scanner data using a composite technique which consists of (1) a strictly sequential statistical clustering, which is a sequential variance analysis, and (2) a generalized K-means clustering. In this composite technique, the output of (1) is a first approximation of the cluster centers. This is the input to (2), which consists of steps to improve the determination of cluster centers by iterative procedures. Another test was made using the three emulsion layers of color-infrared aerial film as a three-band spectrometer. Relative film densities were analyzed using a simple clustering technique in three-color space. Important advantages of the clustering technique over conventional supervised computer programs are: (1) human intervention, preparation time, and manipulation of data are reduced; (2) the computer map gives an unbiased indication of where best to select the reference ground control data; (3) easy-to-obtain, inexpensive film can be used; and (4) geometric distortions can be easily rectified by simple standard photogrammetric techniques.
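In the spirit of the second test described above (clustering relative film densities in three-color space), an unsupervised classification could be sketched as follows; the file name, band layout, and number of clusters are assumptions.

# Sketch of unsupervised terrain clustering in three-colour space: K-means on
# the relative densities of the three emulsion layers. Inputs are assumed.
import numpy as np
from sklearn.cluster import KMeans

pixels = np.load("cir_film_densities.npy")        # assumed shape (rows, cols, 3): relative densities
X = pixels.reshape(-1, 3)

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)   # iterative refinement of cluster centres
terrain_map = km.labels_.reshape(pixels.shape[:2])            # unsupervised terrain-class map
print(km.cluster_centers_)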
Cosmographic analysis with Chebyshev polynomials
NASA Astrophysics Data System (ADS)
Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando
2018-05-01
The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
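As a schematic illustration of the parametrization step only (ordinary, not rational, Chebyshev fitting, on synthetic data rather than the JLA sample), a distance-redshift relation could be expanded in Chebyshev polynomials as follows.

# Toy sketch: fit a low-order Chebyshev expansion to synthetic distance moduli.
import numpy as np
from numpy.polynomial import chebyshev as C

z = np.linspace(0.01, 1.4, 200)
mu_true = 5 * np.log10((1 + z) * z * 4283.0) + 25   # toy Hubble-law distance relation (c/H0 for H0 ~ 70), illustrative only
mu_obs = mu_true + np.random.default_rng(0).normal(0, 0.15, z.size)

coeffs = C.chebfit(z, mu_obs, deg=3)                # low-order Chebyshev expansion of mu(z)
mu_fit = C.chebval(z, coeffs)
print("rms residual:", np.sqrt(np.mean((mu_fit - mu_obs) ** 2)))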
Confidence of compliance: a Bayesian approach for percentile standards.
McBride, G B; Ellis, J C
2001-04-01
Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
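A minimal sketch of the beta-binomial calculation behind such "confidence of compliance" graphs is given below, using the Jeffreys prior discussed above; the example rule (a 95th-percentile standard assessed with 20 samples) is an illustration, not one of the rules analysed in the paper.

# Beta-binomial "confidence of compliance": with a Jeffreys prior Beta(0.5, 0.5)
# and x exceedances in n samples, the posterior for the true exceedance
# proportion p is Beta(0.5 + x, 0.5 + n - x); compliance with a 95th-percentile
# standard corresponds to P(p <= 0.05).
from scipy.stats import beta

def confidence_of_compliance(x, n, p_std=0.05, a0=0.5, b0=0.5):
    """Posterior probability that the exceedance proportion is within the standard."""
    return beta.cdf(p_std, a0 + x, b0 + n - x)

for x in range(0, 4):
    print(x, "exceedances in 20 samples:", round(confidence_of_compliance(x, 20), 3))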
High-coverage quantitative proteomics using amine-specific isotopic labeling.
Melanson, Jeremy E; Avery, Steven L; Pinto, Devanand M
2006-08-01
Peptide dimethylation with isotopically coded formaldehydes was evaluated as a potential alternative to techniques such as the iTRAQ method for comparative proteomics. The isotopic labeling strategy and custom-designed protein quantitation software were tested using protein standards and then applied to measure proteins levels associated with Alzheimer's disease (AD). The method provided high accuracy (10% error), precision (14% RSD) and coverage (70%) when applied to the analysis of a standard solution of BSA by LC-MS/MS. The technique was then applied to measure protein abundance levels in brain tissue afflicted with AD relative to normal brain tissue. 2-D LC-MS analysis identified 548 unique proteins (p<0.05). Of these, 349 were quantified with two or more peptides that met the statistical criteria used in this study. Several classes of proteins exhibited significant changes in abundance. For example, elevated levels of antioxidant proteins and decreased levels of mitochondrial electron transport proteins were observed. The results demonstrate the utility of the labeling method for high-throughput quantitative analysis.
NASA Technical Reports Server (NTRS)
Lattman, L. H. (Principal Investigator)
1977-01-01
The author has identified the following significant results. Standard photogeologic techniques were applied to LANDSAT imagery of the Basin and Range province of Utah and Nevada to relate linear, tonal, textural, drainage, and geomorphic features to known mineralized areas in an attempt to develop criteria for the location of mineral deposits. No consistent correlation was found between lineaments, mapped according to specified criteria, and the locations of mines, mining districts, or intrusive outcrops. Tonal and textural patterns were more closely related to geologic outcrop patterns than to mineralization. A statistical study of drainage azimuths of various length classes as measured on LANDSAT showed significant correlation with mineralized districts in the 3-6 km length class. Alignments of outcrops of basalt, a rock type highly visible on LANDSAT imagery, appear to be colinear with acidic and intermediate intrusive centers in some areas and may assist in the recognition of regional fracture systems for mineral exploration.
NASA Astrophysics Data System (ADS)
Jolivet, S.; Mezghani, S.; El Mansori, M.
2016-09-01
The replication of topography has been generally restricted to optimizing material processing technologies in terms of statistical and single-scale features such as roughness. By contrast, manufactured surface topography is highly complex, irregular, and multiscale. In this work, we have demonstrated the use of multiscale analysis on replicates of surface finish to assess the precise control of the finished replica. Five commercial resins used for surface replication were compared. The topography of five standard surfaces representative of common finishing processes were acquired both directly and by a replication technique. Then, they were characterized using the ISO 25178 standard and multiscale decomposition based on a continuous wavelet transform, to compare the roughness transfer quality at different scales. Additionally, atomic force microscope force modulation mode was used in order to compare the resins’ stiffness properties. The results showed that less stiff resins are able to replicate the surface finish along a larger wavelength band. The method was then tested for non-destructive quality control of automotive gear tooth surfaces.
Assessment of and standardization for quantitative nondestructive test
NASA Technical Reports Server (NTRS)
Neuschaefer, R. W.; Beal, J. B.
1972-01-01
Present capabilities and limitations of nondestructive testing (NDT) as applied to aerospace structures during design, development, production, and operational phases are assessed. This assessment will help determine what useful quantitative and qualitative structural data may be provided, from raw materials through vehicle refurbishment. It considers metal alloy systems and bonded composites presently applied in active NASA programs or strong contenders for future use. Quantitative and qualitative data have been summarized from recent literature and in-house information, and presented along with a description of those structures or standards from which the information was obtained. Examples, in tabular form, of NDT technique capabilities and limitations have been provided. The NDT techniques discussed and assessed were radiography, ultrasonics, penetrants, thermal, acoustic, and electromagnetic methods. Quantitative data are sparse; therefore, obtaining statistically reliable flaw detection data must be strongly emphasized. The new requirements for reusable space vehicles have resulted in highly efficient design concepts operating in severe environments. This increases the need for quantitative NDT evaluation of selected structural components, the end item structure, and refurbishment operations.
An Independent Filter for Gene Set Testing Based on Spectral Enrichment.
Frost, H Robert; Li, Zhigang; Asselbergs, Folkert W; Moore, Jason H
2015-01-01
Gene set testing has become an indispensable tool for the analysis of high-dimensional genomic data. An important motivation for testing gene sets, rather than individual genomic variables, is to improve statistical power by reducing the number of tested hypotheses. Given the dramatic growth in common gene set collections, however, testing is often performed with nearly as many gene sets as underlying genomic variables. To address the challenge to statistical power posed by large gene set collections, we have developed spectral gene set filtering (SGSF), a novel technique for independent filtering of gene set collections prior to gene set testing. The SGSF method uses as a filter statistic the p-value measuring the statistical significance of the association between each gene set and the sample principal components (PCs), taking into account the significance of the associated eigenvalues. Because this filter statistic is independent of standard gene set test statistics under the null hypothesis but dependent under the alternative, the proportion of enriched gene sets is increased without impacting the type I error rate. As shown using simulated and real gene expression data, the SGSF algorithm accurately filters gene sets unrelated to the experimental outcome resulting in significantly increased gene set testing power.
Robust estimation approach for blind denoising.
Rabie, Tamer
2005-11-01
This work develops a new robust statistical framework for blind image denoising. Robust statistics addresses the problem of estimation when the idealized assumptions about a system are occasionally violated. The contaminating noise in an image is considered as a violation of the assumption of spatial coherence of the image intensities and is treated as an outlier random variable. A denoised image is estimated by fitting a spatially coherent stationary image model to the available noisy data using a robust estimator-based regression method within an optimal-size adaptive window. The robust formulation aims at eliminating the noise outliers while preserving the edge structures in the restored image. Several examples demonstrating the effectiveness of this robust denoising technique are reported and a comparison with other standard denoising filters is presented.
A simple test of association for contingency tables with multiple column responses.
Decady, Y J; Thomas, D R
2000-09-01
Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.
Predictive onboard flow control for packet switching satellites
NASA Technical Reports Server (NTRS)
Bobinsky, Eric A.
1992-01-01
We outline two alternative approaches to predicting the onset of congestion in a packet switching satellite, and argue that predictive, rather than reactive, flow control is necessary for the efficient operation of such a system. The first method discussed is based on standard statistical techniques, which are used to periodically calculate a probability of near-term congestion from arrival rate statistics. If this probability exceeds a preset threshold, the satellite would transmit a rate-reduction signal to all active ground stations. The second method discussed would utilize a neural network to periodically predict the occurrence of buffer overflow based on input data which would include, in addition to arrival rates, the distributions of packet lengths, source addresses, and destination addresses.
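A hypothetical sketch of the first, statistics-based predictor: estimate the mean and variance of recent packet arrival rates and, under a Gaussian assumption, compute the probability that near-term load exceeds the switch's service capacity. The capacity, window length, and rate-reduction threshold below are invented for illustration.

```python
# Sketch of a statistical congestion predictor (illustrative parameters only).
from collections import deque
from math import erf, sqrt

CAPACITY = 1000.0        # packets/s the switch can service (assumed)
THRESHOLD = 0.05         # send rate-reduction signal if P(congestion) exceeds this

window = deque(maxlen=60)   # last 60 arrival-rate samples

def update_and_predict(arrival_rate):
    window.append(arrival_rate)
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / max(n - 1, 1)
    std = sqrt(var) + 1e-9
    # P(rate > capacity) under a Gaussian model of near-term arrivals
    z = (CAPACITY - mean) / std
    p_congestion = 0.5 * (1.0 - erf(z / sqrt(2.0)))
    return p_congestion, p_congestion > THRESHOLD

for rate in [800, 850, 900, 950, 990]:
    p, signal = update_and_predict(rate)
    print(f"rate={rate}: P(congestion)={p:.3f}, rate-reduction signal={signal}")
```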
NASA Astrophysics Data System (ADS)
García-Resúa, Carlos; Pena-Verdeal, Hugo; Miñones, Mercedes; Gilino, Jorge; Giraldez, Maria J.; Yebra-Pimentel, Eva
2013-11-01
High tear fluid osmolarity is a feature common to all types of dry eye. This study was designed to establish the accuracy of two osmometers, a freezing point depression osmometer (Fiske 110) and an electrical impedance osmometer (TearLab™), using standard samples. To assess the accuracy of the measurements provided by the two instruments we used 5 solutions of known osmolarity/osmolality: 50, 290 and 850 mOsm/kg and 292 and 338 mOsm/L. The Fiske 110 is designed for samples of 20 μl, so measurements were made on 1:9, 1:4, 1:1 and 1:0 dilutions of the standards. The TearLab is designed for use on the tear film and requires a sample of only 0.05 μl, so no dilutions were employed. Because of the smaller measurement range of the TearLab, the 50 and 850 mOsm/kg standards were not included. Twenty measurements per standard sample were taken, and differences from the reference value were analysed by one-sample t-test. For the Fiske 110, osmolarity measurements differed statistically from standard values except those recorded for the 290 mOsm/kg standard diluted 1:1 (p = 0.309), the 292 mOsm/L H2O sample (1:1) and the 338 mOsm/L H2O standard (1:4). The more diluted the sample, the higher the error rate. For the TearLab measurements, the one-sample t-test indicated that all determinations differed from the theoretical values (p = 0.001), though the differences were always small. For undiluted solutions, the Fiske 110 shows performance similar to that of the TearLab; for the diluted standards, however, its performance worsens.
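The comparison with each reference value reduces to a one-sample t-test; a minimal sketch with made-up readings against the 290 mOsm/kg standard:

```python
# One-sample t-test of repeated osmometer readings against a known standard (hypothetical data).
import numpy as np
from scipy import stats

readings = np.array([287, 291, 289, 293, 288, 290, 292, 286, 294, 289,
                     291, 288, 290, 287, 292, 289, 293, 290, 288, 291])
t_stat, p_value = stats.ttest_1samp(readings, popmean=290.0)
print(f"mean = {readings.mean():.1f} mOsm/kg, t = {t_stat:.2f}, p = {p_value:.3f}")
```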
Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T
2016-12-20
Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
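For context, the frequentist comparator named in the abstract (the DerSimonian and Laird procedure) can be written in a few lines; the study effect estimates and standard errors below are fabricated for illustration.

```python
# DerSimonian-Laird random-effects meta-analysis (illustrative data).
import numpy as np

y  = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect estimates (assumed)
se = np.array([0.12, 0.20, 0.15, 0.10, 0.25])   # their standard errors (assumed)

w_fixed = 1.0 / se**2
y_fixed = np.sum(w_fixed * y) / np.sum(w_fixed)
Q = np.sum(w_fixed * (y - y_fixed)**2)                       # Cochran's Q
df = len(y) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / c)                                # DL between-study variance

w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)
se_re = np.sqrt(1.0 / np.sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {mu_re:.3f} (SE {se_re:.3f})")
```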
Decision trees in epidemiological research.
Venkatasubramaniam, Ashwini; Wolfson, Julian; Mitchell, Nathan; Barnes, Timothy; JaKa, Meghan; French, Simone
2017-01-01
In many studies, it is of interest to identify population subgroups that are relatively homogeneous with respect to an outcome. The nature of these subgroups can provide insight into effect mechanisms and suggest targets for tailored interventions. However, identifying relevant subgroups can be challenging with standard statistical methods. We review the literature on decision trees, a family of techniques for partitioning the population, on the basis of covariates, into distinct subgroups who share similar values of an outcome variable. We compare two decision tree methods, the popular Classification and Regression tree (CART) technique and the newer Conditional Inference tree (CTree) technique, assessing their performance in a simulation study and using data from the Box Lunch Study, a randomized controlled trial of a portion size intervention. Both CART and CTree identify homogeneous population subgroups and offer improved prediction accuracy relative to regression-based approaches when subgroups are truly present in the data. An important distinction between CART and CTree is that the latter uses a formal statistical hypothesis testing framework in building decision trees, which simplifies the process of identifying and interpreting the final tree model. We also introduce a novel way to visualize the subgroups defined by decision trees. Our novel graphical visualization provides a more scientifically meaningful characterization of the subgroups identified by decision trees. Decision trees are a useful tool for identifying homogeneous subgroups defined by combinations of individual characteristics. While all decision tree techniques generate subgroups, we advocate the use of the newer CTree technique due to its simplicity and ease of interpretation.
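A minimal sketch of the CART-style step with scikit-learn, fitting a shallow regression tree to simulated covariates and outcome to expose outcome-homogeneous subgroups; CTree itself is an R method and is not reproduced here, and the data are invented.

```python
# Regression tree to identify outcome-homogeneous subgroups (simulated data).
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([rng.integers(18, 80, n),        # age
                     rng.integers(0, 2, n)])         # intervention/exposure indicator
y = 2.0 + 0.5 * (X[:, 0] > 50) + 1.5 * X[:, 1] + rng.normal(0, 0.5, n)

tree = DecisionTreeRegressor(max_depth=2, min_samples_leaf=50).fit(X, y)
print(export_text(tree, feature_names=["age", "exposed"]))   # leaves = candidate subgroups
```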
Jacobson, Magdalena; Wallgren, Per; Nordengrahn, Ann; Merza, Malik; Emanuelson, Ulf
2011-04-01
Lawsonia intracellularis is a common cause of chronic diarrhoea and poor performance in young growing pigs. Diagnosis of this obligate intracellular bacterium is based on the demonstration of the microbe or microbial DNA in tissue specimens or faecal samples, or the demonstration of L. intracellularis-specific antibodies in sera. The aim of the present study was to evaluate a blocking ELISA in the detection of serum antibodies to L. intracellularis, by comparison with the previously widely used immunofluorescent antibody test (IFAT). Sera were collected from 176 pigs aged 8-12 weeks originating from 24 herds with or without problems with diarrhoea and poor performance in young growing pigs. Sera were analyzed by the blocking ELISA and by IFAT. Bayesian modelling techniques were used to account for the absence of a gold standard test, and the results of the blocking ELISA were modelled against the IFAT test with a "2 dependent tests, 2 populations, no gold standard" model. At the finally selected cut-off value of percent inhibition (PI) 35, the diagnostic sensitivity of the blocking ELISA was 72% and the diagnostic specificity was 93%. The positive predictive value was 0.82 and the negative predictive value was 0.89, at the observed prevalence of 33.5%. The sensitivity and specificity as evaluated by Bayesian statistical techniques differed from those previously reported. Properties of diagnostic tests may well vary between countries, laboratories and among populations of animals. In the absence of a true gold standard, the importance of validating new methods by appropriate statistical methods and with respect to the target population must be emphasized.
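The predictive values follow from sensitivity, specificity, and prevalence via the standard formulas; the small sketch below reproduces that arithmetic with the figures reported above (the results are close to, but not exactly, the reported 0.82 and 0.89, which came from the Bayesian model rather than these plug-in formulas).

```python
# Predictive values from sensitivity, specificity and prevalence (figures from the abstract).
se, sp, prev = 0.72, 0.93, 0.335

ppv = (se * prev) / (se * prev + (1 - sp) * (1 - prev))
npv = (sp * (1 - prev)) / (sp * (1 - prev) + (1 - se) * prev)
# Plug-in values are close to the reported 0.82 and 0.89 from the Bayesian model.
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```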
Efficacy of Liposuction as a Delay Method for Improving Flap Survival.
Orhan, Erkan; Erol, Yağmur Reyyan; Deren, Orgun; Altun, Serdar; Erdoğan, Bülent
2016-12-01
Flaps are often used in repairing tissue defects and partial or full flap loss is still an important morbidity cause. Several techniques have been tried to increase flap circulation but none of these could replace the delay technique. Our goal in this study is to show the efficacy of liposuction in delay of dorsal rat cutaneous flaps and improvement in flap survival. Twenty-four Wistar rats were used. The rats in group 1 received 9 × 3-sized caudally-based random pattern skin flaps. In group 2, liposuction was done under the tissue island spotted as the flap and after 14 days, standard flap surgery was done. In group 3, surgical delay was done and after 14 days, standard flap surgery was done. In group 4, liposuction was done under the tissue island spotted as the flap and standard flap surgery was done right after the liposuction. The rate of necrotic tissue in group 3 (surgical delay; mean 13.7%) was less than the rate in group 2 (liposuction delay; mean 15.1%), although the difference was not statistically significant. The necrosis rates in group 3 (surgical delay) and group 2 (liposuction delay) were less than the rates in both group 1 (only flap; mean 41.5%) and group 4 (liposuction flap; mean 40.0%) and this difference was statistically significant (p < 0.0001). Liposuction can be an alternative to surgical delay as a less invasive method in the clinic. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
Multi-scale pixel-based image fusion using multivariate empirical mode decomposition.
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P; McDonald-Maier, Klaus D
2015-05-08
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition
Rehman, Naveed ur; Ehsan, Shoaib; Abdullah, Syed Muhammad Umer; Akhtar, Muhammad Jehanzaib; Mandic, Danilo P.; McDonald-Maier, Klaus D.
2015-01-01
A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences. PMID:26007714
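As a point of reference, the DWT fusion baseline named in the abstracts can be sketched with PyWavelets: fuse approximation coefficients by averaging and detail coefficients by a maximum-absolute rule. The two input images below are random placeholders, not the multi-exposure/multi-focus dataset, and this is the comparison method, not MEMD fusion itself.

```python
# Standard DWT-based fusion baseline (max-abs rule on details, mean on approximation).
import numpy as np
import pywt

def dwt_fuse(img_a, img_b, wavelet="haar", levels=3):
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]                       # approximation: average
    for da, db in zip(ca[1:], cb[1:]):                    # details: keep larger magnitude
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

rng = np.random.default_rng(0)
img_a = rng.random((128, 128))      # placeholder multi-focus / multi-exposure inputs
img_b = rng.random((128, 128))
print(dwt_fuse(img_a, img_b).shape)
```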
Pini, Giovannalberto; Goezen, Ali Serdar; Schulze, Michael; Hruza, Marcel; Klein, Jan; Rassweiler, Jens Jochen
2012-10-01
To present small-incision access retroperitoneoscopic technique pyeloplasty (SMARTp), a novel mini-laparoscopic approach for management of uretero-pelvic junction obstruction (UPJO) in adults, including comparison with the standard retroperitoneoscopic technique (SRTp). In a non-randomised study, we matched 12 adult patients treated from August to November 2010 by SMARTp with 12 patients treated with SRTp from January to November 2010. The mini-laparoscopic retroperitoneal space was created with a home-made 6-mm balloon trocar. One 6-mm trocar (for a 5-mm 30° telescope) and two 3.5-mm trocars (for 3-mm working instruments) were used. SRTp was performed with 11- and 6-mm trocars. Primary endpoints were cosmetic appearance and post-operative pain, evaluated respectively by the patient and observer scar assessment scale (POSAS) and a visual analogue scale (VAS). Secondary endpoints were comparison of operative and functional parameters. Cumulative cosmetic results were statistically significant in favour of SMARTp (POSAS: 37.9 vs. 52.4; P = 0.002). Post-operative pain (first- to fourth-day VAS) showed a trend favouring SMARTp, although the difference was not statistically significant (4.2 vs. 4.9, P = 0.891). No differences were recorded in terms of operative time, pre- and post-operative Hb difference, DJ-stent removal and resistive index (RI) improvement. The SMARTp group showed faster drain removal (2.4 vs. 3.4 days, P = 0.004) and discharge (4.5 vs. 5.4 days, P = 0.017). Preliminary data support SMARTp as a safe procedure in experienced hands, providing better cosmetic results compared with SRTp. Further studies and randomised clinical trials in larger populations are required.
Extragalactic counterparts to Einstein slew survey sources
NASA Technical Reports Server (NTRS)
Schachter, Jonathan F.; Elvis, Martin; Plummer, David; Remillard, Ron
1992-01-01
The Einstein slew survey consists of 819 bright X-ray sources, of which 636 (or 78 percent) are identified with counterparts in standard catalogs. The importance of bright X-ray surveys is stressed, and the slew survey is compared to the Rosat all sky survey. Statistical techniques for minimizing confusion in arcminute error circles in digitized data are discussed. The 238 slew survey active galactic nuclei, clusters, and BL Lacertae objects identified to date and their implications for logN-logS and source evolution studies are described.
Sequential neural text compression.
Schmidhuber, J; Heil, S
1996-01-01
The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
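The principle — a predictive model supplies symbol probabilities to a statistical (entropy) coder — can be illustrated without a neural net. The sketch below uses a simple adaptive bigram predictor in place of the paper's network and charges each character its ideal code length, -log2 p, which is what an arithmetic coder would approach; the toy text and the bigram model are illustrative assumptions.

```python
# Predictive coding sketch: adaptive bigram model + ideal arithmetic-code length.
from collections import defaultdict
from math import log2

text = "the quick brown fox jumps over the lazy dog " * 20   # toy stand-in for an article

counts = defaultdict(lambda: defaultdict(int))   # counts[prev_char][next_char]
totals = defaultdict(int)
alphabet = sorted(set(text))

bits = 0.0
prev = None
for ch in text:
    # Laplace-smoothed predictive probability given the previous character
    p = (counts[prev][ch] + 1) / (totals[prev] + len(alphabet))
    bits += -log2(p)                             # ideal code length for this symbol
    counts[prev][ch] += 1                        # update the model after coding (adaptive)
    totals[prev] += 1
    prev = ch

raw_bits = 8 * len(text)
print(f"predictive coding: {bits / len(text):.2f} bits/char, "
      f"compression ratio ~ {raw_bits / bits:.1f}x")
```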
Trends in modeling Biomedical Complex Systems
Milanesi, Luciano; Romano, Paolo; Castellani, Gastone; Remondini, Daniel; Liò, Petro
2009-01-01
In this paper we provide an introduction to the techniques for multi-scale complex biological systems, from the single bio-molecule to the cell, combining theoretical modeling, experiments, informatics tools and technologies suitable for biological and biomedical research, which are becoming increasingly multidisciplinary, multidimensional and information-driven. The most important concepts of mathematical modeling methodologies and statistical inference, bioinformatics and standards tools to investigate complex biomedical systems are discussed, and the prominent literature useful to both the practitioner and the theoretician is presented. PMID:19828068
Fusion of multiscale wavelet-based fractal analysis on retina image for stroke prediction.
Che Azemin, M Z; Kumar, Dinesh K; Wong, T Y; Wang, J J; Kawasaki, R; Mitchell, P; Arjunan, Sridhar P
2010-01-01
In this paper, we present a novel method of analyzing retinal vasculature using the Fourier fractal dimension to extract the complexity of the retinal vasculature enhanced at different wavelet scales. Logistic regression was used as a fusion method to model the classifier for 5-year stroke prediction. The efficacy of this technique has been tested using standard pattern recognition performance evaluation, Receiver Operating Characteristic (ROC) analysis, and a standard medical prediction statistic, the odds ratio. A stroke prediction model was developed using the proposed system.
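A hedged sketch of the fusion and evaluation steps with scikit-learn — logistic regression over per-scale complexity features, with ROC-AUC and odds-ratio readouts. The features and outcomes are simulated stand-ins, not retinal data or the study's actual model.

```python
# Logistic-regression fusion of multi-scale features with ROC-AUC evaluation (simulated).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 4))                         # "fractal dimension" at 4 wavelet scales
logit = -1.0 + 0.8 * X[:, 0] + 0.5 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))       # simulated 5-year stroke outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
odds_ratios = np.exp(clf.coef_[0])                  # per-unit odds ratios for each scale
print(f"AUC = {auc:.2f}, odds ratios = {np.round(odds_ratios, 2)}")
```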
Clinical validation of robot simulation of toothbrushing - comparative plaque removal efficacy
2014-01-01
Background Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate correlation of tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Methods Clinical programme: 27 subjects received dental cleaning prior to a 3-day-plaque-regrowth-interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33–47 with three techniques (horizontal, rotating, vertical), each for 20s buccally and for 20s orally in 3 consecutive intervals. The force was calibrated, the brushing technique was video supported. Two different brushes were randomly assigned to the subject. Robot programme: Clinical brushing programmes were transferred to a 6-axis-robot. Artificial teeth 33–47 were covered with plaque-simulating substrate. All brushing techniques were repeated 7 times, results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. Results The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. Conclusions The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning. PMID:24996973
Clinical validation of robot simulation of toothbrushing--comparative plaque removal efficacy.
Lang, Tomas; Staufer, Sebastian; Jennes, Barbara; Gaengler, Peter
2014-07-04
Clinical validation of laboratory toothbrushing tests has important advantages. It was, therefore, the aim to demonstrate correlation of tooth cleaning efficiency of a new robot brushing simulation technique with clinical plaque removal. Clinical programme: 27 subjects received dental cleaning prior to a 3-day-plaque-regrowth-interval. Plaque was stained, photographically documented and scored using a planimetrical index. Subjects brushed teeth 33-47 with three techniques (horizontal, rotating, vertical), each for 20s buccally and for 20s orally in 3 consecutive intervals. The force was calibrated, the brushing technique was video supported. Two different brushes were randomly assigned to the subject. Robot programme: Clinical brushing programmes were transferred to a 6-axis-robot. Artificial teeth 33-47 were covered with plaque-simulating substrate. All brushing techniques were repeated 7 times, results were scored according to clinical planimetry. All data underwent statistical analysis by t-test, U-test and multivariate analysis. The individual clinical cleaning patterns are well reproduced by the robot programmes. Differences in plaque removal are statistically significant for the two brushes, reproduced in clinical and robot data. Multivariate analysis confirms the higher cleaning efficiency for anterior teeth and for the buccal sites. The robot tooth brushing simulation programme showed good correlation with clinically standardized tooth brushing. This new robot brushing simulation programme can be used for rapid, reproducible laboratory testing of tooth cleaning.
NASA Astrophysics Data System (ADS)
Rauscher, Bernard J.; Arendt, Richard G.; Fixsen, D. J.; Greenhouse, Matthew A.; Lander, Matthew; Lindler, Don; Loose, Markus; Moseley, S. H.; Mott, D. Brent; Wen, Yiting; Wilson, Donna V.; Xenophontos, Christos
2017-10-01
Near-infrared array detectors, like the James Webb Space Telescope (JWST) NIRSpec’s Teledyne’s H2RGs, often provide reference pixels and a reference output. These are used to remove correlated noise. Improved reference sampling and subtraction (IRS2) is a statistical technique for using this reference information optimally in a least-squares sense. Compared with the traditional H2RG readout, IRS2 uses a different clocking pattern to interleave many more reference pixels into the data than is otherwise possible. Compared with standard reference correction techniques, IRS2 subtracts the reference pixels and reference output using a statistically optimized set of frequency-dependent weights. The benefits include somewhat lower noise variance and much less obvious correlated noise. NIRSpec’s IRS2 images are cosmetically clean, with less 1/f banding than in traditional data from the same system. This article describes the IRS2 clocking pattern and presents the equations needed to use IRS2 in systems other than NIRSpec. For NIRSpec, applying these equations is already an option in the calibration pipeline. As an aid to instrument builders, we provide our prototype IRS2 calibration software and sample JWST NIRSpec data. The same techniques are applicable to other detector systems, including those based on Teledyne’s H4RG arrays. The H4RG’s interleaved reference pixel readout mode is effectively one IRS2 pattern.
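A greatly simplified, hypothetical sketch of the underlying idea — estimate frequency-dependent weights by least squares between a science output and a reference output, then subtract the weighted reference — using synthetic 1/f-like noise. This is not the NIRSpec pipeline or the published IRS2 weighting; the noise model, smoothing band, and signal lengths are assumptions.

```python
# Simplified frequency-weighted reference subtraction (not the actual IRS2 pipeline).
import numpy as np

rng = np.random.default_rng(0)
n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)
shape = np.where(freqs > 0, 1.0 / np.sqrt(freqs), 0.0)          # 1/f-like spectrum

common = np.fft.irfft(shape * (rng.normal(size=freqs.size) +
                               1j * rng.normal(size=freqs.size)), n)
science   = common + 0.2 * rng.normal(size=n)    # science output: correlated + white noise
reference = common + 0.2 * rng.normal(size=n)    # reference output sees the same drift

S = np.fft.rfft(science)
R = np.fft.rfft(reference)
# Per-frequency least-squares weight, smoothed over a small band for stability
w = np.convolve(S * np.conj(R), np.ones(9) / 9, mode="same") / \
    np.convolve(np.abs(R)**2 + 1e-12, np.ones(9) / 9, mode="same")
cleaned = np.fft.irfft(S - w * R, n)

print(f"std before = {science.std():.3f}, after = {cleaned.std():.3f}")
```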
Chou, C P; Bentler, P M; Satorra, A
1991-11-01
Research studying robustness of maximum likelihood (ML) statistics in covariance structure analysis has concluded that test statistics and standard errors are biased under severe non-normality. An estimation procedure known as asymptotic distribution free (ADF), making no distributional assumption, has been suggested to avoid these biases. Corrections to the normal theory statistics to yield more adequate performance have also been proposed. This study compares the performance of a scaled test statistic and robust standard errors for two models under several non-normal conditions and also compares these with the results from ML and ADF methods. Both ML and ADF test statistics performed rather well in one model and considerably worse in the other. In general, the scaled test statistic behaved better than the ML test statistic, and the ADF statistic performed the worst. The robust and ADF standard errors yielded more appropriate estimates of sampling variability than the ML standard errors, which were usually downward biased, in both models under most of the non-normal conditions. ML test statistics and standard errors were found to be quite robust to the violation of the normality assumption when data had either symmetric, platykurtic distributions or non-symmetric distributions with zero kurtosis.
Kuhn, M A; Burch, M; Chinnock, R E; Fenton, M J
2017-10-01
Intravascular ultrasound (IVUS) has been routinely used in some centers to investigate cardiac allograft vasculopathy in pediatric heart transplant recipients. We present an alternative method using more sophisticated imaging software. This study presents a comparison of this method with an established standard method. All patients who had IVUS performed in 2014 were retrospectively evaluated. The standard technique consisted of analysis of 10 operator-selected segments along the vessel. Each study was re-evaluated using a longitudinal technique, taken at every third cardiac cycle, along the entire vessel. Semiautomatic edge detection software was used to detect vessel imaging planes. Measurements included outer and inner diameter, total and luminal area, maximal intimal thickness (MIT), and intimal index. Each IVUS was graded for severity using the Stanford classification. All results were given as mean ± standard deviation (SD). Groups were compared using Student t test. A P value <.05 was considered significant. There were 59 IVUS studies performed on 58 patients. There was no statistically significant difference between outer diameter, inner diameter, or total area. In the longitudinal group, there was a significantly smaller luminal area, higher MIT, and higher intimal index. Using the longitudinal technique, there was an increase in Stanford classification in 20 patients. The longitudinal technique appeared more sensitive in assessing the degree of cardiac allograft vasculopathy and may play a role in the increase in the degree of thickening seen. It may offer an alternative way of grading severity of cardiac allograft vasculopathy in pediatric heart transplant recipients. Copyright © 2017 Elsevier Inc. All rights reserved.
Rajput, Akhil; Ataide, Ida; Lambor, Rajan; Monteiro, Jeanne; Tar, Malika; Wadhawan, Neeraj
2010-01-01
Reattachment of the fractured fragment of a traumatized tooth (whenever available and usable) has become the treatment of choice in cases of uncomplicated crown fractures. Despite the presence of various bonding materials and techniques, laboratory data evaluating the biomechanical aspects of such procedures are largely lacking in the literature. The objective of this in vitro study was to evaluate the fracture strength recovery of incisors following fragment reattachment with three different techniques. A total of 90 extracted human maxillary central incisors were subjected to crown fracture under standard conditions, by applying a compressive force to the buccal aspect of the clinical crown using a universal strength testing machine. The fractured teeth were equally distributed into three groups, defined on the basis of the technique used for reattachment: i) overcontour, ii) internal dentinal groove and iii) direct buildup. Each group was further subdivided into three subgroups on the basis of the intermediate restorative material used for reattachment, namely: i) hybrid composite (Filtek Z100 Universal Restorative), ii) nanocomposite (Filtek Z350) and iii) Ormocer (Voco Admira). Following reattachment, the crowns were re-fractured under standard conditions. The force required for fracture was recorded and expressed as a percentage of the fracture strength of the intact tooth. The data were analyzed using two-way ANOVA and Bonferroni tests for pair-wise comparison. The results showed no statistically significant differences in fracture strength between the three groups (P > 0.05). However, comparison of the subgroups revealed statistically significantly higher strength recovery percentages for the hybrid and the nanocomposite compared with the Ormocer material (P < 0.05). It was concluded that material properties have a significant influence on the success of reattachment procedures.
Woo, Jason R; Shikanov, Sergey; Zorn, Kevin C; Shalhav, Arieh L; Zagaja, Gregory P
2009-12-01
Posterior rhabdosphincter (PR) reconstruction during robot-assisted radical prostatectomy (RARP) was introduced in an attempt to improve postoperative continence. In the present study, we evaluate time to achieve continence in patients who are undergoing RARP with and without PR reconstruction. A prospective RARP database was searched for most recent cases that were accomplished with PR reconstruction (group 1, n = 69) or with standard technique (group 2, n = 63). We performed the analysis applying two definitions of continence: 0 pads per day or 0-1 security pad per day. Patients were evaluated by telephone interview. Statistical analysis was carried out using the Kaplan-Meier method and log-rank test. With PR reconstruction, continence was improved when defined as 0-1 security pad per day (median time of 90 vs 150 days; P = 0.01). This difference did not achieve statistical significance when continence was defined as 0 pads per day (P = 0.12). A statistically significant improvement in continence rate and time to achieve continence is seen in patients who are undergoing PR reconstruction during RARP, with continence defined as 0-1 security/safety pad per day. A larger, prospective and randomized study is needed to better understand the impact of this technique on postoperative continence.
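The Kaplan-Meier and log-rank analysis described can be sketched with the lifelines package; the continence times and event indicators below are fabricated for illustration and are not the study's data.

```python
# Time-to-continence comparison via Kaplan-Meier and log-rank test (fabricated data).
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
# days to continence; event = 1 if continence achieved, 0 if censored at follow-up
t_recon = rng.exponential(100, 69);  e_recon = rng.binomial(1, 0.9, 69)
t_std   = rng.exponential(160, 63);  e_std   = rng.binomial(1, 0.9, 63)

kmf = KaplanMeierFitter()
kmf.fit(t_recon, event_observed=e_recon, label="PR reconstruction")
print("median time, reconstruction:", kmf.median_survival_time_)
kmf.fit(t_std, event_observed=e_std, label="standard")
print("median time, standard:", kmf.median_survival_time_)

result = logrank_test(t_recon, t_std, event_observed_A=e_recon, event_observed_B=e_std)
print("log-rank p-value:", round(result.p_value, 3))
```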
Bayesian Orbit Computation Tools for Objects on Geocentric Orbits
NASA Astrophysics Data System (ADS)
Virtanen, J.; Granvik, M.; Muinonen, K.; Oszkiewicz, D.
2013-08-01
We consider the space-debris orbital inversion problem via the concept of Bayesian inference. The methodology has been put forward for the orbital analysis of solar system small bodies in early 1990's [7] and results in a full solution of the statistical inverse problem given in terms of a posteriori probability density function (PDF) for the orbital parameters. We demonstrate the applicability of our statistical orbital analysis software to Earth orbiting objects, both using well-established Monte Carlo (MC) techniques (for a review, see e.g. [13] as well as recently developed Markov-chain MC (MCMC) techniques (e.g., [9]). In particular, we exploit the novel virtual observation MCMC method [8], which is based on the characterization of the phase-space volume of orbital solutions before the actual MCMC sampling. Our statistical methods and the resulting PDFs immediately enable probabilistic impact predictions to be carried out. Furthermore, this can be readily done also for very sparse data sets and data sets of poor quality - providing that some a priori information on the observational uncertainty is available. For asteroids, impact probabilities with the Earth from the discovery night onwards have been provided, e.g., by [11] and [10], the latter study includes the sampling of the observational-error standard deviation as a random variable.
Ladny, Jerzy R; Smereka, Jacek; Rodríguez-Núñez, Antonio; Leung, Steve; Ruetzler, Kurt; Szarpak, Lukasz
2018-02-01
Pediatric cardiac arrest is a fatal emergent condition that is associated with high mortality and permanent neurological injury, and is a socioeconomic burden at both the individual and national levels. The aim of this study was to test in an infant manikin a new chest compression (CC) technique ("2 thumbs-fist" or nTTT) in comparison with the standard 2-finger (TFT) and 2-thumb-encircling hands techniques (TTEHT). This was a prospective, randomized, crossover manikin study. Sixty-three nurses performed a randomized sequence of 2-minute continuous CC with the 3 techniques. Simulated systolic (SBP), diastolic (DBP), mean arterial pressure (MAP), and pulse pressures (PP, SBP-DBP) in mm Hg were measured. The nTTT resulted in a higher median SBP value (69 [IQR, 63-74] mm Hg) than TTEHT (41.5 [IQR, 39-42] mm Hg), (P < .001) and TFT (26.5 [IQR, 25.5-29] mm Hg), (P < .001). The simulated median value of DBP was 20 (IQR, 19-20) mm Hg with nTTT, 18 (IQR, 17-19) mm Hg with TTEHT and 23.5 (IQR, 22-25.5) mm Hg with TFT. DBP was significantly higher with TFT than with TTEHT (P < .001), as well as with TTEHT than nTTT (P < .001). Median values of simulated MAP were 37 (IQR, 34.5-38) mm Hg with nTTT, 26 (IQR, 25-26) mm Hg with TTEHT and 24.5 (IQR, 23.5-26.5) mm Hg with TFT. A statistically significant difference was noticed between nTTT and TFT (P < .001), nTTT and TTEHT (P < .001), and between TTEHT and TFT (P < .001). Sixty-one subjects (96.8%) preferred the nTTT over the 2 standard methods. The new nTTT technique achieved higher SBP and MAP compared with the standard CC techniques in our infant manikin model. nTTT appears to be a suitable alternative or complement to the TFT and TTEHT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez, Tammy Ann
Technical Area-18 (TA-18), also known as Pajarito Site, is located on Los Alamos National Laboratory property and has historic buildings that will be included in the Manhattan Project National Historic Park. Characterization studies of metal contamination were needed in two of the four buildings that are on the historic registry in this area, a "battleship" bunker building (TA-18-0002) and the Pond cabin (TA-18-0029). However, these two buildings have been exposed to the elements, are decades old, and have porous and rough surfaces (wood and concrete). Due to these conditions, it was questioned whether standard wipe sampling would be adequate to detect surface dust metal contamination in these buildings. Thus, micro-vacuum and surface wet wipe sampling techniques were performed side-by-side at both buildings and results were compared statistically. A two-tail paired t-test revealed that the micro-vacuum and wet wipe techniques were statistically different for both buildings. Further mathematical analysis revealed that the wet wipe technique picked up more metals from the surface than the micro-vacuum technique. Wet wipes revealed concentrations of beryllium and lead above internal housekeeping limits; however, using an yttrium normalization method with linear regression analysis between beryllium and yttrium revealed a correlation indicating that the beryllium levels were likely due to background and not operational contamination. PPE and administrative controls were implemented for National Park Service (NPS) and Department of Energy (DOE) tours as a result of this study. Overall, this study indicates that the micro-vacuum technique may not be an efficient technique to sample for metal dust contamination.
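The two statistical steps described — a two-tailed paired t-test on side-by-side wipe/vacuum results and a beryllium-versus-yttrium linear regression for background normalization — are standard; a sketch with invented concentrations follows.

```python
# Paired t-test and yttrium-normalization regression (invented sample concentrations).
import numpy as np
from scipy import stats

wet_wipe  = np.array([0.8, 1.2, 0.5, 2.1, 1.6, 0.9, 1.4, 1.1])   # ug per sample (assumed)
micro_vac = np.array([0.3, 0.7, 0.2, 1.0, 0.9, 0.4, 0.6, 0.5])

t_stat, p_val = stats.ttest_rel(wet_wipe, micro_vac)              # two-tailed paired t-test
print(f"paired t = {t_stat:.2f}, p = {p_val:.4f}")

beryllium = np.array([0.05, 0.08, 0.04, 0.12, 0.09, 0.06, 0.07, 0.10])
yttrium   = np.array([0.50, 0.85, 0.40, 1.20, 0.95, 0.60, 0.70, 1.05])
reg = stats.linregress(yttrium, beryllium)
print(f"Be vs Y: r = {reg.rvalue:.2f}, slope = {reg.slope:.3f}")   # strong r suggests background
```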
A New Femtosecond Laser-Based Three-Dimensional Tomography Technique
NASA Astrophysics Data System (ADS)
Echlin, McLean P.
2011-12-01
Tomographic imaging has dramatically changed science, most notably in the fields of medicine and biology, by producing 3D views of structures which are too complex to understand in any other way. Current tomographic techniques require extensive time both for post-processing and data collection. Femtosecond laser based tomographic techniques have been developed in both standard atmosphere (femtosecond laser-based serial sectioning technique - FSLSS) and in vacuum (Tri-Beam System) for the fast collection (10^5 μm^3/s) of mm^3-sized 3D datasets. Both techniques use femtosecond laser pulses to selectively remove layer-by-layer areas of material with low collateral damage and a negligible heat affected zone. To the author's knowledge, femtosecond lasers had never before been used for serial sectioning, and these techniques have been entirely and uniquely developed by the author and his collaborators at the University of Michigan and University of California Santa Barbara. The FSLSS was applied to measure the 3D distribution of TiN particles in a 4330 steel. Single pulse ablation morphologies and rates were measured and collected from literature. Simultaneous two-phase ablation of TiN and steel matrix was shown to occur at fluences of 0.9-2 J/cm^2. Laser scanning protocols were developed minimizing surface roughness to 0.1-0.4 μm for laser-based sectioning. The FSLSS technique was used to section and 3D reconstruct TiN-containing 4330 steel. Statistical analysis of 3D TiN particle sizes, distribution parameters, and particle density was performed. A methodology was developed to use the 3D datasets to produce statistical volume elements (SVEs) for toughness modeling. Six FSLSS TiN datasets were sub-sampled into 48 SVEs for statistical analysis and toughness modeling using the Rice-Tracey and Garrison-Moody models. A two-parameter Weibull analysis was performed and variability in the toughness data agreed well with Ruggieri et al. bulk toughness measurements. The Tri-Beam system combines the benefits of laser based material removal (speed, low damage, automated) with detectors that collect chemical, structural, and topological information. Multi-modal sectioning information was collected after many laser scanning passes, demonstrating the capability of the Tri-Beam system.
Acar, Buket; Kamburoğlu, Kıvanç; Tatar, İlkan; Arıkan, Volkan; Çelik, Hakan Hamdi; Yüksel, Selcen; Özen, Tuncer
2015-12-01
This study was performed to compare the accuracy of micro-computed tomography (CT) and cone-beam computed tomography (CBCT) in detecting accessory canals in primary molars. Forty-one extracted human primary first and second molars were embedded in wax blocks and scanned using micro-CT and CBCT. After the images were taken, the samples were processed using a clearing technique and examined under a stereomicroscope in order to establish the gold standard for this study. The specimens were classified into three groups: maxillary molars, mandibular molars with three canals, and mandibular molars with four canals. Differences between the gold standard and the observations made using the imaging methods were calculated using Spearman's rho correlation coefficient test. The presence of accessory canals in micro-CT images of maxillary and mandibular root canals showed a statistically significant correlation with the stereomicroscopic images used as a gold standard. No statistically significant correlation was found between the CBCT findings and the stereomicroscopic images. Although micro-CT is not suitable for clinical use, it provides more detailed information about minor anatomical structures. However, CBCT is convenient for clinical use but may not be capable of adequately analyzing the internal anatomy of primary teeth.
Development and Validation of Instruments to Measure Learning of Expert-Like Thinking
NASA Astrophysics Data System (ADS)
Adams, Wendy K.; Wieman, Carl E.
2011-06-01
This paper describes the process for creating and validating an assessment test that measures the effectiveness of instruction by probing how well that instruction causes students in a class to think like experts about specific areas of science. The design principles and process are laid out and it is shown how these align with professional standards that have been established for educational and psychological testing and the elements of assessment called for in a recent National Research Council study on assessment. The importance of student interviews for creating and validating the test is emphasized, and the appropriate interview procedures are presented. The relevance and use of standard psychometric statistical tests are discussed. Additionally, techniques for effective test administration are presented.
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
NASA Astrophysics Data System (ADS)
Van de Casteele, Elke; Parizel, Paul; Sijbers, Jan
2012-03-01
Adaptive statistical iterative reconstruction (ASiR) is a new reconstruction algorithm used in the field of medical X-ray imaging. This new reconstruction method combines the idealized system representation, as we know it from the standard Filtered Back Projection (FBP) algorithm, and the strength of iterative reconstruction by including a noise model in the reconstruction scheme. It studies how noise propagates through the reconstruction steps, feeds this model back into the loop and iteratively reduces noise in the reconstructed image without affecting spatial resolution. In this paper the effect of ASiR on the contrast to noise ratio is studied using the low contrast module of the Catphan phantom. The experiments were done on a GE LightSpeed VCT system at different voltages and currents. The results show reduced noise and increased contrast for the ASiR reconstructions compared to the standard FBP method. For the same contrast to noise ratio the images from ASiR can be obtained using 60% less current, leading to a reduction in dose of the same amount.
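The contrast-to-noise comparison on the low-contrast module reduces to a simple ROI statistic; a sketch with placeholder pixel values (not measurements from the Catphan study):

```python
# Contrast-to-noise ratio (CNR) from insert and background ROIs (placeholder values).
import numpy as np

rng = np.random.default_rng(0)
roi_insert     = rng.normal(55.0, 6.0, 500)   # pixel values inside a low-contrast insert
roi_background = rng.normal(50.0, 6.0, 500)   # pixel values in adjacent background

cnr = abs(roi_insert.mean() - roi_background.mean()) / roi_background.std()
print(f"CNR = {cnr:.2f}")
```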
Wang, Hongmei; Feng, Qing; Li, Ning; Xu, Sheng
2016-12-01
Limited information is available regarding the metal-ceramic bond strength of dental Co-Cr alloys fabricated by casting (CAST), computer numerical control (CNC) milling, and selective laser melting (SLM). The purpose of this in vitro study was to evaluate the metal-ceramic bond characteristics of 3 dental Co-Cr alloys fabricated by casting, computer numerical control milling, and selective laser melting techniques using the 3-point bend test (International Organization for Standardization [ISO] standard 9693). Forty-five specimens (25×3×0.5 mm) made of dental Co-Cr alloys were prepared by CAST, CNC milling, and SLM techniques. The morphology of the oxidation surface of metal specimens was evaluated by scanning electron microscopy (SEM). After porcelain application, the interfacial characterization was evaluated by SEM equipped with energy-dispersive spectrometry (EDS) analysis, and the metal-ceramic bond strength was assessed with the 3-point bend test. Failure type and elemental composition on the debonding interface were assessed by SEM/EDS. The bond strength was statistically analyzed by 1-way ANOVA and Tukey honest significant difference test (α=.05). The oxidation surfaces of the CAST, CNC, and SLM groups were different. They were porous in the CAST group but compact and irregular in the CNC and SLM groups. The metal-ceramic interfaces of the SLM and CNC groups showed excellent combination compared with those of the CAST group. The bond strength was 37.7 ±6.5 MPa for CAST, 43.3 ±9.2 MPa for CNC, and 46.8 ±5.1 MPa for the SLM group. Statistically significant differences were found among the 3 groups tested (P=.028). The debonding surfaces of all specimens exhibited cohesive failure mode. The oxidation surface morphologies and thicknesses of dental Co-Cr alloys are dependent on the different fabrication techniques used. The bond strength of all 3 groups exceed the minimum acceptable value of 25 MPa recommended by ISO 9693; hence, dental Co-Cr alloy fabricated with the SLM techniques could be a promising alternative for metal ceramic restorations. Copyright © 2016 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Defining ischemic burden after traumatic brain injury using 15O PET imaging of cerebral physiology.
Coles, Jonathan P; Fryer, Tim D; Smielewski, Peter; Rice, Kenneth; Clark, John C; Pickard, John D; Menon, David K
2004-02-01
Whereas postmortem ischemic damage is common in head injury, antemortem demonstration of ischemia has proven to be elusive. Although 15O positron emission tomography may be useful in this area, the technique has traditionally analyzed data within regions of interest (ROIs) to improve statistical accuracy. In head injury, such techniques are limited because of the lack of a priori knowledge regarding the location of ischemia, coexistence of hyperaemia, and difficulty in defining ischemic cerebral blood flow (CBF) and cerebral oxygen metabolism (CMRO2) levels. We report a novel method for defining disease pathophysiology following head injury. Voxel-based approaches are used to define the distribution of oxygen extraction fraction (OEF) across the entire brain; the standard deviation of this distribution provides a measure of the variability of OEF. These data are also used to integrate voxels above a threshold OEF value to produce an ROI based upon coherent physiology rather than spatial contiguity (the ischemic brain volume; IBV). However, such approaches may suffer from poor statistical accuracy, particularly in regions with low blood flow. The magnitude of these errors has been assessed in modeling experiments using the Hoffman brain phantom and modified control datasets. We conclude that this technique is a valid and useful tool for quantifying ischemic burden after traumatic brain injury.
Detailed gravity anomalies from GEOS-3 satellite altimetry data
NASA Technical Reports Server (NTRS)
Gopalapillai, G. S.; Mourad, A. G.
1978-01-01
A technique for deriving mean gravity anomalies from dense altimetry data was developed, using a combination of deterministic and statistical techniques. The basic mathematical model was based on the Stokes equation, which describes the analytical relationship between mean gravity anomalies and the geoid undulation at a point; this undulation is a linear function of the altimetry data at that point. The overdetermined problem resulting from the abundance of available altimetry data was solved using least-squares principles. These principles enable the simultaneous estimation of the associated standard deviations, reflecting the internal consistency based on the accuracy estimates provided for the altimetry data as well as for the terrestrial anomaly data. Several test computations were made of the anomalies and their accuracy estimates using GEOS-3 data.
A BAYESIAN APPROACH TO DERIVING AGES OF INDIVIDUAL FIELD WHITE DWARFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Malley, Erin M.; Von Hippel, Ted; Van Dyk, David A., E-mail: ted.vonhippel@erau.edu, E-mail: dvandyke@imperial.ac.uk
2013-09-20
We apply a self-consistent and robust Bayesian statistical approach to determine the ages, distances, and zero-age main sequence (ZAMS) masses of 28 field DA white dwarfs (WDs) with ages of approximately 4-8 Gyr. Our technique requires only quality optical and near-infrared photometry to derive ages with <15% uncertainties, generally with little sensitivity to our choice of modern initial-final mass relation. We find that age, distance, and ZAMS mass are correlated in a manner that is too complex to be captured by traditional error propagation techniques. We further find that the posterior distributions of age are often asymmetric, indicating that the standard approach to deriving WD ages can yield misleading results.
Statistical analysis of texture in trunk images for biometric identification of tree species.
Bressane, Adriano; Roveda, José A F; Martins, Antônio C G
2015-04-01
The identification of tree species is a key step for sustainable management plans of forest resources, as well as for several other applications that are based on such surveys. However, currently available techniques depend on the presence of tree structures, such as flowers, fruits, and leaves, limiting the identification process to certain periods of the year. Therefore, this article introduces a study on the application of statistical parameters for texture classification of tree trunk images. For that, 540 samples from five Brazilian native deciduous species were acquired, and measures of entropy, uniformity, smoothness, asymmetry (third moment), mean, and standard deviation were obtained from the presented textures. Using a decision tree, a biometric species identification system was constructed, resulting in an average precision of 0.84 for species classification, with 0.83 accuracy and 0.79 agreement. Thus, the use of texture present in trunk images can represent an important advance in tree identification, since the limitations of the current techniques can be overcome.
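The six descriptors listed are standard histogram-based texture statistics; a minimal sketch computing them for a grayscale patch (the patch is a random placeholder, and the normalization conventions are common textbook choices rather than the article's exact definitions):

```python
# Histogram-based texture statistics for a grayscale trunk-image patch.
import numpy as np

def texture_stats(patch, levels=256):
    hist, _ = np.histogram(patch, bins=levels, range=(0, levels), density=True)
    z = np.arange(levels)
    mean = np.sum(z * hist)
    var = np.sum((z - mean) ** 2 * hist)
    return {
        "mean": mean,
        "std": np.sqrt(var),
        "smoothness": 1.0 - 1.0 / (1.0 + var / (levels - 1) ** 2),          # normalized variance
        "third_moment": np.sum((z - mean) ** 3 * hist) / (levels - 1) ** 2,  # asymmetry
        "uniformity": np.sum(hist ** 2),
        "entropy": -np.sum(hist[hist > 0] * np.log2(hist[hist > 0])),
    }

patch = np.random.default_rng(0).integers(0, 256, (64, 64))   # placeholder bark texture
print(texture_stats(patch))
```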
Demonstration of Wavelet Techniques in the Spectral Analysis of Bypass Transition Data
NASA Technical Reports Server (NTRS)
Lewalle, Jacques; Ashpis, David E.; Sohn, Ki-Hyeon
1997-01-01
A number of wavelet-based techniques for the analysis of experimental data are developed and illustrated. A multiscale analysis based on the Mexican hat wavelet is demonstrated as a tool for acquiring physical and quantitative information not obtainable by standard signal analysis methods. Experimental data for the analysis came from simultaneous hot-wire velocity traces in a bypass transition of the boundary layer on a heated flat plate. A pair of traces (two components of velocity) at one location was excerpted. A number of ensemble and conditional statistics related to dominant time scales for energy and momentum transport were calculated. The analysis revealed a lack of energy-dominant time scales inside turbulent spots but identified transport-dominant scales inside spots that account for the largest part of the Reynolds stress. Momentum transport was much more intermittent than were energetic fluctuations. This work is the first step in a continuing study of the spatial evolution of these scale-related statistics, the goal being to apply the multiscale analysis results to improve the modeling of transitional and turbulent industrial flows.
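A hedged sketch of the core operation — convolving a trace with Mexican hat wavelets of several widths to obtain a scale-by-scale energy map. The test signal is synthetic, not the hot-wire velocity data, and the scale set is arbitrary.

```python
# Multiscale Mexican hat (Ricker) wavelet analysis of a 1D trace (synthetic signal).
import numpy as np

def mexican_hat(t, s):
    x = t / s
    return (2.0 / (np.sqrt(3.0 * s) * np.pi ** 0.25)) * (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt_mexhat(signal, scales):
    n = len(signal)
    t = np.arange(-n // 2, n // 2)
    coeffs = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        coeffs[i] = np.convolve(signal, mexican_hat(t, s), mode="same")
    return coeffs

rng = np.random.default_rng(0)
time = np.linspace(0, 1, 2048)
trace = np.sin(2 * np.pi * 40 * time) + 0.5 * rng.normal(size=time.size)  # stand-in trace

scales = np.array([2, 4, 8, 16, 32, 64])
coeffs = cwt_mexhat(trace, scales)
energy = (coeffs ** 2).mean(axis=1)                 # energy per scale
print("dominant scale index:", int(np.argmax(energy)), "energies:", np.round(energy, 3))
```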
NASA Technical Reports Server (NTRS)
Pai, Shantaram S.; Gyekenyesi, John P.
1988-01-01
The calculation of shape and scale parameters of the two-parameter Weibull distribution is described using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. Detailed procedures are given for evaluating 90 percent confidence intervals for maximum likelihood estimates of shape and scale parameters, the unbiased estimates of the shape parameters, and the Weibull mean values and corresponding standard deviations. Furthermore, the necessary steps are described for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull distribution. It is also shown how to calculate the Batdorf flaw-density constants by using the Weibull distribution statistical parameters. The techniques described were verified with several example problems from the open literature and were coded in the Structural Ceramics Analysis and Reliability Evaluation (SCARE) design program.
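The maximum likelihood step for the two-parameter Weibull distribution, together with a Kolmogorov-Smirnov goodness-of-fit check, is available in SciPy; a sketch with simulated fracture strengths (the true shape and scale values are assumptions, and the location parameter is fixed at zero for the two-parameter form):

```python
# Two-parameter Weibull maximum likelihood fit to simulated fracture strengths.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_shape, true_scale = 10.0, 400.0                     # assumed Weibull modulus and scale (MPa)
strengths = true_scale * rng.weibull(true_shape, size=30)

shape, loc, scale = stats.weibull_min.fit(strengths, floc=0)   # two-parameter fit: loc fixed at 0
ks_stat, ks_p = stats.kstest(strengths, "weibull_min", args=(shape, loc, scale))
print(f"shape (Weibull modulus) = {shape:.2f}, scale = {scale:.1f} MPa")
print(f"Kolmogorov-Smirnov statistic = {ks_stat:.3f}, p = {ks_p:.3f}")
```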
Guelzow, A; Stamm, O; Martus, P; Kielbassa, A M
2005-10-01
To compare ex vivo various parameters of root canal preparation using a manual technique and six different rotary nickel-titanium (Ni-Ti) instruments (FlexMaster, System GT, HERO 642, K3, ProTaper, and RaCe). A total of 147 extracted mandibular molars were divided into seven groups (n = 21) with equal mean mesio-buccal root canal curvatures (up to 70 degrees), and embedded in a muffle system. All root canals were prepared to size 30 using a crown-down preparation technique for the rotary nickel-titanium instruments and a standardized preparation (using reamers and Hedström files) for the manual technique. Length modifications and straightening were determined by standardized radiography and a computer-aided difference measurement for every instrument system. Post-operative cross-sections were evaluated by light-microscopic investigation and photographic documentation. Procedural errors, working time and time for instrumentation were recorded. The data were analysed statistically using the Kruskal-Wallis test and the Mann-Whitney U-test. No significant differences were detected between the rotary Ni-Ti instruments for alteration of working length. All Ni-Ti systems maintained the original curvature well, with minor mean degrees of straightening ranging from 0.45 degrees (System GT) to 1.17 degrees (ProTaper). ProTaper had the lowest number of irregular post-operative root canal diameters; the results were comparable between the other systems. Instrument fractures occurred with ProTaper in three root canals, whilst preparation with System GT, HERO 642, K3 and the manual technique resulted in one fracture each. Ni-Ti instruments prepared canals more rapidly than the manual technique. The shortest time for instrumentation was achieved with System GT (11.7 s). Under the conditions of this ex vivo study, all Ni-Ti systems maintained the canal curvature, were associated with few instrument fractures and were more rapid than a standardized manual technique. ProTaper instruments created more regular canal diameters.
Ene-Obong, Henrietta Nkechi; Onuoha, Nne Ola; Eme, Paul Eze
2017-11-01
This study examined gender roles, family relationships, food security, and nutritional status of households in Ohafia: a matrilineal society in Nigeria. A cross-sectional descriptive study was conducted. Multistage sampling technique was used to select 287 households from three villages: Akanu, Amangwu, and Elu. Qualitative and quantitative data collection methods were adopted, namely, focus group discussions and questionnaires. Anthropometric measurements (height and weight for mothers and children and Mid-Upper Arm Circumference for young children) were taken using standard techniques. The body mass index of women was calculated. All nutritional indices were compared with reference standards. Food insecurity was assessed using the Household Hunger Scale and Dietary Diversity Score, then analysed using the Statistical Product for Service Solution version 21. Data analysis used descriptive statistics. Most (91.2%) of the respondents were female. The matrilineal system known as ikwu nne or iri ala a nne (inheritance through mothers' lineage) is still in place but is changing. One important benefit of the system is the access to land by women. Whereas women participated actively in agriculture, food preparation, and care of family, the men were moving to off-farm activities. High prevalence of household food insecurity (66%) and signs of malnutrition including moderate to severe stunting (48.4%) and wasting (31.7%) in children, household hunger (34.5%), and overweight (27.5%) and obesity (19.2%) among mothers were observed. These communities urgently need gender sensitive food and nutrition interventions. © 2018 John Wiley & Sons Ltd.
Huang, Erich P; Wang, Xiao-Feng; Choudhury, Kingshuk Roy; McShane, Lisa M; Gönen, Mithat; Ye, Jingjing; Buckler, Andrew J; Kinahan, Paul E; Reeves, Anthony P; Jackson, Edward F; Guimaraes, Alexander R; Zahlmann, Gudrun
2015-02-01
Medical imaging serves many roles in patient care and the drug approval process, including assessing treatment response and guiding treatment decisions. These roles often involve a quantitative imaging biomarker, an objectively measured characteristic of the underlying anatomic structure or biochemical process derived from medical images. Before a quantitative imaging biomarker is accepted for use in such roles, the imaging procedure to acquire it must undergo evaluation of its technical performance, which entails assessment of performance metrics such as repeatability and reproducibility of the quantitative imaging biomarker. Ideally, this evaluation will involve quantitative summaries of results from multiple studies to overcome limitations due to the typically small sample sizes of technical performance studies and/or to include a broader range of clinical settings and patient populations. This paper is a review of meta-analysis procedures for such an evaluation, including identification of suitable studies, statistical methodology to evaluate and summarize the performance metrics, and complete and transparent reporting of the results. This review addresses challenges typical of meta-analyses of technical performance, particularly small study sizes, which often causes violations of assumptions underlying standard meta-analysis techniques. Alternative approaches to address these difficulties are also presented; simulation studies indicate that they outperform standard techniques when some studies are small. The meta-analysis procedures presented are also applied to actual [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) test-retest repeatability data for illustrative purposes. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
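For readers who want a concrete starting point, the following is a minimal sketch of a standard random-effects pooling step (the DerSimonian-Laird estimator) applied to per-study repeatability estimates. It is a generic textbook procedure, not necessarily one of the alternative approaches evaluated in the paper, and the study estimates and variances below are invented for illustration.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Random-effects pooling of per-study estimates y with within-study variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)              # fixed-effect pooled estimate
    Q = np.sum(w * (y - y_fe) ** 2)               # Cochran's Q heterogeneity statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)            # between-study variance (DL estimator)
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Hypothetical per-study test-retest repeatability estimates (e.g., wCV in %) and their variances
y = [10.2, 12.5, 9.8, 14.1]
v = [1.1, 2.3, 0.8, 3.0]
pooled, se, tau2 = dersimonian_laird(y, v)
print(f"pooled = {pooled:.2f} +/- {se:.2f}, tau^2 = {tau2:.2f}")
```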
Herek, Duygu; Karabulut, Nevzat; Kocyıgıt, Ali; Yagcı, Ahmet Baki
2016-01-01
Our aim was to compare the apparent diffusion coefficient (ADC) values of normal abdominal parenchymal organs and signal-to-noise ratio (SNR) measurements in the same patients with breath hold (BH) and free breathing (FB) diffusion weighted imaging (DWI). Forty-eight patients underwent both BH and FB DWI. A spherical region of interest (ROI) was placed on the right hepatic lobe, spleen, pancreas, and renal cortices. ADC values were calculated for each organ on each sequence using automated software. Image noise, defined as the standard deviation (SD) of the signal intensities in the most artifact-free area of the image background, was measured by placing the largest possible ROI on either the left or the right side of the body outside the object in the recorded field of view. SNR was calculated using the formula SNR = SI(organ)/SD(noise), where SI is the signal intensity of the organ and SD is the standard deviation of the background noise. There were no statistically significant differences in ADC values of the abdominal organs between BH and FB DWI sequences (p > 0.05). There were statistically significant differences between SNR values of organs on BH and FB DWIs; SNRs were better on FB DWI than on BH DWI (p < 0.001). The free breathing DWI technique reduces image noise and increases SNR for abdominal examinations. Free breathing is therefore preferable to BH DWI in the evaluation of abdominal organs by DWI.
The Next-Generation PCR-Based Quantification Method for Ambient Waters: Digital PCR.
Cao, Yiping; Griffith, John F; Weisberg, Stephen B
2016-01-01
Real-time quantitative PCR (qPCR) is increasingly being used for ambient water monitoring, but development of digital polymerase chain reaction (digital PCR) has the potential to further advance the use of molecular techniques in such applications. Digital PCR refines qPCR by partitioning the sample into thousands to millions of miniature reactions that are examined individually for binary endpoint results, with DNA density calculated from the fraction of positives using Poisson statistics. This direct quantification removes the need for standard curves, eliminating the labor and materials associated with creating and running standards with each batch, and removing biases associated with standard variability and mismatching amplification efficiency between standards and samples. Confining reactions and binary endpoint measurements to small partitions also leads to other performance advantages, including reduced susceptibility to inhibition, increased repeatability and reproducibility, and increased capacity to measure multiple targets in one analysis. As such, digital PCR is well suited for ambient water monitoring applications and is particularly advantageous as molecular methods move toward autonomous field application.
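As a rough illustration of the Poisson-based quantification described above, the sketch below converts the fraction of positive partitions into a concentration estimate. The partition count and partition volume are hypothetical values chosen for illustration and are not tied to any particular instrument.

```python
import math

def dpcr_concentration(n_positive, n_total, partition_volume_ul):
    """Estimate target concentration (copies/uL) from digital PCR endpoint counts.

    Assumes copies are Poisson-distributed across partitions, so the mean number of
    copies per partition is lambda = -ln(1 - p), where p is the positive fraction.
    """
    p = n_positive / n_total
    if p >= 1.0:
        raise ValueError("All partitions positive; concentration above quantifiable range")
    lam = -math.log(1.0 - p)            # mean copies per partition
    return lam / partition_volume_ul    # copies per microliter of reaction

# Example: 6,000 positives out of 20,000 partitions of 0.00085 uL each (illustrative numbers)
print(round(dpcr_concentration(6000, 20000, 0.00085), 1), "copies/uL")
```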
Study on the criteria for assessing skull-face correspondence in craniofacial superimposition.
Ibáñez, Oscar; Valsecchi, Andrea; Cavalli, Fabio; Huete, María Isabel; Campomanes-Alvarez, Blanca Rosario; Campomanes-Alvarez, Carmen; Vicente, Ricardo; Navega, David; Ross, Ann; Wilkinson, Caroline; Jankauskas, Rimantas; Imaizumi, Kazuhiko; Hardiman, Rita; Jayaprakash, Paul Thomas; Ruiz, Elena; Molinero, Francisco; Lestón, Patricio; Veselovskaya, Elizaveta; Abramov, Alexey; Steyn, Maryna; Cardoso, Joao; Humpire, Daniel; Lusnig, Luca; Gibelli, Daniele; Mazzarelli, Debora; Gaudio, Daniel; Collini, Federica; Damas, Sergio
2016-11-01
Craniofacial superimposition has the potential to be used as an identification method when other traditional biological techniques are not applicable due to insufficient quality or absence of ante-mortem and post-mortem data. Despite having been used in many countries as a method of inclusion and exclusion for over a century it lacks standards. Thus, the purpose of this research is to provide forensic practitioners with standard criteria for analysing skull-face relationships. Thirty-seven experts from 16 different institutions participated in this study, which consisted of evaluating 65 criteria for assessing skull-face anatomical consistency on a sample of 24 different skull-face superimpositions. An unbiased statistical analysis established the most objective and discriminative criteria. Results did not show strong associations, however, important insights to address lack of standards were provided. In addition, a novel methodology for understanding and standardizing identification methods based on the observation of morphological patterns has been proposed. Crown Copyright © 2016. Published by Elsevier Ireland Ltd. All rights reserved.
Noninvasive fetal QRS detection using an echo state network and dynamic programming.
Lukoševičius, Mantas; Marozas, Vaidotas
2014-08-01
We address a classical fetal QRS detection problem from abdominal ECG recordings with a data-driven statistical machine learning approach. Our goal is to have a powerful, yet conceptually clean, solution. There are two novel key components at the heart of our approach: an echo state recurrent neural network that is trained to indicate fetal QRS complexes, and several increasingly sophisticated versions of statistics-based dynamic programming algorithms, which are derived from and rooted in probability theory. We also employ a standard technique for preprocessing and removing maternal ECG complexes from the signals, but do not take this as the main focus of this work. The proposed approach is quite generic and can be extended to other types of signals and annotations. Open-source code is provided.
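The following is a minimal, self-contained sketch of the echo state network idea: a fixed random reservoir driven by the input, with only a ridge-regression read-out trained to indicate events. It is not the authors' trained detector, and the one-channel signal and impulse-like "QRS" labels are synthetic stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_in, n_res, spectral_radius=0.9, density=0.1):
    """Random input weights and a sparse random recurrent matrix scaled to a target spectral radius."""
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
    W = rng.uniform(-0.5, 0.5, (n_res, n_res)) * (rng.random((n_res, n_res)) < density)
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    return W_in, W

def run_reservoir(u, W_in, W):
    """Drive the reservoir with input sequence u (T x n_in) and collect states (T x n_res)."""
    x = np.zeros(W.shape[0])
    states = []
    for u_t in u:
        x = np.tanh(W_in @ u_t + W @ x)
        states.append(x)
    return np.array(states)

# Synthetic one-channel signal with impulse-like "QRS" events as the training target
T = 2000
u = rng.normal(0.0, 0.1, (T, 1))
y = np.zeros(T)
for k in range(100, T, 200):
    u[k, 0] += 1.0
    y[k] = 1.0

W_in, W = make_reservoir(n_in=1, n_res=200)
X = run_reservoir(u, W_in, W)

# Ridge-regression read-out: the only trained part of an echo state network
ridge = 1e-2
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
indicator = X @ W_out   # continuous indication signal; its peaks would feed the dynamic programming stage
print(np.round(indicator[95:106], 2))
```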
NASA Technical Reports Server (NTRS)
Houston, A. G.; Feiveson, A. H.; Chhikara, R. S.; Hsu, E. M. (Principal Investigator)
1979-01-01
A statistical methodology was developed to check the accuracy of the products of the experimental operations throughout crop growth and to determine whether the procedures are adequate to accomplish the desired accuracy and reliability goals. It has allowed the identification and isolation of key problems in wheat area yield estimation, some of which have been corrected and some of which remain to be resolved. The major unresolved problem in accuracy assessment is that of precisely estimating the bias of the LACIE production estimator. Topics covered include: (1) evaluation techniques; (2) variance and bias estimation for the wheat production estimate; (3) the 90/90 evaluation; (4) comparison of the LACIE estimate with reference standards; and (5) first and second order error source investigations.
Sampling methods to the statistical control of the production of blood components.
Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo
2017-12-01
The control of blood components specifications is a requirement generalized in Europe by the European Commission Directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The control reliability is dependent of the sampling. However, a correct sampling methodology seems not to be systematically applied. Commonly, the sampling is intended to comply uniquely with the 1% specification to the produced blood components. Nevertheless, on a purely statistical viewpoint, this model could be argued not to be related to a consistent sampling technique. This could be a severe limitation to detect abnormal patterns and to assure that the production has a non-significant probability of producing nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments contributing to the robustness of sampling and related statistical process control decisions for the purpose they are suggested for. Copyright © 2017 Elsevier Ltd. All rights reserved.
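One of the sampling models discussed, sampling based on the proportion of a finite population, can be sketched as a standard sample-size calculation with a finite-population correction. The lot size, expected nonconformity rate and margin below are illustrative assumptions, not values from the article.

```python
import math

def sample_size_finite_proportion(N, p=0.01, margin=0.01, z=1.96):
    """Sample size to estimate a proportion p within +/- margin at ~95% confidence,
    with a finite-population correction for a production lot of N units."""
    n0 = z ** 2 * p * (1 - p) / margin ** 2   # infinite-population sample size
    n = n0 / (1 + (n0 - 1) / N)               # finite-population correction
    return math.ceil(n)

# Example: monthly production of 5,000 red cell concentrates, expected 1% nonconforming
print(sample_size_finite_proportion(N=5000, p=0.01, margin=0.01))
```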
CMB EB and TB cross-spectrum estimation via pseudospectrum techniques
NASA Astrophysics Data System (ADS)
Grain, J.; Tristram, M.; Stompor, R.
2012-10-01
We discuss methods for estimating the EB and TB spectra of cosmic microwave background anisotropy maps covering a limited sky area. Such odd-parity correlations are expected to vanish whenever parity is not broken. As this is indeed the case in the standard cosmologies, any evidence to the contrary would have a profound impact on our theories of the early Universe. Such correlations could also become a sensitive diagnostic of some particularly insidious instrumental systematics. In this work we introduce three different unbiased estimators based on the so-called standard and pure pseudo-spectrum techniques and later assess their performance by means of extensive Monte Carlo simulations performed for different experimental configurations. We find that a hybrid approach, combining a pure estimate of B-mode multipoles with a standard one for E-mode (or T) multipoles, leads to the smallest error bars for the EB (or TB, respectively) spectrum as well as for the three other polarization-related angular power spectra (i.e., EE, BB, and TE). However, if both E and B multipoles are estimated using the pure technique, the loss of precision for the EB spectrum is not larger than ~30%. Moreover, for the experimental configurations considered here, the statistical uncertainties of the pseudo-spectrum estimates, due to sampling variance and instrumental noise, are at most a factor of ~1.4 for the TT, EE, and TE spectra, and a factor of ~2 for the BB, TB, and EB spectra, higher than the most optimistic Fisher estimate of the variance.
Wendl, Christina M; Eiglsperger, Johannes; Dendl, Lena-Marie; Brodoefel, Harald; Schebesch, Karl-Michael; Stroszczynski, Christian; Fellner, Claudia
2018-05-01
The aim of our study was to systematically compare two-point Dixon fat suppression (FS) and spectral FS techniques in contrast enhanced imaging of the head and neck region. Three independent readers analysed coronal T 1 weighted images recorded after contrast medium injection with Dixon and spectral FS techniques with regard to FS homogeneity, motion artefacts, lesion contrast, image sharpness and overall image quality. 85 patients were prospectively enrolled in the study. Images generated with Dixon-FS technique were of higher overall image quality and had a more homogenous FS over the whole field of view compared with the standard spectral fat-suppressed images (p < 0.001). Concerning motion artefacts, flow artefacts, lesion contrast and image sharpness no statistically significant difference was observed. The Dixon-FS technique is superior to the spectral technique due to improved homogeneity of FS and overall image quality while maintaining lesion contrast. Advances in knowledge: T 1 with Dixon FS technique offers, compared to spectral FS, significantly improved FS homogeneity and over all image quality in imaging of the head and neck region.
Parameter Estimation in Astronomy with Poisson-Distributed Data. I. The χ²_γ Statistic
NASA Technical Reports Server (NTRS)
Mighell, Kenneth J.
1999-01-01
Applying the standard weighted mean formula, $[\sum_i n_i \sigma_i^{-2}] / [\sum_i \sigma_i^{-2}]$, to determine the weighted mean of data $n_i$ drawn from a Poisson distribution will, on average, underestimate the true mean by approximately 1 for all true mean values larger than approximately 3 when the common assumption is made that the error of the i-th observation is $\sigma_i = \max(\sqrt{n_i}, 1)$. This small but statistically significant offset explains the long-known observation that chi-square minimization techniques which use the modified Neyman's $\chi^2$ statistic, $\chi^2_N \equiv \sum_i (n_i - y_i)^2 / \max(n_i, 1)$, to compare Poisson-distributed data with model values $y_i$ will typically predict a total number of counts that underestimates the true total by about 1 count per bin. Based on my finding that the weighted mean of data drawn from a Poisson distribution can be determined using the formula $[\sum_i (n_i + \min(n_i, 1))(n_i + 1)^{-1}] / [\sum_i (n_i + 1)^{-1}]$, I propose that a new $\chi^2$ statistic, $\chi^2_\gamma \equiv \sum_i [n_i + \min(n_i, 1) - y_i]^2 / (n_i + 1)$, should always be used to analyze Poisson-distributed data in preference to the modified Neyman's $\chi^2$ statistic. I demonstrate the power and usefulness of $\chi^2_\gamma$ minimization by using two statistical fitting techniques and five $\chi^2$ statistics to analyze simulated X-ray power-law 15-channel spectra with large and small counts per bin. I show that $\chi^2_\gamma$ minimization with the Levenberg-Marquardt or Powell's method can produce excellent results (mean slope errors of approximately 3% or less) with spectra having as few as 25 total counts.
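A small numerical sketch of the point made above, under the assumption that $\chi^2_\gamma$ takes the form reconstructed in the abstract: fitting a constant mean to simulated Poisson counts with the modified Neyman statistic versus the gamma statistic. The grid search, bin counts and number of trials are arbitrary illustration choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def chi2_neyman(n, y):
    """Modified Neyman chi-square: sum_i (n_i - y_i)^2 / max(n_i, 1)."""
    return np.sum((n - y) ** 2 / np.maximum(n, 1))

def chi2_gamma(n, y):
    """Chi-square-gamma as reconstructed above: sum_i (n_i + min(n_i, 1) - y_i)^2 / (n_i + 1)."""
    return np.sum((n + np.minimum(n, 1) - y) ** 2 / (n + 1))

# Fit a constant mean mu to simulated Poisson spectra by scanning mu and minimizing each statistic
true_mu, nbins, ntrials = 5.0, 15, 300
mu_grid = np.linspace(2.0, 8.0, 241)
best_neyman, best_gamma = [], []
for _ in range(ntrials):
    n = rng.poisson(true_mu, nbins)
    best_neyman.append(mu_grid[np.argmin([chi2_neyman(n, m) for m in mu_grid])])
    best_gamma.append(mu_grid[np.argmin([chi2_gamma(n, m) for m in mu_grid])])

print("mean fitted mu, Neyman chi2:", round(float(np.mean(best_neyman)), 2))  # tends to fall below 5
print("mean fitted mu, chi2-gamma :", round(float(np.mean(best_gamma)), 2))   # much closer to 5
```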
Singla, Sanjeev; Mittal, Geeta; Raghav; Mittal, Rajinder K
2014-02-01
Abdominal pain and shoulder tip pain after laparoscopic cholecystectomy are distressing for the patient. Causes of this pain include peritoneal stretching and diaphragmatic irritation by the high intra-abdominal pressure created by the pneumoperitoneum. We designed a study to compare post-operative pain after laparoscopic cholecystectomy performed with a low-pressure (7-8 mm Hg) and a standard-pressure (12-14 mm Hg) technique. Aim: to compare the effect of low-pressure and standard-pressure pneumoperitoneum on post-laparoscopic cholecystectomy pain, and further to study the safety of low-pressure pneumoperitoneum in laparoscopic cholecystectomy. A prospective randomised double-blind study was done in 100 ASA grade I and II patients, divided into two groups of 50 each. Group A patients underwent laparoscopic cholecystectomy with low-pressure pneumoperitoneum (7-8 mm Hg), while group B underwent laparoscopic cholecystectomy with standard-pressure pneumoperitoneum (12-13 mm Hg). Both groups were compared for pain intensity, analgesic requirement and complications. Demographic data and intraoperative complications were analysed using the chi-square test. Frequency of pain, intensity of pain and analgesic consumption were compared by applying the ANOVA test. The post-operative pain score was significantly lower in the low-pressure group than in the standard-pressure group. The number of patients requiring rescue analgesic doses was higher in the standard-pressure group, and this difference was statistically significant. Total analgesic consumption was also higher in the standard-pressure group. There was no difference in intraoperative complications. This study demonstrates that the simple expedient of reducing the pneumoperitoneum pressure to 8 mm Hg results in a reduction in both the intensity and frequency of post-operative pain, and hence earlier recovery and a better outcome. It also shows that the low-pressure technique is safe, with a comparable rate of intraoperative complications.
Study Methods to Standardize Thermography NDE
NASA Technical Reports Server (NTRS)
Walker, James L.; Workman, Gary L.
1998-01-01
The purpose of this work is to develop thermographic inspection methods and standards for use in evaluating structural composites and aerospace hardware. Qualification techniques and calibration methods are investigated to standardize the thermographic method for use in the field. Along with the inspections of test standards structural hardware, support hardware is designed and fabricated to aid in the thermographic process. Also, a standard operating procedure is developed for performing inspections with the Bales Thermal Image Processor (TIP). Inspections are performed on a broad range of structural composites. These materials include graphite/epoxies, graphite/cyanide-ester, graphite/silicon-carbide, graphite phenolic and Kevlar/epoxy. Also metal honeycomb (titanium and aluminum faceplates over an aluminum honeycomb core) structures are investigated. Various structural shapes are investigated and the thickness of the structures vary from as few as 3 plies to as many as 80 plies. Special emphasis is placed on characterizing defects in attachment holes and bondlines, in addition to those resulting from impact damage and the inclusion of foreign matter. Image processing through statistical analysis and digital filtering is investigated to enhance the quality and quantify the NDE thermal images when necessary.
Powerful Statistical Inference for Nested Data Using Sufficient Summary Statistics
Dowding, Irene; Haufe, Stefan
2018-01-01
Hierarchically-organized data arise naturally in many psychology and neuroscience studies. As the standard assumption of independent and identically distributed samples does not hold for such data, two important problems are to accurately estimate group-level effect sizes, and to obtain powerful statistical tests against group-level null hypotheses. A common approach is to summarize subject-level data by a single quantity per subject, which is often the mean or the difference between class means, and treat these as samples in a group-level t-test. This “naive” approach is, however, suboptimal in terms of statistical power, as it ignores information about the intra-subject variance. To address this issue, we review several approaches to deal with nested data, with a focus on methods that are easy to implement. With what we call the sufficient-summary-statistic approach, we highlight a computationally efficient technique that can improve statistical power by taking into account within-subject variances, and we provide step-by-step instructions on how to apply this approach to a number of frequently-used measures of effect size. The properties of the reviewed approaches and the potential benefits over a group-level t-test are quantitatively assessed on simulated data and demonstrated on EEG data from a simulated-driving experiment. PMID:29615885
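A hedged sketch of the general idea, assuming a precision-weighted combination of per-subject mean differences (weights equal to the inverse of each subject's estimated variance of its mean). This illustrates why weighting by within-subject variance can help; it is not the exact estimator or the effect-size corrections described in the paper, and a full treatment would also model between-subject variance.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulate nested data: per-subject trial-level differences with unequal trial counts and noise
n_subjects = 20
subject_effects = rng.normal(0.3, 0.2, n_subjects)   # true per-subject effects
trials = rng.integers(20, 200, n_subjects)            # very different numbers of trials
noise_sd = rng.uniform(0.5, 2.0, n_subjects)

means, var_of_means = [], []
for mu, n, sd in zip(subject_effects, trials, noise_sd):
    x = rng.normal(mu, sd, n)
    means.append(x.mean())
    var_of_means.append(x.var(ddof=1) / n)             # within-subject variance of the subject mean
means, var_of_means = np.array(means), np.array(var_of_means)

# "Naive" approach: unweighted one-sample t-test on the subject means
t_naive, p_naive = stats.ttest_1samp(means, 0.0)

# Illustrative precision-weighted combination (ignores between-subject variance for simplicity)
w = 1.0 / var_of_means
est = np.sum(w * means) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
z = est / se
p_weighted = 2 * stats.norm.sf(abs(z))
print(f"naive t-test p = {p_naive:.4f};  precision-weighted z = {z:.2f}, p = {p_weighted:.4f}")
```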
Survey of statistical techniques used in validation studies of air pollution prediction models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bornstein, R D; Anderson, S F
1979-03-01
Statistical techniques used by meteorologists to validate predictions made by air pollution models are surveyed. Techniques are divided into the following three groups: graphical, tabular, and summary statistics. Some of the practical problems associated with verification are also discussed. Characteristics desired in any validation program are listed and a suggested combination of techniques that possesses many of these characteristics is presented.
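By way of illustration, the sketch below computes a few summary statistics commonly used in model validation (mean bias, root-mean-square error, correlation, fractional bias). The specific set surveyed in the report may differ, and the observed and predicted values are invented.

```python
import numpy as np

def validation_summary(obs, pred):
    """Common summary statistics for comparing model predictions with observations."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    bias = np.mean(pred - obs)                                        # mean bias
    rmse = np.sqrt(np.mean((pred - obs) ** 2))                        # root-mean-square error
    r = np.corrcoef(obs, pred)[0, 1]                                  # Pearson correlation
    fb = 2 * (pred.mean() - obs.mean()) / (pred.mean() + obs.mean())  # fractional bias
    return {"bias": bias, "rmse": rmse, "r": r, "fractional_bias": fb}

# Hypothetical observed and predicted hourly concentrations (ug/m^3)
obs = [12.0, 18.5, 25.1, 30.2, 22.4, 15.7]
pred = [10.5, 20.0, 27.3, 28.9, 25.0, 14.2]
print(validation_summary(obs, pred))
```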
Deriving Global Convection Maps From SuperDARN Measurements
NASA Astrophysics Data System (ADS)
Gjerloev, J. W.; Waters, C. L.; Barnes, R. J.
2018-04-01
A new statistical modeling technique for determining the global ionospheric convection is described. The principal component regression (PCR)-based technique is based on Super Dual Auroral Radar Network (SuperDARN) observations and is an advanced version of the PCR technique that Waters et al. (https://doi.org/10.1002/2015JA021596) used for the SuperMAG data. While SuperMAG ground magnetic field perturbations are vector measurements, SuperDARN provides line-of-sight measurements of the ionospheric convection flow. Each line-of-sight flow has a known azimuth (or direction), which must be converted into the actual vector flow; however, the component perpendicular to the azimuth direction is unknown. Our method uses historical data from the SuperDARN database and PCR to determine a fill-in model convection distribution for any given universal time. The fill-in process is driven by a list of state descriptors (magnetic indices and the solar zenith angle). The final solution is then derived from a spherical cap harmonic fit to the SuperDARN measurements and the fill-in model. When compared with the standard SuperDARN fill-in model, our fill-in model provides improved solutions, and the final solutions are in better agreement with the SuperDARN measurements. Our solutions are far less dynamic than the standard SuperDARN solutions, which we interpret as a consequence of the standard SuperDARN technique neglecting magnetosphere-ionosphere inertia and communication delays, whereas these are inherently included in our approach. We argue that the magnetosphere-ionosphere system has inertia that prevents the global convection from changing abruptly in response to an interplanetary magnetic field change.
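A generic sketch of principal component regression (project collinear predictors onto leading principal components, then regress on the component scores) is given below. It illustrates only the PCR building block, not the SuperDARN line-of-sight treatment, state-descriptor binning, or spherical cap harmonic fit, and all data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)

# Hypothetical "state descriptors" (e.g., magnetic indices, solar zenith angle) and a response
n, p = 500, 8
X = rng.normal(size=(n, p))
X[:, 3] = X[:, 0] + 0.05 * rng.normal(size=n)    # deliberately collinear columns
beta = np.array([1.0, -0.5, 0.0, 0.8, 0.0, 0.0, 0.2, 0.0])
y = X @ beta + rng.normal(scale=0.5, size=n)

# Principal component regression: keep the leading components, then fit a linear model on them
pcr = make_pipeline(PCA(n_components=4), LinearRegression())
pcr.fit(X, y)
print("R^2 on training data:", round(pcr.score(X, y), 3))
```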
Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.
Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W
2017-08-01
Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques used simple statistics and/or rejected anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and can be considered simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects complete breaths if one of its features is out of range. The thresholds are determined by both saliency and performance metrics rather than qualitative assumptions as in previous works. Feature ranking indicates that our new landmark features are among the highest scoring candidates regardless of age across saliency criteria. F1-scores, receiver operating characteristic, and variability of the mean resistance metrics show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant. Our removal is objective and comparable to the manual method. This is a critical work to automate forced oscillation technique quality control.
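The quartile-thresholding step can be sketched as follows: per-breath features are compared against interquartile-range limits, and a breath is rejected if any feature falls outside them. The feature names, values and the multiplier k are hypothetical; in the paper the thresholds are tuned against saliency and performance metrics rather than fixed a priori.

```python
import numpy as np

def iqr_limits(values, k=1.5):
    """Lower and upper limits based on quartiles: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

def reject_breaths(features, k=1.5):
    """features: dict mapping feature name -> array of per-breath values.
    A breath is rejected if any of its features lies outside that feature's IQR limits."""
    n = len(next(iter(features.values())))
    keep = np.ones(n, dtype=bool)
    for name, vals in features.items():
        vals = np.asarray(vals, float)
        lo, hi = iqr_limits(vals, k)
        keep &= (vals >= lo) & (vals <= hi)
    return keep

# Hypothetical per-breath features (e.g., mean resistance, a "landmark" amplitude)
features = {
    "mean_resistance": np.array([4.1, 4.3, 4.0, 9.8, 4.2, 4.4, 4.1]),
    "landmark_amplitude": np.array([1.0, 1.1, 0.9, 1.0, 3.5, 1.0, 1.1]),
}
print(reject_breaths(features))   # the two anomalous breaths (indices 3 and 4) are flagged False
```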
High intensity click statistics from a 10 × 10 avalanche photodiode array
NASA Astrophysics Data System (ADS)
Kröger, Johannes; Ahrens, Thomas; Sperling, Jan; Vogel, Werner; Stolz, Heinrich; Hage, Boris
2017-11-01
Photon-number measurements are a fundamental technique for the discrimination and characterization of quantum states of light. Beyond the abilities of state-of-the-art devices, we present measurements with an array of 100 avalanche photodiodes exposed to photon-numbers ranging from well below to significantly above one photon per diode. Despite each single diode only discriminating between zero and non-zero photon-numbers we were able to extract a second order moment, which acts as a nonclassicality indicator. We demonstrate a vast enhancement of the applicable intensity range by two orders of magnitude relative to the standard application of such devices. It turns out that the probabilistic mapping of arbitrary photon-numbers on a finite number of registered clicks is not per se a disadvantage compared with true photon counters. Such detector arrays can bridge the gap between single-photon and linear detection, by investigation of the click statistics, without the necessity of photon statistics reconstruction.
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the detected traces (blood, instruments and clothes) that were found and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide distribution as to how the issue was addressed in the documents. We examined a quantitative approach involving an empirical equation and we used multivariate procedures to validate the quantitative methodology proposed for this empirical equation. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidences. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
PyMVPA: A python toolbox for multivariate pattern analysis of fMRI data.
Hanke, Michael; Halchenko, Yaroslav O; Sederberg, Per B; Hanson, Stephen José; Haxby, James V; Pollmann, Stefan
2009-01-01
Decoding patterns of neural activity onto cognitive states is one of the central goals of functional brain imaging. Standard univariate fMRI analysis methods, which correlate cognitive and perceptual function with the blood oxygenation-level dependent (BOLD) signal, have proven successful in identifying anatomical regions based on signal increases during cognitive and perceptual tasks. Recently, researchers have begun to explore new multivariate techniques that have proven to be more flexible, more reliable, and more sensitive than standard univariate analysis. Drawing on the field of statistical learning theory, these new classifier-based analysis techniques possess explanatory power that could provide new insights into the functional properties of the brain. However, unlike the wealth of software packages for univariate analyses, there are few packages that facilitate multivariate pattern classification analyses of fMRI data. Here we introduce a Python-based, cross-platform, and open-source software toolbox, called PyMVPA, for the application of classifier-based analysis techniques to fMRI datasets. PyMVPA makes use of Python's ability to access libraries written in a large variety of programming languages and computing environments to interface with the wealth of existing machine learning packages. We present the framework in this paper and provide illustrative examples on its usage, features, and programmability.
Fish: A New Computer Program for Friendly Introductory Statistics Help
ERIC Educational Resources Information Center
Brooks, Gordon P.; Raffle, Holly
2005-01-01
All introductory statistics students must master certain basic descriptive statistics, including means, standard deviations and correlations. Students must also gain insight into such complex concepts as the central limit theorem and standard error. This article introduces and describes the Friendly Introductory Statistics Help (FISH) computer…
An entropy-based statistic for genomewide association studies.
Zhao, Jinying; Boerwinkle, Eric; Xiong, Momiao
2005-07-01
Efficient genotyping methods and the availability of a large collection of single-nucleotide polymorphisms provide valuable tools for genetic studies of human disease. The standard chi2 statistic for case-control studies, which uses a linear function of allele frequencies, has limited power when the number of marker loci is large. We introduce a novel test statistic for genetic association studies that uses Shannon entropy and a nonlinear function of allele frequencies to amplify the differences in allele and haplotype frequencies to maintain statistical power with large numbers of marker loci. We investigate the relationship between the entropy-based test statistic and the standard chi2 statistic and show that, in most cases, the power of the entropy-based statistic is greater than that of the standard chi2 statistic. The distribution of the entropy-based statistic and the type I error rates are validated using simulation studies. Finally, we apply the new entropy-based test statistic to two real data sets, one for the COMT gene and schizophrenia and one for the MMP-2 gene and esophageal carcinoma, to evaluate the performance of the new method for genetic association studies. The results show that the entropy-based statistic obtained smaller P values than did the standard chi2 statistic.
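Purely as an illustration of contrasting a frequency-difference test with an entropy-based one, the sketch below compares the standard chi-square on an allele-count table with a Shannon-entropy (Jensen-Shannon-style) divergence between case and control allele frequencies. This divergence is not the test statistic derived in the paper, which also covers haplotypes and provides the asymptotic distribution, and the counts are made up.

```python
import numpy as np
from scipy.stats import chi2_contingency

def shannon_entropy(p):
    """Shannon entropy of a discrete distribution (zero-probability terms dropped)."""
    p = np.asarray(p, float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Hypothetical allele counts (A, a) for a single SNP in cases and controls
cases = np.array([130, 70])
controls = np.array([100, 100])

# Standard chi-square test on the 2x2 allele-count table
chi2, p_value, dof, _ = chi2_contingency(np.vstack([cases, controls]))

# Illustrative entropy-based contrast: Jensen-Shannon-style divergence of allele frequencies
p_case = cases / cases.sum()
p_ctrl = controls / controls.sum()
p_mix = 0.5 * (p_case + p_ctrl)
jsd = shannon_entropy(p_mix) - 0.5 * (shannon_entropy(p_case) + shannon_entropy(p_ctrl))
print(f"chi2 = {chi2:.2f} (p = {p_value:.4f}); entropy-based divergence = {jsd:.4f}")
```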
Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation
NASA Technical Reports Server (NTRS)
Locicero, J. L.; Schilling, D. L.
1977-01-01
An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.
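A minimal sketch of the target format, adaptive delta modulation, is shown below: a one-bit quantizer whose step size grows when successive output bits repeat and shrinks when they alternate. This is a generic ADM encoder driven directly by PCM samples, not the paper's non-parametric all-digital converter, and the adaptation constants are arbitrary.

```python
import numpy as np

def pcm_to_adm(pcm, step_min=1.0, step_max=256.0, grow=1.5, shrink=0.66):
    """Re-encode a PCM sample sequence as a one-bit adaptive delta modulation stream.

    Illustrative adaptation rule: the step grows when the new bit repeats the previous
    bit (the estimate is lagging the signal) and shrinks when the bits alternate.
    """
    bits, estimate, step, prev_bit = [], 0.0, step_min, 0
    for sample in pcm:
        bit = 1 if sample >= estimate else 0
        step = min(step * grow, step_max) if bit == prev_bit else max(step * shrink, step_min)
        estimate += step if bit else -step   # one-bit tracking of the waveform
        bits.append(bit)
        prev_bit = bit
    return np.array(bits), estimate

# Example: ADM-encode a sinusoidal PCM test signal (256 samples covering 4 signal cycles)
t = np.arange(256) / 256.0
pcm = 100.0 * np.sin(2 * np.pi * 4 * t)
bits, _ = pcm_to_adm(pcm)
print(bits[:32])
```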
77 FR 34044 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-08
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS); Subcommittee on Standards. Time and Date: June 20, 2012, 9 a.m.-5 p.m. EST..., Executive Secretary, NCVHS, National Center for Health Statistics, Centers for Disease Control and...
Comparison of dialysis membrane diffusion samplers and two purging methods in bedrock wells
Imbrigiotta, T.E.; Ehlke, T.A.; Lacombe, P.J.; Dale, J.M.
2002-01-01
Collection of ground-water samples from bedrock wells using low-flow purging techniques is problematic because of the random spacing, variable hydraulic conductivity, and variable contamination of contributing fractures in each well's open interval. To test alternatives to this purging method, a field comparison of three ground-water-sampling techniques was conducted on wells in fractured bedrock at a site contaminated primarily with volatile organic compounds. Constituent concentrations in samples collected with a diffusion sampler constructed from dialysis membrane material were compared to those in samples collected from the same wells with a standard low-flow purging technique and a hybrid (high-flow/low-flow) purging technique. Concentrations of trichloroethene, cis-1,2-dichloroethene, vinyl chloride, calcium, chloride, and alkalinity agreed well among samples collected with all three techniques in 9 of the 10 wells tested. Iron concentrations varied more than those of the other parameters, but their pattern of variation was not consistent. Overall, the results of nonparametric analysis of variance testing on the nine wells sampled twice showed no statistically significant difference at the 95-percent confidence level among the concentrations of volatile organic compounds or inorganic constituents recovered by use of any of the three sampling techniques.
Antweiler, Ronald C.; Taylor, Howard E.
2008-01-01
The main classes of statistical treatment of below-detection limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value to censored data or assigning a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except substituting zero or the detection limit value to censored data.
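To make the substitution treatments concrete, the sketch below applies two of them (one-half the detection limit, and a random value between zero and the detection limit) to a synthetic left-censored data set and compares the resulting summary statistics with the truth. The Kaplan-Meier treatment recommended above is omitted here for brevity (it is usually computed by flipping the data so that left-censoring becomes right-censoring), and the detection limit and distribution are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "true" concentrations and a single detection limit (both invented for illustration)
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)
dl = 0.8
censored = true < dl   # values a laboratory would report only as "< DL"

# Substitution 1: assign one-half the detection limit to censored results
half_dl = np.where(censored, dl / 2.0, true)

# Substitution 2: assign a random value between zero and the detection limit
rand_sub = np.where(censored, rng.uniform(0.0, dl, size=true.size), true)

print(f"censored fraction: {censored.mean():.0%}")
for name, data in [("true values", true), ("DL/2 substitution", half_dl), ("random substitution", rand_sub)]:
    print(f"{name:20s} mean = {data.mean():.3f}   sd = {data.std(ddof=1):.3f}")
```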
Yuzbasioglu, Emir; Kurt, Hanefi; Turunc, Rana; Bilir, Halenur
2014-01-30
The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Twenty-four (12 male, 12 female) subjects who had no previous experience with either conventional or digital impression participated in this study. Conventional impressions of maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3 M ESPE), and bite registrations were made with polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects' attitudes, preferences and perceptions towards impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon Rank test, and p < 0.05 was considered significant. There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique rather than conventional techniques.
Total Ossicular Replacement Prosthesis: A New Fat Interposition Technique.
Saliba, Issam; Sabbah, Valérie; Poirier, Jackie Bibeau
2018-01-01
To compare audiometric results between the standard total ossicular replacement prosthesis (TORP-S) and a new fat interposition total ossicular replacement prosthesis (TORP-F) in pediatric and adult patients and to assess the complication and the undesirable outcome. This is a retrospective study. This study included 104 patients who had undergone titanium implants with TORP-F and 54 patients who had undergone the procedure with TORP-S between 2008 and 2013 in our tertiary care centers. The new technique consists of interposing a fat graft between the 4 legs of the universal titanium prosthesis (Medtronic Xomed Inc, Jacksonville, FL, USA) to provide a more stable TORP in the ovale window niche. Normally, this prosthesis is designed to fit on the stapes' head as a partial ossicular replacement prosthesis. The postoperative air-bone gap less than 25 dB for the combined cohort was 69.2% and 41.7% for the TORP-F and the TORP-S groups, respectively. The mean follow-up was 17 months postoperatively. By stratifying data, the pediatric cohort shows 56.5% in the TORP-F group (n = 52) compared with 40% in the TORP-S group (n = 29). However, the adult cohort shows 79.3% in the TORP-F group (n = 52) compared with 43.75% in the TORP-S group (n = 25). These improvements in hearing were statistically significant. There were no statistically significant differences in the speech discrimination scores. The only undesirable outcome that was statistically different between the 2 groups was the prosthesis displacement: 7% in the TORP-F group compared with 19% in the TORP-S group ( P = .03). The interposition of a fat graft between the legs of the titanium implants (TORP-F) provides superior hearing results compared with a standard procedure (TORP-S) in pediatric and adult populations because of its better stability in the oval window niche.
The estimation of the measurement results with using statistical methods
NASA Astrophysics Data System (ADS)
Velychko, O.; Gordiyenko, T.
2015-02-01
The row of international standards and guides describe various statistical methods that apply for a management, control and improvement of processes with the purpose of realization of analysis of the technical measurement results. The analysis of international standards and guides on statistical methods estimation of the measurement results recommendations for those applications in laboratories is described. For realization of analysis of standards and guides the cause-and-effect Ishikawa diagrams concerting to application of statistical methods for estimation of the measurement results are constructed.
Hatch, Kathryn M; Schultz, Tim; Talamo, Jonathan H; Dick, H Burkhard
2015-09-01
To compare effective phacoemulsification time (EPT) for the removal of brunescent cataracts treated with femtosecond laser-assisted cataract surgery with standard cataract phacoemulsification techniques. Ruhr University Eye Hospital, Bochum, Germany. Comparative prospective case study. The Lens Opacities Classification System III (LOCS III) grading system was used to measure eyes divided into 4 groups having cataract surgery. Groups 1 and 2 contained eyes with LOCS III grade nuclear opalescence (NO) 3 cataracts treated with standard cataract surgery and femtosecond laser-assisted cataract surgery, respectively. Groups 3 and 4 contained brunescent cataracts, LOCS III grades NO5, treated with standard cataract surgery and femtosecond laser-assisted cataract surgery, respectively. There were 240 eyes, with 60 eyes in each group. The EPT in Group 1 ranged from 0.46 to 3.10 (mean 1.38); the EPT in all eyes in Group 2 was 0 (P < .001). The EPT in Groups 3 and 4 was 2.12 to 19.29 (mean 6.85) and 0 to 6.75 (mean 1.35), respectively (P < .001). A comparison between EPT in Groups 1 and 4 showed that EPT in Group 4 was also lower than in Group 1 (P = .013). Groups 4 and 1 were the most statistically similar of all groups compared, suggesting that EPT for a femtosecond laser-treated grade 5 cataract was most similar to that of a standard-treated grade 3 cataract. Femtosecond laser pretreatment for brunescent cataracts allowed for a significant reduction in EPT compared with manual standard phacoemulsification techniques. Drs. Hatch, Talamo, and Dick are consultants to Abbott Medical Optics, Inc. Dr. Schultz has no financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
CAPSAS: Computer Assisted Program for the Selection of Appropriate Statistics.
ERIC Educational Resources Information Center
Shermis, Mark D.; Albert, Susan L.
A computer-assisted program has been developed for the selection of statistics or statistical techniques by both students and researchers. Based on Andrews, Klem, Davidson, O'Malley and Rodgers "A Guide for Selecting Statistical Techniques for Analyzing Social Science Data," this FORTRAN-compiled interactive computer program was…
Ullattuthodi, Sujana; Cherian, Kandathil Phillip; Anandkumar, R; Nambiar, M Sreedevi
2017-01-01
This in vitro study sought to evaluate and compare the marginal and internal fit of cobalt-chromium copings fabricated using the conventional and direct metal laser sintering (DMLS) techniques. A master model of a prepared molar tooth was made using cobalt-chromium alloy. A silicone impression of the master model was made and thirty standardized working models were then produced: twenty working models for the conventional lost-wax technique and ten for the DMLS technique. A total of twenty metal copings were fabricated using the two production techniques (conventional lost-wax method and DMLS), with ten samples in each group. The conventional and DMLS copings were cemented to the working models using glass ionomer cement. The marginal gap of the copings was measured at four predetermined points. The dies with the cemented copings were sectioned in a standardized manner with a heavy-duty lathe, and each sectioned sample was analyzed for the internal gap between the die and the metal coping using a metallurgical microscope. Digital photographs were taken at ×50 magnification and analyzed using measurement software. Statistical analysis was done by the unpaired t-test and analysis of variance (ANOVA). The results reveal no significant difference in the marginal gap of conventional and DMLS copings (P > 0.05, ANOVA). The mean internal gap of DMLS copings was significantly greater than that of conventional copings (P < 0.05). Within the limitations of this in vitro study, it was concluded that the internal fit of conventional copings was superior to that of the DMLS copings, whereas the marginal fit of copings fabricated by the two techniques showed no significant difference.
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
Approaches for estimating minimal clinically important differences in systemic lupus erythematosus.
Rai, Sharan K; Yazdany, Jinoos; Fortin, Paul R; Aviña-Zubieta, J Antonio
2015-06-03
A minimal clinically important difference (MCID) is an important concept used to determine whether a medical intervention improves perceived outcomes in patients. Prior to the introduction of the concept in 1989, studies focused primarily on statistical significance. As most recent clinical trials in systemic lupus erythematosus (SLE) have failed to show significant effects, determining a clinically relevant threshold for outcome scores (that is, the MCID) of existing instruments may be critical for conducting and interpreting meaningful clinical trials as well as for facilitating the establishment of treatment recommendations for patients. To that effect, methods to determine the MCID can be divided into two well-defined categories: distribution-based and anchor-based approaches. Distribution-based approaches are based on statistical characteristics of the obtained samples. There are various methods within the distribution-based approach, including the standard error of measurement, the standard deviation, the effect size, the minimal detectable change, the reliable change index, and the standardized response mean. Anchor-based approaches compare the change in a patient-reported outcome to a second, external measure of change (that is, one that is more clearly understood, such as a global assessment), which serves as the anchor. Finally, the Delphi technique can be applied as an adjunct to defining a clinically important difference. Despite an abundance of methods reported in the literature, little work in MCID estimation has been done in the context of SLE. As the MCID can help determine the effect of a given therapy on a patient and add meaning to statistical inferences made in clinical research, we believe there ought to be renewed focus on this area. Here, we provide an update on the use of MCIDs in clinical research, review some of the work done in this area in SLE, and propose an agenda for future research.
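The distribution-based quantities listed above follow standard textbook formulas, sketched below (SEM = SD·sqrt(1 − reliability), MDC95 = 1.96·sqrt(2)·SEM, effect size = mean change / baseline SD, SRM = mean change / SD of change). The outcome scores and reliability value are hypothetical.

```python
import numpy as np

def distribution_based_mcid(baseline, followup, reliability):
    """Common distribution-based MCID-related quantities (textbook formulas)."""
    baseline, followup = np.asarray(baseline, float), np.asarray(followup, float)
    change = followup - baseline
    sd_baseline = baseline.std(ddof=1)
    sem = sd_baseline * np.sqrt(1.0 - reliability)   # standard error of measurement
    mdc95 = 1.96 * np.sqrt(2.0) * sem                # minimal detectable change at 95% confidence
    effect_size = change.mean() / sd_baseline        # Cohen-style effect size
    srm = change.mean() / change.std(ddof=1)         # standardized response mean
    return {"0.5*SD": 0.5 * sd_baseline, "SEM": sem, "MDC95": mdc95,
            "effect_size": effect_size, "SRM": srm}

# Hypothetical patient-reported outcome scores before and after therapy, reliability = 0.85
rng = np.random.default_rng(5)
baseline = rng.normal(50, 10, 60)
followup = baseline + rng.normal(5, 8, 60)
print(distribution_based_mcid(baseline, followup, reliability=0.85))
```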
78 FR 65317 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-31
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: November 12, 2013 8:30 a.m.-5:30 p.m. EST. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...
78 FR 54470 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-04
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards Time and Date: September 18, 2013 8:30 p.m.--5:00 p.m. EDT. Place: Centers for Disease Control and Prevention, National Center for Health Statistics, 3311...
78 FR 942 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-07
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: February 27, 2013 9:30 a.m.-5:00 p.m... electronic claims attachments. The National Committee on Vital Health Statistics is the public advisory body...
78 FR 34100 - National Committee on Vital and Health Statistics: Meeting Standards Subcommittee
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... DEPARTMENT OF HEALTH AND HUMAN SERVICES National Committee on Vital and Health Statistics: Meeting... Health Statistics (NCVHS) Subcommittee on Standards. Time and Date: June 17, 2013 1:00 p.m.-5:00 p.m. e.d..., National Center for Health Statistics, 3311 Toledo Road, Auditorium B & C, Hyattsville, Maryland 20782...
Determination of Acidity in Donor Milk.
Escuder-Vieco, Diana; Vázquez-Román, Sara; Sánchez-Pallás, Juan; Ureta-Velasco, Noelia; Mosqueda-Peña, Rocío; Pallás-Alonso, Carmen Rosa
2016-11-01
There is no uniformity among milk banks on milk acceptance criteria. The acidity obtained by the Dornic titration technique is a widely used quality control in donor milk. However, there are no comparative data with other acidity-measuring techniques, such as the pH meter. The objective of this study was to assess the correlation between the Dornic technique and the pH measure to determine the pH cutoff corresponding to the Dornic degree limit value used as a reference for donor milk quality control. Fifty-two human milk samples were obtained from 48 donors. Acidity was measured using the Dornic method and pH meter in triplicate. Statistical data analysis to estimate significant correlations between variables was carried out. The Dornic acidity value that led to rejecting donor milk was ≥ 8 Dornic degrees (°D). In the evaluated sample size, Dornic acidity measure and pH values showed a statistically significant negative correlation (τ = -0.780; P = .000). A pH value of 6.57 corresponds to 8°D and of 7.12 to 4°D. Donor milk with a pH over 6.57 may be accepted for subsequent processing in the milk bank. Moreover, the pH measurement seems to be more useful due to certain advantages over the Dornic method, such as objectivity, accuracy, standardization, the lack of chemical reagents required, and the fact that it does not destroy the milk sample.
Russell, Thomas A; Mir, Hassan R; Stoneback, Jason; Cohen, Jose; Downs, Brandon
2008-07-01
To determine our rate of malalignment in proximal femoral shaft fractures treated with intramedullary (IM) nails, with and without the use of a minimally invasive nail insertion technique (MINIT). Retrospective study. Level 1 trauma center. Between July 1, 2003, and June 30, 2005, 100 consecutive proximal femoral shaft fractures (97 patients) were treated with IM nails. The average age of the 56 men and 41 women was 43.5 years (range, 17 to 96 years). There were 92 closed fractures and 8 open fractures. Fractures were classified according to the Russell-Taylor classification (69 type 1A, 11 type 1B, 3 type 2A, 17 type 2B). All patients underwent antegrade IM nailing using a fracture table in the supine (83) or lateral (17) position. A total of 72 entry portals were trochanteric, and 28 were piriformis. Seventy-seven percent of the femurs were opened with MINIT, a technique that uses a percutaneous cannulated channel reamer over a guide pin as opposed to the standard method of Kuntscher, which employs a femoral awl. Nails were locked proximally using standard locking in 37 fractures, and recon mode in 63. Fracture reduction was examined on immediate postoperative films to determine angulation in the coronal and sagittal planes. Criteria for acceptable reduction were less than 5 degrees angulation in any plane. In addition, surgical position, entry portal, mechanism of injury, Russell-Taylor classification, OTA classification, open or closed fracture, open or closed reduction, and type of implant used were analyzed for significance. The frequency of malalignment was 10% for the entire group of patients. Malalignment occurred in 26% of fractures treated without the use of the MINIT and in 5.2% when the MINIT was used (P < 0.01). There was no statistically significant difference between the different Russell-Taylor fracture types, although there was a trend towards more malalignment in type 2A and 2B fractures (P = 0.06). None of the other factors studied had a statistically significant effect on malalignment. A whole-model test of the factors that were surgeon-controlled (use of the MINIT, surgical position, open or closed reduction, type of implant used, and entry portal) found that only use of the MINIT had a statistically significant effect on malalignment (P < 0.01). The results indicate that use of the minimally invasive nail insertion technique (MINIT) significantly decreases the occurrence of malalignment in proximal femoral shaft fractures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fossaceca, Rita, E-mail: rfossaceca@hotmail.com; Guzzardi, Giuseppe, E-mail: guz@libero.it; Cerini, Paolo, E-mail: cerini84@hotmail.it
Purpose. To evaluate the efficacy of percutaneous transluminal angioplasty (PTA) in a selected population of diabetic patients with below-the-knee (BTK) disease and to analyze the reliability of the angiosome model. Methods. We made a retrospective analysis of the results of PTA performed in 201 diabetic patients with BTK-only disease treated at our institute from January 2005 to December 2011. We evaluated the postoperative technical success, and at 1, 6, and 12 months' follow-up, we assessed the rates and values of partial and complete ulcer healing, restenosis, major and minor amputation, limb salvage, and percutaneous oximetry (TcPO2) (Student's t test). We used the angiosome model to compare the clinicolaboratory outcomes of patients treated by direct revascularization (DR) with those of patients treated by the indirect revascularization (IR) technique, using Student's t test and the χ² test. Results. At a mean ± standard deviation follow-up of 17.5 ± 12 months, we observed a mortality rate of 3.5%, a major amputation rate of 9.4%, and a limb salvage rate of 87%, with a statistically significant increase of TcPO2 values at follow-up compared to baseline (p < 0.05). In 34 patients, treatment was performed with the IR technique and in 167 by DR; in both groups, there was a statistically significant increase of TcPO2 values at follow-up compared to baseline (p < 0.05), without statistically significant differences in therapeutic efficacy. Conclusion. PTA of BTK-only disease is a safe and effective option. The DR technique is the first treatment option; we believe, however, that IR is similarly effective, with good results over time.
Nelson, Michael A; Bedner, Mary; Lang, Brian E; Toman, Blaza; Lippa, Katrice A
2015-11-01
Given the critical role of pure, organic compound primary reference standards used to characterize and certify chemical Certified Reference Materials (CRMs), it is essential that associated mass purity assessments be fit-for-purpose, represented by an appropriate uncertainty interval, and metrologically sound. The mass fraction purities (% g/g) of 25-hydroxyvitamin D (25(OH)D) reference standards used to produce and certify values for clinical vitamin D metabolite CRMs were investigated by multiple orthogonal quantitative measurement techniques. Quantitative ¹H-nuclear magnetic resonance spectroscopy (qNMR) was performed to establish traceability of these materials to the International System of Units (SI) and to directly assess the principal analyte species. The 25(OH)D standards contained volatile and water impurities, as well as structurally related impurities that are difficult to observe by chromatographic methods or to distinguish from the principal 25(OH)D species by one-dimensional NMR. These impurities have the potential to introduce significant biases to purity investigations in which a limited number of measurands are quantified. Combining complementary information from multiple analytical methods, using both direct and indirect measurement techniques, enabled mitigation of these biases. Purities of 25(OH)D reference standards and associated uncertainties were determined using frequentist and Bayesian statistical models to combine data acquired via qNMR, liquid chromatography with UV absorbance and atmospheric pressure chemical ionization mass spectrometric detection (LC-UV, LC-APCI-MS), thermogravimetric analysis (TGA), and Karl Fischer (KF) titration.
On Teaching about the Coefficient of Variation in Introductory Statistics Courses
ERIC Educational Resources Information Center
Trafimow, David
2014-01-01
The standard deviation is related to the mean by virtue of the coefficient of variation. Teachers of statistics courses can make use of that fact to make the standard deviation more comprehensible for statistics students.
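The relationship the article builds on is simply CV = SD / mean; a minimal sketch with made-up scores:

```python
# The coefficient of variation re-expresses the standard deviation
# as a fraction of the mean, which can make its size easier to judge.
import numpy as np

scores = np.array([12.0, 15.0, 9.0, 14.0, 11.0, 13.0])   # hypothetical data
cv = scores.std(ddof=1) / scores.mean()
print(f"mean = {scores.mean():.2f}, sd = {scores.std(ddof=1):.2f}, CV = {cv:.2%}")
```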
Metamodels for Computer-Based Engineering Design: Survey and Recommendations
NASA Technical Reports Server (NTRS)
Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.
1997-01-01
The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and how common pitfalls can be avoided.
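As a small illustration of the metamodel idea, the sketch below fits a kriging-style Gaussian-process surrogate (via scikit-learn) to a handful of runs of a stand-in "expensive" function; the function, design points, and kernel settings are assumptions made for the example, not taken from the survey.

```python
# Minimal metamodel sketch: fit a kriging-style Gaussian-process surrogate
# to a few runs of an "expensive" analysis code (here a cheap stand-in function).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_analysis(x):            # stand-in for the costly simulation
    return np.sin(3.0 * x) + 0.5 * x

X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)     # small design of experiments
y_train = expensive_analysis(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                              normalize_y=True)
gp.fit(X_train, y_train)

X_new = np.array([[0.37], [1.42]])
y_pred, y_std = gp.predict(X_new, return_std=True)    # cheap surrogate predictions
print(y_pred, y_std)
```

Once fitted, the surrogate stands in for the simulation during optimization or concept exploration, with the predictive standard deviation indicating where further expensive runs would be most informative.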
Villamonte-Chevalier, A; van Bree, H; Broeckx, Bjg; Dingemanse, W; Soler, M; Van Ryssen, B; Gielen, I
2015-09-25
Diagnostic imaging is essential to assess the lame patient; lesions of the elbow joint have traditionally been evaluated radiographically, however computed tomography (CT) has been suggested as a useful technique to diagnose various elbow pathologies. The primary objective of this study was to determine the sensitivity and specificity of CT in assessing medial coronoid disease (MCD), using arthroscopy as the gold standard. The secondary objective was to ascertain the radiographic sensitivity and specificity for MCD compared with CT. For this study 180 elbow joints were assessed, of which 141 had been examined with radiography, CT and arthroscopy, and 39 joints had radiographic and CT assessment. Sensitivity and specificity were calculated for CT and radiographic findings using available statistical software. Using arthroscopy as the gold standard, CT showed high sensitivity (100%) and specificity (93%) for the assessment of MCD. For the radiographic evaluation, a sensitivity of 98% and a specificity of 64-69%, using CT as the technique of reference, were found. These results suggest that in cases of doubt during radiographic assessment, CT could be used as a non-invasive technique to assess the presence of MCD. Based on the high sensitivity and specificity obtained in this study, CT, rather than arthroscopy, is considered the preferred noninvasive technique to assess MCD lesions of the canine elbow joint.
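The two headline figures reduce to the usual confusion-matrix definitions; the counts in the sketch below are hypothetical and chosen only to illustrate the calculation, not reconstructed from the study.

```python
# Sensitivity and specificity from hypothetical counts, with arthroscopy
# taken as the gold standard (numbers below are illustrative, not the study's).
tp, fn = 95, 0      # diseased joints: CT positive / CT negative
tn, fp = 40, 3      # healthy joints:  CT negative / CT positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```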
Drinking water quality assessment.
Aryal, J; Gautam, B; Sapkota, N
2012-09-01
Drinking water quality is a great public health concern because it is a major risk factor for the high incidence of diarrheal diseases in Nepal. In recent years, the prevalence rate of diarrhoea has been found to be highest in Myagdi district. This study was carried out to assess the quality of drinking water from different natural sources, reservoirs and collection taps at Arthunge VDC of Myagdi district. A cross-sectional study was carried out using a random sampling method in Arthunge VDC of Myagdi district from January to June 2010. A total of 84 water samples representing natural sources, reservoirs and collection taps from the study area were collected. The physico-chemical and microbiological analysis was performed following standard techniques set by APHA (1998), and statistical analysis was carried out using SPSS 11.5. The results were also compared with national and WHO guidelines. Out of the 84 water samples (from natural sources, reservoirs and tap water) analyzed, the drinking water quality parameters (except arsenic and total coliform) of all water samples were found to be within the WHO and national standards. 15.48% of water samples (13) showed pH higher than the WHO permissible guideline values. Similarly, 85.71% of water samples (72) showed arsenic values higher than the WHO guideline value. Further, the statistical analysis showed no significant difference (P>0.05) in the physico-chemical parameters and total coliform count of drinking water for collection tap water samples between winter (January, 2010) and summer (June, 2010). The microbiological examination of water samples revealed the presence of total coliform in 86.90% of water samples. The results obtained from the physico-chemical analysis of water samples were within national and WHO standards except for arsenic. The study also found coliform contamination to be the key problem with drinking water.
Phantom experiments using soft-prior regularization EIT for breast cancer imaging.
Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J
2017-06-01
A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected from a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement over these standard approaches, and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information of suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-regularization technique found average absolute tumor/inclusion errors of 0.015 S m⁻¹ for the cylindrical test and 0.055 S m⁻¹ and 0.080 S m⁻¹ for the breast-shaped tank for 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT has for providing clinically relevant information. The ability to obtain accurate conductivity values of a suspicious lesion (>1.8 cm) detected from another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.
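The core idea, weakening the regularization penalty inside the region flagged by the other modality so the reconstruction can depart from the background there, can be shown on a toy linear inverse problem; everything in this sketch (forward operator, noise level, penalty weights) is an assumption for illustration and not the authors' EIT solver.

```python
# Toy "soft prior" regularization on a linear inverse problem: pixels inside a
# region flagged by another modality get a weaker penalty weight, so their
# reconstructed values can depart further from the background.
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 25                            # unknowns (toy 1-D "image"), measurements
A = rng.normal(size=(m, n))              # toy forward operator
x_true = np.full(n, 0.2)                 # background conductivity (S/m, illustrative)
x_true[15:22] = 0.6                      # inclusion ("tumor") region
b = A @ x_true + 0.01 * rng.normal(size=m)

prior_mask = np.zeros(n)
prior_mask[15:22] = 1.0                  # lesion location from a second modality
weights = np.where(prior_mask > 0, 0.05, 1.0)   # soft prior: weaker penalty inside
L = np.diag(weights)                             # weighted identity penalty
lam = 1.0

# Solve the regularized normal equations (A^T A + lam * L^T L) x = A^T b
x_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
print("inclusion mean:", x_hat[15:22].mean(), "background mean:", x_hat[:10].mean())
```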
A randomized trial of specialized versus standard neck physiotherapy in cervical dystonia.
Counsell, Carl; Sinclair, Hazel; Fowlie, Jillian; Tyrrell, Elaine; Derry, Natalie; Meager, Peter; Norrie, John; Grosset, Donald
2016-02-01
Anecdotal reports suggested that a specialized physiotherapy technique developed in France (the Bleton technique) improved primary cervical dystonia. We evaluated the technique in a randomized trial. A parallel-group, single-blind, two-centre randomized trial compared the specialized outpatient physiotherapy programme given by trained physiotherapists up to once a week for 24 weeks with standard physiotherapy advice for neck problems. Randomization was by a central telephone service. The primary outcome was the change in the total Toronto Western Spasmodic Torticollis Rating (TWSTR) scale, measured before any botulinum injections that were due, between baseline and 24 weeks evaluated by a clinician masked to treatment. Analysis was by intention-to-treat. 110 patients were randomized (55 in each group) with 24 week outcomes available for 84. Most (92%) were receiving botulinum toxin injections. Physiotherapy adherence was good. There was no difference between the groups in the change in TWSTR score over 24 weeks (mean adjusted difference 1.44 [95% CI -3.63, 6.51]) or 52 weeks (mean adjusted difference 2.47 [-2.72, 7.65]) nor in any of the secondary outcome measures (Cervical Dystonia Impact Profile-58, clinician and patient-rated global impression of change, mean botulinum toxin dose). Both groups showed large sustained improvements compared to baseline in the TWSTR, most of which occurred in the first four weeks. There were no major adverse events. Subgroup analysis suggested a centre effect. There was no statistically or clinically significant benefit from the specialized physiotherapy compared to standard neck physiotherapy advice but further trials are warranted. Copyright © 2015 Elsevier Ltd. All rights reserved.
Jefferies, Edward R; Cresswell, Joanne; McGrath, John S; Miller, Catherine; Hounsome, Luke; Fowler, Sarah; Rowe, Edward W
2018-06-01
To establish the current standard for open radical cystectomy (ORC) in England, as data entry by surgeons performing RC to the British Association of Urological Surgeons (BAUS) database was mandated in 2013 and combining this with Hospital Episodes Statistics (HES) data has allowed comprehensive outcome analysis for the first time. All cases were included in this analysis if they were uploaded to the BAUS data registry, reported to have been performed in the 2 years between 1 January 2014 and 31 December 2015 in England (from the mandate onwards), and documented as being performed in an open fashion (not laparoscopic, robot-assisted, or with the technique field left blank). The HES data were accessed via the HES website. Office of Population Censuses and Surveys Classification of Surgical Operations and Procedures version 4 (OPCS-4) Code M34 was searched during the same 2-year time frame (excluding M34.4 for simple cystectomy and cases with additional minimal access codes Y75.1-9 documenting a laparoscopic or robotic approach) to assess data capture. A total of 2,537 ORCs were recorded in the BAUS registry and 3,043 in the HES data. This indicates a capture rate of 83.4% of all cases. The median operative time was 5 h, harvesting a median of 11-20 lymph nodes, with a median blood loss of 500-1,000 mL, and a transfusion rate of 21.8%. The median length of stay was 11 days, with a 30-day mortality rate of 1.58%. This is the largest, contemporary cohort of ORCs in England, encompassing >80% of all performed operations. We now know the current standard for ORC in England. This provides the basis for individual surgeons and units to compare their outcomes and a standard with which future techniques and modifications can be compared. © 2018 The Authors BJU International © 2018 BJU International Published by John Wiley & Sons Ltd.
Bladder cancer mapping in Libya based on standardized morbidity ratio and log-normal model
NASA Astrophysics Data System (ADS)
Alhdiri, Maryam Ahmed; Samat, Nor Azah; Mohamed, Zulkifley
2017-05-01
Disease mapping comprises a set of statistical techniques that produce maps of rates based on estimated mortality, morbidity, and prevalence. A traditional approach to measuring the relative risk of a disease is the Standardized Morbidity Ratio (SMR), the ratio of the observed to the expected number of cases in an area; it has the greatest uncertainty if the disease is rare or if the geographical area is small. Therefore, Bayesian models or statistical smoothing based on the log-normal model are introduced, which might solve the SMR problem. This study estimates the relative risk for bladder cancer incidence in Libya from 2006 to 2007 based on the SMR and the log-normal model, which were fitted to data using WinBUGS software. The study starts with a brief review of these models, beginning with the SMR method and followed by the log-normal model, which is then applied to bladder cancer incidence in Libya. All results are compared using maps and tables. The study concludes that the log-normal model gives better relative risk estimates compared to the classical method, and that it can overcome the SMR problem when there is no observed bladder cancer in an area.
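A minimal numerical sketch of the two estimators discussed above: the raw SMR as observed over expected counts, and a crude log-normal-style shrinkage toward the overall mean as a simplified stand-in for the full Bayesian model fitted in WinBUGS. The counts are invented for illustration.

```python
# SMR = observed / expected cases per area; a crude log-normal shrinkage is shown
# as a stand-in for the full Bayesian model fitted in WinBUGS (illustrative data).
import numpy as np

observed = np.array([3, 0, 12, 7, 1, 25])          # hypothetical counts per area
expected = np.array([4.1, 1.2, 9.8, 6.5, 2.3, 18.0])

smr = observed / expected                           # unstable when counts are small

# Simplified log-normal smoothing: shrink log(SMR) toward the overall mean,
# weighting areas by their expected counts (a rough precision proxy).
log_smr = np.log(np.where(observed > 0, smr, 0.5 / expected))   # avoid log(0)
overall = np.average(log_smr, weights=expected)
shrink = expected / (expected + expected.mean())
smoothed_rr = np.exp(shrink * log_smr + (1 - shrink) * overall)

print("raw SMR:     ", np.round(smr, 2))
print("smoothed RR: ", np.round(smoothed_rr, 2))
```

Note how the area with zero observed cases receives a non-zero smoothed relative risk, which is exactly the SMR problem the log-normal model is introduced to address.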
Sherman, Seth L; Copeland, Marilyn E; Milles, Jeffrey L; Flood, David A; Pfeiffer, Ferris M
2016-06-01
To evaluate the biomechanical fixation strength of suture anchor and transosseous tunnel repair of the quadriceps tendon in a standardized cadaveric repair model. Twelve "patella-only" specimens were used. Dual-energy X-ray absorptiometry measurement was performed to ensure equal bone quality amongst groups. Specimens were randomly assigned to either a suture anchor repair of quadriceps tendon group (n = 6) or a transosseous tunnel repair group (n = 6). Suture type and repair configuration were equivalent. After the respective procedures were performed, each patella was mounted into a gripping jig. Tensile load was applied at a rate of 0.1 mm/s up to 100 N after which cyclic loading was applied at a rate of 1 Hz between magnitudes of 50 to 150 N, 50 to 200 N, 50 to 250 N, and tensile load at a rate of 0.1 mm/s until failure. Outcome measures included load to failure, displacement at 1st 100 N load, and displacement after each 10th cycle of loading. The measured cyclic displacement to the first 100 N, 50 to 150 N, 50 to 200 N, and 50 to 250 N was significantly less for suture anchors than transosseous tunnels. There was no statistically significant difference in ultimate load to failure between the 2 groups (P = .40). Failure mode for all suture anchors except one was through the soft tissue. Failure mode for all transosseous specimens but one was pulling the repair through the transosseous tunnel. Suture anchor quadriceps tendon repairs had significantly decreased gapping during cyclic loading, but no statistically significant difference in ultimate load to failure when compared with transosseous tunnel repairs. Although suture anchor quadriceps tendon repair appears to be a biomechanically superior construct, a clinical study is needed to confirm this technique as a viable alternative to gold standard transosseous techniques. Although in vivo studies are needed, these results support the suture anchor technique as a viable alternative to transosseous repair of the quadriceps tendon. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.
Al-Moraissi, E A; Elmansi, Y A; Al-Sharaee, Y A; Alrmali, A E; Alkhutari, A S
2016-03-01
A systematic review and meta-analysis was conducted to answer the clinical question "Does the piezoelectric surgical technique produce fewer postoperative sequelae after lower third molar surgery than conventional rotary instruments?" A systematic and electronic search of several databases with specific key words, a reference search, and a manual search were performed from respective dates of inception through November 2014. The inclusion criteria were clinical human studies, including randomized controlled trials (RCTs), controlled clinical trials (CCTs), and retrospective studies, with the aim of comparing the piezoelectric surgical osteotomy technique to the standard rotary instrument technique in lower third molar surgery. Postoperative sequelae (oedema, trismus, and pain), the total number of analgesics taken, and the duration of surgery were analyzed. A total of nine articles were included, six RCTs, two CCTs, and one retrospective study. Six studies had a low risk of bias and three had a moderate risk of bias. A statistically significant difference was found between piezoelectric surgery and conventional rotary instrument surgery for lower third molar extraction with regard to postoperative sequelae (oedema, trismus, and pain) and the total number of analgesics taken (P=0.0001, P=0.0001, P<0.00001, and P<0.0001, respectively). However, a statistically significant increased surgery time was required in the piezoelectric osteotomy group (P<0.00001). The results of the meta-analysis showed that piezoelectric surgery significantly reduced the occurrence of postoperative sequelae (oedema, trismus, and pain) and the total number of analgesics taken compared to the conventional rotary instrument technique in lower third molar surgery, but required a longer surgery time. Copyright © 2015 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which is important for practical applications given the slow convergence. Another interesting aspect of this work is that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
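A quick numerical illustration of the first result: for the complex Ginibre ensemble, whose eigenvalue statistics coincide with the Gaussian normal matrix model, the eigenvalue of largest modulus concentrates just above the edge of the unit disk, with fluctuations that approach a Gumbel law only slowly in N (which is why the finite-N corrections matter). The matrix size and number of trials below are arbitrary choices for the demonstration.

```python
# Numerical illustration: for the complex Ginibre ensemble the eigenvalue of
# largest modulus, suitably rescaled, approaches a Gumbel law as N grows.
import numpy as np

rng = np.random.default_rng(1)
N, trials = 200, 300
max_moduli = np.empty(trials)
for t in range(trials):
    # Entries with variance 1/N so the eigenvalue bulk fills the unit disk.
    G = (rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))) / np.sqrt(2 * N)
    max_moduli[t] = np.abs(np.linalg.eigvals(G)).max()

# The largest modulus fluctuates just above 1; its empirical distribution can be
# compared against the Gumbel limit (and its slowly varying finite-N corrections).
print("mean largest modulus:", max_moduli.mean(), "std:", max_moduli.std())
```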
Ippolito, Davide; Fior, Davide; Franzesi, Cammillo Talei; Riva, Luca; Casiraghi, Alessandra; Sironi, Sandro
2017-12-01
Effective radiation dose in coronary CT angiography (CTCA) for coronary artery bypass graft (CABG) evaluation is remarkably high because of long scan lengths. Prospective electrocardiographic gating with iterative reconstruction can reduce effective radiation dose. To evaluate the diagnostic performance of a low-kV CT angiography protocol with a prospective ECG-gating technique and an iterative reconstruction (IR) algorithm in the follow-up of CABG patients compared with a standard retrospective protocol. Seventy-four non-obese patients with known coronary disease treated with artery bypass grafting were prospectively enrolled. All patients underwent 256-slice MDCT (Brilliance iCT, Philips) CTCA using a low-dose protocol (100 kV; 800 mAs; rotation time: 0.275 s) combined with prospective ECG-triggered acquisition and a fourth-generation IR technique (iDose4; Philips); the full lengths of the bypass grafts were included in the evaluation. A control group of 42 similar patients was evaluated with a standard retrospective ECG-gated CTCA (100 kV; 800 mAs). On both CT examinations, ROIs were placed to calculate the standard deviation of pixel values and intra-vessel density. Diagnostic quality was also evaluated using a 4-point quality scale. Despite the statistically significant reduction of radiation dose evaluated with DLP (study group mean DLP: 274 mGy cm; control group mean DLP: 1224 mGy cm; P < 0.001), no statistical differences were found between the PGA group and the RGH group regarding intra-vessel density absolute values and SNR. Qualitative analysis, evaluated by two radiologists in a "double-blind" fashion, did not reveal any significant difference in the diagnostic quality of the two groups. The development of high-speed MDCT scanners combined with modern IR allows an accurate evaluation of CABG with prospective ECG-gating protocols in a single breath hold, obtaining a significant reduction in radiation dose.
Hayat, Matthew J
2014-04-01
Statistics coursework is usually a core curriculum requirement for nursing students at all degree levels. The American Association of Colleges of Nursing (AACN) establishes curriculum standards for academic nursing programs. However, the AACN provides little guidance on statistics education and does not offer standardized competency guidelines or recommendations about course content or learning objectives. Published standards may be used in the course development process to clarify course content and learning objectives. This article includes suggestions for implementing and integrating recommendations given in the Guidelines for Assessment and Instruction in Statistics Education (GAISE) report into statistics education for nursing students. Copyright 2014, SLACK Incorporated.
The role of laparoscopic Heller myotomy in the treatment of achalasia.
Zonca, P; Cambal, M; Labas, P; Hrbaty, B; Jacobi, C A
2014-01-01
To evaluate the results of laparoscopic Heller myotomy in our group of patients. A retrospective clinical trial was carried out to evaluate the indication, technique and controversies of laparoscopic Heller myotomy in achalasia treatment. The following symptoms were evaluated before and after Heller myotomy: dysphagia, heartburn, nausea/vomiting after meals and asthma/coughing. The patients were evaluated using a Likert score. Statistical analysis was performed using Student's t test. The intra-operative (operation time, intraoperative complications, blood loss, conversion rate) and peri-operative parameters (morbidity, mortality, hospital stay) were evaluated as well. The patients who underwent laparoscopic Heller myotomy were included in the trial. All patients were perioperatively managed by a multidisciplinary team. Fourteen patients were evaluated (average age: 53.2 years, eleven men, two women, BMI 23.6 kg/m²). The patients were indicated for surgery in all of the stages (I-III). Previous semiconservative therapeutic modalities had been performed in thirteen patients. The standard laparoscopic technique for Heller myotomy with semifundoplication was applied. All the observed symptoms were statistically improved after surgery (p=0.05). The average operating time was 89 minutes. Intraoperative blood loss was below 20 ml. There was no conversion to open surgery. The average hospital stay was 4.3 days. Morbidity was 14.3% and mortality 0%. In one patient, an esophageal mucosa perforation was identified intra-operatively and sutured; the post-operative course in this patient was without complications. Laparoscopic Heller myotomy has become the "gold standard" procedure for achalasia. It is an excellent method allowing precise operative technique with good visualization of the esophagogastric junction. The operation with this approach is safe and efficient, with excellent reproducible operative results. A correct and early indication for surgery is crucial. A delayed diagnosis with a late indication for surgery is not an exception (Tab. 2, Fig. 2, Ref. 36).
A process-based standard for the Solar Energetic Particle Event Environment
NASA Astrophysics Data System (ADS)
Gabriel, Stephen
For 10 years or more, there has been a lack of consensus on what the ISO standard model for the Solar Energetic Particle Event (SEPE) environment should be. Despite many technical discussions between the world experts in this field, it has been impossible to agree on which of the several models available should be selected as the standard. Most of these discussions at the ISO WG4 meetings and conferences, etc. have centred around the differences in modelling approach between the MSU model and the several remaining models from elsewhere worldwide (mainly the USA and Europe). The topic is considered timely given the inclusion of a session on reference data sets at the Space Weather Workshop in Boulder in April 2014. The original idea of a ‘process-based’ standard was conceived by Dr Kent Tobiska as a way of getting round the problems associated with the presence of different models, which in themselves could not only have quite distinct modelling approaches but could also be based on different data sets. In essence, a process-based standard approach overcomes these issues by allowing there to be more than one model and not necessarily a single standard model; however, any such model has to be completely transparent in that the data set and the modelling techniques used not only have to be clearly and unambiguously defined but must also be subject to peer review. If the model meets all of these requirements then it should be acceptable as a standard model. So how does this process-based approach resolve the differences between the existing modelling approaches for the SEPE environment and remove the impasse? In a sense, it does not remove all of the differences but only some of them; however, most importantly it will allow something which has so far been impossible without ambiguity and disagreement, and that is a comparison of the results of the various models. To date, one of the problems (if not the major one) in comparing the results of the various different SEPE statistical models has been caused by two things: 1) the data set and 2) the definition of an event. Because unravelling the dependencies of the outputs of different statistical models on these two parameters is extremely difficult if not impossible, comparison of the results from the different models is currently also extremely difficult and can lead to controversies, especially over which model is the correct one; hence, when it comes to using these models for engineering purposes to calculate, for example, the radiation dose for a particular mission, the user, who is in all likelihood not an expert in this field, could be given two (or even more) very different environments and find it impossible to know how to select one (or even how to compare them). What is proposed, then, is a process-based standard, which in common with nearly all of the current models is composed of three elements: a standard data set, a standard event definition and a resulting standard event list. A standard event list is the output of this standard and can then be used with any of the existing (or indeed future) models that are based on events. This standard event list is completely traceable and transparent and represents a reference event list for the whole community. When coupled with a statistical model, the results when compared will depend only on the statistical model and not on the data set or event definition.
Faires, Meredith C; Pearl, David L; Ciccotelli, William A; Berke, Olaf; Reid-Smith, Richard J; Weese, J Scott
2014-07-08
In healthcare facilities, conventional surveillance techniques using rule-based guidelines may result in under- or over-reporting of methicillin-resistant Staphylococcus aureus (MRSA) outbreaks, as these guidelines are generally unvalidated. The objectives of this study were to investigate the utility of the temporal scan statistic for detecting MRSA clusters, validate clusters using molecular techniques and hospital records, and determine significant differences in the rate of MRSA cases using regression models. Patients admitted to a community hospital between August 2006 and February 2011, and identified with MRSA >48 hours following hospital admission, were included in this study. Between March 2010 and February 2011, MRSA specimens were obtained for spa typing. MRSA clusters were investigated using a retrospective temporal scan statistic. Tests were conducted on a monthly scale and significant clusters were compared to MRSA outbreaks identified by hospital personnel. Associations between the rate of MRSA cases and the variables year, month, and season were investigated using a negative binomial regression model. During the study period, 735 MRSA cases were identified and 167 MRSA isolates were spa typed. Nine different spa types were identified, with spa type 2/t002 (88.6%) being the most prevalent. The temporal scan statistic identified significant MRSA clusters at the hospital (n=2), service (n=16), and ward (n=10) levels (P ≤ 0.05). Seven clusters were concordant with nine MRSA outbreaks identified by hospital staff. For the remaining clusters, seven events may have been equivalent to true outbreaks and six clusters demonstrated possible transmission events. The regression analysis indicated that years 2009-2011, compared to 2006, and the months of March and April, compared to January, were associated with an increase in the rate of MRSA cases (P ≤ 0.05). The application of the temporal scan statistic identified several MRSA clusters that were not detected by hospital personnel. The identification of specific years and months with increased MRSA rates may be attributable to several hospital-level factors including the presence of other pathogens. Within hospitals, the incorporation of the temporal scan statistic into standard surveillance techniques is a valuable tool for healthcare workers to evaluate surveillance strategies and aid in the identification of MRSA clusters.
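The regression component described above can be sketched as a negative binomial GLM of monthly case counts on year and month indicators, here fitted with statsmodels on synthetic counts; the data, and the default dispersion parameter, are assumptions for illustration only.

```python
# Sketch of a negative binomial regression of monthly case counts on year and
# month indicators, in the spirit of the model described above (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
months = pd.date_range("2006-08-01", "2011-02-01", freq="MS")
df = pd.DataFrame({"year": months.year.astype(str), "month": months.month_name()})
df["cases"] = rng.poisson(10, size=len(df)) + rng.poisson(3, size=len(df))  # fake counts

X = pd.get_dummies(df[["year", "month"]], drop_first=True).astype(float)
X = sm.add_constant(X)
model = sm.GLM(df["cases"], X, family=sm.families.NegativeBinomial()).fit()
print(model.summary().tables[1])   # rate ratios are exp() of these coefficients
```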
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
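As a simplified illustration of the idea (not the article's derivations), the sketch below computes a Wald interval for one norm statistic, the percentile rank of a raw score, using the standard error sqrt(p(1-p)/n) that follows from the multinomial assumption on the test-score distribution; the norm-group scores are synthetic.

```python
# Illustrative Wald interval for the percentile rank of a raw score, a simplified
# stand-in for the norm-statistic standard errors discussed above (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
norm_scores = rng.integers(0, 41, size=500)      # hypothetical norm-group test scores
raw_score = 28

p_hat = np.mean(norm_scores <= raw_score)        # estimated percentile rank / 100
se = np.sqrt(p_hat * (1 - p_hat) / len(norm_scores))
z = stats.norm.ppf(0.975)
ci = (100 * (p_hat - z * se), 100 * (p_hat + z * se))
print(f"percentile rank = {100 * p_hat:.1f}, 95% Wald CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```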
Anderson, Carl A; McRae, Allan F; Visscher, Peter M
2006-07-01
Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.
Singhal, Sakshi; Gurtu, Anuraag; Singhal, Anurag; Bansal, Rashmi; Mohan, Sumit
2017-08-01
This study was conducted to assess the effect of different composite materials on the cuspal deflection of premolars restored with bulk placement of resin composite in comparison to horizontal incremental placement and modified tangential incremental placement. The aim of this study was to evaluate the cuspal deflection caused by different composite materials when different insertion techniques were used. Two different composite materials were used that is Tetric N Ceram (Ivoclar Vivadent marketing, India) and SonicFill TM (Kerr Sybron Dental). Forty standardized Mesio-Occluso-Distal (MOD) preparations were prepared on maxillary first premolars. Each group was divided according to composite insertion technique (n=10), as follows: Group I - bulk insertion using Tetric N Ceram, Group II - Horizontal incremental insertion technique using Tetric N Ceram, Group III- Modified tangential incremental technique using Tetric N Ceram, and Group IV- bulk insertion using SonicFill TM . Preparations were acid-etched, and bonded with adhesive resin to provide micro mechanical attachment before restoration using a uniform etching and bonding protocol in all the groups. All groups received the same total photo-polymerization time. Cuspal deflection was measured during the restorative procedure using customized digital micrometer assembly. One-way ANOVA test was applied for the analysis of significant difference between the groups, p-value less than 0.05 was considered statistically significant. The average cuspal deflections for the different groups were as follows: Group I 0.045±0.018, Group II 0.029±0.009, Group III 0.018±0.005 and Group IV 0.017±0.004. The intergroup comparison revealed statistically significant difference. A measurable amount of cuspal deflection was present in all the four studied groups. In general, bulkfill restoration technique with conventional composite showed significantly highest cusp deflection. There were no significant differences in cuspal deflection among sonicFill TM and modified tangential incremental insertion techniques.
Pradhan, Raunak; Kulkarni, Deepak
2017-01-01
Introduction Fear of dental pain is one of the most common reasons for delaying dental treatment. Local Anaesthesia (LA) is the most commonly employed technique of achieving pain control in dentistry. Pterygomandibular Nerve Block (PNB), for achieving mandibular anaesthesia has been the traditional technique used and is associated with a few set of complications which include pain, nerve injury, trismus, and rarely facial nerve palsy, and sustained soft tissue anaesthesia. These complications have resulted in a rapid need for research on alternative local anaesthetic techniques. Aim This study was undertaken with the objective to determine pain, duration, profoundness and complications associated with administration of Intraligamentary Injection Technique (ILT). Materials and Methods This study was conducted on 194 patients (male=122, female=72) who reported for dental extractions in mandibular posteriors. The ILT was administered with ligajet intraligamentary jet injector using cartridge containing lignocaine hydrochloride 2% with adrenaline 1:80000 and a 30 gauge needle at buccal (mesiobuccal), lingual, mesial and distal aspect of the mandibular molars. The data was analyzed by using statistical computer software SPSS 11.0 (Statistical package for social sciences 11.O version of SPSS Inc.). Median was derived for Pain on Injection (PI) and Pain during Procedure (PP). Mean and standard deviation was derived for Duration of Anaesthesia (DA). Results Various advantages were seen such as, localized soft tissue anaesthesia, decreased PI (SD=0.83), and minimal PP (SD=0.94). The DA (SD=4.62) and mean value of 24.06 minutes. Conclusion This study is one of its kinds where intraligamentary injection has been used for extraction of mandibular molars. It was also successfully used in patients with exaggerated gag reflex and patients suffering from trismus due to oral submucous fibrosis. The intraligamentary injection technique can thus be used effectively to anaesthetize mandibular molars, as a primary technique for extraction of mandibular posterior teeth. PMID:28274058
Phylogeography Takes a Relaxed Random Walk in Continuous Space and Time
Lemey, Philippe; Rambaut, Andrew; Welch, John J.; Suchard, Marc A.
2010-01-01
Research aimed at understanding the geographic context of evolutionary histories is burgeoning across biological disciplines. Recent endeavors attempt to interpret contemporaneous genetic variation in the light of increasingly detailed geographical and environmental observations. Such interest has promoted the development of phylogeographic inference techniques that explicitly aim to integrate such heterogeneous data. One promising development involves reconstructing phylogeographic history on a continuous landscape. Here, we present a Bayesian statistical approach to infer continuous phylogeographic diffusion using random walk models while simultaneously reconstructing the evolutionary history in time from molecular sequence data. Moreover, by accommodating branch-specific variation in dispersal rates, we relax the most restrictive assumption of the standard Brownian diffusion process and demonstrate increased statistical efficiency in spatial reconstructions of overdispersed random walks by analyzing both simulated and real viral genetic data. We further illustrate how drawing inference about summary statistics from a fully specified stochastic process over both sequence evolution and spatial movement reveals important characteristics of a rabies epidemic. Together with recent advances in discrete phylogeographic inference, the continuous model developments furnish a flexible statistical framework for biogeographical reconstructions that is easily expanded upon to accommodate various landscape genetic features. PMID:20203288
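A toy forward simulation conveys the "relaxed" part of the model: each branch of a small hypothetical tree receives its own dispersal-rate multiplier, so some lineages diffuse much farther per unit time than a strict Brownian model would allow. The tree, the gamma rate distribution, and the base rate are all assumptions for illustration, not the article's inference machinery.

```python
# Toy simulation of a relaxed random walk along a phylogeny: each branch gets its
# own dispersal-rate multiplier, relaxing the single-rate Brownian assumption.
import numpy as np

rng = np.random.default_rng(4)
# (parent, child, branch_length) for a small hypothetical tree, parent listed first
branches = [(0, 1, 2.0), (0, 2, 2.0), (1, 3, 1.0), (1, 4, 1.0), (2, 5, 1.5), (2, 6, 1.5)]
sigma = 1.0                                    # base diffusion rate

location = {0: np.zeros(2)}                    # root at the origin of a 2-D landscape
for parent, child, t in branches:
    rate = rng.gamma(shape=2.0, scale=0.5)     # branch-specific rate multiplier
    step_sd = sigma * np.sqrt(rate * t)        # Brownian displacement scaled by rate
    location[child] = location[parent] + rng.normal(scale=step_sd, size=2)

for node, xy in location.items():
    print(node, np.round(xy, 2))
```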
Simon, Heather; Baker, Kirk R; Akhtar, Farhan; Napelenok, Sergey L; Possiel, Norm; Wells, Benjamin; Timin, Brian
2013-03-05
In setting primary ambient air quality standards, the EPA's responsibility under the law is to establish standards that protect public health. As part of the current review of the ozone National Ambient Air Quality Standard (NAAQS), the US EPA evaluated the health exposure and risks associated with ambient ozone pollution using a statistical approach to adjust recent air quality to simulate just meeting the current standard level, without specifying emission control strategies. One drawback of this purely statistical concentration rollback approach is that it does not take into account spatial and temporal heterogeneity of ozone response to emissions changes. The application of the higher-order decoupled direct method (HDDM) in the community multiscale air quality (CMAQ) model is discussed here to provide an example of a methodology that could incorporate this variability into the risk assessment analyses. Because this approach includes a full representation of the chemical production and physical transport of ozone in the atmosphere, it does not require assumed background concentrations, which have been applied to constrain estimates from past statistical techniques. The CMAQ-HDDM adjustment approach is extended to measured ozone concentrations by determining typical sensitivities at each monitor location and hour of the day based on a linear relationship between first-order sensitivities and hourly ozone values. This approach is demonstrated by modeling ozone responses for monitor locations in Detroit and Charlotte to domain-wide reductions in anthropogenic NOx and VOCs emissions. As seen in previous studies, ozone response calculated using HDDM compared well to brute-force emissions changes up to approximately a 50% reduction in emissions. A new stepwise approach is developed here to apply this method to emissions reductions beyond 50% allowing for the simulation of more stringent reductions in ozone concentrations. Compared to previous rollback methods, this application of modeled sensitivities to ambient ozone concentrations provides a more realistic spatial response of ozone concentrations at monitors inside and outside the urban core and at hours of both high and low ozone concentrations.
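The adjustment itself amounts to a Taylor expansion of ozone in the emissions scaling factor using the first- and second-order HDDM sensitivity coefficients; a minimal sketch, with purely illustrative coefficient values, is given below.

```python
# Sketch of adjusting an hourly ozone value with first- and second-order HDDM
# sensitivity coefficients via a Taylor expansion in the emissions scaling factor.
# Coefficient values below are purely illustrative, not CMAQ-HDDM output.
def adjust_ozone(o3_base, s1, s2, emis_fraction):
    """Estimate ozone when anthropogenic emissions are scaled to emis_fraction of
    the base case (1.0 = no change), using O3(e) ~ O3 + S1*de + 0.5*S2*de**2."""
    de = emis_fraction - 1.0
    return o3_base + s1 * de + 0.5 * s2 * de ** 2

o3_base = 72.0        # ppb, hypothetical monitored hourly value
s1, s2 = 40.0, -15.0  # ppb per unit emissions change (assumed sensitivities)

for frac in (0.9, 0.7, 0.5):
    print(frac, round(adjust_ozone(o3_base, s1, s2, frac), 1))
```

For reductions beyond roughly 50%, the abstract's stepwise approach would re-evaluate the sensitivities at intermediate emission levels rather than relying on a single expansion point, which this sketch does not attempt.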
Apically-extruded debris using the ProTaper system.
Azar, Nasim Gheshlaghi; Ebrahimi, Gholamreza
2005-04-01
The purpose of this in vitro study was to determine the quantity of debris and irrigant extruded apically using the ProTaper system compared to ProFiles and K-Flexofiles. Thirty-six mesio-buccal root canals of human mandibular molars were selected and divided into three groups of twelve canals. Two groups were instrumented with ProFiles and ProTapers according to the manufacturer's instructions. The other group was instrumented with K-Flexofiles using the step-back technique. A standard amount of irrigant was used for each canal. Apically-extruded debris and irrigant was collected in pre-weighed vials. The mean weight of extruded debris and irrigant for each group was statistically analysed using Student's t-test and one-way ANOVA. All instrumentation techniques produced extruded debris and irrigant. Although the mean amount of extrusion with the step-back technique was higher than the two rotary systems, there was no significant difference between the three groups (p > 0.05). NiTi rotary systems were associated with less apical extrusion, but were not significantly better than hand file instrumentation. All techniques extruded debris.
NASA Astrophysics Data System (ADS)
Stock, Michala K.; Stull, Kyra E.; Garvin, Heather M.; Klales, Alexandra R.
2016-10-01
Forensic anthropologists are routinely asked to estimate a biological profile (i.e., age, sex, ancestry and stature) from a set of unidentified remains. In contrast to the abundance of collections and techniques associated with adult skeletons, there is a paucity of modern, documented subadult skeletal material, which limits the creation and validation of appropriate forensic standards. Many practitioners are forced to use antiquated methods derived from small sample sizes, which, given documented secular changes in the growth and development of children, are not appropriate for application in the medico-legal setting. Therefore, the aim of this project is to use multi-slice computed tomography (MSCT) data from a large, diverse sample of modern subadults to develop new methods to estimate subadult age and sex for practical forensic applications. The research sample will consist of over 1,500 full-body MSCT scans of modern subadult individuals (aged birth to 20 years) obtained from two U.S. medical examiners' offices. Statistical analysis of epiphyseal union scores, long bone osteometrics, and os coxae landmark data will be used to develop modern subadult age and sex estimation standards. This project will result in a database of information gathered from the MSCT scans, as well as the creation of modern, statistically rigorous standards for skeletal age and sex estimation in subadults. Furthermore, the research and methods developed in this project will be applicable to dry bone specimens, MSCT scans, and radiographic images, thus providing both tools and continued access to data for forensic practitioners in a variety of settings.
Measuring and monitoring biological diversity: Standard methods for mammals
Wilson, Don E.; Cole, F. Russell; Nichols, James D.; Rudran, Rasanayagam; Foster, Mercedes S.
1996-01-01
Measuring and Monitoring Biological Diversity: Standard Methods for Mammals provides a comprehensive manual for designing and implementing inventories of mammalian biodiversity anywhere in the world and for any group, from rodents to open-country grazers. The book emphasizes formal estimation approaches, which supply data that can be compared across habitats and over time. Beginning with brief natural histories of the twenty-six orders of living mammals, the book details the field techniques—observation, capture, and sign interpretation—appropriate to different species. The contributors provide guidelines for study design, discuss survey planning, describe statistical techniques, and outline methods of translating field data into electronic formats. Extensive appendixes address such issues as the ethical treatment of animals in research, human health concerns, preserving voucher specimens, and assessing age, sex, and reproductive condition in mammals. Useful in both developed and developing countries, this volume and the Biological Diversity Handbook Series as a whole establish essential standards for a key aspect of conservation biology and resource management.
NASA Technical Reports Server (NTRS)
Chadwick, C.
1984-01-01
This paper describes the development and use of an algorithm to compute approximate statistics of the magnitude of a single random trajectory correction maneuver (TCM) Delta v vector. The TCM Delta v vector is modeled as a three component Cartesian vector each of whose components is a random variable having a normal (Gaussian) distribution with zero mean and possibly unequal standard deviations. The algorithm uses these standard deviations as input to produce approximations to (1) the mean and standard deviation of the magnitude of Delta v, (2) points of the probability density function of the magnitude of Delta v, and (3) points of the cumulative and inverse cumulative distribution functions of Delta v. The approximations are based on Monte Carlo techniques developed in a previous paper by the author and extended here. The algorithm described is expected to be useful in both pre-flight planning and in-flight analysis of maneuver propellant requirements for space missions.
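A present-day version of the underlying Monte Carlo idea is easy to sketch: draw many three-component Gaussian vectors with unequal component standard deviations, take their magnitudes, and read off the mean, standard deviation, and distribution points. The sigma values below are illustrative, not from the paper.

```python
# Monte Carlo sketch of the statistics of |Delta v| when its three Cartesian
# components are independent zero-mean Gaussians with unequal standard deviations.
import numpy as np

rng = np.random.default_rng(5)
sigmas = np.array([1.0, 2.5, 0.7])           # m/s, illustrative component sigmas
samples = rng.normal(scale=sigmas, size=(200_000, 3))
dv_mag = np.linalg.norm(samples, axis=1)

print("mean |dv| =", dv_mag.mean())
print("std  |dv| =", dv_mag.std())
print("95th pct  =", np.percentile(dv_mag, 95))   # an inverse-cumulative point
```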
Manfredelli, Simone; Montalto, Gioacchino; Leonetti, Giovanni; Covotta, Marco; Amatucci, Chiara; Covotta, Alfredo; Forte, Angelo
2012-01-01
Interest in hemorrhoids is related to their high incidence and the elevated social costs that derive from their treatment. Several comparative studies are reported in the literature to define a standard for the ideal treatment of hemorrhoidal disease. Radical surgery is the only therapeutic option in case of stage III and IV haemorrhoids. Hemorrhoid surgical techniques are classified as open, closed and stapled. We report our decennial experience with surgical treatment, focusing on early, middle and late complications, indications and contraindications, and the satisfaction level of each surgical procedure for hemorrhoids. Four hundred forty-eight patients were hospitalized in our department from 1 January to 31 December 2008. Of these, 241 underwent surgery with a traditional open or closed technique and 207 with the SH technique according to Longo. This retrospective study includes only patients with symptomatic hemorrhoids at stage III or IV. There were no differences between CH and SH regarding either pre- and post-surgery hospitalization or operative time. Pain is the most frequently observed early complication, with a statistically significant difference in favour of SH. We obtained good results in the CH group using anoderm sparing and perianal anaesthetic infiltration at the end of the surgery. In all cases, pain relief was obtained only with standard analgesic drugs (NSAIDs). We also observed that pain level influences the outcome after surgical treatment. No cases of chronic pain were observed in either group. Bleeding is another relevant early complication, in particular after SH: we reported 2 cases of immediate surgical reintervention and 2 cases treated with blood transfusion. Only in the SH group did we also report 5 cases of thrombosis of external haemorrhoids and 7 cases of perianal hematoma, both complications resolved with medical therapy. There were no statistically significant differences between the two groups regarding fever, incontinence to flatus, urinary retention, fecal incontinence, substenosis and anal burning. No cases of anal stenosis were observed. Regarding late complications, the most frequently observed were rectal prolapse and hemorrhoidal recurrence, especially after SH. Our experience confirms the validity of both CH and SH. Failure may be related to a wrong surgical indication or technical execution. Certainly the CH procedure is more invasive and slightly more painful in the immediate postoperative period than SH surgery, which is slightly more expensive and has more complications. In our opinion, the high risk of possible early and immediate complications after surgery requires a hospitalization of at least 24 hours. SH is the gold standard for grade III haemorrhoids with mucous prolapse, while CH is suggested in grade IV cases. The haemorrhoidal artery ligation operation (HALO) technique in grade III and IV disease needs further validation.
Laser Welding and Syncristallization Techniques Comparison: In Vitro Study
Fornaini, C.; Merigo, E.; Vescovi, P.; Meleti, M.; Nammour, S.
2012-01-01
Background. Laser welding was first reported in 1967 and for many years it has been used in dental laboratories with several advantages over the conventional technique. The authors described, in previous works, the possibility of also using a chair-side Nd:YAG laser device (Fotona Fidelis III, λ = 1064 nm) for welding metallic parts of prosthetic appliances directly in the dental office, extra- and also intra-orally. Syncristallisation is a soldering technique based on the creation of an electric arc between two electrodes and used to connect implants to bars intra-orally. Aim. The aim of this study was to compare two different laser welding devices with a soldering machine, all of them used in prosthetic dentistry. Material and Methods. In-lab Nd:YAG laser welding (group A = 12 samples), chair-side Nd:YAG laser welding (group B = 12 samples), and an electrowelder (group C = 12 samples) were used. The tests were performed on 36 CrCoMo plates and the analysis consisted of evaluation, by microscopic observation, of the number of fissures in the welded areas of groups A and B, and measurement of the welding strength in all the groups. The results were statistically analysed by means of one-way ANOVA and Tukey-Kramer multiple comparison tests. Results. The means and standard deviations for the number of fissures in the welded areas were 8.12 ± 2.59 for group A and 5.20 ± 1.38 for group B. The difference was statistically significant (P = 0.0023 at the 95% level). On the other hand, the means and standard deviations for the traction tests were 1185.50 ± 288.56 N for group A, 896.41 ± 120.84 N for group B, and 283.58 ± 84.98 N for group C. The difference was statistically significant (P = 0.01 at the 95% level). Conclusion. The joints obtained by the welding devices had a significantly higher strength compared with those obtained by the electrowelder, and the comparison between the two laser devices used demonstrated that the chair-side Nd:YAG, despite giving a lower strength to the joints, produced the lowest number of fissures in the welded area. PMID:22778737
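The statistical analysis named above (one-way ANOVA followed by Tukey-Kramer comparisons) can be reproduced in outline as follows; the data are random draws whose means and spreads merely mimic the reported traction results, so the printed numbers are illustrative rather than the study's.

```python
# One-way ANOVA with Tukey HSD (Tukey-Kramer) post-hoc comparison on hypothetical
# tensile strengths for three joining methods (not the study's raw data).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(6)
lab_laser = rng.normal(1185, 289, 12)
chairside_laser = rng.normal(896, 121, 12)
electrowelder = rng.normal(284, 85, 12)

f_stat, p_value = stats.f_oneway(lab_laser, chairside_laser, electrowelder)
print(f"ANOVA: F = {f_stat:.1f}, p = {p_value:.3g}")

values = np.concatenate([lab_laser, chairside_laser, electrowelder])
groups = ["in-lab"] * 12 + ["chair-side"] * 12 + ["electrowelder"] * 12
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```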
Arora, Mansi; Kohli, Shivani; Kalsi, Rupali
2016-05-01
The dual arch impression technique represents an important improvement in fixed prosthodontics and has numerous benefits over conventional impression techniques. The accuracy of working dies fabricated from the dual arch impression technique remains in question because there is little information available in the literature. This study was conducted to compare the accuracy of working dies fabricated from impressions made with two different viscosities of impression materials using metal and plastic dual arch trays and custom made acrylic trays. The study samples were divided into two groups based on the viscosity of impression material used: Group I used a monophase material, whereas Group II used a dual-mix technique with a combination of light and heavy body materials. These were further divided into three subgroups A, B and C depending on the type of impression tray used (metal dual arch tray, plastic dual arch tray and custom made tray). Measurements of the master cast were made using a profile projector. Descriptive statistics such as the mean and Standard Deviation (SD) were calculated for all the groups. One-way analysis of variance (ANOVA) was used for multiple group comparisons. A p-value of 0.05 or less was considered statistically significant. The gypsum dies obtained with the three types of impression trays using the two groups of impression materials were smaller in dimension than the master models. The plastic dual arch trays produced dies which were the least accurate of the three groups. There was no significant difference in the die dimensions obtained using the two viscosities of impression materials.
Automatic brain tumor detection in MRI: methodology and statistical validation
NASA Astrophysics Data System (ADS)
Iftekharuddin, Khan M.; Islam, Mohammad A.; Shaik, Jahangheer; Parra, Carlos; Ogg, Robert
2005-04-01
Automated brain tumor segmentation and detection are immensely important in medical diagnostics because they provide information on anatomical structures as well as potential abnormal tissue, which is necessary for appropriate surgical planning. In this work, we propose a novel automated brain tumor segmentation technique based on multiresolution texture information that combines fractal Brownian motion (fBm) and wavelet multiresolution analysis. Our wavelet-fractal technique combines the excellent multiresolution localization property of wavelets with the texture-extraction capability of fractals. We demonstrate the efficacy of our technique by successfully segmenting pediatric brain MR images (MRIs) from St. Jude Children's Research Hospital. We use a self-organizing map (SOM) as our clustering tool, wherein we exploit both pixel intensity and multiresolution texture features to obtain the segmented tumor. Our test results show that our technique successfully segments abnormal brain tissues in a set of T1 images. In the next step, we design a classifier using a feed-forward (FF) neural network to statistically validate the presence of tumor in MRI using both the multiresolution texture and the pixel intensity features. We estimate the corresponding receiver operating characteristic (ROC) curve based on the true positive fractions and false positive fractions estimated from our classifier at different threshold values. The ROC curve, which can be considered a gold standard for demonstrating the competence of a classifier, is obtained to ascertain the sensitivity and specificity of our classifier. We observe that at a threshold of 0.4 we achieve a true positive fraction of 1.0 (100%) while sacrificing only a 0.16 (16%) false positive fraction for the set of 50 T1 MRIs analyzed in this experiment.
Understanding disparities among diagnostic technologies in glaucoma.
De Moraes, Carlos Gustavo V; Liebmann, Jeffrey M; Ritch, Robert; Hood, Donald C
2012-07-01
To investigate causes of disagreement among 3 glaucoma diagnostic techniques: standard automated achromatic perimetry (SAP), the multifocal visual evoked potential technique (mfVEP), and optical coherence tomography (OCT). In a prospective cross-sectional study, 138 eyes of 69 patients with glaucomatous optic neuropathy were tested using SAP, the mfVEP, and OCT. Eyes with the worse and better mean deviations (MDs) were analyzed separately. If the results of 2 tests were consistent for the presence of an abnormality in the same topographic site, that abnormality was considered a true glaucoma defect. If a third test missed that abnormality (false-negative result), the reasons for the disparity were investigated. Eyes with worse MD (mean [SD], -6.8 [8.0] dB) had better agreement among tests than did eyes with better MD (-2.5 [3.5] dB, P<.01). For the 94 of 138 hemifields with abnormalities in the more advanced eyes, the 3 tests were consistent in showing the same hemifield abnormality in 50 hemifields (53%), and at least 2 tests were abnormal in 65 of the 94 hemifields (69%). The potential explanations for the false-negative results fell into 2 general categories: inherent limitations of each technique in detecting distinct features of glaucoma, and individual variability and the distribution of normative values used to define statistically significant abnormalities. All the cases of disparity could be explained by known limitations of each technique and interindividual variability, suggesting that the agreement among diagnostic tests may be better than summary statistics suggest and that disagreements between tests do not indicate discordance in the structure-function relationship.
Kohli, Shivani; Kalsi, Rupali
2016-01-01
Introduction Dual arch impression technique signifies an essential improvement in fixed prosthodontics and has numerous benefits over conventional impression techniques. The accuracy of working dies fabricated from the dual arch impression technique remains in question because little information is available in the literature. Aim This study was conducted to compare the accuracy of working dies fabricated from impressions made with two different viscosities of impression materials using metal dual arch trays, plastic dual arch trays and custom made acrylic trays. Materials and Methods The study samples were divided into two groups based on the viscosity of the impression material used: Group I used a monophase technique, whereas Group II used a dual-mix technique combining light- and heavy-body materials. These were further divided into three subgroups A, B and C depending on the type of impression tray used (metal dual arch tray, plastic dual arch tray and custom made tray). Measurements of the master cast were made using a profile projector. Descriptive statistics (mean and standard deviation, SD) were calculated for all the groups. One way analysis of variance (ANOVA) was used for multiple group comparisons. A p-value of 0.05 or less was considered statistically significant. Results The gypsum dies obtained with the three types of impression trays using the two groups of impression materials were smaller than the master models in dimensions. Conclusion The plastic dual arch trays produced dies which were the least accurate of the three groups. There was no significant difference in the die dimensions obtained using the two viscosities of impression materials. PMID:27437342
Karain, Wael I
2017-11-28
Proteins undergo conformational transitions over different time scales. These transitions are closely intertwined with the protein's function. Numerous standard techniques, such as principal component analysis, are used to detect these transitions in molecular dynamics simulations. In this work, we add a new method that detects transitions in dynamics based on the recurrences in the dynamical system. It combines bootstrapping and recurrence quantification analysis. We start from the assumption that a protein has a "baseline" recurrence structure over a given period of time. Any statistically significant deviation from this recurrence structure, as inferred from complexity measures provided by recurrence quantification analysis, is considered a transition in the dynamics of the protein. We apply this technique to a 132 ns molecular dynamics simulation of the β-Lactamase Inhibitory Protein BLIP. We are able to detect conformational transitions in the nanosecond range in the recurrence dynamics of the BLIP protein during the simulation. The results compare favorably to those extracted using the principal component analysis technique. The recurrence quantification analysis based bootstrap technique is able to detect transitions between different dynamic states of a protein over different time scales. It is not limited to linear dynamics regimes and can be generalized to any time scale. It also has the potential to be used to cluster frames in molecular dynamics trajectories according to the nature of their recurrence dynamics. One shortcoming of this method is the need for sufficiently large time windows to ensure good statistical quality of the recurrence complexity measures needed to detect the transitions.
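As an illustration of the kind of windowed recurrence-plus-bootstrap analysis described above, the following Python sketch flags windows whose recurrence rate deviates from a bootstrapped baseline. It is a minimal sketch, not the authors' implementation: the per-frame feature vector, window length, recurrence threshold and the use of recurrence rate alone (rather than the full set of RQA complexity measures) are illustrative assumptions.

```python
import numpy as np

def recurrence_rate(window, eps):
    """Fraction of frame pairs in the window whose distance falls below eps."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    mask = ~np.eye(len(window), dtype=bool)          # ignore self-recurrences
    return (d[mask] < eps).mean()

def detect_transitions(features, win=200, eps=1.0, n_boot=500, alpha=0.01):
    """Flag windows whose recurrence rate deviates from a bootstrapped baseline."""
    rng = np.random.default_rng(0)
    n_win = len(features) // win
    rr = np.array([recurrence_rate(features[i*win:(i+1)*win], eps)
                   for i in range(n_win)])
    # Bootstrap baseline: recurrence rates of windows built from randomly resampled frames.
    boot = np.array([recurrence_rate(features[rng.integers(0, len(features), win)], eps)
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return np.where((rr < lo) | (rr > hi))[0], rr

# Hypothetical trajectory: 10000 frames reduced to a 3-component feature vector each.
feats = np.cumsum(np.random.default_rng(1).normal(size=(10000, 3)), axis=0) * 0.01
flagged, rr = detect_transitions(feats)
print("windows flagged as possible transitions:", flagged)
```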
Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang
2018-01-01
Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks (proton exchange) in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as a suitable internal standard. Experimental conditions, including relaxation delay time, number of scans, and pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high performance liquid chromatography, the method provided by the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and for the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
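For context, the quantitation step in internal-standard qNMR typically rests on the standard relation below; the symbols are generic textbook notation and are not taken from this paper:

$$ P_{a} = \frac{I_{a}}{I_{\mathrm{std}}}\cdot\frac{N_{\mathrm{std}}}{N_{a}}\cdot\frac{M_{a}}{M_{\mathrm{std}}}\cdot\frac{m_{\mathrm{std}}}{m_{a}}\cdot P_{\mathrm{std}} $$

where $I$ is the integrated signal area, $N$ the number of nuclei contributing to the signal, $M$ the molar mass, $m$ the weighed mass, and $P$ the purity, for the analyte $a$ and the internal standard.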
Shojaee, Jalil; Moosazadeh, Mahmood
2014-02-01
Applying Prevention and Control of Infection (PCI) standards in hospitals reduces probable risks to patients, staff and visitors; it also increases efficiency and ultimately improves the productivity of hospitals. The current study aimed to determine the status of international PCI standards in hospitals located in the north of Iran. This cross-sectional study was conducted in 23 hospitals. The data collection tool was a questionnaire with confirmed validity and reliability. In this regard, 260 managers, section supervisors and infection control nurses participated in the study, selected on a census basis. SPSS software version 16 was employed to analyze the data through descriptive and analytical statistics. Among the studied hospitals, 18 were public. The hospitals met 77.2% of the leadership and programming standards, 80.8% of the program focus standards, 67.4% of the isolation methods standards, 88.2% of the hand hygiene and protection techniques standards, 78.8% of the patient safety and quality improvement standards, and 90.3% of the personnel training standards, with an overall average compliance with PCI standards of 78.7%. This study revealed that PCI standards were observed to a considerable extent in the studied hospitals and that the necessary conditions exist for full deployment of nosocomial infection surveillance.
Pulsating stars and the distance scale
NASA Astrophysics Data System (ADS)
Macri, Lucas
2017-09-01
I present an overview of the latest results from the SH0ES project, which obtained homogeneous Hubble Space Telescope (HST) photometry in the optical and near-infrared for ˜ 3500 and ˜ 2300 Cepheids, respectively, across 19 supernova hosts and 4 calibrators to determine the value of H0 with a total uncertainty of 2.4%. I discuss the current 3.4σ "tension" between this local measurement and predictions of H0 based on observations of the CMB and the assumption of "standard" ΛCDM. I review ongoing efforts to reach σ(H0) = 1%, including recent advances on the absolute calibration of Milky Way Cepheid period-luminosity relations (PLRs) using a novel astrometric technique with HST. Lastly, I highlight recent results from another collaboration on the development of new statistical techniques to detect, classify and phase extragalactic Miras using noisy and sparsely-sampled observations. I present preliminary Mira PLRs at various wavelengths based on the application of these techniques to a survey of M33.
Environmental assessment of Al-Hammar Marsh, Southern Iraq.
Al-Gburi, Hind Fadhil Abdullah; Al-Tawash, Balsam Salim; Al-Lafta, Hadi Salim
2017-02-01
(a) To determine the spatial distributions and levels of major and minor elements, as well as heavy metals, in water, sediment, and biota (plant and fish) in Al-Hammar Marsh, southern Iraq, and ultimately to supply more comprehensive information for policy-makers to manage the input of contaminants into the marsh so that their concentrations do not reach toxic levels. (b) To characterize the seasonal changes in the marsh surface water quality. (c) To address the potential environmental risk of these elements by comparison with historical levels and global quality guidelines (i.e., World Health Organization (WHO) standard limits). (d) To define the sources of these elements (i.e., natural and/or anthropogenic) using combined multivariate statistical techniques such as Principal Component Analysis (PCA) and Agglomerative Hierarchical Cluster Analysis (AHCA) along with pollution analysis (i.e., enrichment factor analysis). Water, sediment, plant, and fish samples were collected from the marsh, analyzed for major and minor ions as well as heavy metals, and then compared with historical levels and global quality guidelines (WHO guidelines). Multivariate statistical techniques, such as PCA and AHCA, were then used to determine the element sourcing. Water analyses revealed unacceptable values for almost all physico-chemical and biological properties, according to WHO standard limits for drinking water. Almost all major ions and heavy metal concentrations in water showed a distinct decreasing trend at the marsh outlet station compared with other stations. In general, major and minor ions, as well as heavy metals, exhibit higher concentrations in winter than in summer. Sediment analyses using multivariate statistical techniques revealed that Mg, Fe, S, P, V, Zn, As, Se, Mo, Co, Ni, Cu, Sr, Br, Cd, Ca, N, Mn, Cr, and Pb were derived from anthropogenic sources, while Al, Si, Ti, K, and Zr were primarily derived from natural sources. Enrichment factor analysis gave results compatible with the multivariate statistical findings. Analysis of heavy metals in plant samples revealed no pollution in plants in Al-Hammar Marsh. However, the concentrations of heavy metals in fish showed that all samples were contaminated by Pb, Mn, and Ni. The decrease in Tigris and Euphrates discharges during the past decades due to drought conditions and upstream damming, as well as the increasing stress of wastewater effluents from anthropogenic activities, has led to degradation of the downstream Al-Hammar Marsh water quality in terms of physical, chemical, and biological properties, which were found to consistently exceed the historical and global quality objectives. However, the decreasing trend of element concentrations at the marsh outlet station compared with other stations indicates that the marsh plays an important role as a natural filtration and bioremediation system. Higher element concentrations in winter were due to runoff from the washing of the surrounding Sabkha during flooding by winter rainstorms. Finally, the high concentrations of heavy metals in fish samples can be attributed to bioaccumulation and biomagnification processes.
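The source-apportionment workflow named above (PCA, agglomerative hierarchical clustering, and enrichment factors) can be sketched in a few lines of Python. This is a hedged illustration only: the element list, the crustal reference values, and the use of Al as the normalizing element are assumptions for demonstration, not data from the study.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sediment table: rows = sampling stations, columns = element concentrations (ppm).
df = pd.DataFrame(np.random.default_rng(0).lognormal(mean=3, size=(20, 6)),
                  columns=["Al", "Fe", "Zn", "Pb", "Ni", "Cd"])
X = StandardScaler().fit_transform(df)

# PCA: components group elements that co-vary, suggesting a common (natural or anthropogenic) source.
scores = PCA(n_components=2).fit_transform(X)

# Agglomerative hierarchical clustering (Ward linkage) of the stations.
clusters = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")

# Enrichment factor, EF = (C_x / C_Al)_sample / (C_x / C_Al)_crust; EF >> 1 hints at anthropogenic input.
crust = pd.Series({"Al": 80400, "Fe": 35000, "Zn": 71, "Pb": 20, "Ni": 20, "Cd": 0.1})  # illustrative values
ef = df.div(df["Al"], axis=0) / (crust / crust["Al"])
print(ef.round(1).head())
```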
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the situation of the coconut industry in the Philippines in future years by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data. An appropriate Box-Jenkins autoregressive moving average model was fitted. The validity of the model was tested using standard statistical techniques. The forecasting power of the fitted autoregressive moving average (ARMA) model was then used to forecast coconut production for the following eight years.
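A minimal Python sketch of the Box-Jenkins workflow described above is given below (using statsmodels); the production figures and the (1, 1, 1) order are illustrative placeholders, not the series or the model identified in the study.

```python
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical annual coconut production series (million tonnes), 1990-2012; values are made up.
years = pd.period_range("1990", "2012", freq="Y")
prod = pd.Series([11.9, 11.3, 11.4, 11.3, 11.2, 12.2, 11.9, 12.1, 12.8, 12.0, 12.9, 13.1,
                  13.7, 14.3, 14.4, 14.8, 14.9, 14.8, 15.3, 15.7, 15.5, 15.2, 15.8],
                 index=years)

plot_acf(prod, lags=10)    # ACF and PACF plots guide the choice of (p, d, q)
plot_pacf(prod, lags=10)
plt.show()

model = ARIMA(prod, order=(1, 1, 1)).fit()   # order chosen for illustration only
print(model.summary())
print(model.forecast(steps=8))               # forecasts for the following eight years
```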
Recommendations for research design of telehealth studies.
Chumbler, Neale R; Kobb, Rita; Brennan, David M; Rabinowitz, Terry
2008-11-01
Properly designed randomized controlled trials (RCTs) are the gold standard to use when examining the effectiveness of telehealth interventions on clinical outcomes. Some published telehealth studies have employed well-designed RCTs. However, such methods are not always feasible and practical in particular settings. This white paper addresses not only the need for properly designed RCTs, but also offers alternative research designs, such as quasi-experimental designs, and statistical techniques that can be employed to rigorously assess the effectiveness of telehealth studies. This paper further offers design and measurement recommendations aimed at and relevant to administrative decision-makers, policymakers, and practicing clinicians.
Competing risks models and time-dependent covariates
Barnett, Adrian; Graves, Nick
2008-01-01
New statistical models for analysing survival data in an intensive care unit context have recently been developed. Two models that offer significant advantages over standard survival analyses are competing risks models and multistate models. Wolkewitz and colleagues used a competing risks model to examine survival times for nosocomial pneumonia and mortality. Their model was able to incorporate time-dependent covariates and so examine how risk factors that changed with time affected the chances of infection or death. We briefly explain how an alternative modelling technique (using logistic regression) can more fully exploit time-dependent covariates for this type of data. PMID:18423067
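One way to read "more fully exploit time-dependent covariates" with logistic regression is the discrete-time (person-period) formulation sketched below. This is a generic illustration, not the authors' analysis: the data, variable names, and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical ICU data in person-period format: one row per patient-day.
# A time-dependent covariate (here, mechanical ventilation) is simply a column
# whose value can change from one day to the next for the same patient.
rows = []
for pid in range(300):
    ventilated = 0
    for day in range(1, int(rng.integers(3, 15))):
        ventilated = max(ventilated, int(rng.random() < 0.10))  # may switch on over time
        event = int(rng.random() < 0.02 + 0.05 * ventilated)    # nosocomial infection today?
        rows.append({"pid": pid, "day": day, "ventilated": ventilated, "event": event})
        if event:
            break
df = pd.DataFrame(rows)

# Discrete-time hazard model: logistic regression of the daily event indicator
# on time in the unit and the current value of the time-dependent covariate.
fit = smf.logit("event ~ day + ventilated", data=df).fit(disp=0)
print(fit.summary())
```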
Rapid Vision Correction by Special Operations Forces.
Reynolds, Mark E
This report describes a rapid method of vision correction used by Special Operations Medics in multiple operational engagements. Between 2011 and 2015, Special Operations Medics used an algorithm-driven refraction technique. A standard block of instruction was provided to the medics, along with a packaged kit. The technique was used in multiple operational engagements with host nation military and civilians. Data collected for program evaluation were later analyzed to assess the utility of the technique. Glasses were distributed to 230 patients with complaints of either decreased distance or near (reading) vision. Most patients (84%) with distance complaints achieved corrected binocular vision of 20/40 or better, and 97% of patients with near-vision complaints achieved corrected near-binocular vision of 20/40 or better. There was no statistically significant difference between the percentages of patients achieving 20/40 when medics used the technique under direct supervision versus independent use. A basic refraction technique using a designed kit allows for meaningful improvement in distance and/or near vision at austere locations. Special Operations Medics can leverage this approach after specific training with minimal time commitment. It can serve as a rapid, effective intervention with multiple applications in diverse operational environments.
Rohling, Martin L; Williamson, David J; Miller, L Stephen; Adams, Russell L
2003-11-01
The aim of this project was to validate an alternative global measure of neurocognitive impairment (Rohling Interpretive Method, or RIM) that could be generated from data gathered from a flexible battery approach. A critical step in this process is to establish the utility of the technique against current standards in the field. In this paper, we compared results from the Rohling Interpretive Method to those obtained from the General Neuropsychological Deficit Scale (GNDS; Reitan & Wolfson, 1988) and the Halstead-Russell Average Impairment Rating (AIR; Russell, Neuringer & Goldstein, 1970) on a large previously published sample of patients assessed with the Halstead-Reitan Battery (HRB). Findings support the use of the Rohling Interpretive Method in producing summary statistics similar in diagnostic sensitivity and specificity to the traditional HRB indices.
Curve fitting air sample filter decay curves to estimate transuranic content.
Hayes, Robert B; Chiou, Hung Cheng
2004-01-01
By testing industry standard techniques for radon progeny evaluation on air sample filters, a new technique is developed to evaluate transuranic activity on air filters by curve fitting the decay curves. The industry method modified here is simply the use of filter activity measurements at different times to estimate the air concentrations of radon progeny. The primary modification was to not look for specific radon progeny values but rather transuranic activity. By using a method that will provide reasonably conservative estimates of the transuranic activity present on a filter, some credit for the decay curve shape can then be taken. By carrying out rigorous statistical analysis of the curve fits to over 65 samples having no transuranic activity taken over a 10-mo period, an optimization of the fitting function and quality tests for this purpose was attained.
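The curve-fitting idea can be illustrated with a simplified model: one effective exponential for the short-lived radon progeny plus a constant term for long-lived transuranic activity. This is a sketch under that simplifying assumption, not the optimized fitting function developed in the paper; the count times, rates, and effective decay constant are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def filter_activity(t, a_rn, lam_rn, a_tru):
    """Simplified decay model: short-lived radon progeny (single effective exponential)
    plus an essentially constant long-lived transuranic component."""
    return a_rn * np.exp(-lam_rn * t) + a_tru

# Hypothetical filter counts: time after sampling (minutes) and gross count rate (cpm).
t = np.array([10, 30, 60, 120, 240, 480, 720], dtype=float)
obs = 500 * np.exp(-0.016 * t) + 3.0 + np.random.default_rng(0).normal(0, 2, t.size)

popt, pcov = curve_fit(filter_activity, t, obs, p0=[400.0, 0.02, 1.0],
                       bounds=([0, 0, 0], [np.inf, np.inf, np.inf]))
a_tru, a_tru_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"estimated transuranic component: {a_tru:.1f} ± {a_tru_err:.1f} cpm")
```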
Enhance Video Film using Retnix method
NASA Astrophysics Data System (ADS)
Awad, Rasha; Al-Zuky, Ali A.; Al-Saleh, Anwar H.; Mohamad, Haidar J.
2018-05-01
An enhancement technique is used to improve the quality of the studied video. Algorithms based on the mean and standard deviation are used as criteria within this paper, applied to each video clip, which is divided into 80 images. The studied filming environment has different light intensities (315, 566, and 644 Lux). These different environments give a realistic approximation of outdoor filming. The outputs of the suggested algorithm are compared with the results before applying it. The method is applied in two ways: first, it is applied to the full video clip to obtain the enhanced film; second, it is applied to every individual image to obtain the enhanced images, which are then compiled into the enhanced film. This paper shows that the enhancement technique gives a good quality video film based on a statistical method, and it is recommended for use in different applications.
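A minimal sketch of the per-frame mean and standard-deviation criteria is shown below, assuming the clip has already been decoded into grayscale numpy arrays; the frame count, size, and pixel values are placeholders.

```python
import numpy as np

def frame_statistics(frames):
    """Per-frame mean (brightness) and standard deviation (contrast) used as quality criteria."""
    means = np.array([f.mean() for f in frames])
    stds = np.array([f.std() for f in frames])
    return means, stds

# Hypothetical clip: 80 grayscale frames of 120 x 160 pixels filmed under low light.
rng = np.random.default_rng(0)
frames = rng.integers(0, 90, size=(80, 120, 160)).astype(np.uint8)

means, stds = frame_statistics(frames)
print(f"clip mean brightness: {means.mean():.1f}, mean contrast (std): {stds.mean():.1f}")
```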
The Highly Adaptive Lasso Estimator
Benkeser, David; van der Laan, Mark
2017-01-01
Estimation of a regression function is a common goal of statistical learning. We propose a novel nonparametric regression estimator that, in contrast to many existing methods, does not rely on local smoothness assumptions, nor is it constructed using local smoothing techniques. Instead, our estimator respects global smoothness constraints by virtue of falling in a class of right-hand continuous functions with left-hand limits that have variation norm bounded by a constant. Using empirical process theory, we establish a fast minimal rate of convergence of our proposed estimator and illustrate how such an estimator can be constructed using standard software. In simulations, we show that the finite-sample performance of our estimator is competitive with other popular machine learning techniques across a variety of data generating mechanisms. We also illustrate competitive performance in real data examples using several publicly available data sets. PMID:29094111
Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.
Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo
2015-11-01
The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale was used to assess pain after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Mean pain scores were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but no statistically significant difference was found (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception but did not show a statistically significant difference when contrasted with the conventional technique.
A descriptive study of "being with woman" during labor and birth.
Hunter, Lauren P
2009-01-01
The objective of this study was to learn more about women's perceptions of the nurse-midwifery practice of "being with woman" during childbirth. The descriptive, correlational design used a convenience sample of 238 low-risk postpartum women in a hospital nurse-midwifery practice with two childbirth settings: a standard labor and delivery unit and an in-hospital birth center. The main outcome measure was a 29-item, seven-response Likert scale questionnaire, the Positive Presence Index (PPI), administered to women cared for during labor and birth by nurse-midwives to measure the concept of being with woman. Statistical analysis demonstrated that women who gave birth in the in-hospital birth center, or who began labor in the in-hospital birth center prior to an indicated transfer to the standard labor and delivery unit, gave higher PPI scores than women who were admitted to and gave birth on the standard labor and delivery unit. Parity, ethnicity, number of midwives attending, presence of personal support persons, length of labor, and pain relief medications were unrelated to PPI scores. Two coping/comfort techniques, music therapy and breathing, were correlated with higher reported PPI scores than those of women who did not use the techniques. These results can be used to encourage continued use of midwifery care, to support low client-to-midwife caseloads during childbirth, and to modify hospital settings to include more in-hospital birth centers.
Lee, Choon-Hyun; Cho, Do-Sang; Jin, Sung-Chul; Kim, Sung-Hak; Park, Dong-Been
2007-10-01
We describe the use of a silicone elastomer sheet (SILASTIC) to prevent peridural fibrosis in patients who underwent a craniectomy and a subsequent cranioplasty. We performed a decompressive craniectomy and a subsequent cranioplasty with an autologous bone flap in 50 patients (mean age, 40 years) between 1996 and 2005 at our institution. Most of the craniectomies were performed as an emergency procedure for relief of brain swelling. The standard decompressive craniectomy technique that we performed, including bone removal and a duroplasty, was used in 26 of the 50 patients; in the remaining patients, a SILASTIC sheet was added to the standard decompressive craniectomy in an attempt to prevent dural adhesions. The development of adhesion formation between the tissue layers was evaluated during the cranioplasty in terms of operative time and the amount of blood loss. During the cranioplasty, we observed that the SILASTIC sheet succeeded in creating a controlled dissection plane, which facilitated access to the epidural space, shortened the operative time by approximately 24.8%, and diminished the intraoperative blood loss by 37.9% compared with the group of patients who underwent the standard cranioplasty. These differences were statistically significant (p<0.05). The use of a SILASTIC sheet to prevent peridural scarring and to facilitate cranioplasty in patients who have previously undergone a craniectomy is a good technique, regardless of the procedural indication.
Tremblay, Patrice; Paquin, Réal
2007-01-24
Stable carbon isotope ratio mass spectrometry (delta13C IRMS) was used to detect maple syrup adulteration by exogenous sugar addition (beet and cane sugar). Malic acid present in maple syrup is proposed as an isotopic internal standard to improve current adulteration detection levels. A lead precipitation method has been modified to quantitatively isolate malic acid from maple syrup using preparative reversed-phase liquid chromatography. The stable carbon isotopic ratio of malic acid isolated with this procedure shows excellent accuracy and repeatability of 0.01 and 0.1 per thousand, respectively, confirming that the modified lead precipitation method is an isotopic fractionation-free process. A new approach is proposed to detect adulteration based on the correlation between delta13Cmalic acid and delta13Csugars-delta13Cmalic acid (r = 0.704). This technique has been tested on a set of 56 authentic maple syrup samples. Additionally, authentic samples were spiked with exogenous sugars. The mean theoretical detection level was statistically lowered using this technique in comparison with the usual two-standard-deviation approach, especially when maple syrup is adulterated with beet sugar: 24 +/- 12% adulteration detection versus 48 +/- 20% (t-test, p = 7.3 x 10-15). The method was also applied to published data for pineapple juices and honey with the same improvement.
Improving Focal Depth Estimates: Studies of Depth Phase Detection at Regional Distances
NASA Astrophysics Data System (ADS)
Stroujkova, A.; Reiter, D. T.; Shumway, R. H.
2006-12-01
The accurate estimation of the depth of small, regionally recorded events continues to be an important and difficult explosion monitoring research problem. Depth phases (free-surface reflections) are the primary tool that seismologists use to constrain the depth of a seismic event. When depth phases from an event are detected, an accurate source depth is easily found by using the delay times of the depth phases relative to the P wave and a velocity profile near the source. Cepstral techniques, including cepstral F-statistics, represent a class of methods designed for depth-phase detection and identification; however, they offer only a moderate level of success at epicentral distances less than 15°. This is due to complexities in the Pn coda, which can lead to numerous false detections in addition to the true phase detection. Therefore, cepstral methods cannot be used independently to reliably identify depth phases. Other evidence, such as apparent velocities, amplitudes and frequency content, must be used to confirm whether the phase is truly a depth phase. In this study we used a variety of array methods to estimate apparent phase velocities and arrival azimuths, including beam-forming, semblance analysis, MUltiple SIgnal Classification (MUSIC) (e.g., Schmidt, 1979), and cross-correlation (e.g., Cansi, 1995; Tibuleac and Herrin, 1997). To facilitate the processing and comparison of results, we developed a MATLAB-based processing tool, which allows application of all of these techniques (i.e., augmented cepstral processing) in a single environment. The main objective of this research was to combine the results of three focal-depth estimation techniques and their associated standard errors into a statistically valid unified depth estimate. The three techniques include: 1. Direct focal depth estimation from the depth-phase arrival times picked via augmented cepstral processing. 2. Hypocenter location from direct and surface-reflected arrivals observed on sparse networks of regional stations using a Grid-search, Multiple-Event Location method (GMEL; Rodi and Toksöz, 2000; 2001). 3. Surface-wave dispersion inversion for event depth and focal mechanism (Herrmann and Ammon, 2002). To validate our approach and provide quality control for our solutions, we applied the techniques to moderate-sized events (mb between 4.5 and 6.0) with known focal mechanisms. We illustrate the techniques using events observed at regional distances from the KSAR (Wonju, South Korea) teleseismic array and other nearby broadband three-component stations. Our results indicate that the techniques can produce excellent agreement between the various depth estimates. In addition, combining the techniques into a "unified" estimate greatly reduced location errors and improved the robustness of the solution, even when results from the individual methods yielded large standard errors.
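The cepstral idea behind depth-phase detection can be sketched as follows: a depth phase acts as a delayed, scaled echo of the direct P arrival, and such an echo appears as a peak in the real cepstrum at the quefrency equal to the delay. The synthetic trace, sampling rate, and delay below are invented for illustration; this is not the augmented cepstral F-statistic processing used in the study.

```python
import numpy as np

def cepstrum(x):
    """Real cepstrum: inverse FFT of the log power spectrum."""
    spec = np.abs(np.fft.rfft(x))**2
    return np.fft.irfft(np.log(spec + 1e-20))

# Synthetic regional record: a wavelet plus a delayed, attenuated surface reflection (pP-like echo).
rng = np.random.default_rng(0)
fs = 40.0                                    # samples per second
n = 2048
wavelet = np.convolve(rng.normal(size=n), np.hanning(21), mode="same")
delay = int(4.0 * fs)                        # 4 s depth-phase delay
trace = wavelet.copy()
trace[delay:] += 0.5 * wavelet[:-delay]
trace += 0.05 * rng.normal(size=n)           # background noise

c = cepstrum(trace)
lags = np.arange(len(c)) / fs
peak = lags[np.argmax(c[int(1 * fs):int(10 * fs)]) + int(1 * fs)]   # search 1-10 s quefrency
print(f"detected echo delay ≈ {peak:.2f} s")
```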
NASA Astrophysics Data System (ADS)
Lievens, Klaus; Van Nimmen, Katrien; Lombaert, Geert; De Roeck, Guido; Van den Broeck, Peter
2016-09-01
In civil engineering and architecture, the availability of high-strength materials and advanced calculation techniques enables the construction of slender footbridges that are generally highly sensitive to human-induced excitation. Due to the inherently random character of the human-induced walking load, variability in the pedestrian characteristics must be considered in the response simulation. To assess the vibration serviceability of the footbridge, the statistics of the stochastic dynamic response are evaluated by considering the instantaneous peak responses in a time range. A large number of time windows are therefore needed to calculate the mean value and standard deviation of the instantaneous peak values. An alternative method for evaluating these statistics is based on the standard deviation of the response and a characteristic frequency, as proposed in wind engineering applications. In this paper, the accuracy of this method is evaluated for human-induced vibrations. The methods are first compared for a group of pedestrians crossing a lightly damped footbridge. Small differences in the instantaneous peak value were found with the method using second-order statistics. Afterwards, a TMD, tuned to reduce the peak acceleration to a comfort value, was added to the structure. The comparison between both methods is made and the accuracy is verified. It is found that the TMD parameters are tuned sufficiently and that good agreement between the two methods is obtained for the estimation of the instantaneous peak response of a strongly damped structure.
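The alternative method referred to above is of the peak-factor type used in wind engineering; a standard form (generic notation, not necessarily the exact expression used by the authors) estimates the expected instantaneous peak from the standard deviation $\sigma_a$ of the response, a characteristic (zero up-crossing) frequency $\nu_0$, and the window duration $T$:

$$ a_{\max} \approx g\,\sigma_a, \qquad g = \sqrt{2\ln(\nu_0 T)} + \frac{0.5772}{\sqrt{2\ln(\nu_0 T)}} . $$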
RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy
NASA Astrophysics Data System (ADS)
Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.
2016-02-01
We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
Investigation of Particle Sampling Bias in the Shear Flow Field Downstream of a Backward Facing Step
NASA Technical Reports Server (NTRS)
Meyers, James F.; Kjelgaard, Scott O.; Hepner, Timothy E.
1990-01-01
The flow field about a backward facing step was investigated to determine the characteristics of particle sampling bias in the various flow phenomena. The investigation used the calculation of the velocity:data rate correlation coefficient as a measure of statistical dependence and thus the degree of velocity bias. While the investigation found negligible dependence within the free stream region, increased dependence was found within the boundary and shear layers. Full classic correction techniques over-compensated the data since the dependence was weak, even in the boundary layer and shear regions. The paper emphasizes the necessity to determine the degree of particle sampling bias for each measurement ensemble and not use generalized assumptions to correct the data. Further, it recommends the calculation of the velocity:data rate correlation coefficient become a standard statistical calculation in the analysis of all laser velocimeter data.
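The statistic recommended above reduces to an ordinary correlation between the measured velocities and an estimate of the instantaneous data rate. The sketch below is one plausible way to compute it, assuming the ensemble is available as arrival times and velocities; the simulated dependence of rate on velocity is purely illustrative.

```python
import numpy as np

def velocity_datarate_correlation(arrival_times, velocities):
    """Correlation between measured velocity and the instantaneous particle data rate,
    the rate being estimated from the inverse inter-arrival time."""
    rate = 1.0 / np.diff(arrival_times)
    return np.corrcoef(velocities[1:], rate)[0, 1]

# Hypothetical laser velocimeter ensemble: faster particles arrive slightly more often (weak bias).
rng = np.random.default_rng(0)
v = rng.normal(50.0, 5.0, 5000)                    # m/s
t = np.cumsum(rng.exponential(1.0 / (0.02 * v)))   # arrival times; mean rate proportional to velocity
print(f"velocity:data-rate correlation coefficient = {velocity_datarate_correlation(t, v):.3f}")
```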
Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.
Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar
2010-09-01
A novel analysis of ion current time series is proposed. It is shown that higher (second, third and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model where the PDF of the compound states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met(3)PbCl) on the ion current probability distribution. Ion currents were measured by patch-clamp technique. It was shown that Met(3)PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
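Computing the higher moments of an ion-current record is straightforward; the sketch below does so for a synthetic two-state trace. The state probabilities and current levels are invented for illustration and do not correspond to the SV channel data in the paper.

```python
import numpy as np
from scipy import stats

def current_moments(i):
    """Mean, variance, skewness and excess kurtosis of an ion-current trace."""
    return i.mean(), i.var(ddof=1), stats.skew(i), stats.kurtosis(i)

# Hypothetical patch-clamp trace: two-state channel with normally distributed noise in each state.
rng = np.random.default_rng(0)
open_state = rng.random(100_000) < 0.3                        # 30% of samples in the open state
current = np.where(open_state,
                   rng.normal(-8.0, 1.2, 100_000),            # open-state current (pA)
                   rng.normal(-0.5, 0.4, 100_000))            # closed-state current (pA)

mean, var, skew, kurt = current_moments(current)
print(f"mean={mean:.2f} pA  variance={var:.2f}  skewness={skew:.2f}  excess kurtosis={kurt:.2f}")
```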
Cohn, T.A.; England, J.F.; Berenbrock, C.E.; Mason, R.R.; Stedinger, J.R.; Lamontagne, J.R.
2013-01-01
The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as "less-than" values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
Sobel, E.; Lange, K.
1996-01-01
The introduction of stochastic methods in pedigree analysis has enabled geneticists to tackle computations intractable by standard deterministic methods. Until now these stochastic techniques have worked by running a Markov chain on the set of genetic descent states of a pedigree. Each descent state specifies the paths of gene flow in the pedigree and the founder alleles dropped down each path. The current paper follows up on a suggestion by Elizabeth Thompson that genetic descent graphs offer a more appropriate space for executing a Markov chain. A descent graph specifies the paths of gene flow but not the particular founder alleles traveling down the paths. This paper explores algorithms for implementing Thompson's suggestion for codominant markers in the context of automatic haplotyping, estimating location scores, and computing gene-clustering statistics for robust linkage analysis. Realistic numerical examples demonstrate the feasibility of the algorithms. PMID:8651310
NASA Astrophysics Data System (ADS)
Abdellatef, Hisham E.
2007-04-01
Picric acid, bromocresol green, bromothymol blue, cobalt thiocyanate and molybdenum(V) thiocyanate have been tested as spectrophotometric reagents for the determination of disopyramide and irbesartan. Reaction conditions have been optimized to obtain coloured complexes of higher sensitivity and longer stability. The absorbance of the ion-pair complexes formed was found to increase linearly with increasing concentrations of disopyramide and irbesartan, as corroborated by the correlation coefficient values. The developed methods have been successfully applied to the determination of disopyramide and irbesartan in bulk drugs and pharmaceutical formulations. The common excipients and additives did not interfere in their determination. The results obtained by the proposed methods have been statistically compared by means of the Student's t-test and the variance-ratio F-test. The validity was assessed by applying the standard addition technique. The results were compared statistically with the official or reference methods, showing good agreement with high precision and accuracy.
NASA Astrophysics Data System (ADS)
Cohn, T. A.; England, J. F.; Berenbrock, C. E.; Mason, R. R.; Stedinger, J. R.; Lamontagne, J. R.
2013-08-01
The Grubbs-Beck test is recommended by the federal guidelines for detection of low outliers in flood flow frequency computation in the United States. This paper presents a generalization of the Grubbs-Beck test for normal data (similar to the Rosner (1983) test; see also Spencer and McCuen (1996)) that can provide a consistent standard for identifying multiple potentially influential low flows. In cases where low outliers have been identified, they can be represented as "less-than" values, and a frequency distribution can be developed using censored-data statistical techniques, such as the Expected Moments Algorithm. This approach can improve the fit of the right-hand tail of a frequency distribution and provide protection from lack-of-fit due to unimportant but potentially influential low flows (PILFs) in a flood series, thus making the flood frequency analysis procedure more robust.
Boxwala, Aziz A; Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs.
Kim, Jihoon; Grillo, Janice M; Ohno-Machado, Lucila
2011-01-01
Objective To determine whether statistical and machine-learning methods, when applied to electronic health record (EHR) access data, could help identify suspicious (ie, potentially inappropriate) access to EHRs. Methods From EHR access logs and other organizational data collected over a 2-month period, the authors extracted 26 features likely to be useful in detecting suspicious accesses. Selected events were marked as either suspicious or appropriate by privacy officers, and served as the gold standard set for model evaluation. The authors trained logistic regression (LR) and support vector machine (SVM) models on 10-fold cross-validation sets of 1291 labeled events. The authors evaluated the sensitivity of final models on an external set of 58 events that were identified as truly inappropriate and investigated independently from this study using standard operating procedures. Results The area under the receiver operating characteristic curve of the models on the whole data set of 1291 events was 0.91 for LR, and 0.95 for SVM. The sensitivity of the baseline model on this set was 0.8. When the final models were evaluated on the set of 58 investigated events, all of which were determined as truly inappropriate, the sensitivity was 0 for the baseline method, 0.76 for LR, and 0.79 for SVM. Limitations The LR and SVM models may not generalize because of interinstitutional differences in organizational structures, applications, and workflows. Nevertheless, our approach for constructing the models using statistical and machine-learning techniques can be generalized. An important limitation is the relatively small sample used for the training set due to the effort required for its construction. Conclusion The results suggest that statistical and machine-learning methods can play an important role in helping privacy officers detect suspicious accesses to EHRs. PMID:21672912
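The modelling pipeline described in the two records above (26 features, logistic regression and a support vector machine, 10-fold cross-validation, ROC evaluation) can be sketched with scikit-learn as below. The features and labels are synthetic stand-ins; nothing here reproduces the EHR data or the exact model settings of the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

# Hypothetical access-log feature matrix (1291 labeled events x 26 features) and labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(1291, 26))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=1291) > 1.5).astype(int)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("SVM", SVC(kernel="rbf", probability=True))]:
    prob = cross_val_predict(model, X, y, cv=10, method="predict_proba")[:, 1]
    print(f"{name}: 10-fold cross-validated AUC = {roc_auc_score(y, prob):.2f}")
```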
Selection vector filter framework
NASA Astrophysics Data System (ADS)
Lukac, Rastislav; Plataniotis, Konstantinos N.; Smolka, Bogdan; Venetsanopoulos, Anastasios N.
2003-10-01
We provide a unified framework of nonlinear vector techniques outputting the lowest-ranked vector. The proposed framework constitutes a generalized filter class for multichannel signal processing. A new class of nonlinear selection filters is based on robust order-statistic theory and the minimization of a weighted distance function to the other input samples. The proposed method can be designed to perform a variety of filtering operations, including previously developed filtering techniques such as the vector median, basic vector directional filter, directional distance filter, weighted vector median filters and weighted directional filters. A wide range of filtering operations is guaranteed by the filter structure with two independent weight vectors for the angular and distance domains of the vector space. In order to adapt the filter parameters to varying signal and noise statistics, we also provide generalized optimization algorithms that take advantage of weighted median filters and of the relationship between the standard median filter and the vector median filter. Thus, we can deal with both statistical and deterministic aspects of the filter design process. It is shown that the proposed method has the required properties, such as the capability of modelling the underlying system in the application at hand, robustness with respect to errors in the model of the underlying system, the availability of a training procedure and, finally, simplicity of filter representation, analysis, design and implementation. Simulation studies also indicate that the new filters are computationally attractive and have excellent performance in environments corrupted by bit errors and impulsive noise.
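Of the special cases listed above, the plain vector median filter is the simplest to make concrete: in each window, the output is the input vector minimizing the aggregate distance to all other vectors. The sketch below is this special case only, not the full weighted selection framework; the window radius and test image are arbitrary.

```python
import numpy as np

def vector_median(window):
    """Return the sample vector minimizing the sum of Euclidean distances to all others."""
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], axis=-1)
    return window[d.sum(axis=1).argmin()]

def vmf_filter(img, radius=1):
    """Apply the vector median filter to a color image (H x W x 3)."""
    h, w, _ = img.shape
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2*radius + 1, j:j + 2*radius + 1].reshape(-1, 3).astype(float)
            out[i, j] = vector_median(win)
    return out

# Hypothetical RGB patch corrupted by impulsive noise.
rng = np.random.default_rng(0)
img = np.full((32, 32, 3), 128, dtype=np.uint8)
hits = rng.random((32, 32)) < 0.05
img[hits] = rng.integers(0, 256, size=(int(hits.sum()), 3))
clean = vmf_filter(img)
```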
Dan, Michael; Phillips, Alfred; Simonian, Marcus; Flannagan, Scott
2015-06-01
We provide a review of the literature on reduction techniques for posterior hip dislocations and present our experience with a novel technique for the reduction of acute posterior hip dislocations in the ED, the 'rocket launcher' technique. We present our results for six patients with prosthetic posterior hip dislocation treated in our rural ED. We recorded patient demographics. The technique involves placing the patient's knee over the physician's shoulder and holding the lower leg like a 'rocket launcher', allowing the physician's shoulder to work as a fulcrum in an ergonomically friendly manner for the reducer. We used Fisher's t-test for cohort analysis between reduction techniques. The mean patient age was 74 years (range 66 to 85 years). We had an 83% success rate. The one patient in whom the 'rocket launcher' technique failed was a hemi-arthroplasty patient in whom all other closed techniques also failed and who needed open reduction. When compared with the Allis (62% success rate), Whistler (60% success rate) and Captain Morgan (92% success rate) techniques, there was no statistically significant difference in the success of the reduction techniques. There were no neurovascular or periprosthetic complications. We have described a reduction technique for posterior hip dislocations in which placing the patient's knee over the shoulder and holding the lower leg like a 'rocket launcher' allow the physician's shoulder to work as a fulcrum, making it mechanically and ergonomically superior to standard techniques. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
Tang, Qi-Yi; Zhang, Chuan-Xi
2013-04-01
A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.
A Streamflow Statistics (StreamStats) Web Application for Ohio
Koltun, G.F.; Kula, Stephanie P.; Puskas, Barry M.
2006-01-01
A StreamStats Web application was developed for Ohio that implements equations for estimating a variety of streamflow statistics including the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year peak streamflows, mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and 25th-, 50th-, and 75th-percentile streamflows. StreamStats is a Web-based geographic information system application designed to facilitate the estimation of streamflow statistics at ungaged locations on streams. StreamStats can also serve precomputed streamflow statistics determined from streamflow-gaging station data. The basic structure, use, and limitations of StreamStats are described in this report. To facilitate the level of automation required for Ohio's StreamStats application, the technique used by Koltun (2003) for computing main-channel slope was replaced with a new computationally robust technique. The new channel-slope characteristic, referred to as SL10-85, differed from the National Hydrography Data based channel slope values (SL) reported by Koltun (2003) by an average of -28.3 percent, with the median change being -13.2 percent. In spite of the differences, the two slope measures are strongly correlated. The change in channel slope values resulting from the change in computational method necessitated revision of the full-model equations for flood-peak discharges originally presented by Koltun (2003). Average standard errors of prediction for the revised full-model equations presented in this report increased by a small amount over those reported by Koltun (2003), with increases ranging from 0.7 to 0.9 percent. Mean percentage changes in the revised regression and weighted flood-frequency estimates relative to regression and weighted estimates reported by Koltun (2003) were small, ranging from -0.72 to -0.25 percent and -0.22 to 0.07 percent, respectively.
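The SL10-85 channel-slope characteristic mentioned above can be computed from a longitudinal channel profile; assuming it denotes the slope between points located at 10 and 85 percent of the main-channel length (a common definition, stated here as an assumption rather than taken from the report), a sketch is:

```python
import numpy as np

def sl_10_85(distance_m, elevation_m):
    """Slope between the points at 10% and 85% of the main-channel length,
    with elevations interpolated from a longitudinal profile (distance measured from the outlet)."""
    total = distance_m[-1]
    e10, e85 = np.interp([0.10 * total, 0.85 * total], distance_m, elevation_m)
    return (e85 - e10) / (0.75 * total)

# Hypothetical profile: distance upstream from the outlet (m) and channel elevation (m).
dist = np.array([0, 2000, 5000, 9000, 14000, 20000], dtype=float)
elev = np.array([150, 158, 170, 190, 220, 260], dtype=float)
print(f"SL10-85 = {sl_10_85(dist, elev):.5f} m/m")
```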
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
Imaging of neural oscillations with embedded inferential and group prevalence statistics
2018-01-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources whose activity is consistent with the tested hypothesis are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience. PMID:29408902
Computed tomography-based volumetric tool for standardized measurement of the maxillary sinus
Giacomini, Guilherme; Pavan, Ana Luiza Menegatti; Altemani, João Mauricio Carrasco; Duarte, Sergio Barbosa; Fortaleza, Carlos Magno Castelo Branco; Miranda, José Ricardo de Arruda
2018-01-01
Volume measurements of maxillary sinus may be useful to identify diseases affecting paranasal sinuses. However, literature shows a lack of consensus in studies measuring the volume. This may be attributable to different computed tomography data acquisition techniques, segmentation methods, focuses of investigation, among other reasons. Furthermore, methods for volumetrically quantifying the maxillary sinus are commonly manual or semiautomated, which require substantial user expertise and are time-consuming. The purpose of the present study was to develop an automated tool for quantifying the total and air-free volume of the maxillary sinus based on computed tomography images. The quantification tool seeks to standardize maxillary sinus volume measurements, thus allowing better comparisons and determinations of factors that influence maxillary sinus size. The automated tool utilized image processing techniques (watershed, threshold, and morphological operators). The maxillary sinus volume was quantified in 30 patients. To evaluate the accuracy of the automated tool, the results were compared with manual segmentation that was performed by an experienced radiologist using a standard procedure. The mean percent differences between the automated and manual methods were 7.19% ± 5.83% and 6.93% ± 4.29% for total and air-free maxillary sinus volume, respectively. Linear regression and Bland-Altman statistics showed good agreement and low dispersion between both methods. The present automated tool for maxillary sinus volume assessment was rapid, reliable, robust, accurate, and reproducible and may be applied in clinical practice. The tool may be used to standardize measurements of maxillary volume. Such standardization is extremely important for allowing comparisons between studies, providing a better understanding of the role of the maxillary sinus, and determining the factors that influence maxillary sinus size under normal and pathological conditions. PMID:29304130
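The image-processing ingredients named in the abstract (thresholding, morphological operators, and the watershed) can be sketched as below for separating air-filled spaces in a CT sub-volume. This is a rough illustration under assumed parameters (the -400 HU air threshold, structuring-element size, and marker rule are placeholders), not the authors' validated tool.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import binary_closing, ball
from skimage.segmentation import watershed

def air_space_labels(ct_volume_hu):
    """Threshold air voxels, clean the mask morphologically, and split connected
    air spaces with a distance-transform watershed."""
    air = ct_volume_hu < -400                      # illustrative HU threshold for air
    air = binary_closing(air, ball(2))             # smooth the binary mask
    dist = ndi.distance_transform_edt(air)
    markers, _ = ndi.label(dist > 0.7 * dist.max())
    return watershed(-dist, markers, mask=air)

# Hypothetical CT sub-volume in Hounsfield units (noise plus one fake air-filled cavity).
vol = np.random.default_rng(0).normal(40, 30, size=(64, 64, 64))
vol[20:44, 20:44, 20:44] = -900
labels = air_space_labels(vol)
print("separated air spaces:", int(labels.max()))
```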
Standardization of Clinical Assessment and Sample Collection Across All PERCH Study Sites
Prosperi, Christine; Baggett, Henry C.; Brooks, W. Abdullah; Deloria Knoll, Maria; Hammitt, Laura L.; Howie, Stephen R. C.; Kotloff, Karen L.; Levine, Orin S.; Madhi, Shabir A.; Murdoch, David R.; O’Brien, Katherine L.; Thea, Donald M.; Awori, Juliet O.; Bunthi, Charatdao; DeLuca, Andrea N.; Driscoll, Amanda J.; Ebruke, Bernard E.; Goswami, Doli; Hidgon, Melissa M.; Karron, Ruth A.; Kazungu, Sidi; Kourouma, Nana; Mackenzie, Grant; Moore, David P.; Mudau, Azwifari; Mwale, Magdalene; Nahar, Kamrun; Park, Daniel E.; Piralam, Barameht; Seidenberg, Phil; Sylla, Mamadou; Feikin, Daniel R.; Scott, J. Anthony G.; O’Brien, Katherine L.; Levine, Orin S.; Knoll, Maria Deloria; Feikin, Daniel R.; DeLuca, Andrea N.; Driscoll, Amanda J.; Fancourt, Nicholas; Fu, Wei; Hammitt, Laura L.; Higdon, Melissa M.; Kagucia, E. Wangeci; Karron, Ruth A.; Li, Mengying; Park, Daniel E.; Prosperi, Christine; Wu, Zhenke; Zeger, Scott L.; Watson, Nora L.; Crawley, Jane; Murdoch, David R.; Brooks, W. Abdullah; Endtz, Hubert P.; Zaman, Khalequ; Goswami, Doli; Hossain, Lokman; Jahan, Yasmin; Ashraf, Hasan; Howie, Stephen R. C.; Ebruke, Bernard E.; Antonio, Martin; McLellan, Jessica; Machuka, Eunice; Shamsul, Arifin; Zaman, Syed M.A.; Mackenzie, Grant; Scott, J. Anthony G.; Awori, Juliet O.; Morpeth, Susan C.; Kamau, Alice; Kazungu, Sidi; Kotloff, Karen L.; Tapia, Milagritos D.; Sow, Samba O.; Sylla, Mamadou; Tamboura, Boubou; Onwuchekwa, Uma; Kourouma, Nana; Toure, Aliou; Madhi, Shabir A.; Moore, David P.; Adrian, Peter V.; Baillie, Vicky L.; Kuwanda, Locadiah; Mudau, Azwifarwi; Groome, Michelle J.; Baggett, Henry C.; Thamthitiwat, Somsak; Maloney, Susan A.; Bunthi, Charatdao; Rhodes, Julia; Sawatwong, Pongpun; Akarasewi, Pasakorn; Thea, Donald M.; Mwananyanda, Lawrence; Chipeta, James; Seidenberg, Phil; Mwansa, James; wa Somwe, Somwe; Kwenda, Geoffrey
2017-01-01
Abstract Background. Variable adherence to standardized case definitions, clinical procedures, specimen collection techniques, and laboratory methods has complicated the interpretation of previous multicenter pneumonia etiology studies. To circumvent these problems, a program of clinical standardization was embedded in the Pneumonia Etiology Research for Child Health (PERCH) study. Methods. Between March 2011 and August 2013, standardized training on the PERCH case definition, clinical procedures, and collection of laboratory specimens was delivered to 331 clinical staff at 9 study sites in 7 countries (The Gambia, Kenya, Mali, South Africa, Zambia, Thailand, and Bangladesh), through 32 on-site courses and a training website. Staff competency was assessed throughout 24 months of enrollment with multiple-choice question (MCQ) examinations, a video quiz, and checklist evaluations of practical skills. Results. MCQ evaluation was confined to 158 clinical staff members who enrolled PERCH cases and controls, with scores obtained for >86% of eligible staff at each time-point. Median scores after baseline training were ≥80%, and improved by 10 percentage points with refresher training, with no significant intersite differences. Percentage agreement with the clinical trainer on the presence or absence of clinical signs on video clips was high (≥89%), with interobserver concordance being substantial to high (AC1 statistic, 0.62–0.82) for 5 of 6 signs assessed. Staff attained median scores of >90% in checklist evaluations of practical skills. Conclusions. Satisfactory clinical standardization was achieved within and across all PERCH sites, providing reassurance that any etiological or clinical differences observed across the study sites are true differences, and not attributable to differences in application of the clinical case definition, interpretation of clinical signs, or in techniques used for clinical measurements or specimen collection. PMID:28575355
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
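The control/experimental comparison described here reduces to fitting two multiple linear regressions and comparing their standard errors of estimate. A minimal sketch with statsmodels is shown below; the basin characteristics and their values are hypothetical stand-ins, since the abstract does not reproduce the actual data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical data for 20 basins: a streamflow characteristic plus candidate predictors
basins = pd.DataFrame({
    "peak_5yr":   rng.lognormal(3.0, 0.5, 20),   # 5-year flood peak (response)
    "area_km2":   rng.lognormal(4.0, 0.6, 20),   # map-derived
    "slope":      rng.uniform(0.001, 0.02, 20),  # map-derived
    "forest_pct": rng.uniform(10, 80, 20),       # imagery-derived
})

def se_of_estimate(y, X):
    """Standard error of estimate for a log-space OLS fit (common in flood-frequency work)."""
    fit = sm.OLS(np.log(y), sm.add_constant(np.log(X))).fit()
    return np.sqrt(fit.mse_resid)

se_maps = se_of_estimate(basins["peak_5yr"], basins[["area_km2", "slope"]])
se_maps_imagery = se_of_estimate(basins["peak_5yr"], basins[["area_km2", "slope", "forest_pct"]])
print(f"SE, maps only: {se_maps:.3f}   SE, maps + imagery: {se_maps_imagery:.3f}")
```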
Visconti, Giuseppe; Tomaselli, Federica; Monda, Anna; Barone-Adesi, Liliana; Salgarello, Marzia
2015-01-01
In deep inferior epigastric artery perforator (DIEP) flap breast reconstruction, abdominal donor-site cosmetic and sensibility outcomes and the closure technique have drawn little attention in the literature, with many surgeons still following the principles of standard abdominoplasty. In this article, the authors report their experience with the cannula-assisted, limited undermining, and progressive high-tension suture ("CALP") technique of DIEP donor-site closure compared with standard abdominoplasty. Between December of 2008 and January of 2013, 137 consecutive women underwent DIEP flap breast reconstruction. Of these, 82 patients (between December of 2008 and November of 2011) underwent DIEP flap donor-site closure by means of standard abdominoplasty (control group) and 55 patients (from December of 2011 to January of 2013) by means of cannula-assisted, limited undermining, and progressive high-tension suture (study group). The daily abdominal drainage output, donor-site complications, abdominal skin sensitivity at 1-year follow-up, cosmetic outcomes, and patient satisfaction were recorded and analyzed statistically. Daily drainage output was significantly lower in the study group. Donor-site complications were significantly higher in the control group (37.8 percent versus 9 percent). Seroma and wound-healing problems occurred in the control group. Abdominal skin sensibility was better preserved in the study group. Overall, abdominal wall aesthetic outcomes were similar in both groups, except for scar quality (better in the study group). According to the authors' experience, cannula-assisted, limited undermining, and progressive high-tension suture should always be preferred to standard abdominoplasty for DIEP donor-site closure, to reduce the complication rate and to improve abdominal skin sensitivity and scar quality. Level of evidence: Therapeutic, II.
2014-01-01
Background The purpose of this study was to compare two impression techniques from the perspective of patient preferences and treatment comfort. Methods Twenty-four (12 male, 12 female) subjects who had no previous experience with either conventional or digital impression participated in this study. Conventional impressions of maxillary and mandibular dental arches were taken with a polyether impression material (Impregum, 3 M ESPE), and bite registrations were made with polysiloxane bite registration material (Futar D, Kettenbach). Two weeks later, digital impressions and bite scans were performed using an intra-oral scanner (CEREC Omnicam, Sirona). Immediately after the impressions were made, the subjects’ attitudes, preferences and perceptions towards impression techniques were evaluated using a standardized questionnaire. The perceived source of stress was evaluated using the State-Trait Anxiety Scale. Processing steps of the impression techniques (tray selection, working time etc.) were recorded in seconds. Statistical analyses were performed with the Wilcoxon Rank test, and p < 0.05 was considered significant. Results There were significant differences among the groups (p < 0.05) in terms of total working time and processing steps. Patients stated that digital impressions were more comfortable than conventional techniques. Conclusions Digital impressions resulted in a more time-efficient technique than conventional impressions. Patients preferred the digital impression technique rather than conventional techniques. PMID:24479892
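The paired, non-parametric comparison of working times described above can be reproduced with SciPy's Wilcoxon signed-rank test; the sketch below uses made-up timings for the 24 subjects purely to show the analysis structure.

```python
from scipy.stats import wilcoxon

# Total working time in seconds for the same 24 subjects (hypothetical values)
conventional = [1420, 1380, 1510, 1295, 1460, 1330, 1405, 1390,
                1475, 1350, 1440, 1500, 1365, 1410, 1455, 1320,
                1490, 1375, 1435, 1400, 1465, 1345, 1415, 1480]
digital      = [ 980, 1010,  945, 1020,  990, 1005,  960,  975,
                1000,  955,  985, 1015,  970,  995,  950, 1025,
                 965, 1008,  988,  978,  998,  962,  982,  992]

stat, p = wilcoxon(conventional, digital)   # paired, two-sided by default
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")  # p < 0.05 -> significant difference
```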
Costa, Francesco; Ortolina, Alessandro; Galbusera, Fabio; Cardia, Andrea; Sala, Giuseppe; Ronchi, Franco; Uccelli, Carlo; Grosso, Rossella; Fornari, Maurizio
2016-02-01
Pedicle screws with polymethyl methacrylate (PMMA) cement augmentation have been shown to significantly improve fixation strength in the severely osteoporotic spine. However, the efficacy of screw fixation for different cement augmentation techniques remains unknown. This study aimed to determine the difference in pullout strength between different cement augmentation techniques. Uniform synthetic bones simulating severe osteoporosis were used to provide a platform for each augmentation technique. In all cases a polyaxial screw and acrylic cement (PMMA) at medium viscosity were used. Five groups were analyzed: I) screw only, without PMMA (control group); II) retrograde cement pre-filling of the tapped area; III) cannulated and fenestrated screw with cement injection through the perforations; IV) injection of PMMA using a standard trocar (vertebroplasty) plus retrograde pre-filling of the tapped area; V) injection through a fenestrated trocar plus retrograde pre-filling of the tapped area. Standard X-rays were taken in order to visualize cement distribution in each group. Pedicle screws at full insertion were then tested for axial pullout failure using a mechanical testing machine. A total of 30 screws were tested. The pullout analysis revealed better results for all augmented groups with respect to the control group. In particular, the statistical analysis showed a significant difference for Group V (p = 0.001) with respect to all other groups. These results confirm that cement augmentation improves resistance to axial pullout forces. Moreover, they suggest better resistance to axial loads when the PMMA is distributed along the entire screw, combining the fenestration and pre-filling augmentation techniques. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R
2015-09-01
Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
Booth, Jonathan; Vazquez, Saulo; Martinez-Nunez, Emilio; Marks, Alison; Rodgers, Jeff; Glowacki, David R; Shalashilin, Dmitrii V
2014-08-06
In this paper, we briefly review the boxed molecular dynamics (BXD) method, which allows analysis of thermodynamics and kinetics in complicated molecular systems. BXD is a multiscale technique in which thermodynamics and long-time dynamics are recovered from a set of short-time simulations. In this paper, we review previous applications of BXD to peptide cyclization, solution-phase organic reaction dynamics and desorption of ions from self-assembled monolayers (SAMs). We also report preliminary results of simulations of diamond etching mechanisms and protein unfolding in atomic force microscopy experiments. The latter demonstrate a correlation between the protein's structural motifs and its potential of mean force. Simulation of these processes by standard molecular dynamics (MD) is typically not possible, because the experimental time scales are very long. However, BXD yields well-converged and physically meaningful results. Compared with other methods of accelerated MD, our BXD approach is very simple; it is easy to implement, and it provides an integrated approach for simultaneously obtaining both thermodynamics and kinetics. It also provides a strategy for obtaining statistically meaningful dynamical results in regions of configuration space that standard MD approaches would visit only very rarely.
Descriptive Statistical Techniques for Librarians. 2nd Edition.
ERIC Educational Resources Information Center
Hafner, Arthur W.
A thorough understanding of the uses and applications of statistical techniques is integral in gaining support for library funding or new initiatives. This resource is designed to help practitioners develop and manipulate descriptive statistical information in evaluating library services, tracking and controlling limited resources, and analyzing…
ERIC Educational Resources Information Center
Williams, Immanuel James; Williams, Kelley Kim
2016-01-01
Understanding summary statistics and graphical techniques is a building block for comprehending concepts beyond basic statistics. It is known that motivated students perform better in school. Using examples that students find engaging allows them to understand the concepts at a deeper level.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-07
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the..., Medical Systems Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo...
NASA Technical Reports Server (NTRS)
Mcdade, Ian C.
1991-01-01
Techniques were developed for recovering two-dimensional distributions of auroral volume emission rates from rocket photometer measurements made in a tomographic spin scan mode. These tomographic inversion procedures are based upon an algebraic reconstruction technique (ART) and utilize two different iterative relaxation techniques for solving the problems associated with noise in the observational data. One of the inversion algorithms is based upon a least squares method and the other on a maximum probability approach. The performance of the inversion algorithms, and the limitations of the rocket tomography technique, were critically assessed using various factors such as (1) statistical and non-statistical noise in the observational data, (2) rocket penetration of the auroral form, (3) background sources of emission, (4) smearing due to the photometer field of view, and (5) temporal variations in the auroral form. These tests show that the inversion procedures may be successfully applied to rocket observations made in medium intensity aurora with standard rocket photometer instruments. The inversion procedures have been used to recover two-dimensional distributions of auroral emission rates and ionization rates from an existing set of N2+ 3914 Å rocket photometer measurements which were made in a tomographic spin scan mode during the ARIES auroral campaign. The two-dimensional distributions of the 3914 Å volume emission rates recovered from the inversion of the rocket data compare very well with the distributions that were inferred from ground-based measurements using triangulation-tomography techniques, and the N2 ionization rates derived from the rocket tomography results are in very good agreement with the in situ particle measurements that were made during the flight. Three pre-prints describing the tomographic inversion techniques and the tomographic analysis of the ARIES rocket data are included as appendices.
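The abstract does not reproduce the ART equations, so the following is a generic Kaczmarz-style ART sketch with a relaxation parameter and a non-negativity clamp, under the assumption of a precomputed ray-geometry matrix; it is an illustration of the technique rather than the authors' code.

```python
import numpy as np

def art_reconstruct(A, p, n_iters=20, relaxation=0.1):
    """Algebraic Reconstruction Technique (Kaczmarz iterations).

    A : (n_rays, n_voxels) geometry matrix (path length of ray i through voxel j)
    p : (n_rays,) measured photometer brightnesses
    Returns a non-negative estimate of the volume emission rates.
    """
    x = np.zeros(A.shape[1])
    row_norms = np.einsum("ij,ij->i", A, A)          # squared row norms
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = p[i] - A[i] @ x
            x += relaxation * residual / row_norms[i] * A[i]
        np.clip(x, 0, None, out=x)                   # emission rates cannot be negative
    return x
```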
Wagner, Rebecca; Wetzel, Stephanie J; Kern, John; Kingston, H M Skip
2012-02-01
The employment of chemical weapons by rogue states and/or terrorist organizations is an ongoing concern in the United States. The quantitative analysis of nerve agents must be rapid and reliable for use in the private and public sectors. Current methods describe a tedious and time-consuming derivatization for gas chromatography-mass spectrometry and liquid chromatography in tandem with mass spectrometry. Two solid-phase extraction (SPE) techniques for the analysis of glyphosate and methylphosphonic acid are described with the utilization of isotopically enriched analytes for quantitation via atmospheric pressure chemical ionization-quadrupole time-of-flight mass spectrometry (APCI-Q-TOF-MS) that does not require derivatization. Solid-phase extraction-isotope dilution mass spectrometry (SPE-IDMS) involves pre-equilibration of a naturally occurring sample with an isotopically enriched standard. The second extraction method, i-Spike, involves loading an isotopically enriched standard onto the SPE column before the naturally occurring sample. The sample and the spike are then co-eluted from the column enabling precise and accurate quantitation via IDMS. The SPE methods in conjunction with IDMS eliminate concerns of incomplete elution, matrix and sorbent effects, and MS drift. For accurate quantitation with IDMS, the isotopic contribution of all atoms in the target molecule must be statistically taken into account. This paper describes two newly developed sample preparation techniques for the analysis of nerve agent surrogates in drinking water as well as statistical probability analysis for proper molecular IDMS. The methods described in this paper demonstrate accurate molecular IDMS using APCI-Q-TOF-MS with limits of quantitation as low as 0.400 mg/kg for glyphosate and 0.031 mg/kg for methylphosphonic acid. Copyright © 2012 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.
2010-04-01
Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits of the ASIR method over pure FBP in terms of image quality and dose reduction, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed, and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced when the ASIR percentage increases up to 100%, with a greater benefit at low CTDIvol. Mid-range MTF frequencies were slightly enhanced, and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, VGA scores indicated an upper limit to the ASIR benefit: 40% ASIR was observed to be the best trade-off between noise reduction and clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
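Of the metrics listed, the noise power spectrum is the least routine to compute by hand. A minimal 2-D NPS estimator from detrended noise ROIs is sketched below using the usual ensemble-average normalization; ROI selection and detrending conventions vary between laboratories, so treat the details as assumptions.

```python
import numpy as np

def nps_2d(noise_rois, pixel_spacing_mm):
    """Ensemble-averaged 2-D noise power spectrum from uniform-phantom noise ROIs."""
    ny, nx = noise_rois[0].shape
    accum = np.zeros((ny, nx))
    for roi in noise_rois:
        detrended = roi - roi.mean()                       # remove the mean (DC) level
        accum += np.abs(np.fft.fftshift(np.fft.fft2(detrended))) ** 2
    # Common normalization: (dx * dy) / (Nx * Ny) times the ensemble average
    return (pixel_spacing_mm ** 2) / (nx * ny) * accum / len(noise_rois)

# Example with synthetic white noise (a roughly flat NPS is expected)
rois = [np.random.default_rng(i).normal(0, 10, (64, 64)) for i in range(16)]
nps = nps_2d(rois, pixel_spacing_mm=0.5)
print(nps.shape, float(nps.mean()))
```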
Langley Wind Tunnel Data Quality Assurance-Check Standard Results
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.; Grubb, John P.; Krieger, William B.; Cler, Daniel L.
2000-01-01
A framework for statistical evaluation, control, and improvement of wind tunnel measurement processes is presented. The methodology is adapted from elements of the Measurement Assurance Plans developed by the National Bureau of Standards (now the National Institute of Standards and Technology) for standards and calibration laboratories. The present methodology is based on the notions of statistical quality control (SQC) together with check-standard testing and a small number of customer repeat-run sets. The results of check-standard and customer repeat-run sets are analyzed using the statistical control chart methods of Walter A. Shewhart, long familiar to the SQC community. Control chart results are presented for various measurement processes in five facilities at Langley Research Center. The processes include test section calibration, force and moment measurements with a balance, and instrument calibration.
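A minimal sketch of the Shewhart-style check-standard chart underlying this framework: a center line and three-sigma control limits are estimated from an in-control reference period, and new check-standard results are flagged when they fall outside the limits. The drag-coefficient values below are invented for illustration.

```python
import numpy as np

def shewhart_limits(reference_values, sigma_multiplier=3.0):
    """Center line and control limits from an in-control reference set."""
    center = np.mean(reference_values)
    sigma = np.std(reference_values, ddof=1)
    return center, center - sigma_multiplier * sigma, center + sigma_multiplier * sigma

# Hypothetical check-standard drag-coefficient repeats
reference = np.array([0.02510, 0.02504, 0.02512, 0.02507, 0.02509,
                      0.02511, 0.02505, 0.02508, 0.02506, 0.02513])
new_runs  = np.array([0.02509, 0.02515, 0.02531])   # latest check-standard tests

center, lcl, ucl = shewhart_limits(reference)
for i, value in enumerate(new_runs, start=1):
    flag = "OUT OF CONTROL" if not (lcl <= value <= ucl) else "in control"
    print(f"run {i}: {value:.5f}  ({flag})")
```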
Cost effectiveness of the stream-gaging program in Nevada
Arteaga, F.E.
1990-01-01
The stream-gaging network in Nevada was evaluated as part of a nationwide effort by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. Specifically, the study dealt with 79 streamflow gages and 2 canal-flow gages that were under the direct operation of Nevada personnel as of 1983. Cost-effective allocations of resources, including budget and operational criteria, were studied using statistical procedures known as Kalman-filtering techniques. The possibility of developing streamflow data at ungaged sites was evaluated using flow-routing and statistical regression analyses. Neither of these methods provided sufficiently accurate results to warrant their use in place of stream gaging. The 81 gaging stations were being operated in 1983 with a budget of $465,500. As a result of this study, all existing stations were determined to be necessary components of the program for the foreseeable future. At the 1983 funding level, the average standard error of streamflow records was nearly 28%. This same overall level of accuracy could have been maintained with a budget of approximately $445,000 if the funds were redistributed more equitably among the gages. The maximum budget analyzed, $1,164,000, would have resulted in an average standard error of 11%. The study indicates that a major source of error is lost data. If perfectly operating equipment were available, the standard error for the 1983 program and budget could have been reduced to 21%. (Thacker-USGS, WRD)
SU-E-I-27: Establishing Target Exposure Index Values for Computed Radiography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, N; Tchou, P; Belcher, K
2014-06-01
Purpose: To develop a standard set of target exposure index (TEI) values to be applied to Agfa Computed Radiography (CR) readers in accordance with International Electrotechnical Commission (IEC) standard 62494-1 (ed. 1.0). Methods: A large data cohort was collected from six USAF Medical Treatment Facilities that exclusively use Agfa CR readers. Dose monitoring statistics were collected from each reader. The data were analyzed based on anatomic region, view, and processing speed class. The Agfa-specific exposure metric, logarithmic mean (LGM), was converted to exposure index (EI) for each data set. The optimum TEI value was determined by minimizing the number of studies that fell outside the acceptable deviation index (DI) range of +/- 2 for phototimed techniques or a range of +/- 3 for fixed techniques. An anthropomorphic radiographic phantom was used to corroborate the TEI recommendations. Images were acquired of several anatomic regions and views using standard techniques. The images were then evaluated by two radiologists as either acceptable or unacceptable. The acceptable image with the lowest exposure and EI value was compared to the recommended TEI values using a passing DI range. Results: Target EI values were determined for a comprehensive list of anatomic regions and views. Conclusion: Target EI values must be established on each CR unit in order to provide a positive feedback system for the technologist. This system will serve as a mechanism to prevent under- or overexposure of patients. The TEI recommendations are a first attempt at a large-scale process improvement with the goal of setting reasonable and standardized TEI values. The implementation and effectiveness of the recommended TEI values should be monitored and adjustments made as necessary.
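For orientation, IEC 62494-1 defines the deviation index as a logarithmic comparison of the measured exposure index with the target value; a small sketch of that relation, with the +/-2 and +/-3 acceptance windows taken from the abstract.

```python
import math

def deviation_index(ei, target_ei):
    """IEC 62494-1 deviation index: DI = 10 * log10(EI / EI_T)."""
    return 10.0 * math.log10(ei / target_ei)

def within_tolerance(di, phototimed=True):
    """Acceptance windows used in the abstract: +/-2 for phototimed, +/-3 for fixed techniques."""
    limit = 2.0 if phototimed else 3.0
    return abs(di) <= limit

di = deviation_index(ei=450, target_ei=300)   # hypothetical exam
print(f"DI = {di:+.1f}, acceptable: {within_tolerance(di)}")
```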
NASA Astrophysics Data System (ADS)
Wang, Tian; Cui, Xiaoxin; Ni, Yewen; Liao, Kai; Liao, Nan; Yu, Dunshan; Cui, Xiaole
2017-04-01
With shrinking transistor feature size, the fin-type field-effect transistor (FinFET) has become the most promising option in low-power circuit design due to its superior capability to suppress leakage. To support the VLSI digital system flow based on logic synthesis, we have designed an optimized high-performance low-power FinFET standard cell library based on employing the mixed FBB/RBB technique in the existing stacked structure of each cell. This paper presents the reliability evaluation of the optimized cells under process and operating environment variations based on Monte Carlo analysis. The variations are modelled with Gaussian distribution of the device parameters and 10000 sweeps are conducted in the simulation to obtain the statistical properties of the worst-case delay and input-dependent leakage for each cell. For comparison, a set of non-optimal cells that adopt the same topology without employing the mixed biasing technique is also generated. Experimental results show that the optimized cells achieve standard deviation reduction of 39.1% and 30.7% at most in worst-case delay and input-dependent leakage respectively while the normalized deviation shrinking in worst-case delay and input-dependent leakage can be up to 98.37% and 24.13%, respectively, which demonstrates that our optimized cells are less sensitive to variability and exhibit more reliability. Project supported by the National Natural Science Foundation of China (No. 61306040), the State Key Development Program for Basic Research of China (No. 2015CB057201), the Beijing Natural Science Foundation (No. 4152020), and Natural Science Foundation of Guangdong Province, China (No. 2015A030313147).
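A minimal Monte Carlo sketch in the spirit of the evaluation described: Gaussian device-parameter variations are sampled, propagated through a stand-in first-order delay model, and summarized as mean, standard deviation, and normalized deviation. The delay model and parameter spreads are invented for illustration and are not a FinFET compact model.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000                                   # sweeps, as in the abstract

# Hypothetical Gaussian process/environment variations (fractions of nominal)
vth_shift  = rng.normal(0.0, 0.03, N)        # threshold-voltage variation
length_var = rng.normal(0.0, 0.02, N)        # gate-length variation
vdd_var    = rng.normal(0.0, 0.01, N)        # supply variation

# Stand-in first-order delay model for one cell (not a real FinFET model)
nominal_delay = 12.0e-12                     # seconds
delay = nominal_delay * (1 + 1.8 * vth_shift + 1.2 * length_var - 0.9 * vdd_var)

mu, sigma = delay.mean(), delay.std(ddof=1)
print(f"worst-case delay mean = {mu*1e12:.2f} ps, "
      f"sigma = {sigma*1e12:.2f} ps, normalized deviation = {sigma/mu:.3f}")
```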
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-08
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Prevention, Classifications and Public Health Data Standards, 3311 Toledo Road, Room 2337, Hyattsville, MD...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-28
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Administrator, Classifications and Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-16
... DEPARTMENT OF HEALTH AND HUMAN SERVICES Centers for Disease Control and Prevention National Center for Health Statistics (NCHS), Classifications and Public Health Data Standards Staff, Announces the... Public Health Data Standards Staff, NCHS, 3311 Toledo Road, Room 2337, Hyattsville, Maryland 20782, e...
Statistics as Unbiased Estimators: Exploring the Teaching of Standard Deviation
ERIC Educational Resources Information Center
Wasserman, Nicholas H.; Casey, Stephanie; Champion, Joe; Huey, Maryann
2017-01-01
This manuscript presents findings from a study about the knowledge for and planned teaching of standard deviation. We investigate how understanding variance as an unbiased (inferential) estimator--not just a descriptive statistic for the variation (spread) in data--is related to teachers' instruction regarding standard deviation, particularly…
Di Maria, F; Pistocchi, S; Clarençon, F; Bartolini, B; Blanc, R; Biondi, A; Redjem, H; Chiras, J; Sourour, N; Piotin, M
2015-12-01
Over the past few years, flow diversion has been increasingly adopted for the treatment of intracranial aneurysms, especially in the paraclinoid and paraophthalmic carotid segment. We compared clinical and angiographic outcomes and complication rates in 2 groups of patients with unruptured carotid-ophthalmic aneurysms treated for 7 years by either standard coil-based techniques or flow diversion. From February 2006 to December 2013, 162 unruptured carotid-ophthalmic aneurysms were treated endovascularly in 138 patients. Sixty-seven aneurysms were treated by coil-based techniques in 61 patients. Flow diverters were deployed in 95 unruptured aneurysms (77 patients), with additional coiling in 27 patients. Complication rates, clinical outcome, and immediate and long-term angiographic results were retrospectively analyzed. No procedure-related deaths occurred. Four procedure-related thromboembolic events (6.6%) leading to permanent morbidity in 1 case (1.6%) occurred in the coiling group. Neurologic complications were observed in 6 patients (7.8%) in the flow-diversion group, resulting in 3.9% permanent morbidity. No statistically significant difference was found between complication (P = .9) and morbidity rates (P = .6). In the coiling group (median follow-up, 31.5 ± 24.5 months), recanalization occurred at 1 year in 23/50 (54%) aneurysms and 27/55 aneurysms (50.9%) at the latest follow-up, leading to retreatment in 6 patients (9%). In the flow-diversion group (mean follow-up, 13.5 ± 10.8 months), 85.3% (35/41) of all aneurysms were occluded after 12 months, and 74.6% (50/67) on latest follow-up. The retreatment rate was 2.1%. Occlusion rates between the 2 groups differed significantly at 12 months (P < .001) and at the latest follow-up (P < .005). Our retrospective analysis shows better long-term occlusion of carotid-ophthalmic aneurysms after use of flow diverters compared with standard coil-based techniques, without significant differences in permanent morbidity. © 2015 by American Journal of Neuroradiology.
Russo, Russell R; Burn, Matthew B; Ismaily, Sabir K; Gerrie, Brayden J; Han, Shuyang; Alexander, Jerry; Lenherr, Christopher; Noble, Philip C; Harris, Joshua D; McCulloch, Patrick C
2017-09-07
Accurate measurements of knee and hip motion are required for management of musculoskeletal pathology. The purpose of this investigation was to compare three techniques for measuring motion at the hip and knee. The authors hypothesized that digital photography would be equivalent in accuracy and show higher precision compared to the other two techniques. Using infrared motion capture analysis as the reference standard, hip flexion/abduction/internal rotation/external rotation and knee flexion/extension were measured using visual estimation, goniometry, and photography on 10 fresh frozen cadavers. These measurements were performed by three physical therapists and three orthopaedic surgeons. Accuracy was defined by the difference from the reference standard, while precision was defined by the proportion of measurements within either 5° or 10°. Analysis of variance (ANOVA), t-tests, and chi-squared tests were used. Although two statistically significant differences were found in measurement accuracy between the three techniques, neither of these differences met clinical significance (difference of 1.4° for hip abduction and 1.7° for the knee extension). Precision of measurements was significantly higher for digital photography than: (i) visual estimation for hip abduction and knee extension, and (ii) goniometry for knee extension only. There was no clinically significant difference in measurement accuracy between the three techniques for hip and knee motion. Digital photography only showed higher precision for two joint motions (hip abduction and knee extension). Overall digital photography shows equivalent accuracy and near-equivalent precision to visual estimation and goniometry.
75 FR 53925 - Sea Turtle Conservation; Shrimp and Summer Flounder Trawling Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-02
... because of the statistical probability the candidate TED may not achieve the standard (i.e., control TED... the test with 4 turtle captures because of the statistical probability the candidate TED may not...
Chi-squared and C statistic minimization for low count per bin data
NASA Astrophysics Data System (ADS)
Nousek, John A.; Shue, David R.
1989-07-01
Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
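For Poisson-distributed counts the two objective functions can be written down directly. The sketch below fits a single constant-rate model to simulated low-count data, minimizing Pearson's chi-squared and the Cash statistic C = 2 * sum(m_i - d_i * ln m_i); the one-parameter model is an assumption chosen to keep the example short.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
counts = rng.poisson(lam=2.0, size=50)        # low-count data: ~2 counts per bin

def chi2(rate):
    model = np.full_like(counts, rate, dtype=float)
    return np.sum((counts - model) ** 2 / model)

def cash_c(rate):
    model = np.full_like(counts, rate, dtype=float)
    return 2.0 * np.sum(model - counts * np.log(model))

best_chi2 = minimize_scalar(chi2, bounds=(0.1, 10), method="bounded").x
best_c    = minimize_scalar(cash_c, bounds=(0.1, 10), method="bounded").x
print(f"chi-squared fit rate: {best_chi2:.3f}   C-statistic fit rate: {best_c:.3f}")
# At low counts the chi-squared estimate is biased, while the C-statistic fit recovers the true rate.
```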
Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti
2016-07-01
A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts, and restricted availability of individual-level genotype-phenotype data across the cohorts limit conducting multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has a great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Availability and implementation: Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
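metaCCA itself operates on summary statistics, but the underlying operation is canonical correlation analysis. The sketch below computes the first canonical correlation from individual-level data via SVD of the whitened cross-covariance, with a small ridge term standing in for the covariance shrinkage mentioned above; it is an orientation aid, not the metaCCA algorithm.

```python
import numpy as np

def cca_first_correlation(X, Y, ridge=1e-3):
    """First canonical correlation between multivariate X (genotypes) and Y (phenotypes)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / (n - 1) + ridge * np.eye(X.shape[1])
    Cyy = Y.T @ Y / (n - 1) + ridge * np.eye(Y.shape[1])
    Cxy = X.T @ Y / (n - 1)

    # Whiten both blocks and take the largest singular value of the cross-covariance
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
    K = Wx @ Cxy @ Wy.T
    return np.linalg.svd(K, compute_uv=False)[0]

# Hypothetical toy data: 500 individuals, 3 variants, 4 correlated traits
rng = np.random.default_rng(7)
G = rng.binomial(2, 0.3, size=(500, 3)).astype(float)
P = 0.2 * G[:, [0]] + rng.normal(size=(500, 4))
print(f"first canonical correlation: {cca_first_correlation(G, P):.3f}")
```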
The Importance of Introductory Statistics Students Understanding Appropriate Sampling Techniques
ERIC Educational Resources Information Center
Menil, Violeta C.
2005-01-01
In this paper the author discusses the meaning of sampling, the reasons for sampling, the Central Limit Theorem, and the different techniques of sampling. Practical and relevant examples are given to make the appropriate sampling techniques understandable to students of Introductory Statistics courses. With a thorough knowledge of sampling…
Development and comparison of projection and image space 3D nodule insertion techniques
NASA Astrophysics Data System (ADS)
Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan
2016-04-01
This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques. These techniques include projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference), and in most cases the differences were not statistically significant. Also, R2 values were all >0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.
Should I Pack My Umbrella? Clinical versus Statistical Prediction of Mental Health Decisions
ERIC Educational Resources Information Center
Aegisdottir, Stefania; Spengler, Paul M.; White, Michael J.
2006-01-01
In this rejoinder, the authors respond to the insightful commentary of Strohmer and Arm, Chwalisz, and Hilton, Harris, and Rice about the meta-analysis on statistical versus clinical prediction techniques for mental health judgments. The authors address issues including the availability of statistical prediction techniques for real-life psychology…
Change Detection in Rough Time Series
2014-09-01
…distribution that can present significant challenges to conventional statistical tracking techniques. To address this problem, the proposed method applies hybrid fuzzy statistical techniques to series granules instead of to individual measures. Three examples demonstrated the robust nature of the…
Enhancing Students' Ability to Use Statistical Reasoning with Everyday Problems
ERIC Educational Resources Information Center
Lawson, Timothy J.; Schwiers, Michael; Doellman, Maureen; Grady, Greg; Kelnhofer, Robert
2003-01-01
We discuss a technique for teaching students everyday applications of statistical concepts. We used this technique with students (n = 50) enrolled in several sections of an introductory statistics course; students (n = 45) in other sections served as a comparison group. A class of introductory psychology students (n = 24) served as a second…
Tanabe, Paula; Ferket, Kathleen; Thomas, Ronald; Paice, Judith; Marcantonio, Richard
2002-04-01
The purpose of this study was to determine the effectiveness of nursing interventions in decreasing pain for children with minor musculoskeletal trauma and moderate pain and to examine patient satisfaction. Children were assigned to 1 of 3 intervention groups: (1) standard care (ice, elevation, and immobilization) only; (2) standard care and ibuprofen; or (3) standard care and distraction. Children were monitored for pain ratings for 60 minutes. Children who sustained minor musculoskeletal trauma within the past 24 hours and presented with pain ratings of 2 or greater using the 0-5 Wong/Baker faces scale were included. Two patient satisfaction questions were asked of parents upon their child's discharge from the emergency department. A statistically significant decrease in pain for all patients (76) occurred at 30 minutes (F = 4.39, P <.05) and was maintained at 60 minutes. The distraction group demonstrated a statistically significant reduction in pain compared with the other groups at 30 minutes; this reduction was maintained at 60 minutes (F = 47.07, P <.05). Parents of only 6 children expressed dissatisfaction with overall pain management. Twelve percent of children who were not in the group receiving medication received analgesics while in the emergency department. At discharge, only 37% of children with fractures and/or sprains had received medications for pain. Children with musculoskeletal trauma may be under-medicated. Distraction techniques can be an effective adjunct to analgesia for children with musculoskeletal pain in the emergency department and should be made available. Ibuprofen may not be an effective analgesic for children with these injuries; stronger analgesics may be required.
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
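The two quantities differ only by a factor of the square root of the sample size; a one-line illustration with made-up measurements.

```python
import numpy as np

sample = np.array([4.2, 5.1, 4.8, 5.6, 4.9, 5.3, 4.7, 5.0])  # hypothetical measurements
sd = sample.std(ddof=1)              # dispersion of individual observations
sem = sd / np.sqrt(sample.size)      # dispersion of the sample mean (sampling distribution)
print(f"mean = {sample.mean():.2f}, SD = {sd:.2f}, SEM = {sem:.2f}")
```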
Dyract compomer: comparison of total etch vs. no etch technique.
Kugel, G; Perry, R D; Hoang, E; Hoang, T; Ferrari, M
1998-01-01
Different dental materials and methods can influence the integrity of the marginal seal of restorations. To evaluate the microleakage of Dyract AP Light Cured Compomer, a polyacid modified resin (Caulk), using etched and unetched techniques, standardized trapezoidal Class V restorations were placed on facial or lingual surfaces of 20 human molars with the gingival margin in the cementum. Each restoration was scored at the cervical by two independent, double blinded operators, with reference to the DEJ, for dye penetration on a ranking system of: 0 = no evidence of dye penetration; 1 = dye penetration up to one-half the distance to the axial wall; 2 = dye penetration beyond one-half the distance to the axial wall but short of the axial wall; 3 = dye penetration to the axial wall or beyond. Statistical analysis (Fisher Exact Test) indicated that the etched compomer demonstrated significantly less microleakage when compared to the unetched compomer (p < 0.05).
Diaby, Vakaramoko; Sanogo, Vassiki; Moussa, Kouame Richard
2016-01-01
In this paper, the readers are introduced to ELICIT, an imprecise weight elicitation technique for multicriteria decision analysis for healthcare. The application of ELICIT consists of two steps: the rank ordering of evaluation criteria based on decision-makers' (DMs) preferences using principal component analysis; and the estimation of criteria weights and their descriptive statistics using variable interdependent analysis and the Monte Carlo method. The application of ELICIT is illustrated with a hypothetical case study involving the elicitation of weights for five criteria used to select the best device for eye surgery. The criteria were ranked from 1 to 5, based on a strict preference relationship established by the DMs. For each criterion, the deterministic weight was estimated, as well as the standard deviation and 95% credibility interval. ELICIT is appropriate in situations where only ordinal DMs' preferences are available to elicit decision criteria weights.
Practical Computer Security through Cryptography
NASA Technical Reports Server (NTRS)
McNab, David; Twetev, David (Technical Monitor)
1998-01-01
The core protocols upon which the Internet was built are insecure. Weak authentication and the lack of low-level encryption services introduce vulnerabilities that propagate upwards in the network stack. Using statistics based on CERT/CC Internet security incident reports, the relative likelihood of attacks via these vulnerabilities is analyzed. The primary conclusion is that the standard UNIX BSD-based authentication system is by far the most commonly exploited weakness. Encryption of sensitive password data and the adoption of cryptographically based authentication protocols can greatly reduce these vulnerabilities. Basic cryptographic terminology and techniques are presented, with attention focused on the ways in which technology such as encryption and digital signatures can be used to protect against the most commonly exploited vulnerabilities. A survey of contemporary security software demonstrates that tools based on cryptographic techniques, such as Kerberos, ssh, and PGP, are readily available and effectively close many of the most serious security holes. Nine practical recommendations for improving security are described.
NASA Astrophysics Data System (ADS)
Gbedo, Yémalin Gabin; Mangin-Brinet, Mariane
2017-07-01
We present a new procedure to determine parton distribution functions (PDFs), based on Markov chain Monte Carlo (MCMC) methods. The aim of this paper is to show that we can replace the standard χ2 minimization by procedures grounded on statistical methods, and on Bayesian inference in particular, thus offering additional insight into the rich field of PDFs determination. After a basic introduction to these techniques, we introduce the algorithm we have chosen to implement—namely Hybrid (or Hamiltonian) Monte Carlo. This algorithm, initially developed for Lattice QCD, turns out to be very interesting when applied to PDFs determination by global analyses; we show that it allows us to circumvent the difficulties due to the high dimensionality of the problem, in particular concerning the acceptance. A first feasibility study is performed and presented, which indicates that Markov chain Monte Carlo can successfully be applied to the extraction of PDFs and of their uncertainties.
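The abstract does not spell out the algorithm, so the following is a generic Hybrid (Hamiltonian) Monte Carlo sketch for a toy two-dimensional Gaussian target, showing the leapfrog-plus-Metropolis structure; it is not the authors' PDF-fitting code, and the step size and trajectory length are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def neg_log_post(theta):                 # stand-in "chi^2 / 2" for a toy 2-D Gaussian target
    return 0.5 * np.sum(theta ** 2)

def grad_neg_log_post(theta):
    return theta

def hmc_step(theta, step_size=0.1, n_leapfrog=20):
    momentum = rng.normal(size=theta.shape)
    theta_new, p = theta.copy(), momentum.copy()

    # Leapfrog integration of the Hamiltonian dynamics
    p -= 0.5 * step_size * grad_neg_log_post(theta_new)
    for _ in range(n_leapfrog - 1):
        theta_new += step_size * p
        p -= step_size * grad_neg_log_post(theta_new)
    theta_new += step_size * p
    p -= 0.5 * step_size * grad_neg_log_post(theta_new)

    # Metropolis acceptance on the total Hamiltonian
    h_old = neg_log_post(theta) + 0.5 * np.sum(momentum ** 2)
    h_new = neg_log_post(theta_new) + 0.5 * np.sum(p ** 2)
    return theta_new if rng.uniform() < np.exp(h_old - h_new) else theta

theta = np.zeros(2)
samples = []
for _ in range(2000):
    theta = hmc_step(theta)
    samples.append(theta)
print("posterior mean estimate:", np.mean(samples, axis=0))
```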
Measuring nepotism through shared last names: the case of Italian Academia.
Allesina, Stefano
2011-01-01
Nepotistic practices are detrimental for academia. Here I show how disciplines with a high likelihood of nepotism can be detected using standard statistical techniques based on shared last names among professors. As an example, I analyze the set of all 61,340 Italian academics. I find that nepotism is prominent in Italy, with particular disciplinary sectors being detected as especially problematic. Out of 28 disciplines, 9 - accounting for more than half of Italian professors - display a significant paucity of last names. Moreover, in most disciplines a clear north-south trend emerges, with likelihood of nepotism increasing with latitude. Even accounting for the geographic clustering of last names, I find that for many disciplines the probability of name-sharing is boosted when professors work in the same institution or sub-discipline. Using these techniques policy makers can target cuts and funding in order to promote fair practices.
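The test described amounts to asking whether a discipline contains fewer distinct last names than expected if its professors were drawn at random from the national pool. A Monte Carlo sketch of that comparison is shown below with a synthetic name pool; the actual paper's procedure may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(0)

def paucity_p_value(discipline_names, national_pool, n_sims=10_000):
    """P(number of distinct names under random draws <= observed number)."""
    observed = len(set(discipline_names))
    k = len(discipline_names)
    sims = [len(set(rng.choice(national_pool, size=k, replace=False)))
            for _ in range(n_sims)]
    return np.mean([s <= observed for s in sims])

# Synthetic example: a national pool of 5000 professors' last names
national_pool = [f"name_{i}" for i in rng.integers(0, 2000, size=5000)]
# A hypothetical discipline of 200 professors with suspiciously few distinct names
discipline = list(rng.choice(national_pool, size=200, replace=False))
discipline[:40] = [discipline[0]] * 40       # inject heavy name sharing

print(f"p-value for name paucity: {paucity_p_value(discipline, national_pool):.4f}")
```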
New Standards Require Teaching More Statistics: Are Preservice Secondary Mathematics Teachers Ready?
ERIC Educational Resources Information Center
Lovett, Jennifer N.; Lee, Hollylynne S.
2017-01-01
Mathematics teacher education programs often need to respond to changing expectations and standards for K-12 curriculum and accreditation. New standards for high school mathematics in the United States include a strong emphasis in statistics. This article reports results from a mixed methods cross-institutional study examining the preparedness of…
Bodner, Todd E.
2017-01-01
Wilkinson and Task Force on Statistical Inference (1999) recommended that researchers include information on the practical magnitude of effects (e.g., using standardized effect sizes) to distinguish between the statistical and practical significance of research results. To date, however, researchers have not widely incorporated this recommendation into the interpretation and communication of the conditional effects and differences in conditional effects underlying statistical interactions involving a continuous moderator variable where at least one of the involved variables has an arbitrary metric. This article presents a descriptive approach to investigate two-way statistical interactions involving continuous moderator variables where the conditional effects underlying these interactions are expressed in standardized effect size metrics (i.e., standardized mean differences and semi-partial correlations). This approach permits researchers to evaluate and communicate the practical magnitude of particular conditional effects and differences in conditional effects using conventional and proposed guidelines, respectively, for the standardized effect size and therefore provides the researcher important supplementary information lacking under current approaches. The utility of this approach is demonstrated with two real data examples and important assumptions underlying the standardization process are highlighted. PMID:28484404
Technical Note: The Initial Stages of Statistical Data Analysis
Tandy, Richard D.
1998-01-01
Objective: To provide an overview of several important data-related considerations in the design stage of a research project and to review the levels of measurement and their relationship to the statistical technique chosen for the data analysis. Background: When planning a study, the researcher must clearly define the research problem and narrow it down to specific, testable questions. The next steps are to identify the variables in the study, decide how to group and treat subjects, and determine how to measure, and the underlying level of measurement of, the dependent variables. Then the appropriate statistical technique can be selected for data analysis. Description: The four levels of measurement in increasing complexity are nominal, ordinal, interval, and ratio. Nominal data are categorical or “count” data, and the numbers are treated as labels. Ordinal data can be ranked in a meaningful order by magnitude. Interval data possess the characteristics of ordinal data and also have equal distances between levels. Ratio data have a natural zero point. Nominal and ordinal data are analyzed with nonparametric statistical techniques and interval and ratio data with parametric statistical techniques. Advantages: Understanding the four levels of measurement and when it is appropriate to use each is important in determining which statistical technique to use when analyzing data. PMID:16558489
Attar, Bijan Movahedian; Zalzali, Haidar; Razavi, Mohammad; Ghoreishian, Mehdi; Rezaei, Majid
2012-10-01
Epineural suturing is the most common technique used for peripheral nerve anastomosis. In addition to the foreign body reaction to the suture material, the surgical duration and difficulty of suturing in confined anatomic locations are major problems. We evaluated the effectiveness of fibrin glue as an acceptable alternative for nerve anastomosis in dogs. Eight adult female dogs weighing 18 to 24 kg were used in the present study. The facial nerve was transected bilaterally. On the right side, the facial nerve was subjected to epineural suturing; and on the left side, the nerve was anastomosed using fibrin adhesive. After 16 weeks, the nerve conduction velocity and proportion of the nerve fibers that crossed the anastomosis site were evaluated and compared for the epineural suture (right side) and fibrin glue (left side). The data were analyzed using the paired t test and univariate analysis of variance. The mean postoperative nerve conduction velocity was 29.87 ± 7.65 m/s and 26.75 ± 3.97 m/s on the right and left side, respectively. No statistically significant difference was found in the postoperative nerve conduction velocity between the 2 techniques (P = .444). The proportion of nerve fibers that crossed the anastomotic site was 71.25% ± 7.59% and 72.25% ± 8.31% on the right and left side, respectively. The histologic evaluation showed no statistically significant difference in the proportion of the nerve fibers that crossed the anastomotic site between the 2 techniques (P = .598). The results suggest that the efficacies of epineural suturing and fibrin gluing in peripheral nerve anastomosis are similar. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
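A minimal sketch of the paired comparison reported above, using scipy's paired t-test on illustrative values (chosen only to roughly echo the reported means, not the study's raw data):

```python
# Paired t-test sketch; the conduction velocities below are invented.
import numpy as np
from scipy import stats

suture = np.array([29.1, 35.0, 22.4, 31.8, 38.2, 25.6, 27.9, 28.9])  # right side, m/s
glue   = np.array([27.3, 30.1, 24.8, 25.2, 31.0, 22.7, 28.4, 24.5])  # left side, m/s

t, p = stats.ttest_rel(suture, glue)
print(f"paired t = {t:.2f}, P = {p:.3f}")
```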
Vojdani, M; Torabi, K; Farjood, E; Khaledi, AAR
2013-01-01
Statement of Problem: Metal-ceramic crowns are most commonly used as the complete coverage restorations in clinical daily use. Disadvantages of conventional hand-made wax-patterns introduce some alternative ways by means of CAD/CAM technologies. Purpose: This study compares the marginal and internal fit of copings cast from CAD/CAM and conventional fabricated wax-patterns. Materials and Method: Twenty-four standardized brass dies were prepared and randomly divided into 2 groups according to the wax-pattern fabrication method (CAD/CAM technique and conventional method) (n=12). All the wax-patterns were fabricated in a standard fashion by means of contour, thickness and internal relief (M1-M12: representative of the CAD/CAM group, C1-C12: representative of the conventional group). A CAD/CAM milling machine (Cori TEC 340i; imes-icore GmbH, Eiterfeld, Germany) was used to fabricate the CAD/CAM group wax-patterns. The copings cast from the 24 wax-patterns were cemented to the corresponding dies. For all the coping-die assemblies, a cross-sectional technique was used to evaluate the marginal and internal fit at 15 points. The Student's t-test was used for statistical analysis (α=0.05). Results: The overall mean (SD) for absolute marginal discrepancy (AMD) was 254.46 (25.10) μm for the CAD/CAM group and 88.08 (10.67) μm for the conventional group (control). The overall mean of internal gap total (IGT) was 110.77 (5.92) μm for the CAD/CAM group and 76.90 (10.17) μm for the conventional group. The Student's t-test revealed significant differences between the 2 groups. Marginal and internal gaps were found to be significantly higher at all measured areas in the CAD/CAM group than in the conventional group (p<0.001). Conclusion: Within the limitations of this study, the conventional method of wax-pattern fabrication produced copings with significantly better marginal and internal fit than the CAD/CAM (machine-milled) technique. All the factors for the 2 groups were standardized except the wax-pattern fabrication technique; therefore, only the conventional group resulted in copings with clinically acceptable margins of less than 120 μm. PMID:24724133
NASA Astrophysics Data System (ADS)
Peresan, Antonella; Gentili, Stefania
2017-04-01
Identification and statistical characterization of seismic clusters may provide useful insights about the features of seismic energy release and their relation to physical properties of the crust within a given region. Moreover, a number of studies based on spatio-temporal analysis of main-shocks occurrence require preliminary declustering of the earthquake catalogs. Since various methods, relying on different physical/statistical assumptions, may lead to diverse classifications of earthquakes into main events and related events, we aim to investigate the classification differences among different declustering techniques. Accordingly, a formal selection and comparative analysis of earthquake clusters is carried out for the most relevant earthquakes in North-Eastern Italy, as reported in the local OGS-CRS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. The comparison is then extended to selected earthquake sequences associated with a different seismotectonic setting, namely to events that occurred in the region struck by the recent Central Italy destructive earthquakes, making use of INGV data. Various techniques, ranging from classical space-time windows methods to ad hoc manual identification of aftershocks, are applied for detection of earthquake clusters. In particular, a statistical method based on nearest-neighbor distances of events in space-time-energy domain, is considered. Results from clusters identification by the nearest-neighbor method turn out quite robust with respect to the time span of the input catalogue, as well as to minimum magnitude cutoff. The identified clusters for the largest events reported in North-Eastern Italy since 1977 are well consistent with those reported in earlier studies, which were aimed at detailed manual aftershocks identification. The study shows that the data-driven approach, based on the nearest-neighbor distances, can be satisfactorily applied to decompose the seismic catalog into background seismicity and individual sequences of earthquake clusters, also in areas characterized by moderate seismic activity, where the standard declustering techniques may turn out rather gross approximations. With these results acquired, the main statistical features of seismic clusters are explored, including complex interdependence of related events, with the aim to characterize the space-time patterns of earthquakes occurrence in North-Eastern Italy and capture their basic differences with Central Italy sequences.
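For readers unfamiliar with the nearest-neighbor approach mentioned above, the sketch below computes a space-time-magnitude distance of the Zaliapin type for each event relative to all earlier events; the constants (b-value, fractal dimension) and the simple flat-Earth distance are illustrative assumptions rather than the values used in the study.

```python
# Sketch of the nearest-neighbor space-time-magnitude distance commonly used
# for cluster identification: eta = dt * dr**d_f * 10**(-b * m_parent).
# Quadratic loop kept deliberately simple for clarity.
import numpy as np

def nearest_neighbor_distances(t, x, y, m, b=1.0, d_f=1.6):
    """t in years, x/y in km, m magnitudes; returns eta and parent index."""
    n = len(t)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(n):                  # candidate child event
        for i in range(n):              # candidate parent (must be earlier)
            if t[i] >= t[j]:
                continue
            dt = t[j] - t[i]
            dr = np.hypot(x[j] - x[i], y[j] - y[i]) + 1e-3  # avoid zero distance
            e = dt * dr**d_f * 10.0**(-b * m[i])
            if e < eta[j]:
                eta[j], parent[j] = e, i
    return eta, parent
```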
Vexler, Albert; Yu, Jihnhee
2018-04-13
A common statistical doctrine supported by many introductory courses and textbooks is that t-test type procedures based on normally distributed data points are anticipated to provide a standard in decision-making. In order to motivate scholars to examine this convention, we introduce a simple approach based on graphical tools of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. In this context, we propose employing a p-value-based method that takes into account the stochastic nature of p-values. We focus on the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we extend the EPV concept to be considered in terms of the ROC curve technique. This provides expressive evaluations and visualizations of a wide spectrum of properties of testing mechanisms. We show that the conventional power characterization of tests is a partial aspect of the presented EPV/ROC technique. We hope that this explanation convinces researchers of the usefulness of the EPV/ROC approach for depicting different characteristics of decision-making procedures, in light of the growing interest in correct p-value-based applications.
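A hedged illustration of the expected p-value idea: average the p-value of a two-sample t-test over many datasets simulated under a fixed alternative. The simulation settings are arbitrary, and the paper's EPV/ROC machinery is considerably richer than this sketch.

```python
# Monte Carlo estimate of the expected p-value (EPV) of a two-sample t-test
# under a shift alternative of size delta; small EPV indicates a good test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def expected_p_value(delta, n=20, n_sim=5000):
    p = np.empty(n_sim)
    for k in range(n_sim):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(delta, 1.0, n)
        p[k] = stats.ttest_ind(x, y).pvalue
    return p.mean()

for delta in (0.0, 0.3, 0.6, 1.0):
    print(delta, round(expected_p_value(delta), 3))
```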
Adjustment of geochemical background by robust multivariate statistics
Zhou, D.
1985-01-01
Conventional analyses of exploration geochemical data assume that the background is a constant or slowly changing value, equivalent to a plane or a smoothly curved surface. However, it is better to regard the geochemical background as a rugged surface, varying with changes in geology and environment. This rugged surface can be estimated from observed geological, geochemical and environmental properties by using multivariate statistics. A method of background adjustment was developed and applied to groundwater and stream sediment reconnaissance data collected from the Hot Springs Quadrangle, South Dakota, as part of the National Uranium Resource Evaluation (NURE) program. Source-rock lithology appears to be a dominant factor controlling the chemical composition of groundwater or stream sediments. The most efficacious adjustment procedure is to regress uranium concentration on selected geochemical and environmental variables for each lithologic unit, and then to delineate anomalies by a common threshold set as a multiple of the standard deviation of the combined residuals. Robust versions of regression and RQ-mode principal components analysis techniques were used rather than ordinary techniques to guard against distortion caused by outliers. Anomalies delineated by this background adjustment procedure correspond with uranium prospects much better than do anomalies delineated by conventional procedures. The procedure should be applicable to geochemical exploration at different scales for other metals. © 1985.
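The background-adjustment logic described above can be sketched as a robust regression within one lithologic unit followed by flagging samples whose residuals exceed a multiple of the robust residual scale. The column names, the Huber loss, and the threshold k are assumptions for illustration, not the paper's exact procedure.

```python
# Robust regression background adjustment sketch (statsmodels RLM).
import pandas as pd
import statsmodels.api as sm

def flag_anomalies(df, response="U_ppm", predictors=("Th_ppm", "K_pct"), k=2.5):
    """Fit a Huber robust regression of the response on background predictors
    within one lithologic unit, then flag residuals above k robust SDs."""
    X = sm.add_constant(df[list(predictors)])
    fit = sm.RLM(df[response], X, M=sm.robust.norms.HuberT()).fit()
    resid = fit.resid
    scale = fit.scale                      # robust estimate of residual scale
    return df.assign(residual=resid, anomaly=resid > k * scale)

# Usage: apply flag_anomalies(unit_df) per lithologic unit, then pool the flags.
```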
Current genetic methodologies in the identification of disaster victims and in forensic analysis.
Ziętkiewicz, Ewa; Witt, Magdalena; Daca, Patrycja; Zebracka-Gala, Jadwiga; Goniewicz, Mariusz; Jarząb, Barbara; Witt, Michał
2012-02-01
This review presents the basic problems and currently available molecular techniques used for genetic profiling in disaster victim identification (DVI). The environmental conditions of a mass disaster often result in severe fragmentation, decomposition and intermixing of the remains of victims. In such cases, traditional identification based on the anthropological and physical characteristics of the victims is frequently inconclusive. This is the reason why DNA profiling became the gold standard for victim identification in mass-casualty incidents (MCIs) or any forensic cases where human remains are highly fragmented and/or degraded beyond recognition. The review provides general information about the sources of genetic material for DNA profiling, the genetic markers routinely used during genetic profiling (STR markers, mtDNA and single-nucleotide polymorphisms [SNP]) and the basic statistical approaches used in DNA-based disaster victim identification. Automated technological platforms that allow the simultaneous analysis of a multitude of genetic markers used in genetic identification (oligonucleotide microarray techniques and next-generation sequencing) are also presented. Forensic and population databases containing information on human variability, routinely used for statistical analyses, are discussed. The final part of this review is focused on recent developments, which offer particularly promising tools for forensic applications (mRNA analysis, transcriptome variation in individuals/populations and genetic profiling of specific cells separated from mixtures).
Ocular Biocompatibility of Nitinol Intraocular Clips
Velez-Montoya, Raul; Erlanger, Michael
2012-01-01
Purpose. To evaluate the tolerance and biocompatibility of a preformed nitinol intraocular clip in an animal model after anterior segment surgery. Methods. Yucatan mini-pigs were used. A 30-gauge prototype injector was used to attach a shape memory nitinol clip to the iris of five pigs. Another five eyes received conventional polypropylene suture with a modified Seipser slip knot. The authors compared the surgical time of each technique. All eyes underwent standard full-field electroretinogram at baseline and 8 weeks after surgery. The animals were euthanized and eyes collected for histologic analysis after 70 days (10 weeks) postsurgery. The corneal thickness, corneal endothelial cell counts, specular microscopy parameters, retina cell counts, and electroretinogram parameters were compared between the groups. A two-sample t-test for means and a P value threshold of 0.05 were used for assessing statistical differences between measurements. Results. The injection of the nitinol clip was 15 times faster than conventional suturing. There were no statistical differences between the groups for corneal thickness, endothelial cell counts, specular microscopy parameters, retina cell counts, and electroretinogram measurements. Conclusions. The nitinol clip prototype is well tolerated and showed no evidence of toxicity in the short-term. The injectable delivery system was faster and technically less challenging than conventional suture techniques. PMID:22064995
Random field assessment of nanoscopic inhomogeneity of bone
Dong, X. Neil; Luo, Qing; Sparkman, Daniel M.; Millwater, Harry R.; Wang, Xiaodu
2010-01-01
Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to present the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic details. PMID:20817128
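A minimal sketch of the random field idea, assuming a 1-D Gaussian field with exponential covariance C(h) = sigma^2 exp(-|h|/L), where the correlation length L controls how quickly modulus values decorrelate along a lamella; all parameter values below are illustrative.

```python
# Simulate a 1-D Gaussian random field with exponential covariance by
# Cholesky factorization of the covariance matrix.
import numpy as np

def exponential_random_field(x, mean, sigma, corr_length, seed=0):
    rng = np.random.default_rng(seed)
    h = np.abs(x[:, None] - x[None, :])            # pairwise separations
    cov = sigma**2 * np.exp(-h / corr_length)
    cov += 1e-10 * np.eye(len(x))                  # jitter for numerical stability
    L = np.linalg.cholesky(cov)
    return mean + L @ rng.standard_normal(len(x))

x = np.linspace(0.0, 5.0, 200)                     # position across a lamella (illustrative units)
field = exponential_random_field(x, mean=20.0, sigma=3.0, corr_length=0.8)
```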
Explorations in Statistics: Standard Deviations and Standard Errors
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2008-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…
Comparing Simulated and Theoretical Sampling Distributions of the U3 Person-Fit Statistic.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Meijer, Rob R.; Sijtsma, Klaas
2002-01-01
Studied whether the theoretical sampling distribution of the U3 person-fit statistic is in agreement with the simulated sampling distribution under different item response theory models and varying item and test characteristics. Simulation results suggest that the use of standard normal deviates for the standardized version of the U3 statistic may…
Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob
2016-08-01
In many areas of science it is customary to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).
Statistical baseline assessment in cardiotocography.
Agostinelli, Angela; Braccili, Eleonora; Marchegiani, Enrico; Rosati, Riccardo; Sbrollini, Agnese; Burattini, Luca; Morettini, Micaela; Di Nardo, Francesco; Fioretti, Sandro; Burattini, Laura
2017-07-01
Cardiotocography (CTG) is the most common non-invasive diagnostic technique to evaluate fetal well-being. It consists in the recording of fetal heart rate (FHR; bpm) and maternal uterine contractions. Among the main parameters characterizing FHR, baseline (BL) is fundamental to determine fetal hypoxia and distress. In computerized applications, BL is typically computed as mean FHR±ΔFHR, with ΔFHR=8 bpm or ΔFHR=10 bpm, both values being experimentally fixed. In this context, the present work aims: to propose a statistical procedure for ΔFHR assessment; to quantitatively determine the ΔFHR value by applying such procedure to clinical data; and to compare the statistically-determined ΔFHR value against the experimentally-determined ΔFHR values. To these aims, the 552 recordings of the "CTU-UHB intrapartum CTG database" from Physionet were submitted to an automatic procedure, which consisted in a FHR preprocessing phase and a statistical BL assessment. During preprocessing, FHR time series were divided into 20-min sliding windows, in which missing data were removed by linear interpolation. Only windows with a correction rate lower than 10% were further processed for BL assessment, according to which ΔFHR was computed as the FHR standard deviation. The total number of accepted windows was 1192 (38.5%) over 383 recordings (69.4%) with at least one accepted window. The statistically-determined ΔFHR value was 9.7 bpm. This value was statistically different from 8 bpm (P < 10⁻¹⁹) but not from 10 bpm (P=0.16). Thus, ΔFHR=10 bpm is preferable over 8 bpm because it is both experimentally and statistically validated.
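A sketch of the windowing and baseline computation described above, under assumed details (4 Hz sampling, non-overlapping 20-min windows, invalid samples marked as zero or NaN): interpolate missing samples, keep windows with less than 10% corrected data, and report mean FHR and its standard deviation (ΔFHR) per window.

```python
# Windowed baseline assessment sketch for an FHR signal stored as a numpy array.
import numpy as np

def baseline_windows(fhr, fs=4.0, win_min=20, max_corrected=0.10):
    win = int(win_min * 60 * fs)
    results = []
    for start in range(0, len(fhr) - win + 1, win):
        seg = fhr[start:start + win].astype(float)
        missing = (seg <= 0) | np.isnan(seg)           # invalid samples
        if missing.mean() >= max_corrected:
            continue                                   # reject this window
        idx = np.arange(win)
        seg[missing] = np.interp(idx[missing], idx[~missing], seg[~missing])
        results.append((seg.mean(), seg.std()))        # (mean FHR, Delta FHR)
    return results
```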
Analysis of statistical misconception in terms of statistical reasoning
NASA Astrophysics Data System (ADS)
Maryati, I.; Priatna, N.
2018-05-01
Reasoning skills are needed by everyone in the era of globalization, because every person must be able to manage and use information that can easily be obtained from all over the world. Statistical reasoning skill is the ability to collect, group, process, and interpret information and to draw conclusions from it. This skill can be developed at various levels of education. However, the skill remains low because many people, students included, assume that statistics is merely the ability to count and apply formulas, and students still hold negative attitudes toward research-related courses. The purpose of this research is to analyze students' misconceptions in a descriptive statistics course in relation to statistical reasoning skill. The analysis was based on a misconception test and a statistical reasoning skill test, and on observing the effect of students' misconceptions on statistical reasoning skill. The sample consisted of 32 students of a mathematics education department who had taken the descriptive statistics course. The mean score on the misconception test was 49.7 with a standard deviation of 10.6, whereas the mean score on the statistical reasoning skill test was 51.8 with a standard deviation of 8.5. If a minimum score of 65 is taken as the standard for achieving course competence, the students' mean scores fall below that standard. The misconception results indicate which subtopics need particular attention. Based on the assessment, students' misconceptions occur in: 1) writing mathematical sentences and symbols correctly, 2) understanding basic definitions, and 3) determining which concept to use in solving a problem. For statistical reasoning skill, the assessment measured reasoning about: 1) data, 2) representation, 3) statistical format, 4) probability, 5) samples, and 6) association.
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with application of quantitative soft computing prediction models into financial area as reliable and accurate prediction models can be very helpful in management decision-making process. The authors suggest a new hybrid neural network which is a combination of the standard RBF neural network, a genetic algorithm, and a moving average. The moving average is supposed to enhance the outputs of the network using the error part of the original neural network. Authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for the horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model with autoregressive models and the standard neural network. They also incorporate genetic algorithm as an optimizing technique for adapting parameters of ANN which is then compared with standard backpropagation and backpropagation combined with K-means clustering algorithm. Finally, the authors find out that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can be helpful in eliminating the risk of making the bad decision in decision-making process.
Qi, Zhen; Tu, Li-Ping; Chen, Jing-Bo; Hu, Xiao-Juan; Xu, Jia-Tuo; Zhang, Zhi-Feng
2016-01-01
Background and Goal. The application of digital image processing techniques and machine learning methods in tongue image classification in Traditional Chinese Medicine (TCM) has been widely studied nowadays. However, it is difficult for the outcomes to generalize because of lack of color reproducibility and image standardization. Our study aims at the exploration of tongue colors classification with a standardized tongue image acquisition process and color correction. Methods. Three traditional Chinese medical experts are chosen to identify the selected tongue pictures taken by the TDA-1 tongue imaging device in TIFF format through ICC profile correction. Then we compare the mean value of L*a*b* of different tongue colors and evaluate the effect of the tongue color classification by machine learning methods. Results. The L*a*b* values of the five tongue colors are statistically different. Random forest method has a better performance than SVM in classification. SMOTE algorithm can increase classification accuracy by solving the imbalance of the varied color samples. Conclusions. At the premise of standardized tongue acquisition and color reproduction, preliminary objectification of tongue color classification in Traditional Chinese Medicine (TCM) is feasible.
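A hedged sketch of the classification step: oversample the minority tongue-color classes with SMOTE (from the imbalanced-learn package) and fit a random forest on L*a*b* features. The feature matrix, labels, and class proportions below are placeholders, not the study's data.

```python
# SMOTE + random forest sketch on placeholder L*a*b* features.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                     # mean L*, a*, b* per image (placeholder)
y = np.array(["pale"] * 150 + ["red"] * 60 + ["crimson"] * 45
             + ["purple"] * 30 + ["bluish"] * 15)  # imbalanced color labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)   # oversample minorities
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_bal, y_bal)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```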
Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?
Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie
2012-01-01
A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regards to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
Nonplanar KdV and KP equations for quantum electron-positron-ion plasma
NASA Astrophysics Data System (ADS)
Dutta, Debjit
2015-12-01
Nonlinear quantum ion-acoustic waves with the effects of nonplanar cylindrical geometry, quantum corrections, and transverse perturbations are studied. By using the standard reductive perturbation technique, a cylindrical Kadomtsev-Petviashvili equation for ion-acoustic waves is derived by incorporating quantum-mechanical effects. The quantum-mechanical effects via quantum diffraction and quantum statistics and the role of transverse perturbations in cylindrical geometry on the dynamics of this wave are studied analytically. It is found that the dynamics of ion-acoustic solitary waves (IASWs) is governed by a three-dimensional cylindrical Kadomtsev-Petviashvili equation (CKPE). The results could help in a theoretical analysis of astrophysical and laser produced plasmas.
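For reference, the cylindrical KP equation obtained by reductive perturbation is commonly quoted in the form below, where Φ is the first-order perturbation, ξ, τ, θ are the stretched coordinates, and A, B, C stand for coefficients that collect the plasma and quantum-correction parameters; this is a generic form, not a transcription of the paper's equation.

$$\frac{\partial}{\partial \xi}\left(\frac{\partial \Phi}{\partial \tau}+A\,\Phi\,\frac{\partial \Phi}{\partial \xi}+B\,\frac{\partial^{3}\Phi}{\partial \xi^{3}}+\frac{\Phi}{2\tau}\right)+\frac{C}{\tau^{2}}\,\frac{\partial^{2}\Phi}{\partial \theta^{2}}=0 .$$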
Moulding techniques in lipstick manufacture: a comparative evaluation.
Dweck, A C; Burnham, C A
1980-06-01
Synopsis This paper examines two methods of lipstick bulk manufacture: one via a direct method and the other via stock concentrates. The paper continues with a comparison of two manufactured bulks moulded in three different ways - first by split moulding, secondly by Rotamoulding, and finally by Ejectoret moulding. Full consideration is paid to time, labour and cost standards of each approach and the resultant moulding examined using some novel physical testing methods. The results of these tests are statistically analysed. Finally, on the basis of the gathered data and photomicrographical work a theoretical lipstick structure is proposed by which the results may be explained.
Direct comparison of optical lattice clocks with an intercontinental baseline of 9000 km.
Hachisu, H; Fujieda, M; Nagano, S; Gotoh, T; Nogami, A; Ido, T; Falke, St; Huntemann, N; Grebing, C; Lipphardt, B; Lisdat, Ch; Piester, D
2014-07-15
We have demonstrated a direct frequency comparison between two ⁸⁷Sr lattice clocks operated in intercontinentally separated laboratories in real time. Two-way satellite time and frequency transfer technique, based on the carrier-phase, was employed for a direct comparison, with a baseline of 9000 km between Japan and Germany. A frequency comparison was achieved for 83,640 s, resulting in a fractional difference of (1.1±1.6)×10⁻¹⁵, where the statistical part is the largest contributor to the uncertainty. This measurement directly confirms the agreement of the two optical frequency standards on an intercontinental scale.
Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures
Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha
2017-01-01
Aims and Objectives: The objective of the present study was to compare the effectiveness of three different processing techniques and to determine their accuracy through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and for the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in SPSS software version 19.0 (IBM). Results: Data obtained from the three groups were subjected to the one-way ANOVA test; results with significant variations were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions as compared to the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was higher with the compression molding technique as compared to the injection molding techniques, which is statistically significant (P < 0.001). Conclusions: Within the limitations of this study, injection molding techniques exhibited fewer processing errors as compared to the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors reported between the two injection molding systems. PMID:28713763
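A sketch of the reported analysis pattern, one-way ANOVA across the three processing groups followed by a post hoc Tukey HSD test; the vertical-pin-rise values are invented placeholders, not the study's measurements.

```python
# One-way ANOVA plus Tukey HSD sketch on invented vertical pin rise data (mm).
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

compression     = np.array([0.55, 0.49, 0.58, 0.51, 0.50, 0.47])
injection_pre   = np.array([0.31, 0.28, 0.35, 0.30, 0.27, 0.33])
injection_unpol = np.array([0.29, 0.34, 0.30, 0.26, 0.32, 0.28])

F, p = stats.f_oneway(compression, injection_pre, injection_unpol)
print(f"ANOVA: F = {F:.2f}, P = {p:.4f}")

values = np.concatenate([compression, injection_pre, injection_unpol])
groups = ["compression"] * 6 + ["injection_prepoly"] * 6 + ["injection_unpoly"] * 6
print(pairwise_tukeyhsd(values, groups))          # pairwise post hoc comparisons
```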
Laslett, Mark; McDonald, Barry; Tropp, Hans; Aprill, Charles N; Öberg, Birgitta
2005-01-01
Background The tissue origin of low back pain (LBP) or referred lower extremity symptoms (LES) may be identified in about 70% of cases using advanced imaging, discography and facet or sacroiliac joint blocks. These techniques are invasive and availability varies. A clinical examination is non-invasive and widely available but its validity is questioned. Diagnostic studies usually examine single tests in relation to single reference standards, yet in clinical practice, clinicians use multiple tests and select from a range of possible diagnoses. There is a need for studies that evaluate the diagnostic performance of clinical diagnoses against available reference standards. Methods We compared blinded clinical diagnoses with diagnoses based on available reference standards for known causes of LBP or LES such as discography, facet, sacroiliac or hip joint blocks, epidural injections, advanced imaging studies or any combination of these tests. A prospective, blinded validity design was employed. Physiotherapists examined consecutive patients with chronic lumbopelvic pain and/or referred LES scheduled to receive the reference standard examinations. When diagnoses were in complete agreement regardless of complexity, "exact" agreement was recorded. When the clinical diagnosis was included within the reference standard diagnoses, "clinical agreement" was recorded. The proportional chance criterion (PCC) statistic was used to estimate agreement on multiple diagnostic possibilities because it accounts for the prevalence of individual categories in the sample. The kappa statistic was used to estimate agreement on six pathoanatomic diagnoses. Results In a sample of chronic LBP patients (n = 216) with high levels of disability and distress, 67% received a patho-anatomic diagnosis based on available reference standards, and 10% had more than one tissue origin of pain identified. For 27 diagnostic categories and combinations, chance clinical agreement (PCC) was estimated at 13%. "Exact" agreement between clinical and reference standard diagnoses was 32% and "clinical agreement" 51%. For six pathoanatomic categories (disc, facet joint, sacroiliac joint, hip joint, nerve root and spinal stenosis), PCC was 33% with actual agreement 56%. There was no overlap of 95% confidence intervals on any comparison. Diagnostic agreement on the six most common patho-anatomic categories produced a kappa of 0.31. Conclusion Clinical diagnoses agree with reference standard diagnoses more often than chance. Using available reference standards, most patients can have a tissue source of pain identified. PMID:15943873
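The chance-agreement and kappa calculations referred to above can be sketched from a confusion matrix of clinical versus reference-standard diagnoses; here chance agreement is computed as the usual expected-agreement term of Cohen's kappa (one common way to express a proportional chance criterion, which may differ in detail from the paper's PCC), and the 3x3 counts are invented.

```python
# Chance agreement and Cohen's kappa from a confusion matrix of
# clinical (rows) vs reference-standard (columns) diagnostic categories.
import numpy as np

def chance_agreement_and_kappa(confusion):
    confusion = np.asarray(confusion, dtype=float)
    n = confusion.sum()
    p_obs = np.trace(confusion) / n                         # observed agreement
    p_chance = (confusion.sum(axis=1) / n) @ (confusion.sum(axis=0) / n)
    kappa = (p_obs - p_chance) / (1.0 - p_chance)
    return p_chance, p_obs, kappa

example = [[30, 5, 5],
           [6, 20, 4],
           [4, 6, 20]]
print(chance_agreement_and_kappa(example))
```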
In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...
Statistical analysis of tire treadwear data
DOT National Transportation Integrated Search
1985-03-01
This report describes the results of a statistical analysis of the treadwear variability of radial tires subjected to the Uniform Tire Quality Grading (UTQG) standard. Because unexplained variability in the treadwear portion of the standard cou...
El-Kholey, Khalid E
2017-03-01
The study was designed to evaluate the anesthetic efficacy of 4% articaine with 1:100,000 epinephrine (A100) in infiltration and inferior alveolar nerve block (IANB) anesthetic techniques for pain control during extraction of the mandibular posterior teeth. This prospective randomized single-blind clinical trial included 100 patients needing extraction of at least two mandibular molars. Patients received either infiltration in the buccal vestibule opposite the first molar supplemented with lingual infiltration, or standard IANB with A100. For assessment of the depth of anesthesia obtained by the two anesthetic techniques, the presence or absence of pain during the extraction was rated using the visual analog scale. Fifty patients received infiltration anesthesia and fifty patients were anesthetized by IANB. The success rate of pain-free extraction after buccal infiltration was 94%, whereas with IANB using the same anesthetic it was 92%. No statistical differences were detected in the success rates between the two anesthetic techniques (P = 0.15). Buccal infiltration, with supplemental lingual anesthesia, can be considered a good option during extraction of the mandibular molar and premolar teeth.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent of error evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique which show the output formats and typical plots comparing computer results to each set of input data.
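A minimal sketch of the two model families described above, fitted by ordinary least squares on illustrative data: a degree-2 polynomial in two variables without cross products, and the same model with the x1*x2 cross product added.

```python
# Polynomial least-squares fits with and without the linear cross-product term.
import numpy as np

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, (2, 100))
y = 1.0 + 2.0 * x1 - 3.0 * x2**2 + 0.5 * x1 * x2 + rng.normal(0, 0.05, 100)

# Degree-2 design without cross products: [1, x1, x2, x1^2, x2^2]
Z_plain = np.column_stack([np.ones(100), x1, x2, x1**2, x2**2])
# Same design with the x1*x2 cross product appended
Z_cross = np.column_stack([Z_plain, x1 * x2])

for label, Z in (("no cross products", Z_plain), ("with cross products", Z_cross)):
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    print(label, "residual SD:", round(resid.std(ddof=Z.shape[1]), 4))
```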
2011-01-01
Background Clinical researchers have often preferred to use a fixed effects model for the primary interpretation of a meta-analysis. Heterogeneity is usually assessed via the well-known Q and I² statistics, along with the random effects estimate they imply. In recent years, alternative methods for quantifying heterogeneity have been proposed that are based on a 'generalised' Q statistic. Methods We review 18 IPD meta-analyses of RCTs into treatments for cancer, in order to quantify the amount of heterogeneity present and also to discuss practical methods for explaining heterogeneity. Results Differing results were obtained when the standard Q and I² statistics were used to test for the presence of heterogeneity. The two meta-analyses with the largest amount of heterogeneity were investigated further, and on inspection the straightforward application of a random effects model was not deemed appropriate. Compared to the standard Q statistic, the generalised Q statistic provided a more accurate platform for estimating the amount of heterogeneity in the 18 meta-analyses. Conclusions Explaining heterogeneity via the pre-specification of trial subgroups, graphical diagnostic tools and sensitivity analyses produced a more desirable outcome than an automatic application of the random effects model. Generalised Q statistic methods for quantifying and adjusting for heterogeneity should be incorporated as standard into statistical software. Software is provided to help achieve this aim. PMID:21473747
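A sketch of the standard heterogeneity summaries named above, Cochran's Q and I², computed from study effect estimates and standard errors; the numbers are illustrative and unrelated to the 18 IPD meta-analyses.

```python
# Cochran's Q and I^2 from study-level effects and standard errors.
import numpy as np
from scipy import stats

effects = np.array([-0.30, -0.12, -0.45, 0.05, -0.25])   # e.g. log hazard ratios
se = np.array([0.12, 0.20, 0.15, 0.18, 0.10])

w = 1.0 / se**2                                           # inverse-variance weights
pooled_fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - pooled_fixed)**2)
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100.0
p_het = stats.chi2.sf(Q, df)
print(f"Q = {Q:.2f} (P = {p_het:.3f}), I^2 = {I2:.1f}%")
```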
Wang, Rong; Xu, Xin
2015-12-01
To compare the effect of 2 methods of occlusion adjustment on occlusal balance and the muscles of mastication in patients with dental implant restorations. Twenty patients, each with a single posterior edentulous space and no distal dentition, were selected and divided into 2 groups. Patients in group A underwent the original occlusion adjustment method and patients in group B underwent the occlusal plane reduction technique. Ankylos implants were placed in the edentulous space in each patient and restored with a fixed single-unit crown. Occlusion was adjusted in each restoration accordingly. Electromyograms were recorded to determine the effect of the adjustment methods on occlusion and the muscles of mastication 3 months and 6 months after initial restoration and adjustment. Data were collected and measurements of balanced occlusion were obtained, including central occlusion force (COF) and asymmetry index of molar occlusal force (AMOF). Measurements of balanced masticatory muscle function were also obtained, including electromyogram measurements of the muscles of mastication and the anterior bundle of the temporalis muscle at the mandibular rest position, average electromyogram measurements of the anterior bundle of the temporalis muscle at the intercuspal position (ICP), Astot, the masseter muscle asymmetry index, and the anterior temporalis asymmetry index (ASTA). Statistical analysis was performed using Student's t-test with the SPSS 18.0 software package. Three months after occlusion adjustment, the parameters of the original occlusion adjustment method were significantly different between group A and group B for both the balanced occlusion measures and the balanced masticatory muscle measures. Six months after occlusion adjustment, the parameters were significantly different between group A and group B for the balanced masticatory muscle measures, but there was no significant difference in the balanced occlusion measures. Using the occlusal plane reduction adjustment technique, it is possible to obtain occlusion and masticatory muscle electromyogram indices similar to those of the contralateral natural dentition in patients with a single posterior edentulous space without distal dentition restored with a fixed single-unit implant crown.
ERIC Educational Resources Information Center
Karadag, Engin
2010-01-01
To assess the research methods and statistical analysis techniques employed by educational researchers, this study surveyed unpublished doctoral dissertations from 2003 to 2007. Frequently used research methods consisted of experimental research; a survey; a correlational study; and a case study. Descriptive statistics, t-test, ANOVA, factor…
Statistics in the Workplace: A Survey of Use by Recent Graduates with Higher Degrees
ERIC Educational Resources Information Center
Harraway, John A.; Barker, Richard J.
2005-01-01
A postal survey was conducted regarding statistical techniques, research methods and software used in the workplace by 913 graduates with PhD and Masters degrees in the biological sciences, psychology, business, economics, and statistics. The study identified gaps between topics and techniques learned at university and those used in the workplace,…
Abe, Hiroyuki; Mori, Naoko; Tsuchiya, Keiko; Schacht, David V; Pineda, Federico D; Jiang, Yulei; Karczmar, Gregory S
2016-11-01
The purposes of this study were to evaluate diagnostic parameters measured with ultrafast MRI acquisition and with standard acquisition and to compare diagnostic utility for differentiating benign from malignant lesions. Ultrafast acquisition is a high-temporal-resolution (7 seconds) imaging technique for obtaining 3D whole-breast images. The dynamic contrast-enhanced 3-T MRI protocol consists of an unenhanced standard and an ultrafast acquisition that includes eight contrast-enhanced ultrafast images and four standard images. Retrospective assessment was performed for 60 patients with 33 malignant and 29 benign lesions. A computer-aided detection system was used to obtain initial enhancement rate and signal enhancement ratio (SER) by means of identification of a voxel showing the highest signal intensity in the first phase of standard imaging. From the same voxel, the enhancement rate at each time point of the ultrafast acquisition and the AUC of the kinetic curve from zero to each time point of ultrafast imaging were obtained. There was a statistically significant difference between benign and malignant lesions in enhancement rate and kinetic AUC for ultrafast imaging and also in initial enhancement rate and SER for standard imaging. ROC analysis showed no significant differences between enhancement rate in ultrafast imaging and SER or initial enhancement rate in standard imaging. Ultrafast imaging is useful for discriminating benign from malignant lesions. The differential utility of ultrafast imaging is comparable to that of standard kinetic assessment in a shorter study time.
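A sketch of the kinetic quantities discussed above, using definitions in common breast-MRI use (the paper's exact formulas may differ): initial enhancement rate (S_early - S_pre)/S_pre and signal enhancement ratio SER = (S_early - S_pre)/(S_delayed - S_pre).

```python
# Kinetic-curve parameters from pre-contrast, early post-contrast and delayed
# voxel signal intensities (arbitrary units); definitions are the common ones,
# not necessarily those used in the study above.
def kinetic_parameters(s_pre, s_early, s_delayed):
    s_pre, s_early, s_delayed = map(float, (s_pre, s_early, s_delayed))
    enhancement_rate = (s_early - s_pre) / s_pre
    ser = (s_early - s_pre) / (s_delayed - s_pre)
    return enhancement_rate, ser

print(kinetic_parameters(100, 260, 230))   # washout-type curve gives SER > 1
```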
He, Yiping; He, Tongqiang; Wang, Yanxia; Xu, Zhao; Xu, Yehong; Wu, Yiqing; Ji, Jing; Mi, Yang
2014-11-01
To explore the effect of different diagnositic criteria of subclinical hypothyroidism using thyroid stimulating hormone (TSH) and positive thyroid peroxidase antibodies (TPO-Ab) on the pregnancy outcomes. 3 244 pregnant women who had their antenatal care and delivered in Child and Maternity Health Hospital of Shannxi Province August from 2011 to February 2013 were recruited prospectively. According to the standard of American Thyroid Association (ATA), pregnant women with normal serum free thyroxine (FT4) whose serum TSH level> 2.50 mU/L were diagnosed as subclinical hypothyroidism in pregnancy (foreign standard group). According to the Guideline of Diagnosis and Therapy of Prenatal and Postpartum Thyroid Disease made by Chinese Society of Endocrinology and Chinese Society of Perinatal Medicine in 2012, pregnant women with serum TSH level> 5.76 mU/L, and normal FT4 were diagnosed as subclinical hypothyroidism in pregnancy(national standard group). Pregnant women with subclinical hypothyroidism whose serum TSH levels were between 2.50-5.76 mU/L were referred as the study observed group; and pregnant women with serum TSH level< 2.50 mU/L and negative TPO- Ab were referred as the control group. Positive TPO-Ab results and the pregnancy outcomes were analyzed. (1) There were 635 cases in the foreign standard group, with the incidence of 19.57% (635/3 244). And there were 70 cases in the national standard group, with the incidence of 2.16% (70/3 244). There were statistically significant difference between the two groups (P < 0.01). There were 565 cases in the study observed group, with the incidence of 17.42% (565/3 244). There was statistically significant difference (P < 0.01) when compared with the national standard group; while there was no statistically significant difference (P > 0.05) when compared with the foreign standard group. (2) Among the 3 244 cases, 402 cases had positive TPO-Ab. 318 positive cases were in the foreign standard group, and the incidence of subclinical hypothyroidism was 79.10% (318/402). There were 317 negative cases in the foreign standard group, with the incidence of 11.15% (317/2 842). The difference was statistically significant (P < 0.01) between them. In the national standard group, 46 cases had positive TPO-Ab, with the incidence of 11.44% (46/402), and 24 cases had negative result, with the incidence of 0.84% (24/2 842). There were statistically significant difference (P < 0.01) between them. In the study observed group, 272 cases were TPO-Ab positive, with the incidence of 67.66% (272/402), and 293 cases were negative, with the incidence of 10.31% (293/2 842), the difference was statistically significant (P < 0.01). (3) The incidence of miscarriage, premature delivery, gestational hypertension disease, gestational diabetes mellitus(GDM)in the foreign standard group had statistically significant differences (P < 0.05) when compared with the control group, respectively. While there was no statistically significant difference (P > 0.05) in the incidence of placental abruption or fetal distress. And the incidence of miscarriage, premature delivery, gestational hypertension disease, GDM in the national standard group had statistical significant difference (P < 0.05) compared with the control group, respectively. While there was no statistically significant difference (P > 0.05) in the incidence of placental abruption or fetal distress. 
In the study observed group, the incidences of miscarriage, gestational hypertension disease, and GDM were significantly different from those in the control group (P < 0.05), whereas the incidences of preterm labor, placental abruption, and fetal distress showed no statistically significant differences (P > 0.05). (4) In the national standard group, the incidences of miscarriage, premature delivery, gestational hypertension disease, GDM, placental abruption, and fetal distress in TPO-Ab positive cases showed an increasing trend compared with TPO-Ab negative cases, but without statistical significance (P > 0.05). In the study observed group, the incidences of gestational hypertension disease and GDM in TPO-Ab positive cases were significantly different from those in TPO-Ab negative cases (P < 0.05), while the incidences of miscarriage, premature birth, placental abruption, and fetal distress showed no statistically significant differences (P > 0.05). In the foreign standard group, the incidences of gestational hypertension disease and GDM in TPO-Ab positive cases were significantly different from those in TPO-Ab negative cases (P < 0.05). Conclusions: (1) The incidence of subclinical hypothyroidism is rather high during early pregnancy and can lead to adverse pregnancy outcomes. (2) A positive TPO-Ab result has important predictive value for thyroid dysfunction and GDM. (3) The ATA diagnostic standard (serum TSH level > 2.50 mU/L) is safer for antenatal care, whereas the national standard (serum TSH level > 5.76 mU/L) is less conducive to pregnancy management.
Willis, Brian H; Riley, Richard D
2017-09-20
An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice? Does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity, where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
Statistical approach for selection of biologically informative genes.
Das, Samarendra; Rai, Anil; Mishra, D C; Rai, Shesh N
2018-05-20
Selection of informative genes from high dimensional gene expression data has emerged as an important research area in genomics. Many of the gene selection techniques proposed so far are based either on a relevancy or a redundancy measure. Further, the performance of these techniques has been judged through post-selection classification accuracy computed through a classifier using the selected genes. This performance metric may be statistically sound but may not be biologically relevant. A statistical approach, i.e. Boot-MRMR, was proposed based on a composite measure of maximum relevance and minimum redundancy, which is both statistically sound and biologically relevant for informative gene selection. For comparative evaluation of the proposed approach, we developed two biologically sufficient criteria, i.e. Gene Set Enrichment with QTL (GSEQ) and biological similarity score based on Gene Ontology (GO). Further, a systematic and rigorous evaluation of the proposed technique with 12 existing gene selection techniques was carried out using five gene expression datasets. This evaluation was based on a broad spectrum of statistically sound (e.g. subject classification) and biologically relevant (based on QTL and GO) criteria under a multiple criteria decision-making framework. The performance analysis showed that the proposed technique selects informative genes which are more biologically relevant. The proposed technique is also found to be quite competitive with the existing techniques with respect to subject classification and computational time. Our results also showed that under the multiple criteria decision-making setup, the proposed technique is best for informative gene selection over the available alternatives. Based on the proposed approach, an R package, i.e. BootMRMR, has been developed and is available at https://cran.r-project.org/web/packages/BootMRMR. This study will provide a practical guide to selecting statistical techniques for identifying informative genes from high dimensional expression data for breeding and systems biology studies. Published by Elsevier B.V.
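A generic minimum-redundancy-maximum-relevance score, stabilized by bootstrapping the subjects, can be sketched as follows. This is only an illustration of the Boot-MRMR idea using absolute Pearson correlations; the actual composite measure, weighting and tests are those implemented in the BootMRMR R package, and all names and toy data below are assumptions.

```python
import numpy as np

def mrmr_scores(X, y):
    """Generic mRMR-style score: relevance (|corr| with the class label) minus
    mean redundancy (mean |corr| with the other genes). Illustration only."""
    Xc = (X - X.mean(0)) / (X.std(0) + 1e-12)
    yc = (y - y.mean()) / (y.std() + 1e-12)
    relevance = np.abs(Xc.T @ yc) / len(y)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    redundancy = (corr.sum(1) - 1.0) / (X.shape[1] - 1)   # exclude self-correlation
    return relevance - redundancy

def boot_select(X, y, k=10, n_boot=50, seed=None):
    """Bootstrap the ranking over subjects and keep the genes selected most often."""
    rng = np.random.default_rng(seed)
    counts = np.zeros(X.shape[1])
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))             # bootstrap sample of subjects
        top = np.argsort(mrmr_scores(X[idx], y[idx]))[::-1][:k]
        counts[top] += 1
    return np.argsort(counts)[::-1][:k]                   # most frequently selected genes

# toy data: 40 subjects x 200 genes with binary class labels
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))
y = rng.integers(0, 2, 40).astype(float)
print(boot_select(X, y, k=5, seed=42))
```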
Statistics and the Question of Standards
Stigler, Stephen M.
1996-01-01
This is a written version of a memorial lecture given in honor of Churchill Eisenhart at the National Institute of Standards and Technology on May 5, 1995. The relationship and the interplay between statistics and standards over the past centuries are described. Historical examples are presented to illustrate mutual dependency and development in the two fields. PMID:27805077
Standardized pivot shift test improves measurement accuracy.
Hoshino, Yuichi; Araujo, Paulo; Ahlden, Mattias; Moore, Charity G; Kuroda, Ryosuke; Zaffagnini, Stefano; Karlsson, Jon; Fu, Freddie H; Musahl, Volker
2012-04-01
The variability of the pivot shift test techniques greatly interferes with achieving a quantitative and generally comparable measurement. The purpose of this study was to compare the variation of the quantitative pivot shift measurements with different surgeons' preferred techniques to a standardized technique. The hypothesis was that standardizing the pivot shift test would improve consistency in the quantitative evaluation when compared with surgeon-specific techniques. A whole lower body cadaveric specimen was prepared to have a low-grade pivot shift on one side and high-grade pivot shift on the other side. Twelve expert surgeons performed the pivot shift test using (1) their preferred technique and (2) a standardized technique. Electromagnetic tracking was utilized to measure anterior tibial translation and acceleration of the reduction during the pivot shift test. The variation of the measurement was compared between the surgeons' preferred technique and the standardized technique. The anterior tibial translation during the pivot shift test was similar between the surgeons' preferred technique (left 24.0 ± 4.3 mm; right 15.5 ± 3.8 mm) and the standardized technique (left 25.1 ± 3.2 mm; right 15.6 ± 4.0 mm; n.s.). However, the variation in acceleration was significantly smaller with the standardized technique (left 3.0 ± 1.3 mm/s²; right 2.5 ± 0.7 mm/s²) compared with the surgeons' preferred technique (left 4.3 ± 3.3 mm/s²; right 3.4 ± 2.3 mm/s²; both P < 0.01). Standardizing the pivot shift test maneuver provides a more consistent quantitative evaluation and may be helpful in designing future multicenter clinical outcome trials. Diagnostic study, Level I.
Minimum Information about a Genotyping Experiment (MIGEN)
Huang, Jie; Mirel, Daniel; Pugh, Elizabeth; Xing, Chao; Robinson, Peter N.; Pertsemlidis, Alexander; Ding, LiangHao; Kozlitina, Julia; Maher, Joseph; Rios, Jonathan; Story, Michael; Marthandan, Nishanth; Scheuermann, Richard H.
2011-01-01
Genotyping experiments are widely used in clinical and basic research laboratories to identify associations between genetic variations and normal/abnormal phenotypes. Genotyping assay techniques vary from single genomic regions that are interrogated using PCR reactions to high throughput assays examining genome-wide sequence and structural variation. The resulting genotype data may include millions of markers of thousands of individuals, requiring various statistical, modeling or other data analysis methodologies to interpret the results. To date, there are no standards for reporting genotyping experiments. Here we present the Minimum Information about a Genotyping Experiment (MIGen) standard, defining the minimum information required for reporting genotyping experiments. The MIGen standard covers experimental design, subject description, genotyping procedure, quality control and data analysis. MIGen is a registered project under MIBBI (Minimum Information for Biological and Biomedical Investigations) and is being developed by an interdisciplinary group of experts in basic biomedical science, clinical science, biostatistics and bioinformatics. To accommodate the wide variety of techniques and methodologies applied in current and future genotyping experiments, MIGen leverages foundational concepts from the Ontology for Biomedical Investigations (OBI) for the description of the various types of planned processes and implements a hierarchical document structure. The adoption of MIGen by the research community will facilitate consistent genotyping data interpretation and independent data validation. MIGen can also serve as a framework for the development of data models for capturing and storing genotyping results and experiment metadata in a structured way, to facilitate the exchange of metadata. PMID:22180825
Caremans, Jeroen; Hamans, Evert; Muylle, Ludo; Van de Heyning, Paul; Van Rompaey, Vincent
2016-06-01
Allograft tympano-ossicular systems (ATOS) have proven their use over many decades in tympanoplasty and reconstruction after resection of cholesteatoma. The transcranial bone plug technique has been used in the past 50 years to procure en bloc ATOS (tympanic membrane with malleus, incus and stapes attached). Recently, our group reported the feasibility of the endoscopic procurement technique. The aim of this study was to assess whether clinical outcome is equivalent in ATOS acquired by using the endoscopic procurement technique compared to ATOS acquired by using the transcranial technique. A double-blind randomized controlled audit was performed in a tertiary referral center in patients that underwent allograft tympanoplasty because of chronic otitis media with and without cholesteatoma. Allograft epithelialisation was evaluated at the short-term postoperative visit by microscopic examination. Failures were reported if reperforation was observed. Fifty patients underwent allograft tympanoplasty: 34 received endoscopically procured ATOS and 16 received transcranially procured ATOS. One failed case was observed, in the endoscopic procurement group. We did not observe a statistically significant difference between the two groups in failure rate. This study demonstrates equivalence of the clinical outcome of allograft tympanoplasty using either endoscopically or transcranially procured ATOS and therefore indicates that the endoscopic technique can be considered the new standard procurement technique, especially because it has several advantages over the former transcranial technique: it avoids the risk of prion transmission and it is faster, while leaving no noticeable incision.
Techniques in teaching statistics : linking research production and research use.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martinez-Moyano, I.; Smith, A. (Univ. of Massachusetts at Boston)
In the spirit of closing the 'research-practice gap,' the authors extend evidence-based principles to statistics instruction in social science graduate education. The authors employ a Delphi method to survey experienced statistics instructors to identify teaching techniques to overcome the challenges inherent in teaching statistics to students enrolled in practitioner-oriented master's degree programs. Among the teaching techniques identified as essential are using real-life examples, requiring data collection exercises, and emphasizing interpretation rather than results. Building on existing research, preliminary interviews, and the findings from the study, the authors develop a model describing antecedents to the strength of the link between research and practice.
Evaluating uses of data mining techniques in propensity score estimation: a simulation study.
Setoguchi, Soko; Schneeweiss, Sebastian; Brookhart, M Alan; Glynn, Robert J; Cook, E Francis
2008-06-01
In propensity score modeling, it is a standard practice to optimize the prediction of exposure status based on the covariate information. In a simulation study, we examined in what situations analyses based on various types of exposure propensity score (EPS) models using data mining techniques such as recursive partitioning (RP) and neural networks (NN) produce unbiased and/or efficient results. We simulated data for a hypothetical cohort study (n = 2000) with a binary exposure/outcome and 10 binary/continuous covariates with seven scenarios differing by non-linear and/or non-additive associations between exposure and covariates. EPS models used logistic regression (LR) (all possible main effects), RP1 (without pruning), RP2 (with pruning), and NN. We calculated c-statistics (C), standard errors (SE), and bias of exposure-effect estimates from outcome models for the PS-matched dataset. Data mining techniques yielded higher C than LR (mean: NN, 0.86; RP1, 0.79; RP2, 0.72; and LR, 0.76). SE tended to be greater in models with higher C. Overall bias was small for each strategy, although NN estimates tended to be the least biased. C was not correlated with the magnitude of bias (correlation coefficient [COR] = -0.3, p = 0.1) but was correlated with increased SE (COR = 0.7, p < 0.001). Effect estimates from EPS models by simple LR were generally robust. NN models generally provided the least numerically biased estimates. C was not associated with the magnitude of bias but was associated with increased SE.
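The modeling comparison can be illustrated with a small sketch that fits an exposure propensity score by main-effects logistic regression and by a decision tree (standing in for recursive partitioning) and reports the c-statistic for each. It assumes scikit-learn is available and uses a synthetic non-additive exposure model; it does not reproduce the paper's simulation scenarios, pruning choices or matching step.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 10))                                     # 10 covariates
logit = 0.6 * X[:, 0] - 0.4 * X[:, 1] + 0.5 * X[:, 2] * X[:, 3]  # non-additive term
exposure = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# exposure propensity score models: main-effects logistic regression vs. a tree
models = {
    "LR (main effects)": LogisticRegression(max_iter=1000).fit(X, exposure),
    "tree (RP-like)": DecisionTreeClassifier(max_depth=5, random_state=0).fit(X, exposure),
}
for name, model in models.items():
    ps = model.predict_proba(X)[:, 1]                            # estimated propensity scores
    print(name, "c-statistic:", round(roc_auc_score(exposure, ps), 3))
```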
The macular photostress test in diabetes, glaucoma, and cataract
NASA Astrophysics Data System (ADS)
Baptista, António M. G.; Sousa, Raul A. R. C.; Rocha, Filomena A. S. Q.; Fernandes, Paula Sepúlveda; Macedo, António F.
2013-11-01
Purpose. The photostress recovery time (PSRT) test has been widely reported as a helpful screening clinical tool. However, the poor standardization of its measurement technique remains a limitation among clinicians. The purpose of this study is to apply a recommended clinical technique to measure the PSRT in some of the most common eye diseases to ascertain whether these diseases affect PSRT values. Methods. One hundred and one controls and 105 patients with diagnosed diabetes (without visible signs of diabetic retinopathy), primary open angle glaucoma (POAG) or cataracts underwent photostress testing. The test was performed with a direct ophthalmoscope illuminating the macula for 30 seconds. Participants belonged to three age classes: A, B and C; and were divided into four groups: control, diabetic, POAG and cataract. The age ranges for the A, B and C classes were respectively 43-54, 55-64 and 65-74 years. The groups were also further compared within each age class. In addition, the influence of age on PSRT was evaluated using the control group. Results. Results demonstrate that PSRT changes with age (p<0.02). In class A, the diabetic group had a faster PSRT than the control group: (mean ± standard deviation) 20.22 ± 7.51 versus 26.14 ± 8.34 seconds. The difference between these groups was statistically significant (t-test, p=0.012). The cataract and POAG groups did not affect the PSRT significantly. Conclusions. The technique used for the photostress test showed that diabetics younger than 54 years may have a faster PSRT and that aging delays the PSRT.
A new technique for quantitative analysis of hair loss in mice using grayscale analysis.
Ponnapakkam, Tulasi; Katikaneni, Ranjitha; Gulati, Rohan; Gensure, Robert
2015-03-09
Alopecia is a common form of hair loss which can occur in many different conditions, including male-pattern hair loss, polycystic ovarian syndrome, and alopecia areata. Alopecia can also occur as a side effect of chemotherapy in cancer patients. In this study, our goal was to develop a consistent and reliable method to quantify hair loss in mice, which will allow investigators to accurately assess and compare new therapeutic approaches for these various forms of alopecia. The method utilizes a standard gel imager to obtain and process images of mice, measuring the light absorption, which occurs in rough proportion to the amount of black (or gray) hair on the mouse. Data that has been quantified in this fashion can then be analyzed using standard statistical techniques (i.e., ANOVA, T-test). This methodology was tested in mouse models of chemotherapy-induced alopecia, alopecia areata and alopecia from waxing. In this report, the detailed protocol is presented for performing these measurements, including validation data from C57BL/6 and C3H/HeJ strains of mice. This new technique offers a number of advantages, including relative simplicity of application, reliance on equipment which is readily available in most research laboratories, and applying an objective, quantitative assessment which is more robust than subjective evaluations. Improvements in quantification of hair growth in mice will improve study of alopecia models and facilitate evaluation of promising new therapies in preclinical studies.
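A minimal sketch of the grayscale idea, assuming images are already available as 2-D arrays with 0 = black and 255 = white: measure mean darkness inside a region of interest for each mouse and compare groups with a t-test. The ROI coordinates, synthetic images and group sizes below are illustrative, not the published gel-imager workflow.

```python
import numpy as np
from scipy import stats

def mean_darkness(gray_image, roi):
    """Mean light-absorption proxy inside a rectangular region of interest.
    gray_image: 2-D array with 0 = black (hair) and 255 = white background.
    roi: (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    patch = gray_image[r0:r1, c0:c1].astype(float)
    return (255.0 - patch).mean()          # higher value = more dark hair

# illustrative synthetic images: darker (hairier) control mice vs. lighter treated mice
rng = np.random.default_rng(2)
roi = (10, 90, 10, 90)
control = [mean_darkness(rng.integers(40, 90, (100, 100)), roi) for _ in range(6)]
treated = [mean_darkness(rng.integers(120, 200, (100, 100)), roi) for _ in range(6)]

t, p = stats.ttest_ind(control, treated)
print(f"t = {t:.2f}, p = {p:.4f}")
```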
Metrology of vibration measurements by laser techniques
NASA Astrophysics Data System (ADS)
von Martens, Hans-Jürgen
2008-06-01
Metrology as the art of careful measurement has been understood as uniform methodology for measurements in natural sciences, covering methods for the consistent assessment of experimental data and a corpus of rules regulating application in technology and in trade and industry. The knowledge, methods and tools available for precision measurements can be exploited for measurements at any level of uncertainty in any field of science and technology. A metrological approach to the preparation, execution and evaluation (including expression of uncertainty) of measurements of translational and rotational motion quantities using laser interferometer methods and techniques will be presented. The realization and dissemination of the SI units of motion quantities (vibration and shock) have been based on laser interferometer methods specified in international documentary standards. New and upgraded ISO standards are reviewed with respect to their suitability for ensuring traceable vibration measurements and calibrations in an extended frequency range of 0.4 Hz to higher than 100 kHz. Using adequate vibration exciters to generate sufficient displacement or velocity amplitudes, the upper frequency limits of the laser interferometer methods specified in ISO 16063-11 for frequencies <= 10 kHz can be expanded to 100 kHz and beyond. A comparison of different methods simultaneously used for vibration measurements at 100 kHz will be demonstrated. A statistical analysis of numerous experimental results proves the highest accuracy achievable currently in vibration measurements by specific laser methods, techniques and procedures (i.e. measurement uncertainty 0.05 % at frequencies <= 10 kHz, <= 1 % up to 100 kHz).
New methods and results for quantification of lightning-aircraft electrodynamics
NASA Technical Reports Server (NTRS)
Pitts, Felix L.; Lee, Larry D.; Perala, Rodney A.; Rudolph, Terence H.
1987-01-01
The NASA F-106 collected data on the rates of change of electromagnetic parameters on the aircraft surface during over 700 direct lightning strikes while penetrating thunderstorms at altitudes from 15,000 to 40,000 ft (4,570 to 12,190 m). These in situ measurements provided the basis for the first statistical quantification of the lightning electromagnetic threat to aircraft appropriate for determining indirect lightning effects on aircraft. These data are used to update previous lightning criteria and standards developed over the years from ground-based measurements. The proposed standards will be the first which reflect actual aircraft responses measured at flight altitudes. Nonparametric maximum likelihood estimates of the distribution of the peak electromagnetic rates of change for consideration in the new standards are obtained based on peak recorder data for multiple-strike flights. The linear and nonlinear modeling techniques developed provide means to interpret and understand the direct-strike electromagnetic data acquired on the F-106. The reasonable results obtained with the models, compared with measured responses, provide increased confidence that the models may be credibly applied to other aircraft.
Promoting Robust Design of Diode Lasers for Space: A National Initiative
NASA Technical Reports Server (NTRS)
Tratt, David M.; Amzajerdian, Farzin; Kashem, Nasir B.; Shapiro, Andrew A.; Mense, Allan T.
2007-01-01
The Diode-laser Array Working Group (DAWG) is a national-level consumer/provider forum for discussion of engineering and manufacturing issues which influence the reliability and survivability of high-power broad-area laser diode devices in space, with an emphasis on laser diode arrays (LDAs) for optical pumping of solid-state laser media. The goals of the group are to formulate and validate standardized test and qualification protocols, operational control recommendations, and consensus manufacturing and certification standards. The group is using reliability and lifetime data collected by laser diode manufacturers and the user community to develop a set of standardized guidelines for specifying and qualifying laser diodes for long-duration operation in space, the ultimate goal being to promote an informed U.S. Government investment and procurement strategy for assuring the availability and durability of space-qualified LDAs. The group is also working to establish effective implementation of statistical design techniques at the supplier design, development, and manufacturing levels to help reduce product performance variability and improve product reliability for diodes employed in space applications.
Garbarino, John R.; Struzeski, Tedmund M.
1998-01-01
Inductively coupled plasma-optical emission spectrometry (ICP-OES) and inductively coupled plasma-mass spectrometry (ICP-MS) can be used to determine 26 elements in whole-water digests. Both methods have distinct advantages and disadvantages--ICP-OES is capable of analyzing samples with higher elemental concentrations without dilution; however, ICP-MS is more sensitive and capable of determining much lower elemental concentrations. Both techniques gave accurate results for spike recoveries, digested standard reference-water samples, and whole-water digests. Average spike recoveries in whole-water digests were 100 plus/minus 10 percent, although recoveries for digests with high dissolved-solid concentrations were lower for selected elements by ICP-MS. Results for standard reference-water samples were generally within 1 standard deviation of the most probable values. Statistical analysis of the results from 43 whole-water digests indicated that there was no significant difference among ICP-OES, ICP-MS, and former official methods of analysis for 24 of the 26 elements evaluated.
Statistical Symbolic Execution with Informed Sampling
NASA Technical Reports Server (NTRS)
Filieri, Antonio; Pasareanu, Corina S.; Visser, Willem; Geldenhuys, Jaco
2014-01-01
Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that the informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
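The Bayesian estimation step can be illustrated with a Beta-Bernoulli posterior over the probability of reaching the target event, assuming each sampled path simply reports whether the event was hit. This is a sketch of the statistical component only; it omits symbolic path conditions, informed sampling and the partial exact analysis described above.

```python
from scipy import stats

def bayesian_event_probability(hits, samples, alpha=1.0, beta=1.0):
    """Beta-Bernoulli posterior for the probability of reaching a target event,
    given `hits` successes among `samples` sampled program paths."""
    post = stats.beta(alpha + hits, beta + samples - hits)
    lo, hi = post.interval(0.95)           # 95% credible interval
    return post.mean(), (lo, hi)

# e.g. 37 of 500 sampled paths triggered the assert violation
mean, (lo, hi) = bayesian_event_probability(37, 500)
print(f"P(event) ~ {mean:.3f}, 95% credible interval [{lo:.3f}, {hi:.3f}]")
```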
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Anders T., E-mail: andehans@rm.dk; Lukacova, Slavka; Lassen-Ramshad, Yasmin
2015-01-01
When standard conformal x-ray technique for craniospinal irradiation is used, it is a challenge to achieve satisfactory dose coverage of the target including the area of the cribriform plate, while sparing organs at risk. We present a new intensity-modulated radiation therapy (IMRT), noncoplanar technique, for delivering irradiation to the cranial part and compare it with 3 other techniques and previously published results. A total of 13 patients who had previously received craniospinal irradiation with standard conformal x-ray technique were reviewed. New treatment plans were generated for each patient using the noncoplanar IMRT-based technique, a coplanar IMRT-based technique, and a coplanar volumetric-modulated arc therapy (VMAT) technique. Dosimetry data for all patients were compared with the corresponding data from the conventional treatment plans. The new noncoplanar IMRT technique substantially reduced the mean dose to organs at risk compared with the standard radiation technique. The 2 other coplanar techniques also reduced the mean dose to some of the critical organs. However, this reduction was not as substantial as the reduction obtained by the noncoplanar technique. Furthermore, compared with the standard technique, the IMRT techniques reduced the total calculated radiation dose that was delivered to the normal tissue, whereas the VMAT technique increased this dose. Additionally, the coverage of the target was significantly improved by the noncoplanar IMRT technique. Compared with the standard technique, the coplanar IMRT and the VMAT technique did not improve the coverage of the target significantly. All the new planning techniques increased the number of monitor units (MU) used (the noncoplanar IMRT technique by 99%, the coplanar IMRT technique by 122%, and the VMAT technique by 26%), causing concern for leakage radiation. The noncoplanar IMRT technique covered the target better and decreased doses to organs at risk compared with the other techniques. All the new techniques increased the number of MU compared with the standard technique.
Dort, Jonathan; Trickey, Amber; Paige, John; Schwarz, Erin; Dunkin, Brian
2017-08-01
Practicing surgeons commonly learn new procedures and techniques by attending a "hands-on" course, though trainings are often ineffective at promoting subsequent procedure adoption in practice. We describe implementation of a new program with the SAGES All Things Hernia Hands-On Course, Acquisition of Data for Outcomes and Procedure Transfer (ADOPT), which employs standardized, proven teaching techniques, and 1-year mentorship. Attendee confidence and procedure adoption are compared between standard and ADOPT programs. For the pilot ADOPT course implementation, a hands-on course focusing on abdominal wall hernia repair was chosen. ADOPT participants were recruited among enrollees for the standard Hands-On Hernia Course. Enrollment in ADOPT was capped at 10 participants and limited to a 2:1 student-to-faculty ratio, compared to the standard course 22 participants with a 4:1 student-to-faculty ratio. ADOPT mentors interacted with participants through webinars, phone conferences, and continuous email availability throughout the year. All participants were asked to provide pre- and post-course surveys inquiring about the number of targeted hernia procedures performed and related confidence level. Four of 10 ADOPT participants (40%) and six of 22 standard training participants (27%) returned questionnaires. Over the 3 months following the course, ADOPT participants performed more ventral hernia mesh insertion procedures than standard training participants (median 13 vs. 0.5, p = 0.010) and considerably more total combined procedures (median 26 vs. 7, p = 0.054). Compared to standard training, learners who participated in ADOPT reported greater confidence improvements in employing a components separation via an open approach (p = 0.051), and performing an open transversus abdominis release, though the difference did not achieve statistical significance (p = 0.14). These results suggest that the ADOPT program, with standardized and structured teaching, telementoring, and a longitudinal educational approach, is effective and leads to better transfer of learned skills and procedures to clinical practice.
Gold-standard for computer-assisted morphological sperm analysis.
Chang, Violeta; Garcia, Alejandra; Hitschfeld, Nancy; Härtel, Steffen
2017-04-01
Published algorithms for classification of human sperm heads are based on relatively small image databases that are not open to the public, and thus no direct comparison is available for competing methods. We describe a gold-standard for morphological sperm analysis (SCIAN-MorphoSpermGS), a dataset of sperm head images with expert-classification labels in one of the following classes: normal, tapered, pyriform, small or amorphous. This gold-standard is for evaluating and comparing known techniques and future improvements to present approaches for classification of human sperm heads for semen analysis. Although this paper does not provide a computational tool for morphological sperm analysis, we present a set of experiments for comparing common sperm head description and classification techniques. This classification baseline is aimed to be used as a reference for future improvements to present approaches for human sperm head classification. The gold-standard provides a label for each sperm head, which is achieved by majority voting among experts. The classification baseline compares four supervised learning methods (1-Nearest Neighbor, naive Bayes, decision trees and Support Vector Machine (SVM)) and three shape-based descriptors (Hu moments, Zernike moments and Fourier descriptors), reporting the accuracy and the true positive rate for each experiment. We used Fleiss' Kappa Coefficient to evaluate the inter-expert agreement and Fisher's exact test for inter-expert variability and statistically significant differences between descriptors and learning techniques. Our results confirm the high degree of inter-expert variability in the morphological sperm analysis. Regarding the classification baseline, we show that none of the standard descriptors or classification approaches is best suited to tackling the problem of sperm head classification. We discovered that the correct classification rate was highly variable when trying to discriminate among non-normal sperm heads. By using the Fourier descriptor and SVM, we achieved the best mean correct classification: only 49%. We conclude that the SCIAN-MorphoSpermGS will provide a standard tool for evaluation of characterization and classification approaches for human sperm heads. Indeed, there is a clear need for a specific shape-based descriptor for human sperm heads and a specific classification approach to tackle the problem of high variability within subcategories of abnormal sperm cells. Copyright © 2017 Elsevier Ltd. All rights reserved.
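A minimal sketch of one descriptor/classifier pair from the baseline (Hu moments with an SVM), assuming scikit-image and scikit-learn are available and using synthetic elliptical head masks in place of the SCIAN-MorphoSpermGS images; the class construction and accuracy are purely illustrative.

```python
import numpy as np
from skimage.measure import moments_central, moments_normalized, moments_hu
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def hu_features(binary_head_mask):
    """Log-scaled Hu moment invariants of a segmented head mask (2-D binary array)."""
    mu = moments_central(binary_head_mask.astype(float))
    hu = moments_hu(moments_normalized(mu))
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)   # common log scaling of Hu moments

# illustrative data: 100 synthetic elliptical "head" masks in 5 shape classes
rng = np.random.default_rng(3)
masks, labels = [], []
yy, xx = np.ogrid[:64, :64]
for i in range(100):
    a = 10 + rng.normal(0, 0.5)                  # semi-axis (class-independent)
    b = 10 + 2 * (i % 5) + rng.normal(0, 0.5)    # elongation varies by class
    mask = (((yy - 32) / a) ** 2 + ((xx - 32) / b) ** 2 < 1).astype(float)
    masks.append(mask)
    labels.append(i % 5)

X = np.array([hu_features(m) for m in masks])
scores = cross_val_score(SVC(kernel="rbf", gamma="scale"), X, np.array(labels), cv=5)
print("mean 5-fold CV accuracy:", scores.mean().round(3))
```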
Abu-Tahun, Ibrahim; Al-Rabab'ah, Mohammad A; Hammad, Mohammad; Khraisat, Ameen
2014-12-01
The aim of this study was to investigate the technical quality of root canal treatment provided by the undergraduate students as their first experience in molar endodontics using nickel-titanium (NiTi) files in a crown-down approach compared with stainless steel standard technique. This study was carried out by the fifth year undergraduate students attending peer review sessions as a part of their training programme, using two different questionnaires to assess the overall technical quality and potential problems regarding endodontic complications after root canal preparation with these two techniques. The overall results indicated a statistically significant difference in the performance of the two instrument techniques in difficult cases showing better performance of the NiTi system and mean rotary preparation time (P < 0.001). Under the conditions of this study, novice dental students, using NiTi ProTaper rotary files, were able to prepare root canals faster with more preparation accuracy compared with canals of same teeth prepared with hand instruments. © 2014 Australian Society of Endodontology.
Diagnostic accuracy of chest X-rays acquired using a digital camera for low-cost teleradiology.
Szot, Agnieszka; Jacobson, Francine L; Munn, Samson; Jazayeri, Darius; Nardell, Edward; Harrison, David; Drosten, Ralph; Ohno-Machado, Lucila; Smeaton, Laura M; Fraser, Hamish S F
2004-02-01
Store-and-forward telemedicine, using e-mail to send clinical data and digital images, offers a low-cost alternative for physicians in developing countries to obtain second opinions from specialists. To explore the potential usefulness of this technique, 91 chest X-ray images were photographed using a digital camera and a view box. Four independent readers (three radiologists and one pulmonologist) read two types of digital (JPEG and JPEG2000) and original film images and indicated their confidence in the presence of eight features known to be radiological indicators of tuberculosis (TB). The results were compared to a "gold standard" established by two different radiologists, and assessed using receiver operating characteristic (ROC) curve analysis. There was no statistical difference in the overall performance between the readings from the original films and both types of digital images. The size of JPEG2000 images was approximately 120KB, making this technique feasible for slow internet connections. Our preliminary results show the potential usefulness of this technique particularly for tuberculosis and lung disease, but further studies are required to refine its potential.
Baker, Jay B; Maskell, Kevin F; Matlock, Aaron G; Walsh, Ryan M; Skinner, Carl G
2015-07-01
We compared intubating with a preloaded bougie (PB) against standard bougie technique in terms of success rates, time to successful intubation and provider preference on a cadaveric airway model. In this prospective, crossover study, healthcare providers intubated a cadaver using the PB technique and the standard bougie technique. Participants were randomly assigned to start with either technique. Following standardized training and practice, procedural success and time for each technique was recorded for each participant. Subsequently, participants were asked to rate their perceived ease of intubation on a visual analogue scale of 1 to 10 (1=difficult and 10=easy) and to select which technique they preferred. 47 participants with variable experience intubating were enrolled at an emergency medicine intern airway course. The success rate of all groups for both techniques was equal (95.7%). The range of times to completion for the standard bougie technique was 16.0-70.2 seconds, with a mean time of 29.7 seconds. The range of times to completion for the PB technique was 15.7-110.9 seconds, with a mean time of 29.4 seconds. There was a non-significant difference of 0.3 seconds (95% confidence interval -2.8 to 3.4 seconds) between the two techniques. Participants rated the relative ease of intubation as 7.3/10 for the standard technique and 7.6/10 for the preloaded technique (p=0.53, 95% confidence interval of the difference -0.97 to 0.50). Thirty of 47 participants subjectively preferred the PB technique (p=0.039). There was no significant difference in success or time to intubation between standard bougie and PB techniques. The majority of participants in this study preferred the PB technique. Until a clear and clinically significant difference is found between these techniques, emergency airway operators should feel confident in using the technique with which they are most comfortable.
NASA Astrophysics Data System (ADS)
Doherty, W.; Lightfoot, P. C.; Ames, D. E.
2014-08-01
The effects of polynomial interpolation and internal standardization drift corrections on the inter-measurement (statistical) dispersion of isotope ratios measured with a multi-collector plasma mass spectrometer were investigated using the (analyte, internal standard) isotope systems of (Ni, Cu), (Cu, Ni), (Zn, Cu), (Zn, Ga), (Sm, Eu), (Hf, Re) and (Pb, Tl). The performance of five different correction factors was compared using a (statistical) range based merit function ωm which measures the accuracy and inter-measurement range of the instrument calibration. The frequency distribution of optimal correction factors over two hundred data sets uniformly favored three particular correction factors, while the remaining two correction factors accounted for a small but still significant contribution to the reduction of the inter-measurement dispersion. Application of the merit function is demonstrated using the detection of Cu and Ni isotopic fractionation in laboratory and geologic-scale chemical reactor systems. Solvent extraction (diphenylthiocarbazone for Cu and Pb; dimethylglyoxime for Ni) was used either to isotopically fractionate the metal during extraction using the method of competition or to isolate the Cu and Ni from the sample (sulfides and associated silicates). In the best case, differences in isotopic composition of ± 3 in the fifth significant figure could be routinely and reliably detected for Cu65/63 and Ni61/62. One of the internal standardization drift correction factors uses a least squares estimator to obtain a linear functional relationship between the measured analyte and internal standard isotope ratios. Graphical analysis demonstrates that the points on these graphs are defined by highly non-linear parametric curves and not by two linearly correlated quantities, which is the usual interpretation of these graphs. The success of this particular internal standardization correction factor was found in some cases to be due to a fortuitous, scale dependent, parametric curve effect.
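A generic least-squares internal standardization correction can be sketched as below: regress the measured analyte ratio on the measured internal-standard ratio and evaluate the fit at a reference internal-standard value to remove drift common to both. The function, reference value and synthetic drifting sequence are assumptions for illustration; the paper compares several such correction factors and a range-based merit function not reproduced here.

```python
import numpy as np

def internal_standard_drift_correct(r_analyte, r_internal, r_internal_ref):
    """Least-squares internal standardization: regress the measured analyte ratio
    on the measured internal-standard ratio, then evaluate the fit at the
    internal standard's reference value so that drift common to both cancels."""
    slope, intercept = np.polyfit(r_internal, r_analyte, 1)
    fitted = intercept + slope * r_internal
    return r_analyte - fitted + (intercept + slope * r_internal_ref)

# illustrative drifting measurement sequence (not real isotope data)
rng = np.random.default_rng(4)
t = np.arange(50)
r_int = 2.400 + 1e-4 * t + rng.normal(0, 2e-5, 50)   # internal-standard ratio drifts
r_ana = 0.445 + 2e-5 * t + rng.normal(0, 5e-6, 50)   # analyte ratio drifts in step
corrected = internal_standard_drift_correct(r_ana, r_int, 2.400)
print(r_ana.std(), corrected.std())                  # dispersion shrinks after correction
```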
NASA Astrophysics Data System (ADS)
Pulido-Velazquez, David; Juan Collados-Lara, Antonio; Pardo-Iguzquiza, Eulogio; Jimeno-Saez, Patricia; Fernandez-Chacon, Francisca
2016-04-01
In order to design adaptive strategies to global change we need to assess the future impact of climate change on water resources, which depends on the precipitation and temperature series in the systems. The objective of this work is to generate future climate series in the "Alto Genil" Basin (southeast Spain) for the period 2071-2100 by perturbing the historical series using different statistical methods. For this purpose we use information from regional climate model (RCM) simulations available in two European projects, CORDEX (2013), with a spatial resolution of 12.5 km, and ENSEMBLES (2009), with a spatial resolution of 25 km. The historical climate series used for the period 1971-2000 have been obtained from the Spain02 project (2012), which has the same spatial resolution as the CORDEX project (both use the EURO-CORDEX grid). Two emission scenarios have been considered: the Representative Concentration Pathway (RCP) 8.5 emissions scenario, which is the most unfavorable scenario considered in the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change (IPCC), and the A1B emission scenario of the Fourth Assessment Report (AR4). We use the RCM simulations to create an ensemble of predictions, weighting their information according to their ability to reproduce the main statistics of the historical climatology. A multi-objective analysis has been performed to identify which models are better in terms of goodness of fit to the cited statistics of the historical series. The ensembles of the CORDEX and ENSEMBLES projects have been finally created with nine and four models respectively. These ensemble series have been used to assess the anomalies in mean and standard deviation (differences between the control and future RCM series). A "delta-change" method (Pulido-Velazquez et al., 2011) has been applied to define future series by modifying the historical climate series in accordance with the cited anomalies in mean and standard deviation. A comparison between the results for scenario A1B and RCP8.5 has been performed. The reductions obtained for the mean rainfall with respect to the historical series are 24.2 % and 24.4 % respectively, and the increments in temperature are 46.3 % and 31.2 % respectively. A sensitivity analysis of the results to the statistical downscaling technique employed has been performed. The following techniques have been explored: the perturbation or "delta-change" method; the regression method (a regression function relating the RCM and the historical information is used to generate future climate series for the fixed period); quantile mapping (it attempts to find a transformation function that relates the observed and modeled variables so that the transformed variable has a statistical distribution equal to that of the observed variable); and stochastic weather generators (SWG), which can be uni-site or multi-site (the latter considering the spatial correlation of the climatic series). A comparative analysis of these techniques has been performed, identifying the advantages and disadvantages of each of them. Acknowledgments: This research has been partially supported by the GESINHIMPADAPT project (CGL2013-48424-C2-2-R) with Spanish MINECO funds. We would also like to thank the Spain02, ENSEMBLES and CORDEX projects for the data provided for this study.
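A minimal sketch of the delta-change perturbation, assuming annual or monthly series held as NumPy arrays: shift the historical mean by the control-to-future anomaly and rescale the deviations by the ratio of standard deviations. In practice the correction is usually applied month by month; the synthetic series below are illustrative only.

```python
import numpy as np

def delta_change(historical, rcm_control, rcm_future):
    """Perturb a historical climate series so that its mean and standard deviation
    shift by the anomalies between the RCM control and future runs."""
    d_mean = rcm_future.mean() - rcm_control.mean()
    ratio_std = rcm_future.std() / rcm_control.std()
    return historical.mean() + d_mean + (historical - historical.mean()) * ratio_std

rng = np.random.default_rng(6)
hist = rng.normal(15.0, 2.0, 360)    # e.g. 30 years of monthly temperature (deg C)
ctrl = rng.normal(15.2, 2.1, 360)    # RCM control run
futu = rng.normal(17.8, 2.6, 360)    # RCM future run (strong-warming scenario)
future_series = delta_change(hist, ctrl, futu)
print(future_series.mean() - hist.mean(), future_series.std() / hist.std())
```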
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2013 CFR
2013-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2014 CFR
2014-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2010 CFR
2010-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2011 CFR
2011-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
25 CFR 542.19 - What are the minimum internal control standards for accounting?
Code of Federal Regulations, 2012 CFR
2012-04-01
...; (3) Individual and statistical game records to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop by each table game, and to reflect statistical drop, statistical win, and the percentage of statistical win to statistical drop for each type of table game, by...
Earth Observation System Flight Dynamics System Covariance Realism
NASA Technical Reports Server (NTRS)
Zaidi, Waqar H.; Tracewell, David
2016-01-01
This presentation applies a covariance realism technique to the National Aeronautics and Space Administration (NASA) Earth Observation System (EOS) Aqua and Aura spacecraft based on inferential statistics. The technique consists of three parts: collection and calculation of definitive state estimates through orbit determination, calculation of covariance realism test statistics at each covariance propagation point, and proper assessment of those test statistics.
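Covariance realism is commonly checked by comparing squared Mahalanobis distances of state errors against a chi-squared law; the sketch below shows that standard check with a Kolmogorov-Smirnov test on synthetic errors. It is an illustration of the general idea, not necessarily the exact test statistics used for the EOS Aqua and Aura assessment.

```python
import numpy as np
from scipy import stats

def covariance_realism_check(errors, covariances):
    """Squared Mahalanobis distance of each state error w.r.t. its propagated
    covariance; if the covariance is realistic these follow a chi-squared law
    with dim degrees of freedom, which is tested here with a KS test."""
    m2 = np.array([e @ np.linalg.solve(P, e) for e, P in zip(errors, covariances)])
    dim = errors.shape[1]
    ks_stat, p_value = stats.kstest(m2, "chi2", args=(dim,))
    return m2, ks_stat, p_value

rng = np.random.default_rng(7)
P = 0.04 * np.eye(3)                                  # 3-D position covariance (km^2)
errs = rng.multivariate_normal(np.zeros(3), P, size=200)
covs = np.repeat(P[None, :, :], 200, axis=0)
m2, ks, p = covariance_realism_check(errs, covs)
print(f"KS = {ks:.3f}, p = {p:.3f}")                  # large p: covariance looks realistic
```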
The Shock and Vibration Digest. Volume 16, Number 1
1984-01-01
investigation of the measurement of frequency band average loss factors of structural components for use in the statistical energy analysis method of ... stiffness. Matrix methods. Key Words: Finite element technique; Statistical energy analysis; Experimental techniques; Framed structures; Computer programs. In order to further understand the practical application of the statistical energy analysis, a two-section plate-like frame structure is
The Standard Deviation of Launch Vehicle Environments
NASA Technical Reports Server (NTRS)
Yunis, Isam
2005-01-01
Statistical analysis is used in the development of the launch vehicle environments of acoustics, vibrations, and shock. The standard deviation of these environments is critical to accurate statistical extrema. However, often very little data exists to define the standard deviation, and it is better to use a typical standard deviation than one derived from a few measurements. This paper uses Space Shuttle and expendable launch vehicle flight data to define a typical standard deviation for acoustics and vibrations. The results suggest that 3 dB is a conservative and reasonable standard deviation for the source environment and the payload environment.
NASA Technical Reports Server (NTRS)
Aires, Filipe; Rossow, William B.; Chedin, Alain; Hansen, James E. (Technical Monitor)
2000-01-01
The use of the Principal Component Analysis technique for the analysis of geophysical time series has been questioned, in particular for its tendency to extract components that mix several physical phenomena even when the signal is just their linear sum. We demonstrate with a data simulation experiment that Independent Component Analysis, a recently developed technique, is able to solve this problem. This new technique requires statistical independence of the components, a stronger constraint that uses higher-order statistics, instead of the classical decorrelation, a weaker constraint that uses only second-order statistics. Furthermore, ICA does not require additional a priori information such as the localization constraint used in Rotational Techniques.
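The difference between the two constraints is easy to demonstrate with scikit-learn's FastICA versus PCA on a linear mixture of two independent signals (a sketch under the assumption that scikit-learn is available; it is not the geophysical time-series experiment itself).

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# two independent source signals and a linear mixture of them
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]     # sinusoid + square wave
mixing = np.array([[1.0, 0.5], [0.4, 1.0]])
mixed = sources @ mixing.T + 0.02 * np.random.default_rng(8).normal(size=(2000, 2))

pca_comps = PCA(n_components=2).fit_transform(mixed)
ica_comps = FastICA(n_components=2, random_state=0).fit_transform(mixed)

def best_abs_corr(est, src):
    """Best absolute correlation of each estimated component with any true source."""
    c = np.corrcoef(est.T, src.T)[:2, 2:]
    return np.abs(c).max(axis=1)

print("PCA component-source correlations:", best_abs_corr(pca_comps, sources).round(2))
print("ICA component-source correlations:", best_abs_corr(ica_comps, sources).round(2))
```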
Annealing of Co-Cr dental alloy: effects on nanostructure and Rockwell hardness.
Ayyıldız, Simel; Soylu, Elif Hilal; Ide, Semra; Kılıç, Selim; Sipahi, Cumhur; Pişkin, Bulent; Gökçe, Hasan Suat
2013-11-01
The aim of the study was to evaluate the effect of annealing on the nanostructure and hardness of Co-Cr metal ceramic samples that were fabricated with a direct metal laser sintering (DMLS) technique. Five groups of Co-Cr dental alloy samples were manufactured in a rectangular form measuring 4 × 2 × 2 mm. Samples fabricated by a conventional casting technique (Group I) and prefabricated milling blanks (Group II) were examined as conventional technique groups. The DMLS samples were randomly divided into three groups: not annealed (Group III), annealed in argon atmosphere (Group IV), or annealed in oxygen atmosphere (Group V). The nanostructure was examined with the small-angle X-ray scattering method. The Rockwell hardness test was used to measure the hardness changes in each group; the means and standard deviations were statistically analyzed by one-way ANOVA for comparison of continuous variables, with Tukey's HSD test used for post hoc analysis. P values of <.05 were accepted as statistically significant. The general nanostructures of the samples were composed of small spherical entities stacked atop one another in dendritic form. All groups also displayed different hardness values depending on the manufacturing technique. The annealing procedure and environment directly affected both the nanostructure and hardness of the Co-Cr alloy. Group III exhibited a non-homogeneous structure and increased hardness (48.16 ± 3.02 HRC) because the annealing process was incomplete and the inner stress was not relieved. Annealing in argon atmosphere (Group IV) not only relieved the inner stresses but also decreased the hardness (27.40 ± 3.98 HRC). The fitting results showed that Group IV was the most homogeneous product, as it had the minimum bilayer thickness (7.11 Å). After manufacturing with the DMLS technique, annealing in argon atmosphere is an essential process for Co-Cr metal ceramic substructures. Dentists should be familiar with the materials used clinically for prosthodontic treatments.
Yang, X; Le, D; Zhang, Y L; Liang, L Z; Yang, G; Hu, W J
2016-10-18
To explore a crown form classification method for the upper central incisor, based on a standardized photography technique, that is more objective and scientific than the traditional classification method, and to analyze the relationship between the crown form of upper central incisors and papilla filling in periodontally healthy Chinese Han-nationality youth. In the study, 180 periodontally healthy Chinese youth (75 males and 105 females) aged 20-30 (24.3±4.5) years were included. With the standardized upper central incisor photography technique, pictures of 360 upper central incisors were obtained. Each tooth was classified as triangular, ovoid or square by 13 experienced specialists in prosthodontics independently, and the final classification was decided by the majority of evaluators in order to ensure objectivity. The standardized digital photo was also used to evaluate the gingival papilla filling situation. The papilla filling result was recorded as present or absent according to naked eye observation. The papilla filling rates of the different crown forms were analyzed. Statistical analyses were performed with SPSS 19.0. The proportions of triangular, ovoid and square forms of the upper central incisor in Chinese Han-nationality youth were 31.4% (113/360), 37.2% (134/360) and 31.4% (113/360), respectively, and no statistical difference was found between the males and females. The average κ value between each pair of evaluators was 0.381. The average κ value rose to 0.563 when each evaluator was compared with the final classification result. In the study, 24 upper central incisors without contact were excluded, and the papilla filling rates of triangular, ovoid and square crowns were 56.4% (62/110), 69.6% (87/125) and 76.2% (77/101), respectively. The papilla filling rate of the square form was higher (P=0.007). The distribution of clinical crown forms of the upper central incisor in Chinese Han-nationality youth was obtained. Compared with the triangular form, the square form was found to favor a gingival papilla that fills the interproximal embrasure space. The consistency of the present classification method for the upper central incisor is not satisfying, which indicates that a new classification method, more scientific and objective than the present one, is still needed.
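Pairwise agreement of the kind reported (κ between evaluators and against the final classification) can be computed with scikit-learn's cohen_kappa_score; the labels below are hypothetical, generated so that each evaluator agrees with the consensus about 70% of the time, and are not the study's data.

```python
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(12)
classes = np.array(["triangular", "ovoid", "square"])
consensus = rng.choice(classes, size=360)                # hypothetical final labels
# three hypothetical evaluators agreeing with the consensus ~70% of the time
evaluators = [np.where(rng.random(360) < 0.7, consensus, rng.choice(classes, 360))
              for _ in range(3)]

pairwise = [cohen_kappa_score(a, b) for a, b in combinations(evaluators, 2)]
vs_final = [cohen_kappa_score(e, consensus) for e in evaluators]
print("mean pairwise kappa:", round(float(np.mean(pairwise)), 3))
print("mean kappa vs. final classification:", round(float(np.mean(vs_final)), 3))
```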
NASA Astrophysics Data System (ADS)
Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad
2018-02-01
The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of the CVEs follows a Chi-squared distribution. Furthermore, an a posteriori noise variance factor is derived from the quadratic form of the CVEs. In order to detect blunders in the observations, the estimated standardized CVE is proposed as a test statistic that can be applied whether the noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removal of outliers, the root mean square (RMS) of the CVEs and the estimated noise standard deviation are reduced by about 51 and 59%, respectively. In addition, the RMS of the LSC prediction error at data points and the RMS of the estimated noise of the observations are decreased by 39 and 67%, respectively. However, the RMS of the LSC prediction error on a regular grid of interpolation points covering the area is only reduced by about 4%, which is a consequence of the sparse distribution of data points for this case study. The influence of gross errors on the LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, the RMS of this type of error is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of the dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using the restricted maximum-likelihood method via the Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. The advantage of the proposed method is the reduction in estimated noise levels for those groups with fewer noisy data points.
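For an ordinary linear least-squares fit, the same 'direct' idea takes a familiar closed form: the leave-one-out error is the residual divided by one minus the corresponding hat-matrix diagonal. The sketch below shows that generic linear-model version and verifies it against brute-force refitting; the paper derives the analogous formula for least squares collocation, which is not reproduced here.

```python
import numpy as np

def direct_loo_errors(A, y):
    """Leave-one-out cross-validation errors of a linear least-squares fit,
    computed directly from the hat matrix instead of refitting n times."""
    H = A @ np.linalg.solve(A.T @ A, A.T)        # hat matrix
    resid = y - H @ y                            # ordinary residuals
    return resid / (1.0 - np.diag(H))            # CVE_i = e_i / (1 - h_ii)

rng = np.random.default_rng(9)
A = np.c_[np.ones(30), rng.normal(size=(30, 2))]
y = A @ np.array([1.0, 2.0, -0.5]) + rng.normal(0, 0.3, 30)

cve_direct = direct_loo_errors(A, y)
# brute-force check: refit without each observation in turn
cve_slow = np.array([
    y[i] - A[i] @ np.linalg.lstsq(np.delete(A, i, 0), np.delete(y, i), rcond=None)[0]
    for i in range(len(y))
])
print(np.allclose(cve_direct, cve_slow))         # True
```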
An intercomparison of approaches for improving operational seasonal streamflow forecasts
NASA Astrophysics Data System (ADS)
Mendoza, Pablo A.; Wood, Andrew W.; Clark, Elizabeth; Rothwell, Eric; Clark, Martyn P.; Nijssen, Bart; Brekke, Levi D.; Arnold, Jeffrey R.
2017-07-01
For much of the last century, forecasting centers around the world have offered seasonal streamflow predictions to support water management. Recent work suggests that the two major avenues to advance seasonal predictability are improvements in the estimation of initial hydrologic conditions (IHCs) and the incorporation of climate information. This study investigates the marginal benefits of a variety of methods using IHCs and/or climate information, focusing on seasonal water supply forecasts (WSFs) in five case study watersheds located in the US Pacific Northwest region. We specify two benchmark methods that mimic standard operational approaches - statistical regression against IHCs and model-based ensemble streamflow prediction (ESP) - and then systematically intercompare WSFs across a range of lead times. Additional methods include (i) statistical techniques using climate information either from standard indices or from climate reanalysis variables and (ii) several hybrid/hierarchical approaches harnessing both land surface and climate predictability. In basins where atmospheric teleconnection signals are strong, and when watershed predictability is low, climate information alone provides considerable improvements. For those basins showing weak teleconnections, custom predictors from reanalysis fields were more effective in forecast skill than standard climate indices. ESP predictions tended to have high correlation skill but greater bias compared to other methods, and climate predictors failed to substantially improve these deficiencies within a trace weighting framework. Lower complexity techniques were competitive with more complex methods, and the hierarchical expert regression approach introduced here (hierarchical ensemble streamflow prediction - HESP) provided a robust alternative for skillful and reliable water supply forecasts at all initialization times. Three key findings from this effort are (1) objective approaches supporting methodologically consistent hindcasts open the door to a broad range of beneficial forecasting strategies; (2) the use of climate predictors can add to the seasonal forecast skill available from IHCs; and (3) sample size limitations must be handled rigorously to avoid over-trained forecast solutions. Overall, the results suggest that despite a rich, long heritage of operational use, there remain a number of compelling opportunities to improve the skill and value of seasonal streamflow predictions.
ERIC Educational Resources Information Center
Raymond, Mark R.; Clauser, Brian E.; Furman, Gail E.
2010-01-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary…
King, Ashley B; Klausner, Adam P; Johnson, Corey M; Moore, Blake W; Wilson, Steven K; Grob, B Mayer
2011-10-01
The challenge of resident education in urologic surgery programs is to overcome disparity imparted by diverse patient populations, limited training times, and inequalities in the availability of expert surgical educators. Specifically, in the area of prosthetic urology, only a small proportion of programs have full-time faculty available to train residents in this discipline. To examine whether a new model using yearly training sessions from a recognized expert can establish a successful penile prosthetics program and result in better outcomes, higher case volumes, and willingness to perform more complex surgeries. A recognized expert conducted one to two operative training sessions yearly to teach standardized technique for penile prosthetics to residents. Each session consisted of three to four operative cases performed under the direct supervision of the expert. Retrospective data were collected from all penile prosthetic operations before (February, 2000 to June, 2004: N = 44) and after (July, 2004 to October, 2007: N = 79) implementation of these sessions. Outcomes reviewed included patient age, race, medical comorbidities, operative time, estimated blood loss, type of prosthesis, operative approach, drain usage, length of stay, and complications including revision/explantation rates. Statistical analysis was performed using Student's t-tests, Fisher's tests, and survival curves using the Kaplan-Meier technique (P value ≤ 0.05 to define statistical significance). Patient characteristics were not significantly different pre- vs. post-training. Operative time and estimated blood loss significantly decreased. Inflatable implants increased from 19/44 (43.2%, pre-training) to 69/79 (87.3%, post-training) (P < 0.01). Operations per year increased from 9.96 (pre-training) to 24 (post-training) (P < 0.01). Revision/explantation occurred in 11/44 patients (25%, pre-training) vs. 7/79 (8.9%, post-training) (P < 0.05). These data demonstrate that yearly sessions with a recognized expert can improve surgical outcomes, type, and volume of implants and can reduce explantation/revision rates. This represents an excellent model for improved training of urologic residents in penile prosthetics surgery. © 2011 International Society for Sexual Medicine.
Sun, Jihang; Yu, Tong; Liu, Jinrong; Duan, Xiaomin; Hu, Di; Liu, Yong; Peng, Yun
2017-03-16
Model-based iterative reconstruction (MBIR) is a promising reconstruction method that could improve CT image quality at low radiation dose. The purpose of this study was to demonstrate the advantage of using MBIR for noise reduction and image quality improvement in low-dose chest CT for children with necrotizing pneumonia, over the adaptive statistical iterative reconstruction (ASIR) and conventional filtered back-projection (FBP) techniques. Twenty-six children with necrotizing pneumonia (aged 2 months to 11 years) who underwent standard-of-care low-dose CT scans were included. Thin-slice (0.625 mm) images were retrospectively reconstructed using MBIR, ASIR and conventional FBP techniques. Image noise and signal-to-noise ratio (SNR) for these thin-slice images were measured and statistically analyzed using ANOVA. Two radiologists independently analyzed the image quality for detecting necrotic lesions, and results were compared using Friedman's test. Radiation dose for the overall patient population was 0.59 mSv. There was a significant improvement in the high-density and low-contrast resolution of the MBIR reconstruction, resulting in the detection of more necrotic lesions and better lesion identification (38 lesions in 0.625 mm MBIR images vs. 29 lesions in 0.625 mm FBP images). The subjective display scores (mean ± standard deviation) for the detection of necrotic lesions were 5.0 ± 0.0, 2.8 ± 0.4 and 2.5 ± 0.5 with MBIR, ASIR and FBP reconstruction, respectively, and the respective objective image noise was 13.9 ± 4.0 HU, 24.9 ± 6.6 HU and 33.8 ± 8.7 HU. The image noise decreased by 58.9% and 26.3% in MBIR images compared with FBP and ASIR images, respectively. Additionally, the SNR of MBIR images was significantly higher than that of FBP and ASIR images. In children with necrotizing pneumonia, the quality of low-dose chest CT images was significantly improved by the MBIR technique compared with ASIR and FBP reconstruction, providing a more confident and accurate diagnosis of necrotizing pneumonia.
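The noise and SNR comparison described above can be illustrated with a minimal sketch: noise is taken as the standard deviation of HU values in a homogeneous region of interest, SNR as mean divided by noise, and the three reconstructions are compared with a one-way ANOVA. The ROI values below are synthetic stand-ins, not the study's measurements.

```python
# Sketch of the ROI-based comparison: noise = SD of HU values in a homogeneous ROI,
# SNR = mean/SD, with a one-way ANOVA across the three reconstructions. Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
roi = {                                    # HU samples from the same ROI, per reconstruction
    "MBIR": rng.normal(40, 14, 200),
    "ASIR": rng.normal(40, 25, 200),
    "FBP":  rng.normal(40, 34, 200),
}
for name, values in roi.items():
    noise = values.std(ddof=1)
    snr = values.mean() / noise
    print(f"{name}: noise = {noise:.1f} HU, SNR = {snr:.2f}")

f_stat, p_value = stats.f_oneway(roi["MBIR"], roi["ASIR"], roi["FBP"])
print(f"one-way ANOVA across reconstructions: F = {f_stat:.1f}, p = {p_value:.3g}")
```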
St. Pierre, Tim G.; House, Michael J.; Bangma, Sander J.; Pang, Wenjie; Bathgate, Andrew; Gan, Eng K.; Ayonrinde, Oyekoya T.; Bhathal, Prithi S.; Clouston, Andrew; Olynyk, John K.; Adams, Leon A.
2016-01-01
Background and Aims Validation of non-invasive methods of liver fat quantification requires a reference standard. However, using standard histopathology assessment of liver biopsies is problematical because of poor repeatability. We aimed to assess a stereological method of measuring volumetric liver fat fraction (VLFF) in liver biopsies and to use the method to validate a magnetic resonance imaging method for measurement of VLFF. Methods VLFFs were measured in 59 subjects (1) by three independent analysts using a stereological point counting technique combined with the Delesse principle on liver biopsy histological sections and (2) by three independent analysts using the HepaFat-Scan® technique on magnetic resonance images of the liver. Bland Altman statistics and intraclass correlation (IC) were used to assess the repeatability of each method and the bias between the methods of liver fat fraction measurement. Results Inter-analyst repeatability coefficients for the stereology and HepaFat-Scan® methods were 8.2 (95% CI 7.7–8.8)% and 2.4 (95% CI 2.2–2.5)% VLFF respectively. IC coefficients were 0.86 (95% CI 0.69–0.93) and 0.990 (95% CI 0.985–0.994) respectively. Small biases (≤3.4%) were observable between two pairs of analysts using stereology while no significant biases were observable between any of the three pairs of analysts using HepaFat-Scan®. A bias of 1.4±0.5% VLFF was observed between the HepaFat-Scan® method and the stereological method. Conclusions Repeatability of the stereological method is superior to the previously reported performance of assessment of hepatic steatosis by histopathologists and is a suitable reference standard for validating non-invasive methods of measurement of VLFF. PMID:27501242
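A minimal sketch of the Bland-Altman repeatability analysis used above, assuming paired VLFF measurements by two analysts on the same biopsies; the values are synthetic, and the HepaFat-Scan® algorithm itself is not reproduced here.

```python
# Illustrative Bland-Altman analysis for two analysts measuring VLFF on the same
# specimens (synthetic values). Bias is the mean paired difference; the agreement
# half-width is taken as 1.96 * SD of the paired differences.
import numpy as np

rng = np.random.default_rng(2)
true_vlff = rng.uniform(1, 30, 59)                 # % fat in 59 subjects, synthetic
analyst_a = true_vlff + rng.normal(0.0, 1.2, 59)
analyst_b = true_vlff + rng.normal(0.5, 1.2, 59)   # small systematic offset assumed

diff = analyst_a - analyst_b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                      # limits-of-agreement half-width
print(f"bias = {bias:.2f}% VLFF, 95% limits of agreement = +/-{loa:.2f}% VLFF")
```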
NASA Astrophysics Data System (ADS)
Doelling, David R.; Bhatt, Rajendra; Haney, Conor O.; Gopalan, Arun; Scarino, Benjamin R.
2017-09-01
The new 3rd generation geostationary (GEO) imagers will have many of the same NPP-VIIRS imager spectral bands, thereby offering the opportunity to apply the VIIRS cloud, aerosol, and land use retrieval algorithms on the new GEO imager measurements. Climate-quality retrievals require multi-channel calibrated radiances that are stable over time. The deep convective cloud calibration technique (DCCT) is a large-ensemble statistical technique that assumes that the DCC reflectance is stable over time. Because DCC are found in sufficient numbers across all GEO domains, they provide a uniform calibration stability evaluation across the GEO constellation. The baseline DCCT has been successful in calibrating visible and near-infrared channels. However, for shortwave infrared (SWIR) channels the DCCT is not as effective at monitoring radiometric stability. In this paper, the DCCT was optimized as a function of wavelength. For SWIR bands, the greatest reduction of the DCC response trend standard error was achieved through deseasonalization. This is effective because the DCC reflectance exhibits small regional seasonal cycles that can be characterized on a monthly basis. On the other hand, the inter-annual variability in DCC response was found to be extremely small. The Met-9 0.65-μm channel DCC response was found to have a 3% seasonal cycle. Deseasonalization reduced the trend standard error from 1% to 0.4%. For the NPP-VIIRS SWIR bands, deseasonalization reduced the trend standard error by more than half. All VIIRS SWIR band trend standard errors were less than 1%. The DCCT should be able to monitor the stability of all GEO imager solar reflective bands across the tropical domain with the same uniform accuracy.
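The deseasonalization step can be sketched as follows: subtract a mean monthly climatology from a monthly DCC response series and compare the standard error of a fitted linear trend before and after. The series below is synthetic, with a seasonal cycle and noise level chosen only for illustration.

```python
# Sketch of deseasonalization: remove the mean monthly DCC response (climatology)
# from a monthly time series, then compare the standard error of a fitted linear
# trend before and after. The series is synthetic.
import numpy as np
from scipy import stats

months = np.arange(120)                                    # 10 years of monthly DCC responses
seasonal = 0.015 * np.sin(2 * np.pi * months / 12.0)       # ~3% peak-to-peak seasonal cycle
rng = np.random.default_rng(3)
response = 1.0 + seasonal + rng.normal(0, 0.004, months.size)

def trend_stderr(y):
    return stats.linregress(months, y).stderr               # standard error of the slope

climatology = np.array([response[months % 12 == m].mean() for m in range(12)])
deseasonalized = response - climatology[months % 12] + response.mean()

print(f"trend std error, raw:            {trend_stderr(response):.2e}")
print(f"trend std error, deseasonalized: {trend_stderr(deseasonalized):.2e}")
```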
Greenfeld, Max; van de Meent, Jan-Willem; Pavlichin, Dmitri S; Mabuchi, Hideo; Wiggins, Chris H; Gonzalez, Ruben L; Herschlag, Daniel
2015-01-16
Single-molecule techniques have emerged as incisive approaches for addressing a wide range of questions arising in contemporary biological research [Trends Biochem Sci 38:30-37, 2013; Nat Rev Genet 14:9-22, 2013; Curr Opin Struct Biol 2014, 28C:112-121; Annu Rev Biophys 43:19-39, 2014]. The analysis and interpretation of raw single-molecule data benefits greatly from the ongoing development of sophisticated statistical analysis tools that enable accurate inference at the low signal-to-noise ratios frequently associated with these measurements. While a number of groups have released analysis toolkits as open source software [J Phys Chem B 114:5386-5403, 2010; Biophys J 79:1915-1927, 2000; Biophys J 91:1941-1951, 2006; Biophys J 79:1928-1944, 2000; Biophys J 86:4015-4029, 2004; Biophys J 97:3196-3205, 2009; PLoS One 7:e30024, 2012; BMC Bioinformatics 288 11(8):S2, 2010; Biophys J 106:1327-1337, 2014; Proc Int Conf Mach Learn 28:361-369, 2013], it remains difficult to compare analysis for experiments performed in different labs due to a lack of standardization. Here we propose a standardized single-molecule dataset (SMD) file format. SMD is designed to accommodate a wide variety of computer programming languages, single-molecule techniques, and analysis strategies. To facilitate adoption of this format we have made two existing data analysis packages that are used for single-molecule analysis compatible with this format. Adoption of a common, standard data file format for sharing raw single-molecule data and analysis outcomes is a critical step for the emerging and powerful single-molecule field, which will benefit both sophisticated users and non-specialists by allowing standardized, transparent, and reproducible analysis practices.
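To make the idea of a standardized container concrete, here is a minimal JSON-style sketch of a single-molecule dataset holding per-trace time series plus metadata; the field names are assumptions for illustration and do not reproduce the published SMD schema.

```python
# Minimal illustration of a standardized single-molecule dataset: a self-describing
# container with per-trace time series and metadata, written to and read from JSON.
# Field names here are assumptions, not the published SMD specification.
import json

dataset = {
    "type": "smFRET",
    "attributes": {"instrument": "TIRF-1", "frame_rate_hz": 10.0},   # hypothetical metadata
    "traces": [
        {
            "id": "trace_0001",
            "attributes": {"molecule": "ribosome-tRNA"},
            "values": {
                "time_s":   [0.0, 0.1, 0.2, 0.3],
                "donor":    [520, 515, 300, 290],
                "acceptor": [180, 185, 400, 410],
            },
        }
    ],
}

with open("example_smd.json", "w") as fh:
    json.dump(dataset, fh, indent=2)

with open("example_smd.json") as fh:
    loaded = json.load(fh)
print(len(loaded["traces"]), "trace(s) loaded")
```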
Kılınçer, Abidin; Akpınar, Erhan; Erbil, Bülent; Ünal, Emre; Karaosmanoğlu, Ali Devrim; Kaynaroğlu, Volkan; Akata, Deniz; Özmen, Mustafa
2017-08-01
To determine the diagnostic accuracy of abdominal CT with compression to the right lower quadrant (RLQ) in adults with acute appendicitis. 168 patients (age range, 18-78 years) were included who underwent contrast-enhanced CT for suspected appendicitis performed either using compression to the RLQ (n = 71) or a standard protocol (n = 97). Outer diameter of the appendix, appendiceal wall thickening, luminal content and associated findings were evaluated in each patient. Kruskal-Wallis, Fisher's and Pearson's chi-squared tests were used for statistical analysis. There was no significant difference in the mean outer diameter (MOD) between compression CT scans (10.6 ± 1.9 mm) and standard protocol (11.2 ± 2.3 mm) in patients with acute appendicitis (P = 1). MOD was significantly lower in the compression group (5.2 ± 0.8 mm) compared to the standard protocol (6.5 ± 1.1 mm) (P < 0.01) in patients without appendicitis. A cut-off value of 6.75 mm for the outer diameter of the appendix was found to be 100% sensitive in the diagnosis of acute appendicitis for both groups. The specificity was higher for compression CT technique (67.7 vs. 94.9%). Normal appendix diameter was significantly smaller in the compression-CT group compared to standard-CT group, increasing diagnostic accuracy of abdominal compression CT. • Normal appendix diameter is significantly smaller in compression CT. • Compression could force contrast material to flow through the appendiceal lumen. • Compression CT may be a CT counterpart of graded compression US.
Using radar imagery for crop discrimination: a statistical and conditional probability study
Haralick, R.M.; Caspall, F.; Simonett, D.S.
1970-01-01
A number of the constraints with which remote sensing must contend in crop studies are outlined. They include sensor, identification accuracy, and congruencing constraints; the nature of the answers demanded of the sensor system; and the complex temporal variances of crops in large areas. Attention is then focused on several methods which may be used in the statistical analysis of multidimensional remote sensing data. Crop discrimination for radar K-band imagery is investigated by three methods. The first one uses a Bayes decision rule, the second a nearest-neighbor spatial conditional probability approach, and the third the standard statistical techniques of cluster analysis and principal axes representation. Results indicate that crop type and percent of cover significantly affect the strength of the radar return signal. Sugar beets, corn, and very bare ground are easily distinguishable; sorghum, alfalfa, and young wheat are harder to distinguish. Distinguishability will be improved if the imagery is examined in time sequence so that changes between times of planting, maturation, and harvest provide additional discriminant tools. A comparison between radar and photography indicates that radar performed surprisingly well in crop discrimination in western Kansas and warrants further study.
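A minimal sketch of the first method named above, a Bayes decision rule with Gaussian class-conditional densities of radar return strength; the crop classes and return statistics are synthetic stand-ins rather than the original K-band data.

```python
# Sketch of a Bayes decision rule for crop discrimination: each class is described
# by a Gaussian density of radar return strength, and a pixel is assigned to the
# class with the largest posterior (prior * likelihood). Numbers are illustrative.
from scipy.stats import norm

train = {                                     # (mean, sd) of return strength per class
    "sugar beets": (0.70, 0.05),
    "corn":        (0.55, 0.06),
    "bare ground": (0.30, 0.04),
}
priors = {c: 1 / len(train) for c in train}   # equal priors assumed

def classify(x):
    scores = {c: priors[c] * norm.pdf(x, mu, sd) for c, (mu, sd) in train.items()}
    return max(scores, key=scores.get)

for x in (0.33, 0.58, 0.72):
    print(f"return {x:.2f} -> {classify(x)}")
```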
Multiscale image processing and antiscatter grids in digital radiography.
Lo, Winnie Y; Hornof, William J; Zwingenberger, Allison L; Robertson, Ian D
2009-01-01
Scatter radiation is a source of noise and results in decreased signal-to-noise ratio and thus decreased image quality in digital radiography. We determined subjectively whether a digitally processed image made without a grid would be of similar quality to an image made with a grid but without image processing. Additionally, the effects of exposure dose and of using a grid with digital radiography on overall image quality were studied. Thoracic and abdominal radiographs of five dogs of various sizes were made. Four acquisition techniques were included: (1) with a grid, standard exposure dose, digital image processing; (2) without a grid, standard exposure dose, digital image processing; (3) without a grid, half the exposure dose, digital image processing; and (4) with a grid, standard exposure dose, no digital image processing (to mimic a film-screen radiograph). Full-size radiographs as well as magnified images of specific anatomic regions were generated. Nine reviewers rated the overall image quality subjectively using a five-point scale. All digitally processed radiographs had higher overall scores than nondigitally processed radiographs regardless of patient size, exposure dose, or use of a grid. The images made at half the exposure dose had a slightly lower quality than those made at full dose, but this was only statistically significant in magnified images. Using a grid with digital image processing led to a slight but statistically significant increase in overall quality when compared with digitally processed images made without a grid, but whether this increase in quality is clinically significant is unknown.
Nett, Michael; Avelar, Rui; Sheehan, Michael; Cushner, Fred
2011-03-01
Standard medial parapatellar arthrotomies of 10 cadaveric knees were closed with either conventional interrupted absorbable sutures (control group, mean of 19.4 sutures) or a single running knotless bidirectional barbed absorbable suture (experimental group). Water-tightness of the arthrotomy closure was compared by simulating a tense hemarthrosis and measuring arthrotomy leakage over 3 minutes. Mean total leakage was 356 mL and 89 mL in the control and experimental groups, respectively (p = 0.027). Using 8 of the 10 knees (4 closed with control sutures, 4 closed with an experimental suture), a tense hemarthrosis was again created, and iatrogenic suture rupture was performed: a proximal suture was cut at 1 minute; a distal suture was cut at 2 minutes. The impact of suture rupture was compared by measuring total arthrotomy leakage over 3 minutes. Mean total leakage was 601 mL and 174 mL in the control and experimental groups, respectively (p = 0.3). In summary, using a cadaveric model, arthrotomies closed with a single bidirectional barbed running suture were statistically significantly more water-tight than those closed using a standard interrupted technique. The sample size was insufficient to determine whether the two closure techniques differed in leakage volume after suture rupture.
Checa-Moreno, R; Manzano, E; Mirón, G; Capitan-Vallvey, L F
2008-05-15
In this paper, we compare three commonly used strategies, namely amino acid ratios (Aa ratios), two-dimensional ratio plots (2D-Plot) and the statistical correlation factor (SCF), with a classification technique, soft independent modelling of class analogy (SIMCA), for identifying protein binders present in old artwork samples. To do this, we used a natural standard collection of proteinaceous binders prepared in our laboratory using old recipes, together with eleven samples from Cultural Heritage objects, such as mural and easel paintings, manuscripts and polychrome sculptures from the 15th-18th centuries. Protein binder samples were hydrolyzed and their constitutive amino acids were determined as PITC derivatives using HPLC-DAD. Amino acid profile data were used to compare the four strategies mentioned above. The traditional strategies can lead to ambiguous or inconclusive results. With SIMCA, it is possible to obtain a more robust and less subjective identification with a known confidence level. As standards, we used albumin (whole egg, yolk and glair), casein (goat, cow and sheep) and collagen (mammalian and fish). The approach provides a more robust characterization of proteinaceous binding media in old artworks and makes it possible to distinguish them according to their origin.
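A minimal sketch of SIMCA-style class modelling as used here: one principal-component model is fitted per binder class, and an unknown amino acid profile is assigned to the class with the smallest reconstruction residual. The profiles below are synthetic, not HPLC-DAD data, and the full SIMCA machinery (autoscaling, residual F-tests, critical distances) is omitted.

```python
# SIMCA-style sketch: fit a PCA model per class, classify an unknown by the smallest
# residual distance to each class subspace. Amino acid profiles are synthetic.
import numpy as np

def fit_class_model(X, n_components=2):
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]            # class mean and PCA loadings

def residual(x, model):
    mean, loadings = model
    centered = x - mean
    reconstruction = loadings.T @ (loadings @ centered)
    return np.linalg.norm(centered - reconstruction)

rng = np.random.default_rng(5)
classes = {
    "collagen": rng.normal([30, 12, 5, 2, 1], 1.0, (10, 5)),   # Gly/Pro-rich profile, synthetic
    "albumin":  rng.normal([8, 6, 9, 7, 10], 1.0, (10, 5)),
}
models = {name: fit_class_model(X) for name, X in classes.items()}

unknown = np.array([29, 13, 6, 2, 1], dtype=float)
distances = {name: residual(unknown, m) for name, m in models.items()}
print(min(distances, key=distances.get), distances)
```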
Evaluation of AL-FEC performance for IP television services QoS
NASA Astrophysics Data System (ADS)
Mammi, E.; Russo, G.; Neri, A.
2010-01-01
The quality of IP television services is a critical issue because of the nature of the transport infrastructure. Packet loss is the main cause of service degradation on such network platforms. The use of forward error correction (FEC) techniques in the application layer (AL-FEC), between the source of the TV service (video server) and the user terminal, seems to be an efficient strategy to counteract packet losses, either as an alternative or in addition to suitable traffic management policies (the latter only feasible in "managed networks"). A number of AL-FEC techniques have been discussed in the literature and proposed for inclusion in TV-over-IP international standards. In this paper a performance evaluation of the AL-FEC scheme defined in the SMPTE 2022-1 standard is presented. Typical events occurring in IP networks that cause different types of IP packet loss (in terms of their statistical distribution) have been studied, and the ability of AL-FEC to counteract these kinds of losses has been evaluated. The analysis has been carried out in view of fulfilling TV service QoS requirements, which are usually very demanding. For managed networks, this paper envisages a strategy that combines the use of AL-FEC with the set-up of transport quality based on FEC packet prioritization. Promising results have been obtained for this kind of strategy.
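The XOR-parity idea underlying SMPTE 2022-1 style AL-FEC can be sketched as follows: media packets are grouped into columns of depth D, one parity packet (the bytewise XOR of the column) is transmitted per column, and a single lost packet per column can be rebuilt at the receiver. This is a simplified illustration and omits the packet framing and header fields defined by the standard.

```python
# Minimal sketch of column XOR parity in the spirit of SMPTE 2022-1 AL-FEC:
# one parity packet per column of D media packets allows recovery of one loss.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def column_parity(column):
    return reduce(xor_bytes, column)

def recover(column_with_gap, parity):
    received = [p for p in column_with_gap if p is not None]
    return reduce(xor_bytes, received, parity)    # parity XOR received = missing packet

D = 4
column = [bytes([i] * 8) for i in range(D)]       # 4 media packets, 8-byte payloads
parity = column_parity(column)

lost_index = 2
damaged = [p if i != lost_index else None for i, p in enumerate(column)]
print(recover(damaged, parity) == column[lost_index])   # True: packet rebuilt
```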
Ripple, Dean C; Montgomery, Christopher B; Hu, Zhishang
2015-02-01
Accurate counting and sizing of protein particles has been limited by discrepancies of counts obtained by different methods. To understand the bias and repeatability of techniques in common use in the biopharmaceutical community, the National Institute of Standards and Technology has conducted an interlaboratory comparison for sizing and counting subvisible particles from 1 to 25 μm. Twenty-three laboratories from industry, government, and academic institutions participated. The circulated samples consisted of a polydisperse suspension of abraded ethylene tetrafluoroethylene particles, which closely mimic the optical contrast and morphology of protein particles. For restricted data sets, agreement between data sets was reasonably good: relative standard deviations (RSDs) of approximately 25% for light obscuration counts with lower diameter limits from 1 to 5 μm, and approximately 30% for flow imaging with specified manufacturer and instrument setting. RSDs of the reported counts for unrestricted data sets were approximately 50% for both light obscuration and flow imaging. Differences between instrument manufacturers were not statistically significant for light obscuration but were significant for flow imaging. We also report a method for accounting for differences in the reported diameter for flow imaging and electrical sensing zone techniques; the method worked well for diameters greater than 15 μm. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Development of an Uncertainty Model for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Walter, Joel A.; Lawrence, William R.; Elder, David W.; Treece, Michael D.
2010-01-01
This paper introduces an uncertainty model being developed for the National Transonic Facility (NTF). The model uses a Monte Carlo technique to propagate standard uncertainties of measured values through the NTF data reduction equations to calculate the combined uncertainties of the key aerodynamic force and moment coefficients and freestream properties. The uncertainty propagation approach to assessing data variability is compared with ongoing data quality assessment activities at the NTF, notably check standard testing using statistical process control (SPC) techniques. It is shown that the two approaches are complementary and both are necessary tools for data quality assessment and improvement activities. The SPC approach is the final arbiter of variability in a facility. Its result encompasses variation due to people, processes, test equipment, and test article. The uncertainty propagation approach is limited mainly to the data reduction process. However, it is useful because it helps to assess the causes of variability seen in the data and consequently provides a basis for improvement. For example, it is shown that Mach number random uncertainty is dominated by static pressure variation over most of the dynamic pressure range tested. However, the random uncertainty in the drag coefficient is generally dominated by axial and normal force uncertainty with much less contribution from freestream conditions.
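A minimal sketch of the Monte Carlo propagation idea (not the NTF data reduction system): measured total and static pressures are perturbed by assumed standard uncertainties, and each draw is pushed through the isentropic Mach-number relation to give a combined standard uncertainty for Mach number.

```python
# Monte Carlo propagation sketch: perturb total and static pressure by assumed
# standard uncertainties and evaluate M = sqrt((2/(g-1)) * ((p0/p)**((g-1)/g) - 1)).
# Pressure values and uncertainties are illustrative, not NTF numbers.
import numpy as np

rng = np.random.default_rng(6)
n_draws = 100_000
gamma = 1.4
p0 = rng.normal(120_000.0, 60.0, n_draws)   # total pressure, Pa (u = 60 Pa assumed)
p  = rng.normal(100_000.0, 80.0, n_draws)   # static pressure, Pa (u = 80 Pa assumed)

mach = np.sqrt((2.0 / (gamma - 1.0)) * ((p0 / p) ** ((gamma - 1.0) / gamma) - 1.0))
print(f"M = {mach.mean():.4f} +/- {mach.std(ddof=1):.5f} (combined standard uncertainty)")
```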
Simulation training and resident performance of singleton vaginal breech delivery.
Deering, Shad; Brown, Jill; Hodor, Jonathon; Satin, Andrew J
2006-01-01
To determine whether simulation training improves resident competency in the management of a simulated vaginal breech delivery. Without advance notice or training, residents from 2 obstetrics and gynecology residency programs participated in a standardized simulation scenario of management of an imminent term vaginal breech delivery. The scenario used an obstetric birth simulator and human actors, with the encounters digitally recorded. Residents then received a training session with the simulator on the proper techniques for vaginal breech delivery. Two weeks later they were retested using a similar simulation scenario. A physician, blinded to training status, graded the residents' performance using a standardized evaluation sheet. Statistical analysis included the Wilcoxon signed rank test, McNemar chi2, regression analysis, and paired t test as appropriate with a P value of less than .05 considered significant. Twenty residents from 2 institutions completed all parts of the study protocol. Trained residents had significantly higher scores in 8 of 12 critical delivery components (P < .05). Overall performance of the delivery and safety in performing the delivery also improved significantly (P = .001 for both). Simulation training improved resident performance in the management of a simulated vaginal breech delivery. Performance of a term breech vaginal delivery is well suited for simulation training, because it is uncommon and inevitable, and improper technique may result in significant injury. II-2.
Ex vivo Mueller polarimetric imaging of the uterine cervix: a first statistical evaluation
NASA Astrophysics Data System (ADS)
Rehbinder, Jean; Haddad, Huda; Deby, Stanislas; Teig, Benjamin; Nazac, André; Novikova, Tatiana; Pierangelo, Angelo; Moreau, François
2016-07-01
Early detection through screening plays a major role in reducing the impact of cervical cancer on patients. When detected before the invasive stage, precancerous lesions can be eliminated with very limited surgery. Polarimetric imaging is a potential alternative to the standard screening methods currently used. In a previous proof-of-concept study, significant contrasts were found in polarimetric images acquired for healthy and precancerous regions of excised cervical tissue. To quantify the ability of the technique to differentiate between healthy and precancerous tissue, polarimetric images of seventeen cervical conization specimens (cone-shaped or cylindrical wedges from the uterine cervix) are compared with the results of histopathological diagnosis, which is considered the "gold standard." The sensitivity and specificity of the technique are calculated for images acquired at wavelengths of 450, 550, and 600 nm, aiming to differentiate between high-grade cervical intraepithelial neoplasia (CIN 2-3) and healthy squamous epithelium. To do so, a sliding threshold for the scalar retardance parameter was used for the sample zones, as labeled after histological diagnosis. An optimized value of approximately 83% is achieved for both sensitivity and specificity for images acquired at 450 nm and for a threshold scalar retardance value of 10.6 deg. This study paves the way for an application of polarimetry in the clinic.
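The sliding-threshold analysis can be sketched as a sweep over candidate scalar-retardance cutoffs, reporting sensitivity and specificity at each value and keeping the cutoff that maximizes Youden's J. The retardance values below are synthetic, and the assumption that CIN 2-3 zones show lower retardance than healthy epithelium is made only for illustration.

```python
# Sliding-threshold sketch: sweep a scalar-retardance cutoff over labeled zones and
# pick the threshold maximizing Youden's J. Retardance values are synthetic.
import numpy as np

rng = np.random.default_rng(7)
healthy = rng.normal(14.0, 3.0, 300)     # deg, healthy squamous epithelium (synthetic)
cin23   = rng.normal(8.0, 3.0, 200)      # deg, CIN 2-3 zones (synthetic, assumed lower)

best = None
for threshold in np.linspace(2, 20, 181):
    sensitivity = np.mean(cin23 < threshold)      # CIN 2-3 called positive below the cutoff
    specificity = np.mean(healthy >= threshold)
    j = sensitivity + specificity - 1
    if best is None or j > best[0]:
        best = (j, threshold, sensitivity, specificity)

j, threshold, sens, spec = best
print(f"threshold = {threshold:.1f} deg: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```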
Ensuring Positiveness of the Scaled Difference Chi-square Test Statistic.
Satorra, Albert; Bentler, Peter M
2010-06-01
A scaled difference test statistic T̃(d) that can be computed from standard software of structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (2001). The statistic T̃(d) is asymptotically equivalent to the scaled difference test statistic T̄(d) introduced in Satorra (2000), which requires more involved computations beyond the standard output of SEM software. The test statistic T̃(d) has been widely used in practice, but in some applications it is negative due to negativity of its associated scaling correction. Using the implicit function theorem, this note develops an improved scaling correction leading to a new scaled difference statistic T̄(d) that avoids negative chi-square values.
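For reference, the hand calculation is usually presented along the following lines; the notation here is reconstructed rather than quoted from the paper.

```latex
% Sketch of the usual hand calculation (notation reconstructed, not quoted from the paper):
% T_0, T_1 are the uncorrected chi-square statistics of the restricted and unrestricted
% models, r_0 > r_1 their degrees of freedom, and \hat{c}_0, \hat{c}_1 their scaling corrections.
\[
  \hat{c}_d \;=\; \frac{r_0\,\hat{c}_0 - r_1\,\hat{c}_1}{r_0 - r_1},
  \qquad
  \tilde{T}_d \;=\; \frac{T_0 - T_1}{\hat{c}_d}.
\]
% The difficulty addressed in the note: \hat{c}_d, and hence \tilde{T}_d, can turn out
% negative whenever r_0\,\hat{c}_0 < r_1\,\hat{c}_1.
```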
Raymond, Mark R; Clauser, Brian E; Furman, Gail E
2010-10-01
The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
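A minimal sketch (not the study's exact model) of the ordinary least squares adjustment: observed ratings are regressed on rater indicator variables and each rater's estimated stringency effect is subtracted. It assumes each rater scores a comparable group of examinees; the ratings are synthetic.

```python
# OLS rating-adjustment sketch: regress ratings on rater dummies (rater 0 as reference)
# and subtract estimated rater effects. Assumes comparable examinee groups per rater.
import numpy as np

rng = np.random.default_rng(8)
n_raters, n_per_rater = 5, 40
severity = np.array([0.0, -0.4, 0.3, 0.6, -0.5])             # synthetic rater stringency
examinee_ability = rng.normal(7.0, 1.0, (n_raters, n_per_rater))
observed = examinee_ability + severity[:, None] + rng.normal(0, 0.5, (n_raters, n_per_rater))

ratings = observed.ravel()
rater_id = np.repeat(np.arange(n_raters), n_per_rater)
X = np.column_stack([np.ones(ratings.size)] +
                    [(rater_id == r).astype(float) for r in range(1, n_raters)])
beta, *_ = np.linalg.lstsq(X, ratings, rcond=None)

rater_effect = np.concatenate([[0.0], beta[1:]])             # effects relative to rater 0
adjusted = ratings - rater_effect[rater_id]
print("observed SD:", ratings.std(ddof=1).round(3), "adjusted SD:", adjusted.std(ddof=1).round(3))
```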
Robotic radical cystectomy and intracorporeal urinary diversion: The USC technique.
Abreu, Andre Luis de Castro; Chopra, Sameer; Azhar, Raed A; Berger, Andre K; Miranda, Gus; Cai, Jie; Gill, Inderbir S; Aron, Monish; Desai, Mihir M
2014-07-01
Radical cystectomy is the gold-standard treatment for muscle-invasive and refractory non-muscle-invasive bladder cancer. We describe our technique for robotic radical cystectomy (RRC) and intracorporeal urinary diversion (ICUD), which replicates open surgical principles, and present our preliminary results. Specific descriptions of preoperative planning, surgical technique, and postoperative care are provided. Demographic, perioperative, and 30-day complication data were collected prospectively and analyzed retrospectively. Learning curve trends were analyzed individually for ileal conduits (IC) and neobladders (NB). SAS® software version 9.3 was used for statistical analyses, with statistical significance set at P < 0.05. Between July 2010 and September 2013, RRC and lymph node dissection with ICUD were performed in 103 consecutive patients (orthotopic NB = 46, IC = 57). All procedures were completed robotically, replicating open surgical principles. The learning curve trends showed a significant reduction in hospital stay for both IC (11 vs. 6 days, P < 0.01) and orthotopic NB (13 vs. 7.5 days, P < 0.01) when comparing the first third of the cohort with the rest of the group. Overall median (range) operative time and estimated blood loss were 7 h (4.8-13) and 200 mL (50-1200), respectively. Within 30 days postoperatively, complications occurred in 61 patients (59%), with the majority being low grade (n = 43), and no patient died. Median (range) node yield was 36 (0-106), and 4 specimens (3.9%) had positive surgical margins. Robotic radical cystectomy with totally intracorporeal urinary diversion is safe and feasible. It can be performed using established open surgical principles with encouraging perioperative outcomes.