Sample records for simple statistical techniques

  1. [Comparative study of the repair of full thickness tear of the supraspinatus by means of "single row" or "suture bridge" techniques].

    PubMed

    Arroyo-Hernández, M; Mellado-Romero, M A; Páramo-Díaz, P; Martín-López, C M; Cano-Egea, J M; Vilá Y Rico, J

    2015-01-01

    The purpose of this study was to analyze whether there is any difference between arthroscopic repair of full-thickness supraspinatus tears with the single-row technique versus the suture bridge technique. We conducted a retrospective study of 123 patients with full-thickness supraspinatus tears treated in our hospital between January 2009 and January 2013. There were 60 single-row repairs and 63 suture bridge repairs. The mean age was 62.9 years in the single-row group and 63.3 years in the suture bridge group. There were more women than men in both groups (67%). All patients were assessed using the Constant score. The mean Constant score was 76.7 in the suture bridge group and 72.4 in the single-row group. We also performed a statistical analysis of each Constant item. Strength was higher in the suture bridge group, with a statistically significant difference (p = 0.04). The range of movement was also greater in the suture bridge group, but the difference was not statistically significant. Overall, the suture bridge technique yielded better clinical results than single-row repair, but the difference was not statistically significant (p = 0.298).

  2. An automated approach to the design of decision tree classifiers

    NASA Technical Reports Server (NTRS)

    Argentiero, P.; Chin, R.; Beaudet, P.

    1982-01-01

    An automated technique is presented for designing effective decision tree classifiers predicated only on a priori class statistics. The procedure relies on linear feature extractions and Bayes table look-up decision rules. Associated error matrices are computed and utilized to provide an optimal design of the decision tree at each so-called 'node'. A by-product of this procedure is a simple algorithm for computing the global probability of correct classification assuming the statistical independence of the decision rules. Attention is given to a more precise definition of decision tree classification, the mathematical details on the technique for automated decision tree design, and an example of a simple application of the procedure using class statistics acquired from an actual Landsat scene.
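
    As a generic, hedged illustration of a decision tree classifier (not the Bayes-table design procedure this record describes), the sketch below trains an ordinary CART tree on synthetic two-class Gaussian data with scikit-learn; all class statistics and parameters here are invented for illustration.

```python
# Illustrative sketch only: an ordinary decision tree classifier on synthetic
# Gaussian class data. It is NOT the automated Bayes-table design procedure of
# the record above; it simply shows a decision tree classifier in practice.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Two hypothetical classes described only by first- and second-order statistics
class0 = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], n)
class1 = rng.multivariate_normal([1.5, 1.0], [[1.0, -0.2], [-0.2, 1.0]], n)
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
print("estimated probability of correct classification:", tree.score(X_test, y_test))
```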

  3. The Relationship between Visual Analysis and Five Statistical Analyses in a Simple AB Single-Case Research Design

    ERIC Educational Resources Information Center

    Brossart, Daniel F.; Parker, Richard I.; Olson, Elizabeth A.; Mahadevan, Lakshmi

    2006-01-01

    This study explored some practical issues for single-case researchers who rely on visual analysis of graphed data, but who also may consider supplemental use of promising statistical analysis techniques. The study sought to answer three major questions: (a) What is a typical range of effect sizes from these analytic techniques for data from…

  4. Using Statistical Process Control to Make Data-Based Clinical Decisions.

    ERIC Educational Resources Information Center

    Pfadt, Al; Wheeler, Donald J.

    1995-01-01

    Statistical process control (SPC), which employs simple statistical tools and problem-solving techniques such as histograms, control charts, flow charts, and Pareto charts to implement continual product improvement procedures, can be incorporated into human service organizations. Examples illustrate use of SPC procedures to analyze behavioral data…
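
    As a minimal sketch of one of the simple SPC tools mentioned above, the code below computes an individuals (XmR) control chart using the conventional 2.66 moving-range constant; the behavioral counts are hypothetical, not drawn from the article.

```python
# Minimal sketch of an individuals (XmR) control chart.
# Data are hypothetical behavioral counts per session.
import numpy as np

counts = np.array([12, 15, 11, 14, 13, 18, 12, 16, 14, 13, 22, 15], dtype=float)
center = counts.mean()
moving_range = np.abs(np.diff(counts))
mr_bar = moving_range.mean()

# Conventional XmR constant: natural process limits at +/- 2.66 * mean moving range
ucl = center + 2.66 * mr_bar
lcl = center - 2.66 * mr_bar

for i, x in enumerate(counts, start=1):
    flag = "OUT OF LIMITS" if (x > ucl or x < lcl) else ""
    print(f"session {i:2d}: {x:5.1f}  {flag}")
print(f"center={center:.2f}  LCL={lcl:.2f}  UCL={ucl:.2f}")
```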

  5. Morse Code, Scrabble, and the Alphabet

    ERIC Educational Resources Information Center

    Richardson, Mary; Gabrosek, John; Reischman, Diann; Curtiss, Phyliss

    2004-01-01

    In this paper we describe an interactive activity that illustrates simple linear regression. Students collect data and analyze it using simple linear regression techniques taught in an introductory applied statistics course. The activity is extended to illustrate checks for regression assumptions and regression diagnostics taught in an…
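
    A hedged sketch of the kind of analysis the activity describes: fit a simple linear regression and glance at the residuals as a basic assumption check. The (x, y) pairs below are made up, not the class data from the activity.

```python
# Fit a simple linear regression and inspect residuals (synthetic data).
import numpy as np
from scipy import stats

x = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)          # e.g., word length
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1])    # e.g., a made-up score

res = stats.linregress(x, y)
print(f"slope={res.slope:.3f}  intercept={res.intercept:.3f}  r^2={res.rvalue**2:.3f}")

# Simple diagnostic: residuals should show no systematic pattern
residuals = y - (res.intercept + res.slope * x)
print("residuals:", np.round(residuals, 2))
```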

  6. Hood of the truck statistics for food animal practitioners.

    PubMed

    Slenning, Barrett D

    2006-03-01

    This article offers some tips on working with statistics and develops four relatively simple procedures to deal with most kinds of data with which veterinarians work. The criterion for a procedure to be a "Hood of the Truck Statistics" (HOT Stats) technique is that it must be simple enough to be done with pencil, paper, and a calculator. The goal of HOT Stats is to have the tools available to run quick analyses in only a few minutes so that decisions can be made in a timely fashion. The discipline allows us to move away from the all-too-common guess work about effects and differences we perceive following a change in treatment or management. The techniques allow us to move toward making more defensible, credible, and more quantifiably "risk-aware" real-time recommendations to our clients.

  7. Large ensemble modeling of the last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, David; Chang, Won; Haran, Murali; Applegate, Patrick; DeConto, Robert

    2016-05-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 yr. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. The analyses provide sea-level-rise envelopes with well-defined parametric uncertainty bounds, but the simple averaging method only provides robust results with full-factorial parameter sampling in the large ensemble. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree well with the more advanced techniques. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds.
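
    The "simple" method in this record is averaging weighted by each run's aggregate score. The sketch below illustrates score-weighted ensemble averaging in general terms only: the per-run misfits and sea-level-rise values are synthetic, and the conversion of misfit to weight via exp(-misfit/scale) is an assumption, not the authors' exact weighting.

```python
# Illustrative score-weighted ensemble averaging (synthetic ensemble).
import numpy as np

rng = np.random.default_rng(1)
n_runs = 625
misfit = rng.gamma(shape=2.0, scale=1.0, size=n_runs)     # aggregate model-data misfit
sea_level_rise = rng.normal(3.3, 0.8, size=n_runs)        # equivalent SLR per run (m)

# Assumed conversion of misfit to weight; smaller misfit -> larger weight
scale = np.median(misfit)
weights = np.exp(-misfit / scale)
weights /= weights.sum()

mean_slr = np.sum(weights * sea_level_rise)
var_slr = np.sum(weights * (sea_level_rise - mean_slr) ** 2)
print(f"weighted mean SLR = {mean_slr:.2f} m, weighted std = {np.sqrt(var_slr):.2f} m")
```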

  8. Improving cerebellar segmentation with statistical fusion

    NASA Astrophysics Data System (ADS)

    Plassard, Andrew J.; Yang, Zhen; Prince, Jerry L.; Claassen, Daniel O.; Landman, Bennett A.

    2016-03-01

    The cerebellum is a somatotopically organized central component of the central nervous system, well known to be involved in motor coordination, with increasingly recognized roles in cognition and planning. Recent work in multiatlas labeling has created methods that offer the potential for fully automated 3-D parcellation of the cerebellar lobules and vermis (which are organizationally equivalent to cortical gray matter areas). This work explores the trade-offs of using different statistical fusion techniques and post hoc optimizations in two datasets with distinct imaging protocols. We offer a novel fusion technique by extending the ideas of the Selective and Iterative Method for Performance Level Estimation (SIMPLE) to a patch-based performance model. We demonstrate the effectiveness of our algorithm, Non-Local SIMPLE, for segmentation of a mixed population of healthy subjects and patients with severely abnormal cerebellar anatomy. Under the first imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard segmentation techniques. In the second imaging protocol, we show that Non-Local SIMPLE outperforms previous gold-standard techniques but is outperformed by a non-locally weighted vote with the deeper population of atlases available. This work advances the state of the art in open source cerebellar segmentation algorithms and offers the opportunity for routinely including cerebellar segmentation in magnetic resonance imaging studies that acquire whole brain T1-weighted volumes with approximately 1 mm isotropic resolution.

  9. Techniques for estimating selected streamflow characteristics of rural unregulated streams in Ohio

    USGS Publications Warehouse

    Koltun, G.F.; Whitehead, Matthew T.

    2002-01-01

    This report provides equations for estimating mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and streamflow quartiles (the 25th-, 50th-, and 75th-percentile streamflows) as a function of selected basin characteristics for rural, unregulated streams in Ohio. The equations were developed from streamflow statistics and basin-characteristics data for as many as 219 active or discontinued streamflow-gaging stations on rural, unregulated streams in Ohio with 10 or more years of homogenous daily streamflow record. Streamflow statistics and basin-characteristics data for the 219 stations are presented in this report. Simple equations (based on drainage area only) and best-fit equations (based on drainage area and at least two other basin characteristics) were developed by means of ordinary least-squares regression techniques. Application of the best-fit equations generally involves quantification of basin characteristics that require or are facilitated by use of a geographic information system. In contrast, the simple equations can be used with information that can be obtained without use of a geographic information system; however, the simple equations have larger prediction errors than the best-fit equations and exhibit geographic biases for most streamflow statistics. The best-fit equations should be used instead of the simple equations whenever possible.

  10. A simple white noise analysis of neuronal light responses.

    PubMed

    Chichilnisky, E J

    2001-05-01

    A white noise technique is presented for estimating the response properties of spiking visual system neurons. The technique is simple, robust, efficient and well suited to simultaneous recordings from multiple neurons. It provides a complete and easily interpretable model of light responses even for neurons that display a common form of response nonlinearity that precludes classical linear systems analysis. A theoretical justification of the technique is presented that relies only on elementary linear algebra and statistics. Implementation is described with examples. The technique and the underlying model of neural responses are validated using recordings from retinal ganglion cells, and in principle are applicable to other neurons. Advantages and disadvantages of the technique relative to classical approaches are discussed.
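
    A core ingredient of white noise analysis is the spike-triggered average (STA), which estimates a neuron's linear filter from white-noise stimuli and spike times. The sketch below is a generic illustration on a synthetic linear-nonlinear model neuron; the stimulus, filter, and spiking model are all made up, not taken from the article.

```python
# Spike-triggered average from white-noise stimulation of a synthetic neuron.
import numpy as np

rng = np.random.default_rng(2)
T, lag = 20000, 20
stimulus = rng.standard_normal(T)
true_filter = np.exp(-np.arange(lag) / 5.0) * np.sin(np.arange(lag) / 3.0)

# Spikes from a rectified linear-nonlinear model (for illustration only)
drive = np.convolve(stimulus, true_filter, mode="full")[:T]
rate = np.maximum(drive, 0.0) * 0.1
spikes = rng.poisson(rate)

# Spike-triggered average over the preceding 'lag' samples
sta = np.zeros(lag)
count = 0
for t in range(lag, T):
    if spikes[t] > 0:
        sta += spikes[t] * stimulus[t - lag + 1:t + 1][::-1]
        count += spikes[t]
sta /= count
print("correlation between STA and true filter:",
      round(float(np.corrcoef(sta, true_filter)[0, 1]), 3))
```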

  11. Comparative forensic soil analysis of New Jersey state parks using a combination of simple techniques with multivariate statistics.

    PubMed

    Bonetti, Jennifer; Quarino, Lawrence

    2014-05-01

    This study has shown that the combination of simple techniques with the use of multivariate statistics offers the potential for the comparative analysis of soil samples. Five samples were obtained from each of twelve state parks across New Jersey in both the summer and fall seasons. Each sample was examined using particle-size distribution, pH analysis in both water and 1 M CaCl2 , and a loss on ignition technique. Data from each of the techniques were combined, and principal component analysis (PCA) and canonical discriminant analysis (CDA) were used for multivariate data transformation. Samples from different locations could be visually differentiated from one another using these multivariate plots. Hold-one-out cross-validation analysis showed error rates as low as 3.33%. Ten blind study samples were analyzed resulting in no misclassifications using Mahalanobis distance calculations and visual examinations of multivariate plots. Seasonal variation was minimal between corresponding samples, suggesting potential success in forensic applications. © 2014 American Academy of Forensic Sciences.
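
    A hedged sketch of the analysis pipeline described above: dimension reduction followed by a canonical discriminant analysis (implemented here with linear discriminant analysis as a stand-in), evaluated by leave-one-out ("hold-one-out") cross-validation. The soil features, park count, and class separations below are invented, not the study's measurements.

```python
# PCA + linear discriminant analysis with leave-one-out cross-validation
# on synthetic "soil" features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_parks, n_samples = 4, 10
# Hypothetical features: % sand, % silt, % clay, pH(water), pH(CaCl2), loss on ignition
X = np.vstack([rng.normal(loc=center, scale=1.0, size=(n_samples, 6))
               for center in rng.uniform(0, 10, size=(n_parks, 6))])
y = np.repeat(np.arange(n_parks), n_samples)

model = make_pipeline(StandardScaler(), PCA(n_components=4),
                      LinearDiscriminantAnalysis())
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"leave-one-out error rate: {100 * (1 - scores.mean()):.1f}%")
```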

  12. The effectiveness of simple drainage technique in improvement of cerebral blood flow in patients with chronic subdural hemorrhage.

    PubMed

    Kaplan, Metin; Erol, Fatih Serhat; Bozgeyik, Zülküf; Koparan, Mehmet

    2007-07-01

    In the present study, the clinical effectiveness of a surgical procedure in which no draining tubes are installed following simple burr hole drainage and saline irrigation is investigated. Ten patients who had undergone operative intervention for unilateral chronic subdural hemorrhage, with a clinical grade of 2 and a hemorrhage thickness of 2 cm, were included in the study. The blood flow rates of the middle cerebral artery (MCA) were evaluated bilaterally with Doppler before and after the surgery. All cases underwent the operation using the simple burr hole drainage technique with saline irrigation but without a drain. Statistical analysis was performed with the Wilcoxon signed rank test (p<0.05). There was a pronounced decrease in preoperative MCA blood flow in the hemisphere in which the hemorrhage had occurred (p=0.008). An increased pulsatility index (PI) on the side of the hemorrhage was also noted (p=0.005). Postoperative MCA blood flow measurements showed a statistically significant improvement (p=0.005). Furthermore, the PI value showed normalization (p<0.05). The paresis and the level of consciousness improved in all cases. The simple burr hole drainage technique is sufficient for the improvement of cerebral blood flow and clinical recovery in patients with chronic subdural hemorrhage.
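
    The paired comparison reported here is a Wilcoxon signed-rank test on pre- versus postoperative measurements. The sketch below shows such a test in SciPy; the blood-flow numbers are invented, not the study's data.

```python
# Wilcoxon signed-rank test on paired pre/post measurements (hypothetical values).
from scipy import stats

preop  = [38, 41, 35, 44, 39, 36, 42, 40, 37, 43]   # cm/s, hypothetical
postop = [52, 55, 49, 58, 50, 51, 57, 54, 48, 56]

stat, p = stats.wilcoxon(preop, postop)
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```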

  13. Proceedings, Seminar on Probabilistic Methods in Geotechnical Engineering

    NASA Astrophysics Data System (ADS)

    Hynes-Griffin, M. E.; Buege, L. L.

    1983-09-01

    Contents: Applications of Probabilistic Methods in Geotechnical Engineering; Probabilistic Seismic and Geotechnical Evaluation at a Dam Site; Probabilistic Slope Stability Methodology; Probability of Liquefaction in a 3-D Soil Deposit; Probabilistic Design of Flood Levees; Probabilistic and Statistical Methods for Determining Rock Mass Deformability Beneath Foundations: An Overview; Simple Statistical Methodology for Evaluating Rock Mechanics Exploration Data; New Developments in Statistical Techniques for Analyzing Rock Slope Stability.

  14. Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.

    PubMed

    Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W

    2017-08-01

    Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques used simple statistics and/or rejected anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and can be considered simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects complete breaths if one of its features is out of range. The thresholds are determined by both saliency and performance metrics rather than qualitative assumptions as in previous works. Feature ranking indicates that our new landmark features are among the highest scoring candidates regardless of age across saliency criteria. F1-scores, receiver operating characteristic, and variability of the mean resistance metrics show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant. Our removal is objective and comparable to the manual method. This is a critical work to automate forced oscillation technique quality control.
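
    The rejection step described above is quartile (IQR) thresholding applied to breath-level features: a breath is discarded if any of its features falls outside [Q1 - k*IQR, Q3 + k*IQR]. The sketch below illustrates that rule in isolation; the feature matrix and the multiplier k are assumptions, not the paper's learned features or tuned thresholds.

```python
# Quartile (IQR) thresholding that rejects whole breaths with any out-of-range feature.
import numpy as np

rng = np.random.default_rng(4)
features = rng.normal(size=(60, 3))     # 60 breaths x 3 breath-level features (synthetic)
features[5] += 6.0                      # plant an artefactual breath

q1, q3 = np.percentile(features, [25, 75], axis=0)
iqr = q3 - q1
k = 1.0                                 # assumed multiplier, in the spirit of a "1 x IQR" rule
low, high = q1 - k * iqr, q3 + k * iqr

keep = np.all((features >= low) & (features <= high), axis=1)
print("breaths kept:", int(keep.sum()), "of", len(keep))
print("rejected breath indices:", np.where(~keep)[0])
```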

  15. Applying Statistics in the Undergraduate Chemistry Laboratory: Experiments with Food Dyes.

    ERIC Educational Resources Information Center

    Thomasson, Kathryn; Lofthus-Merschman, Sheila; Humbert, Michelle; Kulevsky, Norman

    1998-01-01

    Describes several experiments to teach different aspects of the statistical analysis of data using household substances and a simple analysis technique. Each experiment can be performed in three hours. Students learn about treatment of spurious data, application of a pooled variance, linear least-squares fitting, and simultaneous analysis of dyes…

  16. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, George

    1993-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multiparameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resource.

  17. Multivariate statistical analysis software technologies for astrophysical research involving large data bases

    NASA Technical Reports Server (NTRS)

    Djorgovski, Stanislav

    1992-01-01

    The existing and forthcoming data bases from NASA missions contain an abundance of information whose complexity cannot be efficiently tapped with simple statistical techniques. Powerful multivariate statistical methods already exist which can be used to harness much of the richness of these data. Automatic classification techniques have been developed to solve the problem of identifying known types of objects in multi parameter data sets, in addition to leading to the discovery of new physical phenomena and classes of objects. We propose an exploratory study and integration of promising techniques in the development of a general and modular classification/analysis system for very large data bases, which would enhance and optimize data management and the use of human research resources.

  18. Advanced statistics: linear regression, part I: simple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    Simple linear regression is a mathematical technique used to model the relationship between a single independent predictor variable and a single dependent outcome variable. In this, the first of a two-part series exploring concepts in linear regression analysis, the four fundamental assumptions and the mechanics of simple linear regression are reviewed. The most common technique used to derive the regression line, the method of least squares, is described. The reader will be acquainted with other important concepts in simple linear regression, including: variable transformations, dummy variables, relationship to inference testing, and leverage. Simplified clinical examples with small datasets and graphic models are used to illustrate the points. This will provide a foundation for the second article in this series: a discussion of multiple linear regression, in which there are multiple predictor variables.
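
    For reference, these are the standard closed-form least-squares estimators for the simple linear regression line (textbook formulas, not quoted from the article itself).

```latex
% Least-squares estimators for y_i = \beta_0 + \beta_1 x_i + \varepsilon_i
\begin{align}
  \hat{\beta}_1 &= \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}
                        {\sum_{i=1}^{n}(x_i-\bar{x})^{2}}, &
  \hat{\beta}_0 &= \bar{y} - \hat{\beta}_1 \bar{x}.
\end{align}
```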

  19. Comments on `A Cautionary Note on the Interpretation of EOFs'.

    NASA Astrophysics Data System (ADS)

    Behera, Swadhin K.; Rao, Suryachandra A.; Saji, Hameed N.; Yamagata, Toshio

    2003-04-01

    The misleading aspect of the statistical analyses used in Dommenget and Latif, which raises concerns on some of the reported climate modes, is demonstrated. Adopting simple statistical techniques, the physical existence of the Indian Ocean dipole mode is shown and then the limitations of varimax and regression analyses in capturing the climate mode are discussed.

  20. Correlation and simple linear regression.

    PubMed

    Zou, Kelly H; Tuncali, Kemal; Silverman, Stuart G

    2003-06-01

    In this tutorial article, the concepts of correlation and regression are reviewed and demonstrated. The authors review and compare two correlation coefficients, the Pearson correlation coefficient and the Spearman rho, for measuring linear and nonlinear relationships between two continuous variables. In the case of measuring the linear relationship between a predictor and an outcome variable, simple linear regression analysis is conducted. These statistical concepts are illustrated by using a data set from published literature to assess a computed tomography-guided interventional technique. These statistical methods are important for exploring the relationships between variables and can be applied to many radiologic studies.
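
    A short sketch contrasting the two coefficients reviewed above on made-up data with a monotone but nonlinear relationship: Pearson's r reflects linear association, Spearman's rho reflects monotone association.

```python
# Pearson vs Spearman correlation on synthetic, monotone-but-nonlinear data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.uniform(0, 3, 100)
y = np.exp(x) + rng.normal(scale=0.5, size=100)

pearson_r, _ = stats.pearsonr(x, y)
spearman_rho, _ = stats.spearmanr(x, y)
print(f"Pearson r    = {pearson_r:.2f}  (linear association)")
print(f"Spearman rho = {spearman_rho:.2f}  (monotone association)")
```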

  1. Theoretical and Experimental Investigation of Random Gust Loads Part I : Aerodynamic Transfer Function of a Simple Wing Configuration in Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Hakkinen, Raimo J; Richardson, A S, Jr

    1957-01-01

    Sinusoidally oscillating downwash and lift produced on a simple rigid airfoil were measured and compared with calculated values. Statistically stationary random downwash and the corresponding lift on a simple rigid airfoil were also measured and the transfer functions between their power spectra determined. The random experimental values are compared with theoretically approximated values. Limitations of the experimental technique and the need for more extensive experimental data are discussed.

  2. An analysis of I/O efficient order-statistic-based techniques for noise power estimation in the HRMS sky survey's operational system

    NASA Technical Reports Server (NTRS)

    Zimmerman, G. A.; Olsen, E. T.

    1992-01-01

    Noise power estimation in the High-Resolution Microwave Survey (HRMS) sky survey element is considered as an example of a constant false alarm rate (CFAR) signal detection problem. Order-statistic-based noise power estimators for CFAR detection are considered in terms of required estimator accuracy and estimator dynamic range. By limiting the dynamic range of the value to be estimated, the performance of an order-statistic estimator can be achieved by simpler techniques requiring only a single pass of the data. Simple threshold-and-count techniques are examined, and it is shown how several parallel threshold-and-count estimation devices can be used to expand the dynamic range to meet HRMS system requirements with minimal hardware complexity. An input/output (I/O) efficient limited-precision order-statistic estimator with wide but limited dynamic range is also examined.

  3. Statistical bias correction method applied on CMIP5 datasets over the Indian region during the summer monsoon season for climate change applications

    NASA Astrophysics Data System (ADS)

    Prasanna, V.

    2018-01-01

    This study makes use of temperature and precipitation from CMIP5 climate model output for climate change application studies over the Indian region during the summer monsoon season (JJAS). Bias correction of temperature and precipitation from CMIP5 GCM simulation results with respect to observation is discussed in detail. The non-linear statistical bias correction is a suitable bias correction method for climate change data because it is simple and does not add artificial uncertainties to the impact assessment of climate change scenarios for climate change application studies (agricultural production changes) in the future. The simple statistical bias correction uses observational constraints on the GCM baseline, and the projected results are scaled with respect to the changing magnitude in future scenarios, varying from one model to the other. Two types of bias correction techniques are shown here: (1) a simple bias correction using a percentile-based quantile-mapping algorithm and (2) a simple but improved bias correction method, a cumulative distribution function (CDF; Weibull distribution function)-based quantile-mapping algorithm. This study shows that the percentile-based quantile mapping method gives results similar to the CDF (Weibull)-based quantile mapping method, and both the methods are comparable. The bias correction is applied to temperature and precipitation variables for present climate and future projected data to make use of it in a simple statistical model to understand the future changes in crop production over the Indian region during the summer monsoon season. In total, 12 CMIP5 models are used for Historical (1901-2005), RCP4.5 (2005-2100), and RCP8.5 (2005-2100) scenarios. The climate index from each CMIP5 model and the observed agricultural yield index over the Indian region are used in a regression model to project the changes in the agricultural yield over India from RCP4.5 and RCP8.5 scenarios. The results revealed a better convergence of model projections in the bias corrected data compared to the uncorrected data. The study can be extended to localized regional domains aimed at understanding the changes in the agricultural productivity in the future with an agro-economy or a simple statistical model. The statistical model indicated that the total food grain yield is going to increase over the Indian region in the future: the increase is approximately 50 kg/ha for the RCP4.5 scenario from 2001 until the end of 2100, and approximately 90 kg/ha for the RCP8.5 scenario over the same period. There are many studies using bias correction techniques, but this study applies the bias correction technique to future climate scenario data from CMIP5 models and then applies it to crop statistics to find future crop yield changes over the Indian region.
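
    The first of the two variants above, percentile-based quantile mapping, can be sketched as matching the empirical quantiles of the model to those of the observations. The series, shapes, and number of quantiles below are synthetic assumptions, not the study's data or exact algorithm.

```python
# Empirical (percentile-based) quantile mapping on synthetic rainfall series.
import numpy as np

rng = np.random.default_rng(6)
obs   = rng.gamma(shape=2.0, scale=4.0, size=3000)   # "observed" JJAS rainfall
model = rng.gamma(shape=2.5, scale=3.0, size=3000)   # biased GCM rainfall

def quantile_map(values, model_ref, obs_ref, n_q=100):
    """Map 'values' through model->observation quantiles (empirical CDF matching)."""
    q = np.linspace(0, 100, n_q)
    model_q = np.percentile(model_ref, q)
    obs_q = np.percentile(obs_ref, q)
    return np.interp(values, model_q, obs_q)

corrected = quantile_map(model, model, obs)
print("raw model mean :", round(float(model.mean()), 2), " obs mean:", round(float(obs.mean()), 2))
print("corrected mean :", round(float(corrected.mean()), 2))
```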

  4. Evaluation of wind field statistics near and inside clouds using a coherent Doppler lidar

    NASA Astrophysics Data System (ADS)

    Lottman, Brian Todd

    1998-09-01

    This work proposes advanced techniques for measuring the spatial wind field statistics near and inside clouds using a vertically pointing solid state coherent Doppler lidar on a fixed ground based platform. The coherent Doppler lidar is an ideal instrument for high spatial and temporal resolution velocity estimates. The basic parameters of lidar are discussed, including a complete statistical description of the Doppler lidar signal. This description is extended to cases with simple functional forms for aerosol backscatter and velocity. An estimate for the mean velocity over a sensing volume is produced by estimating the mean spectra. There are many traditional spectral estimators, which are useful for conditions with slowly varying velocity and backscatter. A new class of estimators (novel) is introduced that produces reliable velocity estimates for conditions with large variations in aerosol backscatter and velocity with range, such as cloud conditions. Performance of traditional and novel estimators is computed for a variety of deterministic atmospheric conditions using computer simulated data. Wind field statistics are produced for actual data for a cloud deck, and for multi-layer clouds. Unique results include detection of possible spectral signatures for rain, estimates for the structure function inside a cloud deck, reliable velocity estimation techniques near and inside thin clouds, and estimates for simple wind field statistics between cloud layers.

  5. A data compression technique for synthetic aperture radar images

    NASA Technical Reports Server (NTRS)

    Frost, V. S.; Minden, G. J.

    1986-01-01

    A data compression technique is developed for synthetic aperture radar (SAR) imagery. The technique is based on an SAR image model and is designed to preserve the local statistics in the image by an adaptive variable rate modification of block truncation coding (BTC). A data rate of approximately 1.6 bit/pixel is achieved with the technique while maintaining the image quality and cultural (pointlike) targets. The algorithm requires no large data storage and is computationally simple.
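
    Basic block truncation coding (BTC), the building block the adaptive SAR scheme above modifies, reduces each block to a bitmap plus two levels chosen so that the block mean and standard deviation are preserved. The sketch below shows plain BTC on a random image, not the adaptive variable-rate SAR variant.

```python
# Basic block truncation coding: per block, keep a bitmap and two reconstruction levels.
import numpy as np

def btc_block(block):
    n = block.size
    m, s = block.mean(), block.std()
    bitmap = block >= m
    q = int(bitmap.sum())
    if q == 0 or q == n:                       # flat block: keep the mean only
        return np.full(block.shape, m, dtype=float)
    low  = m - s * np.sqrt(q / (n - q))
    high = m + s * np.sqrt((n - q) / q)
    return np.where(bitmap, high, low)         # preserves block mean and std exactly

rng = np.random.default_rng(7)
image = rng.integers(0, 256, size=(8, 8)).astype(float)
recon = np.zeros_like(image)
B = 4                                          # 4x4 blocks
for i in range(0, image.shape[0], B):
    for j in range(0, image.shape[1], B):
        recon[i:i+B, j:j+B] = btc_block(image[i:i+B, j:j+B])
print("first block mean preserved:",
      np.allclose(image[:B, :B].mean(), recon[:B, :B].mean()))
```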

  6. A stitch in time saves nine: suture technique does not affect intestinal growth in a young, growing animal model.

    PubMed

    Gurien, Lori A; Wyrick, Deidre L; Smith, Samuel D; Maxson, R Todd

    2016-05-01

    Although this issue remains unexamined, pediatric surgeons commonly use simple interrupted suture for bowel anastomosis, as it is thought to improve intestinal growth postoperatively compared to continuous running suture. However, effects on intestinal growth are unclear. We compared intestinal growth using different anastomotic techniques during the postoperative period in young rats. Young, growing rats underwent small bowel transection and anastomosis using either simple interrupted or continuous running technique. At 7 weeks postoperatively, after a fourfold growth, the anastomotic site was resected. Diameters and burst pressures were measured. Thirteen rats underwent anastomosis with the simple interrupted technique and sixteen with the continuous running method. No differences were found in body weight at the first (102.46 vs 109.75 g) or second operations (413.85 vs 430.63 g). Neither the diameters (0.69 vs 0.79 cm) nor burst pressures were statistically different, although the calculated circumference was smaller in the simple interrupted group (2.18 vs 2.59 cm; p=0.03). No ruptures occurred at the anastomotic line. This pilot study is the first to compare continuous running to simple interrupted intestinal anastomosis in a pediatric model and showed no difference in growth. Adopting continuous running techniques for bowel anastomosis in young children may lead to faster operative time without affecting intestinal growth. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Large ensemble modeling of last deglacial retreat of the West Antarctic Ice Sheet: comparison of simple and advanced statistical techniques

    NASA Astrophysics Data System (ADS)

    Pollard, D.; Chang, W.; Haran, M.; Applegate, P.; DeConto, R.

    2015-11-01

    A 3-D hybrid ice-sheet model is applied to the last deglacial retreat of the West Antarctic Ice Sheet over the last ~ 20 000 years. A large ensemble of 625 model runs is used to calibrate the model to modern and geologic data, including reconstructed grounding lines, relative sea-level records, elevation-age data and uplift rates, with an aggregate score computed for each run that measures overall model-data misfit. Two types of statistical methods are used to analyze the large-ensemble results: simple averaging weighted by the aggregate score, and more advanced Bayesian techniques involving Gaussian process-based emulation and calibration, and Markov chain Monte Carlo. Results for best-fit parameter ranges and envelopes of equivalent sea-level rise with the simple averaging method agree quite well with the more advanced techniques, but only for a large ensemble with full factorial parameter sampling. Best-fit parameter ranges confirm earlier values expected from prior model tuning, including large basal sliding coefficients on modern ocean beds. Each run is extended 5000 years into the "future" with idealized ramped climate warming. In the majority of runs with reasonable scores, this produces grounding-line retreat deep into the West Antarctic interior, and the analysis provides sea-level-rise envelopes with well defined parametric uncertainty bounds.

  8. Molecular Analysis of Date Palm Genetic Diversity Using Random Amplified Polymorphic DNA (RAPD) and Inter-Simple Sequence Repeats (ISSRs).

    PubMed

    El Sharabasy, Sherif F; Soliman, Khaled A

    2017-01-01

    The date palm is an ancient domesticated plant with great diversity and has been cultivated in the Middle East and North Africa for at least 5000 years. Date palm cultivars are classified based on the fruit moisture content, as dry, semidry, and soft dates. There are a number of biochemical and molecular techniques available for characterization of the date palm variation. This chapter focuses on the DNA-based marker techniques random amplified polymorphic DNA (RAPD) and inter-simple sequence repeats (ISSR), in addition to biochemical markers based on isozyme analysis. These techniques coupled with appropriate statistical tools proved useful for determining phylogenetic relationships among date palm cultivars and provide information resources for date palm gene banks.

  9. [Digital radiography in young children. Considerations based on experiences in practice].

    PubMed

    Berkhout, W E R; Mileman, P A; Weerheijm, K L

    2004-10-01

    In dentistry, digital radiology techniques, such as a charge-coupled device and a storage phosphor plate, are gaining popularity. It was the objective of this study to assess the importance of the advantages and disadvantages of digital radiology techniques for bitewing radiography in young children, when compared to conventional film. A group of dentists received a questionnaire regarding their experiences with digital radiology techniques or conventional films among young children. Using the Simple Multi-Attributive Rating Technique (SMART), a final weighted score was calculated for the charge-coupled device, the phosphor plate, and conventional film. The scores were 7.40, 7.38, and 6.98, respectively. The differences were not statistically significant (p > 0.47). It could be concluded that, on the basis of experiences in practice, there are no statistically significant preferences for the use of digital radiology techniques for bitewing radiography in young children.
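
    A SMART-style final score is simply a weighted sum of attribute ratings. The sketch below illustrates that calculation with invented attributes, weights, and ratings; it does not reproduce the questionnaire or the published scores.

```python
# SMART-style weighted scoring with hypothetical attributes and ratings.
weights = {"image quality": 0.4, "dose": 0.3, "handling in children": 0.3}  # sum to 1

ratings = {  # hypothetical per-attribute ratings on a 0-10 scale
    "charge-coupled device": {"image quality": 7.5, "dose": 8.0, "handling in children": 6.7},
    "phosphor plate":        {"image quality": 7.0, "dose": 8.0, "handling in children": 7.2},
    "conventional film":     {"image quality": 7.2, "dose": 6.5, "handling in children": 7.2},
}

for technique, r in ratings.items():
    score = sum(weights[a] * r[a] for a in weights)
    print(f"{technique:22s} weighted score = {score:.2f}")
```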

  10. Mathematical neuroscience: from neurons to circuits to systems.

    PubMed

    Gutkin, Boris; Pinto, David; Ermentrout, Bard

    2003-01-01

    Applications of mathematics and computational techniques to our understanding of neuronal systems are provided. Reduction of membrane models to simplified canonical models demonstrates how neuronal spike-time statistics follow from simple properties of neurons. Averaging over space allows one to derive a simple model for the whisker barrel circuit and use this to explain and suggest several experiments. Spatio-temporal pattern formation methods are applied to explain the patterns seen in the early stages of drug-induced visual hallucinations.

  11. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.

  12. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
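
    For a stationary Markov chain, the information entropy production rate has the standard form EP = sum over i,j of pi_i P_ij log((pi_i P_ij)/(pi_j P_ji)), which vanishes exactly when detailed balance holds. The sketch below evaluates this formula on a small made-up transition matrix; it is a generic illustration, not the spike-train models of the paper.

```python
# Entropy production rate of a stationary Markov chain (small synthetic example).
import numpy as np

P = np.array([[0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6],
              [0.5, 0.3, 0.2]])

# Stationary distribution: left eigenvector of P with eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1.0))])
pi = pi / pi.sum()

ep = 0.0
for i in range(3):
    for j in range(3):
        if P[i, j] > 0 and P[j, i] > 0:
            ep += pi[i] * P[i, j] * np.log((pi[i] * P[i, j]) / (pi[j] * P[j, i]))
print(f"entropy production rate: {ep:.4f} nats per step")
```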

  13. A simple test of association for contingency tables with multiple column responses.

    PubMed

    Decady, Y J; Thomas, D R

    2000-09-01

    Loughin and Scherer (1998, Biometrics 54, 630-637) investigated tests of association in two-way tables when one of the categorical variables allows for multiple-category responses from individual respondents. Standard chi-squared tests are invalid in this case, and they developed a bootstrap test procedure that provides good control of test levels under the null hypothesis. This procedure and some others that have been proposed are computationally involved and are based on techniques that are relatively unfamiliar to many practitioners. In this paper, the methods introduced by Rao and Scott (1981, Journal of the American Statistical Association 76, 221-230) for analyzing complex survey data are used to develop a simple test based on a corrected chi-squared statistic.

  14. Mapping of terrain by computer clustering techniques using multispectral scanner data and using color aerial film

    NASA Technical Reports Server (NTRS)

    Smedes, H. W.; Linnerud, H. J.; Woolaver, L. B.; Su, M. Y.; Jayroe, R. R.

    1972-01-01

    Two clustering techniques were used for terrain mapping by computer of test sites in Yellowstone National Park. One test was made with multispectral scanner data using a composite technique which consists of (1) a strictly sequential statistical clustering which is a sequential variance analysis, and (2) a generalized K-means clustering. In this composite technique, the output of (1) is a first approximation of the cluster centers. This is the input to (2) which consists of steps to improve the determination of cluster centers by iterative procedures. Another test was made using the three emulsion layers of color-infrared aerial film as a three-band spectrometer. Relative film densities were analyzed using a simple clustering technique in three-color space. Important advantages of the clustering technique over conventional supervised computer programs are (1) human intervention, preparation time, and manipulation of data are reduced, (2) the computer map gives an unbiased indication of where best to select the reference ground control data, (3) use of easy-to-obtain, inexpensive film, and (4) the geometric distortions can be easily rectified by simple standard photogrammetric techniques.

  15. Visual Analysis of North Atlantic Hurricane Trends Using Parallel Coordinates and Statistical Techniques

    DTIC Science & Technology

    2008-07-07

    analyzing multivariate data sets. The system was developed using the Java Development Kit (JDK) version 1.5 and yields interactive performance on a... A script captures output from MATLAB's "regress" and "stepwisefit" utilities, which perform simple and stepwise regression, respectively.

  16. A New Technique for Personality Scale Construction. Preliminary Findings.

    ERIC Educational Resources Information Center

    Schaffner, Paul E.; Darlington, Richard B.

    Most methods of personality scale construction have clear statistical disadvantages. A hybrid method (Darlington and Bishop, 1966) was found to increase scale validity more than any other method, with large item pools. A simple modification of the Darlington-Bishop method (algebraically and conceptually similar to ridge regression, but…

  17. A Deterministic Annealing Approach to Clustering AIRS Data

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Braverman, Amy; Ruzmaikin, Alexander

    2012-01-01

    We will examine the validity of means and standard deviations as a basis for climate data products. We will explore the conditions under which these two simple statistics are inadequate summaries of the underlying empirical probability distributions by contrasting them with a nonparametric method, the Deterministic Annealing technique.

  18. Multiple Contact Dates and SARS Incubation Periods

    PubMed Central

    2004-01-01

    Many severe acute respiratory syndrome (SARS) patients have multiple possible incubation periods due to multiple contact dates. Multiple contact dates cannot be used in standard statistical analytic techniques, however. I present a simple spreadsheet-based method that uses multiple contact dates to calculate the possible incubation periods of SARS. PMID:15030684
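
    The spreadsheet idea reduces to simple date arithmetic: with several possible contact (exposure) dates per patient, the incubation period is only known to lie between (onset minus last contact) and (onset minus first contact). The sketch below illustrates that calculation with fictitious dates; it is not the author's spreadsheet.

```python
# Possible incubation-period ranges from multiple contact dates (fictitious data).
from datetime import date

patients = [
    {"onset": date(2003, 3, 20), "contacts": [date(2003, 3, 10), date(2003, 3, 14)]},
    {"onset": date(2003, 3, 25), "contacts": [date(2003, 3, 12), date(2003, 3, 18), date(2003, 3, 21)]},
]

for k, p in enumerate(patients, start=1):
    shortest = (p["onset"] - max(p["contacts"])).days
    longest = (p["onset"] - min(p["contacts"])).days
    print(f"patient {k}: possible incubation period {shortest}-{longest} days")
```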

  19. A short note on calculating the adjusted SAR index

    USDA-ARS?s Scientific Manuscript database

    A simple algebraic technique is presented for computing the adjusted SAR Index proposed by Suarez (1981). The statistical formula presented in this note facilitates the computation of the adjusted SAR without the use of either a look-up table, custom computer software or the need to compute exact a...

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilbert, Richard O.

    The application of statistics to environmental pollution monitoring studies requires a knowledge of statistical analysis methods particularly well suited to pollution data. This book fills that need by providing sampling plans, statistical tests, parameter estimation techniques, and references to pertinent publications. Most of the statistical techniques are relatively simple, and examples, exercises, and case studies are provided to illustrate procedures. The book is logically divided into three parts. Chapters 1, 2, and 3 are introductory chapters. Chapters 4 through 10 discuss field sampling designs and Chapters 11 through 18 deal with a broad range of statistical analysis procedures. Some statistical techniques given here are not commonly seen in statistics books. For example, see methods for handling correlated data (Sections 4.5 and 11.12), for detecting hot spots (Chapter 10), and for estimating a confidence interval for the mean of a lognormal distribution (Section 13.2). Also, Appendix B lists a computer code that estimates and tests for trends over time at one or more monitoring stations using nonparametric methods (Chapters 16 and 17). Unfortunately, some important topics could not be included because of their complexity and the need to limit the length of the book. For example, only brief mention could be made of time series analysis using Box-Jenkins methods and of kriging techniques for estimating spatial and spatial-time patterns of pollution, although multiple references on these topics are provided. Also, no discussion of methods for assessing risks from environmental pollution could be included.

  1. Restoration of labral anatomy and biomechanics after superior labral anterior-posterior repair: comparison of mattress versus simple technique.

    PubMed

    Boddula, Madhav R; Adamson, Gregory J; Gupta, Akash; McGarry, Michelle H; Lee, Thay Q

    2012-04-01

    Both simple and mattress repair techniques have been utilized with success for type II superior labral anterior-posterior (SLAP) lesions; however, direct anatomic and biomechanical comparisons of these techniques have yet to be clearly demonstrated. For type II SLAP lesions, the mattress suture repair technique will result in greater labral height and better position on the glenoid face and exhibit stronger biomechanical characteristics, when cyclically loaded and loaded to failure through the biceps, compared with the simple suture repair technique. Controlled laboratory study. Six matched pairs of cadaveric shoulders were dissected, and a clock face was created on the glenoid from 9 o'clock (posterior) to 3 o'clock (anterior). For the intact specimen, labral height and labral distance from the glenoid edge were measured using a MicroScribe. A SLAP lesion was then created from 10 o'clock to 2 o'clock. Lesions were repaired with two 3.0-mm BioSuture-Tak anchors placed at 11 o'clock and 1 o'clock. For each pair, a mattress repair was used for one shoulder, and a simple repair was used for the contralateral shoulder. After repair, labral height and labral distance from the glenoid edge were again measured. The specimens were then cyclically loaded and loaded to failure through the biceps using an Instron machine. A paired t test was used for statistical analysis. After mattress repair, a significant increase in labral height occurred compared with intact from 2.5 ± 0.3 mm to 4.3 ± 0.3 mm at 11 o'clock (P = .013), 2.7 ± 0.5 mm to 4.2 ± 0.7 mm at 12:30 o'clock (P = .007), 3.1 ± 0.5 mm to 4.2 ± 0.7 mm at 1 o'clock (P = .006), and 2.8 ± 0.7 mm to 3.7 ± 0.8 mm at 1:30 o'clock (P = .037). There was no significant difference in labral height between the intact condition and after simple repair at any clock face position. Labral height was significantly increased in the mattress repairs compared with simple repairs at 11 o'clock (mean difference, 2.0 mm; P = .008) and 12:30 o'clock (mean difference, 1.3 mm; P = .044). Labral distance from the glenoid edge was not significantly different between techniques. No difference was observed between the mattress and simple repair techniques for all biomechanical parameters, except the simple technique had a higher load and energy absorbed at 2-mm displacement. The mattress technique created a greater labral height while maintaining similar biomechanical characteristics compared with the simple repair, with the exception of load and energy absorbed at 2-mm displacement, which was increased for the simple technique. Mattress repair for type II SLAP lesions creates a higher labral bumper compared with simple repairs, while both techniques resulted in similar biomechanical characteristics.

  2. A study of two statistical methods as applied to shuttle solid rocket booster expenditures

    NASA Technical Reports Server (NTRS)

    Perlmutter, M.; Huang, Y.; Graves, M.

    1974-01-01

    The state probability technique and the Monte Carlo technique are applied to finding shuttle solid rocket booster expenditure statistics. For a given attrition rate per launch, the probable number of boosters needed for a given mission of 440 launches is calculated. Several cases are considered, including the elimination of the booster after a maximum of 20 consecutive launches. Also considered is the case where the booster is composed of replaceable components with independent attrition rates. A simple cost analysis is carried out to indicate the number of boosters to build initially, depending on booster costs. Two statistical methods were applied in the analysis: (1) state probability method which consists of defining an appropriate state space for the outcome of the random trials, and (2) model simulation method or the Monte Carlo technique. It was found that the model simulation method was easier to formulate while the state probability method required less computing time and was more accurate.
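
    The second method in this record, Monte Carlo simulation, can be sketched as repeatedly simulating the 440-launch mission and counting boosters. The attrition probability, the single-booster-per-launch simplification, and the 20-use retirement rule as coded below are assumptions for illustration, not the study's actual model.

```python
# Monte Carlo sketch of booster expenditure over a 440-launch mission.
import numpy as np

rng = np.random.default_rng(8)

def boosters_needed(n_launches=440, p_loss=0.02, max_uses=20):
    boosters = 1                      # first booster built before launch 1
    uses = 0
    for launch in range(n_launches):
        uses += 1
        lost_or_retired = (rng.random() < p_loss) or (uses >= max_uses)
        if lost_or_retired and launch < n_launches - 1:
            boosters += 1             # replacement needed for the remaining launches
            uses = 0
    return boosters

trials = np.array([boosters_needed() for _ in range(5000)])
print(f"mean boosters needed: {trials.mean():.1f}")
print(f"95th percentile     : {np.percentile(trials, 95):.0f}")
```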

  3. A primer on the study of transitory dynamics in ecological series using the scale-dependent correlation analysis.

    PubMed

    Rodríguez-Arias, Miquel Angel; Rodó, Xavier

    2004-03-01

    Here we describe a practical, step-by-step primer to scale-dependent correlation (SDC) analysis. The analysis of transitory processes is an important but often neglected topic in ecological studies because only a few statistical techniques appear to detect temporary features accurately enough. We introduce here the SDC analysis, a statistical and graphical method to study transitory processes at any temporal or spatial scale. SDC analysis, thanks to the combination of conventional procedures and simple well-known statistical techniques, becomes an improved time-domain analogue of wavelet analysis. We use several simple synthetic series to describe the method, a more complex example, full of transitory features, to compare SDC and wavelet analysis, and finally we analyze some selected ecological series to illustrate the methodology. The SDC analysis of time series of copepod abundances in the North Sea indicates that ENSO primarily is the main climatic driver of short-term changes in population dynamics. SDC also uncovers some long-term, unexpected features in the population. Similarly, the SDC analysis of Nicholson's blowflies data locates where the proposed models fail and provides new insights about the mechanism that drives the apparent vanishing of the population cycle during the second half of the series.
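
    A simplified sketch of the core SDC idea: Pearson correlations computed in short windows (a chosen "scale") so that transitory couplings between two series show up locally. This illustrates only the windowing step, not the full SDC procedure of the primer; the series, window size, and planted coupling are synthetic.

```python
# Windowed (scale-dependent) correlation between two synthetic series.
import numpy as np

rng = np.random.default_rng(9)
n, scale = 300, 25
x = rng.standard_normal(n)
y = rng.standard_normal(n)
y[100:150] += 2.0 * x[100:150]        # transient coupling in the middle of the series

local_r = np.empty(n - scale + 1)
for start in range(n - scale + 1):
    xs, ys = x[start:start + scale], y[start:start + scale]
    local_r[start] = np.corrcoef(xs, ys)[0, 1]

strongest = int(np.argmax(np.abs(local_r)))
print(f"strongest local correlation r={local_r[strongest]:.2f} "
      f"in window starting at index {strongest}")
```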

  4. Intraosseous anesthesia with solution injection controlled by a computerized system versus conventional oral anesthesia: a preliminary study.

    PubMed

    Beneito-Brotons, Rut; Peñarrocha-Oltra, David; Ata-Ali, Javier; Peñarrocha, María

    2012-05-01

    To compare a computerized intraosseous anesthesia system with the conventional oral anesthesia techniques, and analyze the latency and duration of the anesthetic effect and patient preference. A single-blind prospective study was conducted between March 2007 and May 2008. Each patient was subjected to two anesthetic techniques: conventional and intraosseous using the Quicksleeper® system (DHT, Cholet, France). A split-mouth design was adopted in which each patient underwent treatment of a tooth with one of the techniques, and treatment of the homologous contralateral tooth with the other technique. The treatments consisted of restorations, endodontic procedures and simple extractions. The study series comprised 12 females and 18 males with a mean age of 36.8 years. The 30 subjects underwent a total of 60 anesthetic procedures. Intraosseous and conventional oral anesthesia caused discomfort during administration in 46.3% and 32.1% of the patients, respectively. The latency was 7.1±2.23 minutes for the conventional technique and 0.48±0.32 minutes for intraosseous anesthesia--the difference being statistically significant. The depth of the anesthetic effect was sufficient to allow the patients to tolerate the dental treatments. The duration of the anesthetic effect in soft tissues was 199.3 minutes with the conventional technique versus only 1.6 minutes with intraosseous anesthesia--the difference between the two techniques being statistically significant. Most of the patients (69.7%) preferred intraosseous anesthesia. The described intraosseous anesthetic system is effective, with a much shorter latency than the conventional technique, sufficient duration of anesthesia to perform the required dental treatments, and with a much lesser soft tissue anesthetic effect. Most of the patients preferred intraosseous anesthesia.

  5. Intraosseous anesthesia with solution injection controlled by a computerized system versus conventional oral anesthesia: A preliminary study

    PubMed Central

    Beneito-Brotons, Rut; Peñarrocha-Oltra, David; Ata-Ali, Javier

    2012-01-01

    Objective: To compare a computerized intraosseous anesthesia system with the conventional oral anesthesia techniques, and analyze the latency and duration of the anesthetic effect and patient preference. Design: A single-blind prospective study was conducted between March 2007 and May 2008. Each patient was subjected to two anesthetic techniques: conventional and intraosseous using the Quicksleeper® system (DHT, Cholet, France). A split-mouth design was adopted in which each patient underwent treatment of a tooth with one of the techniques, and treatment of the homologous contralateral tooth with the other technique. The treatments consisted of restorations, endodontic procedures and simple extractions. Results: The study series comprised 12 females and 18 males with a mean age of 36.8 years. The 30 subjects underwent a total of 60 anesthetic procedures. Intraosseous and conventional oral anesthesia caused discomfort during administration in 46.3% and 32.1% of the patients, respectively. The latency was 7.1±2.23 minutes for the conventional technique and 0.48±0.32 minutes for intraosseous anesthesia – the difference being statistically significant. The depth of the anesthetic effect was sufficient to allow the patients to tolerate the dental treatments. The duration of the anesthetic effect in soft tissues was 199.3 minutes with the conventional technique versus only 1.6 minutes with intraosseous anesthesia – the difference between the two techniques being statistically significant. Most of the patients (69.7%) preferred intraosseous anesthesia. Conclusions: The described intraosseous anesthetic system is effective, with a much shorter latency than the conventional technique, sufficient duration of anesthesia to perform the required dental treatments, and with a much lesser soft tissue anesthetic effect. Most of the patients preferred intraosseous anesthesia. Key words: Anesthesia, intraosseous, oral anesthesia, infiltrating, mandibular block, Quicksleeper®. PMID:22143722

  6. A Simple Simulation Technique for Nonnormal Data with Prespecified Skewness, Kurtosis, and Covariance Matrix.

    PubMed

    Foldnes, Njål; Olsson, Ulf Henning

    2016-01-01

    We present and investigate a simple way to generate nonnormal data using linear combinations of independent generator (IG) variables. The simulated data have prespecified univariate skewness and kurtosis and a given covariance matrix. In contrast to the widely used Vale-Maurelli (VM) transform, the obtained data are shown to have a non-Gaussian copula. We analytically obtain asymptotic robustness conditions for the IG distribution. We show empirically that popular test statistics in covariance analysis tend to reject true models more often under the IG transform than under the VM transform. This implies that overly optimistic evaluations of estimators and fit statistics in covariance structure analysis may be tempered by including the IG transform for nonnormal data generation. We provide an implementation of the IG transform in the R environment.

  7. Statistics for nuclear engineers and scientists. Part 1. Basic statistical inference

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beggs, W.J.

    1981-02-01

    This report is intended for the use of engineers and scientists working in the nuclear industry, especially at the Bettis Atomic Power Laboratory. It serves as the basis for several Bettis in-house statistics courses. The objectives of the report are to introduce the reader to the language and concepts of statistics and to provide a basic set of techniques to apply to problems of the collection and analysis of data. Part 1 covers subjects of basic inference. The subjects include: descriptive statistics; probability; simple inference for normally distributed populations, and for non-normal populations as well; comparison of two populations; the analysis of variance; quality control procedures; and linear regression analysis.

  8. A Diagrammatic Exposition of Regression and Instrumental Variables for the Beginning Student

    ERIC Educational Resources Information Center

    Foster, Gigi

    2009-01-01

    Some beginning students of statistics and econometrics have difficulty with traditional algebraic approaches to explaining regression and related techniques. For these students, a simple and intuitive diagrammatic introduction as advocated by Kennedy (2008) may prove a useful framework to support further study. The author presents a series of…

  9. An Analysis of Variance Framework for Matrix Sampling.

    ERIC Educational Resources Information Center

    Sirotnik, Kenneth

    Significant cost savings can be achieved with the use of matrix sampling in estimating population parameters from psychometric data. The statistical design is intuitively simple, using the framework of the two-way classification analysis of variance technique. For example, the mean and variance are derived from the performance of a certain grade…

  10. A Simple Method to Control Positive Baseline Trend within Data Nonoverlap

    ERIC Educational Resources Information Center

    Parker, Richard I.; Vannest, Kimberly J.; Davis, John L.

    2014-01-01

    Nonoverlap is widely used as a statistical summary of data; however, these analyses rarely correct unwanted positive baseline trend. This article presents and validates the graph rotation for overlap and trend (GROT) technique, a hand calculation method for controlling positive baseline trend within an analysis of data nonoverlap. GROT is…

  11. Density-based empirical likelihood procedures for testing symmetry of data distributions and K-sample comparisons.

    PubMed

    Vexler, Albert; Tanajian, Hovig; Hutson, Alan D

    In practice, parametric likelihood-ratio techniques are powerful statistical tools. In this article, we propose and examine novel and simple distribution-free test statistics that efficiently approximate parametric likelihood ratios to analyze and compare distributions of K groups of observations. Using the density-based empirical likelihood methodology, we develop a Stata package that applies to a test for symmetry of data distributions and compares K-sample distributions. Recognizing that recent statistical software packages do not sufficiently address K-sample nonparametric comparisons of data distributions, we propose a new Stata command, vxdbel, to execute exact density-based empirical likelihood-ratio tests using K samples. To calculate p-values of the proposed tests, we use the following methods: 1) a classical technique based on Monte Carlo p-value evaluations; 2) an interpolation technique based on tabulated critical values; and 3) a new hybrid technique that combines methods 1 and 2. The third, cutting-edge method is shown to be very efficient in the context of exact-test p-value computations. This Bayesian-type method considers tabulated critical values as prior information and Monte Carlo generations of test statistic values as data used to depict the likelihood function. In this case, a nonparametric Bayesian method is proposed to compute critical values of exact tests.
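
    The first of the three p-value strategies, classical Monte Carlo evaluation, can be sketched generically; the code below is not the vxdbel implementation, and the two-sample statistic, sample sizes, and observed value are hypothetical. The idea is simply to simulate the test statistic repeatedly under the null and report the proportion of simulated values at least as extreme as the observed one.

        import numpy as np

        rng = np.random.default_rng(0)

        def mc_pvalue(observed_stat, simulate_null_stat, n_sim=10_000):
            """Monte Carlo p-value: proportion of null-simulated statistics that
            are >= the observed statistic (the add-one correction keeps the
            estimate strictly positive)."""
            sims = np.array([simulate_null_stat() for _ in range(n_sim)])
            return (1 + np.sum(sims >= observed_stat)) / (n_sim + 1)

        def null_stat(n1=30, n2=40):
            """Toy two-sample statistic simulated under a null of equal means."""
            a, b = rng.normal(size=n1), rng.normal(size=n2)
            return abs(a.mean() - b.mean())

        print(mc_pvalue(observed_stat=0.6, simulate_null_stat=null_stat))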

  12. A computational visual saliency model based on statistics and machine learning.

    PubMed

    Lin, Ru-Je; Lin, Wei-Song

    2014-08-01

    Identifying the type of stimuli that attracts human visual attention has been an appealing topic for scientists for many years. In particular, marking the salient regions in images is useful for both psychologists and many computer vision applications. In this paper, we propose a computational approach for producing saliency maps using statistics and machine learning methods. Based on four assumptions, three properties (Feature-Prior, Position-Prior, and Feature-Distribution) can be derived and combined by a simple intersection operation to obtain a saliency map. These properties are implemented by a similarity computation, support vector regression (SVR) technique, statistical analysis of training samples, and information theory using low-level features. This technique is able to learn the preferences of human visual behavior while simultaneously considering feature uniqueness. Experimental results show that our approach performs better in predicting human visual attention regions than 12 other models in two test databases. © 2014 ARVO.

  13. A statistical theory for sound radiation and reflection from a duct

    NASA Technical Reports Server (NTRS)

    Cho, Y. C.

    1979-01-01

    A new analytical method is introduced for the study of the sound radiation and reflection from the open end of a duct. The sound is thought of as an aggregation of the quasiparticles-phonons. The motion of the latter is described in terms of the statistical distribution, which is derived from the classical wave theory. The results are in good agreement with the solutions obtained using the Wiener-Hopf technique when the latter is applicable, but the new method is simple and provides straightforward physical interpretation of the problem. Furthermore, it is applicable to a problem involving a duct in which modes are difficult to determine or cannot be defined at all, whereas the Wiener-Hopf technique is not.

  14. On representing the prognostic value of continuous gene expression biomarkers with the restricted mean survival curve.

    PubMed

    Eng, Kevin H; Schiller, Emily; Morrell, Kayla

    2015-11-03

    Researchers developing biomarkers for cancer prognosis from quantitative gene expression data are often faced with an odd methodological discrepancy: while Cox's proportional hazards model, the appropriate and popular technique, produces a continuous and relative risk score, it is hard to cast the estimate in clear clinical terms like median months of survival and percent of patients affected. To produce a familiar Kaplan-Meier plot, researchers commonly make the decision to dichotomize a continuous (often unimodal and symmetric) score. It is well known in the statistical literature that this procedure induces significant bias. We illustrate the liabilities of common techniques for categorizing a risk score and discuss alternative approaches. We promote the use of the restricted mean survival (RMS) and the corresponding RMS curve that may be thought of as an analog to the best fit line from simple linear regression. Continuous biomarker workflows should be modified to include the more rigorous statistical techniques and descriptive plots described in this article. All statistics discussed can be computed via standard functions in the Survival package of the R statistical programming language. Example R language code for the RMS curve is presented in the appendix.
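
    A minimal sketch of the restricted mean survival idea, independent of the R Survival package the authors use: estimate the Kaplan-Meier curve and integrate it up to a horizon tau, which gives the restricted mean in the same time units as the data. The survival times and censoring fraction below are synthetic placeholders.

        import numpy as np

        def km_curve(time, event):
            """Kaplan-Meier survival estimate evaluated at the event times."""
            time, event = np.asarray(time, float), np.asarray(event, int)
            t_out, surv, s = [], [], 1.0
            for t in np.unique(time):
                d = np.sum((time == t) & (event == 1))  # events at t
                n = np.sum(time >= t)                   # number still at risk
                if d > 0:
                    s *= 1.0 - d / n
                    t_out.append(t)
                    surv.append(s)
            return np.array(t_out), np.array(surv)

        def restricted_mean_survival(time, event, tau):
            """Area under the Kaplan-Meier step function from 0 to tau."""
            t, s = km_curve(time, event)
            grid = np.concatenate(([0.0], t[t < tau], [tau]))
            steps = np.concatenate(([1.0], s[t < tau]))  # S(t) on each interval
            return np.sum(steps * np.diff(grid))

        rng = np.random.default_rng(2)
        t = rng.exponential(12, 200)                     # e.g. months
        e = (rng.uniform(size=200) < 0.8).astype(int)    # roughly 20% censored
        print(restricted_mean_survival(t, e, tau=24.0))  # mean months lived of the first 24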

  15. Directional spectra of ocean waves from microwave backscatter: A physical optics solution with application to the short-pulse and two-frequency measurement techniques

    NASA Technical Reports Server (NTRS)

    Jackson, F. C.

    1979-01-01

    Two simple microwave radar techniques that are potentially capable of providing routine satellite measurements of the directional spectrum of ocean waves were developed. One technique, the short pulse technique, makes use of very short pulses to resolve ocean surface wave contrast features in the range direction; the other technique, the two frequency correlation technique makes use of coherency in the transmitted waveform to detect the large ocean wave contrast modulation as a beat or mixing frequency in the power backscattered at two closely separated microwave frequencies. A frequency domain analysis of the short pulse and two frequency systems shows that the two measurement systems are essentially duals; they each operate on the generalized (three frequency) fourth-order statistical moment of the surface transfer function in different, but symmetrical ways, and they both measure the same directional contrast modulation spectrum. A three dimensional physical optics solution for the fourth-order moment was obtained for backscatter in the near vertical, specular regime, assuming Gaussian surface statistics.

  16. Collisional-radiative switching - A powerful technique for converging non-LTE calculations

    NASA Technical Reports Server (NTRS)

    Hummer, D. G.; Voels, S. A.

    1988-01-01

    A very simple technique has been developed to converge statistical equilibrium and model atmospheric calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K, and log g = 2.9.

  17. Statistical Techniques to Analyze Pesticide Data Program Food Residue Observations.

    PubMed

    Szarka, Arpad Z; Hayworth, Carol G; Ramanarayanan, Tharacad S; Joseph, Robert S I

    2018-06-26

    The U.S. EPA conducts dietary-risk assessments to ensure that levels of pesticides on food in the U.S. food supply are safe. Often these assessments utilize conservative residue estimates, maximum residue levels (MRLs), and a high-end estimate derived from registrant-generated field-trial data sets. A more realistic estimate of consumers' pesticide exposure from food may be obtained by utilizing residues from food-monitoring programs, such as the Pesticide Data Program (PDP) of the U.S. Department of Agriculture. A substantial portion of food-residue concentrations in PDP monitoring programs are below the limits of detection (left-censored), which makes the comparison of regulatory-field-trial and PDP residue levels difficult. In this paper, we present a novel adaption of established statistical techniques, the Kaplan-Meier estimator (K-M), the robust regression on ordered statistic (ROS), and the maximum-likelihood estimator (MLE), to quantify the pesticide-residue concentrations in the presence of heavily censored data sets. The examined statistical approaches include the most commonly used parametric and nonparametric methods for handling left-censored data that have been used in the fields of medical and environmental sciences. This work presents a case study in which data of thiamethoxam residue on bell pepper generated from registrant field trials were compared with PDP-monitoring residue values. The results from the statistical techniques were evaluated and compared with commonly used simple substitution methods for the determination of summary statistics. It was found that the maximum-likelihood estimator (MLE) is the most appropriate statistical method to analyze this residue data set. Using the MLE technique, the data analyses showed that the median and mean PDP bell pepper residue levels were approximately 19 and 7 times lower, respectively, than the corresponding statistics of the field-trial residues.
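
    The censored-data likelihood idea can be illustrated with a hedged sketch that assumes a lognormal residue distribution and a single limit of detection; the residue values, censoring count, and LOD below are hypothetical, and the paper's actual analysis may differ. Detected values contribute the log-density, and non-detects contribute the log of the distribution function at the detection limit.

        import numpy as np
        from scipy import optimize, stats

        def censored_lognormal_mle(detects, n_nondetect, lod):
            """MLE of a lognormal fitted to left-censored residue data: detects
            contribute the log-density, non-detects the log-CDF at the LOD."""
            x = np.log(np.asarray(detects, float))

            def neg_loglik(params):
                mu, log_sigma = params
                sigma = np.exp(log_sigma)                # keeps sigma positive
                ll = np.sum(stats.norm.logpdf(x, mu, sigma))
                ll += n_nondetect * stats.norm.logcdf(np.log(lod), mu, sigma)
                return -ll

            res = optimize.minimize(neg_loglik, x0=[x.mean(), np.log(x.std() + 1e-6)])
            mu, sigma = res.x[0], np.exp(res.x[1])
            return np.exp(mu + sigma**2 / 2), np.exp(mu)  # (mean, median)

        # hypothetical residues (ppm) with 12 non-detects below a 0.01 LOD
        detects = [0.012, 0.02, 0.035, 0.05, 0.011, 0.08, 0.015, 0.024]
        print(censored_lognormal_mle(detects, n_nondetect=12, lod=0.01))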

  18. Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning.

    PubMed

    Xu, Zhoubing; Burke, Ryan P; Lee, Christopher P; Baucom, Rebeccah B; Poulose, Benjamin K; Abramson, Richard G; Landman, Bennett A

    2015-08-01

    Abdominal segmentation on clinically acquired computed tomography (CT) has been a challenging problem given the inter-subject variance of human abdomens and complex 3-D relationships among organs. Multi-atlas segmentation (MAS) provides a potentially robust solution by leveraging label atlases via image registration and statistical fusion. We posit that the efficiency of atlas selection requires further exploration in the context of substantial registration errors. The selective and iterative method for performance level estimation (SIMPLE) method is a MAS technique integrating atlas selection and label fusion that has proven effective for prostate radiotherapy planning. Herein, we revisit atlas selection and fusion techniques for segmenting 12 abdominal structures using clinically acquired CT. Using a re-derived SIMPLE algorithm, we show that performance on multi-organ classification can be improved by accounting for exogenous information through Bayesian priors (so called context learning). These innovations are integrated with the joint label fusion (JLF) approach to reduce the impact of correlated errors among selected atlases for each organ, and a graph cut technique is used to regularize the combined segmentation. In a study of 100 subjects, the proposed method outperformed other comparable MAS approaches, including majority vote, SIMPLE, JLF, and the Wolz locally weighted vote technique. The proposed technique provides consistent improvement over state-of-the-art approaches (median improvement of 7.0% and 16.2% in DSC over JLF and Wolz, respectively) and moves toward efficient segmentation of large-scale clinically acquired CT data for biomarker screening, surgical navigation, and data mining. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Generic Techniques for the Calibration of Robots with Application of the 3-D Fixtures and Statistical Technique on the PUMA 500 and ARID Robots

    NASA Technical Reports Server (NTRS)

    Tawfik, Hazem

    1991-01-01

    A relatively simple, inexpensive, and generic technique that could be used in both laboratories and some operation site environments is introduced at the Robotics Applications and Development Laboratory (RADL) at Kennedy Space Center (KSC). In addition, this report gives a detailed explanation of the set up procedure, data collection, and analysis using this new technique that was developed at the State University of New York at Farmingdale. The technique was used to evaluate the repeatability, accuracy, and overshoot of the Unimate Industrial Robot, PUMA 500. The data were statistically analyzed to provide an insight into the performance of the systems and components of the robot. Also, the same technique was used to check the forward kinematics against the inverse kinematics of RADL's PUMA robot. Recommendations were made for RADL to use this technique for laboratory calibration of the currently existing robots such as the ASEA, high speed controller, Automated Radiator Inspection Device (ARID) etc. Also, recommendations were made to develop and establish other calibration techniques that will be more suitable for site calibration environment and robot certification.

  20. One-Year Outcome Comparison of Laparoscopic, Robotic, and Robotic Intrafascial Simple Prostatectomy for Benign Prostatic Hyperplasia.

    PubMed

    Martín Garzón, Oscar Dario; Azhar, Raed A; Brunacci, Leonardo; Ramirez-Troche, Nelson Emilio; Medina Navarro, Luis; Hernández, Luis Cesar; Nuñez Bragayrac, Luciano; Sotelo Noguera, René Javier

    2016-03-01

    To compare preoperative, intraoperative, and postoperative variables at 1, 6, and 12 months after laparoscopic simple prostatectomy (LSP), robotic simple prostatectomy (RSP), and intrafascial robotic simple prostatectomy (IF-RSP). From January 2003 to November 2014, 315 simple prostatectomies were performed using three techniques, LSP, RSP, and IF-RSP; of the patients who underwent these procedures, 236 met the inclusion criteria for this study. No statistically significant difference (SSD) was found in preoperative or perioperative variables. Of the postoperative variables that were analyzed, an SSD (p > 0.01) in prostate-specific antigen levels was found, with levels of 0.07 ± 1.1 ng/mL following IF-RSP, and the detection rate of prostate adenocarcinoma (26%) and high-grade prostatic intraepithelial neoplasia (HG-PIN; 12%) was higher for IF-RSP. We also found that lower International Prostate Symptom Scores (IPSS) were associated with LSP, at 4.8 ± 3.2. Erectile function was reduced in IF-RSP patients in the first 6 months after surgery but was similar in all patient groups at 12 months after surgery; continence and other measured parameters were also similar at 12 months for all three techniques. The IF-RSP technique is safe and effective, with results at 1-year follow-up for continence, IPSS, and Sexual Health Inventory for Men scores similar to those for the LSP and RSP techniques. IF-RSP also offers the advantages that it does not require postoperative irrigation, has an increased ability to detect prostate cancer (CA) and HG-PIN, and avoids the risk of future cancer and subsequent reintervention for possible new prostate growth.

  1. A common base method for analysis of qPCR data and the application of simple blocking in qPCR experiments.

    PubMed

    Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J

    2017-12-01

    qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E) · Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
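
    Under one reading of the abstract, the efficiency-weighted Cq can be computed and compared between groups as in the sketch below; the sign convention for normalized expression, the unpaired t-test, and all Cq values and efficiencies are assumptions for illustration, not the authors' code.

        import numpy as np
        from scipy import stats

        def weighted_cq(cq, efficiency):
            """Efficiency-weighted Cq: log10(E) * Cq, a log-scale quantity."""
            return np.log10(efficiency) * np.asarray(cq, float)

        def log10_norm_expr(cq_target, e_target, cq_ref, e_ref):
            """log10 of target expression normalized to a reference gene;
            higher Cq means less starting template, hence the sign."""
            return weighted_cq(cq_ref, e_ref) - weighted_cq(cq_target, e_target)

        # hypothetical Cq values for control and treated groups
        ctrl = log10_norm_expr([24.1, 23.8, 24.5], 1.92, [18.0, 18.2, 17.9], 1.95)
        trt = log10_norm_expr([22.3, 22.0, 22.6], 1.90, [18.1, 18.0, 18.3], 1.95)
        t_stat, p = stats.ttest_ind(trt, ctrl)           # compared in the log scale
        print(f"log10 fold change = {trt.mean() - ctrl.mean():.2f}, p = {p:.3f}")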

  2. Information Visualization Techniques for Effective Cross-Discipline Communication

    NASA Astrophysics Data System (ADS)

    Fisher, Ward

    2013-04-01

    Collaboration between research groups in different fields is a common occurrence, but it can often be frustrating due to the absence of a common vocabulary. This lack of a shared context can make expressing important concepts and discussing results difficult. This problem may be further exacerbated when communicating to an audience of laypeople. Without a clear frame of reference, simple concepts are often rendered difficult-to-understand at best, and unintelligible at worst. An easy way to alleviate this confusion is with the use of clear, well-designed visualizations to illustrate an idea, process or conclusion. There exist a number of well-described machine-learning and statistical techniques which can be used to illuminate the information present within complex high-dimensional datasets. Once the information has been separated from the data, clear communication becomes a matter of selecting an appropriate visualization. Ideally, the visualization is information-rich but data-scarce. Anything from a simple bar chart, to a line chart with confidence intervals, to an animated set of 3D point-clouds can be used to render a complex idea as an easily understood image. Several case studies will be presented in this work. In the first study, we will examine how a complex statistical analysis was applied to a high-dimensional dataset, and how the results were succinctly communicated to an audience of microbiologists and chemical engineers. Next, we will examine a technique used to illustrate the concept of the singular value decomposition, as used in the field of computer vision, to a lay audience of undergraduate students from mixed majors. We will then examine a case where a simple animated line plot was used to communicate an approach to signal decomposition, and will finish with a discussion of the tools available to create these visualizations.

  3. Comparison of transform coding methods with an optimal predictor for the data compression of digital elevation models

    NASA Technical Reports Server (NTRS)

    Lewis, Michael

    1994-01-01

    Statistical encoding techniques reduce the number of bits required to encode a set of symbols by deriving the codes from the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations by a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
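
    The gain from prediction can be illustrated with a toy sketch. The three-neighbour predictor, synthetic terrain, and use of empirical entropy as a stand-in for Huffman code length are illustrative assumptions, not the report's 8-point Lagrange-optimized predictor or its actual coder; the point is only that corrections concentrate near zero and so carry fewer bits per symbol than raw elevations.

        import numpy as np

        def shannon_entropy(values):
            """Empirical Shannon entropy (bits/symbol) of integer-valued data."""
            _, counts = np.unique(values, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def triangular_corrections(dem):
            """Corrections from a simple 3-point predictor that extrapolates
            z[i, j] from the west, north, and north-west neighbours."""
            z = dem.astype(int)
            pred = z[1:, :-1] + z[:-1, 1:] - z[:-1, :-1]   # planar extrapolation
            return (z[1:, 1:] - pred).ravel()

        rng = np.random.default_rng(3)
        x, y = np.meshgrid(np.arange(200), np.arange(200))
        dem = (20 * np.sin(x / 30.0) + 0.1 * y + rng.normal(0, 0.5, x.shape)).round()

        print("raw elevations:", shannon_entropy(dem.ravel().astype(int)), "bits/symbol")
        print("corrections   :", shannon_entropy(triangular_corrections(dem)), "bits/symbol")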

  4. Advanced statistics: linear regression, part II: multiple linear regression.

    PubMed

    Marill, Keith A

    2004-01-01

    The applications of simple linear regression in medical research are limited, because in most situations, there are multiple relevant predictor variables. Univariate statistical techniques such as simple linear regression use a single predictor variable, and they often may be mathematically correct but clinically misleading. Multiple linear regression is a mathematical technique used to model the relationship between multiple independent predictor variables and a single dependent outcome variable. It is used in medical research to model observational data, as well as in diagnostic and therapeutic studies in which the outcome is dependent on more than one factor. Although the technique generally is limited to data that can be expressed with a linear function, it benefits from a well-developed mathematical framework that yields unique solutions and exact confidence intervals for regression coefficients. Building on Part I of this series, this article acquaints the reader with some of the important concepts in multiple regression analysis. These include multicollinearity, interaction effects, and an expansion of the discussion of inference testing, leverage, and variable transformations to multivariate models. Examples from the first article in this series are expanded on using a primarily graphic, rather than mathematical, approach. The importance of the relationships among the predictor variables and the dependence of the multivariate model coefficients on the choice of these variables are stressed. Finally, concepts in regression model building are discussed.
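
    A generic numerical sketch of multiple linear regression with exact confidence intervals; it is not taken from the article, and the predictors ("age", "dose") and data are hypothetical.

        import numpy as np
        from scipy import stats

        def ols_fit(X, y):
            """Ordinary least squares for y = b0 + b1*x1 + ...; returns the
            coefficient estimates and their exact 95% confidence intervals."""
            X = np.column_stack([np.ones(len(X)), X])      # prepend an intercept
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            dof = len(y) - X.shape[1]
            sigma2 = resid @ resid / dof                   # residual variance
            se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
            tcrit = stats.t.ppf(0.975, dof)
            return beta, np.column_stack([beta - tcrit * se, beta + tcrit * se])

        rng = np.random.default_rng(4)
        age, dose = rng.uniform(20, 80, 200), rng.uniform(0, 10, 200)
        outcome = 5.0 + 0.3 * age - 1.2 * dose + rng.normal(0, 3, 200)
        beta, ci = ols_fit(np.column_stack([age, dose]), outcome)
        print("coefficients:", beta.round(2))              # roughly [5.0, 0.3, -1.2]
        print("95% CIs:", ci.round(2).tolist())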

  5. Interocular suppression

    NASA Astrophysics Data System (ADS)

    Tuna, Ana Rita; Almeida Neves Carrega, Filipa; Nunes, Amélia Fernandes

    2017-08-01

    The objective of this work is to quantify the suppressive imbalance, based on the manipulation of ocular luminance, between a group of subjects with normal binocular vision and a group of subjects with amblyopia. The results reveal statistically significant differences in interocular dominance between the two groups, evidencing a greater suppressive imbalance in amblyopic subjects. The technique used proved to be a simple, easy-to-apply, and economical method for quantifying ocular dominance. It is presented as a technique with the potential for following up subjects with a marked dominance in one eye that makes fusion difficult.

  6. Seasonal ENSO forecasting: Where does a simple model stand amongst other operational ENSO models?

    NASA Astrophysics Data System (ADS)

    Halide, Halmar

    2017-01-01

    We apply a simple linear multiple regression model called IndOzy for predicting ENSO up to 7 seasonal lead times. The model uses five predictors, the past seasonal Niño 3.4 ENSO indices derived from chaos theory, and is rolling-validated to give a one-step-ahead forecast. The model skill was evaluated against data from the May-June-July (MJJ) 2003 season to the November-December-January (NDJ) 2015/2016 season. Three skill measures (Pearson correlation, RMSE, and Euclidean distance) were used for forecast verification. The skill of this simple model was then compared to those of combined statistical and dynamical models compiled at the IRI (International Research Institute) website. It was found that the simple model produced useful ENSO predictions only up to 3 seasonal leads, while the IRI statistical and dynamical models were still useful up to 4 and 6 seasonal leads, respectively. Even with its short-range seasonal prediction skill, however, the simple model still has the potential to give ENSO-derived tailored products such as probabilistic measures of precipitation and air temperature. Both of these meteorological conditions affect the presence of wild-land fire hot-spots in Sumatera and Kalimantan. It is suggested that, to improve its long-range skill, the simple IndOzy model should incorporate a nonlinear model such as an artificial neural network technique.
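
    The general flavour of such a lagged-predictor, rolling-validated forecast can be sketched as follows. This is not the IndOzy model: the AR(1) series merely stands in for the seasonal Niño 3.4 index, and the training-window length and skill scores are illustrative choices.

        import numpy as np

        def rolling_lag_forecast(series, n_lags=5, lead=1, train_len=60):
            """One-season-ahead forecasts from a linear regression on the previous
            n_lags values, refit on a rolling window before every forecast."""
            series = np.asarray(series, float)
            preds, actual = [], []
            for t in range(train_len, len(series) - lead):
                hist = series[:t]
                rows = [hist[i - n_lags:i] for i in range(n_lags, len(hist) - lead + 1)]
                X = np.column_stack([np.ones(len(rows)), np.array(rows)])[-train_len:]
                y = hist[n_lags + lead - 1:][-train_len:]
                beta, *_ = np.linalg.lstsq(X, y, rcond=None)
                preds.append(np.concatenate([[1.0], series[t - n_lags:t]]) @ beta)
                actual.append(series[t + lead - 1])
            preds, actual = np.array(preds), np.array(actual)
            corr = np.corrcoef(preds, actual)[0, 1]
            rmse = np.sqrt(np.mean((preds - actual) ** 2))
            return corr, rmse

        # synthetic stand-in for a seasonal index (AR(1)-like persistence)
        rng = np.random.default_rng(5)
        x = np.zeros(200)
        for i in range(1, 200):
            x[i] = 0.8 * x[i - 1] + rng.normal(0, 0.3)
        print(rolling_lag_forecast(x, n_lags=5, lead=1))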

  7. Statistical assessment of the learning curves of health technologies.

    PubMed

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis, generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second was a case series of consecutive laparoscopic cholecystectomy procedures performed by ten surgeons; the third was randomised trial data derived from the laparoscopic procedure arm of a multicentre trial of groin hernia repair, supplemented by data from non-randomised operations performed during the trial. RESULTS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: Of 4571 abstracts identified, 272 (6%) were later included in the study after review of the full paper. Some 51% of studies assessed a surgical minimal access technique and 95% were case series. The statistical method used most often (60%) was splitting the data into consecutive parts (such as halves or thirds), with only 14% attempting a more formal statistical analysis. The reporting of the studies was poor, with 31% giving no details of data collection methods. RESULTS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: Of 9431 abstracts assessed, 115 (1%) were deemed appropriate for further investigation and, of these, 18 were included in the study. All of the methods for complex data sets were identified in the non-clinical literature. These were discriminant analysis, two-stage estimation of learning rates, generalised estimating equations, multilevel models, latent curve models, time series models and stochastic parameter models. In addition, eight new shapes of learning curves were identified. RESULTS - TESTING OF STATISTICAL METHODS: No one particular shape of learning curve performed significantly better than another. 
The performance of 'operation time' as a proxy for learning differed between the three procedures. Multilevel modelling using the laparoscopic cholecystectomy data demonstrated and measured surgeon-specific and confounding effects. The inclusion of non-randomised cases, despite the possible limitations of the method, enhanced the interpretation of learning effects. CONCLUSIONS - HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW: The statistical methods used for assessing learning effects in health technology assessment have been crude and the reporting of studies poor. CONCLUSIONS - NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH: A number of statistical methods for assessing learning effects were identified that had not hitherto been used in health technology assessment. There was a hierarchy of methods for the identification and measurement of learning, and the more sophisticated methods for both have had little if any use in health technology assessment. This demonstrated the value of considering fields outside clinical research when addressing methodological issues in health technology assessment. CONCLUSIONS - TESTING OF STATISTICAL METHODS: It has been demonstrated that the portfolio of techniques identified can enhance investigations of learning curve effects. (ABSTRACT TRUNCATED)

  8. Advanced Statistics for Exotic Animal Practitioners.

    PubMed

    Hodsoll, John; Hellier, Jennifer M; Ryan, Elizabeth G

    2017-09-01

    Correlation and regression assess the association between 2 or more variables. This article reviews the core knowledge needed to understand these analyses, moving from visual analysis in scatter plots through correlation, simple and multiple linear regression, and logistic regression. Correlation estimates the strength and direction of a relationship between 2 variables. Regression can be considered more general and quantifies the numerical relationships between an outcome and 1 or multiple variables in terms of a best-fit line, allowing predictions to be made. Each technique is discussed with examples and the statistical assumptions underlying their correct application. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Simplified estimation of age-specific reference intervals for skewed data.

    PubMed

    Wright, E M; Royston, P

    1997-12-30

    Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach proposed by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
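
    A stripped-down sketch of the idea, assuming normality and omitting the skewness modelling of the full method; the data, polynomial degree, and variable names are placeholders. The mean and SD are modelled as smooth functions of age, and centiles are formed as mean plus or minus z times SD.

        import numpy as np
        from scipy import stats

        def age_specific_interval(age, y, centile=0.95, degree=2):
            """Simplified parametric reference interval: polynomial models for the
            mean and SD as functions of age, interval = mean +/- z * SD."""
            mean_coef = np.polyfit(age, y, degree)
            resid = y - np.polyval(mean_coef, age)
            # for normal residuals E|resid| = SD * sqrt(2/pi), hence the rescaling
            sd_coef = np.polyfit(age, np.abs(resid) * np.sqrt(np.pi / 2), degree)
            z = stats.norm.ppf(0.5 + centile / 2)

            def interval(a):
                m, s = np.polyval(mean_coef, a), np.polyval(sd_coef, a)
                return m - z * s, m + z * s

            return interval

        # hypothetical screening data whose spread grows with gestational age
        rng = np.random.default_rng(6)
        age = rng.uniform(15, 40, 500)
        y = 10 + 0.8 * age + rng.normal(0, 0.1 * age)
        ri = age_specific_interval(age, y)
        print([np.round(ri(a), 1) for a in (20, 30, 40)])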

  10. Determination of Flavonoids, Phenolic Acids, and Xanthines in Mate Tea (Ilex paraguariensis St.-Hil.)

    PubMed Central

    Bojić, Mirza; Simon Haas, Vicente; Maleš, Željan

    2013-01-01

    Raw materials, different food formulations, and dietary supplements of mate demand control of the content of bioactive substances, for which high-performance thin-layer chromatography (TLC), described here, presents a simple and rapid approach for detection as well as quantification. Using TLC densitometry, the following bioactive compounds were identified and quantified: chlorogenic acid (2.1 mg/g), caffeic acid (1.5 mg/g), rutin (5.2 mg/g), quercetin (2.2 mg/g), and kaempferol (4.5 mg/g). The results obtained with TLC densitometry for caffeine (5.4 mg/g) and theobromine (2.7 mg/g) show no statistical difference from the content of total xanthines (7.6 mg/g) obtained by UV-Vis spectrophotometry. Thus, TLC remains a technique of choice for simple and rapid analysis of a great number of samples, as well as a primary screening technique in plant analysis. PMID:23841023

  11. Scaling images using their background ratio. An application in statistical comparisons of images.

    PubMed

    Kalemis, A; Binnie, D; Bailey, D L; Flower, M A; Ott, R J

    2003-06-07

    Comparison of two medical images often requires image scaling as a pre-processing step. This is usually done with the scaling-to-the-mean or scaling-to-the-maximum techniques which, under certain circumstances, in quantitative applications may contribute a significant amount of bias. In this paper, we present a simple scaling method which assumes only that the most predominant values in the corresponding images belong to their background structure. The ratio of the two images to be compared is calculated and its frequency histogram is plotted. The scaling factor is given by the position of the peak in this histogram which belongs to the background structure. The method was tested against the traditional scaling-to-the-mean technique on simulated planar gamma-camera images which were compared using pixelwise statistical parametric tests. Both sensitivity and specificity for each condition were measured over a range of different contrasts and sizes of inhomogeneity for the two scaling techniques. The new method was found to preserve sensitivity in all cases while the traditional technique resulted in significant degradation of sensitivity in certain cases.
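
    The scaling rule itself can be sketched directly; the code below is a simplified illustration on synthetic images, not the authors' implementation. The voxel-wise ratio of the two images is histogrammed and the modal bin, assumed to come from the shared background structure, gives the scaling factor.

        import numpy as np

        def background_ratio_scale(img_a, img_b, bins=256):
            """Scaling factor from the mode of the pixel-wise ratio: the most
            frequent ratio is attributed to the dominant background structure."""
            mask = (img_a > 0) & (img_b > 0)              # avoid division by zero
            ratio = img_a[mask] / img_b[mask]
            counts, edges = np.histogram(ratio, bins=bins)
            peak = np.argmax(counts)
            return 0.5 * (edges[peak] + edges[peak + 1])  # centre of the modal bin

        # synthetic planar images: common background, one local difference
        rng = np.random.default_rng(7)
        base = rng.poisson(100, size=(128, 128)).astype(float)
        img_a = base.copy()
        img_b = 0.8 * base
        img_b[40:60, 40:60] *= 1.5                        # local "uptake" change
        print(background_ratio_scale(img_a, img_b))       # approximately 1 / 0.8 = 1.25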

  12. Basic principles of Hasse diagram technique in chemistry.

    PubMed

    Brüggemann, Rainer; Voigt, Kristina

    2008-11-01

    Principles of partial order applied to ranking are explained. The Hasse diagram technique (HDT) is the application of partial order theory based on a data matrix. In this paper, HDT is introduced in a stepwise procedure, and some elementary theorems are exemplified. The focus is to show how the multivariate character of a data matrix is realized by HDT and in which cases one should apply other mathematical or statistical methods. Many simple examples illustrate the basic theoretical ideas. Finally, it is shown that HDT is a useful alternative for the evaluation of antifouling agents, which was originally performed by amoeba diagrams.
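
    The core construction behind a Hasse diagram can be sketched as follows; the agents and indicator scores are hypothetical, and ties between identical rows are not handled. Objects are ordered by componentwise comparison of their indicator rows, and only the cover relations are kept, i.e. any relation implied transitively through a third object is dropped.

        import numpy as np

        def hasse_edges(data, labels):
            """Cover relations of the partial order 'x <= y iff every indicator of
            x is <= the corresponding indicator of y'; an edge (x, y) survives only
            if no third object lies strictly between x and y."""
            X = np.asarray(data, float)
            n = len(X)
            leq = np.array([[bool(np.all(X[i] <= X[j])) and i != j for j in range(n)]
                            for i in range(n)])
            edges = []
            for i in range(n):
                for j in range(n):
                    if leq[i, j] and not any(leq[i, k] and leq[k, j] for k in range(n)):
                        edges.append((labels[i], labels[j]))
            return edges

        # hypothetical antifouling agents scored on two hazard indicators
        agents = ["A", "B", "C", "D"]
        scores = [[1, 1],   # A: low on both indicators
                  [2, 3],   # B
                  [3, 2],   # C (B and C are incomparable)
                  [4, 4]]   # D: high on both indicators
        print(hasse_edges(scores, agents))
        # -> [('A', 'B'), ('A', 'C'), ('B', 'D'), ('C', 'D')]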

  13. LandScape: a simple method to aggregate p-values and other stochastic variables without a priori grouping.

    PubMed

    Wiuf, Carsten; Schaumburg-Müller Pallesen, Jonatan; Foldager, Leslie; Grove, Jakob

    2016-08-01

    In many areas of science it is custom to perform many, potentially millions, of tests simultaneously. To gain statistical power it is common to group tests based on a priori criteria such as predefined regions or by sliding windows. However, it is not straightforward to choose grouping criteria and the results might depend on the chosen criteria. Methods that summarize, or aggregate, test statistics or p-values, without relying on a priori criteria, are therefore desirable. We present a simple method to aggregate a sequence of stochastic variables, such as test statistics or p-values, into fewer variables without assuming a priori defined groups. We provide different ways to evaluate the significance of the aggregated variables based on theoretical considerations and resampling techniques, and show that under certain assumptions the FWER is controlled in the strong sense. Validity of the method was demonstrated using simulations and real data analyses. Our method may be a useful supplement to standard procedures relying on evaluation of test statistics individually. Moreover, by being agnostic and not relying on predefined selected regions, it might be a practical alternative to conventionally used methods of aggregation of p-values over regions. The method is implemented in Python and freely available online (through GitHub, see the Supplementary information).

  14. Evaluation of IOTA Simple Ultrasound Rules to Distinguish Benign and Malignant Ovarian Tumours.

    PubMed

    Garg, Sugandha; Kaur, Amarjit; Mohi, Jaswinder Kaur; Sibia, Preet Kanwal; Kaur, Navkiran

    2017-08-01

    IOTA stands for the International Ovarian Tumour Analysis group. Ovarian cancer is one of the common cancers in women and is diagnosed at a later stage in the majority of cases. The limiting factor for early diagnosis is the lack of standardized terms and procedures in gynaecological sonography. The introduction of the IOTA rules has provided some consistency in defining morphological features of ovarian masses through a standardized examination technique. To evaluate the efficacy of the IOTA simple ultrasound rules in distinguishing benign and malignant ovarian tumours and to establish their use as a tool in early diagnosis of ovarian malignancy. A hospital-based, case-control, prospective study was conducted. Patients with suspected ovarian pathology were evaluated using the IOTA ultrasound rules and designated as benign or malignant. Findings were correlated with histopathological findings. Collected data were statistically analysed using the chi-square test and the kappa statistic. Of the initial 55 patients, the 50 who underwent surgery were included in the final analysis. The IOTA simple rules were applicable in 45 of these 50 patients (90%). The sensitivity for the detection of malignancy in cases where the IOTA simple rules were applicable was 91.66% and the specificity was 84.84%. Accuracy was 86.66%. Classifying inconclusive cases as malignant, the sensitivity and specificity were 93% and 80%, respectively. A high level of agreement was found between USG and histopathological diagnosis, with a kappa value of 0.323. The IOTA simple ultrasound rules were highly sensitive and specific in predicting ovarian malignancy preoperatively, while being reproducible and easy to teach and use.

  15. Estimation of critical behavior from the density of states in classical statistical models

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Peratzakis, A.; Fytas, N. G.

    2004-12-01

    We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
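
    The step from a density of states to a thermal quantity such as the specific heat can be sketched with a toy model; the binomial (two-level-system) density of states below is only a stand-in for a Wang-Landau estimate of the Ising density of states. Canonical averages are formed from g(E)exp(-E/T) with k_B = 1, worked in log space to avoid overflow, and C(T) = (<E^2> - <E>^2)/T^2.

        import numpy as np
        from scipy.special import gammaln, logsumexp

        def specific_heat(log_g, energies, temps):
            """Canonical specific heat from a (possibly restricted) density of
            states, using log-space Boltzmann weights for numerical stability."""
            out = []
            for T in temps:
                logw = log_g - energies / T              # log of g(E) * exp(-E/T)
                p = np.exp(logw - logsumexp(logw))       # canonical probabilities
                e_mean = np.sum(p * energies)
                e2_mean = np.sum(p * energies**2)
                out.append((e2_mean - e_mean**2) / T**2)
            return np.array(out)

        # toy exact density of states: N independent two-level units, E = k
        N = 1000
        k = np.arange(N + 1).astype(float)
        log_g = gammaln(N + 1) - gammaln(k + 1) - gammaln(N - k + 1)  # log binomial
        temps = np.linspace(0.1, 3.0, 30)
        print(specific_heat(log_g, k, temps).round(1))   # Schottky-type peak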

  16. Fitting mechanistic epidemic models to data: A comparison of simple Markov chain Monte Carlo approaches.

    PubMed

    Li, Michael; Dushoff, Jonathan; Bolker, Benjamin M

    2018-07-01

    Simple mechanistic epidemic models are widely used for forecasting and parameter estimation of infectious diseases based on noisy case reporting data. Despite the widespread application of models to emerging infectious diseases, we know little about the comparative performance of standard computational-statistical frameworks in these contexts. Here we build a simple stochastic, discrete-time, discrete-state epidemic model with both process and observation error and use it to characterize the effectiveness of different flavours of Bayesian Markov chain Monte Carlo (MCMC) techniques. We use fits to simulated data, where parameters (and future behaviour) are known, to explore the limitations of different platforms and quantify parameter estimation accuracy, forecasting accuracy, and computational efficiency across combinations of modeling decisions (e.g. discrete vs. continuous latent states, levels of stochasticity) and computational platforms (JAGS, NIMBLE, Stan).

  17. Variation in reaction norms: Statistical considerations and biological interpretation.

    PubMed

    Morrissey, Michael B; Liefting, Maartje

    2016-09-01

    Analysis of reaction norms, the functions by which the phenotype produced by a given genotype depends on the environment, is critical to studying many aspects of phenotypic evolution. Different techniques are available for quantifying different aspects of reaction norm variation. We examine what biological inferences can be drawn from some of the more readily applicable analyses for studying reaction norms. We adopt a strongly biologically motivated view, but draw on statistical theory to highlight strengths and drawbacks of different techniques. In particular, consideration of some formal statistical theory leads to revision of some recently, and forcefully, advocated opinions on reaction norm analysis. We clarify what simple analysis of the slope between mean phenotype in two environments can tell us about reaction norms, explore the conditions under which polynomial regression can provide robust inferences about reaction norm shape, and explore how different existing approaches may be used to draw inferences about variation in reaction norm shape. We show how mixed model-based approaches can provide more robust inferences than more commonly used multistep statistical approaches, and derive new metrics of the relative importance of variation in reaction norm intercepts, slopes, and curvatures. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  18. Statistical mechanics of broadcast channels using low-density parity-check codes.

    PubMed

    Nakamura, Kazutaka; Kabashima, Yoshiyuki; Morelos-Zaragoza, Robert; Saad, David

    2003-03-01

    We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time sharing methods when algebraic codes are used. The statistical physics based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC based time sharing codes while the best performance, when received transmissions are optimally decoded, is bounded by the time sharing limit.

  19. Analytic score distributions for a spatially continuous tridirectional Monte Carlo transport problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Booth, T.E.

    1996-01-01

    The interpretation of the statistical error estimates produced by Monte Carlo transport codes is still somewhat of an art. Empirically, there are variance reduction techniques whose error estimates are almost always reliable, and there are variance reduction techniques whose error estimates are often unreliable. Unreliable error estimates usually result from inadequate large-score sampling from the score distribution's tail. Statisticians believe that more accurate confidence interval statements are possible if the general nature of the score distribution can be characterized. Here, the analytic score distribution for the exponential transform applied to a simple, spatially continuous Monte Carlo transport problem is provided. Anisotropic scattering and implicit capture are included in the theory. In large part, the analytic score distributions that are derived provide the basis for the ten new statistical quality checks in MCNP.

  20. A simple blind placement of the left-sided double-lumen tubes.

    PubMed

    Zong, Zhi Jun; Shen, Qi Ying; Lu, Yao; Li, Yuan Hai

    2016-11-01

    One-lung ventilation (OLV) is commonly provided by using a double-lumen tube (DLT). Previous reports have indicated a high incidence of inappropriate DLT positioning with conventional maneuvers. After obtaining approval from the medical ethics committee of the First Affiliated Hospital of Anhui Medical University and written consent from patients, 88 adult patients of American Society of Anesthesiologists (ASA) physical status grade I or II undergoing elective thoracic surgery requiring a left-sided DLT for OLV were enrolled in this prospective, single-blind, randomized controlled study. Patients were randomly allocated to 1 of 2 groups: the simple maneuver group or the conventional maneuver group. The simple maneuver relies on partially inflating the bronchial balloon to recreate the effect of a carinal hook on the DLT, giving an indication of orientation and depth. After the induction of anesthesia, the patients were intubated with a left-sided Robertshaw DLT using one of the 2 intubation techniques. After intubation of each DLT, an anesthesiologist used flexible bronchoscopy to evaluate the patient while the patient lay in a supine position. The number of optimal positions and the time required to place the DLT in the correct position were recorded. Intubation of the DLT took 100 ± 16.2 seconds (mean ± SD) in the simple maneuver group and 95.1 ± 20.8 seconds in the conventional maneuver group; the difference was not statistically significant (P = 0.221). Fiberoptic bronchoscopy (FOB) took 22 ± 4.8 seconds in the simple maneuver group, statistically faster than in the conventional maneuver group (43.6 ± 23.7 seconds, P < 0.001). Nearly 98% of the 44 intubations in the simple maneuver group were considered to be in the optimal position, compared with only 52% of the 44 intubations in the conventional maneuver group; the difference was statistically significant (P < 0.001). This simple maneuver is more rapid and more accurate for positioning left-sided DLTs and may be substituted for FOB during positioning of a left-sided DLT when FOB is unavailable or inapplicable.

  1. Combining heuristic and statistical techniques in landslide hazard assessments

    NASA Astrophysics Data System (ADS)

    Cepeda, Jose; Schwendtner, Barbara; Quan, Byron; Nadim, Farrokh; Diaz, Manuel; Molina, Giovanni

    2014-05-01

    As a contribution to the Global Assessment Report 2013 - GAR2013, coordinated by the United Nations International Strategy for Disaster Reduction - UNISDR, a drill-down exercise for landslide hazard assessment was carried out by entering the results of both heuristic and statistical techniques into a new but simple combination rule. The data available for this evaluation included landslide inventories, both historical and event-based. In addition to the application of a heuristic method used in the previous editions of GAR, the availability of inventories motivated the use of statistical methods. The heuristic technique is largely based on the Mora & Vahrson method, which estimates hazard as the product of susceptibility and triggering factors, where classes are weighted based on expert judgment and experience. Two statistical methods were also applied: the landslide index method, which estimates weights of the classes for the susceptibility and triggering factors based on the evidence provided by the density of landslides in each class of the factors; and the weights of evidence method, which extends the previous technique to include both positive and negative evidence of landslide occurrence in the estimation of weights for the classes. One key aspect during the hazard evaluation was the decision on the methodology to be chosen for the final assessment. Instead of opting for a single methodology, it was decided to combine the results of the three implemented techniques using a combination rule based on a normalization of the results of each method. The hazard evaluation was performed for both earthquake- and rainfall-induced landslides. The country chosen for the drill-down exercise was El Salvador. The results indicate that highest hazard levels are concentrated along the central volcanic chain and at the centre of the northern mountains.

  2. Physical models of collective cell motility: from cell to tissue

    NASA Astrophysics Data System (ADS)

    Camley, B. A.; Rappel, W.-J.

    2017-03-01

    In this article, we review physics-based models of collective cell motility. We discuss a range of techniques at different scales, ranging from models that represent cells as simple self-propelled particles to phase field models that can represent a cell’s shape and dynamics in great detail. We also extensively review the ways in which cells within a tissue choose their direction, the statistics of cell motion, and some simple examples of how cell-cell signaling can interact with collective cell motility. This review also covers in more detail selected recent works on collective cell motion of small numbers of cells on micropatterns, in wound healing, and the chemotaxis of clusters of cells.

  3. Using statistical correlation to compare geomagnetic data sets

    NASA Astrophysics Data System (ADS)

    Stanton, T.

    2009-04-01

    The major features of data curves are often matched, to a first order, by bump and wiggle matching to arrive at an offset between data sets. This poster describes a simple statistical correlation program that has proved useful during this stage by determining the optimal correlation between geomagnetic curves using a variety of fixed and floating windows. Its utility is suggested by the fact that it is simple to run, yet generates meaningful data comparisons, often when data noise precludes the obvious matching of curve features. Data sets can be scaled, smoothed, normalised and standardised, before all possible correlations are carried out between selected overlapping portions of each curve. Best-fit offset curves can then be displayed graphically. The program was used to cross-correlate directional and palaeointensity data from Holocene lake sediments (Stanton et al., submitted) and Holocene lava flows. Some example curve matches are shown, including some that illustrate the potential of this technique when examining particularly sparse data sets. Stanton, T., Snowball, I., Zillén, L. and Wastegård, S., submitted. Detecting potential errors in varve chronology and 14C ages using palaeosecular variation curves, lead pollution history and statistical correlation. Quaternary Geochronology.
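
    A minimal sketch of the fixed-window part of such a program (not the author's code; the curves are synthetic and assumed to be already resampled to a common, equal spacing): one curve is slid against the other and the offset with the highest Pearson correlation over the overlap is kept.

        import numpy as np

        def best_offset(ref, target, max_shift):
            """Pearson correlation between two equally spaced curves for every
            candidate offset; returns (offset, correlation) for the best match."""
            best = (None, -np.inf)
            for shift in range(-max_shift, max_shift + 1):
                if shift >= 0:
                    a, b = ref[shift:], target[:len(target) - shift]
                else:
                    a, b = ref[:shift], target[-shift:]
                n = min(len(a), len(b))
                if n < 10:                               # require a usable overlap
                    continue
                r = np.corrcoef(a[:n], b[:n])[0, 1]
                if r > best[1]:
                    best = (shift, r)
            return best

        # synthetic declination-like curves offset by 15 samples plus noise
        rng = np.random.default_rng(8)
        t = np.arange(400)
        ref = np.sin(t / 25.0) + 0.1 * rng.normal(size=400)
        target = np.sin((t + 15) / 25.0) + 0.1 * rng.normal(size=400)
        print(best_offset(ref, target, max_shift=50))    # offset near +15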

  4. Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems

    DTIC Science & Technology

    2013-11-01

    development of a class of “gradient free” optimization techniques; these include local approaches, such as a Nelder-Mead simplex search (cf. [73]), and global ... Note that this simple method differs from the Nelder-Mead constrained nonlinear optimization method [73] ... the Non-dominated Sorting Genetic Algorithm ...

  5. Statistical mechanics of the vertex-cover problem

    NASA Astrophysics Data System (ADS)

    Hartmann, Alexander K.; Weigt, Martin

    2003-10-01

    We review recent progress in the study of the vertex-cover problem (VC). The VC belongs to the class of NP-complete graph theoretical problems, which plays a central role in theoretical computer science. On ensembles of random graphs, VC exhibits a coverable-uncoverable phase transition. Very close to this transition, depending on the solution algorithm, easy-hard transitions in the typical running time of the algorithms occur. We explain a statistical mechanics approach, which works by mapping the VC to a hard-core lattice gas, and then applying techniques such as the replica trick or the cavity approach. Using these methods, the phase diagram of the VC could be obtained exactly for connectivities c < e, where the VC is replica symmetric. Recently, this result could be confirmed using traditional mathematical techniques. For c > e, the solution of the VC exhibits full replica symmetry breaking. The statistical mechanics approach can also be used to study analytically the typical running time of simple complete and incomplete algorithms for the VC. Finally, we describe recent results for the VC when studied on other ensembles of finite- and infinite-dimensional graphs.

  6. Optimal spatial sampling techniques for ground truth data in microwave remote sensing of soil moisture

    NASA Technical Reports Server (NTRS)

    Rao, R. G. S.; Ulaby, F. T.

    1977-01-01

    The paper examines optimal sampling techniques for obtaining accurate spatial averages of soil moisture, at various depths and for cell sizes in the range 2.5-40 acres, with a minimum number of samples. Both simple random sampling and stratified sampling procedures are used to reach a set of recommended sample sizes for each depth and for each cell size. Major conclusions from statistical sampling test results are that (1) the number of samples required decreases with increasing depth; (2) when the total number of samples cannot be prespecified or the moisture in only one single layer is of interest, then a simple random sample procedure should be used which is based on the observed mean and SD for data from a single field; (3) when the total number of samples can be prespecified and the objective is to measure the soil moisture profile with depth, then stratified random sampling based on optimal allocation should be used; and (4) decreasing the sensor resolution cell size leads to fairly large decreases in samples sizes with stratified sampling procedures, whereas only a moderate decrease is obtained in simple random sampling procedures.
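
    The "optimal allocation" used with stratified sampling is conventionally Neyman allocation, which can be sketched in a few lines; the stratum sizes and standard deviations below are hypothetical and equal per-sample costs are assumed. The fixed total sample is allocated in proportion to N_h * S_h, so large or highly variable strata receive more samples.

        import numpy as np

        def neyman_allocation(stratum_sizes, stratum_sds, n_total):
            """Optimal (Neyman) allocation of a fixed total sample across strata:
            n_h proportional to N_h * S_h, rounded with a floor of one per stratum."""
            sizes = np.asarray(stratum_sizes, float)
            sds = np.asarray(stratum_sds, float)
            weights = sizes * sds
            n_h = n_total * weights / weights.sum()
            return np.maximum(1, np.round(n_h)).astype(int)

        # hypothetical strata: three depth layers within one sampled cell
        sizes = [400, 400, 200]      # sampling units per stratum
        sds = [6.0, 3.5, 1.5]        # observed SD of soil moisture (percent)
        print(neyman_allocation(sizes, sds, n_total=30))   # -> [18 10  2]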

  7. Statistical methods for efficient design of community surveys of response to noise: Random coefficients regression models

    NASA Technical Reports Server (NTRS)

    Tomberlin, T. J.

    1985-01-01

    Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.

  8. Surveillance of Arthropod Vector-Borne Infectious Diseases Using Remote Sensing Techniques: A Review

    PubMed Central

    Kalluri, Satya; Gilruth, Peter; Rogers, David; Szczur, Martha

    2007-01-01

    Epidemiologists are adopting new remote sensing techniques to study a variety of vector-borne diseases. Associations between satellite-derived environmental variables such as temperature, humidity, and land cover type and vector density are used to identify and characterize vector habitats. The convergence of factors such as the availability of multi-temporal satellite data and georeferenced epidemiological data, collaboration between remote sensing scientists and biologists, and the availability of sophisticated, statistical geographic information system and image processing algorithms in a desktop environment creates a fertile research environment. The use of remote sensing techniques to map vector-borne diseases has evolved significantly over the past 25 years. In this paper, we review the status of remote sensing studies of arthropod vector-borne diseases due to mosquitoes, ticks, blackflies, tsetse flies, and sandflies, which are responsible for the majority of vector-borne diseases in the world. Examples of simple image classification techniques that associate land use and land cover types with vector habitats, as well as complex statistical models that link satellite-derived multi-temporal meteorological observations with vector biology and abundance, are discussed here. Future improvements in remote sensing applications in epidemiology are also discussed. PMID:17967056

  9. Optimisation of radiation dose and image quality in mobile neonatal chest radiography.

    PubMed

    Hinojos-Armendáriz, V I; Mejía-Rosales, S J; Franco-Cabrera, M C

    2018-05-01

    To optimise the radiation dose and image quality for chest radiography in the neonatal intensive care unit (NICU) by increasing the mean beam energy. Two techniques for the acquisition of NICU AP chest X-ray images were compared for image quality and radiation dose. 73 images were acquired using a standard technique (56 kV, 3.2 mAs and no additional filtration) and 90 images with a new technique (62 kV, 2 mAs and 2 mm Al filtration). The entrance surface air kerma (ESAK) was measured using a phantom and compared between the techniques and against established diagnostic reference levels (DRL). Images were evaluated using seven image quality criteria independently by three radiologists. Image quality and radiation dose were compared statistically between the standard and new techniques. The maximum ESAK for the new technique was 40.20 μGy, 43.7% of the ESAK of the standard technique. Statistical evaluation demonstrated no significant differences in image quality between the two acquisition techniques. Based on the techniques and acquisition factors investigated within this study, it is possible to lower the radiation dose without any significant effects on image quality by adding filtration (2 mm Al) and increasing the tube potential. Such steps are relatively simple to undertake and as such, other departments should consider testing and implementing this dose reduction strategy within clinical practice where appropriate. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.

  10. Monte Carlo investigation of transient acoustic fields in partially or completely bounded medium. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Thanedar, B. D.

    1972-01-01

    A simple repetitive calculation was used to investigate what happens to the field in terms of the signal paths of disturbances originating from the energy source. The computation allowed the field to be reconstructed as a function of space and time on a statistical basis. The suggested Monte Carlo method responds to the need for a numerical method, applicable to a generally bounded medium, to supplement analytical methods of solution that are valid only when the boundaries have simple shapes. For the analysis, a suitable model was created from which was developed an algorithm for the estimation of acoustic pressure variations in the region under investigation. The validity of the technique was demonstrated by analysis of simple physical models with the aid of a digital computer. The Monte Carlo method is applicable to a medium which is homogeneous and is enclosed by either rectangular or curved boundaries.

  11. The Problem of Auto-Correlation in Parasitology

    PubMed Central

    Pollitt, Laura C.; Reece, Sarah E.; Mideo, Nicole; Nussey, Daniel H.; Colegrave, Nick

    2012-01-01

    Explaining the contribution of host and pathogen factors in driving infection dynamics is a major ambition in parasitology. There is increasing recognition that analyses based on single summary measures of an infection (e.g., peak parasitaemia) do not adequately capture infection dynamics and so, the appropriate use of statistical techniques to analyse dynamics is necessary to understand infections and, ultimately, control parasites. However, the complexities of within-host environments mean that tracking and analysing pathogen dynamics within infections and among hosts poses considerable statistical challenges. Simple statistical models make assumptions that will rarely be satisfied in data collected on host and parasite parameters. In particular, model residuals (unexplained variance in the data) should not be correlated in time or space. Here we demonstrate how failure to account for such correlations can result in incorrect biological inference from statistical analysis. We then show how mixed effects models can be used as a powerful tool to analyse such repeated measures data in the hope that this will encourage better statistical practices in parasitology. PMID:22511865
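
    A minimal sketch of the kind of repeated-measures analysis recommended above: a random-intercept mixed-effects model that accounts for within-host correlation. The data are simulated and the column names (host, day, parasitaemia) are illustrative, not from the paper.

```python
# Random-intercept mixed model for repeated parasitaemia measurements on the
# same hosts; simulated data, illustrative variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
hosts, days = 20, 10
df = pd.DataFrame({
    "host": np.repeat(np.arange(hosts), days),
    "day": np.tile(np.arange(days), hosts),
})
host_effect = rng.normal(0, 1.0, hosts)               # host-level variation
df["parasitaemia"] = (2.0 + 0.3 * df["day"]
                      + host_effect[df["host"]]
                      + rng.normal(0, 0.5, len(df)))  # residual noise

# The random intercept per host absorbs the within-host correlation that a
# simple linear regression would ignore.
model = smf.mixedlm("parasitaemia ~ day", df, groups=df["host"]).fit()
print(model.summary())
```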

  12. Evaluation of IOTA Simple Ultrasound Rules to Distinguish Benign and Malignant Ovarian Tumours

    PubMed Central

    Kaur, Amarjit; Mohi, Jaswinder Kaur; Sibia, Preet Kanwal; Kaur, Navkiran

    2017-01-01

    Introduction IOTA stands for International Ovarian Tumour Analysis group. Ovarian cancer is one of the common cancers in women and is diagnosed at a later stage in the majority of cases. The limiting factor for early diagnosis is lack of standardized terms and procedures in gynaecological sonography. Introduction of IOTA rules has provided some consistency in defining morphological features of ovarian masses through a standardized examination technique. Aim To evaluate the efficacy of IOTA simple ultrasound rules in distinguishing benign and malignant ovarian tumours and establishing their use as a tool in early diagnosis of ovarian malignancy. Materials and Methods A hospital based case control prospective study was conducted. Patients with suspected ovarian pathology were evaluated using IOTA ultrasound rules and designated as benign or malignant. Findings were correlated with histopathological findings. Collected data was statistically analysed using the chi-square test and kappa statistical method. Results Out of the initial 55 patients, 50 patients who underwent surgery were included in the final analysis. IOTA simple rules were applicable in 45 out of these 50 patients (90%). The sensitivity for the detection of malignancy in cases where IOTA simple rules were applicable was 91.66% and the specificity was 84.84%. Accuracy was 86.66%. Classifying inconclusive cases as malignant, the sensitivity and specificity were 93% and 80%, respectively. High level of agreement was found between USG and histopathological diagnosis, with a Kappa value of 0.323. Conclusion IOTA simple ultrasound rules were highly sensitive and specific in predicting ovarian malignancy preoperatively, while being reproducible and easy to train and use. PMID:28969237
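
    A toy sketch of the accuracy statistics reported above, computed from a 2x2 table of ultrasound classification versus histopathology. The counts are chosen so that sensitivity, specificity and accuracy roughly match the reported percentages; they are illustrative and are not the study's raw data.

```python
# Sensitivity, specificity, accuracy and Cohen's kappa from a 2x2 table.
def diagnostic_stats(tp, fp, fn, tn):
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    acc = (tp + tn) / n
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_o = acc
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (p_o - p_e) / (1 - p_e)
    return sens, spec, acc, kappa

# Illustrative counts only (45 evaluable cases, malignant = positive class).
print(diagnostic_stats(tp=11, fp=5, fn=1, tn=28))
```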

  13. Hemostasis in tubeless PNL: point of technique.

    PubMed

    Aron, Monish; Goel, Rajiv; Kesarwani, Pawan K; Gupta, Narmada P

    2004-01-01

    Tubeless PNL (percutaneous nephrolithotomy), in a highly selected group of patients, is a modification aimed at reducing the morbidity of PNL. We present a simple technique of achieving tract hemostasis as an adjunct to the safe performance of a tubeless PNL. Charts of 40 consecutive patients who underwent tubeless PNL at our center were reviewed. In the latter 20 consecutive patients, diathermy coagulation of the intrarenal bleeders and tract was done and these patients were compared with the earlier 20 patients in whom fulguration was not done. Drop in hemoglobin, postoperative analgesic requirement, operating time, length of hospitalization and postoperative complications were compared. No statistical difference was found between the operative times and drop in hemoglobin for both the groups. The length of hospitalization and postoperative analgesic requirement were significantly less in the fulguration group. No significant complications were noted in either of the two groups. Fulguration of visible intrarenal and tract bleeders is a simple, safe and effective hemostatic adjunct in patients undergoing tubeless PNL. copyright 2004 S. Karger AG, Basel

  14. Wave propagation in a random medium

    NASA Technical Reports Server (NTRS)

    Lee, R. W.; Harp, J. C.

    1969-01-01

    A simple technique is used to derive statistical characterizations of the perturbations imposed upon a wave (plane, spherical or beamed) propagating through a random medium. The method is essentially physical rather than mathematical, and is probably equivalent to the Rytov method. The limitations of the method are discussed in some detail; in general they are restrictive only for optical paths longer than a few hundred meters, and for paths at the lower microwave frequencies. Situations treated include arbitrary path geometries, finite transmitting and receiving apertures, and anisotropic media. Results include, in addition to the usual statistical quantities, time-lagged functions, mixed functions involving amplitude and phase fluctuations, angle-of-arrival covariances, frequency covariances, and other higher-order quantities.

  15. Acute effect of scapular proprioceptive neuromuscular facilitation (PNF) techniques and classic exercises in adhesive capsulitis: a randomized controlled trial

    PubMed Central

    Balcı, Nilay Comuk; Yuruk, Zeliha Ozlem; Zeybek, Aslican; Gulsen, Mustafa; Tekindal, Mustafa Agah

    2016-01-01

    [Purpose] The aim of our study was to compare the initial effects of scapular proprioceptive neuromuscular facilitation techniques and classic exercise interventions with physiotherapy modalities on pain, scapular dyskinesis, range of motion, and function in adhesive capsulitis. [Subjects and Methods] Fifty-three subjects were allocated to 3 groups: scapular proprioceptive neuromuscular facilitation exercises and physiotherapy modalities, classic exercise and physiotherapy modalities, and only physiotherapy modalities. The intervention was applied in a single session. The Visual Analog Scale, Lateral Scapular Slide Test, range of motion and Simple Shoulder Test were evaluated before and just after the one-hour intervention in the same session. [Results] All of the groups showed significant differences in shoulder flexion and abduction range of motion and Simple Shoulder Test scores. There were statistically significant differences in Visual Analog Scale scores in the proprioceptive neuromuscular facilitation and control groups, and no treatment method had significant effect on the Lateral Scapular Slide Test results. There were no statistically significant differences between the groups before and after the intervention. [Conclusion] Proprioceptive neuromuscular facilitation, classic exercise, and physiotherapy modalities had immediate effects on adhesive capsulitis in our study. However, there was no additional benefit of exercises in one session over physiotherapy modalities. Also, an effective treatment regimen for shoulder rehabilitation of adhesive capsulitis patients should include scapular exercises. PMID:27190456

  16. Biomechanical evaluation of knotless anatomical double-layer double-row rotator cuff repair: a comparative ex vivo study.

    PubMed

    Hepp, Pierre; Osterhoff, Georg; Engel, Thomas; Marquass, Bastian; Klink, Thomas; Josten, Christoph

    2009-07-01

    The layered configuration of the rotator cuff tendon is not taken into account in classic rotator cuff tendon repair techniques. The mechanical properties of (1) the classic double-row technique, (2) a double-layer double-row (DLDR) technique in simple suture configuration, and (3) a DLDR technique in mattress suture configuration are significantly different. Controlled laboratory study. Twenty-four sheep shoulders were assigned to 3 repair groups of full-thickness infraspinatus tears: group 1, traditional double-row repair; group 2, DLDR anchor repair with simple suture configuration; and group 3, DLDR knotless repair with mattress suture configuration. After ultrasound evaluation of the repair, each specimen was cyclically loaded with 10 to 100 N for 50 cycles. Each specimen was then loaded to failure at a rate of 1 mm/s. There were no statistically significant differences among the 3 testing groups for the mean footprint area. The cyclic loading test revealed no significant difference among the 3 groups with regard to elongation. For the load-to-failure test, groups 2 and 3 showed no differences in ultimate tensile load when compared with group 1. However, when compared to group 2, group 3 was found to have significantly higher values regarding ultimate load, ultimate elongation, and energy absorbed. The DLDR fixation techniques may provide strength of initial repair comparable with that of commonly used double-row techniques. When compared with the knotless technique with mattress sutures, simple suture configuration of DLDR repair may be too weak. Knotless DLDR rotator cuff repair may (1) restore the footprint by the use of double-row principles and (2) enable restoration of the shape and profile. Double-layer double-row fixation in mattress suture configuration has initial fixation strength comparable with that of the classic double-row fixation and so may potentially improve functional results of rotator cuff repair.

  17. Comparing the Efficiency of Two Different Extraction Techniques in Removal of Maxillary Third Molars: A Randomized Controlled Trial.

    PubMed

    Edward, Joseph; Aziz, Mubarak A; Madhu Usha, Arjun; Narayanan, Jyothi K

    2017-12-01

    Extractions are routine procedures in dental surgery. Traditional extraction techniques use a combination of severing the periodontal attachment, luxation with an elevator, and removal with forceps. A new technique of maxillary third molar extraction, the Joedds technique, is introduced in this study and compared with the conventional technique. One hundred people were included in the study and divided into two groups by means of simple random sampling. In one group the conventional technique of maxillary third molar extraction was used, and in the second the Joedds technique was used. Statistical analysis was carried out with Student's t-test. Analysis of the 100 patients showed that the novel Joedds technique caused minimal trauma to the surrounding tissues, fewer tuberosity and root fractures, and an extraction time of <2 min compared with the other group. This novel technique proved better than the conventional third molar extraction technique, with minimal complications, provided that proper case selection and the right technique are used.

  18. The Zombie Plot: A Simple Graphic Method for Visualizing the Efficacy of a Diagnostic Test.

    PubMed

    Richardson, Michael L

    2016-08-09

    One of the most important jobs of a radiologist is to pick the most appropriate imaging test for a particular clinical situation. Making a proper selection sometimes requires statistical analysis. The objective of this article is to introduce a simple graphic technique, an ROC plot that has been divided into zones of mostly bad imaging efficacy (ZOMBIE, hereafter referred to as the "zombie plot"), that transforms information about imaging efficacy from the numeric domain into the visual domain. The numeric rationale for the use of zombie plots is given, as are several examples of the clinical use of these plots. Two online calculators are described that simplify the process of producing a zombie plot.

  19. Holmium laser enucleation versus laparoscopic simple prostatectomy for large adenomas.

    PubMed

    Juaneda, R; Thanigasalam, R; Rizk, J; Perrot, E; Theveniaud, P E; Baumert, H

    2016-01-01

    The aim of this study is to compare Holmium laser enucleation of the prostate with another minimally invasive technique, the laparoscopic simple prostatectomy. We compared outcomes of a series of 40 patients who underwent laparoscopic simple prostatectomy (n=20) with laser enucleation of the prostate (n=20) for large adenomas (>100 grams) at our institution. Study variables included operative time and catheterization time, hospital stay, pre- and post-operative International Prostate Symptom Score and maximum urinary flow rate, complications and economic evaluation. Statistical analyses were performed using the Student t test and Fisher test. There were no significant differences in patient age, preoperative prostatic size, operating time or specimen weight between the 2 groups. Duration of catheterization (P=.0008) and hospital stay (P<.0001) were significantly less in the laser group. Both groups showed a statistically significant improvement in functional variables at 3 months post operatively. The cost utility analysis for Holmium per case was 2589 euros versus 4706 per laparoscopic case. In the laser arm, 4 patients (20%) experienced complications according to the modified Clavien classification system versus 5 (25%) in the laparoscopic group (P>.99). Holmium enucleation of the prostate has similar short term functional results and complication rates compared to laparoscopic simple prostatectomy performed in large glands with the advantage of less catheterization time, lower economic costs and a reduced hospital stay. Copyright © 2015 AEU. Publicado por Elsevier España, S.L.U. All rights reserved.

  20. Random noise effects in pulse-mode digital multilayer neural networks.

    PubMed

    Kim, Y C; Shanblatt, M A

    1995-01-01

    A pulse-mode digital multilayer neural network (DMNN) based on stochastic computing techniques is implemented with simple logic gates as basic computing elements. The pulse-mode signal representation and the use of simple logic gates for neural operations lead to a massively parallel yet compact and flexible network architecture, well suited for VLSI implementation. Algebraic neural operations are replaced by stochastic processes using pseudorandom pulse sequences. The distributions of the results from the stochastic processes are approximated using the hypergeometric distribution. Synaptic weights and neuron states are represented as probabilities and estimated as average pulse occurrence rates in corresponding pulse sequences. A statistical model of the noise (error) is developed to estimate the relative accuracy associated with stochastic computing in terms of mean and variance. Computational differences are then explained by comparison to deterministic neural computations. DMNN feedforward architectures are modeled in VHDL using character recognition problems as testbeds. Computational accuracy is analyzed, and the results of the statistical model are compared with the actual simulation results. Experiments show that the calculations performed in the DMNN are more accurate than those anticipated when Bernoulli sequences are assumed, as is common in the literature. Furthermore, the statistical model successfully predicts the accuracy of the operations performed in the DMNN.
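
    A hedged illustration of the pulse-mode idea described above: values in [0, 1] are encoded as pseudorandom (Bernoulli) pulse streams, a single AND gate multiplies them, and the result is recovered as an average pulse occurrence rate. This is not the DMNN implementation itself, only the underlying stochastic-computing primitive.

```python
# Stochastic-computing multiplication with pulse streams (illustrative sketch).
import numpy as np

rng = np.random.default_rng(0)

def to_pulses(p, n_pulses):
    """Encode probability p as a pseudorandom pulse sequence."""
    return rng.random(n_pulses) < p

w, x, n = 0.6, 0.75, 4096
product_stream = to_pulses(w, n) & to_pulses(x, n)   # AND gate = multiplication
estimate = product_stream.mean()                      # average pulse rate
print(estimate, w * x)        # e.g. ~0.45 vs the exact 0.45
```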

  1. A multi-resolution approach for optimal mass transport

    NASA Astrophysics Data System (ADS)

    Dominitz, Ayelet; Angenent, Sigurd; Tannenbaum, Allen

    2007-09-01

    Optimal mass transport is an important technique with numerous applications in econometrics, fluid dynamics, automatic control, statistical physics, shape optimization, expert systems, and meteorology. Motivated by certain problems in image registration and medical image visualization, in this note, we describe a simple gradient descent methodology for computing the optimal L2 transport mapping which may be easily implemented using a multiresolution scheme. We also indicate how the optimal transport map may be computed on the sphere. A numerical example is presented illustrating our ideas.

  2. An effective visualization technique for depth perception in augmented reality-based surgical navigation.

    PubMed

    Choi, Hyunseok; Cho, Byunghyun; Masamune, Ken; Hashizume, Makoto; Hong, Jaesung

    2016-03-01

    Depth perception is a major issue in augmented reality (AR)-based surgical navigation. We propose an AR and virtual reality (VR) switchable visualization system with distance information, and evaluate its performance in a surgical navigation set-up. To improve depth perception, seamless switching from AR to VR was implemented. In addition, the minimum distance between the tip of the surgical tool and the nearest organ was provided in real time. To evaluate the proposed techniques, five physicians and 20 non-medical volunteers participated in experiments. Targeting error, time taken, and numbers of collisions were measured in simulation experiments. There was a statistically significant difference between a simple AR technique and the proposed technique. We confirmed that depth perception in AR could be improved by the proposed seamless switching between AR and VR, and providing an indication of the minimum distance also facilitated the surgical tasks. Copyright © 2015 John Wiley & Sons, Ltd.

  3. Orbit dynamics and geographical coverage capabilities of satellite-based solar occultation experiments for global monitoring of stratospheric constituents

    NASA Technical Reports Server (NTRS)

    Brooks, D. R.

    1980-01-01

    Orbit dynamics of the solar occultation technique for satellite measurements of the Earth's atmosphere are described. A one-year mission is simulated and the orbit and mission design implications are discussed in detail. Geographical coverage capabilities are examined parametrically for a range of orbit conditions. The hypothetical mission is used to produce a simulated one-year data base of solar occultation measurements; each occultation event is assumed to produce a single number, or 'measurement', and some statistical properties of the data set are examined. A simple model is fitted to the data to demonstrate a procedure for examining global distributions of atmospheric constituents with the solar occultation technique.

  4. Wide-Field Imaging of Single-Nanoparticle Extinction with Sub-nm2 Sensitivity

    NASA Astrophysics Data System (ADS)

    Payne, Lukas M.; Langbein, Wolfgang; Borri, Paola

    2018-03-01

    We report on a highly sensitive wide-field imaging technique for quantitative measurement of the optical extinction cross section σext of single nanoparticles. The technique is simple and high speed, and it enables the simultaneous acquisition of hundreds of nanoparticles for statistical analysis. Using rapid referencing, fast acquisition, and a deconvolution analysis, a shot-noise-limited sensitivity down to 0.4 nm2 is achieved. Measurements on a set of individual gold nanoparticles of 5 nm diameter using this method yield σext = (10.0 ± 3.1) nm2, which is consistent with theoretical expectations and well above the background fluctuations of 0.9 nm2.

  5. Statistical Inference of a RANS closure for a Jet-in-Crossflow simulation

    NASA Astrophysics Data System (ADS)

    Heyse, Jan; Edeling, Wouter; Iaccarino, Gianluca

    2016-11-01

    The jet-in-crossflow is found in several engineering applications, such as discrete film cooling for turbine blades, where a coolant injected through holes in the blade's surface protects the component from the hot gases leaving the combustion chamber. Experimental measurements using MRI techniques have been completed for a single hole injection into a turbulent crossflow, providing a full 3D averaged velocity field. For such flows of engineering interest, Reynolds-Averaged Navier-Stokes (RANS) turbulence closure models are often the only viable computational option. However, RANS models are known to provide poor predictions in the region close to the injection point. Since these models are calibrated on simple canonical flow problems, the obtained closure coefficient estimates are unlikely to extrapolate well to more complex flows. We will therefore calibrate the parameters of a RANS model using statistical inference techniques informed by the experimental jet-in-crossflow data. The obtained probabilistic parameter estimates can in turn be used to compute flow fields with quantified uncertainty. Stanford Graduate Fellowship in Science and Engineering.

  6. Keep it simple - A case study of model development in the context of the Dynamic Stocks and Flows (DSF) task

    NASA Astrophysics Data System (ADS)

    Halbrügge, Marc

    2010-12-01

    This paper describes the creation of a cognitive model submitted to the ‘Dynamic Stocks and Flows’ (DSF) modeling challenge. This challenge aims at comparing computational cognitive models for human behavior during an open ended control task. Participants in the modeling competition were provided with a simulation environment and training data for benchmarking their models while the actual specification of the competition task was withheld. To meet this challenge, the cognitive model described here was designed and optimized for generalizability. Only two simple assumptions about human problem solving were used to explain the empirical findings of the training data. In-depth analysis of the data set prior to the development of the model led to the dismissal of correlations or other parametric statistics as goodness-of-fit indicators. A new statistical measurement based on rank orders and sequence matching techniques is being proposed instead. This measurement, when being applied to the human sample, also identifies clusters of subjects that use different strategies for the task. The acceptability of the fits achieved by the model is verified using permutation tests.

  7. New approach in the quantum statistical parton distribution

    NASA Astrophysics Data System (ADS)

    Sohaily, Sozha; Vaziri (Khamedi), Mohammad

    2017-12-01

    An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help to understand the structure of partons. The longitudinal portion of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting and fixing parameters, is surveyed. Analytic expressions of the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data gives a robust confirmation of our simple statistical model.

  8. [A comparison of convenience sampling and purposive sampling].

    PubMed

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling". Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.

  9. Multivariate model of female black bear habitat use for a Geographic Information System

    USGS Publications Warehouse

    Clark, Joseph D.; Dunn, James E.; Smith, Kimberly G.

    1993-01-01

    Simple univariate statistical techniques may not adequately assess the multidimensional nature of habitats used by wildlife. Thus, we developed a multivariate method to model habitat-use potential using a set of female black bear (Ursus americanus) radio locations and habitat data consisting of forest cover type, elevation, slope, aspect, distance to roads, distance to streams, and forest cover type diversity score in the Ozark Mountains of Arkansas. The model is based on the Mahalanobis distance statistic coupled with Geographic Information System (GIS) technology. That statistic is a measure of dissimilarity and represents a standardized squared distance between a set of sample variates and an ideal based on the mean of variates associated with animal observations. Calculations were made with the GIS to produce a map containing Mahalanobis distance values within each cell on a 60- × 60-m grid. The model identified areas of high habitat use potential that could not otherwise be identified by independent perusal of any single map layer. This technique avoids many pitfalls that commonly affect typical multivariate analyses of habitat use and is a useful tool for habitat manipulation or mitigation to favor terrestrial vertebrates that use habitats on a landscape scale.
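
    A minimal sketch of the Mahalanobis-distance habitat statistic described above: the mean and covariance of habitat variables at animal locations define an "ideal", and each grid cell is scored by its squared distance from it (low values indicate high habitat-use potential). The habitat variables and numbers below are synthetic, not the Arkansas bear data.

```python
# Mahalanobis-distance habitat scoring over grid cells (synthetic example).
import numpy as np

rng = np.random.default_rng(42)

# Habitat variables (e.g. elevation, slope, distance to road) at use locations
use_sites = rng.normal([800.0, 15.0, 2.0], [60.0, 4.0, 0.8], size=(200, 3))
mu = use_sites.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(use_sites, rowvar=False))

def mahalanobis_sq(cells):
    """Squared Mahalanobis distance of each grid cell from the use 'ideal'."""
    d = cells - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

# Score a handful of grid cells; lower = more similar to used habitat.
grid_cells = np.array([[810.0, 14.0, 2.1], [400.0, 2.0, 0.1]])
print(mahalanobis_sq(grid_cells))
```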

  10. Bootstrap Methods: A Very Leisurely Look.

    ERIC Educational Resources Information Center

    Hinkle, Dennis E.; Winstead, Wayland H.

    The Bootstrap method, a computer-intensive statistical method of estimation, is illustrated using a simple and efficient Statistical Analysis System (SAS) routine. The utility of the method for generating unknown parameters, including standard errors for simple statistics, regression coefficients, discriminant function coefficients, and factor…
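
    A very small bootstrap sketch, written in Python rather than the SAS routine the abstract refers to: resample the data with replacement and take the spread of the recomputed statistic as its standard error. The data values are arbitrary illustrative numbers.

```python
# Bootstrap standard error of a simple statistic (illustrative data).
import numpy as np

rng = np.random.default_rng(7)
data = np.array([4.1, 5.3, 2.8, 6.0, 5.5, 3.9, 4.7, 5.1, 6.2, 3.4])

def bootstrap_se(x, stat=np.mean, n_boot=5000):
    """Standard deviation of the statistic over bootstrap resamples."""
    reps = [stat(rng.choice(x, size=len(x), replace=True)) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

print(data.mean(), bootstrap_se(data))
```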

  11. Logistic regression for risk factor modelling in stuttering research.

    PubMed

    Reed, Phil; Wu, Yaqionq

    2013-06-01

    To outline the uses of logistic regression and other statistical methods for risk factor analysis in the context of research on stuttering. The principles underlying the application of a logistic regression are illustrated, and the types of questions to which such a technique has been applied in the stuttering field are outlined. The assumptions and limitations of the technique are discussed with respect to existing stuttering research, and with respect to formulating appropriate research strategies to accommodate these considerations. Finally, some alternatives to the approach are briefly discussed. The way the statistical procedures are employed is demonstrated with some hypothetical data. Research into several practical issues concerning stuttering could benefit if risk factor modelling were used. Important examples are early diagnosis, prognosis (whether a child will recover or persist) and assessment of treatment outcome. After reading this article you will: (a) Summarize the situations in which logistic regression can be applied to a range of issues about stuttering; (b) Follow the steps in performing a logistic regression analysis; (c) Describe the assumptions of the logistic regression technique and the precautions that need to be checked when it is employed; (d) Be able to summarize its advantages over other techniques like estimation of group differences and simple regression. Copyright © 2012 Elsevier Inc. All rights reserved.
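
    A hedged sketch of a risk-factor logistic regression of the kind outlined above, using hypothetical data: the outcome is persistence (1) versus recovery (0), and the predictors (age_at_onset, family_history) are invented illustrative covariates, not variables from any real dataset.

```python
# Logistic regression risk-factor model on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "age_at_onset": rng.normal(3.5, 1.0, n),
    "family_history": rng.integers(0, 2, n),
})
# Simulate the binary outcome from a known logistic relationship.
logit_p = -2.0 + 0.4 * df["age_at_onset"] + 0.9 * df["family_history"]
df["persistent"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

fit = smf.logit("persistent ~ age_at_onset + family_history", df).fit()
print(np.exp(fit.params))   # odds ratios for each risk factor
```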

  12. Renormalization Group Tutorial

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.

    2004-01-01

    Complex physical systems sometimes have statistical behavior characterized by power-law dependence on the parameters of the system and spatial variability with no particular characteristic scale as the parameters approach critical values. The renormalization group (RG) approach was developed in the fields of statistical mechanics and quantum field theory to derive quantitative predictions of such behavior in cases where conventional methods of analysis fail. Techniques based on these ideas have since been extended to treat problems in many different fields, and in particular, the behavior of turbulent fluids. This lecture will describe a relatively simple but nontrivial example of the RG approach applied to the diffusion of photons out of a stellar medium when the photons have wavelengths near that of an emission line of atoms in the medium.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Akhan, Okan, E-mail: akhano@tr.net; Islim, Filiz, E-mail: fislim@yahoo.com; Balci, Sinan, E-mail: snnbalci@gmail.com

    Purpose: The purpose of our study is to evaluate results of percutaneous aspiration with alcohol sclerotherapy in symptomatic patients with simple hepatic cysts by employing single-session techniques either by a needle or a catheter. Materials and Methods: We retrospectively included 39 simple hepatic cysts in 35 patients treated via percutaneous aspiration and single-session alcohol sclerotherapy between 1993 and 2012. Indications were pain (n = 28) or ruling out cystic echinococcus (CE) disease (n = 7). Twenty-nine cysts in 26 patients were treated by the needle technique (Group A) and ten cysts in nine patients were treated by the single-session catheter technique (Group B). Patients were followed for 4–173 months (median: 38 months). Results: All patients were successfully treated. Before the procedure, cyst volumes were 21–676 cc (median: 94 cc). Post-procedure cyst volumes at last follow-up were 0–40 cc (median: 1 cc). The mean decrease in cyst volume was 95.92 ± 2.86 % in all patients (95.96 ± 3.26 % in Group A and 95.80 ± 6.20 % in Group B). There was no statistically significant difference between the volume reduction rates of Group A and Group B. Only one patient, in Group B, developed a major complication, an abscess. Hospitalization period was 1 day for all patients. Conclusions: For patients with symptomatic simple hepatic cysts smaller than 500 cc in volume, single-session alcohol sclerotherapy of 10 min using the puncture, aspiration, injection, and reaspiration (PAIR) technique with only a needle is a safe and effective procedure with a high success rate.

  14. Statistical Aspects of the North Atlantic Basin Tropical Cyclones: Trends, Natural Variability, and Global Warming

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.

    2007-01-01

    Statistical aspects of the North Atlantic basin tropical cyclones for the interval 1945-2005 are examined, including the variation of the yearly frequency of occurrence for various subgroups of storms (all tropical cyclones, hurricanes, major hurricanes, U.S. landfalling hurricanes, and category 4/5 hurricanes); the yearly variation of the mean latitude and longitude (genesis location) of all tropical cyclones and hurricanes; and the yearly variation of the mean peak wind speeds, lowest pressures, and durations for all tropical cyclones, hurricanes, and major hurricanes. Also examined is the relationship between inferred trends found in the North Atlantic basin tropical cyclonic activity and natural variability and global warming, the latter described using surface air temperatures from the Armagh Observatory, Armagh, Northern Ireland. Lastly, a simple statistical technique is employed to ascertain the expected level of North Atlantic basin tropical cyclonic activity for the upcoming 2007 season.

  15. Artificial neural network study on organ-targeting peptides

    NASA Astrophysics Data System (ADS)

    Jung, Eunkyoung; Kim, Junhyoung; Choi, Seung-Hoon; Kim, Minkyoung; Rhee, Hokyoung; Shin, Jae-Min; Choi, Kihang; Kang, Sang-Kee; Lee, Nam Kyung; Choi, Yun-Jaie; Jung, Dong Hyun

    2010-01-01

    We report a new approach to studying organ targeting of peptides on the basis of peptide sequence information. The positive control data sets consist of organ-targeting peptide sequences identified by the peroral phage-display technique for four organs, and the negative control data are prepared from random sequences. The capacity of our models to make appropriate predictions is validated by statistical indicators including sensitivity, specificity, enrichment curve, and the area under the receiver operating characteristic (ROC) curve (the ROC score). VHSE descriptor produces statistically significant training models and the models with simple neural network architectures show slightly greater predictive power than those with complex ones. The training and test set statistics indicate that our models could discriminate between organ-targeting and random sequences. We anticipate that our models will be applicable to the selection of organ-targeting peptides for generating peptide drugs or peptidomimetics.

  16. Using ontology network structure in text mining.

    PubMed

    Berndt, Donald J; McCart, James A; Luther, Stephen L

    2010-11-13

    Statistical text mining treats documents as bags of words, with a focus on term frequencies within documents and across document collections. Unlike natural language processing (NLP) techniques that rely on an engineered vocabulary or a full-featured ontology, statistical approaches do not make use of domain-specific knowledge. The freedom from biases can be an advantage, but at the cost of ignoring potentially valuable knowledge. The approach proposed here investigates a hybrid strategy based on computing graph measures of term importance over an entire ontology and injecting the measures into the statistical text mining process. As a starting point, we adapt existing search engine algorithms such as PageRank and HITS to determine term importance within an ontology graph. The graph-theoretic approach is evaluated using a smoking data set from the i2b2 National Center for Biomedical Computing, cast as a simple binary classification task for categorizing smoking-related documents, demonstrating consistent improvements in accuracy.
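
    A small sketch of the graph-measure idea described above: compute PageRank over a toy ontology graph and use the scores to weight term frequencies in a bag-of-words representation. The mini-ontology and counts are invented, and networkx is an assumed tool here, not necessarily the authors' implementation.

```python
# PageRank-weighted term importance over a toy ontology graph.
import networkx as nx

ontology = nx.DiGraph([
    ("substance_use", "smoking"), ("smoking", "cigarette"),
    ("smoking", "tobacco"), ("substance_use", "alcohol"),
])
term_importance = nx.pagerank(ontology)

counts = {"cigarette": 3, "tobacco": 1, "alcohol": 0}   # document term frequencies
weighted = {t: c * term_importance.get(t, 0.0) for t, c in counts.items()}
print(weighted)   # importance-weighted features fed to the text-mining step
```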

  17. Discrimination Method of the Volatiles from Fresh Mushrooms by an Electronic Nose Using a Trapping System and Statistical Standardization to Reduce Sensor Value Variation

    PubMed Central

    Fujioka, Kouki; Shimizu, Nobuo; Manome, Yoshinobu; Ikeda, Keiichi; Yamamoto, Kenji; Tomizawa, Yasuko

    2013-01-01

    Electronic noses have the benefit of obtaining smell information in a simple and objective manner, therefore, many applications have been developed for broad analysis areas such as food, drinks, cosmetics, medicine, and agriculture. However, measurement values from electronic noses have a tendency to vary under humidity or alcohol exposure conditions, since several types of sensors in the devices are affected by such variables. Consequently, we show three techniques for reducing the variation of sensor values: (1) using a trapping system to reduce the interfering components; (2) performing statistical standardization (calculation of z-score); and (3) selecting suitable sensors. With these techniques, we discriminated the volatiles of four types of fresh mushrooms: golden needle (Flammulina velutipes), white mushroom (Agaricus bisporus), shiitake (Lentinus edodes), and eryngii (Pleurotus eryngii) among six fresh mushrooms (hen of the woods (Grifola frondosa), shimeji (Hypsizygus marmoreus) plus the above mushrooms). Additionally, we succeeded in discriminating white mushroom when comparing only with artificial mushroom flavors, such as champignon flavor and truffle flavor. In conclusion, our techniques will expand the options to reduce variations in sensor values. PMID:24233028
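
    A sketch of the statistical standardization step: each reading is z-scored against the mean and standard deviation of its own sensor, so that sensor-to-sensor scale differences and drift matter less. The readings below are invented illustrative values.

```python
# Per-sensor z-score standardization of electronic-nose readings.
import numpy as np

# rows = samples, columns = sensors of the electronic nose (illustrative values)
readings = np.array([[1.20, 0.40, 3.1],
                     [1.35, 0.42, 3.4],
                     [1.10, 0.38, 2.9]])

z_scores = (readings - readings.mean(axis=0)) / readings.std(axis=0, ddof=1)
print(z_scores)
```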

  18. The "Guidewire-Coil"-Technique to prevent retrograde stone migration of ureteric calculi during intracorporeal lithothripsy.

    PubMed

    Dreger, Nici Markus; von Rundstedt, Friedrich Carl; Roth, Stephan; Brandt, Alexander Sascha; Degener, Stephan

    2017-01-05

    Stone retropulsion represents a challenge for intracorporeal lithotripsy of ureteral calculi. The consequences are an increased duration and cost of surgery as well as decreased stone-free rates. The use of additional tools to prevent proximal stone migration entails further costs and risks for ureteral injuries. We present the simple technique of using a coil of the routinely used guidewire to prevent stone retropulsion. We retrospectively evaluated all patients with mid-to-proximal ureteral stones in 2014, which were treated by ureteroscopic lithotripsy (Ho: YAG and/or pneumatic lithotripsy). The preoperative stone burden was routinely assessed using low dose CT scan (if available) and/or intravenous pyelogram. The study population consisted of 55 patients with 61 mid-to-proximal calculi. Twenty-seven patients underwent semirigid ureterorenoscopy using the "Guidewire-Coil-Technique"; the second group (n = 28) served as a control group, using the guidewire as usual. There was a statistically significant reduction of accidental stone retropulsion (2/27 vs. 8/28, p < 0.05) as well as a decreased use of auxiliary procedures (p < 0.05) compared to the control group. No difference was observed in operative time. One ureteral injury in the control group required a prolonged ureteral stenting. The "Guidewire-Coil-Technique" is a simple and safe procedure that may help to prevent proximal calculus migration and therefore may increase stone-free rates without causing additional costs.

  19. A simple iterative independent component analysis algorithm for vibration source signal identification of complex structures

    NASA Astrophysics Data System (ADS)

    Lee, Dong-Sup; Cho, Dae-Seung; Kim, Kookhyun; Jeon, Jae-Jin; Jung, Woo-Jin; Kang, Myeng-Hwan; Kim, Jae-Ho

    2015-01-01

    Independent Component Analysis (ICA), one of the blind source separation methods, can be applied for extracting unknown source signals only from received signals. This is accomplished by finding statistical independence of signal mixtures and has been successfully applied in myriad fields, such as medical science and image processing, among numerous others. Nevertheless, there are inherent problems that have been reported when using this technique: instability and invalid ordering of separated signals, particularly when using a conventional ICA technique in vibratory source signal identification of complex structures. In this study, a simple iterative algorithm of the conventional ICA has been proposed to mitigate these problems. The proposed method extracts more stable source signals with valid ordering by iteratively reordering the extracted mixing matrix to reconstruct the finally converged source signals, guided by the magnitudes of the correlation coefficients between the intermediately separated signals and the signals measured on or near the sources. In order to review the problems of the conventional ICA technique and to validate the proposed method, numerical analyses have been carried out for a virtual response model and a 30 m class submarine model. Moreover, in order to investigate the applicability of the proposed method to a real problem of a complex structure, an experiment has been carried out for a scaled submarine mockup. The results show that the proposed method could resolve the inherent problems of a conventional ICA technique.

  20. Patient's pain perception during mandibular molar extraction with articaine: a comparison study between infiltration and inferior alveolar nerve block.

    PubMed

    Bataineh, Anwar B; Alwarafi, Majid A

    2016-11-01

    The aim of this study was to investigate the effectiveness of a local anesthetic agent comprising 4 % articaine with 1:100,000 adrenaline, administered through an infiltration technique prior to the extraction of mandibular permanent first molar teeth. The study adopted a split mouth approach and involved patients who needed simple extractions of permanent mandibular first molar teeth on both sides. A combination of buccal and lingual infiltrations was used on one side, while the conventional inferior alveolar nerve block (IANB) technique, with a 1.8-ml cartridge of 4 % articaine with 1:100,000 epinephrine, was administered to the other. The patients' pain perception was assessed using visual analogue scale (VAS) and verbal rating scale (VRS) after the injection, followed by extraction. As a part of the study, 104 teeth were extracted from mouths of 52 patients. The difference in pain perception was statistically insignificant (p > .05) regarding the local anesthetic injection between the two techniques. The difference in pain perception regarding the extraction between the two techniques was also statistically insignificant (p > .05). No difference in pain perception between the two techniques among the study population was noted. This indicates that the extraction of permanent mandibular first molar teeth is possible without the administration of an IANB with the use of 4 % articaine with 1:100,000 epinephrine. The buccal and lingual infiltrations are slightly less painful than the conventional IANB technique.

  1. On estimating scale invariance in stratocumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Seze, Genevieve; Smith, Leonard A.

    1990-01-01

    Examination of cloud radiance fields derived from satellite observations sometimes indicates the existence of a range of scales over which the statistics of the field are scale invariant. Many methods were developed to quantify this scaling behavior in geophysics. The usefulness of such techniques depends both on the physics of the process being robust over a wide range of scales and on the availability of high resolution, low noise observations over these scales. These techniques (area-perimeter relation, distribution of areas, estimation of the capacity, d0, through box counting, correlation exponent) are applied to the high-resolution satellite data taken during the FIRE experiment and provide, by analyzing simple sets, initial estimates of the quality of data required. The results for the observed fields are contrasted with those for images of objects with known characteristics (e.g., dimension), where the details of the constructed image simulate current observational limits. Throughout, when cloud elements and cloud boundaries are mentioned, it should be clearly understood that structures in the radiance field are meant; all the boundaries considered are defined by simple threshold arguments.
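
    A hedged sketch of the box-counting estimate of the capacity d0 mentioned above, applied to a binary (thresholded) field: count occupied boxes at several box sizes and take the log-log slope. The mask here is random noise, used purely to show the mechanics rather than any cloud data.

```python
# Box-counting estimate of the capacity d0 for a binary mask (toy example).
import numpy as np

rng = np.random.default_rng(0)
mask = rng.random((256, 256)) > 0.7          # stand-in for a thresholded field

def box_count(mask, size):
    """Number of size x size boxes that contain at least one 'cloudy' pixel."""
    s = mask.shape[0] // size
    boxes = mask[:s * size, :s * size].reshape(s, size, s, size)
    return np.count_nonzero(boxes.any(axis=(1, 3)))

sizes = np.array([2, 4, 8, 16, 32])
counts = np.array([box_count(mask, s) for s in sizes])
d0 = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]   # log-log slope gives d0
print(round(d0, 2))
```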

  2. Development of Supersonic Combustion Experiments for CFD Modeling

    NASA Technical Reports Server (NTRS)

    Baurle, Robert; Bivolaru, Daniel; Tedder, Sarah; Danehy, Paul M.; Cutler, Andrew D.; Magnotti, Gaetano

    2007-01-01

    This paper describes the development of an experiment to acquire data for developing and validating computational fluid dynamics (CFD) models for turbulence in supersonic combusting flows. The intent is that the flow field would be simple yet relevant to flows within hypersonic air-breathing engine combustors undergoing testing in vitiated-air ground-testing facilities. Specifically, it describes development of laboratory-scale hardware to produce a supersonic combusting coaxial jet, discusses design calculations, operability and types of flames observed. These flames are studied using the dual-pump coherent anti-Stokes Raman spectroscopy (CARS) - interferometric Rayleigh scattering (IRS) technique. This technique simultaneously and instantaneously measures temperature, composition, and velocity in the flow, from which many of the important turbulence statistics can be found. Some preliminary CARS data are presented.

  3. Detection of Anomalies in Hydrometric Data Using Artificial Intelligence Techniques

    NASA Astrophysics Data System (ADS)

    Lauzon, N.; Lence, B. J.

    2002-12-01

    This work focuses on the detection of anomalies in hydrometric data sequences, such as 1) outliers, which are individual data having statistical properties that differ from those of the overall population; 2) shifts, which are sudden changes over time in the statistical properties of the historical records of data; and 3) trends, which are systematic changes over time in the statistical properties. For the purpose of the design and management of water resources systems, it is important to be aware of these anomalies in hydrometric data, for they can induce a bias in the estimation of water quantity and quality parameters. These anomalies may be viewed as specific patterns affecting the data, and therefore pattern recognition techniques can be used for identifying them. However, the number of possible patterns is very large for each type of anomaly and consequently large computing capacities are required to account for all possibilities using the standard statistical techniques, such as cluster analysis. Artificial intelligence techniques, such as the Kohonen neural network and fuzzy c-means, are clustering techniques commonly used for pattern recognition in several areas of engineering and have recently begun to be used for the analysis of natural systems. They require much less computing capacity than the standard statistical techniques, and therefore are well suited for the identification of outliers, shifts and trends in hydrometric data. This work constitutes a preliminary study, using synthetic data representing hydrometric data that can be found in Canada. The analysis of the results obtained shows that the Kohonen neural network and fuzzy c-means are reasonably successful in identifying anomalies. This work also addresses the problem of uncertainties inherent to the calibration procedures that fit the clusters to the possible patterns for both the Kohonen neural network and fuzzy c-means. Indeed, for the same database, different sets of clusters can be established with these calibration procedures. A simple method for analyzing uncertainties associated with the Kohonen neural network and fuzzy c-means is developed here. The method combines the results from several sets of clusters, either from the Kohonen neural network or fuzzy c-means, so as to provide an overall diagnosis as to the identification of outliers, shifts and trends. The results indicate an improvement in the performance for identifying anomalies when the method of combining cluster sets is used, compared with when only one cluster set is used.

  4. A simple biota removal algorithm for 35 GHz cloud radar measurements

    NASA Astrophysics Data System (ADS)

    Kalapureddy, Madhu Chandra R.; Sukanya, Patra; Das, Subrata K.; Deshpande, Sachin M.; Pandithurai, Govindan; Pazamany, Andrew L.; Ambuj K., Jha; Chakravarty, Kaustav; Kalekar, Prasad; Krishna Devisetty, Hari; Annam, Sreenivas

    2018-03-01

    Cloud radar reflectivity profiles can be an important measurement for the investigation of cloud vertical structure (CVS). However, extracting the intended meteorological cloud content from the measurement often demands an effective technique or algorithm that can reduce error and observational uncertainties in the recorded data. In this work, a technique is proposed to identify and separate cloud and non-hydrometeor echoes using the radar Doppler spectral moments profile measurements. The point and volume target-based theoretical radar sensitivity curves are used for removing the receiver noise floor, and identified radar echoes are scrutinized according to the signal decorrelation period. Here, it is hypothesized that cloud echoes are temporally more coherent and homogeneous and have a longer correlation period than biota. This can be checked statistically using the ~4 s sliding mean and standard deviation of the reflectivity profiles. This step helps screen out clouds by filtering out the biota. The final important step strives for the retrieval of cloud height. The proposed algorithm identifies cloud height through the systematic characterization of Z variability, using knowledge of the local atmospheric vertical structure in addition to theoretical, statistical and echo-tracing tools. Thus, characterization of high-resolution cloud radar reflectivity profile measurements has been done with the theoretical echo sensitivity curves and observed echo statistics for true cloud height tracking (TEST). TEST showed superior performance in screening out clouds and filtering out isolated insects. TEST constrained with polarimetric measurements was found to be more promising under high-density biota, whereas TEST combined with the linear depolarization ratio and spectral width is effective at filtering out biota within the highly turbulent shallow cumulus clouds in the convective boundary layer (CBL). The TEST technique is simple to implement but powerful in performance, owing to its flexibility in constraining, identifying and filtering out biota and screening out the true cloud content, especially CBL clouds. Therefore, the TEST algorithm is superior for screening out the low-level clouds that are strongly linked to the rain-making mechanism associated with the Indian Summer Monsoon region's CVS.
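
    A simplified sketch of the coherence check described above: reflectivity at a range gate is screened with a short sliding window, and gates whose values vary too wildly over roughly 4 s are treated as biota rather than cloud. The window length, threshold and data are illustrative assumptions, not the TEST algorithm's actual values.

```python
# Sliding-window standard-deviation screen for cloud vs. biota echoes (toy data).
import numpy as np

rng = np.random.default_rng(5)
n_t = 120                                     # time samples at one range gate
cloud_like = 10.0 + rng.normal(0, 0.5, n_t)   # coherent echo (dBZ)
biota_like = rng.normal(0, 6.0, n_t)          # incoherent, spiky echo (dBZ)

def is_cloud(z_series, window=4, std_max=2.0):
    """Flag the gate as cloud if every sliding-window std stays small."""
    stds = [np.std(z_series[i:i + window]) for i in range(len(z_series) - window)]
    return max(stds) < std_max

print(is_cloud(cloud_like), is_cloud(biota_like))   # typically: True False
```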

  5. Weighted Statistical Binning: Enabling Statistically Consistent Genome-Scale Phylogenetic Analyses

    PubMed Central

    Bayzid, Md Shamsuzzoha; Mirarab, Siavash; Boussau, Bastien; Warnow, Tandy

    2015-01-01

    Because biological processes can result in different loci having different evolutionary histories, species tree estimation requires multiple loci from across multiple genomes. While many processes can result in discord between gene trees and species trees, incomplete lineage sorting (ILS), modeled by the multi-species coalescent, is considered to be a dominant cause for gene tree heterogeneity. Coalescent-based methods have been developed to estimate species trees, many of which operate by combining estimated gene trees, and so are called "summary methods". Because summary methods are generally fast (and much faster than more complicated coalescent-based methods that co-estimate gene trees and species trees), they have become very popular techniques for estimating species trees from multiple loci. However, recent studies have established that summary methods can have reduced accuracy in the presence of gene tree estimation error, and also that many biological datasets have substantial gene tree estimation error, so that summary methods may not be highly accurate in biologically realistic conditions. Mirarab et al. (Science 2014) presented the "statistical binning" technique to improve gene tree estimation in multi-locus analyses, and showed that it improved the accuracy of MP-EST, one of the most popular coalescent-based summary methods. Statistical binning, which uses a simple heuristic to evaluate "combinability" and then uses the larger sets of genes to re-calculate gene trees, has good empirical performance, but using statistical binning within a phylogenomic pipeline does not have the desirable property of being statistically consistent. We show that weighting the re-calculated gene trees by the bin sizes makes statistical binning statistically consistent under the multispecies coalescent, and maintains the good empirical performance. Thus, "weighted statistical binning" enables highly accurate genome-scale species tree estimation, and is also statistically consistent under the multi-species coalescent model. New data used in this study are available at DOI: http://dx.doi.org/10.6084/m9.figshare.1411146, and the software is available at https://github.com/smirarab/binning. PMID:26086579

  6. Long-range prediction of Indian summer monsoon rainfall using data mining and statistical approaches

    NASA Astrophysics Data System (ADS)

    H, Vathsala; Koolagudi, Shashidhar G.

    2017-10-01

    This paper presents a hybrid model to better predict Indian summer monsoon rainfall. The algorithm considers suitable techniques for processing dense datasets. The proposed three-step algorithm comprises closed itemset generation-based association rule mining for feature selection, cluster membership for dimensionality reduction, and a simple logistic function for prediction. The application to classifying rainfall into flood, excess, normal, deficit, and drought categories, based on 36 predictors consisting of land and ocean variables, is presented. Results show good accuracy over the considered study period of 37 years (1969-2005).

  7. Nanoarchitectonics

    NASA Astrophysics Data System (ADS)

    Ariga, Katsuhiko; Aono, Masakazu

    2016-11-01

    The construction of functional systems with nanosized parts would not be possible by simple technology (nanotechnology). It can be handled by certain kinds of more sophisticated carpenter work or artistic architectonics (nanoarchitectonics). However, architecting materials at the nanoscale is not very simple because of various unexpected and uncontrollable thermal/statistical fluctuations and mutual interactions. The latter factors inevitably disturb the interactions between component building blocks. Therefore, several techniques and actions, including the regulation of atomic/molecular manipulation, molecular modification by organic chemistry, control of physicochemical interactions, self-assembly/organization, and application of external physical stimuli, must be well combined. This short review describes the historical backgrounds and essences of nanoarchitectonics, followed by a brief introduction of recent examples related to nanoarchitectonics. These examples are categorized in accordance with their physical usages: (i) atom/molecule control; (ii) devices and sensors; (iii) the other applications based on interfacial nanoarchitectonics.

  8. Forecasting in foodservice: model development, testing, and evaluation.

    PubMed

    Miller, J L; Thompson, P A; Orabella, M M

    1991-05-01

    This study was designed to develop, test, and evaluate mathematical models appropriate for forecasting menu-item production demand in foodservice. Data were collected from residence and dining hall foodservices at Ohio State University. Objectives of the study were to collect, code, and analyze the data; develop and test models using actual operation data; and compare forecasting results with current methods in use. Customer count was forecast using deseasonalized simple exponential smoothing. Menu-item demand was forecast by multiplying the count forecast by a predicted preference statistic. Forecasting models were evaluated using mean squared error, mean absolute deviation, and mean absolute percentage error techniques. All models were more accurate than current methods. A broad spectrum of forecasting techniques could be used by foodservice managers with access to a personal computer and spread-sheet and database-management software. The findings indicate that mathematical forecasting techniques may be effective in foodservice operations to control costs, increase productivity, and maximize profits.
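
    A toy Python version of this scheme, with invented numbers, deseasonalizes customer counts by day-of-week factors, applies simple exponential smoothing, multiplies the count forecast by a preference proportion to obtain menu-item demand, and reports the three error measures named above; the smoothing constant and preference value are assumptions.

        # Toy sketch: deseasonalized simple exponential smoothing of customer
        # counts, menu-item demand = count forecast x preference statistic,
        # and MSE / MAD / MAPE error summaries.  All numbers are illustrative.
        import numpy as np

        counts = np.array([520, 480, 510, 530, 700, 300, 290] * 4, float)  # 4 weeks
        day = np.arange(counts.size) % 7
        season = np.array([counts[day == d].mean() for d in range(7)]) / counts.mean()
        deseason = counts / season[day]          # remove the day-of-week pattern

        alpha, level = 0.3, deseason[0]          # smoothing constant (assumed)
        forecasts = []
        for x in deseason:
            forecasts.append(level)              # one-step-ahead forecast
            level = alpha * x + (1 - alpha) * level
        count_forecast = np.array(forecasts) * season[day]   # re-seasonalize

        pref = 0.18                              # predicted preference statistic
        item_forecast = pref * count_forecast
        actual_item = pref * counts + np.random.default_rng(1).normal(0, 5, counts.size)

        err = actual_item - item_forecast
        print("MSE :", np.mean(err**2))
        print("MAD :", np.mean(np.abs(err)))
        print("MAPE:", 100 * np.mean(np.abs(err / actual_item)))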

  9. Thermal radiation characteristics of nonisothermal cylindrical enclosures using a numerical ray tracing technique

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1990-01-01

    Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.

  10. Characterizing multi-photon quantum interference with practical light sources and threshold single-photon detectors

    NASA Astrophysics Data System (ADS)

    Navarrete, Álvaro; Wang, Wenyuan; Xu, Feihu; Curty, Marcos

    2018-04-01

    The experimental characterization of multi-photon quantum interference effects in optical networks is essential in many applications of photonic quantum technologies, which include quantum computing and quantum communication as two prominent examples. However, such characterization often requires technologies which are beyond our current experimental capabilities, and today's methods suffer from errors due to the use of imperfect sources and photodetectors. In this paper, we introduce a simple experimental technique to characterize multi-photon quantum interference by means of practical laser sources and threshold single-photon detectors. Our technique is based on well-known methods in quantum cryptography which use decoy settings to tightly estimate the statistics provided by perfect devices. As an illustration of its practicality, we use this technique to obtain a tight estimation of both the generalized Hong‑Ou‑Mandel dip in a beamsplitter with six input photons and the three-photon coincidence probability at the output of a tritter.

  11. Persistence Mapping Using EUV Solar Imager Data

    NASA Technical Reports Server (NTRS)

    Thompson, B. J.; Young, C. A.

    2016-01-01

    We describe a simple image processing technique that is useful for the visualization and depiction of gradually evolving or intermittent structures in solar physics extreme-ultraviolet imagery. The technique is an application of image segmentation, which we call "Persistence Mapping," to isolate extreme values in a data set, and is particularly useful for the problem of capturing phenomena that are evolving in both space and time. While integration or "time-lapse" imaging uses the full sample (of size N ), Persistence Mapping rejects (N - 1)/N of the data set and identifies the most relevant 1/N values using the following rule: if a pixel reaches an extreme value, it retains that value until that value is exceeded. The simplest examples isolate minima and maxima, but any quantile or statistic can be used. This paper demonstrates how the technique has been used to extract the dynamics in long-term evolution of comet tails, erupting material, and EUV dimming regions.
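
    The rule "if a pixel reaches an extreme value, it retains that value until that value is exceeded" is a running maximum (or minimum) along the time axis of an image cube, so a persistence map takes only a couple of numpy calls in Python; the synthetic cube below merely stands in for EUV frames.

        # Persistence Mapping as a running extreme along the time axis of an
        # image cube shaped (time, y, x).  Synthetic data stand in for EUV frames.
        import numpy as np

        rng = np.random.default_rng(0)
        cube = rng.normal(size=(50, 64, 64))        # 50 frames of 64x64 pixels

        # Running maximum: each pixel keeps the brightest value seen so far.
        running_max = np.maximum.accumulate(cube, axis=0)

        # The final frame of the accumulation is the persistence map itself.
        pmap_bright = running_max[-1]               # brightest-ever value per pixel
        pmap_dark = np.minimum.accumulate(cube, axis=0)[-1]   # for dimming regions

        print(pmap_bright.shape, pmap_dark.shape)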

  12. Investigating effects of communications modulation technique on targeting performance

    NASA Astrophysics Data System (ADS)

    Blasch, Erik; Eusebio, Gerald; Huling, Edward

    2006-05-01

    One of the key challenges facing the global war on terrorism (GWOT) and urban operations is the increased need for rapid and diverse information from distributed sources. For users to get adequate information on target types and movements, they would need reliable data. In order to facilitate reliable computational intelligence, we seek to explore the communication modulation tradeoffs affecting information distribution and accumulation. In this analysis, we explore the modulation techniques of Orthogonal Frequency Division Multiplexing (OFDM), Direct Sequence Spread Spectrum (DSSS), and statistical time-division multiple access (TDMA) as a function of the bit error rate and jitter that affect targeting performance. In the analysis, we simulate a Link 16 with a simple bandpass frequency shift keying (PSK) technique using different Signal-to-Noise ratios. The communications transfer delay and accuracy tradeoffs are assessed as to the effects incurred in targeting performance.

  13. Maximizing Macromolecule Crystal Size for Neutron Diffraction Experiments

    NASA Technical Reports Server (NTRS)

    Judge, R. A.; Kephart, R.; Leardi, R.; Myles, D. A.; Snell, E. H.; vanderWoerd, M.; Curreri, Peter A. (Technical Monitor)

    2002-01-01

    A challenge in neutron diffraction experiments is growing large (greater than 1 cu mm) macromolecule crystals. In taking up this challenge we have used statistical experiment design techniques to quickly identify crystallization conditions under which the largest crystals grow. These techniques provide the maximum information for minimal experimental effort, allowing optimal screening of crystallization variables in a simple experimental matrix, using the minimum amount of sample. Analysis of the results quickly tells the investigator what conditions are the most important for the crystallization. These can then be used to maximize the crystallization results in terms of reducing crystal numbers and providing large crystals of suitable habit. We have used these techniques to grow large crystals of Glucose isomerase. Glucose isomerase is an industrial enzyme used extensively in the food industry for the conversion of glucose to fructose. The aim of this study is the elucidation of the enzymatic mechanism at the molecular level. The accurate determination of hydrogen positions, which is critical for this, is a requirement that neutron diffraction is uniquely suited for. Preliminary neutron diffraction experiments with these crystals conducted at the Institute Laue-Langevin (Grenoble, France) reveal diffraction to beyond 2.5 angstrom. Macromolecular crystal growth is a process involving many parameters, and statistical experimental design is naturally suited to this field. These techniques are sample independent and provide an experimental strategy to maximize crystal volume and habit for neutron diffraction studies.

  14. Housing decision making methods for initiation development phase process

    NASA Astrophysics Data System (ADS)

    Zainal, Rozlin; Kasim, Narimah; Sarpin, Norliana; Wee, Seow Ta; Shamsudin, Zarina

    2017-10-01

    Late delivery and sick housing project problems have been attributed to poor decision making. These problems stem from housing developers who prefer to create their own approach based on their experiences and expertise, taking the simplest route of merely applying the available standards and rules in decision making. This paper seeks to identify the decision-making methods used for housing development at the initiation phase in Malaysia. The research employed the Delphi method, using a questionnaire survey that involved 50 developers as the sample for the first stage of data collection. However, only 34 developers contributed to the second stage of the information-gathering process, and only 12 developers remained for the final data collection process. The findings affirm that Malaysian developers prefer to make their investment decisions based on simple interpolation of historical data, using simple statistical or mathematical techniques to produce the required reports. The results suggest that they tend to skip several important decision-making functions at the primary development stage. These shortcomings were mainly due to time and financial constraints and the lack of statistical or mathematical expertise among the professional and management groups in the developer organisations.

  15. A Simple Illustration for the Need of Multiple Comparison Procedures

    ERIC Educational Resources Information Center

    Carter, Rickey E.

    2010-01-01

    Statistical adjustments to accommodate multiple comparisons are routinely covered in introductory statistical courses. The fundamental rationale for such adjustments, however, may not be readily understood. This article presents a simple illustration to help remedy this.
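
    One simple illustration of the underlying problem (not necessarily the article's own example) is a small Python simulation under the null hypothesis: with ten unadjusted tests the chance of at least one false positive climbs to roughly 40%, while a Bonferroni adjustment holds it near the nominal 5%. All numbers below are illustrative.

        # Family-wise error rate with and without a Bonferroni adjustment,
        # estimated by simulation when no real effects exist.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n_sim, n_tests, n, alpha = 2000, 10, 30, 0.05
        any_raw = any_bonf = 0

        for _ in range(n_sim):
            # Ten independent two-sample comparisons, all with equal true means.
            p = np.array([
                stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
                for _ in range(n_tests)
            ])
            any_raw += (p < alpha).any()
            any_bonf += (p < alpha / n_tests).any()   # Bonferroni threshold

        print("P(>=1 false positive), unadjusted:", any_raw / n_sim)   # ~0.40
        print("P(>=1 false positive), Bonferroni:", any_bonf / n_sim)  # ~0.05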

  16. Evidence of complex contagion of information in social media: An experiment using Twitter bots.

    PubMed

    Mønsted, Bjarke; Sapieżyński, Piotr; Ferrara, Emilio; Lehmann, Sune

    2017-01-01

    It has recently become possible to study the dynamics of information diffusion in techno-social systems at scale, due to the emergence of online platforms, such as Twitter, with millions of users. One question that systematically recurs is whether information spreads according to simple or complex dynamics: does each exposure to a piece of information have an independent probability of a user adopting it (simple contagion), or does this probability depend instead on the number of sources of exposure, increasing above some threshold (complex contagion)? Most studies to date are observational and, therefore, unable to disentangle the effects of confounding factors such as social reinforcement, homophily, limited attention, or network community structure. Here we describe a novel controlled experiment that we performed on Twitter using 'social bots' deployed to carry out coordinated attempts at spreading information. We propose two Bayesian statistical models describing simple and complex contagion dynamics, and test the competing hypotheses. We provide experimental evidence that the complex contagion model describes the observed information diffusion behavior more accurately than simple contagion. Future applications of our results include more effective defenses against malicious propaganda campaigns on social media, improved marketing and advertisement strategies, and design of effective network intervention techniques.

  17. Statistical normalization techniques for magnetic resonance imaging.

    PubMed

    Shinohara, Russell T; Sweeney, Elizabeth M; Goldsmith, Jeff; Shiee, Navid; Mateen, Farrah J; Calabresi, Peter A; Jarso, Samson; Pham, Dzung L; Reich, Daniel S; Crainiceanu, Ciprian M

    2014-01-01

    While computed tomography and other imaging techniques are measured in absolute units with physical meaning, magnetic resonance images are expressed in arbitrary units that are difficult to interpret and differ between study visits and subjects. Much work in the image processing literature on intensity normalization has focused on histogram matching and other histogram mapping techniques, with little emphasis on normalizing images to have biologically interpretable units. Furthermore, there are no formalized principles or goals for the crucial comparability of image intensities within and across subjects. To address this, we propose a set of criteria necessary for the normalization of images. We further propose simple and robust biologically motivated normalization techniques for multisequence brain imaging that have the same interpretation across acquisitions and satisfy the proposed criteria. We compare the performance of different normalization methods in thousands of images of patients with Alzheimer's disease, hundreds of patients with multiple sclerosis, and hundreds of healthy subjects obtained in several different studies at dozens of imaging centers.

  18. A scanning electron microscope study and statistical analysis of adipocyte morphology in lipofilling: comparing the effects of harvesting and purification procedures with 2 different techniques.

    PubMed

    Rubino, Corrado; Mazzarello, Vittorio; Faenza, Mario; Montella, Andrea; Santanelli, Fabio; Farace, Francesco

    2015-06-01

    The aim of this study was to evaluate the effects on adipocyte morphology of 2 techniques of fat harvesting and of fat purification in lipofilling, considering that the number of viable healthy adipocytes is important for fat survival in the recipient areas of lipofilling. Fat harvesting was performed in 10 female patients from the flanks, on one side with a 2-mm Coleman cannula and on the other side with a 3-mm Mercedes cannula. Thirty milliliters of fat tissue from each side was collected and divided into three 10 mL syringes: A, B, and C. The fat inside syringe A was left untreated, the fat in syringe B underwent simple sedimentation, and the fat inside syringe C underwent centrifugation at 3000 rpm for 3 minutes. Each fat graft specimen was processed for examination under a low-vacuum scanning electron microscope. Adipocyte diameter (μ), the number of adipocytes per square millimeter, and the number of altered adipocytes per square millimeter were evaluated. Untreated specimens harvested with the 2 different techniques were first compared; then sedimented versus centrifuged specimens harvested with the same technique were compared. Statistical analysis was performed using the Wilcoxon signed-rank test. The number of adipocytes per square millimeter was statistically higher in specimens harvested with the 3-mm Mercedes cannula (P = 0.0310). The number of altered cells was statistically higher in centrifuged specimens than in sedimented ones for both methods of fat harvesting: P = 0.0080 with the 2-mm Coleman cannula and P = 0.0050 with the 3-mm Mercedes cannula. Alterations in adipocyte morphology consisted of wrinkling of the membrane, opening of pores with leakage of oily material, reduction of cellular diameter, and total collapse of the cellular membrane. Fat harvesting with a 3-mm cannula results in a higher number of adipocytes, and centrifugation of the harvested fat results in a higher number of morphologically altered cells than sedimentation.
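
    For paired, within-patient comparisons such as sedimented versus centrifuged counts, the Wilcoxon signed-rank test used in this study takes only a few lines in Python; the counts below are invented purely for illustration.

        # Paired (within-patient) comparison with the Wilcoxon signed-rank test.
        # Counts of altered adipocytes per square millimeter are invented.
        from scipy import stats

        sedimented  = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
        centrifuged = [18, 14, 20, 17, 13, 21, 15, 19, 16, 18]

        stat, p_value = stats.wilcoxon(sedimented, centrifuged)
        print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")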

  19. Comparing and combining process-based crop models and statistical models with some implications for climate change

    NASA Astrophysics Data System (ADS)

    Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram

    2017-09-01

    We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.

  20. A methodology for treating missing data applied to daily rainfall data in the Candelaro River Basin (Italy).

    PubMed

    Lo Presti, Rossella; Barca, Emanuele; Passarella, Giuseppe

    2010-01-01

    Environmental time series are often affected by the presence of missing data, and when dealing with data statistically, the need to fill the gaps by estimating the missing values must be considered. At present, a large number of statistical techniques are available to achieve this objective; they range from very simple methods, such as using the sample mean, to very sophisticated ones, such as multiple imputation. A new methodology for missing data estimation is proposed, which tries to merge the obvious advantages of the simplest techniques (e.g. their ease of implementation) with the strength of the newest techniques. The proposed method consists of two consecutive stages: once it has been ascertained that a specific monitoring station is affected by missing data, the "most similar" monitoring stations are identified among neighbouring stations on the basis of a suitable similarity coefficient; in the second stage, a regressive method is applied in order to estimate the missing data. In this paper, four different regressive methods are applied and compared, in order to determine which is the most reliable for filling the gaps, using rainfall data series measured in the Candelaro River Basin in southern Italy.
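
    A compact Python rendering of the two-stage idea, with synthetic rainfall series and hypothetical station names, picks the neighbouring station most correlated with the gappy one and fills the gaps with a least-squares regression fitted on the overlapping records; the paper itself compares four regressive variants, only the simplest of which is sketched here.

        # Two-stage gap filling: (1) pick the most similar neighbouring station
        # by correlation, (2) regress the target on it and impute the gaps.
        # Station names and rainfall values are synthetic.
        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        base = rng.gamma(2.0, 3.0, size=365)
        df = pd.DataFrame({
            "target": base + rng.normal(0, 1, 365),
            "stn_A":  base + rng.normal(0, 2, 365),
            "stn_B":  0.8 * base + rng.normal(0, 1, 365),
            "stn_C":  rng.gamma(2.0, 3.0, size=365),      # unrelated station
        })
        df.loc[rng.choice(365, 40, replace=False), "target"] = np.nan   # gaps

        # Stage 1: most similar station (highest correlation on shared records).
        neighbours = ["stn_A", "stn_B", "stn_C"]
        best = max(neighbours, key=lambda s: df["target"].corr(df[s]))

        # Stage 2: simple linear regression on the overlap, then imputation.
        obs = df["target"].notna()
        slope, intercept = np.polyfit(df.loc[obs, best], df.loc[obs, "target"], 1)
        filled = df["target"].fillna(intercept + slope * df[best])

        print("most similar station:", best, "| remaining gaps:", filled.isna().sum())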

  1. Sampling methods to the statistical control of the production of blood components.

    PubMed

    Pereira, Paulo; Seghatchian, Jerard; Caldeira, Beatriz; Santos, Paula; Castro, Rosa; Fernandes, Teresa; Xavier, Sandra; de Sousa, Gracinda; de Almeida E Sousa, João Paulo

    2017-12-01

    The control of blood component specifications is a requirement generalized in Europe by the European Commission Directives and in the US by the AABB standards. The use of a statistical process control methodology is recommended in the related literature, including the EDQM guideline. The reliability of this control depends on the sampling. However, a correct sampling methodology does not seem to be systematically applied. Commonly, sampling is intended merely to comply with the 1% specification for the produced blood components. Nevertheless, from a purely statistical viewpoint, this model can be argued not to correspond to a consistent sampling technique. This could be a severe limitation in detecting abnormal patterns and in assuring that the production has a non-significant probability of yielding nonconforming components. This article discusses what is happening in blood establishments. Three statistical methodologies are proposed: simple random sampling, sampling based on the proportion of a finite population, and sampling based on the inspection level. The empirical results demonstrate that these models are practicable in blood establishments, contributing to the robustness of sampling and of the related statistical process control decisions for the purpose for which they are suggested. Copyright © 2017 Elsevier Ltd. All rights reserved.
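
    One of the three proposals, sampling based on the proportion of a finite population, can be sketched in Python with the usual normal-approximation sample-size formula plus a finite-population correction; the 1% nonconformity rate, the margin, and the lot sizes below are placeholders rather than values taken from the article.

        # Sample size for estimating a proportion in a finite lot of blood
        # components: normal-approximation formula with a finite-population
        # correction.  Inputs are illustrative placeholders.
        import math

        def sample_size_proportion(N, p=0.01, margin=0.01, z=1.96):
            """n needed to estimate proportion p in a lot of N units within +/- margin."""
            n0 = z**2 * p * (1 - p) / margin**2        # infinite-population size
            return math.ceil(n0 / (1 + (n0 - 1) / N))  # finite-population correction

        for lot in (500, 2000, 10000):
            print(lot, "units ->", sample_size_proportion(lot), "components to test")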

  2. Non-parametric PCM to ADM conversion. [Pulse Code to Adaptive Delta Modulation

    NASA Technical Reports Server (NTRS)

    Locicero, J. L.; Schilling, D. L.

    1977-01-01

    An all-digital technique to convert pulse code modulated (PCM) signals into adaptive delta modulation (ADM) format is presented. The converter developed is shown to be independent of the statistical parameters of the encoded signal and can be constructed with only standard digital hardware. The structure of the converter is simple enough to be fabricated on a large scale integrated circuit where the advantages of reliability and cost can be optimized. A concise evaluation of this PCM to ADM translation technique is presented and several converters are simulated on a digital computer. A family of performance curves is given which displays the signal-to-noise ratio for sinusoidal test signals subjected to the conversion process, as a function of input signal power for several ratios of ADM rate to Nyquist rate.

  3. Laparoscopic versus open pyloromyotomy for infantile hypertrophic pyloric stenosis: an early experience.

    PubMed

    Saha, N; Saha, D K; Rahman, M A; Aziz, M A; Islam, M K

    2012-07-01

    This prospective comparative study was conducted as an initial experience in the Department of Pediatric Surgery, Dhaka Shishu (Children) Hospital, during the period December 2007 to January 2009, with infants of 2-12 weeks of age diagnosed with hypertrophic pyloric stenosis. Patient selection was done by a simple random technique (lottery). The conventional method was used for open pyloromyotomy and a three-trocar technique was applied for laparoscopic pyloromyotomy. Among 60 cases of infantile hypertrophic pyloric stenosis, 30 cases were finally selected for analysis in each of the laparoscopic (Group A) and open pyloromyotomy (Group B) groups. Patients were studied with respect to operative time, time required to reach full feeds after operation, postoperative hospital stay, and both peroperative and postoperative complications. The mean±SD operating time (in minutes) was 61.59±51.73 in Group A and 28.33±8.40 in Group B (p = 0.001); this difference was statistically significant. The mean±SD time (in hours) to full feeds (ad libitum) was 35.00±31.70 in Group A compared with 28.95±10.99 in Group B (p = 0.342), which was not statistically significant. The mean±SD total postoperative hospital stay (in days) was 3.09±2.25 in the laparoscopic group and 2.58±1.15 in the open pyloromyotomy group (p = 0.355), which was also not statistically significant. Regarding complications, peroperatively 6 (19.5%) patients developed haemorrhage, 1 (3.33%) had mucosal perforation, and 4 (13.36%) developed duodenal serosal injury in the laparoscopic group, whereas in the open pyloromyotomy group only 1 (3.33%) patient had a complication, which was simple haemorrhage; this difference was not statistically significant (p = 0.051). Regarding postoperative complications, in Group A 2 (6.6%) patients developed wound haematoma, 2 (6.6%) had wound infection, and 1 (3.33%) each developed wound dehiscence and incisional hernia, whereas in Group B no patient had any complication; this result was also statistically insignificant. Overall, the study results indicate that, for beginners, laparoscopic pyloromyotomy should not be considered superior to, or as safe as, traditional open pyloromyotomy.

  4. Comparative assessment of several post-processing methods for correcting evapotranspiration forecasts derived from TIGGE datasets.

    NASA Astrophysics Data System (ADS)

    Tian, D.; Medina, H.

    2017-12-01

    Post-processing of medium-range reference evapotranspiration (ETo) forecasts based on numerical weather prediction (NWP) models has the potential to improve the quality and utility of these forecasts. This work compares the performance of several post-processing methods for correcting ETo forecasts over the continental U.S. generated from The Observing System Research and Predictability Experiment (THORPEX) Interactive Grand Global Ensemble (TIGGE) database, using data from Europe (EC), the United Kingdom (MO), and the United States (NCEP). The post-processing techniques considered are simple bias correction, the use of multimodels, Ensemble Model Output Statistics (EMOS, Gneiting et al., 2005), and Bayesian Model Averaging (BMA, Raftery et al., 2005). ETo estimates based on quality-controlled U.S. Regional Climate Reference Network measurements, computed with the FAO 56 Penman-Monteith equation, are adopted as the baseline. EMOS and BMA are generally the most efficient post-processing techniques for the ETo forecasts. Nevertheless, a simple bias correction of the best model is commonly much more rewarding than using raw multimodel forecasts. Our results demonstrate the potential of different forecasting and post-processing frameworks in operational evapotranspiration and irrigation advisory systems at the national scale.
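
    Of the methods compared, the simple bias correction is the easiest to illustrate in Python: the mean forecast error over a training window is subtracted from subsequent forecasts. The series, the imposed bias, and the 60-day training window below are synthetic assumptions, not TIGGE data.

        # Simple (additive) bias correction of an ensemble-mean ETo forecast:
        # the mean error over a training window is removed from later forecasts.
        import numpy as np

        rng = np.random.default_rng(0)
        days = 120
        obs = 4.0 + 1.5 * np.sin(np.linspace(0, 3, days)) + rng.normal(0, 0.3, days)
        fcst = obs + 0.8 + rng.normal(0, 0.4, days)    # forecasts with a +0.8 bias

        train = slice(0, 60)                           # 60-day training window
        bias = np.mean(fcst[train] - obs[train])       # estimated systematic error
        corrected = fcst[60:] - bias

        mae = lambda a, b: np.mean(np.abs(a - b))
        print("raw MAE      :", mae(fcst[60:], obs[60:]))
        print("corrected MAE:", mae(corrected, obs[60:]))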

  5. Recent applications of boxed molecular dynamics: a simple multiscale technique for atomistic simulations.

    PubMed

    Booth, Jonathan; Vazquez, Saulo; Martinez-Nunez, Emilio; Marks, Alison; Rodgers, Jeff; Glowacki, David R; Shalashilin, Dmitrii V

    2014-08-06

    In this paper, we briefly review the boxed molecular dynamics (BXD) method, which allows analysis of thermodynamics and kinetics in complicated molecular systems. BXD is a multiscale technique, in which thermodynamics and long-time dynamics are recovered from a set of short-time simulations. In this paper, we review previous applications of BXD to peptide cyclization, solution phase organic reaction dynamics and desorption of ions from self-assembled monolayers (SAMs). We also report preliminary results of simulations of diamond etching mechanisms and protein unfolding in atomic force microscopy experiments. The latter demonstrate a correlation between the protein's structural motifs and its potential of mean force. Simulations of these processes by standard molecular dynamics (MD) are typically not possible, because the experimental time scales are very long. However, BXD yields well-converged and physically meaningful results. Compared with other methods of accelerated MD, our BXD approach is very simple; it is easy to implement, and it provides an integrated approach for simultaneously obtaining both thermodynamics and kinetics. It also provides a strategy for obtaining statistically meaningful dynamical results in regions of configuration space that standard MD approaches would visit only very rarely.

  6. Effects of a new mild shampoo for preventing hair loss in Asian by a simple hand-held phototrichogram technique.

    PubMed

    Baek, J H; Lee, S Y; Yoo, M; Park, W-S; Lee, S J; Boo, Y C; Koh, J-S

    2011-12-01

    This study was conducted to evaluate the effects of a commercially available shampoo in Korean subjects with alopecia, using a simple hand-held phototrichogram technique. Forty-four subjects with alopecia were enrolled and forty subjects continued for 16 weeks. In the test group, total hair counts increased significantly at weeks 8 and 16, and the number of shedding hairs decreased significantly at week 16. Terminal hair counts increased significantly at week 8. In the control group, hair thickness and the number of vellus hairs decreased significantly at week 16. The number of total hairs increased significantly more in the test group than in the control group at weeks 8 and 16. The number of shedding hairs decreased significantly more in the test group than in the control group at week 16. Visual assessment using clinical digital images showed that the number of total hairs appeared to increase, although this was not statistically significant. In this study, it was found that the test shampoo could prevent hair loss. © 2011 DERMAPRO Co Ltd. ICS © 2011 Society of Cosmetic Scientists and the Société Française de Cosmétologie.

  7. Assuring reliability program effectiveness.

    NASA Technical Reports Server (NTRS)

    Ball, L. W.

    1973-01-01

    An attempt is made to provide simple identification and description of techniques that have proved to be most useful either in developing a new product or in improving reliability of an established product. The first reliability task is obtaining and organizing parts failure rate data. Other tasks are parts screening, tabulation of general failure rates, preventive maintenance, prediction of new product reliability, and statistical demonstration of achieved reliability. Five principal tasks for improving reliability involve the physics of failure research, derating of internal stresses, control of external stresses, functional redundancy, and failure effects control. A final task is the training and motivation of reliability specialist engineers.

  8. [Conservative anal fistula treatment with collagenic plug and human fibrin sealant. Preliminary results].

    PubMed

    Gubitosi, A; Moccia, G; Malinconico, F A; Docimo, G; Ruggiero, R; Iside, G; Avenia, N; Docimo, L; Foroni, F; Gilio, F; Sparavigna, L; Agresti, M

    2009-01-01

    The authors, on the basis of long clinical experience with human fibrin glue in general surgery, compared two different collagen extracellular matrices, Surgisis and TissueDura, combined with human fibrin glue applied during the operation and sometimes postoperatively, to obtain healing of perianal fistulas. According to the proposed rationale, the collagen extracellular matrix provides an optimal three-dimensional structure for fibroblast implantation and neoangiogenesis, and hence for fibrosis and closure of the fistula. The encouraging results for transsphincteric fistulas, together with the simplicity of the technique, encourage further research on statistically significant samples.

  9. An Application of Epidemiological Modeling to Information Diffusion

    NASA Astrophysics Data System (ADS)

    McCormack, Robert; Salter, William

    Messages often spread within a population through unofficial - particularly web-based - media. Such ideas have been termed "memes." To impede the flow of terrorist messages and to promote counter messages within a population, intelligence analysts must understand how messages spread. We used statistical language processing technologies to operationalize "memes" as latent topics in electronic text and applied epidemiological techniques to describe and analyze patterns of message propagation. We developed our methods and applied them to English-language newspapers and blogs in the Arab world. We found that a relatively simple epidemiological model can reproduce some dynamics of observed empirical relationships.
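
    A "relatively simple epidemiological model" of the kind referred to here is the classic SIR compartment model; the Python sketch below integrates it with an Euler scheme using made-up exposure and forgetting rates, not parameters fitted to the newspaper or blog data.

        # Minimal SIR (susceptible-infected-recovered) model of message spread,
        # integrated with a simple Euler scheme.  Rates are illustrative only.
        beta, gamma = 0.4, 0.1          # exposure and "forgetting" rates (assumed)
        S, I, R = 0.999, 0.001, 0.0     # fractions of the population
        dt, steps = 0.1, 1000

        history = []
        for _ in range(steps):
            new_inf = beta * S * I * dt     # newly "infected" by the meme
            new_rec = gamma * I * dt        # those who stop propagating it
            S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
            history.append(I)

        print("peak fraction actively spreading the meme:", max(history))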

  10. A methodology for the design of experiments in computational intelligence with multiple regression models.

    PubMed

    Fernandez-Lozano, Carlos; Gestal, Marcos; Munteanu, Cristian R; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results differ for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is relevant to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable.

  11. A methodology for the design of experiments in computational intelligence with multiple regression models

    PubMed Central

    Gestal, Marcos; Munteanu, Cristian R.; Dorado, Julian; Pazos, Alejandro

    2016-01-01

    The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on a correct comparison between the results provided by different methods, as those techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented in an R package called RRegrs. This package includes ten simple and complex regression models to carry out predictive modeling using Machine Learning and well-known regression algorithms. The framework for experimental design presented herein is evaluated and validated against RRegrs. Our results differ for three out of five state-of-the-art simple datasets, and it can be stated that the selection of the best model according to our proposal is statistically significant and relevant. It is relevant to use a statistical approach to indicate whether the differences are statistically significant when using this kind of algorithm. Furthermore, our results with three real complex datasets report different best models than the previously published methodology. Our final goal is to provide a complete methodology for the use of different steps in order to compare the results obtained in Computational Intelligence problems, as well as in other fields, such as bioinformatics, cheminformatics, etc., given that our proposal is open and modifiable. PMID:27920952

  12. 21 CFR 820.250 - Statistical techniques.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Statistical techniques. 820.250 Section 820.250...) MEDICAL DEVICES QUALITY SYSTEM REGULATION Statistical Techniques § 820.250 Statistical techniques. (a... statistical techniques required for establishing, controlling, and verifying the acceptability of process...

  13. 21 CFR 820.250 - Statistical techniques.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Statistical techniques. 820.250 Section 820.250...) MEDICAL DEVICES QUALITY SYSTEM REGULATION Statistical Techniques § 820.250 Statistical techniques. (a... statistical techniques required for establishing, controlling, and verifying the acceptability of process...

  14. Adaptive correction of ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Pelosi, Anna; Battista Chirico, Giovanni; Van den Bergh, Joris; Vannitsem, Stephane

    2017-04-01

    Forecasts from numerical weather prediction (NWP) models often suffer from both systematic and non-systematic errors. These are present in both deterministic and ensemble forecasts, and originate from various sources such as model error and subgrid variability. Statistical post-processing techniques can partly remove such errors, which is particularly important when NWP outputs concerning surface weather variables are employed for site-specific applications. Many different post-processing techniques have been developed. For deterministic forecasts, adaptive methods such as the Kalman filter are often used, which sequentially post-process the forecasts by continuously updating the correction parameters as new ground observations become available. These methods are especially valuable when long training data sets do not exist. For ensemble forecasts, well-known techniques are ensemble model output statistics (EMOS), and so-called "member-by-member" approaches (MBM). Here, we introduce a new adaptive post-processing technique for ensemble predictions. The proposed method is a sequential Kalman filtering technique that fully exploits the information content of the ensemble. One correction equation is retrieved and applied to all members; however, the parameters of the regression equations are retrieved by exploiting the second-order statistics of the forecast ensemble. We compare our new method with two other techniques: a simple method that makes use of a running bias correction of the ensemble mean, and an MBM post-processing approach that rescales the ensemble mean and spread, based on minimization of the Continuous Ranked Probability Score (CRPS). We perform a verification study for the region of Campania in southern Italy. We use two years (2014-2015) of daily meteorological observations of 2-meter temperature and 10-meter wind speed from 18 ground-based automatic weather stations distributed across the region, comparing them with the corresponding COSMO-LEPS ensemble forecasts. Deterministic verification scores (e.g., mean absolute error, bias) and probabilistic scores (e.g., CRPS) are used to evaluate the post-processing techniques. We conclude that the new adaptive method outperforms the simpler running bias correction. The proposed adaptive method often outperforms the MBM method in removing bias. The MBM method has the advantage of correcting the ensemble spread, although it needs more training data.
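
    The flavour of a sequential, Kalman-filter-type correction can be conveyed in Python with a scalar filter that tracks the bias of the ensemble mean as observations arrive; the process and observation variances are assumed values, and this is a deliberate simplification rather than the full ensemble-aware or member-by-member schemes described above.

        # Scalar Kalman filter tracking the bias of an ensemble-mean forecast.
        # The bias estimate is updated each day as a new observation arrives
        # and is subtracted from the next forecast.  Variances are assumed.
        import numpy as np

        rng = np.random.default_rng(0)
        days = 200
        truth = 15 + 8 * np.sin(np.linspace(0, 6, days)) + rng.normal(0, 1, days)
        ens_mean = truth + 1.5 + rng.normal(0, 1.2, days)   # biased forecasts

        q, r = 0.01, 1.5        # process and observation error variances (assumed)
        bias, P = 0.0, 1.0      # initial bias estimate and its variance
        corrected = np.empty(days)

        for t in range(days):
            corrected[t] = ens_mean[t] - bias        # correct before verifying
            P = P + q                                # prediction step
            K = P / (P + r)                          # Kalman gain
            bias = bias + K * ((ens_mean[t] - truth[t]) - bias)   # update step
            P = (1 - K) * P

        print("mean error, raw      :", np.mean(ens_mean - truth))
        print("mean error, corrected:", np.mean(corrected - truth))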

  15. Statistical complexity without explicit reference to underlying probabilities

    NASA Astrophysics Data System (ADS)

    Pennini, F.; Plastino, A.

    2018-06-01

    We show that extremely simple systems of a not too large number of particles can be simultaneously thermally stable and complex. To such an end, we extend the statistical complexity's notion to simple configurations of non-interacting particles, without appeal to probabilities, and discuss configurational properties.

  16. Comparative study between manual injection intraosseous anesthesia and conventional oral anesthesia.

    PubMed

    Peñarrocha-Oltra, D; Ata-Ali, J; Oltra-Moscardó, M-J; Peñarrocha-Diago, M-A; Peñarrocha, M

    2012-03-01

    To compare intraosseous anesthesia (IA) with the conventional oral anesthesia techniques. A single-blind, prospective clinical study was carried out. Each patient underwent two anesthetic techniques: conventional (local infiltration and locoregional anesthetic block) and intraosseous, for respective dental operations. In order to allow comparison of IA versus conventional anesthesia, the two operations were similar and affected the same two teeth in opposite quadrants. A total of 200 oral anesthetic procedures were carried out in 100 patients. The mean patient age was 28.6±9.92 years. Fifty-five vestibular infiltrations and 45 mandibular blocks were performed. All patients were also subjected to IA. The type of intervention (conservative or endodontic) exerted no significant influence (p=0.58 and p=0.62, respectively). The latency period was 8.52±2.44 minutes for the conventional techniques and 0.89±0.73 minutes for IA, the difference being statistically significant (p<0.05). Regarding the duration of anesthetic sensation, the infiltrative techniques lasted a maximum of one hour, the inferior alveolar nerve blocks lasted between 1-3 hours, and IA lasted only 2.5 minutes, the differences being statistically significant (p≤0.0000, Φ=0.29). Anesthetic success was recorded in 89% of the conventional procedures and in 78% of the IA procedures. Most patients preferred IA (61%) (p=0.0032). The two anesthetic procedures have been compared for latency, duration of anesthetic effect, anesthetic success rate and patient preference. Intraosseous anesthesia has been shown to be a technique to be taken into account when planning conservative and endodontic treatments.

  17. A proposed technique for vehicle tracking, direction, and speed determination

    NASA Astrophysics Data System (ADS)

    Fisher, Paul S.; Angaye, Cleopas O.; Fisher, Howard P.

    2004-12-01

    A technique for recognition of vehicles in terms of direction, distance, and rate of change is presented. This represents very early work on this problem with significant hurdles still to be addressed. These are discussed in the paper. However, preliminary results also show promise for this technique for use in security and defense environments where the penetration of a perimeter is of concern. The material described herein indicates a process whereby the protection of a barrier could be augmented by computers and installed cameras assisting the individuals charged with this responsibility. The technique we employ is called Finite Inductive Sequences (FI) and is proposed as a means for eliminating data requiring storage and recognition where conventional mathematical models don't eliminate enough and statistical models eliminate too much. FI is a simple idea and is based upon a symbol push-out technique that allows the order (inductive base) of the model to be set to an a priori value for all derived rules. The rules are obtained from exemplar data sets, and are derived by a technique called Factoring, yielding a table of rules called a Ruling. These rules can then be used in pattern recognition applications such as described in this paper.

  18. Prediction of Down-Gradient Impacts of DNAPL Source Depletion Using Tracer Techniques

    NASA Astrophysics Data System (ADS)

    Basu, N. B.; Fure, A. D.; Jawitz, J. W.

    2006-12-01

    Four simplified DNAPL source depletion models that have been discussed in the literature recently are evaluated for the prediction of long-term effects of source depletion under natural gradient flow. These models are simple in form (a power function equation is an example) but are shown here to serve as mathematical analogs to complex multiphase flow and transport simulators. One of the source depletion models, the equilibrium streamtube model, is shown to be relatively easily parameterized using non-reactive and reactive tracers. Non-reactive tracers are used to characterize the aquifer heterogeneity while reactive tracers are used to describe the mean DNAPL mass and its distribution. This information is then used in a Lagrangian framework to predict source remediation performance. In a Lagrangian approach the source zone is conceptualized as a collection of non-interacting streamtubes with hydrodynamic and DNAPL heterogeneity represented by the variation of the travel time and DNAPL saturation among the streamtubes. The travel time statistics are estimated from the non-reactive tracer data while the DNAPL distribution statistics are estimated from the reactive tracer data. The combined statistics are used to define an analytical solution for contaminant dissolution under natural gradient flow. The tracer prediction technique compared favorably with results from a multiphase flow and transport simulator UTCHEM in domains with different hydrodynamic heterogeneity (variance of the log conductivity field = 0.2, 1 and 3).

  19. A simple method to estimate vegetation indices and crop canopy factors using field spectroscopy for Solanum tuberosum during the whole phenological cycle

    NASA Astrophysics Data System (ADS)

    Perdikou, S.; Papadavid, G.; Hadjimitsis, M.; Hadjimitsis, D.; Neofytou, N.

    2013-08-01

    Field spectroscopy is one of the remote sensing techniques and is very important for studies in agriculture. A GER-1500 field spectroradiometer was used in this study to retrieve the necessary spectral data of the spring potatoes for estimating spectral vegetation indices (SVIs). A field campaign was undertaken from September to the end of November 2012 for the collection of spectroradiometric measurements. The study area was in Mandria Village in the Paphos district of Cyprus. This paper demonstrates how crop canopy factors can be statistically related to remotely sensed data, namely vegetation indices. The paper is part of an EU co-funded project on estimating crop water requirements using remote sensing techniques and informing farmers through 3G smart telephony.

  20. Flux control coefficients determined by inhibitor titration: the design and analysis of experiments to minimize errors.

    PubMed Central

    Small, J R

    1993-01-01

    This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434

  1. A simple rain attenuation model for earth-space radio links operating at 10-35 GHz

    NASA Technical Reports Server (NTRS)

    Stutzman, W. L.; Yon, K. M.

    1986-01-01

    The simple attenuation model has been improved from an earlier version and now includes the effect of wave polarization. The model is for the prediction of rain attenuation statistics on earth-space communication links operating in the 10-35 GHz band. Simple calculations produce attenuation values as a function of average rain rate. These together with rain rate statistics (either measured or predicted) can be used to predict annual rain attenuation statistics. In this paper model predictions are compared to measured data from a data base of 62 experiments performed in the U.S., Europe, and Japan. Comparisons are also made to predictions from other models.

  2. Evaluating the performance of a fault detection and diagnostic system for vapor compression equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performance was achieved in detecting and diagnosing five faults using only six temperatures (2 input and 4 output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.

  3. The Taylor saddle effacement: a new technique for correction of saddle nose deformity.

    PubMed

    Taylor, S Mark; Rigby, Matthew H

    2008-02-01

    To describe a novel technique, the Taylor saddle effacement (TSE), for correction of saddle nose deformity using autologous grafts from the lower lateral cartilages. A prospective evaluation of six patients, all of whom had the TSE performed. Photographs were taken in combination with completion of a rhinoplasty outcomes questionnaire preoperatively and at 6 months. The questionnaire included a visual analogue scale (VAS) of nasal breathing and a rhinoplasty outcomes evaluation (ROE) of nasal function and esthetics. All six patients had improvement in both their global nasal airflow on the VAS and on their ROE that was statistically significant. The mean preoperative VAS score was 5.8 compared with our postoperative mean of 8.5 of a possible 10. Mean ROE scores improved from 34.7 to 85.5. At 6 months, all patients felt that their nasal appearance had improved. The TSE is a simple and reliable technique for correction of saddle nose deformity. This prospective study has demonstrated improvement in both nasal function and esthetics when it is employed.

  4. PERSISTENCE MAPPING USING EUV SOLAR IMAGER DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, B. J.; Young, C. A., E-mail: barbara.j.thompson@nasa.gov

    We describe a simple image processing technique that is useful for the visualization and depiction of gradually evolving or intermittent structures in solar physics extreme-ultraviolet imagery. The technique is an application of image segmentation, which we call “Persistence Mapping,” to isolate extreme values in a data set, and is particularly useful for the problem of capturing phenomena that are evolving in both space and time. While integration or “time-lapse” imaging uses the full sample (of size N), Persistence Mapping rejects (N − 1)/N of the data set and identifies the most relevant 1/N values using the following rule: if a pixel reaches an extreme value, it retains that value until that value is exceeded. The simplest examples isolate minima and maxima, but any quantile or statistic can be used. This paper demonstrates how the technique has been used to extract the dynamics in long-term evolution of comet tails, erupting material, and EUV dimming regions.

  5. Advances in the microrheology of complex fluids

    NASA Astrophysics Data System (ADS)

    Waigh, Thomas Andrew

    2016-07-01

    New developments in the microrheology of complex fluids are considered. Firstly the requirements for a simple modern particle tracking microrheology experiment are introduced, the error analysis methods associated with it and the mathematical techniques required to calculate the linear viscoelasticity. Progress in microrheology instrumentation is then described with respect to detectors, light sources, colloidal probes, magnetic tweezers, optical tweezers, diffusing wave spectroscopy, optical coherence tomography, fluorescence correlation spectroscopy, elastic- and quasi-elastic scattering techniques, 3D tracking, single molecule methods, modern microscopy methods and microfluidics. New theoretical techniques are also reviewed such as Bayesian analysis, oversampling, inversion techniques, alternative statistical tools for tracks (angular correlations, first passage probabilities, the kurtosis, motor protein step segmentation etc), issues in micro/macro rheological agreement and two particle methodologies. Applications where microrheology has begun to make some impact are also considered including semi-flexible polymers, gels, microorganism biofilms, intracellular methods, high frequency viscoelasticity, comb polymers, active motile fluids, blood clots, colloids, granular materials, polymers, liquid crystals and foods. Two large emergent areas of microrheology, non-linear microrheology and surface microrheology are also discussed.

  6. Electroencephalography signatures of attention-deficit/hyperactivity disorder: clinical utility.

    PubMed

    Alba, Guzmán; Pereda, Ernesto; Mañas, Soledad; Méndez, Leopoldo D; González, Almudena; González, Julián J

    2015-01-01

    This work reviews the techniques, and the most important results, on the use of electroencephalography (EEG) to extract different measures that can be clinically useful for studying subjects with attention-deficit/hyperactivity disorder (ADHD). First, we discuss briefly and in simple terms the EEG analysis and processing techniques most used in the context of ADHD. We review techniques that both analyze individual EEG channels (univariate measures) and study the statistical interdependence between different EEG channels (multivariate measures), the so-called functional brain connectivity. Among the former, we review the classical indices of absolute and relative spectral power and estimations of the complexity of the channels, such as the approximate entropy and the Lempel-Ziv complexity. Among the latter, we focus on the magnitude squared coherence and on different measures based on the concept of generalized synchronization and its estimation in state space. Second, from a historical point of view, we present the most important results achieved with these techniques and their clinical utility (sensitivity, specificity, and accuracy) in diagnosing ADHD. Finally, we propose future research lines based on these results.

  7. Modeling epidemics on adaptively evolving networks: A data-mining perspective.

    PubMed

    Kattis, Assimakis A; Holiday, Alexander; Stoica, Ana-Andreea; Kevrekidis, Ioannis G

    2016-01-01

    The exploration of epidemic dynamics on dynamically evolving ("adaptive") networks poses nontrivial challenges to the modeler, such as the determination of a small number of informative statistics of the detailed network state (that is, a few "good observables") that usefully summarize the overall (macroscopic, systems-level) behavior. Obtaining reduced, small size accurate models in terms of these few statistical observables--that is, trying to coarse-grain the full network epidemic model to a small but useful macroscopic one--is even more daunting. Here we describe a data-based approach to solving the first challenge: the detection of a few informative collective observables of the detailed epidemic dynamics. This is accomplished through Diffusion Maps (DMAPS), a recently developed data-mining technique. We illustrate the approach through simulations of a simple mathematical model of epidemics on a network: a model known to exhibit complex temporal dynamics. We discuss potential extensions of the approach, as well as possible shortcomings.
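
    The core Diffusion Maps computation is short enough to sketch in Python: a Gaussian kernel on pairwise distances is row-normalized into a Markov matrix whose leading nontrivial eigenvectors serve as the coarse observables. The toy data below merely stand in for summaries of simulated network states, and the bandwidth choice is an assumption.

        # Bare-bones Diffusion Maps: Gaussian kernel -> Markov matrix -> leading
        # nontrivial eigenvectors as coarse coordinates.
        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))            # 200 snapshots, 5 raw statistics

        D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
        eps = np.median(D2)                      # a common bandwidth heuristic
        W = np.exp(-D2 / eps)

        P = W / W.sum(axis=1, keepdims=True)     # row-stochastic Markov matrix
        evals, evecs = np.linalg.eig(P)
        order = np.argsort(-evals.real)
        # The first eigenvector is trivial (constant); the next few give the
        # diffusion-map coordinates used as collective observables.
        coords = evecs.real[:, order[1:3]]

        print("leading nontrivial eigenvalues:", evals.real[order[1:3]])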

  8. Constructing networks with correlation maximization methods.

    PubMed

    Mellor, Joseph C; Wu, Jie; Delisi, Charles

    2004-01-01

    Problems of inference in systems biology are ideally reduced to formulations which can efficiently represent the features of interest. In the case of predicting gene regulation and pathway networks, an important feature which describes connected genes and proteins is the relationship between active and inactive forms, i.e. between the "on" and "off" states of the components. While not optimal at the limits of resolution, these logical relationships between discrete states can often yield good approximations of the behavior in larger complex systems, where exact representation of measurement relationships may be intractable. We explore techniques for extracting binary state variables from measurement of gene expression, and go on to describe robust measures for statistical significance and information that can be applied to many such types of data. We show how statistical strength and information are equivalent criteria in limiting cases, and demonstrate the application of these measures to simple systems of gene regulation.

  9. Optimizing Integrated Terminal Airspace Operations Under Uncertainty

    NASA Technical Reports Server (NTRS)

    Bosson, Christabelle; Xue, Min; Zelinski, Shannon

    2014-01-01

    In the terminal airspace, integrated departures and arrivals have the potential to increase operations efficiency. Recent research has developed genetic-algorithm-based schedulers for integrated arrival and departure operations under uncertainty. This paper presents an alternate method using a machine job-shop scheduling formulation to model the integrated airspace operations. A multistage stochastic programming approach is chosen to formulate the problem and candidate solutions are obtained by solving sample average approximation problems with finite sample size. Because approximate solutions are computed, the proposed algorithm incorporates the computation of statistical bounds to estimate the optimality of the candidate solutions. A proof-of-concept study is conducted on a baseline implementation of a simple problem considering a fleet mix of 14 aircraft evolving in a model of the Los Angeles terminal airspace. A more thorough statistical analysis is also performed to evaluate the impact of the number of scenarios considered in the sampled problem. To handle extensive sampling computations, a multithreading technique is introduced.

  10. Statistical mechanics of an ideal active fluid confined in a channel

    NASA Astrophysics Data System (ADS)

    Wagner, Caleb; Baskaran, Aparna; Hagan, Michael

    The statistical mechanics of ideal active Brownian particles (ABPs) confined in a channel is studied by obtaining the exact solution of the steady-state Smoluchowski equation for the 1-particle distribution function. The solution is derived using results from the theory of two-way diffusion equations, combined with an iterative procedure that is justified by numerical results. Using this solution, we quantify the effects of confinement on the spatial and orientational order of the ensemble. Moreover, we rigorously show that both the bulk density and the fraction of particles on the channel walls obey simple scaling relations as a function of channel width. By considering a constant-flux steady state, an effective diffusivity for ABPs is derived which shows signatures of the persistent motion that characterizes ABP trajectories. Finally, we discuss how our techniques generalize to other active models, including systems whose activity is modeled in terms of an Ornstein-Uhlenbeck process.

  11. Effect of autogenic relaxation on depression among menopausal women in rural areas of Thiruvallur District (Tamil Nadu).

    PubMed

    Sujithra, S

    2014-01-01

    An experimental study was conducted among 60 menopausal women, 30 each in the experimental and control groups, who met the inclusion criteria. The menopausal women were identified in both groups and the level of depression was assessed using the Cornell Dysthymia Rating Scale. A simple random sampling technique by the lottery method was used for selecting the sample. Autogenic relaxation was practiced by the menopausal women for four weeks. The findings revealed that in the experimental group, after the autogenic relaxation intervention, 23 (76.7%) of the menopausal women had mild depression. There was a statistically significant effectiveness in the experimental group at the level of p < 0.05. There was a statistically significant association between the effectiveness of autogenic relaxation on depression among menopausal women in the post-experimental group and the type of family at the level of p < 0.05.

  12. Quantitative methods used in Australian health promotion research: a review of publications from 1992-2002.

    PubMed

    Smith, Ben J; Zehle, Katharina; Bauman, Adrian E; Chau, Josephine; Hawkshaw, Barbara; Frost, Steven; Thomas, Margaret

    2006-04-01

    This study examined the use of quantitative methods in Australian health promotion research in order to identify methodological trends and priorities for strengthening the evidence base for health promotion. Australian health promotion articles were identified by hand searching publications from 1992-2002 in six journals: Health Promotion Journal of Australia, Australian and New Zealand Journal of Public Health, Health Promotion International, Health Education Research, Health Education and Behavior, and the American Journal of Health Promotion. The study designs and statistical methods used in articles presenting quantitative research were recorded. 591 (57.7%) of the 1,025 articles used quantitative methods. Cross-sectional designs were used in the majority (54.3%) of studies, with pre- and post-test (14.6%) and post-test only (9.5%) designs the next most common. Bivariate statistical methods were used in 45.9% of papers, multivariate methods in 27.1%, and simple numbers and proportions in 25.4%. Few studies used higher-level statistical techniques. While most studies used quantitative methods, the majority were descriptive in nature. The study designs and statistical methods used provided limited scope for demonstrating intervention effects or understanding the determinants of change.

  13. Statistics of Optical Coherence Tomography Data From Human Retina

    PubMed Central

    de Juan, Joaquín; Ferrone, Claudia; Giannini, Daniela; Huang, David; Koch, Giorgio; Russo, Valentina; Tan, Ou; Bruni, Carlo

    2010-01-01

    Optical coherence tomography (OCT) has recently become one of the primary methods for noninvasive probing of the human retina. The pseudoimage formed by OCT (the so-called B-scan) varies probabilistically across pixels due to complexities in the measurement technique. Hence, sensitive automatic procedures of diagnosis using OCT may exploit statistical analysis of the spatial distribution of reflectance. In this paper, we perform a statistical study of retinal OCT data. We find that the stretched exponential probability density function can model well the distribution of intensities in OCT pseudoimages. Moreover, we show a small, but significant, correlation between neighboring pixels when measuring OCT intensities with pixels of about 5 µm. We then develop a simple joint probability model for the OCT data consistent with known retinal features. This model fits well the stretched exponential distribution of intensities and their spatial correlation. In normal retinas, fit parameters of this model are relatively constant along retinal layers, but vary across layers. However, in retinas with diabetic retinopathy, large spikes of parameter modulation interrupt the constancy within layers, exactly where pathologies are visible. We argue that these results give hope for improvement in statistical pathology-detection methods even when the disease is in its early stages. PMID:20304733
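
    A toy sketch of fitting a stretched-exponential form to an intensity histogram is shown below; the parameterization A*exp(-(I/scale)**beta), the histogram binning, and the least-squares fit are illustrative assumptions and not the estimation procedure used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(i, amplitude, scale, beta):
    """Stretched-exponential form A * exp(-(I/scale)**beta)."""
    return amplitude * np.exp(-(i / scale) ** beta)

def fit_intensity_histogram(intensities, bins=100):
    """Fit the stretched-exponential form to a normalized intensity
    histogram and return (amplitude, scale, beta)."""
    hist, edges = np.histogram(intensities, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), float(np.mean(intensities)), 1.0]
    params, _ = curve_fit(stretched_exp, centers, hist, p0=p0,
                          bounds=(0, np.inf))
    return params
```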

  14. A new surgical technique for the treatment of scaphotrapezial arthritis associated with trapeziometacarpal arthritis: the narrow pseudoarthrosis.

    PubMed

    Rubino, M; Cavagnaro, L; Sansone, V

    2016-09-01

    We describe a technique for treating Eaton stage IV osteoarthritis of the first ray, which is a development of our previously published technique for treating trapeziometacarpal arthritis. This simple technique is based on a limited resection arthroplasty of the first trapeziometacarpal and the scaphotrapezial joints, with the aim of inducing the formation of a narrow pseudoarthrosis at both sites. A total of 26 consecutive patients treated for Eaton stage IV arthritis were reviewed at a mean follow-up of 4.7 years (range 3.2-6.6). There were statistically significant improvements in all clinical parameters: mean appositional and oppositional pinch strength, mean DASH score (from 65 points pre-operatively to 8.7 points at final follow-up), and mean visual analogue scale score (from 8.6 to 0.2 points). Although a larger cohort and a longer follow-up will be necessary to evaluate this new technique fully, these results encourage us to believe that the limited excision arthroplasty of the trapeziometacarpal and scaphotrapezial joints is a viable alternative to the existing surgical treatments for stage IV thumb arthritis. © The Author(s) 2015.

  15. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. The ML technique is then extended to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.
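
    For the idealized case with no detector response folded in, the single-index ML estimate has a closed form; the sketch below shows that textbook estimator and its Cramer-Rao standard error as a point of reference, not the detector-response-aware procedure developed in the paper.

```python
import numpy as np

def ml_spectral_index(energies, e_min):
    """Closed-form ML estimate of alpha for p(E) = (alpha-1)/E_min *
    (E/E_min)**(-alpha), E >= E_min (ideal detector, no smearing)."""
    e = np.asarray(energies, dtype=float)
    e = e[e >= e_min]
    n = e.size
    alpha_hat = 1.0 + n / np.log(e / e_min).sum()
    # Asymptotic standard error implied by the Cramer-Rao bound
    sigma = (alpha_hat - 1.0) / np.sqrt(n)
    return alpha_hat, sigma
```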

  16. The change and development of statistical methods used in research articles in child development 1930-2010.

    PubMed

    Køppe, Simo; Dammeyer, Jesper

    2014-09-01

    The evolution of developmental psychology has been characterized by the use of different quantitative and qualitative methods and procedures. But how does the use of methods and procedures change over time? This study explores the change and development of statistical methods used in articles published in Child Development from 1930 to 2010. The methods used in every article in the first issue of every volume were categorized into four categories. Until 1980, relatively simple statistical methods were used. During the last 30 years there has been an explosive increase in the use of more advanced statistical methods. The absence of statistical methods, or the use of only simple methods, has been all but eliminated.

  17. Weak-value amplification and optimal parameter estimation in the presence of correlated noise

    NASA Astrophysics Data System (ADS)

    Sinclair, Josiah; Hallaji, Matin; Steinberg, Aephraim M.; Tollaksen, Jeff; Jordan, Andrew N.

    2017-11-01

    We analytically and numerically investigate the performance of weak-value amplification (WVA) and related parameter estimation methods in the presence of temporally correlated noise. WVA is a special instance of a general measurement strategy that involves sorting data into separate subsets based on the outcome of a second "partitioning" measurement. Using a simplified correlated noise model that can be analyzed exactly together with optimal statistical estimators, we compare WVA to a conventional measurement method. We find that WVA indeed yields a much lower variance of the parameter of interest than the conventional technique does, optimized in the absence of any partitioning measurements. In contrast, a statistically optimal analysis that employs partitioning measurements, incorporating all partitioned results and their known correlations, is found to yield an improvement—typically slight—over the noise reduction achieved by WVA. This result occurs because the simple WVA technique is not tailored to any specific noise environment and therefore does not make use of correlations between the different partitions. We also compare WVA to traditional background subtraction, a familiar technique where measurement outcomes are partitioned to eliminate unknown offsets or errors in calibration. Surprisingly, for the cases we consider, background subtraction turns out to be a special case of the optimal partitioning approach, possessing a similar typically slight advantage over WVA. These results give deeper insight into the role of partitioning measurements (with or without postselection) in enhancing measurement precision, which some have found puzzling. They also resolve previously made conflicting claims about the usefulness of weak-value amplification to precision measurement in the presence of correlated noise. We finish by presenting numerical results to model a more realistic laboratory situation of time-decaying correlations, showing that our conclusions hold for a wide range of statistical models.

  18. An approach to quality and performance control in a computer-assisted clinical chemistry laboratory.

    PubMed Central

    Undrill, P E; Frazer, S C

    1979-01-01

    A locally developed, computer-based clinical chemistry laboratory system has been in operation since 1970. This utilises a Digital Equipment Co Ltd PDP 12 and an interconnected PDP 8/F computer. Details are presented of the performance and quality control techniques incorporated into the system. Laboratory performance is assessed through analysis of results from fixed-level control sera as well as from cumulative sum methods. At a simple level the presentation may be considered purely indicative, while at a more sophisticated level statistical concepts have been introduced to aid the laboratory controller in decision-making processes. PMID:438340
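
    As an illustration of the cumulative sum idea mentioned above, the sketch below implements a standard tabular CUSUM on standardized control-serum results; the allowance k and decision limit h are conventional textbook defaults, not the settings used in the described laboratory system.

```python
def cusum_flags(results, target, k=0.5, h=5.0):
    """Two-sided tabular CUSUM on control-sera results expressed in units
    of the assay standard deviation; returns True where either cumulative
    sum exceeds the decision limit h."""
    upper = lower = 0.0
    flags = []
    for x in results:
        upper = max(0.0, upper + (x - target) - k)
        lower = max(0.0, lower + (target - x) - k)
        flags.append(upper > h or lower > h)
    return flags
```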

  19. Low-light divergence in photovoltaic parameter fluctuations

    NASA Astrophysics Data System (ADS)

    Shvydka, Diana; Karpov, V. G.; Compaan, A. D.

    2003-03-01

    We study statistics of the major photovoltaic (PV) parameters, such as open-circuit voltage, short-circuit current, etc., versus light intensity on a set of nominally identical thin-film CdTe/CdS solar cells. A crossover light intensity is found, below which the relative fluctuations of the PV parameters diverge inversely proportional to the square root of the light intensity. We propose a model in which the observed fluctuations are due to lateral nonuniformities in the device structure. The crossover is attributed to the lateral nonuniformity screening length exceeding the device size. From the practical standpoint, our study introduces a simple uniformity diagnostic technique.

  20. Prediction of rain effects on earth-space communication links operating in the 10 to 35 GHz frequency range

    NASA Technical Reports Server (NTRS)

    Stutzman, Warren L.

    1989-01-01

    This paper reviews the effects of precipitation on earth-space communication links operating in the 10 to 35 GHz frequency range. Emphasis is on the quantitative prediction of rain attenuation and depolarization. Discussions center on the models developed at Virginia Tech. Comments on other models are included, as well as literature references to key works. Also included is the system level modeling for dual polarized communication systems, with techniques for calculating antenna and propagation medium effects. Simple models for the calculation of average annual attenuation and cross-polarization discrimination (XPD) are presented. Calculations of worst-month statistics are also presented.

  1. Employee resourcing strategies and universities' corporate image: A survey dataset.

    PubMed

    Falola, Hezekiah Olubusayo; Oludayo, Olumuyiwa Akinrole; Olokundun, Maxwell Ayodele; Salau, Odunayo Paul; Ibidunni, Ayodotun Stephen; Igbinoba, Ebe

    2018-06-01

    The data examined the effect of employee resourcing strategies on corporate image. The data were generated from a total of 500 copies of a questionnaire administered to the academic staff of the six (6) selected private universities in Southwest Nigeria, out of which four hundred and forty-three (443) were retrieved. Stratified and simple random sampling techniques were used to select the respondents for this study. Descriptive statistics and linear regression were used for the presentation of the data, and the mean score was used as the statistical tool of analysis. Therefore, the data presented in this article are made available to facilitate further and more comprehensive investigation on the subject matter.

  2. General statistical considerations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eberhardt, L L; Gilbert, R O

    From NAEG plutonium environmental studies program meeting; Las Vegas, Nevada, USA (2 Oct 1973). The high sampling variability encountered in environmental plutonium studies along with high analytical costs makes it very important that efficient soil sampling plans be used. However, efficient sampling depends on explicit and simple statements of the objectives of the study. When there are multiple objectives it may be difficult to devise a wholly suitable sampling scheme. Sampling for long-term changes in plutonium concentration in soils may also be complex and expensive. Further attention to problems associated with compositing samples is recommended, as is the consistent use of random sampling as a basic technique. (auth)

  3. In defence of model-based inference in phylogeography

    PubMed Central

    Beaumont, Mark A.; Nielsen, Rasmus; Robert, Christian; Hey, Jody; Gaggiotti, Oscar; Knowles, Lacey; Estoup, Arnaud; Panchal, Mahesh; Corander, Jukka; Hickerson, Mike; Sisson, Scott A.; Fagundes, Nelson; Chikhi, Lounès; Beerli, Peter; Vitalis, Renaud; Cornuet, Jean-Marie; Huelsenbeck, John; Foll, Matthieu; Yang, Ziheng; Rousset, Francois; Balding, David; Excoffier, Laurent

    2017-01-01

    Recent papers have promoted the view that model-based methods in general, and those based on Approximate Bayesian Computation (ABC) in particular, are flawed in a number of ways, and are therefore inappropriate for the analysis of phylogeographic data. These papers further argue that Nested Clade Phylogeographic Analysis (NCPA) offers the best approach in statistical phylogeography. In order to remove the confusion and misconceptions introduced by these papers, we justify and explain the reasoning behind model-based inference. We argue that ABC is a statistically valid approach, alongside other computational statistical techniques that have been successfully used to infer parameters and compare models in population genetics. We also examine the NCPA method and highlight numerous deficiencies, either when used with single or multiple loci. We further show that the ages of clades are carelessly used to infer ages of demographic events, that these ages are estimated under a simple model of panmixia and population stationarity but are then used under different and unspecified models to test hypotheses, a usage that invalidates these testing procedures. We conclude by encouraging researchers to study and use model-based inference in population genetics. PMID:29284924
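
    For readers unfamiliar with ABC, the sketch below shows the most basic rejection-sampling version of the idea; the prior sampler, simulator, summary statistics, and tolerance are all user-supplied placeholders, and practical ABC implementations add regression adjustments and smarter sampling schemes.

```python
import numpy as np

def abc_rejection(observed_summary, sample_prior, simulate, summarize,
                  n_draws=100_000, tolerance=0.1):
    """Basic ABC rejection sampler: keep parameter draws whose simulated
    summary statistics lie within `tolerance` (Euclidean distance) of the
    observed summaries."""
    obs = np.asarray(observed_summary, dtype=float)
    accepted = []
    for _ in range(n_draws):
        theta = sample_prior()
        summary = np.asarray(summarize(simulate(theta)), dtype=float)
        if np.linalg.norm(summary - obs) <= tolerance:
            accepted.append(theta)
    return accepted  # an approximate sample from the posterior
```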

  4. A simple rapid process for semi-automated brain extraction from magnetic resonance images of the whole mouse head.

    PubMed

    Delora, Adam; Gonzales, Aaron; Medina, Christopher S; Mitchell, Adam; Mohed, Abdul Faheem; Jacobs, Russell E; Bearer, Elaine L

    2016-01-15

    Magnetic resonance imaging (MRI) is a well-developed technique in neuroscience. Limitations in applying MRI to rodent models of neuropsychiatric disorders include the large number of animals required to achieve statistical significance, and the paucity of automation tools for the critical early step in processing, brain extraction, which prepares brain images for alignment and voxel-wise statistics. This novel timesaving automation of template-based brain extraction ("skull-stripping") is capable of quickly and reliably extracting the brain from large numbers of whole head images in a single step. The method is simple to install and requires minimal user interaction. This method is equally applicable to different types of MR images. Results were evaluated with Dice and Jaccard similarity indices and compared in 3D surface projections with other stripping approaches. Statistical comparisons demonstrate that individual variation in brain volumes is preserved. A downloadable software package not otherwise available for extraction of brains from whole head images is included here. This software tool increases speed, can be used with an atlas or a template from within the dataset, and produces masks that need little further refinement. Our new automation can be applied to any MR dataset, since the starting point is a template mask generated specifically for that dataset. The method reliably and rapidly extracts brain images from whole head images, rendering them usable for subsequent analytical processing. This software tool will accelerate the exploitation of mouse models for the investigation of human brain disorders by MRI. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Simple heuristics and rules of thumb: where psychologists and behavioural biologists might meet.

    PubMed

    Hutchinson, John M C; Gigerenzer, Gerd

    2005-05-31

    The Centre for Adaptive Behaviour and Cognition (ABC) has hypothesised that much human decision-making can be described by simple algorithmic process models (heuristics). This paper explains this approach and relates it to research in biology on rules of thumb, which we also review. As an example of a simple heuristic, consider the lexicographic strategy of Take The Best for choosing between two alternatives: cues are searched in turn until one discriminates, then search stops and all other cues are ignored. Heuristics consist of building blocks, and building blocks exploit evolved or learned abilities such as recognition memory; it is the complexity of these abilities that allows the heuristics to be simple. Simple heuristics have an advantage in making decisions fast and with little information, and in avoiding overfitting. Furthermore, humans are observed to use simple heuristics. Simulations show that the statistical structures of different environments affect which heuristics perform better, a relationship referred to as ecological rationality. We contrast ecological rationality with the stronger claim of adaptation. Rules of thumb from biology provide clearer examples of adaptation because animals can be studied in the environments in which they evolved. The range of examples is also much more diverse. To investigate them, biologists have sometimes used similar simulation techniques to ABC, but many examples depend on empirically driven approaches. ABC's theoretical framework can be useful in connecting some of these examples, particularly the scattered literature on how information from different cues is integrated. Optimality modelling is usually used to explain less detailed aspects of behaviour but might more often be redirected to investigate rules of thumb.
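
    The Take The Best heuristic described above is easy to state in code; the sketch below assumes cues are supplied as (validity, lookup) pairs where the lookup returns 1, 0, or None for an option, which is an illustrative interface rather than anything prescribed by the authors.

```python
def take_the_best(option_a, option_b, cues):
    """Lexicographic choice: try cues in order of validity; the first cue
    that discriminates decides, and all remaining cues are ignored."""
    for _, lookup in sorted(cues, key=lambda cue: cue[0], reverse=True):
        a, b = lookup(option_a), lookup(option_b)
        if a is not None and b is not None and a != b:
            return option_a if a > b else option_b
    return None  # no cue discriminates: guess or fall back to another rule
```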

  6. Comparative study between manual injection intraosseous anesthesia and conventional oral anesthesia

    PubMed Central

    Ata-Ali, Javier; Oltra-Moscardó, María J.; Peñarrocha-Diago, María; Peñarrocha, Miguel

    2012-01-01

    Objective: To compare intraosseous anesthesia (IA) with the conventional oral anesthesia techniques. Materials and methods: A single-blind, prospective clinical study was carried out. Each patient underwent two anesthetic techniques: conventional (local infiltration and locoregional anesthetic block) and intraosseous, for respective dental operations. In order to allow comparison of IA versus conventional anesthesia, the two operations were similar and affected the same two teeth in opposite quadrants. Results: A total of 200 oral anesthetic procedures were carried out in 100 patients. The mean patient age was 28.6±9.92 years. Fifty-five vestibular infiltrations and 45 mandibular blocks were performed. All patients were also subjected to IA. The type of intervention (conservative or endodontic) exerted no significant influence (p=0.58 and p=0.62, respectively). The latency period was 8.52±2.44 minutes for the conventional techniques and 0.89±0.73 minutes for IA; the difference was statistically significant (p<0.05). Regarding patient anesthesia sensation, the infiltrative techniques lasted a maximum of one hour, the inferior alveolar nerve blocks lasted between 1-3 hours, and IA lasted only 2.5 minutes; the differences were statistically significant (p≤0.0000, Φ=0.29). Anesthetic success was recorded in 89% of the conventional procedures and in 78% of the IA procedures. Most patients preferred IA (61%) (p=0.0032). Conclusions: The two anesthetic procedures have been compared for latency, duration of anesthetic effect, anesthetic success rate and patient preference. Intraosseous anesthesia has been shown to be a technique to be taken into account when planning conservative and endodontic treatments. Key words: Anesthesia, intraosseous, oral anesthesia, Stabident®, infiltrative, mandibular block. PMID:22143700

  7. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    PubMed

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
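
    A minimal sketch of the comparison follows, using synthetic Sholl-style data and the statsmodels formula API; the column names, effect sizes, and random-intercept-only structure are illustrative assumptions, not the models fitted in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic Sholl-style data: several neurons per animal, two groups.
rng = np.random.default_rng(0)
rows = []
for animal in range(8):
    group = animal % 2                      # 0 = control, 1 = treated
    animal_offset = rng.normal(0, 2)        # shared by all neurons of one animal
    for neuron in range(5):
        for distance in range(10, 110, 10):
            mean = 20 - 0.12 * distance - 2.0 * group + animal_offset
            rows.append(dict(animal=animal, group=group, distance=distance,
                             intersections=mean + rng.normal(0, 1.5)))
df = pd.DataFrame(rows)

# A simple linear model treats every neuron as independent...
ols_fit = smf.ols("intersections ~ distance + group", data=df).fit()
# ...while a random intercept per animal models the intra-class correlation.
mixed_fit = smf.mixedlm("intersections ~ distance + group", data=df,
                        groups=df["animal"]).fit()
print(ols_fit.pvalues["group"], mixed_fit.pvalues["group"])
```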

  8. Draft SEI Program Plans: 1995-1999

    DTIC Science & Technology

    1994-08-01

    risk management because of the belief that (1) structured techniques, even quite simple ones, could be effective in identifying and quantifying risk; and (2) techniques existed to ...

  9. Rapid determination of Swiss cheese composition by Fourier transform infrared/attenuated total reflectance spectroscopy.

    PubMed

    Rodriguez-Saona, L E; Koca, N; Harper, W J; Alvarez, V B

    2006-05-01

    There is a need for rapid and simple techniques that can be used to predict the quality of cheese. The aim of this research was to develop a simple and rapid screening tool for monitoring Swiss cheese composition by using Fourier transform infrared spectroscopy. Twenty Swiss cheese samples from different manufacturers and degrees of maturity were evaluated. Direct measurements of Swiss cheese slices (approximately 0.5 g) were made using a MIRacle 3-reflection diamond attenuated total reflectance (ATR) accessory. Reference methods for moisture (vacuum oven), protein content (Kjeldahl), and fat (Babcock) were used. Calibration models were developed based on a cross-validated (leave-one-out approach) partial least squares regression. The information-rich infrared spectral range for Swiss cheese samples was from 3,000 to 2,800 cm(-1) and 1,800 to 900 cm(-1). The performance statistics for cross-validated models gave estimates for standard error of cross-validation of 0.45, 0.25, and 0.21% for moisture, protein, and fat, respectively, and correlation coefficients r > 0.96. Furthermore, the ATR infrared protocol allowed for the classification of cheeses according to manufacturer and aging based on unique spectral information, especially of carbonyl groups, probably due to their distinctive lipid composition. Attenuated total reflectance infrared spectroscopy allowed for the rapid (approximately 3-min analysis time) and accurate analysis of the composition of Swiss cheese. This technique could contribute to the development of simple and rapid protocols for monitoring complex biochemical changes, and predicting the final quality of the cheese.

  10. Some practical universal noiseless coding techniques, part 2

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1983-01-01

    This report is an extension of earlier work (Part 1) which provided practical adaptive techniques for the efficient noiseless coding of a broad class of data sources characterized by only partially known and varying statistics (JPL Publication 79-22). The results here, while still claiming such general applicability, focus primarily on the noiseless coding of image data. A fairly complete and self-contained treatment is provided. Particular emphasis is given to the requirements of the forthcoming Voyager II encounters of Uranus and Neptune. Performance evaluations are supported both graphically and pictorially. Expanded definitions of the algorithms in Part 1 yield a computationally improved set of options for applications requiring efficient performance at entropies above 4 bits/sample. These expanded definitions include, as an important subset, a somewhat less efficient but extremely simple "FAST" compressor which will be used at the Voyager Uranus encounter. Additionally, options are provided which enhance performance when atypical data spikes may be present.
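
    The adaptive coding options referenced here build on Golomb-Rice codes; as background, the sketch below encodes and decodes nonnegative samples with a fixed Rice parameter k (the adaptive part of the published algorithms, which selects the best option per block, is omitted).

```python
def rice_encode(samples, k):
    """Golomb-Rice encoding: each nonnegative sample is split into a
    unary-coded quotient (s >> k) and k binary remainder bits."""
    bits = []
    for s in samples:
        q, r = s >> k, s & ((1 << k) - 1)
        bits.extend([1] * q + [0])                              # unary quotient
        bits.extend((r >> i) & 1 for i in reversed(range(k)))   # remainder, MSB first
    return bits

def rice_decode(bits, k, n_samples):
    """Inverse of rice_encode for a bit list produced with the same k."""
    out, i = [], 0
    for _ in range(n_samples):
        q = 0
        while bits[i] == 1:                                     # count the unary run
            q, i = q + 1, i + 1
        i += 1                                                  # skip the terminating 0
        r = 0
        for _ in range(k):
            r, i = (r << 1) | bits[i], i + 1
        out.append((q << k) | r)
    return out
```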

  11. Exploitation of immunofluorescence for the quantification and characterization of small numbers of Pasteuria endospores.

    PubMed

    Costa, Sofia R; Kerry, Brian R; Bardgett, Richard D; Davies, Keith G

    2006-12-01

    The Pasteuria group of endospore-forming bacteria has been studied as a biocontrol agent of plant-parasitic nematodes. Techniques have been developed for its detection and quantification in soil samples, and these mainly focus on observations of endospore attachment to nematodes. Characterization of Pasteuria populations has recently been performed with DNA-based techniques, which usually require the extraction of large numbers of spores. We describe a simple immunological method for the quantification and characterization of Pasteuria populations. Bayesian statistics were used to determine an extraction efficiency of 43% and a threshold of detection of 210 endospores g(-1) sand. This provided a robust means of estimating numbers of endospores in small-volume samples from a natural system. Based on visual assessment of endospore fluorescence, a quantitative method was developed to characterize endospore populations, which were shown to vary according to their host.

  12. Assessment of higher order structure comparability in therapeutic proteins using nuclear magnetic resonance spectroscopy.

    PubMed

    Amezcua, Carlos A; Szabo, Christina M

    2013-06-01

    In this work, we applied nuclear magnetic resonance (NMR) spectroscopy to rapidly assess higher order structure (HOS) comparability in protein samples. Using a variation of the NMR fingerprinting approach described by Panjwani et al. [2010. J Pharm Sci 99(8):3334-3342], three nonglycosylated proteins spanning a molecular weight range of 6.5-67 kDa were analyzed. A simple statistical method termed easy comparability of HOS by NMR (ECHOS-NMR) was developed. In this method, HOS similarity between two samples is measured via the correlation coefficient derived from linear regression analysis of binned NMR spectra. Applications of this method include HOS comparability assessment during new product development, manufacturing process changes, supplier changes, next-generation products, and the development of biosimilars to name just a few. We foresee ECHOS-NMR becoming a routine technique applied to comparability exercises used to complement data from other analytical techniques. Copyright © 2013 Wiley Periodicals, Inc.
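
    In the spirit of the ECHOS-NMR score described above, the sketch below computes a correlation coefficient between two binned 1D spectra; the bin width and the simple consecutive-point binning are illustrative assumptions rather than the published procedure.

```python
import numpy as np

def echos_similarity(spectrum_a, spectrum_b, bin_width=64):
    """Correlation coefficient between binned spectra as a crude
    higher-order-structure similarity score."""
    def binned(s):
        s = np.asarray(s, dtype=float)
        n = (s.size // bin_width) * bin_width
        return s[:n].reshape(-1, bin_width).sum(axis=1)
    return np.corrcoef(binned(spectrum_a), binned(spectrum_b))[0, 1]
```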

  13. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    PubMed

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence is considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than a kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic were produced, along with nomograms that eliminate the inconvenience of using a mathematical formula. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
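
    The kappa paradox mentioned above is easy to reproduce; the sketch below computes the simple proportion of agreement and Cohen's kappa for a 2x2 rating table, with a deliberately skewed example in which raw agreement is 94% yet kappa is close to zero.

```python
import numpy as np

def agreement_and_kappa(table):
    """Return (observed agreement, Cohen's kappa) for a 2x2 table of
    ratings by two raters."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    p_observed = np.trace(t) / n
    p_chance = (t.sum(axis=0) * t.sum(axis=1)).sum() / n ** 2
    return p_observed, (p_observed - p_chance) / (1 - p_chance)

# Skewed prevalence: 94% raw agreement but kappa is essentially zero.
print(agreement_and_kappa([[94, 3], [3, 0]]))
```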

  14. Approximate Model Checking of PCTL Involving Unbounded Path Properties

    NASA Astrophysics Data System (ADS)

    Basu, Samik; Ghosh, Arka P.; He, Ru

    We study the problem of applying statistical methods for approximate model checking of probabilistic systems against properties encoded as PCTL formulas. Such approximate methods have been proposed primarily to deal with state-space explosion that makes the exact model checking by numerical methods practically infeasible for large systems. However, the existing statistical methods either consider a restricted subset of PCTL, specifically, the subset that can only express bounded until properties; or rely on user-specified finite bound on the sample path length. We propose a new method that does not have such restrictions and can be effectively used to reason about unbounded until properties. We approximate probabilistic characteristics of an unbounded until property by that of a bounded until property for a suitably chosen value of the bound. In essence, our method is a two-phase process: (a) the first phase is concerned with identifying the bound k0; (b) the second phase computes the probability of satisfying the k0-bounded until property as an estimate for the probability of satisfying the corresponding unbounded until property. In both phases, it is sufficient to verify bounded until properties which can be effectively done using existing statistical techniques. We prove the correctness of our technique and present its prototype implementations. We empirically show the practical applicability of our method by considering different case studies including a simple infinite-state model, and large finite-state models such as IPv4 zeroconf protocol and dining philosopher protocol modeled as Discrete Time Markov chains.
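
    The second phase boils down to statistical verification of a bounded until property, which can be done with plain Monte Carlo sampling; the sketch below estimates P[phi U<=k psi] for a discrete-time Markov chain given explicitly as a transition matrix (a toy setting, not the tool described in the paper).

```python
import numpy as np

def estimate_bounded_until(P, start, phi, psi, k, n_paths=100_000, seed=0):
    """Monte Carlo estimate of P[ phi U<=k psi ] from state `start` of a
    DTMC with row-stochastic transition matrix P; phi and psi are boolean
    arrays over states."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_paths):
        state = start
        for _ in range(k + 1):
            if psi[state]:
                hits += 1
                break
            if not phi[state]:
                break            # left the phi-states before reaching psi
            state = rng.choice(len(P), p=P[state])
    return hits / n_paths
```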

  15. Audio Classification in Speech and Music: A Comparison between a Statistical and a Neural Approach

    NASA Astrophysics Data System (ADS)

    Bugatti, Alessandro; Flammini, Alessandra; Migliorati, Pierangelo

    2002-12-01

    We focus attention on the problem of audio classification in speech and music for multimedia applications. In particular, we present a comparison between two different techniques for speech/music discrimination. The first method is based on the zero crossing rate and Bayesian classification. It is very simple from a computational point of view, and gives good results in the case of pure music or speech. The simulation results show that some performance degradation arises when the music segment also contains some speech superimposed on music, or strong rhythmic components. To overcome these problems, we propose a second method, which uses more features and is based on neural networks (specifically a multi-layer perceptron). In this case we obtain better performance, at the expense of a limited growth in the computational complexity. In practice, the proposed neural network is simple to implement if a suitable polynomial is used as the activation function, and a real-time implementation is possible even if low-cost embedded systems are used.
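
    The zero crossing rate feature at the heart of the first method is straightforward to compute; the sketch below derives a frame-wise ZCR sequence whose mean and variance could feed the kind of simple Bayesian or threshold rule the paper describes (frame length and hop are illustrative choices).

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    signs = np.sign(frame)
    return float(np.mean(signs[1:] != signs[:-1]))

def framewise_zcr(signal, frame_len=1024, hop=512):
    """Frame-wise ZCR; speech typically shows a more variable ZCR across
    frames than music."""
    return np.array([zero_crossing_rate(signal[i:i + frame_len])
                     for i in range(0, len(signal) - frame_len + 1, hop)])
```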

  16. Theory and simulations of covariance mapping in multiple dimensions for data analysis in high-event-rate experiments

    NASA Astrophysics Data System (ADS)

    Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.

    2014-05-01

    Multidimensional covariance analysis and its validity for correlation of processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
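
    For reference, the two-variable partial covariance that underlies this kind of correction can be written as below, where I denotes the fluctuating experimental parameter (for example, the FEL pulse intensity); the threefold partial covariance defined in the paper generalizes this expression.

```latex
\operatorname{pcov}(X,Y;I) \;=\; \operatorname{cov}(X,Y)
  \;-\; \frac{\operatorname{cov}(X,I)\,\operatorname{cov}(I,Y)}{\operatorname{cov}(I,I)}
```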

  17. Single Spore Isolation as a Simple and Efficient Technique to obtain fungal pure culture

    NASA Astrophysics Data System (ADS)

    Noman, E.; Al-Gheethi, AA; Rahman, N. K.; Talip, B.; Mohamed, R.; H, N.; Kadir, O. A.

    2018-04-01

    The successful identification of fungi by phenotypic methods or molecular techniques depends mainly on the use of an advanced technique for purifying the isolates. The most efficient is the single spore technique, owing to its simple requirements and its efficiency in preventing contamination by yeasts, mites or bacteria. The method described in the present work depends on the use of a light microscope to transfer one spore onto a new culture medium. The present work describes a simple and efficient procedure for single spore isolation for the purification of fungi recovered from clinical wastes.

  18. Simple Technique for Dark-Field Photography of Immunodiffusion Bands

    PubMed Central

    Jensh, Ronald P.; Brent, Robert L.

    1969-01-01

    A simple dark-field photographic technique was developed which enables laboratory personnel with minimal photographic training to easily record antigen-antibody patterns on immunodiffusion plates. Images PMID:4979944

  19. Validation of PC-based Sound Card with Biopac for Digitalization of ECG Recording in Short-term HRV Analysis.

    PubMed

    Maheshkumar, K; Dilara, K; Maruthy, K N; Sundareswaren, L

    2016-07-01

    Heart rate variability (HRV) analysis is a simple and noninvasive technique capable of assessing autonomic nervous system modulation of heart rate (HR) in healthy as well as disease conditions. The aim of the present study was to compare (validate) the HRV using a temporal series of electrocardiograms (ECG) obtained by a simple analog amplifier with a PC-based sound card (Audacity) and the Biopac MP36 module. Based on the inclusion criteria, 120 healthy participants, including 72 males and 48 females, participated in the present study. Following the standard protocol, a 5-min ECG was recorded after 10 min of supine rest by the portable simple analog amplifier with PC-based sound card as well as by the Biopac module, with surface electrodes in the Lead II position, simultaneously. All the ECG data were visually screened and found to be free of ectopic beats and noise. RR intervals from both ECG recordings were analyzed separately in the Kubios software. Short-term HRV indexes in both the time and frequency domains were used. The unpaired Student's t-test and Pearson correlation coefficient test were used for the analysis using the R statistical software. No statistically significant differences were observed when comparing the values analyzed by means of the two devices for HRV. Correlation analysis revealed a perfect positive correlation (r = 0.99, P < 0.001) between the values in the time and frequency domains obtained by the devices. On the basis of the results of the present study, we suggest that the calculation of HRV values in the time and frequency domains by RR series obtained from the PC-based sound card is probably as reliable as those obtained by the gold standard Biopac MP36.
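
    For context, the short-term time-domain indices typically compared in such validation studies can be computed directly from an ectopic-free RR-interval series; the sketch below uses the standard definitions (SDNN, RMSSD, pNN50) and is not the Kubios implementation itself.

```python
import numpy as np

def time_domain_hrv(rr_ms):
    """Standard short-term time-domain HRV indices from RR intervals in ms."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60_000.0 / rr.mean(),
        "sdnn_ms": rr.std(ddof=1),
        "rmssd_ms": float(np.sqrt(np.mean(diffs ** 2))),
        "pnn50_pct": 100.0 * float(np.mean(np.abs(diffs) > 50.0)),
    }
```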

  20. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, sigma(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index sigma(sub 2) greater than sigma(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.

  1. Equivalence between Step Selection Functions and Biased Correlated Random Walks for Statistical Inference on Animal Movement.

    PubMed

    Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul

    2015-01-01

    Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.

  2. Predicting survival of Escherichia coli O157:H7 in dry fermented sausage using artificial neural networks.

    PubMed

    Palanichamy, A; Jayas, D S; Holley, R A

    2008-01-01

    The Canadian Food Inspection Agency required the meat industry to ensure that Escherichia coli O157:H7 does not survive (i.e., experiences a ≥5 log CFU/g reduction) in dry fermented sausage (salami) during processing, after a series of foodborne illness outbreaks resulting from this pathogenic bacterium occurred. The industry is in need of an effective technique like predictive modeling for estimating bacterial viability, because traditional microbiological enumeration is a time-consuming and laborious method. The accuracy and speed of artificial neural networks (ANNs) for this purpose is an attractive alternative (developed from predictive microbiology), especially for on-line processing in industry. Data were collected from a study of the interactive effects of different levels of pH, water activity, and the concentration of allyl isothiocyanate at various times during sausage manufacture on reducing numbers of E. coli O157:H7. The data were used to develop predictive models using a general regression neural network (GRNN), a form of ANN, and a statistical linear polynomial regression technique. Both models were compared for their predictive error, using various statistical indices. GRNN predictions for the training and test data sets had less serious errors when compared with the statistical model predictions. GRNN models were better for the training set and slightly better for the test set than the statistical model. Also, GRNN accurately predicted the level of allyl isothiocyanate required to ensure a 5-log reduction when an appropriate production set was created by interpolation. Because they are simple to generate, fast, and accurate, ANN models may be of value for industrial use in dry fermented sausage manufacture to reduce the hazard associated with E. coli O157:H7 in fresh beef and permit production of consistently safe products from this raw material.

  3. Assessment of apically extruded debris produced by the single-file ProTaper F2 technique under reciprocating movement.

    PubMed

    De-Deus, Gustavo; Brandão, Maria Claudia; Barino, Bianca; Di Giorgi, Karina; Fidel, Rivail Antonio Sergio; Luna, Aderval Severino

    2010-09-01

    This study was designed to quantitatively evaluate the amount of dentin debris extruded from the apical foramen by comparing the conventional sequence of the ProTaper Universal nickel-titanium (NiTi) files with the single-file ProTaper F2 technique. Thirty mesial roots of lower molars were selected, and the use of different instrumentation techniques resulted in 3 groups (n=10 each). In G1, a crown-down hand-file technique was used, and in G2 conventional ProTaper Universal technique was used. In G3, ProTaper F2 file was used in a reciprocating motion. The apical finish preparation was equivalent to ISO size 25. An apparatus was used to evaluate the apically extruded debris. Statistical analysis was performed using 1-way analysis of variance and Tukey multiple comparisons. No significant difference was found in the amount of the debris extruded between the conventional sequence of the ProTaper Universal NiTi files and the single-file ProTaper F2 technique (P>.05). In contrast, the hand instrumentation group extruded significantly more debris than both NiTi groups (P<.05). The present results yielded favorable input for the F2 single-file technique in terms of apically extruded debris, inasmuch as it is the most simple and cost-effective instrumentation approach. Copyright (c) 2010 Mosby, Inc. All rights reserved.

  4. Horizontal Running Mattress Suture Modified with Intermittent Simple Loops

    PubMed Central

    Chacon, Anna H; Shiman, Michael I; Strozier, Narissa; Zaiac, Martin N

    2013-01-01

    Using the combination of a horizontal running mattress suture with intermittent loops achieves both good eversion with the horizontal running mattress plus the ease of removal of the simple loops. This combination technique also avoids the characteristic railroad track marks that result from prolonged non-absorbable suture retention. The unique feature of our technique is the incorporation of one simple running suture after every two runs of the horizontal running mattress suture. To demonstrate its utility, we used the suturing technique on several patients and analyzed the cosmetic outcome with post-operative photographs in comparison to other suturing techniques. In summary, the combination of running horizontal mattress suture with simple intermittent loops demonstrates functional and cosmetic benefits that can be readily taught, comprehended, and employed, leading to desirable aesthetic results and wound edge eversion. PMID:23723610

  5. Noise-gating to Clean Astrophysical Image Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    DeForest, C. E.

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.

  6. To t-Test or Not to t-Test? A p-Values-Based Point of View in the Receiver Operating Characteristic Curve Framework.

    PubMed

    Vexler, Albert; Yu, Jihnhee

    2018-04-13

    A common statistical doctrine supported by many introductory courses and textbooks is that t-test type procedures based on normally distributed data points are anticipated to provide a standard in decision-making. In order to motivate scholars to examine this convention, we introduce a simple approach based on graphical tools of receiver operating characteristic (ROC) curve analysis, a well-established biostatistical methodology. In this context, we propose employing a p-values-based method, taking into account the stochastic nature of p-values. We focus on the modern statistical literature to address the expected p-value (EPV) as a measure of the performance of decision-making rules. During the course of our study, we extend the EPV concept to be considered in terms of the ROC curve technique. This provides expressive evaluations and visualizations of a wide spectrum of testing mechanisms' properties. We show that the conventional power characterization of tests is a partial aspect of the presented EPV/ROC technique. We hope this explanation convinces researchers of the usefulness of the EPV/ROC approach for depicting different characteristics of decision-making procedures, in light of the growing interest in correct p-values-based applications.

  7. Quantitative identification of riverine nitrogen from point, direct runoff and base flow sources.

    PubMed

    Huang, Hong; Zhang, Baifa; Lu, Jun

    2014-01-01

    We present a methodological example for quantifying the contributions of riverine total nitrogen (TN) from point, direct runoff and base flow sources by combining a recursive digital filter technique and statistical methods. First, we separated daily riverine flow into direct runoff and base flow using a recursive digital filter technique; then, a statistical model was established using daily simultaneous data for TN load, direct runoff rate, base flow rate, and temperature; and finally, the TN loading from direct runoff and base flow sources could be inversely estimated. As a case study, this approach was adopted to identify the TN source contributions in Changle River, eastern China. Results showed that, during 2005-2009, the total annual TN input to the river was 1,700.4±250.2 ton, and the contributions of point, direct runoff and base flow sources were 17.8±2.8%, 45.0±3.6%, and 37.2±3.9%, respectively. The innovation of the approach is that the nitrogen from direct runoff and base flow sources could be separately quantified. The approach is simple but detailed enough to take the major factors into account, providing an effective and reliable method for riverine nitrogen loading estimation and source apportionment.
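
    The flow-separation step can be illustrated with the widely used single-parameter Lyne-Hollick recursive digital filter sketched below; the filter form and the alpha = 0.925 default are common hydrological conventions assumed here for illustration, since the paper does not necessarily use this exact variant.

```python
import numpy as np

def separate_flow(streamflow, alpha=0.925):
    """One forward pass of a Lyne-Hollick-style recursive digital filter.
    Returns (direct_runoff, base_flow), constrained so that
    0 <= direct_runoff <= total flow at every time step."""
    q = np.asarray(streamflow, dtype=float)
    quick = np.zeros_like(q)
    for t in range(1, q.size):
        quick[t] = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
        quick[t] = min(max(quick[t], 0.0), q[t])   # keep components physical
    return quick, q - quick
```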

  8. Noise-gating to Clean Astrophysical Image Data

    NASA Astrophysics Data System (ADS)

    DeForest, C. E.

    2017-04-01

    I present a family of algorithms to reduce noise in astrophysical images and image sequences, preserving more information from the original data than is retained by conventional techniques. The family uses locally adaptive filters (“noise gates”) in the Fourier domain to separate coherent image structure from background noise based on the statistics of local neighborhoods in the image. Processing of solar data limited by simple shot noise or by additive noise reveals image structure not easily visible in the originals, preserves photometry of observable features, and reduces shot noise by a factor of 10 or more with little to no apparent loss of resolution. This reveals faint features that were either not directly discernible or not sufficiently strongly detected for quantitative analysis. The method works best on image sequences containing related subjects, for example movies of solar evolution, but is also applicable to single images provided that there are enough pixels. The adaptive filter uses the statistical properties of noise and of local neighborhoods in the data to discriminate between coherent features and incoherent noise without reference to the specific shape or evolution of those features. The technique can potentially be modified in a straightforward way to exploit additional a priori knowledge about the functional form of the noise.
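
    As a heavily simplified, single-tile illustration of the Fourier-domain gating idea (not the locally adaptive, neighborhood-based algorithm the paper describes), the sketch below zeroes Fourier components whose magnitude falls below a multiple of the expected white-noise level; `noise_sigma` and the gamma factor are assumptions the user must supply.

```python
import numpy as np

def simple_noise_gate(image, noise_sigma, gamma=3.0):
    """Zero Fourier components that are not significantly above the level
    expected from additive white noise of standard deviation noise_sigma."""
    F = np.fft.fft2(image)
    # Approximate per-coefficient noise magnitude for an N-pixel image
    noise_level = noise_sigma * np.sqrt(image.size)
    gate = np.abs(F) > gamma * noise_level
    return np.real(np.fft.ifft2(F * gate))
```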

  9. Neural net diagnostics for VLSI test

    NASA Technical Reports Server (NTRS)

    Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.

    1990-01-01

    This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.

  10. Bayesian aggregation versus majority vote in the characterization of non-specific arm pain based on quantitative needle electromyography

    PubMed Central

    2010-01-01

Background: Methods for the calculation and application of quantitative electromyographic (EMG) statistics for the characterization of EMG data detected from forearm muscles of individuals with and without pain associated with repetitive strain injury are presented. Methods: A classification procedure using a multi-stage application of Bayesian inference is presented that characterizes a set of motor unit potentials acquired using needle electromyography. The utility of this technique in characterizing EMG data obtained from both normal individuals and those presenting with symptoms of "non-specific arm pain" is explored and validated. The efficacy of the Bayesian technique is compared with simple voting methods. Results: The aggregate Bayesian classifier presented is found to perform with accuracy equivalent to that of majority voting on the test data, with an overall accuracy greater than 0.85. Theoretical foundations of the technique are discussed and related to the observations found. Conclusions: Aggregation of motor unit potential conditional probability distributions estimated using quantitative electromyographic analysis may be successfully used to perform electrodiagnostic characterization of "non-specific arm pain." It is expected that these techniques will also be applicable to other types of electrodiagnostic data. PMID:20156353
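
    A simplified sketch of the two aggregation rules being compared, assuming each motor unit potential (MUP) has already been assigned a posterior probability of coming from an affected muscle, with equal priors and conditional independence (the paper's multi-stage procedure is more involved):

    ```python
    import numpy as np

    def bayes_aggregate(posteriors, prior=0.5):
        """Combine per-MUP posteriors of 'affected' into one muscle-level posterior,
        assuming conditional independence and that each posterior was computed with
        a 50/50 prior (so p/(1-p) is the per-MUP likelihood ratio)."""
        p = np.asarray(posteriors, dtype=float)
        combined_odds = np.prod(p / (1.0 - p)) * prior / (1.0 - prior)
        return combined_odds / (1.0 + combined_odds)

    def majority_vote(posteriors, threshold=0.5):
        """Call the muscle affected if most MUPs individually exceed the threshold."""
        votes = np.asarray(posteriors) > threshold
        return votes.mean() > 0.5

    per_mup = [0.62, 0.55, 0.71, 0.48, 0.66]   # hypothetical per-MUP posteriors
    print(bayes_aggregate(per_mup), majority_vote(per_mup))
    ```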

  11. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    NASA Astrophysics Data System (ADS)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

Short-term ocean analyses of sea surface temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when correlation is added to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our results show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. The lowest RMSE estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset containing the higher-quality ensemble members), and a posteriori filtering of the least-squares solution.
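
    The core of the super-ensemble step is an ordinary multi-linear regression of the observed field onto the ensemble members over a training window. A minimal sketch (without the EOF filtering or the a posteriori spatial filter, and with synthetic numbers standing in for SST analyses):

    ```python
    import numpy as np

    def superensemble_fit(members, obs):
        """Least-squares multi-linear regression of observations on ensemble members
        over a training period.  members: (n_times, n_members); obs: (n_times,)."""
        A = np.column_stack([members, np.ones(len(obs))])   # add an intercept column
        coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
        return coef

    def superensemble_predict(members, coef):
        A = np.column_stack([members, np.ones(members.shape[0])])
        return A @ coef

    # Hypothetical SST analyses: 3 ensemble members over a 15-day training window
    rng = np.random.default_rng(0)
    truth = 20 + rng.normal(0, 0.5, 15)
    members = np.column_stack([truth + rng.normal(b, 0.3, 15) for b in (0.4, -0.2, 0.1)])
    w = superensemble_fit(members, truth)
    print(superensemble_predict(members, w) - truth)   # residuals on the training window
    ```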

  12. A simple technique for laparoscopic gastrostomy.

    PubMed

    Murphy, C; Rosemurgy, A S; Albrink, M H; Carey, L C

    1992-05-01

    While endoscopically placed gastrostomy tubes are routinely simple, they are not always feasible. Endoscopic technique also does not uniformly secure the tube to the abdominal wall, which presents possible complications, including leakage, accidental early tube removal, intraperitoneal catheter migration and necrosis of the stomach or abdominal wall because of excessive traction. Presented herein is a technique that is rapid, simple and eliminates some of these potential complications. The technique is easily combined with other operative procedures, such as tracheostomy, is done under direct vision, can be performed quickly with intravenous sedation and local anesthetic and is a safe method of tube placement for enteral feeding or gastric decompression.

  13. A simple method for processing data with least square method

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning

    2017-08-01

The least squares method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and a standard tool for statistical inference. In measurement data analysis, complex relationships are usually treated with the least squares principle, i.e., matrices are used to obtain the final estimate and to improve its accuracy. In this paper, a new way of obtaining the least squares solution is presented that is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of the method is illustrated with a concrete example.
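
    For reference, a minimal worked example of ordinary least squares fitting of a straight line, here solved with the standard matrix formulation rather than the algebraic shortcut the paper proposes:

    ```python
    import numpy as np

    # Fit y = a*x + b by ordinary least squares.
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

    A = np.column_stack([x, np.ones_like(x)])      # design matrix [x, 1]
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"slope = {a:.3f}, intercept = {b:.3f}")
    ```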

  14. Challenges in the automated classification of variable stars in large databases

    NASA Astrophysics Data System (ADS)

    Graham, Matthew; Drake, Andrew; Djorgovski, S. G.; Mahabal, Ashish; Donalek, Ciro

    2017-09-01

    With ever-increasing numbers of astrophysical transient surveys, new facilities and archives of astronomical time series, time domain astronomy is emerging as a mainstream discipline. However, the sheer volume of data alone - hundreds of observations for hundreds of millions of sources - necessitates advanced statistical and machine learning methodologies for scientific discovery: characterization, categorization, and classification. Whilst these techniques are slowly entering the astronomer's toolkit, their application to astronomical problems is not without its issues. In this paper, we will review some of the challenges posed by trying to identify variable stars in large data collections, including appropriate feature representations, dealing with uncertainties, establishing ground truths, and simple discrete classes.

  15. Multivariate analysis techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bendavid, Josh; Fisher, Wade C.; Junk, Thomas R.

    2016-01-01

The end products of experimental data analysis are designed to be simple and easy to understand: hypothesis tests and measurements of parameters. But, the experimental data themselves are voluminous and complex. Furthermore, in modern collider experiments, many petabytes of data must be processed in search of rare new processes which occur together with much more copious background processes that are of less interest to the task at hand. The systematic uncertainties on the background may be larger than the expected signal in many cases. The statistical power of an analysis and its sensitivity to systematic uncertainty can therefore usually both be improved by separating signal events from background events with higher efficiency and purity.

  16. A second-order Budyko-type parameterization of landsurface hydrology

    NASA Technical Reports Server (NTRS)

    Andreou, S. A.; Eagleson, P. S.

    1982-01-01

    A simple, second order parameterization of the water fluxes at a land surface for use as the appropriate boundary condition in general circulation models of the global atmosphere was developed. The derived parameterization incorporates the high nonlinearities in the relationship between the near surface soil moisture and the evaporation, runoff and percolation fluxes. Based on the one dimensional statistical dynamic derivation of the annual water balance, it makes the transition to short term prediction of the moisture fluxes, through a Taylor expansion around the average annual soil moisture. A comparison of the suggested parameterization is made with other existing techniques and available measurements. A thermodynamic coupling is applied in order to obtain estimations of the surface ground temperature.

  17. Does strand configuration and number of purchase points affect the biomechanical behavior of a tendon repair? A biomechanical evaluation using different kessler methods of flexor tendon repair.

    PubMed

    Dogramaci, Yunus; Kalaci, Aydiner; Sevinç, Teoman Toni; Esen, Erdinc; Komurcu, Mahmut; Yanat, Ahmet Nedim

    2008-09-01

    This study compares the mechanical properties of modified Kessler and double-modified Kessler flexor tendon repair techniques and evaluates simple modifications on both methods. Forty fresh sheep flexor tendons were divided equally into four groups. A transverse sharp cut was done in the middle of each tendon and then repaired with modified Kessler technique, modified Kessler with additional purchase point in the midpoint of each longitudinal strand, double-modified Kessler technique, or a combination of outer Kessler and inner cruciate configuration based on double-modified Kessler technique. The tendons were tested in a tensile testing machine to assess the mechanical performance of the repairs. Outcome measures included gap formation and ultimate forces. The gap strengths of the double-modified Kessler technique (30.85 N, SD 1.90) and double-modified Kessler technique with inner cruciate configuration (33.60 N, SD 4.64) were statistically significantly greater than that of the two-strand modified Kessler (22.56 N, SD 3.44) and modified Kessler with additional purchase configuration (21.75 N, SD 4.03; Tukey honestly significant difference test, P < 0.000). There were statistically significant differences in failure strengths of the all groups (analysis of variance, P < 0.000). With an identical number of strands, the gap formation and ultimate forces of the repairs were not changed by additional locking purchase point in modified Kessler repair or changing the inner strand configuration in double-modified Kessler repair. The results of this study show that the number of strands across the repair site together with the number of locking loops clearly affects the strength of the repair; meanwhile, the longitudinal strand orientation and number of purchase points in a single loop did not affect its strength.

  18. Open vs Laparoscopic Simple Prostatectomy: A Comparison of Initial Outcomes and Cost.

    PubMed

    Demir, Aslan; Günseren, Kadir Ömür; Kordan, Yakup; Yavaşçaoğlu, İsmet; Vuruşkan, Berna Aytaç; Vuruşkan, Hakan

    2016-08-01

We compared the cost-effectiveness of laparoscopic simple prostatectomy (LSP) vs open prostatectomy (OP). A total of 73 men treated for benign prostatic hyperplasia were enrolled for OP and LSP in groups 1 and 2, respectively. Perioperative findings were recorded, including operation time (OT), blood loss, transfusion rate, conversion to open surgery, and complications according to the Clavien classification. The postoperative findings, including catheterization and drainage time, the amount of analgesic used, hospitalization time, postoperative complications, International Prostate Symptom Score (IPSS) and International Index of Erectile Function (IIEF) scores, the extracted prostate weight, uroflowmetry results, as well as postvoiding residual (PVR) and quality of life (QoL) score at the postoperative third month, were analyzed. The cost of both techniques was also compared statistically. No statistical differences were found in the preoperative parameters, including age, IPSS and QoL score, maximum flow rate (Qmax), PVR, IIEF score, and prostate volumes, as measured by transabdominal ultrasonography. No statistical differences were established in terms of the OT and the weight of the extracted prostate. No differences were established between groups with regard to complications according to the Clavien classification. However, the bleeding rate was significantly lower in group 2. The drainage, catheterization, and hospitalization times and the amount of analgesics were significantly lower in the second group. The postoperative third-month findings were not statistically different; only the Qmax values were significantly greater in group 2. Although there was only a $52 difference between groups in operation cost, this difference was statistically significant. The use of LSP for prostates over 80 g is more effective than OP in terms of OT, bleeding amount, transfusion rates, catheterization time, drain removal time, hospitalization time, consumed analgesic amount, and Qmax values. On the other hand, the mean cost of LSP is higher than that of OP. Better effectiveness comes with higher cost.

  19. Assessing the effect of a partly unobserved, exogenous, binary time-dependent covariate on survival probabilities using generalised pseudo-values.

    PubMed

    Pötschger, Ulrike; Heinzl, Harald; Valsecchi, Maria Grazia; Mittlböck, Martina

    2018-01-19

Investigating the impact of a time-dependent intervention on the probability of long-term survival is statistically challenging. A typical example is stem-cell transplantation performed after successful donor identification from registered donors. Here, a suggested simple analysis based on the exogenous donor availability status according to registered donors would allow the estimation and comparison of survival probabilities. As donor search is usually ceased after a patient's event, donor availability status is incompletely observed, so that this simple comparison is not possible and the waiting time to donor identification needs to be addressed in the analysis to avoid bias. It is methodologically unclear how to directly address cumulative long-term treatment effects without relying on proportional hazards while avoiding waiting time bias. The pseudo-value regression technique is able to handle the first two issues; a novel generalisation of this technique also avoids waiting time bias. Inverse-probability-of-censoring weighting is used to account for the partly unobserved exogenous covariate donor availability. Simulation studies demonstrate unbiasedness and satisfying coverage probabilities of the new method. A real data example demonstrates that study results based on generalised pseudo-values have a clear medical interpretation which supports the clinical decision-making process. The proposed generalisation of the pseudo-value regression technique enables comparison of survival probabilities between two independent groups where group membership becomes known over time and remains partly unknown. Hence, cumulative long-term treatment effects are directly addressed without relying on proportional hazards while avoiding waiting time bias.
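
    For context, the basic (ungeneralised) jackknife pseudo-value for a survival probability at time t0 is theta_i = n*theta_hat - (n-1)*theta_hat_(-i), computed from the Kaplan-Meier estimator; the paper's contribution adds inverse-probability-of-censoring weighting on top of this. A sketch of the basic construction, assuming the lifelines package is available:

    ```python
    import numpy as np
    from lifelines import KaplanMeierFitter

    def pseudo_values(time, event, t0):
        """Jackknife pseudo-values for S(t0) based on the Kaplan-Meier estimator."""
        time, event = np.asarray(time, float), np.asarray(event, int)
        n = len(time)
        theta_full = KaplanMeierFitter().fit(time, event).survival_function_at_times(t0).values[0]
        pv = np.empty(n)
        for i in range(n):
            mask = np.arange(n) != i                     # leave observation i out
            theta_i = KaplanMeierFitter().fit(time[mask], event[mask]).survival_function_at_times(t0).values[0]
            pv[i] = n * theta_full - (n - 1) * theta_i
        return pv

    t = np.array([5.0, 8.0, 12.0, 20.0, 25.0, 30.0])
    e = np.array([1, 1, 0, 1, 0, 1])                     # 1 = event, 0 = censored
    print(pseudo_values(t, e, t0=15.0))
    ```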

  20. Uncertainty Quantification Techniques for Population Density Estimates Derived from Sparse Open Source Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stewart, Robert N; White, Devin A; Urban, Marie L

    2013-01-01

The Population Density Tables (PDT) project at the Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort, which considers over 250 countries, spans 40 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
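
    The paper's encoding maps answers to non-statistical questions onto a bivariate Gaussian prior over the Beta parameters; that algorithm is not reproduced here. As a much simpler illustration of the same goal, namely turning an elicited best guess and rough uncertainty into a Beta prior, a method-of-moments sketch:

    ```python
    def beta_from_mean_sd(mean, sd):
        """Method-of-moments Beta(a, b) from an elicited mean and standard deviation.
        Requires 0 < mean < 1 and sd**2 < mean * (1 - mean)."""
        var = sd ** 2
        if not 0 < mean < 1 or var >= mean * (1 - mean):
            raise ValueError("inconsistent elicitation")
        k = mean * (1 - mean) / var - 1
        return mean * k, (1 - mean) * k

    # e.g. "about 15 percent, give or take 5 percentage points"
    a, b = beta_from_mean_sd(0.15, 0.05)
    print(a, b)
    ```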

  1. Analysis of respiratory events in obstructive sleep apnea syndrome: Inter-relations and association to simple nocturnal features.

    PubMed

    Ghandeharioun, H; Rezaeitalab, F; Lotfi, R

    2016-01-01

This study carefully evaluates the associations of different respiration-related events with each other and with simple nocturnal features in obstructive sleep apnea-hypopnea syndrome (OSAS). The events include apneas, hypopneas, respiratory event-related arousals (RERAs) and snores. We conducted a statistical study on 158 adults who underwent polysomnography between July 2012 and May 2014. To assess relevance, along with linear statistical strategies such as analysis of variance and bootstrapping the standard error of a correlation coefficient, the non-linear method of mutual information is also applied to clarify ambiguous results of the linear techniques. Based on normalized mutual information weights (NMIW), indices of apnea are 1.3 times more relevant to AHI values than those of hypopnea. The NMIW for the number of blood oxygen desaturations below 95% is considerable (0.531). The next most relevant feature is the "respiratory arousal index", with an NMIW of 0.501. Snore indices (0.314) and BMI (0.203) take the next places. Based on NMIW values, snoring events are nearly one-third (29.9%) more dependent on hypopneas than on RERAs. Conclusions: 1. The more severe the OSAS, the more frequently apneic events happen. 2. An association of snoring with hypopnea/RERA was revealed, which is routinely ignored in regression-based OSAS modeling. 3. The statistical dependencies of oximetry features can potentially lead to home-based screening of OSAS. 4. Poor ESS-AHI relevance in the database under study indicates its inadequacy for OSA diagnosis compared with oximetry. 5. Given the poor RERA-snore/ESS relevance, a detailed history of the symptoms plus polysomnography is suggested for accurate diagnosis of RERAs. Copyright © 2015 Sociedade Portuguesa de Pneumologia. Published by Elsevier España, S.L.U. All rights reserved.
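
    A rough sketch of how a normalized mutual information weight between a continuous feature (e.g., an apnea index) and the AHI could be estimated by simple binning; the study's exact estimator and normalization may differ.

    ```python
    import numpy as np
    from sklearn.metrics import normalized_mutual_info_score

    def nmi_weight(feature, target, bins=10):
        """Plug-in NMI estimate between two continuous variables via equal-width binning."""
        f = np.digitize(feature, np.histogram_bin_edges(feature, bins))
        t = np.digitize(target, np.histogram_bin_edges(target, bins))
        return normalized_mutual_info_score(f, t)

    # Synthetic example: a feature monotonically related to the target scores high
    rng = np.random.default_rng(0)
    ahi = rng.gamma(2.0, 10.0, 200)
    apnea_index = 0.7 * ahi + rng.normal(0, 5, 200)
    print(nmi_weight(apnea_index, ahi))
    ```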

  2. Comparative evaluation of features and techniques for identifying activity type and estimating energy cost from accelerometer data

    PubMed Central

    Kate, Rohit J.; Swartz, Ann M.; Welch, Whitney A.; Strath, Scott J.

    2016-01-01

Wearable accelerometers can be used to objectively assess physical activity. However, the accuracy of this assessment depends on the underlying method used to process the time series data obtained from accelerometers. Several methods have been proposed that use these data to identify the type of physical activity and estimate its energy cost. Most of the newer methods employ some machine learning technique along with suitable features to represent the time series data. This paper experimentally compares several of these techniques and features on a large dataset of 146 subjects doing eight different physical activities wearing an accelerometer on the hip. Besides features based on statistics, distance-based features and simple discrete features taken straight from the time series were also evaluated. On the physical activity type identification task, the results show that using more features significantly improves results. Choice of machine learning technique was also found to be important. However, on the energy cost estimation task, choice of features and machine learning technique were found to be less influential. On that task, separate energy cost estimation models trained specifically for each type of physical activity were found to be more accurate than a single model trained for all types of physical activities. PMID:26862679
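
    A minimal sketch of the statistics-based feature pipeline described (windowed summary statistics feeding a generic classifier); the arrays below are random stand-ins for accelerometer windows and activity labels, and the classifier choice is illustrative.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(window):
        """Summary statistics for one accelerometer window of shape (n_samples, 3 axes):
        per-axis mean, standard deviation, and interquartile range (9 features)."""
        iqr = np.percentile(window, 75, axis=0) - np.percentile(window, 25, axis=0)
        return np.concatenate([window.mean(axis=0), window.std(axis=0), iqr])

    # Random stand-ins for labelled accelerometer windows (300 windows, 4 activity types)
    rng = np.random.default_rng(1)
    windows = rng.normal(size=(300, 100, 3))
    labels = rng.integers(0, 4, size=300)

    X = np.array([window_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```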

  3. Multi-criterion model ensemble of CMIP5 surface air temperature over China

    NASA Astrophysics Data System (ADS)

    Yang, Tiantian; Tao, Yumeng; Li, Jingjing; Zhu, Qian; Su, Lu; He, Xiaojia; Zhang, Xiaoming

    2018-05-01

Global circulation models (GCMs) are useful tools for simulating climate change, projecting future temperature changes and, therefore, supporting the preparation of national climate adaptation plans. However, different GCMs are not always in agreement with each other over various regions. The reason is that GCM configurations, module characteristics, and dynamic forcings vary from one model to another. Model ensemble techniques are extensively used to post-process the outputs from GCMs and improve the variability of model outputs. Root-mean-square error (RMSE), correlation coefficient (CC, or R), and uncertainty are commonly used statistics for evaluating the performance of GCMs. However, simultaneously achieving satisfactory values of all these statistics cannot be guaranteed with many model ensemble techniques. In this paper, we propose a multi-model ensemble framework, using a state-of-the-art evolutionary multi-objective optimization algorithm (termed MOSPD), to evaluate different characteristics of ensemble candidates and to provide comprehensive trade-off information for different model ensemble solutions. A case study of optimizing the surface air temperature (SAT) ensemble solutions over different geographical regions of China is carried out. The data cover the period from 1900 to 2100, and the projections of SAT are analyzed with regard to three different statistical indices (i.e., RMSE, CC, and uncertainty). Among the derived ensemble solutions, the trade-off information is further analyzed with a robust Pareto front with respect to the different statistics. The comparison results over the historical period (1900-2005) show that the optimized solutions are superior to those obtained by a simple model average, as well as to any single GCM output. The improvements in the statistics vary across the climatic regions of China. Future projections (2006-2100) with the proposed ensemble method indicate that the largest (smallest) temperature changes will happen in South Central China (Inner Mongolia), North Eastern China (South Central China), and North Western China (South Central China) under the RCP 2.6, RCP 4.5, and RCP 8.5 scenarios, respectively.

  4. [Bayesian statistics in medicine -- part II: main applications and inference].

    PubMed

    Montomoli, C; Nichelatti, M

    2008-01-01

    Bayesian statistics is not only used when one is dealing with 2-way tables, but it can be used for inferential purposes. Using the basic concepts presented in the first part, this paper aims to give a simple overview of Bayesian methods by introducing its foundation (Bayes' theorem) and then applying this rule to a very simple practical example; whenever possible, the elementary processes at the basis of analysis are compared to those of frequentist (classical) statistical analysis. The Bayesian reasoning is naturally connected to medical activity, since it appears to be quite similar to a diagnostic process.
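
    The diagnostic analogy can be made concrete with a short worked example of Bayes' theorem for a screening test; the numbers below are illustrative only.

    ```python
    def posterior_disease(prevalence, sensitivity, specificity):
        """P(disease | positive test) from Bayes' theorem."""
        p_pos = sensitivity * prevalence + (1.0 - specificity) * (1.0 - prevalence)
        return sensitivity * prevalence / p_pos

    # Illustrative numbers: 2% prevalence, 90% sensitivity, 95% specificity
    print(posterior_disease(0.02, 0.90, 0.95))   # about 0.27
    ```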

  5. A new anchor augmentation technique with a cancellous screw in osteoporotic rotator cuff repair: an in vitro biomechanical study on sheep humerus specimens.

    PubMed

    Uruc, Vedat; Ozden, Raif; Dogramacı, Yunus; Kalacı, Aydıner; Hallaceli, Hasan; Küçükdurmaz, Fatih

    2014-01-01

    The aim of this study was to test a simple technique to augment the pullout resistance of an anchor in an over-drilled sheep humerus model. Sixty-four paired sheep humeri were harvested from 32 male sheep aged 18 months. Specimens were divided into an augmented group and non-augmented group. FASTIN RC 5-mm titanium screw anchors (DePuy Mitek, Raynham, MA) double loaded with suture material (braided polyester, nonabsorbable USP No. 2) were used in both groups. Osteoporosis was simulated by over-drilling with a 4.5-mm drill. Augmentation was performed by fixing 1 of the sutures 1.5 cm inferior to the anchor insertion site with a washer screw. This was followed by a pull-to-failure test at 50 mm/min. The ultimate load (the highest value of strength before anchor pullout) was recorded. A paired t test was used to compare the biomechanical properties of the augmented and non-augmented groups. In all specimens the failure mode was pullout of the anchor. The ultimate failure loads were statistically significantly higher in the augmented group (P < .0001). The mean pullout strength was 121.1 ± 10.17 N in the non-augmented group and 176.1 ± 10.34 N in the augmented group. The described augmentation technique, which is achieved by inferior-lateral fixation of 1 of the sutures of the double-loaded anchor to a fully threaded 6.5-mm cancellous screw with a washer, significantly increases the ultimate failure loads in the over-drilled sheep humerus model. Our technique is simple, safe, and inexpensive. It can be easily used in all osteoporotic patients and will contribute to the reduction of anchor failure. This technique might be difficult to apply arthroscopically. Cannulated smaller screws would probably be more practical for arthroscopic use. Further clinical studies are needed. Copyright © 2014 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  6. Statistics-related and reliability-physics-related failure processes in electronics devices and products

    NASA Astrophysics Data System (ADS)

    Suhir, E.

    2014-05-01

The well known and widely used experimental reliability "passport" of a mass manufactured electronic or photonic product — the bathtub curve — reflects the combined contribution of the statistics-related and reliability-physics (physics-of-failure)-related processes. As time progresses, the first process results in a decreasing failure rate, while the second process, associated with material aging and degradation, leads to an increasing failure rate. An attempt has been made in this analysis to assess the level of the reliability-physics-related aging process from the available bathtub curve (diagram). It is assumed that the products of interest underwent burn-in testing and therefore the obtained bathtub curve does not contain the infant mortality portion. It has also been assumed that the two random processes in question are statistically independent, and that the failure rate of the physical process can be obtained by deducting the theoretically assessed statistical failure rate from the bathtub curve ordinates. In the numerical example carried out, the Rayleigh distribution was used for the statistical failure rate, for the sake of a relatively simple illustration. The developed methodology can be used in reliability physics evaluations, when there is a need to better understand the roles of the statistics-related and reliability-physics-related irreversible random processes in reliability evaluations. Future work should include investigations of how the powerful and flexible methods and approaches of statistical mechanics can be effectively employed, in addition to reliability physics techniques, to model the operational reliability of electronic and photonic products.

  7. Self-affirmation model for football goal distributions

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2007-06-01

Analyzing football score data with statistical techniques, we investigate how the highly co-operative nature of the game is reflected in averaged properties such as the distributions of scored goals for the home and away teams. It turns out that in particular the tails of the distributions are not well described by independent Bernoulli trials, but rather well modeled by negative binomial or generalized extreme value distributions. To understand this behavior from first principles, we suggest modifying the Bernoulli random process to include a simple component of self-affirmation, which seems to describe the data surprisingly well and allows us to interpret the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments and found the proposed models to be applicable rather universally. In particular, here we compare men's and women's leagues and the separate German leagues during the Cold War and find some remarkable differences.

  8. An alternative way to evaluate chemistry-transport model variability

    NASA Astrophysics Data System (ADS)

    Menut, Laurent; Mailler, Sylvain; Bessagnet, Bertrand; Siour, Guillaume; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik

    2017-03-01

A simple and complementary evaluation technique for regional chemistry-transport models is discussed. The methodology is based on the concept that we can learn about model performance by comparing the simulation results with observational data available for time periods other than the period originally targeted. First, the statistical indicators selected in this study (spatial and temporal correlations) are computed for a given time period, using observation and simulation data colocated in time and space. Second, the same indicators are used to calculate scores for several other years while conserving the spatial locations and Julian days of the year. The difference between the results provides useful insight into the model's capability to reproduce the observed day-to-day and spatial variability. To synthesize the large number of results, a new indicator is proposed, designed to compare several error statistics across all the validation years and to quantify whether the period and area being studied were well captured by the model for the correct reasons.

  9. A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale

    PubMed Central

    Pérez Sánchez, Carlos Javier

    2014-01-01

Agreement analysis has been an active research area whose techniques have been widely applied in psychology and other fields. However, statistical agreement among raters has been mainly considered from a classical statistics point of view. Bayesian methodology is a viable alternative that allows the inclusion of subjective initial information coming from expert opinions, personal judgments, or historical data. A Bayesian approach is proposed by providing a unified Monte Carlo–based framework to estimate all types of measures of agreement in a qualitative scale of response. The approach is conceptually simple and has a low computational cost. Both informative and non-informative scenarios are considered. When no initial information is available, the results are in line with the classical methodology, but provide more information on the measures of agreement. For the informative case, some guidelines are presented for eliciting the prior distribution. The approach has been applied in two applications related to schizophrenia diagnosis and sensory analysis. PMID:29881002

  10. The expectancy-value muddle in the theory of planned behaviour - and some proposed solutions.

    PubMed

    French, David P; Hankins, Matthew

    2003-02-01

    The authors of the Theories of Reasoned Action and Planned Behaviour recommended a method for statistically analysing the relationships between beliefs and the Attitude, Subjective Norm, and Perceived Behavioural Control constructs. This method has been used in the overwhelming majority of studies using these theories. However, there is a growing awareness that this method yields statistically uninterpretable results (Evans, 1991). Despite this, the use of this method is continuing, as is uninformed interpretation of this problematic research literature. This is probably due to the lack of a simple account of where the problem lies, and the large number of alternatives available. This paper therefore summarizes the problem as simply as possible, gives consideration to the conclusions that can be validly drawn from studies that contain this problem, and critically reviews the many alternatives that have been proposed to address this problem. Different techniques are identified as being suitable, according to the purpose of the specific research project.

  11. Detection and Estimation of an Optical Image by Photon-Counting Techniques. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Lily Lee

    1973-01-01

    Statistical description of a photoelectric detector is given. The photosensitive surface of the detector is divided into many small areas, and the moment generating function of the photo-counting statistic is derived for large time-bandwidth product. The detection of a specified optical image in the presence of the background light by using the hypothesis test is discussed. The ideal detector based on the likelihood ratio from a set of numbers of photoelectrons ejected from many small areas of the photosensitive surface is studied and compared with the threshold detector and a simple detector which is based on the likelihood ratio by counting the total number of photoelectrons from a finite area of the surface. The intensity of the image is assumed to be Gaussian distributed spatially against the uniformly distributed background light. The numerical approximation by the method of steepest descent is used, and the calculations of the reliabilities for the detectors are carried out by a digital computer.

  12. Hydrostatic weighing without head submersion in morbidly obese females.

    PubMed

    Evans, P E; Israel, R G; Flickinger, E G; O'Brien, K F; Donnelly, J E

    1989-08-01

This study tests the validity of hydrostatic weighing without head submersion (HWNS) for determining the body density (Db) of morbidly obese (MO) females. Eighty MO females who were able to perform traditional hydrostatic weighing at residual volume (HW) underwent four counterbalanced trials for each procedure (HW and HWNS) to determine Db. Residual volume was determined by oxygen dilution. Twenty subjects were randomly excluded from the experimental group (EG) and assigned to a cross-validation group (CV). Simple linear regression was performed on EG data (n = 60, mean age = 36.8 y, mean % fat = 50.1) to predict Db from HWNS (Db = 0.569563 [Db HWNS] + 0.408621, SEE = 0.0066). Comparison of the predicted and actual Db for the CV group yielded r = 0.69, SEE = 0.0066, E statistic = 0.0067, mean difference = 0.0013 kg/L. The SEE and E statistic for body fat were 3.31 and 3.39, respectively. Mean difference for percent fat was 0.66%. Results indicate that HWNS is a valid technique for assessing body composition in MO females.

  13. Fast Quantum Algorithm for Predicting Descriptive Statistics of Stochastic Processes

    NASA Technical Reports Server (NTRS)

    Williams Colin P.

    1999-01-01

Stochastic processes are used as a modeling tool in several sub-fields of physics, biology, and finance. Analytic understanding of the long term behavior of such processes is only tractable for very simple types of stochastic processes such as Markovian processes. However, in real world applications more complex stochastic processes often arise. In physics, the complicating factor might be nonlinearities; in biology it might be memory effects; and in finance it might be the non-random, intentional behavior of participants in a market. In the absence of analytic insight, one is forced to understand these more complex stochastic processes via numerical simulation techniques. In this paper we present a quantum algorithm for performing such simulations. In particular, we show how a quantum algorithm can predict arbitrary descriptive statistics (moments) of N-step stochastic processes in just O(square root of N) time. That is, the quantum complexity is the square root of the classical complexity for performing such simulations. This is a significant speedup in comparison to the current state of the art.

  14. Statics and Dynamics of Selfish Interactions in Distributed Service Systems

    PubMed Central

    Altarelli, Fabrizio; Braunstein, Alfredo; Dall’Asta, Luca

    2015-01-01

    We study a class of games which models the competition among agents to access some service provided by distributed service units and which exhibits congestion and frustration phenomena when service units have limited capacity. We propose a technique, based on the cavity method of statistical physics, to characterize the full spectrum of Nash equilibria of the game. The analysis reveals a large variety of equilibria, with very different statistical properties. Natural selfish dynamics, such as best-response, usually tend to large-utility equilibria, even though those of smaller utility are exponentially more numerous. Interestingly, the latter actually can be reached by selecting the initial conditions of the best-response dynamics close to the saturation limit of the service unit capacities. We also study a more realistic stochastic variant of the game by means of a simple and effective approximation of the average over the random parameters, showing that the properties of the average-case Nash equilibria are qualitatively similar to the deterministic ones. PMID:26177449

  15. Comparison of fat maintenance in the face with centrifuge versus filtered and washed fat.

    PubMed

    Asilian, Ali; Siadat, Amir Hossein; Iraji, Razieh

    2014-06-01

Autogenous fat injection of the face is a viable and lasting remedy for soft tissue loss and has become a mainstay in facial rejuvenation. Fat transfer is used either as a stand-alone technique or as an adjunct to other filler techniques and lifting, depending on patient needs. Although soft tissue augmentation with autologous fat transfer has been increasingly used by esthetic surgeons, there is no agreement concerning the best way of processing the harvested fat before injection. This study compared the clinical results obtained using fat that was simply filtered and washed in a metal sieve with those achieved by means of purely centrifuged fat. A prospective single-blind analysis was performed on 32 healthy patients undergoing nasolabial fold fat transplantation from 2009 to 2011 (simple sampling). Patients were randomly assigned to two groups: the faces of half the patients (16 subjects) were injected with centrifuged fat and those of the other half with simply filtered and washed fat, to evaluate the effect of the preparation method on fat graft viability. An objective method was used to evaluate the results, involving the assessment of postoperative photographs (at months 1, 6, and 12) by an esthetic surgeon (according to the nasolabial scale). The subjective method was a self-assessment obtained from patients about their general level of satisfaction and improvement of skin texture. Statistical analyses were performed by means of the Wilcoxon and Mann-Whitney tests. Acquired data were analyzed using SPSS version 15, and a value of P < 0.05 was considered significant. There was no significant difference in the survival of grafted fat between Group 1 (fat processed by centrifugation at 3400 rpm for 1 min) and Group 2 (fat washed in the sieve). Our data suggest that centrifugation of the fat does not enhance survival of the grafted fat (P > 0.05).

  16. Simple and economic colloidal centrifugation protocols may be incorporated into the clinical equine sperm processing procedure.

    PubMed

    Gutiérrez-Cepeda, L; Fernández, A; Crespo, F; Gosálvez, J; Serres, C

    2011-03-01

For many years in human assisted-reproduction procedures there have been special protocols to prepare and improve sperm quality. Colloidal centrifugation (CC) is a useful technique that has been proved to enhance semen quality by selection of the best spermatozoa in different species. Its use is recommended to improve the fertility of subfertile stallions, but current CC protocols are difficult to incorporate into clinical equine sperm processing due to economic and technical difficulties. The aim of this study was to determine the optimal processing procedures for adapting the use of a CC product (EquiPure™) in the equine reproduction industry. A total of nineteen ejaculates were collected from 10 Purebred Spanish Horses (P.R.E horses) using a Missouri artificial vagina. Gel-free semen aliquots were analyzed prior to treatment (control). Semen was subjected to one of six CC protocols with EquiPure™ and centrifuged samples were statistically evaluated by ANOVA and Duncan tests (p<0.05) for sperm quality and recovery rate. We obtained higher values for the LIN, STR and BCF variables with colloidal centrifugation, and the DNA fragmentation index tended to be lower in most of the CC protocols. The studied protocols were shown to be as efficient in improving equine sperm quality as the current commercial EquiPure™ protocol, with the added advantage of being much more economical and simple to use. According to these results, it seems possible to incorporate single-layer and/or high-volume colloidal centrifugation protocols, which would make them simple, economical and clinically viable for the equine sperm processing procedure. Copyright © 2011 Elsevier B.V. All rights reserved.

  17. Statistics without Tears: Complex Statistics with Simple Arithmetic

    ERIC Educational Resources Information Center

    Smith, Brian

    2011-01-01

    One of the often overlooked aspects of modern statistics is the analysis of time series data. Modern introductory statistics courses tend to rush to probabilistic applications involving risk and confidence. Rarely does the first level course linger on such useful and fascinating topics as time series decomposition, with its practical applications…

  18. Applied statistics in ecology: common pitfalls and simple solutions

    Treesearch

    E. Ashley Steel; Maureen C. Kennedy; Patrick G. Cunningham; John S. Stanovick

    2013-01-01

    The most common statistical pitfalls in ecological research are those associated with data exploration, the logic of sampling and design, and the interpretation of statistical results. Although one can find published errors in calculations, the majority of statistical pitfalls result from incorrect logic or interpretation despite correct numerical calculations. There...

  19. The Statistics of wood assays for preservative retention

    Treesearch

    Patricia K. Lebow; Scott W. Conklin

    2011-01-01

    This paper covers general statistical concepts that apply to interpreting wood assay retention values. In particular, since wood assays are typically obtained from a single composited sample, the statistical aspects, including advantages and disadvantages, of simple compositing are covered.

  20. Path-Following Solutions Of Nonlinear Equations

    NASA Technical Reports Server (NTRS)

    Barger, Raymond L.; Walters, Robert W.

    1989-01-01

Report describes some path-following techniques for the solution of nonlinear equations and compares them with other methods. The investigation emphasizes multipurpose techniques applicable at more than one stage of the path-following computation; their use results in a system that is relatively simple to understand, program, and use, and in concise computer code. Comparison with the method of parametric differentiation (MPD) reveals definite advantages for the path-following methods.

  1. Two Simple Models for Fracking

    NASA Astrophysics Data System (ADS)

    Norris, Jaren Quinn

Recent developments in fracking have enabled the recovery of oil and gas from tight shale reservoirs. These developments have also made fracking one of the most controversial environmental issues in the United States. Despite the growing controversy surrounding fracking, there is relatively little publicly available research. This dissertation introduces two simple models for fracking that were developed using techniques from non-linear and statistical physics. The first model assumes that the volume of induced fractures must be equal to the volume of injected fluid. For simplicity, these fractures are assumed to form a spherically symmetric damage region around the borehole. The predicted volumes of water necessary to create a damage region with a given radius are in good agreement with reported values. The second model is a modification of invasion percolation which was previously introduced to model water flooding. The reservoir rock is represented by a regular lattice of local traps that contain oil and/or gas separated by rock barriers. The barriers are assumed to be highly heterogeneous and are assigned random strengths. Fluid is injected from a central site and the weakest rock barrier breaks, allowing fluid to flow into the adjacent site. The process repeats with the weakest barrier breaking and fluid flowing to an adjacent site each time step. Extensive numerical simulations were carried out to obtain statistical properties of the growing fracture network. The network was found to be fractal with fractal dimensions differing slightly from the accepted values for traditional percolation. Additionally, the network follows Horton-Strahler and Tokunaga branching statistics, which have been used to characterize river networks. As with other percolation models, the growth of the network occurs in bursts. These bursts follow a power-law size distribution similar to that of observed microseismic events. Reservoir stress anisotropy is incorporated into the model by assigning horizontal bonds weaker strengths on average than vertical bonds. Numerical simulations show that increasing bond strength anisotropy tends to reduce the fractal dimension of the growing fracture network and decrease the power-law slope of the burst size distribution. Although simple, these two models are useful for making informed decisions about fracking.
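
    A minimal sketch of the second model's growth rule (the weakest boundary bond breaks at each step, with optional strength anisotropy between horizontal and vertical bonds); parameter choices are illustrative, not those of the dissertation.

    ```python
    import heapq
    import random

    def invasion_percolation(n=101, steps=2000, anisotropy=1.0, seed=0):
        """Grow an invaded cluster from a central injection site on an n x n lattice.
        Each bond gets a random strength; the weakest bond on the cluster boundary
        breaks at every step.  anisotropy > 1 makes vertical bonds stronger on
        average, so growth is preferentially horizontal."""
        random.seed(seed)

        def bond_strength(dy):
            s = random.random()
            return s * anisotropy if dy != 0 else s   # vertical bonds scaled up

        invaded = {(n // 2, n // 2)}
        frontier = []                                 # heap of (strength, x, y)

        def push_neighbors(x, y):
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < n and 0 <= ny < n and (nx, ny) not in invaded:
                    heapq.heappush(frontier, (bond_strength(dy), nx, ny))

        push_neighbors(n // 2, n // 2)
        while frontier and len(invaded) < steps:
            _, x, y = heapq.heappop(frontier)         # weakest available bond
            if (x, y) in invaded:
                continue
            invaded.add((x, y))
            push_neighbors(x, y)
        return invaded

    cluster = invasion_percolation(anisotropy=2.0)
    print(len(cluster), "sites invaded")
    ```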

  2. Prospective, randomized trial comparing diathermy excision and diathermy coagulation for symptomatic, prolapsed hemorrhoids.

    PubMed

    Quah, H M; Seow-Choen, F

    2004-03-01

    This study was designed to compare diathermy excision and diathermy coagulation in the treatment of symptomatic prolapsed piles. Forty-five consecutive patients were randomly assigned to diathermy excision hemorrhoidectomy (Group A, n = 25) and diathermy coagulation (Group B, n = 20) under general anesthesia. The median duration of surgery was ten minutes for both groups. There was no statistical difference in the severity of postoperative pain at rest between the two groups, but Group A patients felt less pain during defecation on the third postoperative day (median, 5 (interquartile range, 3-7) vs. 8 (4-9); P = 0.04) and on the sixth postoperative day (median, 5 (interquartile range, 2-6) vs. 9 (5-10); P = 0.02). There was, however, no statistical difference in postoperative oral analgesics use and patients' satisfaction scores between the two groups. Complication rates were similar except that diathermy coagulation tended to leave some residual skin components of external hemorrhoid especially in very large prolapsed piles. Group A patients resumed work earlier (mean, 12 (range, 4-20) vs. 17 (11-21) days); however, this was not statistically significant ( P = 0.1). Diathermy coagulation of hemorrhoids is a simple technique and may be considered in suitable cases.

  3. Extraction of phase information in daily stock prices

    NASA Astrophysics Data System (ADS)

    Fujiwara, Yoshi; Maekawa, Satoshi

    2000-06-01

It is known that, on an intermediate time scale such as days, stock market fluctuations possess several statistical properties that are common to different markets. Namely, logarithmic returns of an asset price have (i) a truncated Pareto-Lévy distribution, (ii) vanishing linear correlation, and (iii) volatility clustering with power-law autocorrelation. Fact (ii) is a consequence of the nonexistence of arbitrageurs with simple strategies, but this does not imply statistical independence of market fluctuations. Little attention has been paid to the temporal structure of higher-order statistics, although it contains important information on market dynamics. We applied a signal separation technique, called Independent Component Analysis (ICA), to actual daily stock price data from the Tokyo and New York Stock Exchanges (TSE/NYSE). ICA applies a linear transformation to lag vectors of the time series to find independent components by a nonlinear algorithm. We obtained a similar impulse response for these datasets. If the process were a martingale, it can be shown that the impulse response should be a delta function under a few conditions that can be checked numerically, as was verified with surrogate data. This result would provide information on market dynamics, including speculative bubbles and arbitrage processes.
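
    A sketch of applying ICA to lag vectors of a daily return series, in the spirit of the analysis described; FastICA stands in for whatever ICA algorithm the authors used, and the return series below is synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def lag_matrix(returns, n_lags=20):
        """Stack overlapping lag vectors of a 1-D return series into rows."""
        r = np.asarray(returns, dtype=float)
        return np.array([r[i:i + n_lags] for i in range(len(r) - n_lags)])

    # Synthetic heavy-tailed returns standing in for daily log-returns
    rng = np.random.default_rng(0)
    returns = 0.01 * rng.standard_t(df=4, size=2000)

    X = lag_matrix(returns)
    ica = FastICA(n_components=5, random_state=0)
    sources = ica.fit_transform(X)    # independent components of the lag vectors
    filters = ica.mixing_             # columns play the role of impulse-response-like shapes
    print(filters.shape)
    ```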

  4. Far-from-Equilibrium Route to Superthermal Light in Bimodal Nanolasers

    NASA Astrophysics Data System (ADS)

    Marconi, Mathias; Javaloyes, Julien; Hamel, Philippe; Raineri, Fabrice; Levenson, Ariel; Yacomotti, Alejandro M.

    2018-02-01

    Microscale and nanoscale lasers inherently exhibit rich photon statistics due to complex light-matter interaction in a strong spontaneous emission noise background. It is well known that they may display superthermal fluctuations—photon superbunching—in specific situations due to either gain competition, leading to mode-switching instabilities, or carrier-carrier coupling in superradiant microcavities. Here we show a generic route to superbunching in bimodal nanolasers by preparing the system far from equilibrium through a parameter quench. We demonstrate, both theoretically and experimentally, that transient dynamics after a short-pump-pulse-induced quench leads to heavy-tailed superthermal statistics when projected onto the weak mode. We implement a simple experimental technique to access the probability density functions that further enables quantifying the distance from thermal equilibrium via the thermodynamic entropy. The universality of this mechanism relies on the far-from-equilibrium dynamical scenario, which can be mapped to a fast cooling process of a suspension of Brownian particles in a liquid. Our results open up new avenues to mold photon statistics in multimode optical systems and may constitute a test bed to investigate out-of-equilibrium thermodynamics using micro or nanocavity arrays.

  5. An Anatomic and Biomechanical Comparison of Bankart Repair Configurations.

    PubMed

    Judson, Christopher H; Voss, Andreas; Obopilwe, Elifho; Dyrna, Felix; Arciero, Robert A; Shea, Kevin P

    2017-11-01

    Suture anchor repair for anterior shoulder instability can be performed using a number of different repair techniques, but none has been proven superior in terms of anatomic and biomechanical properties. Purpose/Hypothesis: The purpose was to compare the anatomic footprint coverage and biomechanical characteristics of 4 different Bankart repair techniques: (1) single row with simple sutures, (2) single row with horizontal mattress sutures, (3) double row with sutures, and (4) double row with labral tape. The hypotheses were as follows: (1) double-row techniques would improve the footprint coverage and biomechanical properties compared with single-row techniques, (2) horizontal mattress sutures would increase the footprint coverage compared with simple sutures, and (3) repair techniques with labral tape and sutures would not show different biomechanical properties. Controlled laboratory study. Twenty-four fresh-frozen cadaveric specimens were dissected. The native labrum was removed and the footprint marked and measured. Repair for each of the 4 groups was performed, and the uncovered footprint was measured using a 3-dimensional digitizer. The strength of the repair sites was assessed using a servohydraulic testing machine and a digital video system to record load to failure, cyclic displacement, and stiffness. The double-row repair techniques with sutures and labral tape covered 73.4% and 77.0% of the footprint, respectively. These percentages were significantly higher than the footprint coverage achieved by single-row repair techniques using simple sutures (38.1%) and horizontal mattress sutures (32.8%) ( P < .001). The footprint coverage of the simple suture and horizontal mattress suture groups was not significantly different ( P = .44). There were no significant differences in load to failure, cyclic displacement, or stiffness between the single-row and double-row groups or between the simple suture and horizontal mattress suture techniques. Likewise, there was no difference in the biomechanical properties of the double-row repair techniques with sutures versus labral tape. Double-row repair techniques provided better coverage of the native footprint of the labrum but did not provide superior biomechanical properties compared with single-row repair techniques. There was no difference in footprint coverage or biomechanical strength between the simple suture and horizontal mattress suture repair techniques. Although the double-row repair techniques had no difference in initial strength, they may improve healing in high-risk patients by improving the footprint coverage.

  6. Prediction of drug transport processes using simple parameters and PLS statistics. The use of ACD/logP and ACD/ChemSketch descriptors.

    PubMed

    Osterberg, T; Norinder, U

    2001-01-01

    A method of modelling and predicting biopharmaceutical properties using simple theoretically computed molecular descriptors and multivariate statistics has been investigated for several data sets related to solubility, IAM chromatography, permeability across Caco-2 cell monolayers, human intestinal perfusion, brain-blood partitioning, and P-glycoprotein ATPase activity. The molecular descriptors (e.g. molar refractivity, molar volume, index of refraction, surface tension and density) and logP were computed with ACD/ChemSketch and ACD/logP, respectively. Good statistical models were derived that permit simple computational prediction of biopharmaceutical properties. All final models derived had R(2) values ranging from 0.73 to 0.95 and Q(2) values ranging from 0.69 to 0.86. The RMSEP values for the external test sets ranged from 0.24 to 0.85 (log scale).
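
    A minimal sketch of the modelling setup (PLS regression on a small block of simple descriptors with a cross-validated Q²); the descriptor matrix and endpoint below are synthetic placeholders, not the ACD-computed values used in the paper.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    # Placeholder descriptor matrix (e.g. logP, molar refractivity, molar volume,
    # index of refraction, surface tension, density) and a permeability endpoint
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 6))
    y = X @ np.array([0.8, 0.3, -0.2, 0.1, -0.4, 0.2]) + rng.normal(0, 0.3, 60)

    pls = PLSRegression(n_components=3)
    q2 = cross_val_score(pls, X, y, cv=5, scoring="r2").mean()   # cross-validated R^2
    pls.fit(X, y)
    print("R2 =", pls.score(X, y), "Q2 =", q2)
    ```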

  7. An integrated user-friendly ArcMAP tool for bivariate statistical modeling in geoscience applications

    NASA Astrophysics Data System (ADS)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusof, Z.; Tehrany, M. S.

    2014-10-01

Modeling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modeling. Bivariate statistical analysis (BSA) assists in hazard modeling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, BSM (bivariate statistical modeler), for the BSA technique is proposed. Three popular BSA techniques, namely frequency ratio, weights-of-evidence, and evidential belief function models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and wrapped in a simple graphical user interface, which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.
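
    Of the three BSA techniques mentioned, the frequency ratio is the simplest to state: for each class of a conditioning factor it is the share of hazard pixels falling in that class divided by the share of all pixels in that class. A minimal sketch:

    ```python
    import numpy as np

    def frequency_ratio(factor_classes, hazard_mask):
        """Frequency ratio per class of a categorical conditioning factor:
        FR = (share of hazard pixels in the class) / (share of all pixels in the class)."""
        fr = {}
        total_pix = factor_classes.size
        total_haz = hazard_mask.sum()
        for c in np.unique(factor_classes):
            in_class = factor_classes == c
            fr[c] = (hazard_mask[in_class].sum() / total_haz) / (in_class.sum() / total_pix)
        return fr

    # Tiny illustrative raster: class labels and a boolean hazard inventory
    classes = np.array([1, 1, 2, 2, 2, 3, 3, 3, 3, 3])
    hazards = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0], dtype=bool)
    print(frequency_ratio(classes, hazards))
    ```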

  8. An integrated user-friendly ArcMAP tool for bivariate statistical modelling in geoscience applications

    NASA Astrophysics Data System (ADS)

    Jebur, M. N.; Pradhan, B.; Shafri, H. Z. M.; Yusoff, Z. M.; Tehrany, M. S.

    2015-03-01

    Modelling and classification difficulties are fundamental issues in natural hazard assessment. A geographic information system (GIS) is a domain that requires users to use various tools to perform different types of spatial modelling. Bivariate statistical analysis (BSA) assists in hazard modelling. To perform this analysis, several calculations are required and the user has to transfer data from one format to another. Most researchers perform these calculations manually by using Microsoft Excel or other programs. This process is time-consuming and carries a degree of uncertainty. The lack of proper tools to implement BSA in a GIS environment prompted this study. In this paper, a user-friendly tool, bivariate statistical modeler (BSM), for BSA technique is proposed. Three popular BSA techniques, such as frequency ratio, weight-of-evidence (WoE), and evidential belief function (EBF) models, are applied in the newly proposed ArcMAP tool. This tool is programmed in Python and created by a simple graphical user interface (GUI), which facilitates the improvement of model performance. The proposed tool implements BSA automatically, thus allowing numerous variables to be examined. To validate the capability and accuracy of this program, a pilot test area in Malaysia is selected and all three models are tested by using the proposed program. Area under curve (AUC) is used to measure the success rate and prediction rate. Results demonstrate that the proposed program executes BSA with reasonable accuracy. The proposed BSA tool can be used in numerous applications, such as natural hazard, mineral potential, hydrological, and other engineering and environmental applications.

  9. Measuring the statistical validity of summary meta-analysis and meta-regression results for use in clinical practice.

    PubMed

    Willis, Brian H; Riley, Richard D

    2017-09-20

    An important question for clinicians appraising a meta-analysis is: are the findings likely to be valid in their own practice-does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity-where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple ('leave-one-out') cross-validation technique, we demonstrate how we may test meta-analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta-analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta-analysis and a tailored meta-regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within-study variance, between-study variance, study sample size, and the number of studies in the meta-analysis. Finally, we apply Vn to two published meta-analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta-analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.
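
    The sketch below illustrates the leave-one-out idea in its simplest form: each study is left out in turn, the remaining studies are pooled with a DerSimonian-Laird random-effects model, and the left-out estimate is compared with the pooled prediction. It is a conceptual illustration only, not the Vn statistic or its null distribution derived in the paper; the effect sizes and variances are hypothetical.

```python
# A conceptual sketch of 'leave-one-out' cross-validation for a random-effects
# meta-analysis: pool all studies but one and compare the left-out study with
# the pooled prediction. This is an illustration of the idea only; it is not
# the Vn statistic or its distribution derived by Willis and Riley.
import numpy as np

def dersimonian_laird(y, v):
    """Pooled estimate, its variance, and tau^2 for effects y with variances v."""
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    return mu, 1.0 / np.sum(w_re), tau2

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05, 0.38])   # hypothetical study effects
v = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03])   # their within-study variances

z = []
for i in range(len(y)):
    keep = np.arange(len(y)) != i
    mu, var_mu, tau2 = dersimonian_laird(y[keep], v[keep])
    # Standardised difference between the left-out study and its prediction.
    z.append((y[i] - mu) / np.sqrt(v[i] + var_mu + tau2))
print(np.round(z, 2))  # large |z| values flag studies the pooled estimate predicts poorly
```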

  10. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    NASA Astrophysics Data System (ADS)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances between the different methane and acetylene sources. The results from these controlled experiments demonstrate that, when the targeted and tracer gases are not well collocated, this new approach provides a better estimate of the emission rates than the tracer release technique. As an example, the relative error between the estimated and actual emission rates is reduced from 32 % with the tracer release technique to 16 % with the combined approach in the case of a tracer located 60 m upwind of a single methane source. Further studies and more complex implementations with more advanced transport models and more advanced optimisations of their configuration will be required to generalise the applicability of the approach and strengthen its robustness.
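
    A heavily simplified sketch of the underlying idea follows: a Gaussian plume forward model is first calibrated against the known tracer release rate and then used to invert the methane source rate by least squares. The dispersion coefficients, geometry and "measurements" below are hypothetical, and the full statistical inversion with transport-error covariances described in the study is not reproduced.

```python
# A highly simplified sketch: (1) use the known tracer release rate to calibrate
# a Gaussian plume model, (2) invert the methane source rate so that modelled
# concentrations match the measurements across the plume transect.
import numpy as np

def plume(q, x, y, u=3.0, a=0.08, b=0.06):
    """Ground-level concentration from a ground-level point source of rate q
    at downwind distance x and crosswind offset y (simple power-law sigmas)."""
    sig_y, sig_z = a * x**0.9, b * x**0.85
    return q / (np.pi * u * sig_y * sig_z) * np.exp(-y**2 / (2 * sig_y**2))

# Transect of measurement points across the plume (m).
x_obs, y_obs = 200.0, np.linspace(-60, 60, 25)

# 1) Calibration: the tracer is released at a known rate; the least-squares fit
#    of observed to modelled tracer concentrations gives a transport scale factor.
q_tracer = 0.5e-3                                        # kg/s (known)
c_tracer_obs = 1.3 * plume(q_tracer, x_obs, y_obs)       # pretend measurements
k = np.sum(c_tracer_obs * plume(q_tracer, x_obs, y_obs)) / \
    np.sum(plume(q_tracer, x_obs, y_obs) ** 2)

# 2) Inversion: concentrations are linear in the source rate, so the methane
#    rate follows from a one-parameter least-squares fit with the calibrated model.
c_ch4_obs = 1.3 * plume(2.0e-3, x_obs, y_obs)            # pretend true rate 2 g/s
g = k * plume(1.0, x_obs, y_obs)                         # response to a unit source
q_ch4 = np.sum(c_ch4_obs * g) / np.sum(g ** 2)
print(f"estimated CH4 rate: {q_ch4*1e3:.2f} g/s")
```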

  11. The Use of Recommended Communication Techniques by Maryland Family Physicians and Pediatricians

    PubMed Central

    Weatherspoon, Darien J.; Horowitz, Alice M.; Kleinman, Dushanka V.; Wang, Min Qi

    2015-01-01

    Background Health literacy experts and the American Medical Association have developed recommended communication techniques for healthcare providers given that effective communication has been shown to greatly improve health outcomes. The purpose of this study was to determine the number and types of communication techniques routinely used by Maryland physicians. Methods In 2010, a 30-item survey was mailed to a random sample of 1,472 Maryland family physicians and pediatricians, with 294 surveys being returned and usable. The survey contained questions about provider and practice characteristics, and 17 items related to communication techniques, including seven basic communication techniques. Physicians’ use of recommended communication techniques was analyzed using descriptive statistics, analysis of variance, and ordinary least squares regression. Results Family physicians routinely used an average of 6.6 of the 17 total techniques and 3.3 of the seven basic techniques, whereas pediatricians routinely used 6.4 and 3.2 techniques, respectively. The use of simple language was the only technique that nearly all physicians routinely utilized (Family physicians, 91%; Pediatricians, 93%). Physicians who had taken a communications course used significantly more techniques than those who had not. Physicians with a low percentage of patients on Medicaid were significantly less likely to use the recommended communication techniques compared to those providers who had high proportion of their patient population on Medicaid. Conclusions Overall, the use of recommended communication techniques was low. Additionally, many physicians were unsure of the effectiveness of several of the recommended techniques, which could suggest that physicians are unaware of valuable skills that could enhance their communication. The findings of this study suggest that communications training should be given a higher priority in the medical training process in the United States. PMID:25856371

  12. Remote sensing estimation of the total phosphorus concentration in a large lake using band combinations and regional multivariate statistical modeling techniques.

    PubMed

    Gao, Yongnian; Gao, Junfeng; Yin, Hongbin; Liu, Chuansheng; Xia, Ting; Wang, Jing; Huang, Qi

    2015-03-15

    Remote sensing has been widely used for water quality monitoring, but most of these monitoring studies have only focused on a few water quality variables, such as chlorophyll-a, turbidity, and total suspended solids, which have typically been considered optically active variables. Remote sensing presents a challenge in estimating the phosphorus concentration in water. The total phosphorus (TP) in lakes has been estimated from remotely sensed observations, primarily using the simple individual band ratio or their natural logarithm and the statistical regression method based on the field TP data and the spectral reflectance. In this study, we investigated the possibility of establishing a spatial modeling scheme to estimate the TP concentration of a large lake from multi-spectral satellite imagery using band combinations and regional multivariate statistical modeling techniques, and we tested the applicability of the spatial modeling scheme. The results showed that HJ-1A CCD multi-spectral satellite imagery can be used to estimate the TP concentration in a lake. The correlation and regression analysis showed a highly significant positive relationship between the TP concentration and certain remotely sensed combination variables. The proposed modeling scheme had a higher accuracy for the TP concentration estimation in the large lake compared with the traditional individual band ratio method and the whole-lake scale regression-modeling scheme. The TP concentration values showed a clear spatial variability and were high in western Lake Chaohu and relatively low in eastern Lake Chaohu. The northernmost portion, the northeastern coastal zone and the southeastern portion of western Lake Chaohu had the highest TP concentrations, and the other regions had the lowest TP concentration values, except for the coastal zone of eastern Lake Chaohu. These results strongly suggested that the proposed modeling scheme, i.e., the band combinations and the regional multivariate statistical modeling techniques, demonstrated advantages for estimating the TP concentration in a large lake and had a strong potential for universal application for the TP concentration estimation in large lake waters worldwide. Copyright © 2014 Elsevier Ltd. All rights reserved.
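
    The sketch below illustrates the band-combination idea: candidate predictors are built from band ratios and log-ratios and regressed against field-measured TP by ordinary least squares. The reflectances and coefficients are synthetic, so this is neither the HJ-1A CCD processing chain nor the regional scheme of the paper.

```python
# A minimal sketch of the band-combination idea: build candidate predictors from
# multispectral reflectance (band ratios, a log-ratio, a normalised difference)
# and fit a multivariate linear regression against field-measured TP.
import numpy as np

rng = np.random.default_rng(1)
n = 60
b1, b2, b3, b4 = rng.uniform(0.02, 0.20, size=(4, n))     # reflectances of 4 bands

# Candidate band combinations.
X = np.column_stack([
    b3 / b2,
    b4 / b1,
    np.log(b3 / b1),
    (b3 - b2) / (b3 + b2),
])
tp_true = 0.05 + 0.10 * X[:, 0] + 0.04 * X[:, 3]           # synthetic "field" TP (mg/L)
tp_obs = tp_true + rng.normal(scale=0.01, size=n)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(A, tp_obs, rcond=None)
tp_hat = A @ coef
r2 = 1 - np.sum((tp_obs - tp_hat) ** 2) / np.sum((tp_obs - tp_obs.mean()) ** 2)
print(np.round(coef, 3), f"R2 = {r2:.2f}")
```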

  13. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problems, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
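
    One simple numerical reading of this idea for ordinary weighted least squares is sketched below: the theoretical covariance (H^T W H)^-1 is scaled by the average weighted residual variance, so that errors not captured by the assumed observation noise inflate the reported uncertainty. This is an illustration under that assumption, not necessarily the exact formulation developed in the paper.

```python
# A small numerical sketch: form the usual theoretical covariance (H^T W H)^-1
# and scale it by the average weighted residual variance, so that unmodelled
# errors present in the residuals inflate the covariance.
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 3                                  # observations, estimated states
H = rng.normal(size=(m, n))                   # linearised measurement matrix
x_true = np.array([1.0, -0.5, 2.0])
sigma_assumed = 0.1                           # assumed measurement noise
sigma_actual = 0.3                            # actual (mismodelled) noise
y = H @ x_true + rng.normal(scale=sigma_actual, size=m)

W = np.eye(m) / sigma_assumed**2              # weights from the *assumed* errors
N = H.T @ W @ H                               # normal matrix
x_hat = np.linalg.solve(N, H.T @ W @ y)
r = y - H @ x_hat                             # measurement residuals

P_theory = np.linalg.inv(N)                   # maps only the assumed noise into state space
s2 = (r @ W @ r) / (m - n)                    # average weighted residual variance
P_empirical = s2 * P_theory                   # residual-based (empirical) covariance

print("assumed-noise sigmas :", np.sqrt(np.diag(P_theory)))
print("empirical sigmas     :", np.sqrt(np.diag(P_empirical)))
```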

  14. Simple and accurate methods for quantifying deformation, disruption, and development in biological tissues

    PubMed Central

    Boyle, John J.; Kume, Maiko; Wyczalkowski, Matthew A.; Taber, Larry A.; Pless, Robert B.; Xia, Younan; Genin, Guy M.; Thomopoulos, Stavros

    2014-01-01

    When mechanical factors underlie growth, development, disease or healing, they often function through local regions of tissue where deformation is highly concentrated. Current optical techniques to estimate deformation can lack precision and accuracy in such regions due to challenges in distinguishing a region of concentrated deformation from an error in displacement tracking. Here, we present a simple and general technique for improving the accuracy and precision of strain estimation and an associated technique for distinguishing a concentrated deformation from a tracking error. The strain estimation technique improves accuracy relative to other state-of-the-art algorithms by directly estimating strain fields without first estimating displacements, resulting in a very simple method and low computational cost. The technique for identifying local elevation of strain enables for the first time the successful identification of the onset and consequences of local strain concentrating features such as cracks and tears in a highly strained tissue. We apply these new techniques to demonstrate a novel hypothesis in prenatal wound healing. More generally, the analytical methods we have developed provide a simple tool for quantifying the appearance and magnitude of localized deformation from a series of digital images across a broad range of disciplines. PMID:25165601

  15. A Role for Chunk Formation in Statistical Learning of Second Language Syntax

    ERIC Educational Resources Information Center

    Hamrick, Phillip

    2014-01-01

    Humans are remarkably sensitive to the statistical structure of language. However, different mechanisms have been proposed to account for such statistical sensitivities. The present study compared adult learning of syntax and the ability of two models of statistical learning to simulate human performance: Simple Recurrent Networks, which learn by…

  16. Learning investment indicators through data extension

    NASA Astrophysics Data System (ADS)

    Dvořák, Marek

    2017-07-01

    Stock prices in the form of time series were analysed using univariate and multivariate statistical methods. After simple data preprocessing in the form of logarithmic differences, we augmented this univariate time series to a multivariate representation. This method makes use of sliding windows to calculate several dozen new variables using simple statistical tools, such as first and second moments, as well as more complicated statistics, such as autoregression coefficients and residual analysis, followed by an optional quadratic transformation that was further used for data extension. These were used as explanatory variables in a regularized (LASSO) logistic regression that tried to estimate the Buy-Sell Index (BSI) from real stock market data.
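
    The general recipe described here can be sketched in a few lines: take log-differences of a price series, build rolling-window summary statistics, and fit an L1-penalised (LASSO) logistic regression. The binary target below is a crude stand-in for the Buy-Sell Index, and the window length and penalty strength are arbitrary choices for illustration.

```python
# A minimal sketch of the recipe: log-differences, rolling-window summary
# statistics, and an L1-penalised logistic regression. The target is a crude
# placeholder for a Buy-Sell Index, purely for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1000))))
r = np.log(price).diff().dropna()                     # log-returns

win = 20
feats = pd.DataFrame({
    "mean": r.rolling(win).mean(),                    # first moment
    "std": r.rolling(win).std(),                      # second moment
    "skew": r.rolling(win).skew(),
    "ac1": r.rolling(win).apply(lambda w: pd.Series(w).autocorr(), raw=False),
}).dropna()

# Placeholder target: 1 if the next day's return is positive ("buy"), else 0.
target = (r.shift(-1).loc[feats.index] > 0).astype(int)
feats, target = feats.iloc[:-1], target.iloc[:-1]     # drop the last, unlabeled row

lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso_logit.fit(feats, target)
print(dict(zip(feats.columns, lasso_logit.coef_[0])))  # many coefficients shrink to 0
```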

  17. Stochastic Partial Differential Equation Solver for Hydroacoustic Modeling: Improvements to Paracousti Sound Propagation Solver

    NASA Astrophysics Data System (ADS)

    Preston, L. A.

    2017-12-01

    Marine hydrokinetic (MHK) devices offer a clean, renewable alternative energy source for the future. Responsible utilization of MHK devices, however, requires that the effects of acoustic noise produced by these devices on marine life and marine-related human activities be well understood. Paracousti is a 3-D full waveform acoustic modeling suite that can accurately propagate MHK noise signals in the complex bathymetry found in the near-shore to open ocean environment and considers real properties of the seabed, water column, and air-surface interface. However, this is a deterministic simulation that assumes the environment and source are exactly known. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected noise levels within the marine environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. One method is to use Monte Carlo (MC) techniques where simulation results from a large number of deterministic solutions are aggregated to provide statistical properties of the output signal. However, MC methods can be computationally prohibitive since they can require tens of thousands or more simulations to build up an accurate representation of those statistical properties. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a small fraction of the computational cost of MC. We are developing a SPDE solver for the 3-D acoustic wave propagation problem called Paracousti-UQ to help regulators and operators assess the statistical properties of environmental noise produced by MHK devices. In this presentation, we present the SPDE method and compare statistical distributions of simulated acoustic signals in simple models to MC simulations to show the accuracy and efficiency of the SPDE method. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc. for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-NA0003525.

  18. Arthroscopic Medial Meniscus Posterior Root Fixation Using a Modified Mason-Allen Stitch.

    PubMed

    Chung, Kyu Sung; Ha, Jeong Ku; Ra, Ho Jong; Kim, Jin Goo

    2016-02-01

    A complete radial tear of the meniscus posterior root, which can effectively cause a state of total meniscectomy via loss of hoop tension, requires that the torn root be repaired. Several methods have been used to repair medial meniscus posterior root tears, most of which are based on a simple stitch technique that is known to have limited stitch-holding strength. We applied a modified version of the Mason-Allen stitch technique, which is recognized as a method for rotator cuff repair surgery because its locking effect overcomes the potential weakness of simple stitches. This article introduces a medial meniscus posterior root tear repair procedure based on a modified Mason-Allen stitch technique in which 2 strands (i.e., 1 simple horizontal and 1 simple vertical stitch) are used.

  19. Robust Combining of Disparate Classifiers Through Order Statistics

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2001-01-01

    Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public domain data sets corroborate these findings.
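
    The sketch below shows what these combiners look like in practice for the posterior outputs of a handful of classifiers on a single example: the median, the maximum, a general i-th order statistic, and a trimmed mean of the ordered outputs. The numbers are hypothetical, with one deliberately discrepant classifier to illustrate the robustness argument.

```python
# A minimal sketch of order-statistic combiners applied to class-posterior
# outputs from several classifiers for one example. The numbers are hypothetical.
import numpy as np

# Rows: 5 classifiers; columns: posterior for each of 3 classes (one example).
posteriors = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.10, 0.80, 0.10],   # one poorly performing, discrepant classifier
    [0.65, 0.25, 0.10],
    [0.55, 0.35, 0.10],
])

ordered = np.sort(posteriors, axis=0)        # order statistics per class

combiners = {
    "mean": posteriors.mean(axis=0),
    "median": np.median(posteriors, axis=0),
    "max": ordered[-1],
    "2nd largest (i-th order stat)": ordered[-2],
    "trimmed mean (drop min & max)": ordered[1:-1].mean(axis=0),
}
for name, scores in combiners.items():
    print(f"{name:30s} -> class {scores.argmax()} {np.round(scores, 2)}")
```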

  20. Counting statistics for genetic switches based on effective interaction approximation

    NASA Astrophysics Data System (ADS)

    Ohkubo, Jun

    2012-09-01

    The applicability of counting statistics to a system with an infinite number of states is investigated. Counting statistics has been studied extensively for systems with a finite number of states. While the scheme can in principle be used to count specific transitions in a system with an infinite number of states, the resulting equations are in general not closed. A simple genetic switch can be described by a master equation with an infinite number of states, and we use counting statistics to count the number of transitions from inactive to active states in the gene. To avoid the non-closed equations, an effective interaction approximation is employed. As a result, it is shown that the switching problem can be treated approximately as a simple two-state model, which immediately indicates that the switching obeys non-Poisson statistics.
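
    To make the counting idea tangible, the sketch below runs a Gillespie simulation of the approximate two-state (telegraph) picture and counts inactive-to-active transitions in a fixed window; a Fano factor of the counts different from 1 signals non-Poisson switching. The rates are hypothetical, and the effective interaction approximation itself is not implemented.

```python
# A small sketch of the counting idea on the approximate two-state picture:
# simulate a telegraph (inactive/active) switch with the Gillespie algorithm
# and count inactive -> active transitions in a fixed observation window.
import numpy as np

rng = np.random.default_rng(4)
k_on, k_off = 0.2, 1.0         # hypothetical switching rates (1/time)
T = 500.0                      # observation window

def count_activations():
    t, state, n = 0.0, 0, 0    # state 0 = inactive, 1 = active
    while True:
        rate = k_on if state == 0 else k_off
        t += rng.exponential(1.0 / rate)
        if t > T:
            return n
        if state == 0:
            n += 1             # count an inactive -> active transition
        state = 1 - state

counts = np.array([count_activations() for _ in range(2000)])
fano = counts.var() / counts.mean()
print(f"mean = {counts.mean():.1f}, Fano factor = {fano:.2f} (Poisson would give 1)")
```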

  1. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  2. Distinguishing Positive Selection From Neutral Evolution: Boosting the Performance of Summary Statistics

    PubMed Central

    Lin, Kao; Li, Haipeng; Schlötterer, Christian; Futschik, Andreas

    2011-01-01

    Summary statistics are widely used in population genetics, but they suffer from the drawback that no simple sufficient summary statistic exists, which captures all information required to distinguish different evolutionary hypotheses. Here, we apply boosting, a recent statistical method that combines simple classification rules to maximize their joint predictive performance. We show that our implementation of boosting has a high power to detect selective sweeps. Demographic events, such as bottlenecks, do not result in a large excess of false positives. A comparison with other neutrality tests shows that our boosting implementation performs well. Furthermore, we evaluated the relative contribution of different summary statistics to the identification of selection and found that for recent sweeps, integrated haplotype homozygosity is very informative, whereas older sweeps are better detected by Tajima's π. Overall, Watterson's θ was found to contribute the most information for distinguishing between bottlenecks and selection. PMID:21041556
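
    The sketch below illustrates the general approach with an off-the-shelf boosting classifier applied to a table of per-locus summary statistics. The feature values are synthetic placeholders for statistics such as Tajima's D, Watterson's θ and integrated haplotype homozygosity, and scikit-learn's gradient boosting stands in for the authors' boosting implementation.

```python
# A minimal sketch: combine several per-locus summary statistics with a boosting
# classifier to separate selective sweeps from neutrality. The feature table is
# a synthetic placeholder; this is not the authors' boosting procedure.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 500
neutral = np.column_stack([rng.normal(0.0, 1.0, n),    # "Tajima's D"-like statistic
                           rng.normal(1.0, 0.3, n),    # "Watterson's theta"-like
                           rng.normal(0.0, 1.0, n)])   # "integrated haplotype homozygosity"-like
sweep = np.column_stack([rng.normal(-1.5, 1.0, n),
                         rng.normal(0.6, 0.3, n),
                         rng.normal(2.0, 1.0, n)])
X = np.vstack([neutral, sweep])
y = np.r_[np.zeros(n), np.ones(n)]                     # 0 = neutral, 1 = sweep

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
print("relative feature contributions:", np.round(clf.feature_importances_, 2))
```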

  3. A randomized comparison of three chest compression techniques and associated hemodynamic effect during infant CPR: A randomized manikin study.

    PubMed

    Smereka, Jacek; Szarpak, Lukasz; Rodríguez-Núñez, Antonio; Ladny, Jerzy R; Leung, Steve; Ruetzler, Kurt

    2017-10-01

    Pediatric cardiac arrest is an uncommon but critical life-threatening event requiring effective cardiopulmonary resuscitation. High-quality cardiopulmonary resuscitation (CPR) is essential, but is poorly performed, even by highly skilled healthcare providers. The recently described new two-thumb chest compression technique (nTTT) consists of the two thumbs directed at an angle of 90° to the chest while the fingers are fist-clenched. This technique might facilitate adequate chest-compression depth, chest-compression rate and rate of full chest-pressure relief. Forty-two paramedics from the national Emergency Medical Service of Poland performed three single-rescuer CPR sessions of 10 minutes each. Each session was randomly assigned to the conventional two-thumb technique (TTHT), the conventional two-finger technique (TFT) or the nTTT. The manikin used for this study was connected to an arterial blood pressure measurement device, and blood pressure measurements were documented on a 10-second cycle. The nTTT provided significantly higher systolic pressure (82 vs. 30 vs. 41 mmHg). A statistically significant difference was noticed between nTTT and TFT (p<0.001), nTTT and TTHT (p<0.001), and TFT and TTHT (p=0.003). The median diastolic pressure using nTTT was 16 mmHg, compared with 9 mmHg for TFT (p<0.001) and 9.5 mmHg for TTHT (p<0.001). Mean arterial pressure for the distinct methods was 40 vs. 22 vs. 26 mmHg (nTTT vs. TFT vs. TTHT, respectively). A statistically significant difference was noticed between nTTT and TFT (p<0.001), nTTT and TTHT (p<0.001), and TFT and TTHT (p<0.001). The highest median pulse pressure was obtained with the nTTT (67.5 mmHg); pulse pressure was 31.5 mmHg with the TTHT and 24 mmHg with the TFT. The differences between TFT and TTHT (p=0.025), TFT and nTTT (p<0.001), and TTHT and nTTT (p<0.001) were statistically significant. The new nTTT technique generated higher arterial blood pressures compared with established chest compression techniques in an infant manikin model, suggesting more effective chest compression. Our results have important clinical implications, as the nTTT was simple to perform and could be widely taught to both healthcare professionals and bystanders. Whether this technique translates to improved outcomes over existing techniques needs further animal studies and subsequent human trials. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. Simple statistical bias correction techniques greatly improve moderate resolution air quality forecast at station level

    NASA Astrophysics Data System (ADS)

    Curci, Gabriele; Falasca, Serena

    2017-04-01

    Deterministic air quality forecast is routinely carried out at many local Environmental Agencies in Europe and throughout the world by means of eulerian chemistry-transport models. The skill of these models in predicting the ground-level concentrations of relevant pollutants (ozone, nitrogen dioxide, particulate matter) a few days ahead has greatly improved in recent years, but it is not yet always compliant with the required quality level for decision making (e.g. the European Commission has set a maximum uncertainty of 50% on daily values of relevant pollutants). Post-processing of deterministic model output is thus still regarded as a useful tool to make the forecast more reliable. In this work, we test several bias correction techniques applied to a long-term dataset of air quality forecasts over Europe and Italy. We used the WRF-CHIMERE modelling system, which provides operational experimental chemical weather forecast at CETEMPS (http://pumpkin.aquila.infn.it/forechem/), to simulate the years 2008-2012 at low resolution over Europe (0.5° x 0.5°) and moderate resolution over Italy (0.15° x 0.15°). We compared the simulated dataset with available observation from the European Environmental Agency database (AirBase) and characterized model skill and compliance with EU legislation using the Delta tool from FAIRMODE project (http://fairmode.jrc.ec.europa.eu/). The bias correction techniques adopted are, in order of complexity: (1) application of multiplicative factors calculated as the ratio of model-to-observed concentrations averaged over the previous days; (2) correction of the statistical distribution of model forecasts, in order to make it similar to that of the observations; (3) development and application of Model Output Statistics (MOS) regression equations. We illustrate differences and advantages/disadvantages of the three approaches. All the methods are relatively easy to implement for other modelling systems.
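
    The two simpler correction methods can be sketched directly; the third (MOS regression) is a standard regression of observations on forecast predictors and is omitted here. The data below are synthetic placeholders, not WRF-CHIMERE or AirBase output, and quantile mapping is used as one common way of matching the forecast distribution to the observed one.

```python
# Minimal sketches of the two simpler correction methods described: (1) scale
# today's forecast by the ratio of mean observed to mean forecast values over
# the previous k days, and (2) map the forecast distribution onto the observed
# one (quantile mapping). Data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(6)
obs = rng.gamma(shape=4.0, scale=10.0, size=365)       # "observed" daily PM10
fcst = 0.7 * obs + rng.normal(0, 5, size=365)          # biased, noisy "forecast"

# (1) Multiplicative factor from the previous k days.
k = 7
factor = obs[-k:].mean() / fcst[-k:].mean()
new_forecast = 31.0                                    # today's raw forecast value
print("ratio-corrected forecast:", round(factor * new_forecast, 1))

# (2) Quantile mapping: replace each forecast value by the observed value that
#     has the same empirical quantile.
def quantile_map(x, fcst_clim, obs_clim):
    q = np.searchsorted(np.sort(fcst_clim), x) / len(fcst_clim)
    return np.quantile(obs_clim, np.clip(q, 0, 1))

print("quantile-mapped forecast:", round(quantile_map(new_forecast, fcst, obs), 1))
```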

  5. Midterm clinical outcomes following arthroscopic transosseous rotator cuff repair

    PubMed Central

    Flanagin, Brody A.; Garofalo, Raffaele; Lo, Eddie Y.; Feher, LeeAnne; Castagna, Alessandro; Qin, Huanying; Krishnan, Sumant G.

    2016-01-01

    Purpose: Arthroscopic transosseous (TO) rotator cuff repair has recently emerged as a new option for surgical treatment of symptomatic rotator cuff tears. Limited data is available regarding outcomes using this technique. This study evaluated midterm clinical outcomes following a novel arthroscopic TO (anchorless) rotator cuff repair technique. Materials and Methods: A consecutive series of 107 patients and 109 shoulders underwent arthroscopic TO (anchorless) rotator cuff repair for a symptomatic full-thickness tear. Pre and postoperative range of motion (ROM) was compared at an average of 11.8 months. Postoperative outcome scores were obtained at an average of 38.0 months. Statistical analysis was performed to compare pre and postoperative ROM data. Univariate analysis was performed using Student's t-test to compare the effect of other clinical characteristics on final outcome. Results: Statistically significant improvements were noted in forward flexion, external rotation and internal rotation (P < 0.0001). Average postoperative subjective shoulder value was 93.7, simple shoulder test 11.6, and American Shoulder and Elbow Surgeons (ASES) score 94.6. According to ASES scores, results for the 109 shoulders available for final follow-up were excellent in 95 (87.1%), good in 8 (7.3%), fair in 3 (2.8%), and poor in 3 (2.8%). There was no difference in ROM or outcome scores in patients who underwent a concomitant biceps procedure (tenodesis or tenotomy) compared with those who did not. Furthermore, there was no significant difference in outcome between patients who underwent either biceps tenodesis or tenotomy. Age, history of injury preceding the onset of pain, tear size, number of TO tunnels required to perform the repair, and presence of fatty infiltration did not correlate with postoperative ROM or subjective outcome measures at final follow-up. Two complications and four failures were noted. Conclusions: Arthroscopic TO rotator cuff repair technique leads to statistically significant midterm improvement in ROM and satisfactory midterm subjective outcome scores with low complication/failure rates in patients with average medium-sized rotator cuff tears with minimal fatty infiltration. Further work is required to evaluate radiographic healing rates with this technique and to compare outcomes following suture anchor repair. Level of Evidence: Level IV PMID:26980983

  6. Analysing attitude data through ridit schemes.

    PubMed

    El-rouby, M G

    1994-12-02

    The attitudes of individuals and populations on various issues are usually assessed through sample surveys. Responses to survey questions are then scaled and combined into a meaningful whole that defines the measured attitude. The applied scales may be of nominal, ordinal, interval, or ratio nature depending upon the degree of sophistication the researcher wants to introduce into the measurement. This paper discusses methods of analysis for categorical variables of the type used in attitude and human behavior research, and recommends adoption of ridit analysis, a technique which has been successfully applied to epidemiological, clinical investigation, laboratory, and microbiological data. The ridit methodology is described after reviewing some general attitude scaling methods and problems of analysis related to them. The ridit method is then applied to a recent study conducted to assess health care service quality in North Carolina. This technique is conceptually and computationally simpler than other conventional statistical methods, and is also distribution-free. Basic requirements and limitations on its use are indicated.
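
    The core calculation is short enough to sketch: ridits are derived from a reference group's distribution over ordered categories, and the mean ridit of a comparison group estimates the probability that a random member of that group scores higher than a random member of the reference group (0.5 indicating no difference). The counts below are hypothetical.

```python
# A minimal sketch of ridit analysis with hypothetical counts over ordered
# response categories (e.g. "very poor" ... "excellent").
import numpy as np

reference = np.array([10, 25, 40, 20, 5], dtype=float)   # reference-group counts
comparison = np.array([4, 12, 30, 35, 19], dtype=float)  # comparison-group counts

p = reference / reference.sum()
# Ridit of category j = cumulative proportion below j + half the proportion in j.
ridits = np.concatenate([[0.0], np.cumsum(p)[:-1]]) + p / 2

mean_ridit = np.sum(ridits * comparison / comparison.sum())
print(np.round(ridits, 3), "mean ridit of comparison group:", round(mean_ridit, 3))
```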

  7. Confidence of compliance: a Bayesian approach for percentile standards.

    PubMed

    McBride, G B; Ellis, J C

    2001-04-01

    Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's risk and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
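
    The calculation behind such a graph can be sketched in a few lines: with a Beta prior and a binomial likelihood for the observed exceedances, the posterior probability that the true exceedance proportion meets the standard is read directly off the conjugate Beta posterior. The sample numbers below are hypothetical; both the Jeffreys and uniform reference priors mentioned in the abstract are shown.

```python
# A minimal sketch of the beta-binomial "confidence of compliance" idea: with a
# Beta prior on the true exceedance proportion and a binomial likelihood for the
# observed exceedances, compute the posterior probability that the proportion is
# below the allowed level (5% for a 95th-percentile standard).
from scipy.stats import beta

n, x = 60, 2            # samples taken, exceedances observed (hypothetical)
allowed = 0.05          # a 95th-percentile standard allows 5% exceedances

priors = {"Jeffreys": (0.5, 0.5), "uniform": (1.0, 1.0)}
for name, (a, b) in priors.items():
    post = beta(a + x, b + n - x)              # conjugate posterior
    print(f"{name:8s} prior: P(exceedance rate <= {allowed}) = {post.cdf(allowed):.3f}")
```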

  8. Insufficient Knowledge of Breast Cancer Risk Factors Among Malaysian Female University Students

    PubMed Central

    Samah, Asnarulkhadi Abu; Ahmadian, Maryam; Latiff, Latiffah A.

    2016-01-01

    Background: Despite continuing argument about the efficacy of breast self-examination, it could still be a life-saving technique by inspiring and empowering women to take better control over their body/breast and health. This study investigated Malaysian female university students' knowledge about breast cancer risk factors, signs, and symptoms and assessed breast self-examination frequency among students. Method: A cross-sectional survey was conducted in 2013 in nine public and private universities in the Klang Valley and Selangor. A total of 842 female students completed the self-administered survey. Simple descriptive and inferential statistics were employed for data analysis. Results: The uptake of breast self-examination (BSE) was less than 50% among the students. Most students had insufficient knowledge of several breast cancer risk factors. Conclusion: Efforts should be made to increase knowledge of breast cancer through the development of ethnically and traditionally sensitive educational training on BSE and breast cancer literacy. PMID:26234996

  9. Analysis of polymeric phenolics in red wines using different techniques combined with gel permeation chromatography fractionation.

    PubMed

    Guadalupe, Zenaida; Soldevilla, Alberto; Sáenz-Navajas, María-Pilar; Ayestarán, Belén

    2006-04-21

    A multiple-step analytical method was developed to improve the analysis of polymeric phenolics in red wines. With a common initial step based on the fractionation of wine phenolics by gel permeation chromatography (GPC), different analytical techniques were used: high-performance liquid chromatography-diode array detection (HPLC-DAD), HPLC-mass spectrometry (MS), capillary zone electrophoresis (CZE) and spectrophotometry. This method proved to be valid for analyzing different families of phenolic compounds, such as monomeric phenolics and their derivatives, polymeric pigments and proanthocyanidins. The analytical characteristics of fractionation by GPC were studied and the method was fully validated, yielding satisfactory statistical results. GPC fractionation substantially improved the analysis of polymeric pigments by CZE, in terms of response, repeatability and reproducibility. It also represented an improvement in the traditional vanillin assay used for proanthocyanidin (PA) quantification. Astringent proanthocyanidins were also analyzed using a simple combined method that allowed these compounds, for which only general indexes were available, to be quantified.

  10. A 4-Dimensional Representation of Antennal Lobe Output Based on an Ensemble of Characterized Projection Neurons

    PubMed Central

    Staudacher, Erich M.; Huetteroth, Wolf; Schachtner, Joachim; Daly, Kevin C.

    2009-01-01

    A central problem facing studies of neural encoding in sensory systems is how to accurately quantify the extent of spatial and temporal responses. In this study, we take advantage of the relatively simple and stereotypic neural architecture found in invertebrates. We combine standard electrophysiological techniques, recently developed population analysis techniques, and novel anatomical methods to form an innovative 4-dimensional view of odor output representations in the antennal lobe of the moth Manduca sexta. This novel approach allows quantification of olfactory responses of characterized neurons with spike time resolution. Additionally, arbitrary integration windows can be used for comparisons with other methods such as imaging. By assigning statistical significance to changes in neuronal firing, this method can visualize activity across the entire antennal lobe. The resulting 4-dimensional representation of antennal lobe output complements imaging and multi-unit experiments yet provides a more comprehensive and accurate view of glomerular activation patterns in spike time resolution. PMID:19464513

  11. Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems

    NASA Astrophysics Data System (ADS)

    Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B.

    2018-05-01

    Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimize the performance of such systems. Previous studies already proved that the Gemini Multi-Conjugated AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provided excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can be considered, therefore, as an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to whatever present or new-generation facilities are supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance.

  12. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tokovinin, Andrei, E-mail: atokovinin@ctio.noao.edu

    Radial velocity (RV) monitoring of solar-type visual binaries has been conducted at the CTIO/SMARTS 1.5 m telescope to study short-period systems. The data reduction is described, and mean and individual RVs of 163 observed objects are given. New spectroscopic binaries are discovered or suspected in 17 objects, and for some of them the orbital periods could be determined. Subsystems are efficiently detected even in a single observation by double lines and/or by the RV difference between the components of visual binaries. The potential of this detection technique is quantified by simulation and used for statistical assessment of 96 wide binaries within 67 pc. It is found that 43 binaries contain at least one subsystem, and the occurrence of subsystems is equally probable in either primary or secondary components. The frequency of subsystems and their periods matches the simple prescription proposed by the author. The remaining 53 simple wide binaries with a median projected separation of 1300 AU have an RV difference distribution between their components that is not compatible with the thermal eccentricity distribution f(e) = 2e but rather matches the uniform eccentricity distribution.

  14. Temporal and spatial intermittencies within Newtonian turbulence

    NASA Astrophysics Data System (ADS)

    Kushwaha, Anubhav; Graham, Michael

    2015-11-01

    Direct numerical simulations of a pressure-driven turbulent flow are performed in a large rectangular channel. Intermittent high- and low-drag regimes within turbulence that have earlier been found to exist temporally in minimal channels have been observed both spatially and temporally in full-size turbulent flows. These intermittent regimes, namely "active" and "hibernating" turbulence, display very different structural and statistical features. We adopt a very simple sampling technique to identify these intermittent intervals, both temporally and spatially, and present differences between them in terms of simple quantities like mean velocity, wall shear stress and flow structures. By conditionally sampling the low wall-shear-stress events in particular, we show that the Maximum Drag Reduction (MDR) velocity profile, which occurs in viscoelastic flows, can also be approached in a Newtonian-fluid flow in the absence of any additives. This suggests that the properties of polymer drag reduction are inherent to all flows and their occurrence is just enhanced by the addition of polymers. We also show how the intermittencies within turbulence vary with Reynolds number. The work was supported by AFOSR grant FA9550-15-1-0062.

  15. A Simple Model of Pulsed Ejector Thrust Augmentation

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Deloof, Richard L. (Technical Monitor)

    2003-01-01

    A simple model of thrust augmentation from a pulsed source is described. In the model it is assumed that the flow into the ejector is quasi-steady, and can be calculated using potential flow techniques. The velocity of the flow is related to the speed of the starting vortex ring formed by the jet. The vortex ring properties are obtained from the slug model, knowing the jet diameter, speed and slug length. The model, when combined with experimental results, predicts an optimum ejector radius for thrust augmentation. Data on pulsed ejector performance for comparison with the model was obtained using a shrouded Hartmann-Sprenger tube as the pulsed jet source. A statistical experiment, in which ejector length, diameter, and nose radius were independent parameters, was performed at four different frequencies. These frequencies corresponded to four different slug length to diameter ratios, two below cut-off, and two above. Comparison of the model with the experimental data showed reasonable agreement. Maximum pulsed thrust augmentation is shown to occur for a pulsed source with slug length to diameter ratio equal to the cut-off value.

  16. Intellicount: High-Throughput Quantification of Fluorescent Synaptic Protein Puncta by Machine Learning

    PubMed Central

    Fantuzzo, J. A.; Mirabella, V. R.; Zahn, J. D.

    2017-01-01

    Abstract Synapse formation analyses can be performed by imaging and quantifying fluorescent signals of synaptic markers. Traditionally, these analyses are done using simple or multiple thresholding and segmentation approaches or by labor-intensive manual analysis by a human observer. Here, we describe Intellicount, a high-throughput, fully-automated synapse quantification program which applies a novel machine learning (ML)-based image processing algorithm to systematically improve region of interest (ROI) identification over simple thresholding techniques. Through processing large datasets from both human and mouse neurons, we demonstrate that this approach allows image processing to proceed independently of carefully set thresholds, thus reducing the need for human intervention. As a result, this method can efficiently and accurately process large image datasets with minimal interaction by the experimenter, making it less prone to bias and less liable to human error. Furthermore, Intellicount is integrated into an intuitive graphical user interface (GUI) that provides a set of valuable features, including automated and multifunctional figure generation, routine statistical analyses, and the ability to run full datasets through nested folders, greatly expediting the data analysis process. PMID:29218324

  17. A simple linear regression method for quantitative trait loci linkage analysis with censored observations.

    PubMed

    Anderson, Carl A; McRae, Allan F; Visscher, Peter M

    2006-07-01

    Standard quantitative trait loci (QTL) mapping techniques commonly assume that the trait is both fully observed and normally distributed. When considering survival or age-at-onset traits these assumptions are often incorrect. Methods have been developed to map QTL for survival traits; however, they are both computationally intensive and not available in standard genome analysis software packages. We propose a grouped linear regression method for the analysis of continuous survival data. Using simulation we compare this method to both the Cox and Weibull proportional hazards models and a standard linear regression method that ignores censoring. The grouped linear regression method is of equivalent power to both the Cox and Weibull proportional hazards methods and is significantly better than the standard linear regression method when censored observations are present. The method is also robust to the proportion of censored individuals and the underlying distribution of the trait. On the basis of linear regression methodology, the grouped linear regression model is computationally simple and fast and can be implemented readily in freely available statistical software.

  18. STAR (Simple Targeted Arterial Rendering) Technique: a Novel and Simple Method to Visualize the Fetal Cardiac Outflow Tracts

    PubMed Central

    Yeo, Lami; Romero, Roberto; Jodicke, Cristiano; Kim, Sun Kwon; Gonzalez, Juan M.; Oggè, Giovanna; Lee, Wesley; Kusanovic, Juan Pedro; Vaisbuch, Edi; Hassan, Sonia S.

    2010-01-01

    Objective To describe a novel and simple technique (STAR: Simple Targeted Arterial Rendering) to visualize the fetal cardiac outflow tracts from dataset volumes obtained with spatiotemporal image correlation (STIC) and applying a new display technology (OmniView). Methods We developed a technique to image the outflow tracts by drawing three dissecting lines through the four-chamber view of the heart contained in a STIC volume dataset. Each line generated the following plane: 1) Line 1: ventricular septum “en face” with both great vessels (pulmonary artery anterior to the aorta); 2) Line 2: pulmonary artery with continuation into the longitudinal view of the ductal arch; and 3) Line 3: long axis view of the aorta arising from the left ventricle. The pattern formed by all 3 lines intersecting approximately through the crux of the heart resembles a “star”. The technique was then tested in 50 normal hearts (15.3 – 40.4 weeks of gestation). To determine if the technique could identify planes that departed from the normal images, we tested the technique in 4 cases with proven congenital heart defects (ventricular septal defect, transposition of great vessels, tetralogy of Fallot, and pulmonary atresia with intact ventricular septum). Results The STAR technique was able to generate the intended planes in all 50 normal cases. In the abnormal cases, the STAR technique allowed identification of the ventricular septal defect, demonstrated great vessel anomalies, and displayed views that deviated from what was expected from the examination of normal hearts. Conclusions This novel and simple technique can be used to visualize the outflow tracts and ventricular septum “en face” in normal fetal hearts. The inability to obtain expected views or the appearance of abnormal views in the generated planes should raise the index of suspicion for congenital heart disease involving the great vessels and/or the ventricular septum. The STAR technique may simplify examination of the fetal heart and could reduce operator dependency. PMID:20878672

  19. Are Statisticians Cold-Blooded Bosses? A New Perspective on the "Old" Concept of Statistical Population

    ERIC Educational Resources Information Center

    Lu, Yonggang; Henning, Kevin S. S.

    2013-01-01

    Spurred by recent writings regarding statistical pragmatism, we propose a simple, practical approach to introducing students to a new style of statistical thinking that models nature through the lens of data-generating processes, not populations. (Contains 5 figures.)

  20. A Simple Ultrasonic Experiment Using a Phase Shift Detection Technique.

    ERIC Educational Resources Information Center

    Yunus, W. Mahmood Mat; Ahmad, Maulana

    1996-01-01

    Describes a simple ultrasonic experiment that can be used to measure the purity of liquid samples by detecting variations in the velocity of sound. Uses a phase shift detection technique that incorporates the use of logic gates and a piezoelectric transducer. (JRH)

  1. How-to-Do-It: A Simple DNA Isolation Technique Using Halophilic Bacteria.

    ERIC Educational Resources Information Center

    Guilfoile, Patrick

    1989-01-01

    Described is a simple technique for isolating DNA from halophilic bacteria. Materials, procedure, and additional experiments are outlined. It is stated that the DNA obtained will be somewhat contaminated with cellular proteins and RNA. Offers a procedure for greater purification. (RT)

  2. Polarizability of KC60: Evidence for Potassium Skating on the C60 Surface

    NASA Astrophysics Data System (ADS)

    Rayane, D.; Antoine, R.; Dugourd, Ph.; Benichou, E.; Allouche, A. R.; Aubert-Frécon, M.; Broyer, M.

    2000-02-01

    We present the first measurement of the polarizability and the permanent dipole moment of isolated KC60 molecules by the molecular beam deflection technique. We have obtained a value of 2506 ± 250 Å³ for the polarizability at room temperature. The addition of a potassium atom enhances the polarizability of a pure C60 molecule by more than a factor of 20. This very high polarizability and the lack of an observed permanent dipole show that the apparent polarizability of KC60 is induced by the free skating of the potassium atom on the C60 surface, resulting in a statistical orientation of the dipole. The results are interpreted with a simple model similar to the Langevin theory for paramagnetic systems.

  3. Geostatistics applied to gas reservoirs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meunier, G.; Coulomb, C.; Laille, J.P.

    1989-09-01

    The spatial distribution of many of the physical parameters connected with a gas reservoir is of primary interest to both engineers and geologists throughout the study, development, and operation of a field. It is therefore desirable for the distribution to be capable of statistical interpretation, to have a simple graphical representation, and to allow data to be entered from either two- or three-dimensional grids. To satisfy these needs while dealing with the geographical variables, new methods have been developed under the name geostatistics. This paper describes briefly the theory of geostatistics and its most recent improvements for the specific problem of subsurface description. The external-drift technique has been emphasized in particular, and in addition, four case studies related to gas reservoirs are presented.

  4. Testing averaged cosmology with type Ia supernovae and BAO data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos, B.; Alcaniz, J.S.; Coley, A.A.

    An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.

  5. Data survey on the effect of product features on competitive advantage of selected firms in Nigeria.

    PubMed

    Olokundun, Maxwell; Iyiola, Oladele; Ibidunni, Stephen; Falola, Hezekiah; Salau, Odunayo; Amaihian, Augusta; Peter, Fred; Borishade, Taiye

    2018-06-01

    The main objective of this study was to present a data article that investigates the effect of product features on a firm's competitive advantage. Few studies have examined how the features of a product could help drive the competitive advantage of a firm. A descriptive research method was used. The Statistical Package for the Social Sciences (SPSS 22) was used to analyze one hundred and fifty (150) valid questionnaires completed by small business owners registered under the Small and Medium Enterprises Development Agency of Nigeria (SMEDAN). Stratified and simple random sampling techniques were employed; reliability and validity procedures were also confirmed. The field data set is made publicly available to enable critical or extended analysis.

  6. Inferring molecular interactions pathways from eQTL data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rashid, Imran; McDermott, Jason E.; Samudrala, Ram

    Analysis of expression quantitative trait loci (eQTL) helps elucidate the connection between genotype, gene expression levels, and phenotype. However, standard statistical genetics can only attribute changes in expression levels to loci on the genome, not specific genes. Each locus can contain many genes, making it very difficult to discover which gene is controlling the expression levels of other genes. Furthermore, it is even more difficult to find a pathway of molecular interactions responsible for controlling the expression levels. Here we describe a series of techniques for finding explanatory pathways by exploring graphs of molecular interactions. We show that several simple methods can find complete pathways that explain the mechanism of differential expression in eQTL data.

  7. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…

  8. Modeling the Development of Audiovisual Cue Integration in Speech Perception

    PubMed Central

    Getz, Laura M.; Nordeen, Elke R.; Vrabic, Sarah C.; Toscano, Joseph C.

    2017-01-01

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues. PMID:28335558
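
    As a rough, self-contained sketch of the distributional-learning idea described above (not the authors' simulations; the cue values, category structure, and library choice are illustrative assumptions), a two-component Gaussian mixture can be fit to bivariate auditory/visual cue data and queried for posterior category probabilities:

        # Hypothetical sketch: fit a two-category Gaussian mixture to bivariate
        # (auditory cue, visual cue) measurements and read off the learned
        # category statistics, in the spirit of the GMM simulations described above.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        # Synthetic cue data for two phonological categories (illustrative values).
        cat_a = rng.normal(loc=[-1.0, -0.8], scale=0.4, size=(500, 2))
        cat_b = rng.normal(loc=[1.0, 0.9], scale=0.4, size=(500, 2))
        X = np.vstack([cat_a, cat_b])

        gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
        gmm.fit(X)

        print("mixing weights:", gmm.weights_)
        print("category means (auditory, visual):\n", gmm.means_)
        # Posterior category probabilities for a mismatched (McGurk-like) token.
        print(gmm.predict_proba([[1.0, -0.8]]))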

  9. Modeling the Development of Audiovisual Cue Integration in Speech Perception.

    PubMed

    Getz, Laura M; Nordeen, Elke R; Vrabic, Sarah C; Toscano, Joseph C

    2017-03-21

    Adult speech perception is generally enhanced when information is provided from multiple modalities. In contrast, infants do not appear to benefit from combining auditory and visual speech information early in development. This is true despite the fact that both modalities are important to speech comprehension even at early stages of language acquisition. How then do listeners learn how to process auditory and visual information as part of a unified signal? In the auditory domain, statistical learning processes provide an excellent mechanism for acquiring phonological categories. Is this also true for the more complex problem of acquiring audiovisual correspondences, which require the learner to integrate information from multiple modalities? In this paper, we present simulations using Gaussian mixture models (GMMs) that learn cue weights and combine cues on the basis of their distributional statistics. First, we simulate the developmental process of acquiring phonological categories from auditory and visual cues, asking whether simple statistical learning approaches are sufficient for learning multi-modal representations. Second, we use this time course information to explain audiovisual speech perception in adult perceivers, including cases where auditory and visual input are mismatched. Overall, we find that domain-general statistical learning techniques allow us to model the developmental trajectory of audiovisual cue integration in speech, and in turn, allow us to better understand the mechanisms that give rise to unified percepts based on multiple cues.

  10. Temperature, Not Fine Particulate Matter (PM2.5), is Causally Associated with Short-Term Acute Daily Mortality Rates: Results from One Hundred United States Cities

    PubMed Central

    Cox, Tony; Popken, Douglas; Ricci, Paolo F

    2013-01-01

    Exposures to fine particulate matter (PM2.5) in air (C) have been suspected of contributing causally to increased acute (e.g., same-day or next-day) human mortality rates (R). We tested this causal hypothesis in 100 United States cities using the publicly available NMMAPS database. Although a significant, approximately linear, statistical C-R association exists in simple statistical models, closer analysis suggests that it is not causal. Surprisingly, conditioning on other variables that have been extensively considered in previous analyses (usually using splines or other smoothers to approximate their effects), such as month of the year and mean daily temperature, suggests that they create strong, nonlinear confounding that explains the statistical association between PM2.5 and mortality rates in this data set. As this finding disagrees with conventional wisdom, we apply several different techniques to examine it. Conditional independence tests for potential causation, non-parametric classification tree analysis, Bayesian Model Averaging (BMA), and Granger-Sims causality testing, show no evidence that PM2.5 concentrations have any causal impact on increasing mortality rates. This apparent absence of a causal C-R relation, despite their statistical association, has potentially important implications for managing and communicating the uncertain health risks associated with, but not necessarily caused by, PM2.5 exposures. PMID:23983662
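
    A minimal sketch of one of the tests named above, Granger-Sims causality testing, using statsmodels on synthetic daily series (the data-generating numbers below are invented stand-ins, not the NMMAPS variables):

        # Illustrative sketch (not the authors' code): test whether lagged PM2.5
        # adds predictive information about daily mortality, using synthetic data.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(1)
        n = 365
        temperature = 15 + 10 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
        pm25 = 10 + 0.3 * temperature + rng.normal(0, 3, n)
        mortality = 50 - 0.8 * temperature + rng.normal(0, 5, n)  # driven by temperature only

        # Column order matters: the test asks whether the second column
        # Granger-causes the first. Results for each lag are printed and returned.
        data = pd.DataFrame({"mortality": mortality, "pm25": pm25})
        res = grangercausalitytests(data[["mortality", "pm25"]], maxlag=3)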

  11. Biomechanical Comparison of Standard and Linked Single-Row Rotator Cuff Repairs in a Human Cadaver Model.

    PubMed

    Meisel, Adam F; Henninger, Heath B; Barber, F Alan; Getelman, Mark H

    2017-05-01

    The purpose of this study was to evaluate the time zero cyclic and failure loading properties of a linked single-row rotator cuff repair compared with a standard simple suture single-row repair using triple-loaded suture anchors. Eighteen human cadaveric shoulders from 9 matched pairs were dissected, and full-thickness supraspinatus tears were created. The tendon cross-sectional area was recorded. In each pair, one side was repaired with a linked single-row construct and the other with a simple suture single-row construct, both using 2 triple-loaded suture anchors. After preloading, specimens were cycled to 1 MPa of effective stress at 1 Hz for 500 cycles, and gap formation was recorded with a digital video system. Samples were then loaded to failure, and modes of failure were recorded. There was no statistical difference in peak gap formation between the control and linked constructs (3.6 ± 0.9 mm and 3.6 ± 1.2 mm, respectively; P = .697). Both constructs averaged below a 5-mm cyclic failure threshold. There was no statistical difference in ultimate load to failure between the control and linked repair (511.1 ± 139.0 N and 561.2 ± 131.8 N, respectively; P = .164), and both groups reached failure at loads similar to previous studies. Constructs failed predominantly via tissue tearing parallel to the medial suture line. The linked repair performed similarly to the simple single-row repair. Both constructs demonstrated high ultimate load to failure and good resistance to gap formation with cyclic loading, validating the time zero strength of both constructs in a human cadaveric model. The linked repair provided equivalent resistance to gap formation and failure loads compared with simple suture single-row repairs with triple-loaded suture anchors. This suggests that the linked repair is a simplified rip-stop configuration using the existing suture that may perform similarly to current rotator cuff repair techniques. Copyright © 2016 Arthroscopy Association of North America. Published by Elsevier Inc. All rights reserved.

  12. Chroma key without color restrictions based on asynchronous amplitude modulation of background illumination on retroreflective screens

    NASA Astrophysics Data System (ADS)

    Vidal, Borja; Lafuente, Juan A.

    2016-03-01

    A simple technique to avoid color limitations in image capture systems based on chroma key video composition using retroreflective screens and light-emitting diodes (LED) rings is proposed and demonstrated. The combination of an asynchronous temporal modulation onto the background illumination and simple image processing removes the usual restrictions on foreground colors in the scene. The technique removes technical constraints in stage composition, allowing its design to be purely based on artistic grounds. Since it only requires adding a very simple electronic circuit to widely used chroma keying hardware based on retroreflective screens, the technique is easily applicable to TV and filming studios.

  13. Measuring the statistical validity of summary meta‐analysis and meta‐regression results for use in clinical practice

    PubMed Central

    Riley, Richard D.

    2017-01-01

    An important question for clinicians appraising a meta‐analysis is: are the findings likely to be valid in their own practice—does the reported effect accurately represent the effect that would occur in their own clinical population? To this end we advance the concept of statistical validity—where the parameter being estimated equals the corresponding parameter for a new independent study. Using a simple (‘leave‐one‐out’) cross‐validation technique, we demonstrate how we may test meta‐analysis estimates for statistical validity using a new validation statistic, Vn, and derive its distribution. We compare this with the usual approach of investigating heterogeneity in meta‐analyses and demonstrate the link between statistical validity and homogeneity. Using a simulation study, the properties of Vn and the Q statistic are compared for univariate random effects meta‐analysis and a tailored meta‐regression model, where information from the setting (included as model covariates) is used to calibrate the summary estimate to the setting of application. Their properties are found to be similar when there are 50 studies or more, but for fewer studies Vn has greater power but a higher type 1 error rate than Q. The power and type 1 error rate of Vn are also shown to depend on the within‐study variance, between‐study variance, study sample size, and the number of studies in the meta‐analysis. Finally, we apply Vn to two published meta‐analyses and conclude that it usefully augments standard methods when deciding upon the likely validity of summary meta‐analysis estimates in clinical practice. © 2017 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd. PMID:28620945

  14. Atmospheric Tracer Inverse Modeling Using Markov Chain Monte Carlo (MCMC)

    NASA Astrophysics Data System (ADS)

    Kasibhatla, P.

    2004-12-01

    In recent years, there has been an increasing emphasis on the use of Bayesian statistical estimation techniques to characterize the temporal and spatial variability of atmospheric trace gas sources and sinks. The applications have been varied in terms of the particular species of interest, as well as in terms of the spatial and temporal resolution of the estimated fluxes. However, one common characteristic has been the use of relatively simple statistical models for describing the measurement and chemical transport model error statistics and prior source statistics. For example, multivariate normal probability distribution functions (pdfs) are commonly used to model these quantities and inverse source estimates are derived for fixed values of pdf parameters. While the advantage of this approach is that closed form analytical solutions for the a posteriori pdfs of interest are available, it is worth exploring Bayesian analysis approaches which allow for a more general treatment of error and prior source statistics. Here, we present an application of the Markov Chain Monte Carlo (MCMC) methodology to an atmospheric tracer inversion problem to demonstrate how more general statistical models for errors can be incorporated into the analysis in a relatively straightforward manner. The MCMC approach to Bayesian analysis, which has found wide application in a variety of fields, is a statistical simulation approach that involves computing moments of interest of the a posteriori pdf by efficiently sampling this pdf. The specific inverse problem that we focus on is the annual mean CO2 source/sink estimation problem considered by the TransCom3 project. TransCom3 was a collaborative effort involving various modeling groups and followed a common modeling and analysis protocol. As such, this problem provides a convenient case study to demonstrate the applicability of the MCMC methodology to atmospheric tracer source/sink estimation problems.
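
    A minimal sketch of the MCMC idea described above, assuming a toy linear observation model y = Hs + noise with Gaussian errors and a Gaussian prior (the dimensions, matrices, and step size are illustrative, not TransCom3 quantities); a random-walk Metropolis sampler draws from the posterior of the source vector s:

        # Toy Bayesian flux inversion via random-walk Metropolis sampling.
        import numpy as np

        rng = np.random.default_rng(42)
        n_obs, n_src = 20, 3
        H = rng.normal(size=(n_obs, n_src))            # transport (Jacobian) matrix
        s_true = np.array([2.0, -1.0, 0.5])
        y = H @ s_true + rng.normal(scale=0.5, size=n_obs)

        def log_post(s, sigma_obs=0.5, sigma_prior=5.0):
            resid = y - H @ s
            return -0.5 * np.sum(resid**2) / sigma_obs**2 - 0.5 * np.sum(s**2) / sigma_prior**2

        samples, s = [], np.zeros(n_src)
        lp = log_post(s)
        for _ in range(20000):
            prop = s + rng.normal(scale=0.1, size=n_src)      # random-walk proposal
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:          # Metropolis acceptance
                s, lp = prop, lp_prop
            samples.append(s)

        samples = np.array(samples[5000:])                    # discard burn-in
        print("posterior means:", samples.mean(axis=0))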

  15. Statistics Using Just One Formula

    ERIC Educational Resources Information Center

    Rosenthal, Jeffrey S.

    2018-01-01

    This article advocates that introductory statistics be taught by basing all calculations on a single simple margin-of-error formula and deriving all of the standard introductory statistical concepts (confidence intervals, significance tests, comparisons of means and proportions, etc) from that one formula. It is argued that this approach will…
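
    The article's specific formula is not reproduced in this record; as a hedged placeholder, the familiar margin-of-error expression that such an approach typically builds on is

        \text{ME} = z^{*}\,\frac{\hat{\sigma}}{\sqrt{n}}, \qquad \text{CI} = \text{estimate} \pm \text{ME},

    with the standard error \hat{\sigma}/\sqrt{n} replaced as appropriate for proportions or for differences of means and proportions.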

  16. A Simple ELISA Exercise for Undergraduate Biology.

    ERIC Educational Resources Information Center

    Baker, William P.; Moore, Cathy R.

    Understanding of immunological techniques such as the Enzyme Linked Immuno Sorbent Assay (ELISA) is an important part of instructional units in human health, developmental biology, microbiology, and biotechnology. This paper describes a simple ELISA exercise for undergraduate biology that effectively simulates the technique using a paper model.…

  17. Fourier Spectroscopy: A Simple Analysis Technique

    ERIC Educational Resources Information Center

    Oelfke, William C.

    1975-01-01

    Presents a simple method of analysis in which the student can integrate, point by point, any interferogram to obtain its Fourier transform. The manual technique requires no special equipment and is based on relationships that most undergraduate physics students can derive from the Fourier integral equations. (Author/MLH)
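
    In the usual Fourier-transform-spectroscopy notation (a standard relation, stated here rather than quoted from the article), the point-by-point integration amounts to evaluating the cosine transform of the interferogram I(x) numerically:

        B(\tilde{\nu}) = \int_{-\infty}^{\infty} I(x)\cos(2\pi\tilde{\nu}x)\,dx \;\approx\; \Delta x \sum_{k} I(x_{k})\cos(2\pi\tilde{\nu}x_{k}),

    which a student can tabulate by hand for each wavenumber \tilde{\nu} of interest.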

  18. Simple taper: Taper equations for the field forester

    Treesearch

    David R. Larsen

    2017-01-01

    "Simple taper" is set of linear equations that are based on stem taper rates; the intent is to provide taper equation functionality to field foresters. The equation parameters are two taper rates based on differences in diameter outside bark at two points on a tree. The simple taper equations are statistically equivalent to more complex equations. The linear...

  19. Using Simple Linear Regression to Assess the Success of the Montreal Protocol in Reducing Atmospheric Chlorofluorocarbons

    ERIC Educational Resources Information Center

    Nelson, Dean

    2009-01-01

    Following the Guidelines for Assessment and Instruction in Statistics Education (GAISE) recommendation to use real data, an example is presented in which simple linear regression is used to evaluate the effect of the Montreal Protocol on atmospheric concentration of chlorofluorocarbons. This simple set of data, obtained from a public archive, can…
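
    A hedged sketch of the kind of fit described above: ordinary least squares on yearly CFC concentrations around the Montreal Protocol era. The numbers below are placeholders, not the archive data used in the article.

        from scipy import stats

        years = [1995, 1997, 1999, 2001, 2003, 2005, 2007]
        cfc11_ppt = [272, 270, 267, 263, 258, 254, 250]   # illustrative values only

        fit = stats.linregress(years, cfc11_ppt)
        print(f"slope = {fit.slope:.2f} ppt/yr, r^2 = {fit.rvalue**2:.3f}, p = {fit.pvalue:.3g}")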

  20. Efficacy of dialysis in peritoneal dialysis: utility of bioimpedance to calculate Kt/V and the search for a target Kt.

    PubMed

    Martínez Fernández, G; Ortega Cerrato, A; Masiá Mondéjar, J; Pérez Rodríguez, A; Llamas Fuentes, F; Gómez Roldán, C; Pérez-Martínez, Juan

    2013-04-01

    To calculate Kt/V, volume (V) is usually obtained by Watson formula, but bioimpedance spectroscopy (BIS) is a simple and applicable technique to determinate V, along with other hydration and nutrition parameters, in peritoneal dialysis (PD) patients. Dialysis efficacy can also be measured with Kt, but no experience exists in PD, so there is no reference/target value for Kt that must be achieved in these patients to be considered adequately dialyzed. We evaluated the efficacy of PD with Kt/V using Watson formula and BIS for V calculation, assessed hydration status in a PD unit by data obtained by BIS, and attempted to find a reference Kt from the Kt/V previously obtained by BIS. In this observational prospective study of 78 PD patients, we measured V using BIS (V bis) and Watson formula (V w) and calculated weekly Kt/V using both volumes (Kt/V bis/V bis and Kt/V w). With the BIS technique, we obtained and subsequently analyzed other hydration status parameters. We achieved a reference Kt, extrapolating the value desired (weekly Kt/V 1.7) to the target Kt using the simple linear regression statistical technique, basing it on the results of the previously calculated Pearson's linear correlation coefficient. Volume was 1.8 l higher by Watson formula than with BIS (p < 0.001). Weekly Kt/V bis was 2.33 ± 0.68, and mean weekly Kt/V w was 2.20 ± 0.63 (p < 0.0001); 60.25 % of patients presented overhydration according to the BIS study (OH >1.1 l). The target value of Kt for the reference weekly Kt/V bis (1.7) was 64.87 l. BIS is a simple, applicable technique for calculating V in dialysis that can be especially useful in PD patients compared with the anthropometric formulas, by the abnormally distributed body water in these patients. Other parameters obtained by BIS will serve to assess both the distribution of body volume and nutritional status in the clinical setting. The target Kt value obtained from Kt/V bis allowed us to measure the efficacy of PD in a practical way, omitting V measurement.

  1. Asymptotically Optimal and Private Statistical Estimation

    NASA Astrophysics Data System (ADS)

    Smith, Adam

    Differential privacy is a definition of "privacy" for statistical databases. The definition is simple, yet it implies strong semantics even in the presence of an adversary with arbitrary auxiliary information about the database.
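
    For reference (the standard definition, stated here rather than quoted from the abstract), a randomized mechanism M is \varepsilon-differentially private if for every pair of databases D, D' differing in one record and every measurable set S of outputs

        \Pr[M(D)\in S] \;\le\; e^{\varepsilon}\,\Pr[M(D')\in S].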

  2. [Environmental Education Units.] Photography for Kids. Vacant Lot Studies. Contour Mapping.

    ERIC Educational Resources Information Center

    Minneapolis Independent School District 275, Minn.

    Techniques suitable for use with elementary school students when studying field environment are described in these four booklets. Techniques for photography (construction of simple cameras, printing on blueprint and photographic paper, use of simple commercial cameras, development of exposed film); for measuring microclimatic factors (temperature,…

  3. Simple & Rapid Generation of Complex DNA Profiles for the Undergraduate Laboratory

    ERIC Educational Resources Information Center

    Kass, David H.

    2007-01-01

    Deoxyribonucleic acid (DNA) profiles can be generated by a variety of techniques incorporating different types of DNA markers. Simple methods are commonly utilized in the undergraduate laboratory, but with certain drawbacks. In this article, the author presents an advancement of the "Alu" dimorphism technique involving two tetraplex polymerase…

  4. Simple technique to treat pupillary capture after transscleral fixation of intraocular lens.

    PubMed

    Jürgens, Ignasi; Rey, Amanda

    2015-01-01

    We describe a simple surgical technique to manage pupillary capture after previous transscleral fixation of an intraocular lens. Neither author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  5. A simple technique for maintaining Simulium adults, including onchocerciasis vectors, under artificial conditions.

    PubMed

    Raybould, J N; Mhiddin, H K

    1974-01-01

    The maintenance of Simulium adults in the laboratory is necessary for many experimental investigations and is a prerequisite to laboratory colonization. A simple technique for this purpose is described in which each fly is kept separately in a hole bored in a block of expanded polystyrene.

  6. Statistical Tutorial | Center for Cancer Research

    Cancer.gov

    Recent advances in cancer biology have resulted in the need for increased statistical analysis of research data. ST is designed as a follow-up to Statistical Analysis of Research Data (SARD) held in April 2018. The tutorial will apply the general principles of statistical analysis of research data, including descriptive statistics, z- and t-tests of means and mean differences, simple and multiple linear regression, ANOVA tests, and the chi-squared distribution.
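
    Quick, hedged illustrations of the procedures the tutorial lists, using invented numbers and SciPy (the tutorial itself does not prescribe a software package):

        import numpy as np
        from scipy import stats

        a = np.array([5.1, 4.8, 5.6, 5.0, 5.3])
        b = np.array([4.2, 4.5, 4.1, 4.7, 4.4])

        print(stats.ttest_ind(a, b))                 # two-sample t-test of mean difference
        print(stats.linregress(a, b))                # simple linear regression
        print(stats.chisquare([18, 22, 20, 40], f_exp=[25, 25, 25, 25]))  # chi-squared goodness of fit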

  7. The sandwich technique for repair of pectus carinatum and excavatum/carinatum complex.

    PubMed

    Park, Hyung Joo; Kim, Kyung Soo

    2016-09-01

    Simple external compression of pectus carinatum seems to have its limitations, particularly in the repair of asymmetric pectus carinatum or the excavatum/carinatum complex. We devised the sandwich technique (press-molding) to remodel the entire chest wall. The purpose of this study is to introduce the sandwich technique and appraise the early results. Between January 2007 and January 2016, 523 consecutive patients with pectus carinatum and its variants were analyzed retrospectively. No patients were excluded during the study period. The sandwich 1 and 2 techniques using the internal and external pectus bars were used for pectus carinatum repair. Modified techniques using the external string and the internal bar were used to treat the lower costal flare (the flare-buster) and focal protuberances (the magic string) in pectus excavatum repair. Statistical analyses were carried out using paired and unpaired t-tests or Wilcoxon signed-rank tests. The sandwich repair with the external and internal bars was applied to 58 pectus carinatum patients: seven symmetric (12.1%), 14 asymmetric (24.1%), and 37 carinatum-excavatum complex (63.8%). After pectus excavatum repair, 426 patients had the flare-buster and 39 patients received the magic string. The sandwich 1 technique achieved near-complete resolution of carinatum in 52 patients (86.2%). The sandwich 2 technique accomplished an almost symmetric configuration with no residual carinatum in all six cases. The sandwich technique using the external and internal bars seems to be effective in treating asymmetric pectus carinatum and complex excavatum/carinatum deformities. The flare-buster and the magic string effectively relieve the costal flare and focal protuberances in pectus excavatum repair.

  8. Self-organization of cosmic radiation pressure instability. II - One-dimensional simulations

    NASA Technical Reports Server (NTRS)

    Hogan, Craig J.; Woods, Jorden

    1992-01-01

    The clustering of statistically uniform discrete absorbing particles moving solely under the influence of radiation pressure from uniformly distributed emitters is studied in a simple one-dimensional model. Radiation pressure tends to amplify statistical clustering in the absorbers; the absorbing material is swept into empty bubbles, the biggest bubbles grow bigger almost as they would in a uniform medium, and the smaller ones get crushed and disappear. Numerical simulations of a one-dimensional system are used to support the conjecture that the system is self-organizing. Simple statistics indicate that a wide range of initial conditions produce structure approaching the same self-similar statistical distribution, whose scaling properties follow those of the attractor solution for an isolated bubble. The importance of the process for large-scale structuring of the interstellar medium is briefly discussed.

  9. Linear and nonlinear regression techniques for simultaneous and proportional myoelectric control.

    PubMed

    Hahne, J M; Biessmann, F; Jiang, N; Rehbaum, H; Farina, D; Meinecke, F C; Muller, K-R; Parra, L C

    2014-03-01

    In recent years the number of active controllable joints in electrically powered hand-prostheses has increased significantly. However, the control strategies for these devices in current clinical use are inadequate as they require separate and sequential control of each degree-of-freedom (DoF). In this study we systematically compare linear and nonlinear regression techniques for an independent, simultaneous and proportional myoelectric control of wrist movements with two DoF. These techniques include linear regression, mixture of linear experts (ME), multilayer-perceptron, and kernel ridge regression (KRR). They are investigated offline with electro-myographic signals acquired from ten able-bodied subjects and one person with congenital upper limb deficiency. The control accuracy is reported as a function of the number of electrodes and the amount and diversity of training data providing guidance for the requirements in clinical practice. The results showed that KRR, a nonparametric statistical learning method, outperformed the other methods. However, simple transformations in the feature space could linearize the problem, so that linear models could achieve similar performance as KRR at much lower computational costs. Especially ME, a physiologically inspired extension of linear regression represents a promising candidate for the next generation of prosthetic devices.
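
    A hedged sketch of the linear-versus-kernel comparison described above, with toy features standing in for the EMG data (model settings are illustrative, not the study's):

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.linear_model import LinearRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 8))                       # stand-in "EMG features"
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

        linear = LinearRegression()
        krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)

        # Default scoring for regressors is R^2; higher is better.
        print("linear R^2:", cross_val_score(linear, X, y, cv=5).mean())
        print("KRR    R^2:", cross_val_score(krr, X, y, cv=5).mean())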

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shumway, R.H.; McQuarrie, A.D.

    Robust statistical approaches to the problem of discriminating between regional earthquakes and explosions are developed. We compare linear discriminant analysis using descriptive features like amplitude and spectral ratios with signal discrimination techniques using the original signal waveforms and spectral approximations to the log likelihood function. Robust information theoretic techniques are proposed and all methods are applied to 8 earthquakes and 8 mining explosions in Scandinavia and to an event from Novaya Zemlya of unknown origin. It is noted that signal discrimination approaches based on discrimination information and Renyi entropy perform better in the test sample than conventional methods based on spectral ratios involving the P and S phases. Two techniques for identifying the ripple-firing pattern for typical mining explosions are proposed and shown to work well on simulated data and on several Scandinavian earthquakes and explosions. We use both cepstral analysis in the frequency domain and a time domain method based on the autocorrelation and partial autocorrelation functions. The proposed approach strips off underlying smooth spectral and seasonal spectral components corresponding to the echo pattern induced by two simple ripple-fired models. For two mining explosions, a pattern is identified whereas for two earthquakes, no pattern is evident.
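
    A small illustration of the cepstral idea mentioned above: an echo (a ripple-fire-like delayed arrival) produces a peak in the real cepstrum at the delay time. The synthetic pulse, sampling rate, and delay are assumptions for demonstration, not the authors' data or processing chain.

        import numpy as np

        fs = 100.0                                    # sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        pulse = np.exp(-5 * t) * np.sin(2 * np.pi * 8 * t)
        delay = int(0.5 * fs)                         # 0.5 s echo delay
        signal = pulse.copy()
        signal[delay:] += 0.6 * pulse[:-delay]        # add delayed copy (echo)

        spectrum = np.fft.rfft(signal)
        cepstrum = np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))   # real cepstrum
        quefrency = np.arange(cepstrum.size) / fs
        peak = quefrency[np.argmax(cepstrum[10:200]) + 10]          # skip zero-quefrency peak
        print(f"cepstral peak near {peak:.2f} s (echo delay 0.50 s)")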

  11. Survey of statistical techniques used in validation studies of air pollution prediction models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bornstein, R D; Anderson, S F

    1979-03-01

    Statistical techniques used by meteorologists to validate predictions made by air pollution models are surveyed. Techniques are divided into the following three groups: graphical, tabular, and summary statistics. Some of the practical problems associated with verification are also discussed. Characteristics desired in any validation program are listed and a suggested combination of techniques that possesses many of these characteristics is presented.

  12. S-SPatt: simple statistics for patterns on Markov chains.

    PubMed

    Nuel, Grégory

    2005-07-01

    S-SPatt allows the counting of pattern occurrences in text files and, assuming these texts are generated from a random Markovian source, the computation of the P-value of a given observation using a simple binomial approximation.
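
    An illustrative binomial approximation of a pattern-count P-value (an assumed per-position occurrence probability is used here; this is not S-SPatt's exact Markovian model):

        from scipy.stats import binom

        n_positions = 100_000       # sequence length (number of possible start positions)
        p_occurrence = 1e-4         # per-position probability of the pattern under the model
        observed = 23               # observed number of occurrences

        # P(X >= observed) under X ~ Binomial(n, p): the over-representation P-value.
        p_value = binom.sf(observed - 1, n_positions, p_occurrence)
        print(p_value)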

  13. The transformed-stationary approach: a generic and simplified methodology for non-stationary extreme value analysis

    NASA Astrophysics Data System (ADS)

    Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo

    2016-09-01

    Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
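
    A minimal sketch of the transformation step (i) described above, assuming a running-mean/running-standard-deviation normalization; the window length and synthetic series are placeholders, and the sketch is independent of the authors' MATLAB toolbox:

        import numpy as np
        import pandas as pd

        rng = np.random.default_rng(0)
        t = np.arange(40 * 365)                                    # daily data, 40 years
        trend = 0.00005 * t                                        # slow long-term change
        season = 0.3 * np.sin(2 * np.pi * t / 365.25)
        x = pd.Series(trend + season + rng.gumbel(0, 0.2, t.size))

        window = 5 * 365                                           # ~5-year running window
        mu = x.rolling(window, center=True, min_periods=1).mean()
        sigma = x.rolling(window, center=True, min_periods=1).std()

        y = (x - mu) / sigma       # approximately stationary series for standard EVA
        # After fitting, e.g., a GEV to block maxima of y, return levels are
        # back-transformed with x = mu + sigma * y at the time of interest.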

  14. Accounting for selection bias in association studies with complex survey data.

    PubMed

    Wirth, Kathleen E; Tchetgen Tchetgen, Eric J

    2014-05-01

    Obtaining representative information from hidden and hard-to-reach populations is fundamental to describe the epidemiology of many sexually transmitted diseases, including HIV. Unfortunately, simple random sampling is impractical in these settings, as no registry of names exists from which to sample the population at random. However, complex sampling designs can be used, as members of these populations tend to congregate at known locations, which can be enumerated and sampled at random. For example, female sex workers may be found at brothels and street corners, whereas injection drug users often come together at shooting galleries. Despite the logistical appeal, complex sampling schemes lead to unequal probabilities of selection, and failure to account for this differential selection can result in biased estimates of population averages and relative risks. However, standard techniques to account for selection can lead to substantial losses in efficiency. Consequently, researchers implement a variety of strategies in an effort to balance validity and efficiency. Some researchers fully or partially account for the survey design, whereas others do nothing and treat the sample as a realization of the population of interest. We use directed acyclic graphs to show how certain survey sampling designs, combined with subject-matter considerations unique to individual exposure-outcome associations, can induce selection bias. Finally, we present a novel yet simple maximum likelihood approach for analyzing complex survey data; this approach optimizes statistical efficiency at no cost to validity. We use simulated data to illustrate this method and compare it with other analytic techniques.
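
    As a point of comparison for the design-based strategies discussed above (not the paper's maximum likelihood approach), the standard correction weights each sampled subject by the inverse of its selection probability; the probabilities and outcomes below are invented:

        import numpy as np

        # Selection probabilities implied by the sampling design (e.g., venue size).
        pi = np.array([0.30, 0.30, 0.10, 0.10, 0.05, 0.05])
        y = np.array([1, 0, 1, 1, 0, 1])      # outcome (e.g., infection indicator)

        naive = y.mean()
        weights = 1.0 / pi
        ipw = np.sum(weights * y) / np.sum(weights)   # Hajek-type weighted estimator
        print(f"unweighted = {naive:.2f}, design-weighted = {ipw:.2f}")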

  15. Presurgical Planning for Supratentorial Lesions with Free Slicer Software and Sina App.

    PubMed

    Chen, Ji-Gang; Han, Kai-Wei; Zhang, Dan-Feng; Li, Zhen-Xing; Li, Yi-Ming; Hou, Li-Jun

    2017-10-01

    Neuronavigation systems are used widely in the localization of intracranial lesions with satisfactory accuracy. However, they are expensive and difficult to learn. Therefore, a simple and practical augmented reality (AR) system using mobile devices might be an alternative technique. We introduce a mobile AR system for the localization of supratentorial lesions. Its practicability and accuracy were examined by clinical application in patients and comparison with a standard neuronavigation system. A 3-dimensional (3D) model including lesions was created with 3D Slicer. A 2-dimensional image of this 3D model was obtained and overlapped on the patient's head with the Sina app. Registration was conducted with the assistance of anatomical landmarks and fiducial markers. The center of lesion projected on scalp was identified with our mobile AR system and standard neuronavigation system, respectively. The difference in distance between the centers identified by these 2 systems was measured. Our mobile AR system was simple and accurate in the localization of supratentorial lesions with a mean distance difference of 4.4 ± 1.1 mm. Registration added on an average of 141.7 ± 39 seconds to operation time. There was no statistically significant difference for the required time among 3 registrations (P = 0.646). The mobile AR system presents an alternative technology for image-guided neurosurgery and proves to be practical and reliable. The technique contributes to optimal presurgical planning for supratentorial lesions, especially in the absence of a neuronavigation system. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. A Tiered Approach to Evaluating Salinity Sources in Water at Oil and Gas Production Sites.

    PubMed

    Paquette, Shawn M; Molofsky, Lisa J; Connor, John A; Walker, Kenneth L; Hopkins, Harley; Chakraborty, Ayan

    2017-09-01

    A suspected increase in the salinity of fresh water resources can trigger a site investigation to identify the source(s) of salinity and the extent of any impacts. These investigations can be complicated by the presence of naturally elevated total dissolved solids or chlorides concentrations, multiple potential sources of salinity, and incomplete data and information on both naturally occurring conditions and the characteristics of potential sources. As a result, data evaluation techniques that are effective at one site may not be effective at another. In order to match the complexity of the evaluation effort to the complexity of the specific site, this paper presents a strategic tiered approach that utilizes established techniques for evaluating and identifying the source(s) of salinity in an efficient step-by-step manner. The tiered approach includes: (1) a simple screening process to evaluate whether an impact has occurred and if the source is readily apparent; (2) basic geochemical characterization of the impacted water resource(s) and potential salinity sources coupled with simple visual and statistical data evaluation methods to determine the source(s); and (3) advanced laboratory analyses (e.g., isotopes) and data evaluation methods to identify the source(s) and the extent of salinity impacts where it was not otherwise conclusive. A case study from the U.S. Gulf Coast is presented to illustrate the application of this tiered approach. © 2017, National Ground Water Association.

  17. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    PubMed

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.

  18. Resolution Study of a Hyperspectral Sensor using Computed Tomography in the Presence of Noise

    DTIC Science & Technology

    2012-06-14

    diffraction efficiency is dependent on wavelength. Compared to techniques developed by later work, simple algebraic reconstruction techniques were used...spectral dimension, using computed tomography (CT) techniques with only a finite number of diverse images. CTHIS require a reconstruction algorithm in...many frames are needed to reconstruct the spectral cube of a simple object using a theoretical lower bound. In this research, a new algorithm is derived


  19. Regression: The Apple Does Not Fall Far From the Tree.

    PubMed

    Vetter, Thomas R; Schober, Patrick

    2018-05-15

    Researchers and clinicians are frequently interested in either: (1) assessing whether there is a relationship or association between 2 or more variables and quantifying this association; or (2) determining whether 1 or more variables can predict another variable. The strength of such an association is mainly described by the correlation. However, regression analysis and regression models can be used not only to identify whether there is a significant relationship or association between variables but also to generate estimations of such a predictive relationship between variables. This basic statistical tutorial discusses the fundamental concepts and techniques related to the most common types of regression analysis and modeling, including simple linear regression, multiple regression, logistic regression, ordinal regression, and Poisson regression, as well as the common yet often underrecognized phenomenon of regression toward the mean. The various types of regression analysis are powerful statistical techniques, which when appropriately applied, can allow for the valid interpretation of complex, multifactorial data. Regression analysis and models can assess whether there is a relationship or association between 2 or more observed variables and estimate the strength of this association, as well as determine whether 1 or more variables can predict another variable. Regression is thus being applied more commonly in anesthesia, perioperative, critical care, and pain research. However, it is crucial to note that regression can identify plausible risk factors; it does not prove causation (a definitive cause and effect relationship). The results of a regression analysis instead identify independent (predictor) variable(s) associated with the dependent (outcome) variable. As with other statistical methods, applying regression requires that certain assumptions be met, which can be tested with specific diagnostics.

  20. Identifying significant gene‐environment interactions using a combination of screening testing and hierarchical false discovery rate control

    PubMed Central

    Shen, Li; Saykin, Andrew J.; Williams, Scott M.; Moore, Jason H.

    2016-01-01

    Although gene‐environment (G×E) interactions play an important role in many biological systems, detecting these interactions within genome‐wide data can be challenging due to the loss in statistical power incurred by multiple hypothesis correction. To address the challenge of poor power and the limitations of existing multistage methods, we recently developed a screening‐testing approach for G×E interaction detection that combines elastic net penalized regression with joint estimation to support a single omnibus test for the presence of G×E interactions. In our original work on this technique, however, we did not assess type I error control or power and evaluated the method using just a single, small bladder cancer data set. In this paper, we extend the original method in two important directions and provide a more rigorous performance evaluation. First, we introduce a hierarchical false discovery rate approach to formally assess the significance of individual G×E interactions. Second, to support the analysis of truly genome‐wide data sets, we incorporate a score statistic‐based prescreening step to reduce the number of single nucleotide polymorphisms prior to fitting the first stage penalized regression model. To assess the statistical properties of our method, we compare the type I error rate and statistical power of our approach with competing techniques using both simple simulation designs as well as designs based on real disease architectures. Finally, we demonstrate the ability of our approach to identify biologically plausible SNP‐education interactions relative to Alzheimer's disease status using genome‐wide association study data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). PMID:27578615
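
    A hedged sketch of the screening stage in the spirit of the approach described above: an elastic net over main effects and G×E product terms, with nonzero interaction coefficients passed on for formal testing. The data simulation, genotype coding, and tuning choices are invented for illustration and are not the authors' pipeline.

        import numpy as np
        from sklearn.linear_model import ElasticNetCV

        rng = np.random.default_rng(0)
        n, p = 400, 50
        snps = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype coding
        env = rng.normal(size=(n, 1))                           # environmental exposure
        inter = snps * env                                      # G x E product terms
        X = np.hstack([snps, env, inter])

        beta_int = np.zeros(p)
        beta_int[3] = 0.8                                       # one true interaction
        y = 0.5 * snps[:, 3] + env[:, 0] + inter @ beta_int + rng.normal(size=n)

        enet = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
        kept = np.flatnonzero(enet.coef_[p + 1:])               # retained G x E terms
        print("screened-in interaction SNPs:", kept)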

  1. A step-by-step solution for embedding user-controlled cines into educational Web pages.

    PubMed

    Cornfeld, Daniel

    2008-03-01

    The objective of this article is to introduce a simple method for embedding user-controlled cines into a Web page using a simple JavaScript. Step-by-step instructions are included and the source code is made available. This technique allows the creation of portable Web pages that allow the user to scroll through cases as if seated at a PACS workstation. A simple JavaScript allows scrollable image stacks to be included on Web pages. With this technique, you can quickly and easily incorporate entire stacks of CT or MR images into online teaching files. This technique has the potential for use in case presentations, online didactics, teaching archives, and resident testing.

  2. A simple equation to estimate body fat percentage in children with overweightness or obesity: a retrospective study.

    PubMed

    Cortés-Castell, Ernesto; Juste, Mercedes; Palazón-Bru, Antonio; Monge, Laura; Sánchez-Ferrer, Francisco; Rizo-Baeza, María Mercedes

    2017-01-01

    Dual-energy X-ray absorptiometry (DXA) provides separate measurements of fat mass, fat-free mass and bone mass, and is a quick, accurate, and safe technique, yet one that is not readily available in routine clinical practice. Consequently, we aimed to develop statistical formulas to predict fat mass (%) and fat mass index (FMI) with simple parameters (age, sex, weight and height). We conducted a retrospective observational cross-sectional study in 416 overweight or obese patients aged 4-18 years that involved assessing adiposity by DXA (fat mass percentage and FMI), body mass index (BMI), sex and age. We randomly divided the sample into two parts (construction and validation). In the construction sample, we developed formulas to predict fat mass and FMI using linear multiple regression models. The formulas were validated in the other sample, calculating the intraclass correlation coefficient via bootstrapping. The fat mass percentage formula had a coefficient of determination of 0.65. This value was 0.86 for FMI. In the validation, the constructed formulas had an intraclass correlation coefficient of 0.77 for fat mass percentage and 0.92 for FMI. Our predictive formulas accurately predicted fat mass and FMI with simple parameters (BMI, sex and age) in children with overweight and obesity. The proposed methodology could be applied in other fields. Further studies are needed to externally validate these formulas.

  3. Exploring Remote Sensing Through The Use Of Readily-Available Classroom Technologies

    NASA Astrophysics Data System (ADS)

    Rogers, M. A.

    2013-12-01

    Frontier geoscience research using remotely-sensed satellite observations routinely requires sophisticated and novel remote sensing techniques to succeed. Describing these techniques in an educational format presents significant challenges to the science educator, especially with regard to the professional development setting, where a small but competent audience has limited instructor contact time to develop the necessary understanding. In this presentation, we describe the use of simple and cheaply available technologies, including ultrasonic transducers, FLIR detectors, and even simple web cameras, to provide a tangible analogue to sophisticated remote sensing platforms. We also describe methods of curriculum development that leverage the use of these simple devices to teach the fundamentals of remote sensing, resulting in a deeper and more intuitive understanding of the techniques used in modern remote sensing research. Sample workshop itineraries using these techniques are provided as well.

  4. Support Vector Feature Selection for Early Detection of Anastomosis Leakage From Bag-of-Words in Electronic Health Records.

    PubMed

    Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert

    2016-09-01

    The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC), using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) a computationally intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.
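
    An illustrative (not the study's) bag-of-words plus linear maximum-margin classifier pipeline, with four invented mini-notes standing in for EHR free text, showing where feature weights for selection come from:

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import LinearSVC

        notes = [
            "fever and abdominal pain after surgery",
            "routine follow-up, no complaints",
            "elevated crp, tachycardia, suspected leakage",
            "patient mobilized, normal diet resumed",
        ]
        labels = [1, 0, 1, 0]     # 1 = anastomosis leakage, 0 = no complication

        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(notes)            # bag-of-words features
        clf = LinearSVC(C=1.0).fit(X, labels)

        # Large |coefficients| point at candidate discriminative words for inspection.
        vocab = vectorizer.get_feature_names_out()
        top = abs(clf.coef_[0]).argsort()[::-1][:5]
        print([vocab[i] for i in top])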

  5. Statistical Modelling of Temperature and Moisture Uptake of Biochars Exposed to Selected Relative Humidity of Air.

    PubMed

    Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel

    2018-02-09

    New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two mass samples (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables including water adsorption and temperature increase were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced the two qualitative variables, while particle size and biochar type only influenced the temperature.

  6. Bridging stylized facts in finance and data non-stationarities

    NASA Astrophysics Data System (ADS)

    Camargo, Sabrina; Duarte Queirós, Sílvio M.; Anteneodo, Celia

    2013-04-01

    Employing a recent technique which allows the representation of nonstationary data by means of a juxtaposition of locally stationary patches of different length, we introduce a comprehensive analysis of the key observables in a financial market: the trading volume and the price fluctuations. From the segmentation procedure we are able to introduce a quantitative description of statistical features of these two quantities, which are often named stylized facts, namely the tails of the distributions of trading volume and price fluctuations and a dynamics compatible with the U-shaped profile of the volume in a trading session and the slow decay of the autocorrelation function. The segmentation of the trading volume series provides evidence of slow evolution of the fluctuating parameters of each patch, pointing to the mixing scenario. Assuming that long-term features are the outcome of a statistical mixture of simple local forms, we test and compare different probability density functions to provide the long-term distribution of the trading volume, concluding that the log-normal gives the best agreement with the empirical distribution. Moreover, the segmentation of the magnitude of price fluctuations is quite different from the results for the trading volume, indicating that changes in the statistics of price fluctuations occur on a faster scale than in the case of trading volume.
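
    A minimal sketch of the distributional comparison mentioned above, fitting a log-normal to a synthetic volume series and checking agreement with a Kolmogorov-Smirnov statistic (illustrative only; not the empirical data or the authors' segmentation-based procedure):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        volume = rng.lognormal(mean=10.0, sigma=0.8, size=5000)   # stand-in volume series

        shape, loc, scale = stats.lognorm.fit(volume, floc=0)
        print(stats.kstest(volume, "lognorm", args=(shape, loc, scale)))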

  7. Effect of crowd size on patient volume at a large, multipurpose, indoor stadium.

    PubMed

    De Lorenzo, R A; Gray, B C; Bennett, P C; Lamparella, V J

    1989-01-01

    A prediction of patient volume expected at "mass gatherings" is desirable in order to provide optimal on-site emergency medical care. While several methods of predicting patient loads have been suggested, a reliable technique has not been established. This study examines the frequency of medical emergencies at the Syracuse University Carrier Dome, a 50,500-seat indoor stadium. Patient volume and level of care at collegiate basketball and football games as well as rock concerts, over a 7-year period were examined and tabulated. This information was analyzed using simple regression and nonparametric statistical methods to determine level of correlation between crowd size and patient volume. These analyses demonstrated no statistically significant increase in patient volume for increasing crowd size for basketball and football events. There was a small but statistically significant increase in patient volume for increasing crowd size for concerts. A comparison of similar crowd size for each of the three events showed that patient frequency is greatest for concerts and smallest for basketball. The study suggests that crowd size alone has only a minor influence on patient volume at any given event. Structuring medical services based solely on expected crowd size and not considering other influences such as event type and duration may give poor results.

  8. OPLS statistical model versus linear regression to assess sonographic predictors of stroke prognosis.

    PubMed

    Vajargah, Kianoush Fathi; Sadeghi-Bazargani, Homayoun; Mehdizadeh-Esfanjani, Robab; Savadi-Oskouei, Daryoush; Farhoudi, Mehdi

    2012-01-01

    The objective of the present study was to assess the applicability of the orthogonal projections to latent structures (OPLS) statistical model vs traditional linear regression in investigating the role of transcranial Doppler (TCD) sonography in predicting ischemic stroke prognosis. The study was conducted on 116 ischemic stroke patients admitted to a specialty neurology ward. The Unified Neurological Stroke Scale was used once for clinical evaluation during the first week of admission and again six months later. All data were first analyzed using simple linear regression and then considered for multivariate analysis using PLS/OPLS models through the SIMCA P+12 statistical software package. The linear regression results used for the identification of TCD predictors of stroke prognosis were confirmed through the OPLS modeling technique. Moreover, in comparison to linear regression, the OPLS model appeared to have higher sensitivity in detecting the predictors of ischemic stroke prognosis and detected several more predictors. Applying the OPLS model made it possible to use both single TCD measures/indicators and arbitrarily dichotomized measures of TCD single-vessel involvement as well as the overall TCD result. In conclusion, the authors recommend PLS/OPLS methods as complementary rather than alternative to the available classical regression models such as linear regression.
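
    A hedged sketch of the latent-structure idea above, using scikit-learn's PLSRegression as a stand-in for the commercial SIMCA OPLS implementation; the patient data here are simulated placeholders.

```python
# Sketch: ordinary least squares vs a PLS latent-variable model for outcome prediction.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(116, 12))              # 116 patients, 12 hypothetical TCD indicators
y = 2.0 * X[:, 0] + rng.normal(size=116)    # simulated 6-month outcome score

print("OLS cross-validated R^2:", cross_val_score(LinearRegression(), X, y, cv=5).mean())
print("PLS cross-validated R^2:", cross_val_score(PLSRegression(n_components=3), X, y, cv=5).mean())
```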

  9. A Simple Classroom Teaching Technique to Help Students Understand Michaelis-Menten Kinetics

    ERIC Educational Resources Information Center

    Runge, Steven W.; Hill, Brent J. F.; Moran, William M.

    2006-01-01

    A new, simple classroom technique helps cell biology students understand principles of Michaelis-Menten enzyme kinetics. A student mimics the enzyme and the student's hand represents the enzyme's active site. The catalytic event is the transfer of marbles (substrate molecules) by hand from one plastic container to another. As predicted, increases…

  10. Medial Meniscus Posterior Root Tear Repair Using a 2-Simple-Suture Pullout Technique.

    PubMed

    Samy, Tarek Mohamed; Nassar, Wael A M; Zakaria, Zeiad Mohamed; Farrag Abdelaziz, Ahmed Khaled

    2017-06-01

    Medial meniscus posterior root tear is one of the underestimated knee injuries in terms of incidence. Despite its grave sequelae, a simple but effective repair technique can help maintain the longevity of the native knee joint. In the current note, a 2-simple-suture pullout technique was used to effectively reduce the meniscus posterior root to its anatomic position. The success of the technique depended on proper tool selection as well as a tibial tunnel direction that allowed easier root suturing and better suture tensioning, without inducing any iatrogenic articular cartilage injury or meniscal tissue loss. Using anterior knee arthroscopy portals, anterolateral as a viewing portal and anteromedial as a working portal, a 7-mm tibial tunnel starting at the Gerdy tubercle and ending at the medial meniscus posterior root bed was created. The 2 simple sutures were retrieved through the tunnel, then tensioned and secured over a 12-mm-diameter washer at the outer orifice of the tibial tunnel. Anatomic reduction of the medial meniscus posterior root tear was confirmed arthroscopically during surgery and radiologically by postoperative magnetic resonance imaging.

  11. Estimating global distribution of boreal, temperate, and tropical tree plant functional types using clustering techniques

    NASA Astrophysics Data System (ADS)

    Wang, Audrey; Price, David T.

    2007-03-01

    A simple integrated algorithm was developed to relate global climatology to the distributions of tree plant functional types (PFT). Multivariate cluster analysis was performed to analyze the statistical homogeneity of the climate space occupied by individual tree PFTs. Forested regions identified from the satellite-based GLC2000 classification were separated into tropical, temperate, and boreal sub-PFTs for use in the Canadian Terrestrial Ecosystem Model (CTEM). Global data sets of monthly minimum temperature, growing degree days, an index of climatic moisture, and estimated PFT cover fractions were then used as variables in the cluster analysis. The statistical results for individual PFT clusters were found to be consistent with other global-scale classifications of dominant vegetation. Improving on previous quantifications of the climatic limitations on PFT distributions, the results also demonstrated overlapping PFT cluster boundaries that reflect vegetation transitions, for example, between tropical and temperate biomes. The resulting global database should provide a better basis for simulating the interaction of climate change and terrestrial ecosystem dynamics using global vegetation models.

  12. On base station cooperation using statistical CSI in jointly correlated MIMO downlink channels

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Jiang, Bin; Jin, Shi; Gao, Xiqi; Wong, Kai-Kit

    2012-12-01

    This article studies the transmission of a single cell-edge user's signal using statistical channel state information at cooperative base stations (BSs) with a general jointly correlated multiple-input multiple-output (MIMO) channel model. We first present an optimal scheme to maximize the ergodic sum capacity with per-BS power constraints, revealing that the transmitted signals of all BSs are mutually independent and that the optimum transmit directions for each BS align with the eigenvectors of that BS's own transmit correlation matrix. Then, we employ matrix permanents to derive a closed-form tight upper bound on the ergodic sum capacity. Based on these results, we develop a low-complexity power allocation solution using convex optimization techniques and a simple iterative water-filling algorithm (IWFA). Finally, we derive a necessary and sufficient condition under which a beamforming approach achieves capacity for all BSs. Simulation results demonstrate that the upper bound on the ergodic sum capacity is tight and that the proposed cooperative transmission scheme increases the downlink system sum capacity considerably.
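
    A minimal sketch of the classical water-filling power allocation that underlies iterative schemes like the IWFA mentioned above (not the authors' exact algorithm); the eigenmode gains and power budget are made up for illustration.

```python
# Sketch: water-filling over channel eigenmodes via bisection on the water level.
import numpy as np

def water_filling(gains, total_power, tol=1e-9):
    """Allocate p_i = max(0, mu - 1/g_i) so that sum(p_i) equals total_power."""
    lo, hi = 0.0, total_power + 1.0 / gains.min()
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        if np.maximum(0.0, mu - 1.0 / gains).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / gains)

gains = np.array([2.0, 1.0, 0.5, 0.1])   # made-up eigenvalues of a transmit correlation matrix
powers = water_filling(gains, total_power=4.0)
print(powers, powers.sum())              # stronger eigenmodes receive more power
```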

  13. Universal Capacitance Model for Real-Time Biomass in Cell Culture.

    PubMed

    Konakovsky, Viktor; Yagtu, Ali Civan; Clemens, Christoph; Müller, Markus Michael; Berger, Martina; Schlatter, Stefan; Herwig, Christoph

    2015-09-02

    Capacitance probes have the potential to revolutionize bioprocess control due to their safe and robust use and their ability to detect even the smallest capacitors in the form of biological cells. Several techniques have evolved to model biomass statistically; however, there are problems with model transfer between cell lines and process conditions. For linear models, the errors of transferred models in the declining phase of the culture are around +100% or worse, causing unnecessary delays with test runs during bioprocess development. The goal of this work was to develop one single universal model which can be adapted, by considering a potentially mechanistic factor, to estimate biomass in yet untested clones and scales. The novelty of this work is a methodology to select sensitive frequencies to build a statistical model which can be shared among fermentations with an error between 9% and 38% (mean error around 20%) for the whole process, including the declining phase. A simple linear factor was found to be responsible for the transferability of biomass models between cell lines, indicating a link to their phenotype or physiology.

  14. A classification procedure for the effective management of changes during the maintenance process

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Basili, Victor R.

    1992-01-01

    During software operation, maintainers are often faced with numerous change requests. Given available resources such as effort and calendar time, changes, if approved, have to be planned to fit within budget and schedule constraints. In this paper, we address the issue of assessing the difficulty of a change based on known or predictable data. This paper should be considered as a first step towards the construction of customized economic models for maintainers. In it, we propose a modeling approach, based on regular statistical techniques, that can be used in a variety of software maintenance environments. The approach can be easily automated, and it is simple for people with limited statistical experience to use. Moreover, it deals effectively with the uncertainty usually associated with both model inputs and outputs. The modeling approach was validated on a data set provided by NASA/GSFC and shown to be effective in classifying changes with respect to the effort involved in implementing them. Other advantages of the approach are discussed along with additional steps to improve the results.

  15. Paracousti-UQ: A Stochastic 3-D Acoustic Wave Propagation Algorithm.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Preston, Leiph

    Acoustic full waveform algorithms, such as Paracousti, provide deterministic solutions in complex, 3-D variable environments. In reality, environmental and source characteristics are often only known in a statistical sense. Thus, to fully characterize the expected sound levels within an environment, this uncertainty in environmental and source factors should be incorporated into the acoustic simulations. Performing Monte Carlo (MC) simulations is one method of assessing this uncertainty, but it can quickly become computationally intractable for realistic problems. An alternative method, using the technique of stochastic partial differential equations (SPDE), allows computation of the statistical properties of output signals at a fraction of the computational cost of MC. Paracousti-UQ solves the SPDE system of 3-D acoustic wave propagation equations and provides estimates of the uncertainty of the output simulated wave field (e.g., amplitudes, waveforms) based on estimated probability distributions of the input medium and source parameters. This report describes the derivation of the stochastic partial differential equations, their implementation, and comparison of Paracousti-UQ results with MC simulations using simple models.

  16. Comparison of Kasai Autocorrelation and Maximum Likelihood Estimators for Doppler Optical Coherence Tomography

    PubMed Central

    Chan, Aaron C.; Srinivasan, Vivek J.

    2013-01-01

    In optical coherence tomography (OCT) and ultrasound, unbiased Doppler frequency estimators with low variance are desirable for blood velocity estimation. Hardware improvements in OCT mean that ever higher acquisition rates are possible, which should also, in principle, improve estimation performance. Paradoxically, however, the widely used Kasai autocorrelation estimator's performance worsens with increasing acquisition rate. We propose that parametric estimators based on accurate models of noise statistics can offer better performance. We derive a maximum likelihood estimator (MLE) based on a simple additive white Gaussian noise model, and show that it can outperform the Kasai autocorrelation estimator. In addition, we derive the Cramér-Rao lower bound (CRLB) and show that the variance of the MLE approaches the CRLB for moderate data lengths and noise levels. We note that the MLE performance improves with longer acquisition time, and remains constant or improves with higher acquisition rates. These qualities may make it a preferred technique as OCT imaging speed continues to improve. Finally, our work motivates the development of more general parametric estimators based on statistical models of decorrelation noise. PMID:23446044
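
    A minimal sketch of the lag-one (Kasai) autocorrelation frequency estimate discussed above, applied to synthetic complex samples; the sampling rate, Doppler shift, and noise level are invented for illustration.

```python
# Sketch: Kasai autocorrelation estimate of the mean Doppler frequency.
import numpy as np

fs, f_true = 100e3, 4e3                      # hypothetical A-line rate and Doppler shift (Hz)
n = np.arange(256)
rng = np.random.default_rng(2)
noise = 0.3 * (rng.normal(size=n.size) + 1j * rng.normal(size=n.size))
z = np.exp(2j * np.pi * f_true * n / fs) + noise

# The phase of the lag-one autocorrelation encodes the mean Doppler frequency.
R1 = np.sum(z[1:] * np.conj(z[:-1]))
f_kasai = fs * np.angle(R1) / (2 * np.pi)
print(f"estimated {f_kasai:.1f} Hz vs true {f_true:.1f} Hz")
```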

  17. Implications of low-power He-Ne laser and monochromatic red light biostimulation in the metabolism of proteins and glucosides

    NASA Astrophysics Data System (ADS)

    Onac, I.; Pop, L.; Ungur, Rodica; Giurgiu, Ioana

    2001-06-01

    We examined the changes occurring in the metabolism of proteins (serum cholinesterase, total proteins) and in the metabolism of glycosides (serum glucose) in Cavia cobaia. A single-blind study was carried out and the results were checked on the first, tenth, and twentieth days of treatment. The data thus obtained were graphically represented and statistically processed according to the Duncan test. The treatment technique and doses were similar across groups, and the results were compared with the data obtained from controls and environment controls. In the groups biostimulated with the He-Ne laser, serum cholinesterase levels increased proportionally with the dose, reaching a peak on day 10, which was not the case with the controls. Monochromatic red light caused a similar but quantitatively lower effect. The same results were obtained in the case of serum proteins as well; however, the effect did not depend on the dose and was less significant statistically than in the case of serum cholinesterase, both in the laser-treated and in the monochromatic red light-treated groups.

  18. Nanocluster building blocks of artificial square spin ice: Stray-field studies of thermal dynamics

    NASA Astrophysics Data System (ADS)

    Pohlit, Merlin; Porrati, Fabrizio; Huth, Michael; Ohno, Yuzo; Ohno, Hideo; Müller, Jens

    2015-05-01

    We present measurements of the thermal dynamics of a Co-based single building block of an artificial square spin ice fabricated by focused electron-beam-induced deposition. We employ micro-Hall magnetometry, an ultra-sensitive tool to study the stray field emanating from magnetic nanostructures, as a new technique to access the dynamical properties during the magnetization reversal of the spin-ice nanocluster. The obtained hysteresis loop exhibits distinct steps, displaying a reduction of their "coercive field" with increasing temperature. Therefore, thermally unstable states could be repetitively prepared by relatively simple temperature and field protocols allowing one to investigate the statistics of their switching behavior within experimentally accessible timescales. For a selected switching event, we find a strong reduction of the so-prepared states' "survival time" with increasing temperature and magnetic field. Besides the possibility to control the lifetime of selected switching events at will, we find evidence for a more complex behavior caused by the special spin ice arrangement of the macrospins, i.e., that the magnetic reversal statistically follows distinct "paths" most likely driven by thermal perturbation.

  19. Assessment of the Resistance of Several Suture Techniques in Human Cadaver Achilles Tendons.

    PubMed

    Manent, Andrea; Lopez, Laia; Vilanova, Joan; Mota, Tiago; Alvarez, Jordi; Santamaría, Alejandro; Oliva, Xavier Martí

    Many treatments, conservative and surgical, are available for acute Achilles tendon ruptures, with none clearly superior to another. For surgical treatment, various techniques can be used. Recent studies have shown that double stitches are superior to simple sutures. Therefore, in the present study, we sought to determine the suture technique that is the most resistant to rupture. We performed an experimental anatomic study with 27 fresh-frozen human cadaveric Achilles tendons obtained through the body donation program of the University of Barcelona, testing the maximum strength. We simulated a rupture by performing a resection in the middle portion of the tendon, 4 cm proximal to the calcaneal insertion. We then evaluated the double Kessler, double Bunnell, Krackow, and percutaneous Ma and Griffith techniques. We used absorbable suture (polydioxanone no. 1) with all the techniques. Traction was applied using a machine that pulled the tendon at 10 to 100 N over 1000 repetitive cycles. Statistical analysis was performed using the χ2 test and analysis of variance, with 95% confidence intervals (p < .05). All repairs failed at the site of the suture knots, with none pulling out through the substance of the tendon. We found no significant differences among the different open suture techniques (p > .05). The Krackow suture presented with superior resistance, with a rupture rate of 16.70% but a mean elongation of 7.11 mm. The double Bunnell suture had the same rupture rate as the Krackow suture (16.70%) but an inferior mean elongation of 4.53 mm. The Krackow and Bunnell sutures were superior in endurance, strength at failure, and primary stability compared with the other suture types. However, the former presented with greater tendon elongation, although the difference was not statistically significant. Therefore, according to our findings and the published data, we recommend double Bunnell sutures for the surgical treatment of acute Achilles tendon rupture. Copyright © 2017 American College of Foot and Ankle Surgeons. Published by Elsevier Inc. All rights reserved.

  20. Using R-Project for Free Statistical Analysis in Extension Research

    ERIC Educational Resources Information Center

    Mangiafico, Salvatore S.

    2013-01-01

    One option for Extension professionals wishing to use free statistical software is to use online calculators, which are useful for common, simple analyses. A second option is to use a free computing environment capable of performing statistical analyses, like R-project. R-project is free, cross-platform, powerful, and respected, but may be…

  1. Using Data from Climate Science to Teach Introductory Statistics

    ERIC Educational Resources Information Center

    Witt, Gary

    2013-01-01

    This paper shows how the application of simple statistical methods can reveal to students important insights from climate data. While the popular press is filled with contradictory opinions about climate science, teachers can encourage students to use introductory-level statistics to analyze data for themselves on this important issue in public…

  2. Using R in Introductory Statistics Courses with the pmg Graphical User Interface

    ERIC Educational Resources Information Center

    Verzani, John

    2008-01-01

    The pmg add-on package for the open source statistics software R is described. This package provides a simple to use graphical user interface (GUI) that allows introductory statistics students, without advanced computing skills, to quickly create the graphical and numeric summaries expected of them. (Contains 9 figures.)

  3. "Push back" technique: A simple method to remove broken drill bit from the proximal femur.

    PubMed

    Chouhan, Devendra K; Sharma, Siddhartha

    2015-11-18

    Broken drill bits can be difficult to remove from the proximal femur and may necessitate additional surgical exploration or special instrumentation. We present a simple technique to remove a broken drill bit that does not require any special instrumentation and can be accomplished through the existing incision. This technique is useful for those cases where the length of the broken drill bit is greater than the diameter of the bone.

  4. A simple method of measuring tibial tubercle to trochlear groove distance on MRI: description of a novel and reliable technique.

    PubMed

    Camp, Christopher L; Heidenreich, Mark J; Dahm, Diane L; Bond, Jeffrey R; Collins, Mark S; Krych, Aaron J

    2016-03-01

    Tibial tubercle-trochlear groove (TT-TG) distance is a variable that helps guide surgical decision-making in patients with patellar instability. The purpose of this study was to compare the accuracy and reliability of an MRI TT-TG measuring technique using a simple external alignment method to a previously validated gold standard technique that requires advanced software read by radiologists. TT-TG was calculated by MRI on 59 knees with a clinical diagnosis of patellar instability in a blinded and randomized fashion by two musculoskeletal radiologists using advanced software and by two orthopaedists using the study technique, which utilizes measurements taken on a simple electronic imaging platform. Interrater reliability between the two radiologists and between the two orthopaedists, and intermethod reliability between the two techniques, were calculated using intraclass correlation coefficients (ICC) and concordance correlation coefficients (CCC). ICC and CCC values greater than 0.75 were considered to represent excellent agreement. The mean TT-TG distance was 14.7 mm (standard deviation [SD] 4.87 mm) and 15.4 mm (SD 5.41 mm) as measured by the radiologists and orthopaedists, respectively. Excellent interobserver agreement was noted between the radiologists (ICC 0.941; CCC 0.941), between the orthopaedists (ICC 0.978; CCC 0.976), and between the two techniques (ICC 0.941; CCC 0.933). The simple TT-TG distance measurement technique analysed in this study resulted in excellent agreement and reliability as compared with the gold standard technique. This method can predictably be performed by orthopaedic surgeons without advanced radiologic software. Level of Evidence: II.

  5. Aortic valve replacement using continuous suture technique in patients with aortic valve disease.

    PubMed

    Choi, Jong Bum; Kim, Jong Hun; Park, Hyun Kyu; Kim, Kyung Hwa; Kim, Min Ho; Kuh, Ja Hong; Jo, Jung Ku

    2013-08-01

    The continuous suture (CS) technique has several advantages as a method for simple, fast, and secure aortic valve replacement (AVR). We used a simple CS technique without the use of a pledget for AVR and evaluated the surgical outcomes. Between October 2007 and 2012, 123 patients with aortic valve disease underwent AVR alone (n=28) or with other concomitant cardiac procedures (n=95), such as mitral, tricuspid, or aortic surgery. The patients were divided into two groups: the interrupted suture (IS) group (n=47), in which the conventional IS technique was used, and the CS group (n=76), in which the simple CS technique was used. There were two hospital deaths (1.6%), which were not related to the suture technique. There were no significant differences in cardiopulmonary bypass time or aortic cross-clamp time between the two groups for AVR alone or AVR with concomitant cardiac procedures. In the IS group, two patients had prosthetic endocarditis and one patient experienced significant perivalvular leak. These patients underwent reoperations. In the CS group, there were no complications related to the surgery. Postoperatively, the two groups had similar aortic valve gradients. The simple CS method is useful and secure for AVR in patients with aortic valve disease, and it may minimize surgical complications, as neither pledgets nor braided sutures are used.

  6. Evaluation of Two Statistical Methods Provides Insights into the Complex Patterns of Alternative Polyadenylation Site Switching

    PubMed Central

    Li, Jie; Li, Rui; You, Leiming; Xu, Anlong; Fu, Yonggui; Huang, Shengfeng

    2015-01-01

    Switching between different alternative polyadenylation (APA) sites plays an important role in the fine tuning of gene expression. New technologies for the execution of 3’-end enriched RNA-seq allow genome-wide detection of the genes that exhibit significant APA site switching between different samples. Here, we show that the independence test gives better results than the linear trend test in detecting APA site-switching events. Further examination suggests that the discrepancy between these two statistical methods arises from complex APA site-switching events that cannot be represented by a simple change of average 3’-UTR length. In theory, the linear trend test is only effective in detecting these simple changes. We classify the switching events into four switching patterns: two simple patterns (3’-UTR shortening and lengthening) and two complex patterns. By comparing the results of the two statistical methods, we show that complex patterns account for 1/4 of all observed switching events that happen between normal and cancerous human breast cell lines. Because simple and complex switching patterns may convey different biological meanings, they merit separate study. We therefore propose to combine both the independence test and the linear trend test in practice. First, the independence test should be used to detect APA site switching; second, the linear trend test should be invoked to identify simple switching events; and third, those complex switching events that pass independence testing but fail linear trend testing can be identified. PMID:25875641
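
    A hedged sketch of the independence test discussed above, applied to a made-up read-count table for one gene's poly(A) sites in two samples; scipy's chi2_contingency stands in for the independence test used in the paper.

```python
# Sketch: test whether poly(A) site usage is independent of sample identity.
import numpy as np
from scipy.stats import chi2_contingency

# Rows = samples, columns = read counts at sites ordered proximal -> distal (invented).
counts = np.array([
    [120,  40, 300],   # normal cell line
    [ 90, 210,  60],   # cancerous cell line
])
chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")
# A linear trend test would instead look for a monotone shift toward shorter or
# longer 3'-UTRs, and so can miss complex patterns like the middle site gaining reads.
```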

  7. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    PubMed

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (simple linear regression, multiple linear regression, one-way and two-way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test, and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most updated information and newly added models.

  8. Evaluation of statistical treatments of left-censored environmental data using coincident uncensored data sets: I. Summary statistics

    USGS Publications Warehouse

    Antweiler, Ronald C.; Taylor, Howard E.

    2008-01-01

    The main classes of statistical treatment of below-detection-limit (left-censored) environmental data for the determination of basic statistics that have been used in the literature are substitution methods, maximum likelihood, regression on order statistics (ROS), and nonparametric techniques. These treatments, along with using all instrument-generated data (even those below detection), were evaluated by examining data sets in which the true values of the censored data were known. It was found that for data sets with less than 70% censored data, the best technique overall for determination of summary statistics was the nonparametric Kaplan-Meier technique. ROS and the two substitution methods of assigning one-half the detection limit value or a random number between zero and the detection limit to censored data were adequate alternatives. The use of these two substitution methods, however, requires a thorough understanding of how the laboratory censored the data. The technique of employing all instrument-generated data - including numbers below the detection limit - was found to be less adequate than the above techniques. At high degrees of censoring (greater than 70% censored data), no technique provided good estimates of summary statistics. Maximum likelihood techniques were found to be far inferior to all other treatments except the substitution of zero or of the detection limit value for censored data.
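
    A minimal sketch of the two substitution treatments evaluated above, applied to invented left-censored data; the Kaplan-Meier and ROS approaches are omitted for brevity.

```python
# Sketch: summary statistics of left-censored data via two substitution methods.
import numpy as np

rng = np.random.default_rng(3)
true = rng.lognormal(mean=0.0, sigma=1.0, size=200)   # hypothetical "true" concentrations
dl = 1.0                                              # detection limit
censored = true < dl                                  # non-detect flags

half_dl = np.where(censored, dl / 2.0, true)                              # substitute DL/2
uniform = np.where(censored, rng.uniform(0.0, dl, size=true.size), true)  # substitute U(0, DL)

for name, x in [("true values", true), ("DL/2 substitution", half_dl), ("uniform substitution", uniform)]:
    print(f"{name:22s} mean = {x.mean():.3f}  median = {np.median(x):.3f}")
```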

  9. A Step Beyond Simple Keyword Searches: Services Enabled by a Full Content Digital Journal Archive

    NASA Technical Reports Server (NTRS)

    Boccippio, Dennis J.

    2003-01-01

    The problems of managing and searching large archives of scientific journal articles can potentially be addressed through data mining and statistical techniques matured primarily for quantitative scientific data analysis. A journal paper could be represented by a multivariate descriptor, e.g., the occurrence counts of a number of key technical terms or phrases (keywords), perhaps derived from a controlled vocabulary (e.g., the American Meteorological Society's Glossary of Meteorology) or bootstrapped from the journal archive itself. With this technique, conventional statistical classification tools can be leveraged to address challenges faced by both scientists and professional societies in knowledge management. For example, cluster analyses can be used to find bundles of "most-related" papers, and to address the issue of journal bifurcation (when is a new journal necessary, and what topics should it encompass). Similarly, neural networks can be trained to predict the optimal journal (within a society's collection) in which a newly submitted paper should be published. Comparable techniques could enable very powerful end-user tools for journal searches, all premised on the view of a paper as a data point in a multidimensional descriptor space, e.g.: "find papers most similar to the one I am reading", "build a personalized subscription service, based on the content of the papers I am interested in, rather than preselected keywords", "find suitable reviewers, based on the content of their own published works", etc. Such services may represent the next "quantum leap" beyond the rudimentary search interfaces currently provided to end-users, as well as a compelling value-added component needed to bridge the print-to-digital-medium gap and help stabilize professional societies' revenue streams during the print-to-digital transition.
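
    A hedged sketch of the descriptor idea sketched above: counting occurrences of a small controlled vocabulary in each abstract and clustering the resulting vectors; the vocabulary and abstracts are placeholders.

```python
# Sketch: represent papers as keyword-count vectors, then cluster "most related" papers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

vocabulary = ["lightning", "convection", "aerosol", "radiance", "doppler"]  # placeholder controlled vocabulary
abstracts = [
    "lightning flash rates increase with convection depth ...",
    "aerosol optical depth retrieved from top-of-atmosphere radiance ...",
    "doppler radar estimates of convection updraft speeds ...",
]
X = CountVectorizer(vocabulary=vocabulary).fit_transform(abstracts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # papers sharing vocabulary land in the same cluster
```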

  10. Hydrologic Response to Climate Change: Missing Precipitation Data Matters for Computed Timing Trends

    NASA Astrophysics Data System (ADS)

    Daniels, B.

    2016-12-01

    This work demonstrates the derivation of climate timing statistics and their application to determine the resulting hydroclimate impacts. Long-term daily precipitation observations from 50 California stations were used to compute climate trends in precipitation event Intensity, event Duration, and Pause between events. Each precipitation event trend was then applied as input to a PRMS hydrology model, which showed hydrologic changes to recharge, baseflow, streamflow, etc. An important concern was the precipitation uncertainty induced by missing observation values, which causes errors in the quantification of precipitation trends. Many standard statistical techniques, such as ARIMA and simple endogenous or even exogenous imputation, were applied but failed to resolve these uncertainties. What did resolve them was the use of multiple imputation techniques, which involved fitting Weibull probability distributions to multiply imputed values for the three precipitation trends. Permutation resampling techniques using Monte Carlo processing were then applied to the multiple imputation values to derive significance p-values for each trend. Significance at the 95% level was found for Intensity at 11 of the 50 stations, for Duration at 16 of the 50, and for Pause at 19, of which 12 were significant at the 99% level. The significance-weighted trends for California are Intensity -4.61% per decade, Duration +3.49% per decade, and Pause +3.58% per decade. Two California basins with PRMS hydrologic models were studied: the Feather River in the northern Sierra Nevada mountains and the central coast Soquel-Aptos. Each local trend was changed without changing the other trends or the total precipitation. The Feather River Basin's critical supply to Lake Oroville and the State Water Project benefited from a total streamflow increase of 1.5%. The Soquel-Aptos Basin water supply was impacted by a total groundwater recharge decrease of -7.5% and a streamflow decrease of -3.2%.
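
    A minimal sketch of a permutation (Monte Carlo resampling) test for the significance of a trend such as those described above; the annual series is synthetic and the test statistic is an ordinary least-squares slope.

```python
# Sketch: permutation p-value for a linear trend in an annual series.
import numpy as np

rng = np.random.default_rng(4)
years = np.arange(1950, 2016)
series = 0.03 * (years - years[0]) + rng.normal(scale=1.0, size=years.size)  # synthetic trend + noise

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

observed = slope(years, series)
null = np.array([slope(years, rng.permutation(series)) for _ in range(5000)])
p_value = (np.abs(null) >= abs(observed)).mean()
print(f"slope = {observed:.4f} per year, permutation p = {p_value:.4f}")
```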

  11. Simple to complex modeling of breathing volume using a motion sensor.

    PubMed

    John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-06-01

    Our aim was to compare simple and complex modeling techniques for estimating categories of low, medium, and high ventilation (VE) from ActiGraph™ activity counts. Vertical axis ActiGraph™ GT1M activity counts, oxygen consumption, and VE were measured during treadmill walking and running, sports, household chores, and labor-intensive employment activities. Categories of low (<19.3 l/min), medium (19.3 to 35.4 l/min), and high (>35.4 l/min) VE were derived from activity intensity classifications (light <2.9 METs, moderate 3.0 to 5.9 METs, and vigorous >6.0 METs). We examined the accuracy of two simple techniques (multiple regression and activity count cut-point analyses) and one complex modeling technique (random forests) in predicting VE from activity counts. The prediction accuracy of the complex random forest technique was marginally better than that of the simple multiple regression method. Both techniques accurately predicted VE categories almost 80% of the time. The multiple regression and random forest techniques were more accurate (85 to 88%) in predicting medium VE. Both techniques predicted high VE (70 to 73%) with greater accuracy than low VE (57 to 60%). ActiGraph™ cut-points for low, medium, and high VE were <1381, 1381 to 3660, and >3660 cpm. There were minor differences in prediction accuracy between the multiple regression and the random forest technique. This study provides methods to objectively estimate VE categories using activity monitors that can easily be deployed in the field. Objective estimates of VE should provide a better understanding of the dose-response relationship between internal exposure to pollutants and disease. Copyright © 2013 Elsevier B.V. All rights reserved.
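
    A hedged sketch contrasting the simple (regression plus cut-points) and complex (random forest) approaches compared above; the activity counts and VE values are simulated, not the study's data.

```python
# Sketch: predict low/medium/high ventilation categories from activity counts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
counts = rng.uniform(0, 8000, size=(500, 1))                    # counts per minute (simulated)
ve = 10 + 0.006 * counts[:, 0] + rng.normal(scale=5, size=500)  # simulated VE (l/min)
category = np.digitize(ve, [19.3, 35.4])                        # 0 = low, 1 = medium, 2 = high

X_tr, X_te, ve_tr, ve_te, cat_tr, cat_te = train_test_split(counts, ve, category, random_state=0)

# Simple approach: regress VE on counts, then bin the prediction into categories.
reg = LinearRegression().fit(X_tr, ve_tr)
simple_acc = (np.digitize(reg.predict(X_te), [19.3, 35.4]) == cat_te).mean()

# Complex approach: random forest predicting the category directly.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, cat_tr)
print(f"regression + cut-points accuracy ~ {simple_acc:.2f}, random forest ~ {rf.score(X_te, cat_te):.2f}")
```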

  12. Microscopic calculations of liquid and solid neutron star matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chakravarty, Sudip; Miller, Michael D.; Chia-Wei, Woo

    1974-02-01

    As the first step to a microscopic determination of the solidification density of neutron star matter, variational calculations are performed for both liquid and solid phases using a very simple model potential. The potential, containing only the repulsive part of the Reid ¹S₀ interaction, together with Boltzmann statistics, defines a "homework problem" which several groups involved in solidification calculations have agreed to solve. The results were to be compared for the purpose of checking calculational techniques. For the solid energy, good agreement with Canuto and Chitre was found. Both the liquid and solid energies are much lower than those of Pandharipande. It is shown that for this oversimplified model, neutron star matter will remain solid down to ordinary nuclear matter density.

  13. Low Light Diagnostics in Thin-Film Photovoltaics

    NASA Astrophysics Data System (ADS)

    Shvydka, Diana; Karpov, Victor; Compaan, Alvin

    2003-03-01

    We study the statistics of the major photovoltaic (PV) parameters, such as open-circuit voltage, short-circuit current, and fill factor, vs. light intensity on a set of nominally identical CdTe/CdS solar cells. We found the most probable parameter values to change with the light intensity as predicted by the standard diode model, while their relative fluctuations increase dramatically under low light. A crossover light intensity is found below which the relative fluctuations of the PV parameters diverge inversely proportionally to the square root of the light intensity. We propose a model in which the observed fluctuations are due to lateral nonuniformities in the device structure. In particular, the crossover is attributed to the lateral nonuniformity screening length exceeding the device size. From the practical standpoint, our study introduces a simple uniformity diagnostic technique.

  14. ``Sweetening'' Technical Physics with Hershey's Kisses

    NASA Astrophysics Data System (ADS)

    Stone, Chuck

    2003-04-01

    This paper describes an activity in which students measure the mass of each candy in one full bag of Hershey's Kisses and then use a simple spreadsheet program to construct a histogram showing the number of candies as a function of mass. Student measurements indicate that one single bag of 80 Kisses yields enough data to produce a noticeable variation in the candy's mass distribution. The bimodal character of this distribution provides a useful discussion topic. This activity can be performed as a classroom project, a laboratory exercise, or an interactive lecture demonstration. In all these formats, students have the opportunity to collect, organize, process, and analyze real data. In addition to strengthening graphical analysis skills, this activity introduces students to fundamentals of statistics, manufacturing processes in the industrial workplace, and process control techniques.
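
    A minimal sketch of the histogram-building step described above; the masses here are simulated as two overlapping normal modes to mimic the bimodal shape noted in the text, not real measurements.

```python
# Sketch: histogram of candy masses for one (simulated) 80-piece bag.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(6)
masses = np.concatenate([rng.normal(4.4, 0.08, 45), rng.normal(4.7, 0.08, 35)])  # grams, simulated

plt.hist(masses, bins=15, edgecolor="black")
plt.xlabel("mass (g)")
plt.ylabel("number of candies")
plt.title("Mass distribution of one bag (simulated)")
plt.show()

print(f"mean = {masses.mean():.3f} g, sample std = {masses.std(ddof=1):.3f} g")
```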

  15. Bayesian reconstruction of projection reconstruction NMR (PR-NMR).

    PubMed

    Yoon, Ji Won

    2014-11-01

    Projection reconstruction nuclear magnetic resonance (PR-NMR) is a technique for generating multidimensional NMR spectra. A small number of projections from lower-dimensional NMR spectra are used to reconstruct the multidimensional NMR spectra. In our previous work, it was shown that multidimensional NMR spectra are efficiently reconstructed using peak-by-peak based reversible jump Markov chain Monte Carlo (RJMCMC) algorithm. We propose an extended and generalized RJMCMC algorithm replacing a simple linear model with a linear mixed model to reconstruct close NMR spectra into true spectra. This statistical method generates samples in a Bayesian scheme. Our proposed algorithm is tested on a set of six projections derived from the three-dimensional 700 MHz HNCO spectrum of a protein HasA. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Sculpting bespoke mountains: Determining free energies with basis expansions

    NASA Astrophysics Data System (ADS)

    Whitmer, Jonathan K.; Fluitt, Aaron M.; Antony, Lucas; Qin, Jian; McGovern, Michael; de Pablo, Juan J.

    2015-07-01

    The intriguing behavior of a wide variety of physical systems, ranging from amorphous solids or glasses to proteins, is a direct manifestation of underlying free energy landscapes riddled with local minima separated by large barriers. Exploring such landscapes has arguably become one of statistical physics's great challenges. A new method is proposed here for uniform sampling of rugged free energy surfaces. The method, which relies on special Green's functions to approximate the Dirac delta function, improves significantly on existing simulation techniques by providing a boundary-agnostic approach that is capable of mapping complex features in multidimensional free energy surfaces. The usefulness of the proposed approach is established in the context of a simple model glass former and model proteins, demonstrating improved convergence and accuracy over existing methods.

  17. A simple, physically-based method for evaluating the economic costs of geo-engineering schemes

    NASA Astrophysics Data System (ADS)

    Garrett, T. J.

    2009-04-01

    The consumption of primary energy (e.g., coal, oil, uranium) by the global economy is done in expectation of a return on investment. For geo-engineering schemes, however, the relationship between the primary energy consumption required and the economic return is, at first glance, quite different. The energy costs of a given scheme represent a removal of economically productive available energy that would otherwise do work in the normal global economy. What are the economic implications of the energy consumption associated with geo-engineering techniques? I will present a simple thermodynamic argument that, in general, real (inflation-adjusted) economic value has a fixed relationship to the rate of global primary energy consumption. This hypothesis is shown to be supported by 36 years of available energy statistics and a two-millennia record of global economic production. What is found from this analysis is that the value in any given inflation-adjusted 1990 dollar is sustained by a constant 9.7 ± 0.3 milliwatts of global primary energy consumption. Thus, insofar as geo-engineering is concerned, any scheme that requires some nominal fraction of continuous global primary energy output necessitates a corresponding inflationary loss of real global economic value. For example, if 1% of global energy output is required, at today's consumption rate of 15 TW this corresponds to an inflationary loss of 15 trillion 1990 dollars of real value. The loss will be less, however, if the geo-engineering scheme also enables a demonstrable enhancement of global economic production capacity through climate modification.
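
    A short check of the arithmetic in the closing example, using the stated 9.7 mW of primary energy consumption per inflation-adjusted 1990 dollar:

\[
\frac{0.01 \times 15\ \mathrm{TW}}{9.7\ \mathrm{mW}\ \text{per 1990 dollar}}
= \frac{1.5 \times 10^{11}\ \mathrm{W}}{9.7 \times 10^{-3}\ \mathrm{W}\ \text{per 1990 dollar}}
\approx 1.5 \times 10^{13}\ \text{1990 dollars} \approx 15\ \text{trillion 1990 dollars.}
\]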

  18. Simple PowerPoint Animation

    NASA Astrophysics Data System (ADS)

    Takahashi, Leo

    2011-03-01

    The use of animation as a teaching tool has long been of interest to the readers of and contributors to this journal.1-5 While the sophisticated techniques presented in the cited papers are excellent and useful, there is one overlooked technique that may be of interest to the teacher who wants something quick and simple to enhance classroom presentations: PowerPoint animation.

  19. Using High Speed Smartphone Cameras and Video Analysis Techniques to Teach Mechanical Wave Physics

    ERIC Educational Resources Information Center

    Bonato, Jacopo; Gratton, Luigi M.; Onorato, Pasquale; Oss, Stefano

    2017-01-01

    We propose the use of smartphone-based slow-motion video analysis techniques as a valuable tool for investigating physics concepts ruling mechanical wave propagation. The simple experimental activities presented here, suitable for both high school and undergraduate students, allows one to measure, in a simple yet rigorous way, the speed of pulses…

  20. A simple technique for maintaining Simulium adults, including onchocerciasis vectors, under artificial conditions*

    PubMed Central

    Raybould, John N.; Mhiddin, Haji K.

    1974-01-01

    The maintenance of Simulium adults in the laboratory is necessary for many experimental investigations and is a prerequisite to laboratory colonization. A simple technique for this purpose is described in which each fly is kept separately in a hole bored in a block of expanded polystyrene. PMID:4549353

  1. Round Robin Study: Molecular Simulation of Thermodynamic Properties from Models with Internal Degrees of Freedom.

    PubMed

    Schappals, Michael; Mecklenfeld, Andreas; Kröger, Leif; Botan, Vitalie; Köster, Andreas; Stephan, Simon; García, Edder J; Rutkai, Gabor; Raabe, Gabriele; Klein, Peter; Leonhard, Kai; Glass, Colin W; Lenhard, Johannes; Vrabec, Jadran; Hasse, Hans

    2017-09-12

    Thermodynamic properties are often modeled by classical force fields which describe the interactions on the atomistic scale. Molecular simulations are used for retrieving thermodynamic data from such models, and many simulation techniques and computer codes are available for that purpose. In the present round robin study, the following fundamental question is addressed: Will different user groups working with different simulation codes obtain coinciding results within the statistical uncertainty of their data? A set of 24 simple simulation tasks is defined and solved by five user groups working with eight molecular simulation codes: DL_POLY, GROMACS, IMC, LAMMPS, ms2, NAMD, Tinker, and TOWHEE. Each task consists of the definition of (1) a pure fluid that is described by a force field and (2) the conditions under which that property is to be determined. The fluids are four simple alkanes: ethane, propane, n-butane, and iso-butane. All force fields consider internal degrees of freedom: OPLS, TraPPE, and a modified OPLS version with bond stretching vibrations. Density and potential energy are determined as a function of temperature and pressure on a grid which is specified such that all states are liquid. The user groups worked independently and reported their results to a central instance. The full set of results was disclosed to all user groups only at the end of the study. During the study, the central instance gave only qualitative feedback. The results reveal the challenges of carrying out molecular simulations. Several iterations were needed to eliminate gross errors. For most simulation tasks, the remaining deviations between the results of the different groups are acceptable from a practical standpoint, but they are often outside of the statistical errors of the individual simulation data. However, there are also cases where the deviations are unacceptable. This study highlights similarities between computer experiments and laboratory experiments, which are both subject not only to statistical error but also to systematic error.

  2. CAPSAS: Computer Assisted Program for the Selection of Appropriate Statistics.

    ERIC Educational Resources Information Center

    Shermis, Mark D.; Albert, Susan L.

    A computer-assisted program has been developed for the selection of statistics or statistical techniques by both students and researchers. Based on Andrews, Klem, Davidson, O'Malley and Rodgers "A Guide for Selecting Statistical Techniques for Analyzing Social Science Data," this FORTRAN-compiled interactive computer program was…

  3. Data analysis techniques

    NASA Technical Reports Server (NTRS)

    Park, Steve

    1990-01-01

    A large and diverse number of computational techniques are routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.

  4. Simple technique for observing subsurface damage in machining of ceramics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, H.H.K.; Jahanmir, S.

    1994-05-01

    A simple technique is proposed for directly observing subsurface damage in the machining of ceramics. The technique requires two polished specimens and an optical microscope with Nomarski illumination for examination. The subsurface damage created by the grinding of an alumina ceramic is investigated using this technique. The mode of damage is identified as intragrain twinning/slip, and intergranular and transgranular cracking. Chipping along the twinned planes and along the transgranular crack planes, and dislodgement of the intergranularly debonded grains are suggested to be the mechanisms of material removal in the machining of this alumina ceramic.

  5. Humans make efficient use of natural image statistics when performing spatial interpolation.

    PubMed

    D'Antona, Anthony D; Perry, Jeffrey S; Geisler, Wilson S

    2013-12-16

    Visual systems learn through evolution and experience over the lifespan to exploit the statistical structure of natural images when performing visual tasks. Understanding which aspects of this statistical structure are incorporated into the human nervous system is a fundamental goal in vision science. To address this goal, we measured human ability to estimate the intensity of missing image pixels in natural images. Human estimation accuracy is compared with various simple heuristics (e.g., local mean) and with optimal observers that have nearly complete knowledge of the local statistical structure of natural images. Human estimates are more accurate than those of simple heuristics, and they match the performance of an optimal observer that knows the local statistical structure of relative intensities (contrasts). This optimal observer predicts the detailed pattern of human estimation errors and hence the results place strong constraints on the underlying neural mechanisms. However, humans do not reach the performance of an optimal observer that knows the local statistical structure of the absolute intensities, which reflect both local relative intensities and local mean intensity. As predicted from a statistical analysis of natural images, human estimation accuracy is negligibly improved by expanding the context from a local patch to the whole image. Our results demonstrate that the human visual system exploits efficiently the statistical structure of natural images.
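
    A minimal sketch of the simple "local mean" heuristic mentioned above as a baseline: estimating a missing pixel from its eight immediate neighbours in a stand-in grayscale image.

```python
# Sketch: estimate a missing pixel as the mean of its 8 surrounding neighbours.
import numpy as np

rng = np.random.default_rng(7)
image = rng.uniform(0.0, 1.0, size=(64, 64))   # stand-in grayscale image
r, c = 32, 32                                  # location of the "missing" pixel

patch = image[r - 1:r + 2, c - 1:c + 2].copy()
patch[1, 1] = np.nan                           # exclude the centre pixel itself
estimate = np.nanmean(patch)
print(f"local-mean estimate = {estimate:.3f}, true value = {image[r, c]:.3f}")
```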

  6. Intracutaneous sterile water for back pain in labour.

    PubMed Central

    Reynolds, J. L.

    1994-01-01

    Intracutaneous sterile water appears to be a simple, effective, and harmless technique for relieving back pain. This technique has been used to relieve pain of renal colic, whiplash, and back pain in labour. Family doctors often practise obstetrics in small or isolated units that have limited options for pain relief in labour. This technique is simple, is easy to learn, and appears effective for relieving back pain, which complicates about one third of all labours. PMID:7950471

  7. Computational Modeling of Statistical Learning: Effects of Transitional Probability versus Frequency and Links to Word Learning

    ERIC Educational Resources Information Center

    Mirman, Daniel; Estes, Katharine Graf; Magnuson, James S.

    2010-01-01

    Statistical learning mechanisms play an important role in theories of language acquisition and processing. Recurrent neural network models have provided important insights into how these mechanisms might operate. We examined whether such networks capture two key findings in human statistical learning. In Simulation 1, a simple recurrent network…

  8. A Laboratory Experiment, Based on the Maillard Reaction, Conducted as a Project in Introductory Statistics

    ERIC Educational Resources Information Center

    Kravchuk, Olena; Elliott, Antony; Bhandari, Bhesh

    2005-01-01

    A simple laboratory experiment, based on the Maillard reaction, served as a project in Introductory Statistics for undergraduates in Food Science and Technology. By using the principles of randomization and replication and reflecting on the sources of variation in the experimental data, students reinforced the statistical concepts and techniques…

  9. Performance Data Gathering and Representation from Fixed-Size Statistical Data

    NASA Technical Reports Server (NTRS)

    Yan, Jerry C.; Jin, Haoqiang H.; Schmidt, Melisa A.; Kutler, Paul (Technical Monitor)

    1997-01-01

    The two performance data types commonly used in the supercomputing community, statistics and event traces, are discussed and compared. Statistical data are much more compact but lack the probative power that event traces offer. Event traces, on the other hand, are unbounded and can easily fill up the entire file system during program execution. In this paper, we propose an innovative methodology for performance data gathering and representation that offers a middle ground. Two basic ideas are employed: the use of averages to replace recording data for each instance, and 'formulae' to represent sequences associated with communication and control flow. The user can incrementally trade off tracing overhead and trace data size against data quality. In other words, the user will be able to limit the amount of trace data collected and, at the same time, carry out some of the analysis event traces offer using space-time views. With the help of a few simple examples, we illustrate the use of these techniques in performance tuning and compare the quality of the traces we collected with event traces. We found that the trace files thus obtained are, indeed, small, bounded, and predictable before program execution, and that the quality of the space-time views generated from these statistical data is excellent. Furthermore, experimental results showed that the formulae proposed were able to capture all the sequences associated with 11 of the 15 applications tested. The performance of the formulae can be incrementally improved by allocating more memory at runtime to learn longer sequences.

  10. Kolmogorov-Smirnov statistical test for analysis of ZAP-70 expression in B-CLL, compared with quantitative PCR and IgV(H) mutation status.

    PubMed

    Van Bockstaele, Femke; Janssens, Ann; Piette, Anne; Callewaert, Filip; Pede, Valerie; Offner, Fritz; Verhasselt, Bruno; Philippé, Jan

    2006-07-15

    ZAP-70 has been proposed as a surrogate marker for immunoglobulin heavy-chain variable region (IgV(H)) mutation status, which is known as a prognostic marker in B-cell chronic lymphocytic leukemia (CLL). The flow cytometric analysis of ZAP-70 suffers from difficulties in standardization and interpretation. We applied the Kolmogorov-Smirnov (KS) statistical test to make the analysis more straightforward. We examined ZAP-70 expression by flow cytometry in 53 patients with CLL. Analysis was performed as initially described by Crespo et al. (N Engl J Med 2003;348:1764-1775) and, alternatively, by application of the KS statistical test comparing T cells with B cells. Receiver operating characteristic (ROC) curve analyses were performed to determine the optimal cut-off values for ZAP-70 measured by the two approaches. ZAP-70 protein expression was compared with ZAP-70 mRNA expression measured by quantitative PCR (qPCR) and with the IgV(H) mutation status. Both flow cytometric analyses correlated well with the molecular technique and proved to be of equal value in predicting the IgV(H) mutation status. Applying the KS test is reproducible, simple, and straightforward, and it overcomes a number of difficulties encountered in the Crespo method. The KS statistical test is an essential part of the software delivered with modern routine analytical flow cytometers and is well suited for the analysis of ZAP-70 expression in CLL. (c) 2006 International Society for Analytical Cytology.
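
    A hedged sketch of the two-sample Kolmogorov-Smirnov comparison described above, using scipy on simulated per-cell ZAP-70 fluorescence intensities for T cells and B cells.

```python
# Sketch: compare ZAP-70 fluorescence distributions of T cells and B cells with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(8)
t_cells = rng.lognormal(mean=3.0, sigma=0.5, size=2000)   # simulated intensities (internal reference)
b_cells = rng.lognormal(mean=2.4, sigma=0.5, size=5000)   # simulated CLL B-cell intensities

d_statistic, p_value = ks_2samp(t_cells, b_cells)
print(f"KS D = {d_statistic:.3f}, p = {p_value:.2e}")
# A larger D indicates a bigger difference between the B-cell distribution and the
# T-cell reference within the same sample.
```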

  11. Metamodels for Computer-Based Engineering Design: Survey and Recommendations

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Peplinski, Jesse; Koch, Patrick N.; Allen, Janet K.

    1997-01-01

    The use of statistical techniques to build approximations of expensive computer analysis codes pervades much of today's engineering design. These statistical approximations, or metamodels, are used to replace the actual expensive computer analyses, facilitating multidisciplinary, multiobjective optimization and concept exploration. In this paper we review several of these techniques, including design of experiments, response surface methodology, Taguchi methods, neural networks, inductive learning, and kriging. We survey their existing applications in engineering design and then address the dangers of applying traditional statistical techniques to approximate deterministic computer analysis codes. We conclude with recommendations for the appropriate use of statistical approximation techniques in given situations and for how common pitfalls can be avoided.
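
    A minimal sketch of a response-surface style metamodel in the spirit of the survey above: a quadratic polynomial fitted to a handful of runs of an "expensive" analysis code, here replaced by a cheap stand-in function.

```python
# Sketch: quadratic response-surface metamodel of an expensive analysis code.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def expensive_code(x):                       # stand-in for the real analysis code
    return np.sin(3 * x[:, 0]) + x[:, 1] ** 2

rng = np.random.default_rng(9)
X_train = rng.uniform(-1, 1, size=(30, 2))   # a small design of experiments
y_train = expensive_code(X_train)

metamodel = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
metamodel.fit(X_train, y_train)

X_new = rng.uniform(-1, 1, size=(5, 2))
print(metamodel.predict(X_new))              # cheap surrogate predictions
print(expensive_code(X_new))                 # what the expensive code would return
```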

  12. Comparison of fat maintenance in the face with centrifuge versus filtered and washed fat

    PubMed Central

    Asilian, Ali; Siadat, Amir Hossein; Iraji, Razieh

    2014-01-01

    Background: Autogenous fat injection of the face is a viable and lasting remedy for soft tissue loss and has become a mainstay in facial rejuvenation. Fat transfer is used either as a stand-alone technique or as an adjunct to other filler and lifting techniques, depending on patient needs. Although soft tissue augmentation with autologous fat transfer has been increasingly used by esthetic surgeons, there is no agreement concerning the best way of processing the harvested fat before injection. This study compared the clinical results obtained using fat simply filtered and washed through a metal sieve with those achieved by means of pure centrifuged fat. Materials and Methods: A prospective single-blind analysis was performed on 32 healthy patients undergoing nasolabial fold fat transplantation from 2009 to 2011 (simple sampling). Patients were randomly assigned to two groups. The faces of half the patients (16 subjects) were injected with centrifuged fat and the other half with simply filtered and washed fat, to evaluate the effect of the preparation method on fat graft viability. The objective method involved the evaluation of postoperative photographs (at months 1, 6, and 12) by an esthetic surgeon (according to the nasolabial scale). The subjective method was a self-assessment obtained from patients about their general level of satisfaction and the improvement of skin texture. Statistical analyses were performed by means of the Wilcoxon and Mann-Whitney tests. The acquired data were analyzed using SPSS version 15, and a value of P < 0.05 was considered significant. Results: There was no significant difference in the survival of grafted fat between Group 1 (fat processing with centrifugation at 3400 rpm for 1 min) and Group 2 (washing the fat in the sieve). Conclusion: Our data suggest that centrifugation of the fat does not enhance the survival of grafted fat (P > 0.05). PMID:25197299

  13. Comparison of four modeling tools for the prediction of potential distribution for non-indigenous weeds in the United States

    USGS Publications Warehouse

    Magarey, Roger; Newton, Leslie; Hong, Seung C.; Takeuchi, Yu; Christie, Dave; Jarnevich, Catherine S.; Kohl, Lisa; Damus, Martin; Higgins, Steven I.; Miller, Leah; Castro, Karen; West, Amanda; Hastings, John; Cook, Gericke; Kartesz, John; Koop, Anthony

    2018-01-01

    This study compares four models for predicting the potential distribution of non-indigenous weed species in the conterminous U.S. The comparison focused on evaluating modeling tools and protocols as currently used for weed risk assessment or for predicting the potential distribution of invasive weeds. We used six weed species (three highly invasive and three less invasive non-indigenous species) that have been established in the U.S. for more than 75 years. The experiment involved providing non-U.S. location data to users familiar with one of the four evaluated techniques, who then developed predictive models that were applied to the United States without knowing the identity of the species or its U.S. distribution. We compared a simple GIS climate matching technique known as Proto3, a simple climate matching tool CLIMEX Match Climates, the correlative model MaxEnt, and a process model known as the Thornley Transport Resistance (TTR) model. Two experienced users ran each modeling tool except TTR, which had one user. Models were trained with global species distribution data excluding any U.S. data, and then were evaluated using the current known U.S. distribution. The influence of weed species identity and modeling tool on prevalence and sensitivity effects was compared using a generalized linear mixed model. The modeling tool factor itself had low statistical significance, while weed species alone accounted for 69.1% and 48.5% of the variance for prevalence and sensitivity, respectively. These results suggest that simple modeling tools might perform as well as complex ones in the case of predicting potential distribution for a weed not yet present in the United States. Considerations of model accuracy should also be balanced with those of reproducibility and ease of use. More important than the choice of modeling tool is the construction of robust protocols and testing both new and experienced users under blind test conditions that approximate operational conditions.

  14. "Clothesline technique" for proximal tibial shaft fracture fixation using conventional intramedullary nail: a simple, useful, and inexpensive technique to prevent fracture malalignment.

    PubMed

    Belangero, William Dias; Santos Pires, Robinson Esteves; Livani, Bruno; Rossi, Felipe Lins; de Andrade, Andre Luis Lugnani

    2018-05-01

    Treatment of proximal tibial shaft fractures is always challenging. Despite the development of modern techniques, the literature still shows high complication rates, especially regarding proximal fragment malalignment. It is well known that knee position in flexion during tibial nailing is responsible for extension and valgus deformities of the proximal fragment. Unlike in tibial shaft fractures, nails do not reduce proximal tibial fractures due to the medullary canal width. This study aims to describe a simple, useful, and inexpensive technique to prevent valgus and extension deformities when treating proximal tibial fractures using conventional nails: the so-called clothesline technique.

  15. The sandwich technique for repair of pectus carinatum and excavatum/carinatum complex

    PubMed Central

    Kim, Kyung Soo

    2016-01-01

    Background Simple external compression of pectus carinatum seems to have its limitations, particularly in the repair of asymmetric pectus carinatum or the excavatum/carinatum complex. We devised the sandwich technique (press-molding) to remodel the entire chest wall. The purpose of this study is to introduce the sandwich technique and appraise the early results. Methods Between January 2007 and January 2016, 523 consecutive patients with pectus carinatum and its variants were analyzed retrospectively. No patients were excluded during the study period. The sandwich 1 and 2 techniques, using the internal and external pectus bars, were used for pectus carinatum repair. Modified techniques using the external string and the internal bar were used to treat the lower costal flare (the flare-buster) and focal protuberances (the magic string) in pectus excavatum repair. Statistical analyses were carried out using paired and unpaired t-tests or Wilcoxon signed-rank tests. Results The sandwich repair with the external and internal bars was applied to 58 pectus carinatum patients: seven symmetric (12.1%), 14 asymmetric (24.1%), and 37 carinatum-excavatum complex (63.8%). After pectus excavatum repair, 426 patients had the flare-buster and 39 patients received the magic string. The sandwich 1 technique achieved near-complete resolution of carinatum in 52 patients (86.2%). The sandwich 2 technique accomplished an almost symmetric configuration with no residual carinatum in all six cases. Conclusions The sandwich technique using the external and internal bars seems to be effective in treating asymmetric pectus carinatum and complex excavatum/carinatum deformities. The flare-buster and the magic string effectively relieve the costal flare and focal protuberances in pectus excavatum repair. PMID:27747176

  16. A Highly Efficient Design Strategy for Regression with Outcome Pooling

    PubMed Central

    Mitchell, Emily M.; Lyles, Robert H.; Manatunga, Amita K.; Perkins, Neil J.; Schisterman, Enrique F.

    2014-01-01

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. PMID:25220822
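
    A minimal simulation sketch of the pooling idea described above: k-means clustering on the fully observed predictors defines the pools, outcomes are then "assayed" only at the pool level, and a linear model is fitted to the pooled data. The variable names, sample sizes, and equal-volume (simple averaging) assumption are illustrative, not taken from the BioCycle analysis.

    ```python
    # Simulation sketch: pools are formed by k-means clustering on the fully
    # observed predictors, outcomes are "assayed" only at the pool level, and a
    # linear model is fitted to the pool means (hypothetical variables and sizes).
    import numpy as np
    from sklearn.cluster import KMeans
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, n_pools = 300, 30
    X = rng.normal(size=(n, 2))                     # predictors measured on every subject
    y = 1.0 + 0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

    # Group subjects with similar predictor values so within-pool variation is small.
    pools = KMeans(n_clusters=n_pools, n_init=10, random_state=0).fit_predict(X)

    # Physically pooling equal specimen volumes corresponds to averaging within pools;
    # unequal pool sizes would call for weighting by pool size in the regression.
    X_pool = np.vstack([X[pools == k].mean(axis=0) for k in range(n_pools)])
    y_pool = np.array([y[pools == k].mean() for k in range(n_pools)])

    fit = sm.OLS(y_pool, sm.add_constant(X_pool)).fit()
    print(fit.params)  # close to (1.0, 0.8, -0.5) despite only 30 assays
    ```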

  17. A highly efficient design strategy for regression with outcome pooling.

    PubMed

    Mitchell, Emily M; Lyles, Robert H; Manatunga, Amita K; Perkins, Neil J; Schisterman, Enrique F

    2014-12-10

    The potential for research involving biospecimens can be hindered by the prohibitive cost of performing laboratory assays on individual samples. To mitigate this cost, strategies such as randomly selecting a portion of specimens for analysis or randomly pooling specimens prior to performing laboratory assays may be employed. These techniques, while effective in reducing cost, are often accompanied by a considerable loss of statistical efficiency. We propose a novel pooling strategy based on the k-means clustering algorithm to reduce laboratory costs while maintaining a high level of statistical efficiency when predictor variables are measured on all subjects, but the outcome of interest is assessed in pools. We perform simulations motivated by the BioCycle study to compare this k-means pooling strategy with current pooling and selection techniques under simple and multiple linear regression models. While all of the methods considered produce unbiased estimates and confidence intervals with appropriate coverage, pooling under k-means clustering provides the most precise estimates, closely approximating results from the full data and losing minimal precision as the total number of pools decreases. The benefits of k-means clustering evident in the simulation study are then applied to an analysis of the BioCycle dataset. In conclusion, when the number of lab tests is limited by budget, pooling specimens based on k-means clustering prior to performing lab assays can be an effective way to save money with minimal information loss in a regression setting. Copyright © 2014 John Wiley & Sons, Ltd.

  18. Techniques for estimating health care costs with censored data: an overview for the health services researcher

    PubMed Central

    Wijeysundera, Harindra C; Wang, Xuesong; Tomlinson, George; Ko, Dennis T; Krahn, Murray D

    2012-01-01

    Objective The aim of this study was to review statistical techniques for estimating the mean population cost using health care cost data that, because of the inability to achieve complete follow-up until death, are right censored. The target audience is health service researchers without an advanced statistical background. Methods Data were sourced from longitudinal heart failure costs from Ontario, Canada, and administrative databases were used for estimating costs. The dataset consisted of 43,888 patients, with follow-up periods ranging from 1 to 1538 days (mean 576 days). The study was designed so that mean health care costs over 1080 days of follow-up were calculated using naïve estimators such as full-sample and uncensored case estimators. Reweighted estimators – specifically, the inverse probability weighted estimator – were calculated, as was phase-based costing. Costs were adjusted to 2008 Canadian dollars using the Bank of Canada consumer price index (http://www.bankofcanada.ca/en/cpi.html). Results Over the restricted follow-up of 1080 days, 32% of patients were censored. The full-sample estimator was found to underestimate mean cost ($30,420) compared with the reweighted estimators ($36,490). The phase-based costing estimate of $37,237 was similar to that of the simple reweighted estimator. Conclusion The authors recommend against the use of full-sample or uncensored case estimators when censored data are present. In the presence of heavy censoring, phase-based costing is an attractive alternative approach. PMID:22719214
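
    The reweighting idea described above can be sketched in a few lines: complete cases are weighted by the inverse of a Kaplan-Meier estimate of the censoring distribution evaluated at their required follow-up time. The simulation below is entirely hypothetical (cost rates, censoring pattern, and the simplified evaluation of the weight function are assumptions for illustration only).

    ```python
    # Sketch of a simple inverse-probability-weighted (IPW) mean-cost estimator for
    # right-censored cost data; complete cases are weighted by 1/K(T), where K is a
    # hand-rolled Kaplan-Meier estimate of the censoring distribution (all data and
    # parameters below are simulated, and the weight evaluation is simplified).
    import numpy as np

    def km_survival(times, events):
        """Kaplan-Meier estimate of P(X > t), returned as a step-function callable."""
        order = np.argsort(times)
        times, events = times[order], events[order]
        uniq = np.unique(times[events == 1])
        surv, s = [], 1.0
        for t in uniq:
            at_risk = np.sum(times >= t)
            d = np.sum((times == t) & (events == 1))
            s *= 1.0 - d / at_risk
            surv.append(s)
        uniq, surv = np.asarray(uniq), np.asarray(surv)

        def S(t):
            idx = np.searchsorted(uniq, t, side="right") - 1
            return np.where(idx >= 0, surv[np.clip(idx, 0, None)], 1.0)

        return S

    rng = np.random.default_rng(3)
    n, horizon = 2000, 1080.0
    death = rng.exponential(900.0, size=n)              # days until death
    censor = rng.uniform(30.0, 1538.0, size=n)          # administrative censoring times
    T = np.minimum(death, horizon)                      # follow-up needed for complete costs
    observed = censor >= T                              # complete-case indicator
    followup = np.minimum(T, censor)
    rate = rng.gamma(2.0, 15.0, size=n)                 # daily cost rate (dollars/day)
    cost = rate * followup                              # observed (possibly partial) cost

    # Kaplan-Meier of the censoring distribution: K(t) ~ P(censoring time > t).
    K = km_survival(followup, (~observed).astype(int))

    naive_full_sample = cost.mean()                                 # biased downward
    ipw = np.mean(observed * cost / np.maximum(K(T), 1e-12))        # reweighted estimator
    true_mean = np.mean(rate * T)                                   # known only in simulation
    print(naive_full_sample, ipw, true_mean)
    ```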

  19. Evidence from a partial report task for forgetting in dynamic spatial memory.

    PubMed

    Gugerty, L

    1998-09-01

    G. Sperling (1960) and others have investigated memory for briefly presented stimuli by using a partial versus whole report technique in which participants sometimes reported part of a stimulus array and sometimes reported all of it. For simple, static stimulus displays, the partial report technique showed that participants could recall most of the information in the stimulus array but that this information faded quickly when participants engaged in whole report recall. An experiment was conducted that applied the partial report method to a task involving complex displays of moving objects. In the experiment, 26 participants viewed cars in a low-fidelity driving simulator and then reported the locations of some or all of the cars in each scene. A statistically significant advantage was found for the partial report trials. This finding suggests that detailed spatial location information was forgotten from dynamic spatial memory over the 14 s that it took participants to recall whole report trials. The experiment results suggest better ways of measuring situation awareness. Partial report recall techniques may give a more accurate measure of people's momentary situation awareness than whole report techniques. Potential applications of this research include simulator-based measures of situation awareness ability that can be part of inexpensive test batteries to select people for real-time tasks (e.g., in a driver licensing battery) and to identify people who need additional training.

  20. Image space subdivision for fast ray tracing

    NASA Astrophysics Data System (ADS)

    Yu, Billy T.; Yu, William W.

    1999-09-01

    Ray tracing is notorious for its computational requirements. A number of techniques have been proposed to speed up the process; however, a well-known statistic indicates that ray-object intersections occupy over 95% of the total image generation time, so it is most beneficial to work on this bottleneck. Ray-object intersection reduction techniques can be classified into three major categories: bounding volume hierarchies, space subdivision, and directional subdivision. This paper introduces a technique falling into the third category. To further speed up the process, it takes advantage of hierarchy by adopting an MX-CIF quadtree in the image space. This special kind of quadtree provides simple object allocation and ease of implementation. The text also includes a theoretical proof of the expected performance: for ray-polygon comparison, the technique reduces the order of complexity from linear to square root, O(n) -> O(2√n). Experiments with sceneries of various shapes, sizes, and complexities were conducted to verify this expectation. Results showed that the computational improvement grew with the complexity of the scenery; the experimental improvement was more than 90% and agreed with the theoretical value when the number of polygons exceeded 3000. The more complex the scene, the more efficient the acceleration. The algorithm described was implemented at the polygon level; however, it could easily be enhanced and extended to the object or higher levels.

  1. Single-Incision Transumbilical Surgery (SITUS) versus Single-Port Laparoscopic Surgery and conventional laparoscopic surgery: a prospective randomized comparative study of performance with novices in a dry laboratory.

    PubMed

    Schoenthaler, Martin; Avcil, Tuba; Sevcenco, Sabina; Nagele, Udo; Hermann, Thomas E W; Kuehhas, Franklin E; Shariat, Shahrokh F; Frankenschmidt, Alexander; Wetterauer, Ulrich; Miernik, Arkadiusz

    2015-01-01

    To evaluate the Single-Incision Transumbilical Surgery (SITUS) technique as compared to an established laparoendoscopic single-site surgery (LESS) technique (Single-Port Laparoscopic Surgery, SPLS) and conventional laparoscopy (CLS) in a surgical simulator model. Sixty-three medical students without previous laparoscopic experience were randomly assigned to one of the three groups (SITUS, SPLS and CLS). Subjects were asked to perform five standardized tasks of increasing difficulty adopted from the Fundamentals of Laparoscopic Surgery curriculum. Statistical evaluation included task completion times and accuracy. Overall performances of all tasks (except precision cutting) were significantly faster and of higher accuracy in the CLS and SITUS groups than in the SPLS group (p = 0.004 to p < 0.001). CLS and SITUS groups alone showed no significant difference in performance times and accuracy measurements for all tasks (p = 0.048 to p = 0.989). SITUS proved to be a simple, but highly effective technique to overcome restrictions of SPLS. In a surgical simulator model, novices were able to achieve task performances comparable to CLS and did significantly better than using a port-assisted LESS technique such as SPLS. The demonstrated advantages of SITUS may be attributed to a preservation of the basic principles of conventional laparoscopy, such as the use of straight instruments and an adequate degree of triangulation.

  2. Gynecomastia: glandular-liposculpture through a single transaxillary one hole incision.

    PubMed

    Lee, Yung Ki; Lee, Jun Hee; Kang, Sang Yoon

    2018-04-01

    Gynecomastia is characterized by the benign proliferation of breast tissue in men. Herein, we present a new method for the treatment of gynecomastia, using ultrasound-assisted liposuction with both conventional and reverse-cutting edge tip cannulas in combination with a pull-through lipectomy technique with pituitary forceps through a single transaxillary incision. Thirty patients were treated with this technique at the author's institution from January 2010 to January 2015. Ten patients were treated with conventional surgical excision of the glandular/fibrous breast tissue combined with liposuction through a periareolar incision before January 2010. Medical records, clinical photographs and linear analog scale scores were analyzed to compare the surgical results and complications. The patients were required to rate their cosmetic outcomes based on the linear analog scale with which they rated their own surgical results; the mean overall average score indicated a good or high level of satisfaction. There were no incidences of skin necrosis, hematoma, infection and scar contracture; however, one case each of seroma and nipple inversion did occur. Operative time was reduced overall using the new technique since it is relatively simple and straightforward. According to the evaluation by the four independent researchers, the patients treated with this new technique showed statistically significant improvements in scar and nipple-areolar complex (NAC) deformity compared to those who were treated using the conventional method. Glandular liposculpture through a single transaxillary incision is an efficient and safe technique that can provide aesthetically satisfying and consistent results.

  3. Using Fisher information to track stability in multivariate ...

    EPA Pesticide Factsheets

    With the current proliferation of data, the proficient use of statistical and mining techniques offers substantial benefits for capturing useful information from any dataset. As numerous approaches make use of information theory concepts, here we discuss how Fisher information (FI) can be applied to sustainability science problems and used in data mining applications by analyzing patterns in data. FI was developed as a measure of information content in data, and it has been adapted to assess order in complex system behaviors. The main advantage of the approach is the ability to collapse multiple variables into an index that can be used to assess stability and track overall trends in a system, including its regimes and regime shifts. Here, we provide a brief overview of FI theory, followed by a simple step-by-step numerical example of how to compute FI. Furthermore, we introduce an open-source Python library that can be freely downloaded from GitHub, and we use it in a simple case study to evaluate the evolution of FI for the global-mean temperature from 1880 to 2015. Results indicate significant declines in FI starting in 1978, suggesting a possible regime shift. Overall, the work demonstrates Fisher information as a useful method for assessing patterns in big data.
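
    A sketch of a moving-window Fisher information index for a single series is given below; the binned-probability approximation is one common discrete formulation, and the bin count, window length, and synthetic regime-shift series are illustrative choices rather than the settings of the cited study or its GitHub library.

    ```python
    # Sketch of a moving-window Fisher information index for one time series, using
    # a common binned-probability approximation (bin count, window length, and the
    # synthetic regime shift are illustrative choices only).
    import numpy as np

    def fisher_information(window, n_bins=10):
        """FI ~ 4 * sum_s (sqrt(p_s) - sqrt(p_{s+1}))^2 over binned system states."""
        counts, _ = np.histogram(window, bins=n_bins)
        q = np.sqrt(counts / counts.sum())
        return 4.0 * np.sum(np.diff(q) ** 2)

    rng = np.random.default_rng(4)
    # Synthetic series whose variance increases half-way through (a "regime shift").
    series = np.concatenate([rng.normal(0, 0.5, 500), rng.normal(0, 2.0, 500)])

    window_len, step = 100, 10
    fi = [fisher_information(series[i:i + window_len])
          for i in range(0, len(series) - window_len + 1, step)]
    print(np.round(fi, 2))  # a sustained change in FI flags the shift in system order
    ```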

  4. Fitting distributions to microbial contamination data collected with an unequal probability sampling design.

    PubMed

    Williams, M S; Ebel, E D; Cao, Y

    2013-01-01

    The fitting of statistical distributions to microbial sampling data is a common application in quantitative microbiology and risk assessment applications. An underlying assumption of most fitting techniques is that data are collected with simple random sampling, which is often times not the case. This study develops a weighted maximum likelihood estimation framework that is appropriate for microbiological samples that are collected with unequal probabilities of selection. A weighted maximum likelihood estimation framework is proposed for microbiological samples that are collected with unequal probabilities of selection. Two examples, based on the collection of food samples during processing, are provided to demonstrate the method and highlight the magnitude of biases in the maximum likelihood estimator when data are inappropriately treated as a simple random sample. Failure to properly weight samples to account for how data are collected can introduce substantial biases into inferences drawn from the data. The proposed methodology will reduce or eliminate an important source of bias in inferences drawn from the analysis of microbial data. This will also make comparisons between studies and the combination of results from different studies more reliable, which is important for risk assessment applications. © 2012 No claim to US Government works.
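
    The weighted maximum likelihood idea can be sketched as follows: each observation's log-likelihood contribution is multiplied by the inverse of its selection probability, so that the size-biased sample no longer distorts the fitted distribution. The lognormal concentrations, Poisson-type sampling design, and selection probabilities below are simulated assumptions, not the food-sampling data of the study.

    ```python
    # Sketch of weighted (pseudo-) maximum likelihood: each observation's
    # log-likelihood is weighted by the inverse of its selection probability
    # (simulated lognormal concentrations under a size-biased Poisson-type design).
    import numpy as np
    from scipy import optimize, stats

    rng = np.random.default_rng(5)
    N = 20000
    pop = rng.lognormal(mean=1.0, sigma=0.8, size=N)      # "population" of concentrations
    pi = np.clip(pop / pop.max(), 0.01, 1.0)              # larger values sampled more often
    selected = rng.random(N) < pi
    x, w = pop[selected], 1.0 / pi[selected]              # sample and its design weights

    def neg_loglik(theta, weights):
        mu, log_sigma = theta
        return -np.sum(weights * stats.lognorm.logpdf(x, s=np.exp(log_sigma), scale=np.exp(mu)))

    weighted = optimize.minimize(neg_loglik, [0.0, 0.0], args=(w,), method="Nelder-Mead")
    unweighted = optimize.minimize(neg_loglik, [0.0, 0.0], args=(np.ones_like(x),), method="Nelder-Mead")
    print("weighted fit:   mu=%.2f sigma=%.2f" % (weighted.x[0], np.exp(weighted.x[1])))
    print("unweighted fit: mu=%.2f sigma=%.2f (biased toward over-sampled large values)"
          % (unweighted.x[0], np.exp(unweighted.x[1])))
    ```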

  5. Application of a first impression triage in the Japan railway west disaster.

    PubMed

    Hashimoto, Atsunori; Ueda, Takahiro; Kuboyama, Kazutoshi; Yamada, Taihei; Terashima, Mariko; Miyawaki, Atsushi; Nakao, Atsunori; Kotani, Joji

    2013-01-01

    On April 25, 2005, a Japanese express train derailed into a building, resulting in 107 deaths and 549 injuries. We used "First Impression Triage (FIT)", our new triage strategy based on general inspection and palpation without counting pulse/respiratory rates, and determined the feasibility of FIT in the chaotic situation of treating a large number of injured people in a brief time period. The subjects included 39 patients who required hospitalization among 113 victims transferred to our hospital. After initial assessment with FIT by an emergency physician, patients were retrospectively reassessed with the preexisting modified Simple Triage and Rapid Treatment (START) methodology, based on Injury Severity Score, probability of survival, and ICU stay. FIT resulted in a shorter waiting time for triage. FIT designations comprised 11 red (immediate) and 28 yellow (delayed), while START assigned six to red and 32 to yellow. There were no statistical differences between FIT and START in the accuracy rate calculated by means of probability of survival and ICU stay. The overall validity and reliability of FIT determined by outcome assessment were similar to those of START. FIT would be a simple and accurate technique to quickly triage a large number of patients.

  6. Detection of outliers in the response and explanatory variables of the simple circular regression model

    NASA Astrophysics Data System (ADS)

    Mahmood, Ehab A.; Rana, Sohel; Hussin, Abdul Ghapor; Midi, Habshah

    2016-06-01

    The circular regression model may contain one or more data points that appear to be peculiar or inconsistent with the main part of the model. This may occur due to recording errors, sudden short events, sampling under abnormal conditions, etc. The existence of these data points, or "outliers", in the data set causes many problems in the research results and conclusions. Therefore, we should identify them before applying statistical analysis. In this article, we aim to propose a statistic to identify outliers in both the response and explanatory variables of the simple circular regression model. Our proposed statistic is the robust circular distance RCDxy, and it is justified by three robustness measures: the proportion of detected outliers, and the masking and swamping rates.

  7. Use of iPhone technology in improving acetabular component position in total hip arthroplasty.

    PubMed

    Tay, Xiau Wei; Zhang, Benny Xu; Gayagay, George

    2017-09-01

    Improper acetabular cup positioning is associated with a high risk of complications after total hip arthroplasty. The aim of our study was to objectively compare 3 methods, namely (1) free hand, (2) alignment jig (Sputnik), and (3) iPhone application, to identify an easy, reproducible, and accurate method for improving acetabular cup placement. We designed a simple setup and carried out a simple experiment (see Method section). On statistical analysis, the difference in inclination angles using the iPhone application compared with the freehand method was found to be statistically significant (F(2,51) = 4.17, P = .02) in the "untrained group". No statistical significance was detected for the other groups. This suggests a potential role for iPhone applications in helping junior surgeons overcome the steep learning curve.

  8. Microwave resonance lamp absorption technique for measuring temperature and OH number density in combustion environments

    NASA Technical Reports Server (NTRS)

    Lempert, Walter R.

    1988-01-01

    A simple technique for simultaneous determination of temperature and OH number density is described, along with characteristic results obtained from measurements using a premixed, hydrogen air flat flame burner. The instrumentation is based upon absorption of resonant radiation from a flowing microwave discharge lamp, and is rugged, relatively inexpensive, and very simple to operate.

  9. Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains.

    PubMed

    Busse, B L; Bezrukov, L; Blank, P S; Zimmerberg, J

    2016-08-08

    Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains.

  10. Reduction of PAPR in coded OFDM using fast Reed-Solomon codes over prime Galois fields

    NASA Astrophysics Data System (ADS)

    Motazedi, Mohammad Reza; Dianat, Reza

    2017-02-01

    In this work, two new techniques using Reed-Solomon (RS) codes over GF(257) and GF(65,537) are proposed for peak-to-average power ratio (PAPR) reduction in coded orthogonal frequency division multiplexing (OFDM) systems. The lengths of these codes are well-matched to the length of OFDM frames. Over these fields, the block lengths of codes are powers of two and we fully exploit the radix-2 fast Fourier transform algorithms. Multiplications and additions are simple modulus operations. These codes provide desirable randomness with a small perturbation in information symbols that is essential for generation of different statistically independent candidates. Our simulations show that the PAPR reduction ability of RS codes is the same as that of conventional selected mapping (SLM), but contrary to SLM, we can get error correction capability. Also for the second proposed technique, the transmission of side information is not needed. To the best of our knowledge, this is the first work using RS codes for PAPR reduction in single-input single-output systems.
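
    The Reed-Solomon constructions over GF(257) and GF(65,537) are beyond a short sketch, but the PAPR metric itself and the selected-mapping step of keeping the lowest-PAPR candidate can be illustrated with NumPy; random phase rotations stand in here for the code-generated candidates, and the subcarrier count and oversampling factor are arbitrary.

    ```python
    # Sketch: PAPR of an OFDM frame and selection of the lowest-PAPR candidate.
    # Random phase rotations stand in for the RS-code-generated candidates, and the
    # subcarrier count and oversampling factor are arbitrary.
    import numpy as np

    def papr_db(freq_symbols, oversample=4):
        n = len(freq_symbols)
        padded = np.zeros(n * oversample, dtype=complex)      # zero-pad mid-spectrum
        padded[:n // 2] = freq_symbols[:n // 2]
        padded[-n // 2:] = freq_symbols[n // 2:]
        power = np.abs(np.fft.ifft(padded)) ** 2
        return 10.0 * np.log10(power.max() / power.mean())

    rng = np.random.default_rng(6)
    n_sub = 256
    data = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=n_sub)  # QPSK symbols

    candidates = [data * np.exp(1j * rng.uniform(0, 2 * np.pi, n_sub)) for _ in range(8)]
    paprs = [papr_db(c) for c in candidates]
    best = int(np.argmin(paprs))
    print(f"original PAPR = {papr_db(data):.2f} dB, best candidate = {paprs[best]:.2f} dB")
    ```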

  11. Dynamics and spatio-temporal variability of environmental factors in Eastern Australia using functional principal component analysis

    USGS Publications Warehouse

    Szabo, J.K.; Fedriani, E.M.; Segovia-Gonzalez, M. M.; Astheimer, L.B.; Hooper, M.J.

    2010-01-01

    This paper introduces a new technique in ecology to analyze spatial and temporal variability in environmental variables. By using simple statistics, we explore the relations between abiotic and biotic variables that influence animal distributions. However, spatial and temporal variability in rainfall, a key variable in ecological studies, can cause difficulties for any basic model that includes time evolution. The study was at a landscape scale (three million square kilometers in eastern Australia), mainly over the period 1998–2004. We simultaneously considered qualitative spatial (soil and habitat types) and quantitative temporal (rainfall) variables in a Geographical Information System environment. In addition to some techniques commonly used in ecology, we applied a new method, Functional Principal Component Analysis, which proved to be very suitable for this case, as it explained more than 97% of the total variance of the rainfall data, providing us with substitute variables that are easier to manage and are even able to explain rainfall patterns. The main variable came from a habitat classification that showed strong correlations with rainfall values and soil types. © 2010 World Scientific Publishing Company.

  12. Comparative study of wine tannin classification using Fourier transform mid-infrared spectrometry and sensory analysis.

    PubMed

    Fernández, Katherina; Labarca, Ximena; Bordeu, Edmundo; Guesalaga, Andrés; Agosin, Eduardo

    2007-11-01

    Wine tannins are fundamental to the determination of wine quality. However, the chemical and sensorial analysis of these compounds is not straightforward and a simple and rapid technique is necessary. We analyzed the mid-infrared spectra of white, red, and model wines spiked with known amounts of skin or seed tannins, collected using Fourier transform mid-infrared (FT-MIR) transmission spectroscopy (400-4000 cm(-1)). The spectral data were classified according to their tannin source, skin or seed, and tannin concentration by means of discriminant analysis (DA) and soft independent modeling of class analogy (SIMCA) to obtain a probabilistic classification. Wines were also classified sensorially by a trained panel and compared with FT-MIR. SIMCA models gave the most accurate classification (over 97%) and prediction (over 60%) among the wine samples. The prediction was increased (over 73%) using the leave-one-out cross-validation technique. Sensory classification of the wines was less accurate than that obtained with FT-MIR and SIMCA. Overall, these results show the potential of FT-MIR spectroscopy, in combination with adequate statistical tools, to discriminate wines with different tannin levels.
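
    SIMCA has no standard scikit-learn implementation, so the sketch below illustrates only the discriminant-analysis step of the workflow, with PCA compression of synthetic "spectra" and leave-one-out cross-validation; the simulated skin/seed classes and component counts are assumptions for illustration.

    ```python
    # Sketch of the discriminant-analysis step only, on synthetic "spectra": PCA
    # compresses the collinear wavenumbers and leave-one-out cross-validation scores
    # the classifier (SIMCA has no standard scikit-learn implementation).
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    n_per_class, n_wavenumbers = 20, 300
    skin_spectra = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers)) + np.linspace(0, 1, n_wavenumbers)
    seed_spectra = rng.normal(0.0, 1.0, (n_per_class, n_wavenumbers)) + np.linspace(1, 0, n_wavenumbers)
    X = np.vstack([skin_spectra, seed_spectra])
    y = np.array(["skin"] * n_per_class + ["seed"] * n_per_class)

    model = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
    scores = cross_val_score(model, X, y, cv=LeaveOneOut())
    print(f"leave-one-out classification accuracy: {scores.mean():.2%}")
    ```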

  13. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. Using a problem typical of water supply well field design, several variants of this "stack ordering" approach are tested. The results are statistically assessed in terms of optimality and nominal reliability. This study demonstrates that simple ordering of a stack of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save more than 97% of the computational effort that would be required if the entire set of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
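
    The basic stack-ordering idea can be sketched with a toy reliability problem: realizations that have most often been critical for earlier candidates are evaluated first, so infeasible candidates are rejected after only a few model runs. The toy feasibility model, reliability target, and random search below are placeholders for the groundwater model and evolutionary algorithm of the study.

    ```python
    # Toy sketch of "stack ordering": realizations that have most often been critical
    # for earlier candidates are evaluated first, so infeasible candidates are
    # rejected after a few model runs (the feasibility model, reliability target,
    # and random search are placeholders for the study's groundwater problem).
    import numpy as np

    rng = np.random.default_rng(8)
    n_real = 500
    realizations = rng.lognormal(0.0, 0.5, size=n_real)   # uncertain parameter per realization

    target_reliability = 0.95
    max_failures = int((1 - target_reliability) * n_real)
    fail_counts = np.zeros(n_real)                        # how often each realization was critical
    model_runs = 0

    def feasible(design, k):
        """Toy 'model run': the design works for realization k if capacity meets demand."""
        return design * realizations[k] >= 1.0

    def evaluate(design):
        """True if the design meets the reliability target, scanning the ordered stack."""
        global model_runs
        failures = 0
        for k in np.argsort(-fail_counts):                # most critical realizations first
            model_runs += 1
            if not feasible(design, k):
                fail_counts[k] += 1
                failures += 1
                if failures > max_failures:
                    return False                          # early rejection saves the rest
        return True

    # Simple random search stands in for the evolutionary algorithm of the study.
    candidates = rng.uniform(0.5, 3.0, size=200)
    best = min((c for c in candidates if evaluate(c)), default=None)
    print(best, "found with", model_runs, "model runs instead of", 200 * n_real)
    ```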

  14. Resin embedded multicycle imaging (REMI): a tool to evaluate protein domains

    PubMed Central

    Busse, B. L.; Bezrukov, L.; Blank, P. S.; Zimmerberg, J.

    2016-01-01

    Protein complexes associated with cellular processes comprise a significant fraction of all biology, but our understanding of their heterogeneous organization remains inadequate, particularly for physiological densities of multiple protein species. Towards resolving this limitation, we here present a new technique based on resin-embedded multicycle imaging (REMI) of proteins in-situ. By stabilizing protein structure and antigenicity in acrylic resins, affinity labels were repeatedly applied, imaged, removed, and replaced. In principle, an arbitrarily large number of proteins of interest may be imaged on the same specimen with subsequent digital overlay. A series of novel preparative methods were developed to address the problem of imaging multiple protein species in areas of the plasma membrane or volumes of cytoplasm of individual cells. For multiplexed examination of antibody staining we used straightforward computational techniques to align sequential images, and super-resolution microscopy was used to further define membrane protein colocalization. We give one example of a fibroblast membrane with eight multiplexed proteins. A simple statistical analysis of this limited membrane proteomic dataset is sufficient to demonstrate the analytical power contributed by additional imaged proteins when studying membrane protein domains. PMID:27499335

  15. A weighted generalized score statistic for comparison of predictive values of diagnostic tests.

    PubMed

    Kosinski, Andrzej S

    2013-03-15

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting. Copyright © 2012 John Wiley & Sons, Ltd.

  16. A weighted generalized score statistic for comparison of predictive values of diagnostic tests

    PubMed Central

    Kosinski, Andrzej S.

    2013-01-01

    Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations which are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we present, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic which incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, it always reduces to the score statistic in the independent samples situation, and it preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the weighted generalized score test statistic in a general GEE setting. PMID:22912343

  17. Monte Carlo Techniques for Nuclear Systems - Theory Lectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    These are lecture notes for a Monte Carlo class given at the University of New Mexico. The following topics are covered: course information; nuclear eng. review & MC; random numbers and sampling; computational geometry; collision physics; tallies and statistics; eigenvalue calculations I; eigenvalue calculations II; eigenvalue calculations III; variance reduction; parallel Monte Carlo; parameter studies; fission matrix and higher eigenmodes; Doppler broadening; Monte Carlo depletion; HTGR modeling; coupled MC and T/H calculations; fission energy deposition. Solving particle transport problems with the Monte Carlo method is simple - just simulate the particle behavior. The devil is in the details, however. These lectures provide a balanced approach to the theory and practice of Monte Carlo simulation codes. The first lectures provide an overview of Monte Carlo simulation methods, covering the transport equation, random sampling, computational geometry, collision physics, and statistics. The next lectures focus on the state-of-the-art in Monte Carlo criticality simulations, covering the theory of eigenvalue calculations, convergence analysis, dominance ratio calculations, bias in Keff and tallies, bias in uncertainties, a case study of a realistic calculation, and Wielandt acceleration techniques. The remaining lectures cover advanced topics, including HTGR modeling and stochastic geometry, temperature dependence, fission energy deposition, depletion calculations, parallel calculations, and parameter studies. This portion of the class focuses on using MCNP to perform criticality calculations for reactor physics and criticality safety applications. It is an intermediate level class, intended for those with at least some familiarity with MCNP. Class examples provide hands-on experience at running the code, plotting both geometry and results, and understanding the code output. The class includes lectures & hands-on computer use for a variety of Monte Carlo calculations. Beginning MCNP users are encouraged to review LA-UR-09-00380, "Criticality Calculations with MCNP: A Primer (3rd Edition)" (available at http://mcnp.lanl.gov under "Reference Collection") prior to the class. No Monte Carlo class can be complete without having students write their own simple Monte Carlo routines for basic random sampling, use of the random number generator, and simplified particle transport simulation.
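
    In the spirit of the closing remark above about writing simple Monte Carlo routines, the sketch below estimates the transmission of neutral particles through a one-dimensional slab using exponential free-path sampling and isotropic scattering; the cross-sections and slab thickness are arbitrary classroom values and the routine is unrelated to MCNP.

    ```python
    # A classroom-style routine in the spirit of the closing exercise above: estimate
    # the probability that a particle crosses a 1-D slab, sampling free paths from an
    # exponential and isotropic scattering angles (arbitrary illustrative cross-sections).
    import numpy as np

    rng = np.random.default_rng(9)
    sigma_t, sigma_s = 1.0, 0.7        # total and scattering macroscopic cross-sections (1/cm)
    thickness = 3.0                    # slab thickness (cm)
    n_hist = 100_000

    transmitted = 0
    for _ in range(n_hist):
        x, mu = 0.0, 1.0               # start at the left face, moving straight in
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # distance to the next collision
            if x >= thickness:
                transmitted += 1
                break
            if x < 0.0:
                break                                   # leaked back out of the slab
            if rng.random() > sigma_s / sigma_t:
                break                                   # absorbed
            mu = rng.uniform(-1.0, 1.0)                 # isotropic scatter: new direction cosine

    p = transmitted / n_hist
    err = np.sqrt(p * (1 - p) / n_hist)                 # simple binomial standard error
    print(f"transmission probability = {p:.4f} +/- {err:.4f}")
    ```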

  18. A very simple technique to repair Grynfeltt-Lesshaft hernia.

    PubMed

    Solaini, Leonardo; di Francesco, F; Gourgiotis, S; Solaini, Luciano

    2010-08-01

    A very simple technique to repair a superior lumbar hernia is described. The location of this type of hernia, also known as the Grynfeltt-Lesshaft hernia, is defined by a triangle placed in the lumbar region. An unusual case of a 67-year-old woman with a superior lumbar hernia is reported. The diagnosis was made by physical examination. The defect of the posterior abdominal wall was repaired with a polypropylene dart mesh. The patient had no evidence of recurrence at 11 months follow up. The surgical approach described in this paper is simple and easy to perform, and its result is comparable with other techniques that are much more sophisticated. No cases on the use of dart mesh to repair Grynfeltt-Lesshaft hernia have been reported by surgical journals indexed in PubMed.

  19. A Dosimetric Comparison of Breast Radiotherapy Techniques to Treat Locoregional Lymph Nodes Including the Internal Mammary Chain.

    PubMed

    Ranger, A; Dunlop, A; Hutchinson, K; Convery, H; Maclennan, M K; Chantler, H; Twyman, N; Rose, C; McQuaid, D; Amos, R A; Griffin, C; deSouza, N M; Donovan, E; Harris, E; Coles, C E; Kirby, A

    2018-06-01

    Radiotherapy target volumes in early breast cancer treatment increasingly include the internal mammary chain (IMC). In order to maximise survival benefits of IMC radiotherapy, doses to the heart and lung should be minimised. This dosimetry study compared the ability of three-dimensional conformal radiotherapy, arc therapy and proton beam therapy (PBT) techniques with and without breath-hold to achieve target volume constraints while minimising dose to organs at risk (OARs). In 14 patients' datasets, seven IMC radiotherapy techniques were compared: wide tangent (WT) three-dimensional conformal radiotherapy, volumetric-modulated arc therapy (VMAT) and PBT, each in voluntary deep inspiratory breath-hold (vDIBH) and free breathing (FB), and tomotherapy in FB only. Target volume coverage and OAR doses were measured for each technique. These were compared using a one-way ANOVA with all pairwise comparisons tested using Bonferroni's multiple comparisons test, with adjusted P-values ≤ 0.05 indicating statistical significance. One hundred per cent of WT(vDIBH), 43% of WT(FB), 100% of VMAT(vDIBH), 86% of VMAT(FB), 100% of tomotherapy FB and 100% of PBT plans in vDIBH and FB passed all mandatory constraints. However, coverage of the IMC with 90% of the prescribed dose was significantly better than all other techniques using VMAT(vDIBH), PBT(vDIBH) and PBT(FB) (mean IMC coverage ± 1 standard deviation = 96.0% ± 4.3, 99.8% ± 0.3 and 99.0% ± 0.2, respectively). The mean heart dose was significantly reduced in vDIBH compared with FB for both the WT (P < 0.0001) and VMAT (P < 0.0001) techniques. There was no advantage in target volume coverage or OAR doses for PBT(vDIBH) compared with PBT(FB). Simple WT radiotherapy delivered in vDIBH achieves satisfactory coverage of the IMC while meeting heart and lung dose constraints. However, where higher isodose coverage is required, VMAT(vDIBH) is the optimal photon technique. The lowest OAR doses are achieved by PBT, in which the use of vDIBH does not improve dose statistics. Crown Copyright © 2018. Published by Elsevier Ltd. All rights reserved.

  20. Entropy Is Simple, Qualitatively.

    ERIC Educational Resources Information Center

    Lambert, Frank L.

    2002-01-01

    Suggests that qualitatively, entropy is simple. Entropy increase from a macro viewpoint is a measure of the dispersal of energy from localized to spread out at a temperature T. Fundamentally based on statistical and quantum mechanics, this approach is superior to the non-fundamental "disorder" as a descriptor of entropy change. (MM)

  1. Evaluation of Bayesian Sequential Proportion Estimation Using Analyst Labels

    NASA Technical Reports Server (NTRS)

    Lennington, R. K.; Abotteen, K. M. (Principal Investigator)

    1980-01-01

    The author has identified the following significant results. A total of ten Large Area Crop Inventory Experiment Phase 3 blind sites and analyst-interpreter labels were used in a study to compare proportional estimates obtained by the Bayes sequential procedure with estimates obtained from simple random sampling and from Procedure 1. The analyst error rate using the Bayes technique was shown to be no greater than that for the simple random sampling. Also, the segment proportion estimates produced using this technique had smaller bias and mean squared errors than the estimates produced using either simple random sampling or Procedure 1.

  2. WE-AB-209-12: Quasi Constrained Multi-Criteria Optimization for Automated Radiation Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Watkins, W.T.; Siebers, J.V.

    Purpose: To introduce quasi-constrained Multi-Criteria Optimization (qcMCO) for unsupervised radiation therapy optimization which generates alternative patient-specific plans emphasizing dosimetric tradeoffs and conformance to clinical constraints for multiple delivery techniques. Methods: For N Organs At Risk (OARs) and M delivery techniques, qcMCO generates M(N+1) alternative treatment plans per patient. Objective weight variations for OARs and targets are used to generate alternative qcMCO plans. For 30 locally advanced lung cancer patients, qcMCO plans were generated for dosimetric tradeoffs to four OARs: each lung, heart, and esophagus (N=4) and 4 delivery techniques (simple 4-field arrangements, 9-field coplanar IMRT, 27-field non-coplanar IMRT, and non-coplanar Arc IMRT). Quasi-constrained objectives included target prescription isodose to 95% (PTV-D95), maximum PTV dose (PTV-Dmax) < 110% of prescription, and spinal cord Dmax < 45 Gy. The algorithm’s ability to meet these constraints while simultaneously revealing dosimetric tradeoffs was investigated. Statistically significant dosimetric tradeoffs were defined such that the coefficient of determination between dosimetric indices which varied by at least 5 Gy between different plans was >0.8. Results: The qcMCO plans varied mean dose by >5 Gy to ipsilateral lung for 24/30 patients, contralateral lung for 29/30 patients, esophagus for 29/30 patients, and heart for 19/30 patients. In the 600 plans computed without human interaction, average PTV-D95=67.4±3.3 Gy, PTV-Dmax=79.2±5.3 Gy, and spinal cord Dmax was >45 Gy in 93 plans (>50 Gy in 2/600 plans). Statistically significant dosimetric tradeoffs were evident in 19/30 plans, including multiple tradeoffs of at least 5 Gy between multiple OARs in 7/30 cases. The most common statistically significant tradeoff was increasing PTV-Dmax to reduce OAR dose (15/30 patients). Conclusion: The qcMCO method can conform to quasi-constrained objectives while revealing significant variations in OAR doses including mean dose reductions >5 Gy. Clinical implementation will facilitate patient-specific decision making based on achievable dosimetry as opposed to accept/reject models based on population-derived objectives.

  3. Effect of Various Eye Exercise Techniques along with Pranayama on Visual Reaction Time: A Case Control Study

    PubMed Central

    Gosewade, Nitin B.; Shende, Vinod S.; Kashalikar, Shriniwas J.

    2013-01-01

    Introduction: We depend on eyesight more than on any of our other senses to maneuver through the space around us. In a fraction of a second, our eyes work with our brain to tell us the size, shape, colour, and texture of an object. Our eyes are the body's most highly developed sensory organs. The use of computers and television in the era of information technology has raised professional productivity and saved time, but it has also led to an increase in the number of patients with ocular complaints. Aims: The objective of the study was to examine the effect of eye exercise techniques along with kapalbhati pranayama on Visual Reaction Time (VRT). Material & Methods: A total of 60 subjects aged 18–30 years were recruited in the study. All the subjects were divided into two equal groups (study group and control group) containing 30 subjects (18 male and 12 female) each. Both the male and female subjects were selected on the basis of their voluntary involvement. Visual reaction time for red and green light was recorded from all 60 subjects before the start of the study. Study group subjects were trained to practice various eye exercise techniques and kapalbhati pranayama regularly for 8 weeks, whereas the control group continued their routine activities. After 8 weeks, visual reaction time was again measured for red and green light from all 60 subjects. Statistical Analysis: Data are expressed as mean ± SD; Student's t-test was applied for analysis, and a p value <0.05 was taken as statistically significant. Results: Statistical analysis shows a significant decrease in the visual reaction time for red and green light after the intervention in the study group (p value <0.05), whereas there was no significant decrease in VRT in the control group (p value >0.05). Conclusion: The results of our study suggest that simple eye exercises along with pranayama help improve visual reaction time. PMID:24179885

  4. Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.

    PubMed

    Miskell, Georgia; Salmond, Jennifer A; Williams, David E

    2018-04-27

    We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
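
    A minimal sketch of the mean/standard-deviation matching idea: running gain and offset estimates are derived so that the low-cost sensor's rolling statistics match those of the proxy. The simulated diurnal signal, drift model, and 3-day window below are illustrative assumptions only.

    ```python
    # Sketch of the mean/standard-deviation matching idea: running gain and offset
    # estimates make the low-cost sensor's rolling statistics match the proxy's
    # (simulated signals; the 3-day window and drift model are illustrative).
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(10)
    hours = pd.date_range("2024-01-01", periods=24 * 30, freq="h")
    truth = 20 + 10 * np.sin(2 * np.pi * np.arange(len(hours)) / 24)     # diurnal pollutant cycle
    proxy = truth + rng.normal(0, 1.0, len(hours))                       # well-maintained instrument
    sensor = 0.6 * truth + 5 + rng.normal(0, 1.5, len(hours))            # drifted low-cost device

    df = pd.DataFrame({"proxy": proxy, "sensor": sensor}, index=hours)
    window = 24 * 3                                                      # 3-day running window
    roll_p, roll_s = df["proxy"].rolling(window), df["sensor"].rolling(window)

    gain = roll_p.std() / roll_s.std()             # running slope estimate
    offset = roll_p.mean() - gain * roll_s.mean()  # running offset estimate
    df["sensor_corrected"] = gain * df["sensor"] + offset

    print(df[["proxy", "sensor", "sensor_corrected"]].dropna().tail())
    ```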

  5. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In recent decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
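
    Step (i) of the workflow above, building the complex-valued data set via a Hilbert transformation, can be sketched with SciPy; the synthetic propagating oscillation below is a stand-in for gravimetric or climate fields, and the complex ICA step itself (diagonalization of fourth-order cumulants) is not reproduced here.

    ```python
    # Sketch of step (i) above: building the complex-valued data set whose real part
    # is the observed series and whose imaginary part carries its rate of variability,
    # via the Hilbert transform (synthetic propagating signal; the complex ICA step
    # itself is not reproduced here).
    import numpy as np
    from scipy.signal import detrend, hilbert

    rng = np.random.default_rng(11)
    t = np.arange(240)                                     # e.g., 20 years of monthly samples
    grid_points = 50
    phase = np.linspace(0, np.pi, grid_points)[:, None]    # phase lag across grid points
    obs = np.cos(2 * np.pi * t / 60 + phase) + 0.2 * rng.normal(size=(grid_points, t.size))

    centered = detrend(obs, axis=1, type="constant")       # remove the temporal mean
    analytic = hilbert(centered, axis=1)                   # complex series: obs + i * H(obs)

    print(analytic.shape, analytic.dtype)                  # (50, 240) complex128
    # 'analytic' would now be decomposed by a complex ICA based on fourth-order cumulants.
    ```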

  6. Guidelines and Procedures for Computing Time-Series Suspended-Sediment Concentrations and Loads from In-Stream Turbidity-Sensor and Streamflow Data

    USGS Publications Warehouse

    Rasmussen, Patrick P.; Gray, John R.; Glysson, G. Douglas; Ziegler, Andrew C.

    2009-01-01

    In-stream continuous turbidity and streamflow data, calibrated with measured suspended-sediment concentration data, can be used to compute a time series of suspended-sediment concentration and load at a stream site. Development of a simple linear (ordinary least squares) regression model for computing suspended-sediment concentrations from instantaneous turbidity data is the first step in the computation process. If the model standard percentage error (MSPE) of the simple linear regression model meets a minimum criterion, this model should be used to compute a time series of suspended-sediment concentrations. Otherwise, a multiple linear regression model using paired instantaneous turbidity and streamflow data is developed and compared to the simple regression model. If the inclusion of the streamflow variable proves to be statistically significant and the uncertainty associated with the multiple regression model results in an improvement over that for the simple linear model, the turbidity-streamflow multiple linear regression model should be used to compute a suspended-sediment concentration time series. The computed concentration time series is subsequently used with its paired streamflow time series to compute suspended-sediment loads by standard U.S. Geological Survey techniques. Once an acceptable regression model is developed, it can be used to compute suspended-sediment concentration beyond the period of record used in model development with proper ongoing collection and analysis of calibration samples. Regression models to compute suspended-sediment concentrations are generally site specific and should never be considered static, but they represent a set period in a continually dynamic system in which additional data will help verify any change in sediment load, type, and source.
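
    A sketch of the model-selection step described above: fit the simple turbidity-only regression, then test whether adding streamflow is statistically significant and reduces the relative model error. The synthetic data, and the simplified percentage-error measure standing in for the MSPE, are assumptions; the published guidance also involves log-transformation and bias correction, which are omitted here.

    ```python
    # Sketch of the model-selection step: fit the simple turbidity-only regression for
    # suspended-sediment concentration (SSC), then check whether adding streamflow is
    # statistically significant and lowers the relative model error (synthetic data;
    # the published guidance also uses log-transforms and bias corrections, omitted here).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(12)
    n = 120
    turbidity = rng.uniform(5, 400, n)                                  # FNU
    streamflow = rng.uniform(1, 50, n)                                  # m^3/s
    ssc = 0.9 * turbidity + 2.0 * streamflow + rng.normal(0, 15, n)     # mg/L

    simple = sm.OLS(ssc, sm.add_constant(turbidity)).fit()
    multiple = sm.OLS(ssc, sm.add_constant(np.column_stack([turbidity, streamflow]))).fit()

    def relative_error(fit, y):
        """A simplified stand-in for the model standard percentage error (MSPE)."""
        return 100.0 * np.sqrt(fit.mse_resid) / y.mean()

    print(f"turbidity-only model error:      {relative_error(simple, ssc):.1f}%")
    print(f"turbidity + streamflow error:    {relative_error(multiple, ssc):.1f}%")
    print(f"p-value for the streamflow term: {multiple.pvalues[2]:.3g}")
    ```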

  7. Incorporating principal component analysis into air quality ...

    EPA Pesticide Factsheets

    The efficacy of standard air quality model evaluation techniques is becoming compromised as the simulation periods continue to lengthen in response to ever increasing computing capacity. Accordingly, the purpose of this paper is to demonstrate a statistical approach called Principal Component Analysis (PCA) with the intent of motivating its use by the evaluation community. One of the main objectives of PCA is to identify, through data reduction, the recurring and independent modes of variations (or signals) within a very large dataset, thereby summarizing the essential information of that dataset so that meaningful and descriptive conclusions can be made. In this demonstration, PCA is applied to a simple evaluation metric – the model bias associated with EPA's Community Multi-scale Air Quality (CMAQ) model when compared to weekly observations of sulfate (SO42−) and ammonium (NH4+) ambient air concentrations measured by the Clean Air Status and Trends Network (CASTNet). The advantages of using this technique are demonstrated as it identifies strong and systematic patterns of CMAQ model bias across a myriad of spatial and temporal scales that are neither constrained to geopolitical boundaries nor monthly/seasonal time periods (a limitation of many current studies). The technique also identifies locations (station–grid cell pairs) that are used as indicators for a more thorough diagnostic evaluation thereby hastening and facilitating understanding of the prob
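
    The evaluation idea can be sketched by applying PCA to a station-by-week matrix of model-minus-observation bias so that recurring bias patterns emerge as the leading components; the synthetic bias matrix below merely stands in for CMAQ-CASTNet sulfate or ammonium biases.

    ```python
    # Sketch of the evaluation idea: PCA applied to a (station x week) matrix of
    # model-minus-observation bias, so recurring spatial/temporal bias patterns appear
    # as the leading components (synthetic bias matrix standing in for CMAQ-CASTNet data).
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(13)
    n_stations, n_weeks = 80, 156
    seasonal = np.sin(2 * np.pi * np.arange(n_weeks) / 52)     # a systematic seasonal bias signal
    loadings = rng.normal(size=(n_stations, 1))                # stations load on it differently
    bias = loadings @ seasonal[None, :] + 0.5 * rng.normal(size=(n_stations, n_weeks))

    pca = PCA(n_components=3)
    station_scores = pca.fit_transform(bias)                   # station loadings on each bias mode
    print(pca.explained_variance_ratio_)                       # variance explained by each mode
    # The temporal behaviour of each mode is in pca.components_ (shape: modes x weeks).
    ```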

  8. Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.

    PubMed

    Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. A large number of techniques of this kind can be found in the literature, including many recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, the Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than those used previously: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  9. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    PubMed Central

    Osaba, E.; Diaz, F.; Carballedo, R.; Onieva, E.; Perallos, A.

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community. A large number of techniques of this kind can be found in the literature, including many recently proposed ones such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, the Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested with two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than those used previously: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queen problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results. PMID:25165742

  10. Prediction of down-gradient impacts of DNAPL source depletion using tracer techniques: Laboratory and modeling validation

    NASA Astrophysics Data System (ADS)

    Jawitz, J. W.; Basu, N.; Chen, X.

    2007-05-01

    Interwell application of coupled nonreactive and reactive tracers through aquifer contaminant source zones enables quantitative characterization of aquifer heterogeneity and contaminant architecture. Parameters obtained from tracer tests are presented here in a Lagrangian framework that can be used to predict the dissolution of nonaqueous phase liquid (NAPL) contaminants. Nonreactive tracers are commonly used to provide information about travel time distributions in hydrologic systems. Reactive tracers have more recently been introduced as a tool to quantify the amount of NAPL contaminant present within the tracer swept volume. Our group has extended reactive tracer techniques to also characterize NAPL spatial distribution heterogeneity. By conceptualizing the flow field through an aquifer as a collection of streamtubes, the aquifer hydrodynamic heterogeneities may be characterized by a nonreactive tracer travel time distribution, and NAPL spatial distribution heterogeneity may be similarly described using reactive travel time distributions. The combined statistics of these distributions are used to derive a simple analytical solution for contaminant dissolution. This analytical solution, and the tracer techniques used for its parameterization, were validated both numerically and experimentally. Illustrative applications are presented from numerical simulations using the multiphase flow and transport simulator UTCHEM, and laboratory experiments of surfactant-enhanced NAPL remediation in two-dimensional flow chambers.

  11. A Simple Formula to Calculate Shallow-Water Transmission Loss by Means of a Least-Squares Surface Fit Technique.

    DTIC Science & Technology

    1980-09-01

    SACLANT ASW Research Centre Memorandum SM-139, by Ole F. Hastrup and Tuncay Akal, September 1980. The memorandum presents a simple formula to calculate shallow-water transmission loss by means of a least-squares surface fit technique. Only scanned cover-page text is available for this record.

  12. Comparative Analysis Between Computed and Conventional Inferior Alveolar Nerve Block Techniques.

    PubMed

    Araújo, Gabriela Madeira; Barbalho, Jimmy Charles Melo; Dias, Tasiana Guedes de Souza; Santos, Thiago de Santana; Vasconcellos, Ricardo José de Holanda; de Morais, Hécio Henrique Araújo

    2015-11-01

    The aim of this randomized, double-blind, controlled trial was to compare the computed and conventional inferior alveolar nerve block techniques in symmetrically positioned inferior third molars. Both computed and conventional anesthetic techniques were performed in 29 healthy patients (58 surgeries) aged between 18 and 40 years. The anesthetic of choice was 2% lidocaine with 1:200,000 epinephrine. The Visual Analogue Scale assessed the pain variable after anesthetic infiltration. Patient satisfaction was evaluated using the Likert Scale. Heart and respiratory rates, mean time to perform the technique, and the need for additional anesthesia were also evaluated. Mean pain scores were higher for the conventional technique than for the computed technique, 3.45 ± 2.73 and 2.86 ± 1.96, respectively, but the difference was not statistically significant (P > 0.05). Patient satisfaction showed no statistically significant differences. The mean times to perform the computed and conventional techniques were 3.85 and 1.61 minutes, respectively, a statistically significant difference (P < 0.001). The computed anesthetic technique showed lower mean pain perception, but did not show statistically significant differences when contrasted to the conventional technique.

  13. DETERMINING SERUM BICARBONATE—A Simple Syringe Titrator and Colorimeter

    PubMed Central

    Boone, Charles W.; Field, John B.

    1953-01-01

    The use of a tuberculin syringe as a burette has made possible an easy bedside technique for the determination of serum bicarbonate. By combining it with the use of a simple colorimeter, a relatively untrained person can do numerous bicarbonate determinations with a high degree of accuracy. The same technique also lends itself to other colorimetric clinical procedures such as determination of gastric acidity. PMID:13106724

  14. Determining serum bicarbonate; a simple syringe titrator and colorimeter.

    PubMed

    BOONE, C W; FIELD, J B

    1953-12-01

    The use of a tuberculin syringe as a burette has made possible an easy bedside technique for the determination of serum bicarbonate. By combining it with the use of a simple colorimeter, a relatively untrained person can do numerous bicarbonate determinations with a high degree of accuracy. The same technique also lends itself to other colorimetric clinical procedures such as determination of gastric acidity.

  15. Equivalent circuit models for interpreting impedance perturbation spectroscopy data

    NASA Astrophysics Data System (ADS)

    Smith, R. Lowell

    2004-07-01

    As in-situ structural integrity monitoring disciplines mature, there is a growing need to process sensor/actuator data efficiently in real time. Although smaller, faster embedded processors will contribute to this, it is also important to develop straightforward, robust methods to reduce the overall computational burden for practical applications of interest. This paper addresses the use of equivalent circuit modeling techniques for inferring structure attributes monitored using impedance perturbation spectroscopy. In pioneering work about ten years ago significant progress was associated with the development of simple impedance models derived from the piezoelectric equations. Using mathematical modeling tools currently available from research in ultrasonics and impedance spectroscopy is expected to provide additional synergistic benefits. For purposes of structural health monitoring the objective is to use impedance spectroscopy data to infer the physical condition of structures to which small piezoelectric actuators are bonded. Features of interest include stiffness changes, mass loading, and damping or mechanical losses. Equivalent circuit models are typically simple enough to facilitate the development of practical analytical models of the actuator-structure interaction. This type of parametric structure model allows raw impedance/admittance data to be interpreted optimally using standard multiple, nonlinear regression analysis. One potential long-term outcome is the possibility of cataloging measured viscoelastic properties of the mechanical subsystems of interest as simple lists of attributes and their statistical uncertainties, whose evolution can be followed in time. Equivalent circuit models are well suited for addressing calibration and self-consistency issues such as temperature corrections, Poisson mode coupling, and distributed relaxation processes.

  16. Simple Statistics: - Summarized!

    ERIC Educational Resources Information Center

    Blai, Boris, Jr.

    Statistics is an essential tool for making sound decisions. It is concerned with probability distribution models, testing of hypotheses, significance tests, and other means of determining the correctness of deductions and the most likely outcome of decisions. Measures of central tendency include the mean, median, and mode. A second…

  17. Contrast Analysis: A Tutorial

    ERIC Educational Resources Information Center

    Haans, Antal

    2018-01-01

    Contrast analysis is a relatively simple but effective statistical method for testing theoretical predictions about differences between group means against the empirical data. Despite its advantages, contrast analysis is hardly used to date, perhaps because it is not implemented in a convenient manner in many statistical software packages. This…

  18. Superordinate Shape Classification Using Natural Shape Statistics

    ERIC Educational Resources Information Center

    Wilder, John; Feldman, Jacob; Singh, Manish

    2011-01-01

    This paper investigates the classification of shapes into broad natural categories such as "animal" or "leaf". We asked whether such coarse classifications can be achieved by a simple statistical classification of the shape skeleton. We surveyed databases of natural shapes, extracting shape skeletons and tabulating their…

  19. Modelling unsupervised online-learning of artificial grammars: linking implicit and statistical learning.

    PubMed

    Rohrmeier, Martin A; Cross, Ian

    2014-07-01

    Humans rapidly learn complex structures in various domains. Findings of above-chance performance of some untrained control groups in artificial grammar learning studies raise questions about the extent to which learning can occur in an untrained, unsupervised testing situation with both correct and incorrect structures. The plausibility of unsupervised online-learning effects was modelled with n-gram, chunking and simple recurrent network models. A novel evaluation framework was applied, which alternates forced binary grammaticality judgments and subsequent learning of the same stimulus. Our results indicate a strong online learning effect for n-gram and chunking models and a weaker effect for simple recurrent network models. Such findings suggest that online learning is a plausible effect of statistical chunk learning that is possible when ungrammatical sequences contain a large proportion of grammatical chunks. Such common effects of continuous statistical learning may underlie statistical and implicit learning paradigms and raise implications for study design and testing methodologies. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. A statistical method for measuring activation of gene regulatory networks.

    PubMed

    Esteves, Gustavo H; Reis, Luiz F L

    2018-06-13

    Gene expression data analysis is of great importance for modern molecular biology, given our ability to measure the expression profiles of thousands of genes, enabling studies rooted in systems biology. In this work, we propose a simple statistical model for measuring the activation of gene regulatory networks, instead of the traditional gene co-expression networks. We present the mathematical construction of a statistical procedure for testing hypotheses regarding gene regulatory network activation. The real probability distribution for the test statistic is evaluated by a permutation-based study. To illustrate the functionality of the proposed methodology, we also present a simple example based on a small hypothetical network and the activation measurement of two KEGG networks, both based on gene expression data collected from gastric and esophageal samples. The two KEGG networks were also analyzed for a public database available through NCBI-GEO, presented as Supplementary Material. This method was implemented in an R package that is available at the BioConductor project website under the name maigesPack.
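
    A generic sketch of the permutation step, assuming an expression matrix (genes by samples), binary 0/1 sample labels as a NumPy array, and the row indices of the genes in one network. The score function is illustrative; this is not the maigesPack implementation:

    import numpy as np

    def network_permutation_pvalue(expr, labels, network_rows, n_perm=1000, seed=1):
        rng = np.random.default_rng(seed)
        sub = expr[network_rows]                              # genes belonging to the network
        def score(lab):
            # mean absolute difference of group means across the network's genes
            return np.abs(sub[:, lab == 1].mean(axis=1)
                          - sub[:, lab == 0].mean(axis=1)).mean()
        observed = score(labels)
        perms = np.array([score(rng.permutation(labels)) for _ in range(n_perm)])
        return (np.sum(perms >= observed) + 1) / (n_perm + 1)  # permutation p-value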

  1. Contingency and statistical laws in replicate microbial closed ecosystems.

    PubMed

    Hekstra, Doeke R; Leibler, Stanislas

    2012-05-25

    Contingency, the persistent influence of past random events, pervades biology. To what extent, then, is each course of ecological or evolutionary dynamics unique, and to what extent are these dynamics subject to a common statistical structure? Addressing this question requires replicate measurements to search for emergent statistical laws. We establish a readily replicated microbial closed ecosystem (CES), sustaining its three species for years. We precisely measure the local population density of each species in many CES replicates, started from the same initial conditions and kept under constant light and temperature. The covariation among replicates of the three species densities acquires a stable structure, which could be decomposed into discrete eigenvectors, or "ecomodes." The largest ecomode dominates population density fluctuations around the replicate-average dynamics. These fluctuations follow simple power laws consistent with a geometric random walk. Thus, variability in ecological dynamics can be studied with CES replicates and described by simple statistical laws. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. The importance of topographically corrected null models for analyzing ecological point processes.

    PubMed

    McDowall, Philip; Lynch, Heather J

    2017-07-01

    Analyses of point process patterns and related techniques (e.g., MaxEnt) make use of the expected number of occurrences per unit area and second-order statistics based on the distance between occurrences. Ecologists working with point process data often assume that points exist on a two-dimensional x-y plane or within a three-dimensional volume, when in fact many observed point patterns are generated on a two-dimensional surface existing within three-dimensional space. For many surfaces, however, such as the topography of landscapes, the projection from the surface to the x-y plane preserves neither area nor distance. As such, when these point patterns are implicitly projected to and analyzed in the x-y plane, our expectations of the point pattern's statistical properties may not be met. When used in hypothesis testing, we find that the failure to account for the topography of the generating surface may bias statistical tests that incorrectly identify clustering and, furthermore, may bias coefficients in inhomogeneous point process models that incorporate slope as a covariate. We demonstrate the circumstances under which this bias is significant, and present simple methods that allow point processes to be simulated with corrections for topography. These point patterns can then be used to generate "topographically corrected" null models against which observed point processes can be compared. © 2017 by the Ecological Society of America.

  3. Using statistical process control to make data-based clinical decisions.

    PubMed

    Pfadt, A; Wheeler, D J

    1995-01-01

    Applied behavior analysis is based on an investigation of variability due to interrelationships among antecedents, behavior, and consequences. This permits testable hypotheses about the causes of behavior as well as for the course of treatment to be evaluated empirically. Such information provides corrective feedback for making data-based clinical decisions. This paper considers how a different approach to the analysis of variability based on the writings of Walter Shewart and W. Edwards Deming in the area of industrial quality control helps to achieve similar objectives. Statistical process control (SPC) was developed to implement a process of continual product improvement while achieving compliance with production standards and other requirements for promoting customer satisfaction. SPC involves the use of simple statistical tools, such as histograms and control charts, as well as problem-solving techniques, such as flow charts, cause-and-effect diagrams, and Pareto charts, to implement Deming's management philosophy. These data-analytic procedures can be incorporated into a human service organization to help to achieve its stated objectives in a manner that leads to continuous improvement in the functioning of the clients who are its customers. Examples are provided to illustrate how SPC procedures can be used to analyze behavioral data. Issues related to the application of these tools for making data-based clinical decisions and for creating an organizational climate that promotes their routine use in applied settings are also considered.
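
    A minimal sketch of one such tool, an individuals (XmR) control chart for a behavioral count series. The data are hypothetical, and 2.66 is the usual Shewhart constant for individual values:

    import numpy as np

    counts = np.array([7, 5, 6, 8, 9, 5, 4, 6, 12, 11, 13, 12])   # e.g. daily incident counts
    moving_range = np.abs(np.diff(counts))

    center = counts.mean()
    ucl = center + 2.66 * moving_range.mean()                     # upper control limit
    lcl = max(0.0, center - 2.66 * moving_range.mean())           # lower limit, floored at zero
    signals = np.where((counts > ucl) | (counts < lcl))[0]        # points beyond the limits
    print(f"center={center:.1f}, limits=({lcl:.1f}, {ucl:.1f}), signals at sessions {signals}")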

  4. Laparoscopic repair of perforated peptic ulcer: simple closure versus omentopexy.

    PubMed

    Lin, Being-Chuan; Liao, Chien-Hung; Wang, Shang-Yu; Hwang, Tsann-Long

    2017-12-01

    This report presents our experience with laparoscopic repair performed in 118 consecutive patients diagnosed with a perforated peptic ulcer (PPU). We compared the surgical outcome of simple closure with modified Cellan-Jones omentopexy and report the safety and benefit of simple closure. From January 2010 to December 2014, 118 patients with PPU underwent laparoscopic repair with simple closure (n = 27) or omentopexy (n = 91). Charts were retrospectively reviewed for demographic characteristics and outcome. The data were compared by Fisher's exact test, Mann-Whitney U test, Pearson's chi-square test, and the Kruskal-Wallis test. The results were considered statistically significant if P < 0.05. No patients died, although three developed leakage. After matching, the simple closure and omentopexy groups were similar in sex, systolic blood pressure, pulse rate, respiratory rate, Boey score, Charlson comorbidity index, Mannheim peritonitis index, and leakage. There were statistically significant differences in age, length of hospital stay, perforation size, and operating time. Comparison of the operating time in the ≤4.0 mm and 5.0-12 mm groups revealed that the simple closure took less time than omentopexy in both groups (≤4.0 mm: 76 versus 133 minutes, P < 0.0001; 5.0-12 mm: 97 versus 139.5 minutes, P = 0.006). Compared with omentopexy, laparoscopic simple closure is a safe procedure and shortens the operating time. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  5. Estimating monotonic rates from biological data using local linear regression.

    PubMed

    Olito, Colin; White, Craig R; Marshall, Dustin J; Barneche, Diego R

    2017-03-01

    Accessing many fundamental questions in biology begins with empirical estimation of simple monotonic rates of underlying biological processes. Across a variety of disciplines, ranging from physiology to biogeochemistry, these rates are routinely estimated from non-linear and noisy time series data using linear regression and ad hoc manual truncation of non-linearities. Here, we introduce the R package LoLinR, a flexible toolkit to implement local linear regression techniques to objectively and reproducibly estimate monotonic biological rates from non-linear time series data, and demonstrate possible applications using metabolic rate data. LoLinR provides methods to easily and reliably estimate monotonic rates from time series data in a way that is statistically robust, facilitates reproducible research and is applicable to a wide variety of research disciplines in the biological sciences. © 2017. Published by The Company of Biologists Ltd.

  6. Integrated polarization-dependent sensor for autonomous navigation

    NASA Astrophysics Data System (ADS)

    Liu, Ze; Zhang, Ran; Wang, Zhiwen; Guan, Le; Li, Bin; Chu, Jinkui

    2015-01-01

    Based on the navigation strategy of insects utilizing the polarized skylight, an integrated polarization-dependent sensor for autonomous navigation is presented. The navigation sensor has the features of compact structure, high precision, strong robustness, and a simple manufacture technique. The sensor is composed by integrating a complementary-metal-oxide-semiconductor sensor with a multiorientation nanowire grid polarizer. By nanoimprint lithography, the multiorientation nanowire polarizer is fabricated in one step and the alignment error is eliminated. The statistical theory is added to the interval-division algorithm to calculate the polarization angle of the incident light. The laboratory and outdoor tests for the navigation sensor are implemented and the errors of the measured angle are ±0.02 deg and ±1.3 deg, respectively. The results show that the proposed sensor has potential for application in autonomous navigation.

  7. Complexity-entropy causality plane: A useful approach for distinguishing songs

    NASA Astrophysics Data System (ADS)

    Ribeiro, Haroldo V.; Zunino, Luciano; Mendes, Renio S.; Lenzi, Ervin K.

    2012-04-01

    Nowadays we are often faced with huge databases resulting from the rapid growth of data storage technologies. This is particularly true when dealing with music databases. In this context, it is essential to have techniques and tools able to discriminate properties from these massive sets. In this work, we report on a statistical analysis of more than ten thousand songs aiming to obtain a complexity hierarchy. Our approach is based on the estimation of the permutation entropy combined with an intensive complexity measure, building up the complexity-entropy causality plane. The results obtained indicate that this representation space is very promising to discriminate songs as well as to allow a relative quantitative comparison among songs. Additionally, we believe that the here-reported method may be applied in practical situations since it is simple, robust and has a fast numerical implementation.
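
    A minimal sketch of the entropy half of that representation, the normalized Bandt-Pompe permutation entropy of a series; the embedding order d=3 is illustrative, and the statistical complexity axis is omitted:

    import math
    from itertools import permutations
    import numpy as np

    def permutation_entropy(x, d=3):
        # Count ordinal patterns of length d along the series.
        counts = {p: 0 for p in permutations(range(d))}
        for i in range(len(x) - d + 1):
            counts[tuple(np.argsort(x[i:i + d]))] += 1
        p = np.array(list(counts.values()), dtype=float)
        p = p[p > 0] / p.sum()
        # Shannon entropy of the pattern distribution, normalized to [0, 1].
        return -np.sum(p * np.log(p)) / math.log(math.factorial(d))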

  8. Different spectrophotometric methods applied for the analysis of simeprevir in the presence of its oxidative degradation product: A comparative study

    NASA Astrophysics Data System (ADS)

    Attia, Khalid A. M.; El-Abasawi, Nasr M.; El-Olemy, Ahmed; Serag, Ahmed

    2018-02-01

    Five simple spectrophotometric methods were developed for the determination of simeprevir in the presence of its oxidative degradation product namely, ratio difference, mean centering, derivative ratio using the Savitsky-Golay filters, second derivative and continuous wavelet transform. These methods are linear in the range of 2.5-40 μg/mL and validated according to the ICH guidelines. The obtained results of accuracy, repeatability and precision were found to be within the acceptable limits. The specificity of the proposed methods was tested using laboratory prepared mixtures and assessed by applying the standard addition technique. Furthermore, these methods were statistically comparable to RP-HPLC method and good results were obtained. So, they can be used for the routine analysis of simeprevir in quality-control laboratories.

  9. Two-way ANOVA Problems with Simple Numbers.

    ERIC Educational Resources Information Center

    Read, K. L. Q.; Shihab, L. H.

    1998-01-01

    Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Honorio, J.; Goldstein, R.; Honorio, J.

    We propose a simple, well-grounded classification technique which is suited for group classification on brain fMRI data sets that have high dimensionality, a small number of subjects, a high noise level, high subject variability, and imperfect registration, and that capture subtle cognitive effects. We propose threshold-split region as a new feature selection method and majority vote as the classification technique. Our method does not require a predefined set of regions of interest. We use averages across sessions, only one feature per experimental condition, a feature independence assumption, and simple classifiers. The seemingly counter-intuitive approach of using a simple design is supported by signal processing and statistical theory. Experimental results in two block design data sets that capture brain function under distinct monetary rewards for cocaine-addicted and control subjects show that our method exhibits increased generalization accuracy compared to commonly used feature selection and classification techniques.
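
    A simplified sketch of the voting idea: each feature casts a vote through its own threshold classifier and the majority decides. The per-feature threshold here is simply the midpoint of the class means, a stand-in for the threshold-split region selection, so this is an illustration rather than the authors' pipeline:

    import numpy as np

    def fit_thresholds(X, y):
        # One threshold and orientation per feature, at the midpoint of the class means
        # (X is samples x features, y is a 0/1 NumPy array of group labels).
        mu0, mu1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        return (mu0 + mu1) / 2.0, np.sign(mu1 - mu0)

    def majority_vote_predict(X, thresholds, signs):
        votes = (signs * (X - thresholds)) > 0            # each feature votes for class 1 or 0
        return (votes.mean(axis=1) > 0.5).astype(int)     # majority vote across features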

  11. Student Conceptions of Simple Circuits.

    ERIC Educational Resources Information Center

    Fredette, Norman; Lochhead, John

    1980-01-01

    Investigates some conceptual difficulties which college students have with regard to simple direct current circuits. The clinical interview technique was used with 57 students in a freshman level engineering course. (HM)

  12. Descriptive Statistical Techniques for Librarians. 2nd Edition.

    ERIC Educational Resources Information Center

    Hafner, Arthur W.

    A thorough understanding of the uses and applications of statistical techniques is integral in gaining support for library funding or new initiatives. This resource is designed to help practitioners develop and manipulate descriptive statistical information in evaluating library services, tracking and controlling limited resources, and analyzing…

  13. Understanding Summary Statistics and Graphical Techniques to Compare Michael Jordan versus LeBron James

    ERIC Educational Resources Information Center

    Williams, Immanuel James; Williams, Kelley Kim

    2016-01-01

    Understanding summary statistics and graphical techniques are building blocks to comprehending concepts beyond basic statistics. It's known that motivated students perform better in school. Using examples that students find engaging allows them to understand the concepts at a deeper level.

  14. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims: A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods: We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results: Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions: The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
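
    A generic sketch of the Bland-Altman computation for two paired quantitative assays: the bias is the mean difference, the 95% limits of agreement are bias ± 1.96 SD, and a regression of differences on means serves as a check for proportional error. Variable names are placeholders, not the NGS validation data:

    import numpy as np
    from scipy import stats

    def bland_altman(reference, candidate):
        mean = (reference + candidate) / 2.0
        diff = candidate - reference
        bias = diff.mean()                                  # constant error
        loa = 1.96 * diff.std(ddof=1)                       # half-width of limits of agreement
        slope, intercept, r, p, se = stats.linregress(mean, diff)
        return {"bias": bias,
                "limits_of_agreement": (bias - loa, bias + loa),
                "proportional_error_p": p}                  # small p suggests proportional error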

  15. More Powerful Tests of Simple Interaction Contrasts in the Two-Way Factorial Design

    ERIC Educational Resources Information Center

    Hancock, Gregory R.; McNeish, Daniel M.

    2017-01-01

    For the two-way factorial design in analysis of variance, the current article explicates and compares three methods for controlling the Type I error rate for all possible simple interaction contrasts following a statistically significant interaction, including a proposed modification to the Bonferroni procedure that increases the power of…

  16. Calibration of Response Data Using MIRT Models with Simple and Mixed Structures

    ERIC Educational Resources Information Center

    Zhang, Jinming

    2012-01-01

    It is common to assume during a statistical analysis of a multiscale assessment that the assessment is composed of several unidimensional subtests or that it has simple structure. Under this assumption, the unidimensional and multidimensional approaches can be used to estimate item parameters. These two approaches are equivalent in parameter…

  17. Quantitation & Case-Study-Driven Inquiry to Enhance Yeast Fermentation Studies

    ERIC Educational Resources Information Center

    Grammer, Robert T.

    2012-01-01

    We propose a procedure for the assay of fermentation in yeast in microcentrifuge tubes that is simple and rapid, permitting assay replicates, descriptive statistics, and the preparation of line graphs that indicate reproducibility. Using regression and simple derivatives to determine initial velocities, we suggest methods to compare the effects of…

  18. A Simple Statistical Thermodynamics Experiment

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2010-01-01

    Comparing the predicted and actual rolls of combinations of both two and three dice can help to introduce many of the basic concepts of statistical thermodynamics, including multiplicity, probability, microstates, and macrostates, and demonstrate that entropy is indeed a measure of randomness, that disordered states (those of higher entropy) are…

  19. Standard Entropy of Crystalline Iodine from Vapor Pressure Measurements: A Physical Chemistry Experiment.

    ERIC Educational Resources Information Center

    Harris, Ronald M.

    1978-01-01

    Presents material dealing with an application of statistical thermodynamics to the diatomic solid I-2(s). The objective is to enhance the student's appreciation of the power of the statistical formulation of thermodynamics. The Simple Einstein Model is used. (Author/MA)

  20. Advanced Undergraduate Experiments in Thermoanalytical Chemistry.

    ERIC Educational Resources Information Center

    Hill, J. O.; Magee, R. J.

    1988-01-01

    Describes several experiments using the techniques of thermal analysis and thermometric titrimetry. Defines thermal analysis and several recent branches of the technique. Notes most of the experiments use simple equipment and standard laboratory techniques. (MVL)

  1. Simulation methods to estimate design power: an overview for applied research.

    PubMed

    Arnold, Benjamin F; Hogan, Daniel R; Colford, John M; Hubbard, Alan E

    2011-06-20

    Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research.
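
    A minimal Python stand-in for the simulation idea (the article itself provides R and Stata code): simulate many two-arm randomized trials and report the fraction in which the null hypothesis is rejected. The effect size, sample size, and test are illustrative:

    import numpy as np
    from scipy import stats

    def simulated_power(n_per_arm, effect, sd=1.0, n_sims=2000, alpha=0.05, seed=42):
        rng = np.random.default_rng(seed)
        rejections = 0
        for _ in range(n_sims):
            control = rng.normal(0.0, sd, n_per_arm)
            treated = rng.normal(effect, sd, n_per_arm)
            _, p = stats.ttest_ind(treated, control)
            rejections += p < alpha
        return rejections / n_sims

    # e.g. power to detect a 0.3 SD improvement in child growth with 100 children per arm
    print(simulated_power(100, 0.3))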

  2. Simulation methods to estimate design power: an overview for applied research

    PubMed Central

    2011-01-01

    Background: Estimating the required sample size and statistical power for a study is an integral part of study design. For standard designs, power equations provide an efficient solution to the problem, but they are unavailable for many complex study designs that arise in practice. For such complex study designs, computer simulation is a useful alternative for estimating study power. Although this approach is well known among statisticians, in our experience many epidemiologists and social scientists are unfamiliar with the technique. This article aims to address this knowledge gap. Methods: We review an approach to estimate study power for individual- or cluster-randomized designs using computer simulation. This flexible approach arises naturally from the model used to derive conventional power equations, but extends those methods to accommodate arbitrarily complex designs. The method is universally applicable to a broad range of designs and outcomes, and we present the material in a way that is approachable for quantitative, applied researchers. We illustrate the method using two examples (one simple, one complex) based on sanitation and nutritional interventions to improve child growth. Results: We first show how simulation reproduces conventional power estimates for simple randomized designs over a broad range of sample scenarios to familiarize the reader with the approach. We then demonstrate how to extend the simulation approach to more complex designs. Finally, we discuss extensions to the examples in the article, and provide computer code to efficiently run the example simulations in both R and Stata. Conclusions: Simulation methods offer a flexible option to estimate statistical power for standard and non-traditional study designs and parameters of interest. The approach we have described is universally applicable for evaluating study designs used in epidemiologic and social science research. PMID:21689447

  3. Statistical issues in the design and planning of proteomic profiling experiments.

    PubMed

    Cairns, David A

    2015-01-01

    The statistical design of a clinical proteomics experiment is a critical part of well-undertaken investigation. Standard concepts from experimental design such as randomization, replication and blocking should be applied in all experiments, and this is possible when the experimental conditions are well understood by the investigator. The large number of proteins simultaneously considered in proteomic discovery experiments means that determining the number of required replicates to perform a powerful experiment is more complicated than in simple experiments. However, by using information about the nature of an experiment and making simple assumptions this is achievable for a variety of experiments useful for biomarker discovery and initial validation.

  4. A simple statistical model for geomagnetic reversals

    NASA Technical Reports Server (NTRS)

    Constable, Catherine

    1990-01-01

    The diversity of paleomagnetic records of geomagnetic reversals now available indicate that the field configuration during transitions cannot be adequately described by simple zonal or standing field models. A new model described here is based on statistical properties inferred from the present field and is capable of simulating field transitions like those observed. Some insight is obtained into what one can hope to learn from paleomagnetic records. In particular, it is crucial that the effects of smoothing in the remanence acquisition process be separated from true geomagnetic field behavior. This might enable us to determine the time constants associated with the dominant field configuration during a reversal.

  5. Statistical Properties of Online Auctions

    NASA Astrophysics Data System (ADS)

    Namazi, Alireza; Schadschneider, Andreas

    We characterize the statistical properties of a large number of online auctions run on eBay. Both stationary and dynamic properties, like distributions of prices, number of bids etc., as well as relations between these quantities are studied. The analysis of the data reveals surprisingly simple distributions and relations, typically of power-law form. Based on these findings we introduce a simple method to identify suspicious auctions that could be influenced by a form of fraud known as shill bidding. Furthermore the influence of bidding strategies is discussed. The results indicate that the observed behavior is related to a mixture of agents using a variety of strategies.

  6. A Simple Graphical Method for Quantification of Disaster Management Surge Capacity Using Computer Simulation and Process-control Tools.

    PubMed

    Franc, Jeffrey Michael; Ingrassia, Pier Luigi; Verde, Manuela; Colombo, Davide; Della Corte, Francesco

    2015-02-01

    Surge capacity, or the ability to manage an extraordinary volume of patients, is fundamental for hospital management of mass-casualty incidents. However, quantification of surge capacity is difficult and no universal standard for its measurement has emerged, nor has a standardized statistical method been advocated. As mass-casualty incidents are rare, simulation may represent a viable alternative to measure surge capacity. Hypothesis/Problem The objective of the current study was to develop a statistical method for the quantification of surge capacity using a combination of computer simulation and simple process-control statistical tools. Length-of-stay (LOS) and patient volume (PV) were used as metrics. The use of this method was then demonstrated on a subsequent computer simulation of an emergency department (ED) response to a mass-casualty incident. In the derivation phase, 357 participants in five countries performed 62 computer simulations of an ED response to a mass-casualty incident. Benchmarks for ED response were derived from these simulations, including LOS and PV metrics for triage, bed assignment, physician assessment, and disposition. In the application phase, 13 students of the European Master in Disaster Medicine (EMDM) program completed the same simulation scenario, and the results were compared to the standards obtained in the derivation phase. Patient-volume metrics included number of patients to be triaged, assigned to rooms, assessed by a physician, and disposed. Length-of-stay metrics included median time to triage, room assignment, physician assessment, and disposition. Simple graphical methods were used to compare the application phase group to the derived benchmarks using process-control statistical tools. The group in the application phase failed to meet the indicated standard for LOS from admission to disposition decision. This study demonstrates how simulation software can be used to derive values for objective benchmarks of ED surge capacity using PV and LOS metrics. These objective metrics can then be applied to other simulation groups using simple graphical process-control tools to provide a numeric measure of surge capacity. Repeated use in simulations of actual EDs may represent a potential means of objectively quantifying disaster management surge capacity. It is hoped that the described statistical method, which is simple and reusable, will be useful for investigators in this field to apply to their own research.

  7. Bandwidth-Tunable Fiber Bragg Gratings Based on UV Glue Technique

    NASA Astrophysics Data System (ADS)

    Fu, Ming-Yue; Liu, Wen-Feng; Chen, Hsin-Tsang; Chuang, Chia-Wei; Bor, Sheau-Shong; Tien, Chuen-Lin

    2007-07-01

    In this study, we have demonstrated that a uniform fiber Bragg grating (FBG) can be transformed into a chirped fiber grating by a simple UV glue adhesive technique without shifting the reflection band with respect to the center wavelength of the FBG. The technique is based on the induced strain of an FBG due to the UV glue adhesive force on the fiber surface that causes a grating period variation and an effective index change. This technique can provide a fast and simple method of obtaining the required chirp value of a grating for applications in the dispersion compensators, gain flattening in erbium-doped fiber amplifiers (EDFAs) or optical filters.

  8. Retrieval Capabilities of Hierarchical Networks: From Dyson to Hopfield

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Galluzzi, Andrea; Guerra, Francesco; Tantari, Daniele; Tavani, Flavia

    2015-01-01

    We consider statistical-mechanics models for spin systems built on hierarchical structures, which provide a simple example of non-mean-field framework. We show that the coupling decay with spin distance can give rise to peculiar features and phase diagrams much richer than their mean-field counterpart. In particular, we consider the Dyson model, mimicking ferromagnetism in lattices, and we prove the existence of a number of metastabilities, beyond the ordered state, which become stable in the thermodynamic limit. Such a feature is retained when the hierarchical structure is coupled with the Hebb rule for learning, hence mimicking the modular architecture of neurons, and gives rise to an associative network able to perform single pattern retrieval as well as multiple-pattern retrieval, depending crucially on the external stimuli and on the rate of interaction decay with distance; however, those emergent multitasking features reduce the network capacity with respect to the mean-field counterpart. The analysis is accomplished through statistical mechanics, Markov chain theory, signal-to-noise ratio technique, and numerical simulations in full consistency. Our results shed light on the biological complexity shown by real networks, and suggest future directions for understanding more realistic models.

  9. Probabilistic modelling of flood events using the entropy copula

    NASA Astrophysics Data System (ADS)

    Li, Fan; Zheng, Qian

    2016-11-01

    The estimation of flood frequency is vital for the flood control strategies and hydraulic structure design. Generating synthetic flood events according to statistical properties of observations is one of plausible methods to analyze the flood frequency. Due to the statistical dependence among the flood event variables (i.e. the flood peak, volume and duration), a multidimensional joint probability estimation is required. Recently, the copula method is widely used for multivariable dependent structure construction, however, the copula family should be chosen before application and the choice process is sometimes rather subjective. The entropy copula, a new copula family, employed in this research proposed a way to avoid the relatively subjective process by combining the theories of copula and entropy. The analysis shows the effectiveness of the entropy copula for probabilistic modelling the flood events of two hydrological gauges, and a comparison of accuracy with the popular copulas was made. The Gibbs sampling technique was applied for trivariate flood events simulation in order to mitigate the calculation difficulties of extending to three dimension directly. The simulation results indicate that the entropy copula is a simple and effective copula family for trivariate flood simulation.

  10. Genome-wide methylation study of diploid and triploid brown trout (Salmo trutta L.).

    PubMed

    Covelo-Soto, L; Leunda, P M; Pérez-Figueroa, A; Morán, P

    2015-06-01

    The induction of triploidization in fish is a very common practice in aquaculture. Although triploidization has been applied successfully in many salmonid species, little is known about the epigenetic mechanisms implicated in the maintenance of the normal functions of the new polyploid genome. By means of methylation-sensitive amplified polymorphism (MSAP) techniques, genome-wide methylation changes associated with triploidization were assessed in DNA samples obtained from diploid and triploid siblings of brown trout (Salmo trutta). Simple comparative body measurements showed that the triploid trout used in the study were statistically significantly bigger, but not heavier, than their diploid counterparts. The statistical analysis of the MSAP data showed no significant differences between diploid and triploid brown trout with respect to brain, gill, heart, liver, kidney or muscle samples. Nonetheless, local analysis pointed to the possibility of differences at specific loci. This is the first study that has investigated DNA methylation alterations associated with triploidization in brown trout. Our results set the basis for new studies to be undertaken and provide a new approach for studying the effects of triploidization on the salmonid genome, while also contributing to a better understanding of genome-wide methylation processes. © 2015 Stichting International Foundation for Animal Genetics.

  11. A statistical examination of Nimbus 7 SMMR data and remote sensing of sea surface temperature, liquid water content in the atmosphere and surfaces wind speed

    NASA Technical Reports Server (NTRS)

    Prabhakara, C.; Wang, I.; Chang, A. T. C.; Gloersen, P.

    1982-01-01

    Nimbus 7 Scanning Multichannel Microwave Radiometer (SMMR) brightness temperature measurements over the global oceans have been examined with the help of statistical and empirical techniques. Such analyses show that zonal averages of brightness temperature measured by SMMR, over the oceans, on a large scale are primarily influenced by the water vapor in the atmosphere. Liquid water in the clouds and rain, which has a much smaller spatial and temporal scale, contributes substantially to the variability of the SMMR measurements within the latitudinal zones. The surface wind not only increases the surface emissivity but through its interactions with the atmosphere produces correlations, in the SMMR brightness temperature data, that have significant meteorological implications. It is found that a simple meteorological model can explain the general characteristics of the SMMR data. With the help of this model methods to infer over the global oceans, the surface temperature, liquid water content in the atmosphere, and surface wind speed are developed. Monthly mean estimates of the sea surface temperature and surface winds are compared with the ship measurements. Estimates of liquid water content in the atmosphere are consistent with earlier satellite measurements.

  12. Nanocluster building blocks of artificial square spin ice: Stray-field studies of thermal dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pohlit, Merlin, E-mail: pohlit@physik.uni-frankfurt.de; Porrati, Fabrizio; Huth, Michael

    We present measurements of the thermal dynamics of a Co-based single building block of an artificial square spin ice fabricated by focused electron-beam-induced deposition. We employ micro-Hall magnetometry, an ultra-sensitive tool to study the stray field emanating from magnetic nanostructures, as a new technique to access the dynamical properties during the magnetization reversal of the spin-ice nanocluster. The obtained hysteresis loop exhibits distinct steps, displaying a reduction of their “coercive field” with increasing temperature. Therefore, thermally unstable states could be repetitively prepared by relatively simple temperature and field protocols, allowing one to investigate the statistics of their switching behavior within experimentally accessible timescales. For a selected switching event, we find a strong reduction of the so-prepared states' “survival time” with increasing temperature and magnetic field. Besides the possibility to control the lifetime of selected switching events at will, we find evidence for a more complex behavior caused by the special spin ice arrangement of the macrospins, i.e., that the magnetic reversal statistically follows distinct “paths” most likely driven by thermal perturbation.

  13. Biomechanical evaluation of different suture techniques for arthroscopic transtibial pull-out repair of posterior medial meniscus root tears.

    PubMed

    Feucht, Matthias J; Grande, Eduardo; Brunhuber, Johannes; Burgkart, Rainer; Imhoff, Andreas B; Braun, Sepp

    2013-12-01

    A tear of the posterior medial meniscus root (PMMR) is increasingly recognized as a serious knee joint injury. Several suture techniques for arthroscopic transtibial pull-out repair have been described; however, only limited data about the biomechanical properties of these techniques are currently available. There are significant differences between the tested suture techniques, with more complex suture configurations providing superior biomechanical properties. Controlled laboratory study. A total of 40 porcine medial menisci were randomly assigned to 1 of 4 groups (10 specimens each) according to suture technique: two simple stitches (TSS), horizontal mattress suture (HMS), modified Mason-Allen suture (MMA), and two modified loop stitches (TLS). Meniscus-suture constructs were subjected to cyclic loading followed by load-to-failure testing in a servohydraulic material testing machine. During cyclic loading, the HMS and TLS groups showed a significantly higher displacement after 100, 500, and 1000 cycles compared with the TSS and MMA groups. After 1000 cycles, the highest displacement was found for the TLS group, with significant differences compared with all other groups. During load-to-failure testing, the highest maximum load and yield load were observed for the MMA group, with statistically significant differences compared with the TSS and TLS groups. With regard to stiffness, the TSS and MMA groups showed significantly higher values compared with the HMS and TLS groups. The MMA technique provided the best biomechanical properties with regard to cyclic loading and load-to-failure testing. The TSS technique seems to be a valuable alternative. Both the HMS and TLS techniques have the disadvantage of lower stiffness and higher displacement during cyclic loading. Using a MMA technique may improve healing rates and avoid progressive extrusion of the medial meniscus after transtibial pull-out repair of PMMR tears. The TSS technique may be used as an alternative that is easier to perform, but a more careful rehabilitation program is possibly necessary to avoid early failure.

  14. Football fever: goal distributions and non-Gaussian statistics

    NASA Astrophysics Data System (ADS)

    Bittner, E.; Nußbaumer, A.; Janke, W.; Weigel, M.

    2009-02-01

    Analyzing football score data with statistical techniques, we investigate how the not purely random, but highly co-operative nature of the game is reflected in averaged properties such as the probability distributions of scored goals for the home and away teams. As it turns out, especially the tails of the distributions are not well described by the Poissonian or binomial model resulting from the assumption of uncorrelated random events. Instead, a good effective description of the data is provided by less basic distributions such as the negative binomial one or the probability densities of extreme value statistics. To understand this behavior from a microscopical point of view, however, no waiting time problem or extremal process need be invoked. Instead, modifying the Bernoulli random process underlying the Poissonian model to include a simple component of self-affirmation seems to describe the data surprisingly well and allows us to understand the observed deviation from Gaussian statistics. The phenomenological distributions used before can be understood as special cases within this framework. We analyzed historical football score data from many leagues in Europe as well as from international tournaments, including data from all past tournaments of the “FIFA World Cup” series, and found the proposed models to be applicable rather universally. In particular, here we analyze the results of the German women’s premier football league and consider the two separate German men’s premier leagues in the East and West during the cold war times as well as the unified league after 1990 to see how scoring in football and the component of self-affirmation depend on cultural and political circumstances.
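
    A small sketch of the overdispersion point: fit a Poisson and a method-of-moments negative binomial to a sample of per-match goal counts and compare their tails. The counts below are toy numbers, not the historical league data analyzed in the paper:

    import numpy as np
    from scipy import stats

    goals = np.array([0, 1, 1, 2, 0, 3, 1, 2, 4, 0, 1, 2, 2, 5, 1, 0, 3, 1, 2, 6])
    m, v = goals.mean(), goals.var(ddof=1)

    print("P(>4 goals), Poisson fit:", stats.poisson.sf(4, mu=m))
    if v > m:                                # overdispersed: negative binomial is defined
        p = m / v                            # method-of-moments parameters
        r = m * p / (1 - p)
        print("P(>4 goals), negative binomial fit:", stats.nbinom.sf(4, r, p))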

  15. A simple rapid approach using coupled multivariate statistical methods, GIS and trajectory models to delineate areas of common oil spill risk

    NASA Astrophysics Data System (ADS)

    Guillen, George; Rainey, Gail; Morin, Michelle

    2004-04-01

    Currently, the Minerals Management Service uses the Oil Spill Risk Analysis model (OSRAM) to predict the movement of potential oil spills greater than 1000 bbl originating from offshore oil and gas facilities. OSRAM generates oil spill trajectories using meteorological and hydrological data input from either actual physical measurements or estimates generated from other hydrological models. OSRAM and many other models produce output matrices of average, maximum and minimum contact probabilities to specific landfall or target segments (columns) from oil spills at specific points (rows). Analysts and managers are often interested in identifying geographic areas or groups of facilities that pose similar risks to specific targets or groups of targets if a spill occurred. Unfortunately, due to the potentially large matrix generated by many spill models, this question is difficult to answer without the use of data reduction and visualization methods. In our study we utilized a multivariate statistical method called cluster analysis to group areas of similar risk based on potential distribution of landfall target trajectory probabilities. We also utilized ArcView™ GIS to display spill launch point groupings. The combination of GIS and multivariate statistical techniques in the post-processing of trajectory model output is a powerful tool for identifying and delineating areas of similar risk from multiple spill sources. We strongly encourage modelers, statistical and GIS software programmers to closely collaborate to produce a more seamless integration of these technologies and approaches to analyzing data. They are complimentary methods that strengthen the overall assessment of spill risks.
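
    A small sketch of the post-processing step: hierarchically cluster launch points by their landfall-probability profiles and cut the tree into a few risk groups. The probability matrix below is synthetic, not OSRAM output:

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    rng = np.random.default_rng(3)
    prob = rng.random((50, 12))                          # 50 launch points x 12 landfall segments
    prob /= prob.sum(axis=1, keepdims=True)              # rows act as probability profiles

    tree = linkage(prob, method="ward")                  # agglomerative clustering of the rows
    groups = fcluster(tree, t=4, criterion="maxclust")   # cut into 4 groups of similar risk
    print("launch points per risk group:", np.bincount(groups)[1:])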

  16. Manufacturing Squares: An Integrative Statistical Process Control Exercise

    ERIC Educational Resources Information Center

    Coy, Steven P.

    2016-01-01

    In the exercise, students in a junior-level operations management class are asked to manufacture a simple product. Given product specifications, they must design a production process, create roles and design jobs for each team member, and develop a statistical process control plan that efficiently and effectively controls quality during…

  17. PEOPLE IN PHYSICS: Nobel prize winners in physics from 1901 to 1990: simple statistics for physics teachers

    NASA Astrophysics Data System (ADS)

    Zhang, Weijia; Fuller, Robert G.

    1998-05-01

    A demographic database for the 139 Nobel prize winners in physics from 1901 to 1990 has been created from a variety of sources. The results of our statistical study are discussed in the light of the implications for physics teaching.

  18. Teaching Statistics with Minitab II.

    ERIC Educational Resources Information Center

    Ryan, T. A., Jr.; And Others

    Minitab is a statistical computing system which uses simple language, produces clear output, and keeps track of bookkeeping automatically. Error checking with English diagnostics and inclusion of several default options help to facilitate use of the system by students. Minitab II is an improved and expanded version of the original Minitab which…

  19. Applying Descriptive Statistics to Teaching the Regional Classification of Climate.

    ERIC Educational Resources Information Center

    Lindquist, Peter S.; Hammel, Daniel J.

    1998-01-01

    Describes an exercise for college and high school students that relates descriptive statistics to the regional climatic classification. The exercise introduces students to simple calculations of central tendency and dispersion, the construction and interpretation of scatterplots, and the definition of climatic regions. Forces students to engage…

  20. An Experimental Approach to Teaching and Learning Elementary Statistical Mechanics

    ERIC Educational Resources Information Center

    Ellis, Frank B.; Ellis, David C.

    2008-01-01

    Introductory statistical mechanics is studied for a simple two-state system using an inexpensive and easily built apparatus. A large variety of demonstrations, suitable for students in high school and introductory university chemistry courses, are possible. This article details demonstrations for exothermic and endothermic reactions, the dynamic…

  1. School District Enrollment Projections: A Comparison of Three Methods.

    ERIC Educational Resources Information Center

    Pettibone, Timothy J.; Bushan, Latha

    This study assesses three methods of forecasting school enrollments: the cohort-survival method (grade progression), the statistical forecasting procedure developed by the Statistical Analysis System (SAS) Institute, and a simple ratio computation. The three methods were used to forecast school enrollments for kindergarten through grade 12 in a…

  2. Application of Transformations in Parametric Inference

    ERIC Educational Resources Information Center

    Brownstein, Naomi; Pensky, Marianna

    2008-01-01

    The objective of the present paper is to provide a simple approach to statistical inference using the method of transformations of variables. We demonstrate performance of this powerful tool on examples of constructions of various estimation procedures, hypothesis testing, Bayes analysis and statistical inference for the stress-strength systems.…

  3. Chi-squared and C statistic minimization for low count per bin data

    NASA Astrophysics Data System (ADS)

    Nousek, John A.; Shue, David R.

    1989-07-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.
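
    As a hedged illustration of the comparison described above, the sketch below fits a toy low-count spectrum by minimizing either the chi-squared statistic or Cash's C statistic, both with Powell's method; the model, parameters and data are invented for demonstration only.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)

    # Toy spectrum: constant background plus a Gaussian line, few counts per bin.
    x = np.linspace(0.0, 10.0, 50)
    def model(params, x):
        amp, center, bkg = params
        return bkg + amp * np.exp(-0.5 * ((x - center) / 0.5) ** 2)

    true = (3.0, 4.0, 0.5)
    counts = rng.poisson(model(true, x))

    def chi2(params):                      # chi-squared with sigma^2 taken from the model
        m = np.clip(model(params, x), 1e-9, None)
        return np.sum((counts - m) ** 2 / m)

    def cash_c(params):                    # Cash (1979) C statistic for Poisson data
        m = np.clip(model(params, x), 1e-9, None)
        return 2.0 * np.sum(m - counts * np.log(m))

    p0 = (1.0, 5.0, 1.0)
    fit_chi2 = minimize(chi2, p0, method="Powell")
    fit_cash = minimize(cash_c, p0, method="Powell")
    print("chi-squared fit:", fit_chi2.x)
    print("C-statistic fit:", fit_cash.x)  # typically less biased in the low-count regime
    ```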

  4. Chi-squared and C statistic minimization for low count per bin data. [sampling in X ray astronomy

    NASA Technical Reports Server (NTRS)

    Nousek, John A.; Shue, David R.

    1989-01-01

    Results are presented from a computer simulation comparing two statistical fitting techniques on data samples with large and small counts per bin; the results are then related specifically to X-ray astronomy. The Marquardt and Powell minimization techniques are compared by using both to minimize the chi-squared statistic. In addition, Cash's C statistic is applied, with Powell's method, and it is shown that the C statistic produces better fits in the low-count regime than chi-squared.

  5. A Simple Demonstration of Atomic and Molecular Orbitals Using Circular Magnets

    ERIC Educational Resources Information Center

    Chakraborty, Maharudra; Mukhopadhyay, Subrata; Das, Ranendu Sekhar

    2014-01-01

    A quite simple and inexpensive technique is described here to represent the approximate shapes of atomic orbitals and the molecular orbitals formed by them following the principles of the linear combination of atomic orbitals (LCAO) method. Molecular orbitals of a few simple molecules can also be pictorially represented. Instructors can employ the…

  6. THE MATHEMATICAL ANALYSIS OF A SIMPLE DUEL

    DTIC Science & Technology

    The principles and techniques of simple Markov processes are used to analyze a simple duel to determine the limiting state probabilities (i.e., the probabilities of occurrence of the various possible outcomes of the duel). The duel is one in which A fires at B at a rate of r_A shots per minute…
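
    Assuming the usual stochastic-duel setup (exponential inter-shot times and fixed single-shot kill probabilities, which the record itself does not state), a small sketch of computing the limiting state probabilities of the embedded Markov chain; all numerical values are illustrative.

    ```python
    import numpy as np

    # Hypothetical parameters: firing rates (shots/minute) and per-shot kill probabilities.
    r_A, r_B = 4.0, 3.0
    p_A, p_B = 0.2, 0.25

    # Effective "kill" rates: Poisson thinning of each shooter's firing process.
    lam_A = r_A * p_A        # rate at which A kills B
    lam_B = r_B * p_B        # rate at which B kills A

    # Embedded (jump-chain) Markov process over states:
    # 0 = both alive, 1 = A has won (B dead), 2 = B has won (A dead).
    P = np.array([
        [0.0, lam_A / (lam_A + lam_B), lam_B / (lam_A + lam_B)],
        [0.0, 1.0, 0.0],     # absorbing
        [0.0, 0.0, 1.0],     # absorbing
    ])

    # Limiting state probabilities: iterate the chain until it settles.
    state = np.array([1.0, 0.0, 0.0])
    for _ in range(200):
        state = state @ P
    print("P(A wins), P(B wins):", state[1], state[2])
    print("closed form          :", lam_A / (lam_A + lam_B), lam_B / (lam_A + lam_B))
    ```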

  7. Statistical fluctuations in pedestrian evacuation times and the effect of social contagion

    NASA Astrophysics Data System (ADS)

    Nicolas, Alexandre; Bouzat, Sebastián; Kuperman, Marcelo N.

    2016-08-01

    Mathematical models of pedestrian evacuation and the associated simulation software have become essential tools for the assessment of the safety of public facilities and buildings. While a variety of models is now available, their calibration and test against empirical data are generally restricted to global averaged quantities; the statistics compiled from the time series of individual escapes ("microscopic" statistics) measured in recent experiments are thus overlooked. In the same spirit, much research has primarily focused on the average global evacuation time, whereas the whole distribution of evacuation times over some set of realizations should matter. In the present paper we propose and discuss the validity of a simple relation between this distribution and the microscopic statistics, which is theoretically valid in the absence of correlations. To this purpose, we develop a minimal cellular automaton, with features that afford a semiquantitative reproduction of the experimental microscopic statistics. We then introduce a process of social contagion of impatient behavior in the model and show that the simple relation under test may dramatically fail at high contagion strengths, the latter being responsible for the emergence of strong correlations in the system. We conclude with comments on the potential practical relevance for safety science of calculations based on microscopic statistics.

  8. Learning predictive statistics from temporal sequences: Dynamics and strategies.

    PubMed

    Wang, Rui; Shen, Yuan; Tino, Peter; Welchman, Andrew E; Kourtzi, Zoe

    2017-10-01

    Human behavior is guided by our expectations about the future. Often, we make predictions by monitoring how event sequences unfold, even though such sequences may appear incomprehensible. Event structures in the natural environment typically vary in complexity, from simple repetition to complex probabilistic combinations. How do we learn these structures? Here we investigate the dynamics of structure learning by tracking human responses to temporal sequences that change in structure unbeknownst to the participants. Participants were asked to predict the upcoming item following a probabilistic sequence of symbols. Using a Markov process, we created a family of sequences, from simple frequency statistics (e.g., some symbols are more probable than others) to context-based statistics (e.g., symbol probability is contingent on preceding symbols). We demonstrate the dynamics with which individuals adapt to changes in the environment's statistics-that is, they extract the behaviorally relevant structures to make predictions about upcoming events. Further, we show that this structure learning relates to individual decision strategy; faster learning of complex structures relates to selection of the most probable outcome in a given context (maximizing) rather than matching of the exact sequence statistics. Our findings provide evidence for alternate routes to learning of behaviorally relevant statistics that facilitate our ability to predict future events in variable environments.
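
    A minimal sketch of the two kinds of sequence statistics mentioned above: a context-free ("frequency") source in which symbol probabilities are fixed, and a first-order Markov ("context-based") source in which they depend on the preceding symbol. The alphabet and transition matrices are illustrative assumptions, not the study's stimuli.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    symbols = ["A", "B", "C", "D"]

    def sample_sequence(length, transition):
        """Sample a symbol sequence from a first-order Markov chain."""
        seq = [rng.integers(len(symbols))]
        for _ in range(length - 1):
            seq.append(rng.choice(len(symbols), p=transition[seq[-1]]))
        return [symbols[i] for i in seq]

    # "Frequency statistics": every row identical, so symbol probability is context-free.
    freq = np.tile([0.4, 0.3, 0.2, 0.1], (4, 1))

    # "Context-based statistics": symbol probability depends on the preceding symbol.
    context = np.array([
        [0.1, 0.7, 0.1, 0.1],
        [0.1, 0.1, 0.7, 0.1],
        [0.1, 0.1, 0.1, 0.7],
        [0.7, 0.1, 0.1, 0.1],
    ])

    print("frequency-based :", "".join(sample_sequence(30, freq)))
    print("context-based   :", "".join(sample_sequence(30, context)))
    ```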

  9. Durham Smith Vest-Over-Pant Technique: Simple Procedure for a Complex Problem (Post-Hypospadias Repair Fistula).

    PubMed

    Gite, Venkat A; Patil, Saurabh R; Bote, Sachin M; Siddiqui, Mohd Ayub Karam Nabi; Nikose, Jayant V; Kandi, Anitha J

    2017-01-01

    Urethrocutaneous fistula, which occurs after hypospadias surgery, is often a baffling problem and its treatment is challenging. The study aimed to evaluate the results of a simple procedure (the Durham Smith vest-over-pant technique) for this complex problem (post-hypospadias repair fistula). During the period from 2011 to 2015, 20 patients with post-hypospadias repair fistulas underwent Durham Smith repair. The most common age group was between 5 and 12 years. The site-wise distribution of fistulas was coronal 2 (10%), distal penile 7 (35%), mid-penile 7 (35%), and proximal penile 4 (20%). Out of 20 patients, 15 had a fistula of size <5 mm (75%) and 5 patients had a fistula of size >5 mm (25%). All cases were repaired with the Durham Smith vest-over-pant technique by a single surgeon. In cases of multiple fistulas adjacent to each other, all fistulas were joined to form a single fistula and repaired. We successfully repaired all post-hypospadias surgery urethrocutaneous fistulas using the technique described by Durham Smith, with a 100% success rate. The Durham Smith vest-over-pant technique is a simple solution for a complex problem (post-hypospadias surgery penile fistulas) in properly selected patients. © 2017 S. Karger AG, Basel.

  10. Outcomes in Patients Treated with a Novel, Simple Method for Hemostasis of Dermal Avulsion Injuries.

    PubMed

    Dowling, Sean Taylor; Lin, Brian Wai

    2017-10-01

    A recently described technique proposes a simple method to achieve permanent hemostasis of distal fingertip dermal avulsion injuries. It is simple to learn and easy to perform with readily available materials found in most emergency departments. However, long-term outcomes for patients treated with this technique have not yet been evaluated. A primary objective of the current article is to provide safety data for the technique using an off-label product indication. Emergency department of Kaiser Permanente Medical Center, San Francisco, California. Six patients were treated in the emergency department for fingertip dermal avulsion injuries using a tourniquet and tissue adhesive glue (Dermabond by Ethicon, Somerville, New Jersey). Patients were subsequently contacted to assess healing and satisfaction with cosmetic outcome through interview and photographs of their wounds at 9 months following the date of injury. All 6 patients were satisfied with the cosmetic outcome of treatment, and none received a diagnosis of serious complications. This series demonstrates cosmetic outcomes for injuries treated with the technique, highlights potential problems that may be perceived by patients during their clinical course, and creates the groundwork for a larger clinical study examining the use of the technique.

  11. The Importance of Introductory Statistics Students Understanding Appropriate Sampling Techniques

    ERIC Educational Resources Information Center

    Menil, Violeta C.

    2005-01-01

    In this paper the author discusses the meaning of sampling, the reasons for sampling, the Central Limit Theorem, and the different techniques of sampling. Practical and relevant examples are given to make the appropriate sampling techniques understandable to students of Introductory Statistics courses. With a thorough knowledge of sampling…

  12. [Comparative clinical study of 2 surgical techniques for trapeziometacarpal osteoarthritis].

    PubMed

    Martínez-Martínez, F; García-Hortelano, S; García-Paños, J P; Moreno-Fernández, J M; Martín-Ferrero, M Á

    2016-01-01

    In trapeziometacarpal osteoarthritis (or rhizarthrosis), there is great controversy over which surgical technique to choose: simple trapeziectomy, resection-interposition arthroplasty, suspension-interposition arthroplasty, or arthroplasty with an implant or prosthesis. The latter 2 are the most widely used, without consensus in the literature on which technique to choose and without sufficient comparative studies. The objective is to compare the 2 techniques most used today: suspension-interposition arthroplasty and arthroplasty with prosthesis. A prospective study was conducted on 15 patients diagnosed with grade 2-3 rhizarthrosis treated with suspension-interposition arthroplasty (group 1) and 15 treated with a prosthesis (group 2), reporting the clinical outcomes, advantages and disadvantages of each. The study variables were the visual analogue scale (VAS), the DASH questionnaire, grip strength, tip-to-tip and tip-to-lateral pinch strength, range of motion in adduction-abduction and anteposition-retroposition, and opposition. The 2 groups are from 2 different hospitals, each operated on by a hand surgeon from the respective Hand Unit. The follow-up time for all patients included in the study was 12 months. The VAS, DASH and grip strength at 12 months did not show significant differences. As regards tip-to-tip and tip-to-lateral pinch strength, group 2 showed the highest values in all follow-up periods, with statistically significant differences. Patient selection and surgical experience are essential, given the satisfactory results of both techniques. Arthroplasty with prosthesis is reserved for grades 2 and 3, middle-aged patients, good trapezium architecture, and experienced surgeons. Copyright © 2014 SECOT. Published by Elsevier Espana. All rights reserved.

  13. The training and learning process of transseptal puncture using a modified technique.

    PubMed

    Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom

    2013-12-01

    As transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, its technique, initially reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only the coronary sinus catheter as the landmark for accomplishing TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer acting as a controller. We analysed the following parameters: single-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum could not ultimately be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures are needed for a trainee to pass the steepest part of the learning curve.
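
    A hedged sketch of the inverse-curve fitting mentioned above, using invented per-case procedure times; the functional form y = plateau + scale/x and the "within 10% of plateau" criterion for the learning rate are assumptions for illustration, not the study's exact methodology.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical per-case procedure times (minutes) for one trainee.
    rng = np.random.default_rng(4)
    case_no = np.arange(1, 41)
    proc_time = 1.2 + 3.0 / case_no + rng.normal(0, 0.15, case_no.size)

    def inverse_curve(x, plateau, scale):
        return plateau + scale / x

    (plateau, scale), _ = curve_fit(inverse_curve, case_no, proc_time)
    print(f"estimated learning plateau: {plateau:.2f} min")

    # One possible 'learning rate': first case at which the fitted curve comes
    # within 10% of the plateau (definition assumed for illustration).
    threshold = 1.10 * plateau
    reached = case_no[inverse_curve(case_no, plateau, scale) <= threshold][0]
    print(f"cases needed to get within 10% of plateau: {reached}")
    ```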

  14. The Synthesis of Proteins-A Simple Experiment To Show the Procedures and Problems of Using Radioisotopes in Biochemical Studies

    NASA Astrophysics Data System (ADS)

    Hawcroft, David M.

    1996-11-01

    Courses of organic chemistry frequently include studies of biochemistry and hence of biochemical techniques. Radioisotopes have played a major role in the understanding of metabolic pathways, transport, enzyme activity and other processes. The experiment described in this paper uses simple techniques to illustrate the procedures involved in working with radioisotopes when following a simplified metabolic pathway. Safety considerations are discussed and a list of safety rules is provided, but the experiment itself uses very low levels of a weak beta-emitting isotope (tritium). Plant material is suggested to reduce legal, financial and emotive problems, but the techniques are applicable to all soft-tissued material. The problems involved in data interpretation in radioisotope experiments resulting from radiation quenching are resolved by simple correction calculations, and the merits of using radioisotopes shown by a calculation of the low mass of material being measured. Suggestions for further experiments are given.

  15. A Simple low-cost device enables four epi-illumination techniques on standard light microscopes.

    PubMed

    Ishmukhametov, Robert R; Russell, Aidan N; Wheeler, Richard J; Nord, Ashley L; Berry, Richard M

    2016-02-08

    Back-scattering darkfield (BSDF), epi-fluorescence (EF), interference reflection contrast (IRC), and darkfield surface reflection (DFSR) are advanced but expensive light microscopy techniques with limited availability. Here we show a simple optical design that combines these four techniques in a simple low-cost miniature epi-illuminator, which inserts into the differential interference-contrast (DIC) slider bay of a commercial microscope, without further additions required. We demonstrate with this device: 1) BSDF-based detection of malarial parasites inside unstained human erythrocytes; 2) EF imaging with and without dichroic components, including detection of the DAPI-stained Leishmania parasite without using excitation or emission filters; 3) IRC of black lipid membranes and other thin films, and 4) DFSR of patterned opaque and transparent surfaces. We believe that our design can expand the functionality of commercial bright field microscopes, provide easy field detection of parasites and be of interest to many users of light microscopy.

  16. A Simple low-cost device enables four epi-illumination techniques on standard light microscopes

    NASA Astrophysics Data System (ADS)

    Ishmukhametov, Robert R.; Russell, Aidan N.; Wheeler, Richard J.; Nord, Ashley L.; Berry, Richard M.

    2016-02-01

    Back-scattering darkfield (BSDF), epi-fluorescence (EF), interference reflection contrast (IRC), and darkfield surface reflection (DFSR) are advanced but expensive light microscopy techniques with limited availability. Here we show a simple optical design that combines these four techniques in a simple low-cost miniature epi-illuminator, which inserts into the differential interference-contrast (DIC) slider bay of a commercial microscope, without further additions required. We demonstrate with this device: 1) BSDF-based detection of malarial parasites inside unstained human erythrocytes; 2) EF imaging with and without dichroic components, including detection of the DAPI-stained Leishmania parasite without using excitation or emission filters; 3) IRC of black lipid membranes and other thin films, and 4) DFSR of patterned opaque and transparent surfaces. We believe that our design can expand the functionality of commercial bright field microscopes, provide easy field detection of parasites and be of interest to many users of light microscopy.

  17. Reversed inverse regression for the univariate linear calibration and its statistical properties derived using a new methodology

    NASA Astrophysics Data System (ADS)

    Kang, Pilsang; Koo, Changhoi; Roh, Hokyu

    2017-11-01

    Since simple linear regression theory was established at the beginning of the 1900s, it has been used in a variety of fields. Unfortunately, it cannot be used directly for calibration. In practical calibrations, the observed measurements (the inputs) are subject to errors, and hence they vary, thus violating the assumption that the inputs are fixed. Therefore, in the case of calibration, the regression line fitted using the method of least squares is not consistent with the statistical properties of simple linear regression as already established based on this assumption. To resolve this problem, "classical regression" and "inverse regression" have been proposed. However, they do not completely resolve the problem. As a fundamental solution, we introduce "reversed inverse regression" along with a new methodology for deriving its statistical properties. In this study, the statistical properties of this regression are derived using the "error propagation rule" and the "method of simultaneous error equations" and are compared with those of the existing regression approaches. The accuracy of the statistical properties thus derived is investigated in a simulation study. We conclude that the newly proposed regression and methodology constitute the complete regression approach for univariate linear calibrations.
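
    The record's "reversed inverse regression" is not reproduced here, but the two standard approaches it contrasts can be sketched briefly: classical calibration (regress the response on the reference and invert the fitted line) versus inverse calibration (regress the reference directly on the response). The data and parameters below are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Calibration data: known reference values x, observed instrument responses y.
    x_ref = np.linspace(0.0, 10.0, 25)
    y_obs = 2.0 + 1.5 * x_ref + rng.normal(0, 0.4, x_ref.size)

    # Classical calibration: regress y on x, then invert the fitted line.
    b1, b0 = np.polyfit(x_ref, y_obs, 1)          # y ≈ b0 + b1 * x
    def classical_estimate(y_new):
        return (y_new - b0) / b1

    # Inverse calibration: regress x directly on y and predict from the fit.
    c1, c0 = np.polyfit(y_obs, x_ref, 1)          # x ≈ c0 + c1 * y
    def inverse_estimate(y_new):
        return c0 + c1 * y_new

    y_new = 9.5                                    # a new observed response
    print("classical estimate:", classical_estimate(y_new))
    print("inverse estimate  :", inverse_estimate(y_new))
    ```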

  18. Should I Pack My Umbrella? Clinical versus Statistical Prediction of Mental Health Decisions

    ERIC Educational Resources Information Center

    Aegisdottir, Stefania; Spengler, Paul M.; White, Michael J.

    2006-01-01

    In this rejoinder, the authors respond to the insightful commentary of Strohmer and Arm, Chwalisz, and Hilton, Harris, and Rice about the meta-analysis on statistical versus clinical prediction techniques for mental health judgments. The authors address issues including the availability of statistical prediction techniques for real-life psychology…

  19. Change Detection in Rough Time Series

    DTIC Science & Technology

    2014-09-01

    …distribution that can present significant challenges to conventional statistical tracking techniques. To address this problem, the proposed method applies hybrid fuzzy statistical techniques to series granules instead of to individual measures. Three examples demonstrated the robust nature of the…

  20. Enhancing Students' Ability to Use Statistical Reasoning with Everyday Problems

    ERIC Educational Resources Information Center

    Lawson, Timothy J.; Schwiers, Michael; Doellman, Maureen; Grady, Greg; Kelnhofer, Robert

    2003-01-01

    We discuss a technique for teaching students everyday applications of statistical concepts. We used this technique with students (n = 50) enrolled in several sections of an introductory statistics course; students (n = 45) in other sections served as a comparison group. A class of introductory psychology students (n = 24) served as a second…

  1. Simple New Screw Insertion Technique without Extraction for Broken Pedicle Screws.

    PubMed

    Kil, Jin-Sang; Park, Jong-Tae

    2018-05-01

    Spinal transpedicular screw fixation is widely performed. Broken pedicle screw rates range from 3%-7.1%. Several techniques have been described for extraction of broken pedicle screws. However, most of these techniques require special instruments. We describe a simple, modified technique for management of broken pedicle screws without extraction. No special instruments or drilling in an adjacent pedicle are required. We used a high-speed air drill with a round burr. With C-arm fluoroscopy guidance, the distal fragment of a broken pedicle screw was palpated using free-hand technique through the screw entry hole. A high-speed air drill with a round burr (not a diamond burr) was inserted through the hole. Drilling began slowly and continued until enough space was obtained for new screw insertion. Using this space, we performed new pedicle screw fixation medially alongside the distal fragment of the broken pedicle screw. We performed the insertion with a previously used entry hole and pathway in the pedicle. The same size pedicle screw was used. Three patients were treated with this modified technique. New screw insertion was successful in all cases after partial drilling of the distal broken pedicle screw fragment. There were no complications, such as screw loosening, dural tears, or root injury. We describe a simple, modified technique for management of broken pedicle screws without extraction. This technique is recommended in patients who require insertion of a new screw. Copyright © 2017. Published by Elsevier Inc.

  2. Pullout strength of standard vs. cement-augmented rotator cuff repair anchors in cadaveric bone.

    PubMed

    Aziz, Keith T; Shi, Brendan Y; Okafor, Louis C; Smalley, Jeremy; Belkoff, Stephen M; Srikumaran, Uma

    2018-05-01

    We evaluate a novel method of rotator cuff repair that uses arthroscopic equipment to inject bone cement into placed suture anchors. A cadaver model was used to assess the pullout strength of this technique versus anchors without augmentation. Six fresh-frozen matched pairs of upper extremities were screened to exclude those with prior operative procedures, fractures, or neoplasms. One side from each pair was randomized to undergo standard anchor fixation with the contralateral side to undergo anchor fixation augmented with bone cement. After anchor fixation, specimens were mounted on a servohydraulic testing system and suture anchors were pulled at 90° to the insertion to simulate the anatomic pull of the rotator cuff. Sutures were pulled at 1 mm/s until failure. The mean pullout strength was 540 N (95% confidence interval, 389 to 690 N) for augmented anchors and 202 N (95% confidence interval, 100 to 305 N) for standard anchors. The difference in pullout strength was statistically significant (P < 0.05). This study shows superior pullout strength of a novel augmented rotator cuff anchor technique. The described technique, which is achieved by extruding polymethylmethacrylate cement through a cannulated in situ suture anchor with fenestrations, significantly increased the ultimate failure load in cadaveric human humeri. This novel augmented fixation technique was simple and can be implemented with existing instrumentation. In osteoporotic bone, it may substantially reduce the rate of anchor failure. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. Simultaneous multiplexed materials characterization using a high-precision hard X-ray micro-slit array.

    PubMed

    Zhang, Fan; Allen, Andrew J; Levine, Lyle E; Mancini, Derrick C; Ilavsky, Jan

    2015-05-01

    The needs both for increased experimental throughput and for in operando characterization of functional materials under increasingly realistic experimental conditions have emerged as major challenges across the whole of crystallography. A novel measurement scheme that allows multiplexed simultaneous measurements from multiple nearby sample volumes is presented. This new approach enables better measurement statistics or direct probing of heterogeneous structure, dynamics or elemental composition. To illustrate, the submicrometer precision that optical lithography provides has been exploited to create a multiplexed form of ultra-small-angle scattering based X-ray photon correlation spectroscopy (USAXS-XPCS) using micro-slit arrays fabricated by photolithography. Multiplexed USAXS-XPCS is applied to follow the equilibrium dynamics of a simple colloidal suspension. While the dependence of the relaxation time on momentum transfer, and its relationship with the diffusion constant and the static structure factor, follow previous findings, this measurements-in-parallel approach reduces the statistical uncertainties of this photon-starved technique to below those associated with the instrument resolution. More importantly, we note the potential of the multiplexed scheme to elucidate the response of different components of a heterogeneous sample under identical experimental conditions in simultaneous measurements. In the context of the X-ray synchrotron community, this scheme is, in principle, applicable to all in-line synchrotron techniques. Indeed, it has the potential to open a new paradigm for in operando characterization of heterogeneous functional materials, a situation that will be even further enhanced by the ongoing development of multi-bend achromat storage ring designs as the next evolution of large-scale X-ray synchrotron facilities around the world.

  4. Paced respiration with end-expiration technique offers superior BOLD signal repeatability for breath-hold studies.

    PubMed

    Scouten, A; Schwarzbauer, C

    2008-11-01

    As a simple, non-invasive method of blood oxygenation level-dependent (BOLD) signal calibration, the breath-hold task offers considerable potential for the quantification of neuronal activity from functional magnetic resonance imaging (fMRI) measurements. With an aim to improve the precision of this calibration method, the impact of respiratory rate control on the BOLD signal achieved with the breath-hold task was investigated. In addition to self-paced breathing, three different computer-paced breathing rates were imposed during the periods between end-expiration breath-hold blocks. The resulting BOLD signal timecourses and statistical activation maps were compared in eleven healthy human subjects. Results indicate that computer-paced respiration produces a larger peak BOLD signal increase with breath-hold than self-paced breathing, in addition to lower variability between trials. This is due to the more significant post-breath-hold signal undershoot present in self-paced runs, a characteristic which confounds the definition of baseline and is difficult to accurately model. Interestingly, the specific respiratory rate imposed between breath-hold periods generally does not have a statistically significant impact on the BOLD signal change. This result can be explained by previous reports of humans adjusting their inhalation depth to compensate for changes in rate, with the end-goal of maintaining homeostatic ventilation. The advantage of using end-expiration relative to end-inspiration breath-hold is apparent in view of the high repeatability of the BOLD signal in the present study, which does not suffer from the previously reported high variability associated with uncontrolled inspiration depth when using the end-inspiration technique.

  5. Simultaneous multiplexed materials characterization using a high-precision hard X-ray micro-slit array

    DOE PAGES

    Zhang, Fan; Allen, Andrew J.; Levine, Lyle E.; ...

    2015-01-01

    Here, the needs both for increased experimental throughput and for in operando characterization of functional materials under increasingly realistic experimental conditions have emerged as major challenges across the whole of crystallography. A novel measurement scheme that allows multiplexed simultaneous measurements from multiple nearby sample volumes is presented. This new approach enables better measurement statistics or direct probing of heterogeneous structure, dynamics or elemental composition. To illustrate, the submicrometer precision that optical lithography provides has been exploited to create a multiplexed form of ultra-small-angle scattering based X-ray photon correlation spectroscopy (USAXS-XPCS) using micro-slit arrays fabricated by photolithography. Multiplexed USAXS-XPCS is applied to follow the equilibrium dynamics of a simple colloidal suspension. While the dependence of the relaxation time on momentum transfer, and its relationship with the diffusion constant and the static structure factor, follow previous findings, this measurements-in-parallel approach reduces the statistical uncertainties of this photon-starved technique to below those associated with the instrument resolution. More importantly, we note the potential of the multiplexed scheme to elucidate the response of different components of a heterogeneous sample under identical experimental conditions in simultaneous measurements. Lastly, in the context of the X-ray synchrotron community, this scheme is, in principle, applicable to all in-line synchrotron techniques. Indeed, it has the potential to open a new paradigm for in operando characterization of heterogeneous functional materials, a situation that will be even further enhanced by the ongoing development of multi-bend achromat storage ring designs as the next evolution of large-scale X-ray synchrotron facilities around the world.

  6. Screening by imaging: scaling up single-DNA-molecule analysis with a novel parabolic VA-TIRF reflector and noise-reduction techniques.

    PubMed

    van 't Hoff, Marcel; Reuter, Marcel; Dryden, David T F; Oheim, Martin

    2009-09-21

    Bacteriophage lambda-DNA molecules are frequently used as a scaffold to characterize the action of single proteins unwinding, translocating, digesting or repairing DNA. However, scaling up such single-DNA-molecule experiments under identical conditions to attain statistically relevant sample sizes remains challenging. Additionally, the movies obtained are frequently noisy and difficult to analyse with any precision. We address these two problems here using, firstly, a novel variable-angle total internal reflection fluorescence (VA-TIRF) reflector composed of a minimal set of optical reflective elements, and secondly, using singular value decomposition (SVD) to improve the signal-to-noise ratio prior to analysing time-lapse image stacks. As an example, we visualize under identical optical conditions hundreds of surface-tethered single lambda-DNA molecules, stained with the intercalating dye YOYO-1 iodide, and stretched out in a microcapillary flow. Another novelty of our approach is that we arrange on a mechanically driven stage several capillaries containing saline, calibration buffer and lambda-DNA, respectively, thus extending the approach to high-content, high-throughput screening of single molecules. Our length measurements of individual DNA molecules from noise-reduced kymograph images using SVD display a 6-fold enhanced precision compared to raw-data analysis, reaching approximately 1 kbp resolution. Combining these two methods, our approach provides a straightforward yet powerful way of collecting statistically relevant amounts of data in a semi-automated manner. We believe that our conceptually simple technique should be of interest for a broader range of single-molecule studies, well beyond the specific example of lambda-DNA shown here.
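
    A minimal sketch of SVD-based noise reduction of a time-lapse image stack, in the spirit described above: the stack is unfolded into a pixels-by-frames matrix and only the leading singular components are kept. The synthetic data and the chosen rank are illustrative assumptions.

    ```python
    import numpy as np

    def svd_denoise(stack, rank=3):
        """Reduce noise in a time-lapse image stack (frames x height x width) by
        keeping only the leading singular components of the (pixels x frames) matrix."""
        n_frames, h, w = stack.shape
        X = stack.reshape(n_frames, h * w).T          # columns = frames
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                                # discard low-variance (noise) components
        denoised = (U * s) @ Vt
        return denoised.T.reshape(n_frames, h, w)

    # Illustrative synthetic stack: a static stripe pattern plus heavy noise.
    rng = np.random.default_rng(6)
    signal = np.tile(np.sin(np.linspace(0, 3 * np.pi, 64)), (64, 1))
    stack = signal[None, :, :] + rng.normal(0, 0.8, (100, 64, 64))

    clean = svd_denoise(stack, rank=1)
    print("noise std before:", (stack - signal).std(), "after:", (clean - signal).std())
    ```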

  7. Traditional Postextractive Implant Site Preparation Compared with Pre-extractive Interradicular Implant Bed Preparation in the Mandibular Molar Region, Using an Ultrasonic Device: A Randomized Pilot Study.

    PubMed

    Scarano, Antonio

    The immediate placement of single postextractive implants is increasingly common in everyday clinical practice. Due to insufficient bone tissue volume, proper primary stability, essential for subsequent osseointegration, is sometimes not reached. The aim of this work was to compare two different approaches: implant bed preparation before and after root extraction. Twenty-two patients of both sexes were selected who needed implant-prosthetic rehabilitation of a fractured first mandibular molar or presented an untreatable endodontic pathology. The sites were randomly assigned to the test group (treated with implant bed preparation before molar extraction) or control group (treated with implant bed preparation after molar extraction) by a computer-generated table. All implants were placed by the same operator, who was experienced in both traditional and ultrasonic techniques. The implant stability quotient (ISQ) and the position of the implant were evaluated. Statistical analysis was carried out. In the control group, three implants were placed in the central portion of the bone septum, while eight implants were placed with a tilted axis in relation to the septum; in the test group, all implants were placed in ideal positions within the root extraction sockets. The difference in implant position between the two procedures was statistically significant. This work presented an innovative approach for implant placement at the time of mandibular molar extraction. Preparing the implant bed with an ultrasonic device before root extraction is a simple technique and also allows greater stability to be reached in selected cases.

  8. Technical Note: The Initial Stages of Statistical Data Analysis

    PubMed Central

    Tandy, Richard D.

    1998-01-01

    Objective: To provide an overview of several important data-related considerations in the design stage of a research project and to review the levels of measurement and their relationship to the statistical technique chosen for the data analysis. Background: When planning a study, the researcher must clearly define the research problem and narrow it down to specific, testable questions. The next steps are to identify the variables in the study, decide how to group and treat subjects, and determine how to measure, and the underlying level of measurement of, the dependent variables. Then the appropriate statistical technique can be selected for data analysis. Description: The four levels of measurement in increasing complexity are nominal, ordinal, interval, and ratio. Nominal data are categorical or “count” data, and the numbers are treated as labels. Ordinal data can be ranked in a meaningful order by magnitude. Interval data possess the characteristics of ordinal data and also have equal distances between levels. Ratio data have a natural zero point. Nominal and ordinal data are analyzed with nonparametric statistical techniques and interval and ratio data with parametric statistical techniques. Advantages: Understanding the four levels of measurement and when it is appropriate to use each is important in determining which statistical technique to use when analyzing data. PMID:16558489
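
    A short hedged sketch of how the level of measurement maps onto the choice of technique, using standard scipy.stats tests on invented data: a parametric t test for interval/ratio data, Mann-Whitney U as a nonparametric alternative for ordinal data, and a chi-square test for nominal counts.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    group_a = rng.normal(50, 10, 30)      # e.g., interval/ratio data such as strength scores
    group_b = rng.normal(55, 10, 30)

    # Interval/ratio (parametric): independent-samples t test.
    print("t test        :", stats.ttest_ind(group_a, group_b))

    # Ordinal, or interval data with doubtful assumptions (nonparametric): Mann-Whitney U.
    print("Mann-Whitney U:", stats.mannwhitneyu(group_a, group_b))

    # Nominal (count) data: chi-square test on a contingency table.
    table = np.array([[18, 12],           # e.g., injured vs not injured by treatment group
                      [10, 20]])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square    : statistic={chi2:.2f}, p={p:.3f}")
    ```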

  9. A joint source-channel distortion model for JPEG compressed images.

    PubMed

    Sabir, Muhammad F; Sheikh, Hamid Rahim; Heath, Robert W; Bovik, Alan C

    2006-06-01

    The need for efficient joint source-channel coding (JSCC) is growing as new multimedia services are introduced in commercial wireless communication systems. An important component of practical JSCC schemes is a distortion model that can predict the quality of compressed digital multimedia such as images and videos. The usual approach in the JSCC literature for quantifying the distortion due to quantization and channel errors is to estimate it for each image using the statistics of the image for a given signal-to-noise ratio (SNR). This is not an efficient approach in the design of real-time systems because of the computational complexity. A more useful and practical approach would be to design JSCC techniques that minimize average distortion for a large set of images based on some distortion model rather than carrying out per-image optimizations. However, models for estimating average distortion due to quantization and channel bit errors in a combined fashion for a large set of images are not available for practical image or video coding standards employing entropy coding and differential coding. This paper presents a statistical model for estimating the distortion introduced in progressive JPEG compressed images due to quantization and channel bit errors in a joint manner. Statistical modeling of important compression techniques such as Huffman coding, differential pulse-coding modulation, and run-length coding are included in the model. Examples show that the distortion in terms of peak signal-to-noise ratio (PSNR) can be predicted within a 2-dB maximum error over a variety of compression ratios and bit-error rates. To illustrate the utility of the proposed model, we present an unequal power allocation scheme as a simple application of our model. Results show that it gives a PSNR gain of around 6.5 dB at low SNRs, as compared to equal power allocation.

  10. Assessment of Surface Air Temperature over China Using Multi-criterion Model Ensemble Framework

    NASA Astrophysics Data System (ADS)

    Li, J.; Zhu, Q.; Su, L.; He, X.; Zhang, X.

    2017-12-01

    General Circulation Models (GCMs) are designed to simulate the present climate and project future trends. It has been noticed that the performances of GCMs are not always in agreement with each other over different regions. Model ensemble techniques have been developed to post-process the GCMs' outputs and improve their prediction reliabilities. To evaluate the performances of GCMs, root-mean-square error, correlation coefficient, and uncertainty are commonly used statistical measures. However, the simultaneous achievement of satisfactory values for all of these statistics cannot be guaranteed when using many model ensemble techniques. Meanwhile, uncertainties and future scenarios are critical for Water-Energy management and operation. In this study, a new multi-model ensemble framework was proposed. It uses a state-of-the-art evolutionary multi-objective optimization algorithm, termed Multi-Objective Complex Evolution Global Optimization with Principal Component Analysis and Crowding Distance (MOSPD), to derive optimal GCM ensembles and demonstrate the trade-offs among various solutions. Such trade-off information was further analyzed with a robust Pareto front with respect to different statistical measures. A case study was conducted to optimize the surface air temperature (SAT) ensemble solutions over seven geographical regions of China for the historical period (1900-2005) and future projection (2006-2100). The results showed that the ensemble solutions derived with the MOSPD algorithm are superior to the simple model average and any single model output during the historical simulation period. For the future prediction, the proposed ensemble framework identified that the largest SAT change would occur in the South Central China under RCP 2.6 scenario, North Eastern China under RCP 4.5 scenario, and North Western China under RCP 8.5 scenario, while the smallest SAT change would occur in the Inner Mongolia under RCP 2.6 scenario, South Central China under RCP 4.5 scenario, and South Central China under RCP 8.5 scenario.
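
    The MOSPD algorithm itself is not reproduced here; as a much simpler stand-in, the sketch below weights several invented model series by the inverse of their RMSE against observations to form a single ensemble, purely to illustrate the general idea of multi-model ensemble post-processing.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Hypothetical stand-in data: "observed" SAT anomalies and three model simulations.
    obs = np.sin(np.linspace(0, 6, 120)) + rng.normal(0, 0.1, 120)
    models = np.stack([obs + rng.normal(b, s, 120)
                       for b, s in [(0.3, 0.2), (-0.1, 0.4), (0.0, 0.6)]])

    # A very simple ensemble: weight each model by the inverse of its RMSE against obs.
    rmse = np.sqrt(((models - obs) ** 2).mean(axis=1))
    weights = (1.0 / rmse) / (1.0 / rmse).sum()
    ensemble = weights @ models

    print("model RMSEs   :", np.round(rmse, 3))
    print("ensemble RMSE :", round(float(np.sqrt(((ensemble - obs) ** 2).mean())), 3))
    print("weights       :", np.round(weights, 3))
    ```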

  11. Are Assumptions of Well-Known Statistical Techniques Checked, and Why (Not)?

    PubMed Central

    Hoekstra, Rink; Kiers, Henk A. L.; Johnson, Addie

    2012-01-01

    A valid interpretation of most statistical techniques requires that one or more assumptions be met. In published articles, however, little information tends to be reported on whether the data satisfy the assumptions underlying the statistical techniques used. This could be due to self-selection: Only manuscripts with data fulfilling the assumptions are submitted. Another explanation could be that violations of assumptions are rarely checked for in the first place. We studied whether and how 30 researchers checked fictitious data for violations of assumptions in their own working environment. Participants were asked to analyze the data as they would their own data, for which often used and well-known techniques such as the t-procedure, ANOVA and regression (or non-parametric alternatives) were required. It was found that the assumptions of the techniques were rarely checked, and that if they were, it was regularly by means of a statistical test. Interviews afterward revealed a general lack of knowledge about assumptions, the robustness of the techniques with regards to the assumptions, and how (or whether) assumptions should be checked. These data suggest that checking for violations of assumptions is not a well-considered choice, and that the use of statistics can be described as opportunistic. PMID:22593746
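
    As an aside, the kinds of checks at issue can be sketched in a few lines; note that the study above questions the practice of testing assumptions with significance tests, so this is only an illustration of what such checks look like (invented data, conventional 0.05 cutoffs).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(10)
    group_a = rng.normal(10, 2, 30)
    group_b = rng.normal(11, 2, 30)

    # Normality of each group (Shapiro-Wilk) and homogeneity of variances (Levene).
    _, p_norm_a = stats.shapiro(group_a)
    _, p_norm_b = stats.shapiro(group_b)
    _, p_var = stats.levene(group_a, group_b)
    print(f"Shapiro p-values: {p_norm_a:.3f}, {p_norm_b:.3f}; Levene p-value: {p_var:.3f}")

    # Choose the technique accordingly: t test if the assumptions look tenable,
    # otherwise a nonparametric alternative such as Mann-Whitney U.
    if min(p_norm_a, p_norm_b, p_var) > 0.05:
        print("t test:", stats.ttest_ind(group_a, group_b))
    else:
        print("Mann-Whitney U:", stats.mannwhitneyu(group_a, group_b))
    ```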

  12. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures

    PubMed Central

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-01-01

    Aims and Objectives: The objective of the present study is to compare the effectiveness of three different processing techniques and to determine the accuracy of the processing techniques through the number of occlusal interferences and the increase in vertical dimension after denture processing. Materials and Methods: A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in SPSS software version 19.0 (IBM). Results: Data obtained from the three groups were subjected to a one-way ANOVA test; results with significant variation were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions than with the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was greater with the compression molding technique than with the injection molding techniques, which is statistically significant (P < 0.001). Conclusions: Within the limitations of this study, the injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors reported between the two injection molding systems. PMID:28713763

  13. Evaluation of Three Different Processing Techniques in the Fabrication of Complete Dentures.

    PubMed

    Chintalacheruvu, Vamsi Krishna; Balraj, Rajasekaran Uttukuli; Putchala, Lavanya Sireesha; Pachalla, Sreelekha

    2017-06-01

    The objective of the present study is to compare the effectiveness of three different processing techniques and to determine the accuracy of the processing techniques through the number of occlusal interferences and the increase in vertical dimension after denture processing. A cross-sectional study was conducted on a sample of 18 patients indicated for complete denture fabrication, divided into three subgroups. Three processing techniques, compression molding and injection molding using prepolymerized resin and unpolymerized resin, were used to fabricate dentures for each of the groups. After processing, laboratory-remounted dentures were evaluated for the number of occlusal interferences in centric and eccentric relations and the change in vertical dimension through vertical pin rise in the articulator. Data were analyzed using one-way ANOVA in SPSS software version 19.0 (IBM). Data obtained from the three groups were subjected to a one-way ANOVA test; results with significant variation were then subjected to a post hoc test. The number of occlusal interferences with the compression molding technique was higher in both centric and eccentric positions than with the two injection molding techniques, with statistical significance in centric, protrusive, right lateral nonworking, and left lateral working positions (P < 0.05). Mean vertical pin rise (0.52 mm) was greater with the compression molding technique than with the injection molding techniques, which is statistically significant (P < 0.001). Within the limitations of this study, the injection molding techniques exhibited fewer processing errors than the compression molding technique, with statistical significance. There was no statistically significant difference in processing errors reported between the two injection molding systems.

  14. Use of refractometry and colorimetry as field methods to rapidly assess antimalarial drug quality.

    PubMed

    Green, Michael D; Nettey, Henry; Villalva Rojas, Ofelia; Pamanivong, Chansapha; Khounsaknalath, Lamphet; Grande Ortiz, Miguel; Newton, Paul N; Fernández, Facundo M; Vongsack, Latsamy; Manolin, Ot

    2007-01-04

    The proliferation of counterfeit and poor-quality drugs is a major public health problem; especially in developing countries lacking adequate resources to effectively monitor their prevalence. Simple and affordable field methods provide a practical means of rapidly monitoring drug quality in circumstances where more advanced techniques are not available. Therefore, we have evaluated refractometry, colorimetry and a technique combining both processes as simple and accurate field assays to rapidly test the quality of the commonly available antimalarial drugs; artesunate, chloroquine, quinine, and sulfadoxine. Method bias, sensitivity, specificity and accuracy relative to high-performance liquid chromatographic (HPLC) analysis of drugs collected in the Lao PDR were assessed for each technique. The HPLC method for each drug was evaluated in terms of assay variability and accuracy. The accuracy of the combined method ranged from 0.96 to 1.00 for artesunate tablets, chloroquine injectables, quinine capsules, and sulfadoxine tablets while the accuracy was 0.78 for enterically coated chloroquine tablets. These techniques provide a generally accurate, yet simple and affordable means to assess drug quality in resource-poor settings.
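
    A small sketch of how the field methods' sensitivity, specificity and accuracy relative to HPLC could be tallied from paired pass/fail calls; the data below are invented for illustration.

    ```python
    import numpy as np

    # Hypothetical pass/fail calls for 40 samples: 1 = substandard, 0 = acceptable.
    # 'hplc' is the reference method; 'field' is the refractometry/colorimetry call.
    rng = np.random.default_rng(9)
    hplc = rng.integers(0, 2, 40)
    field = np.where(rng.random(40) < 0.9, hplc, 1 - hplc)   # field agrees ~90% of the time

    tp = np.sum((field == 1) & (hplc == 1))
    tn = np.sum((field == 0) & (hplc == 0))
    fp = np.sum((field == 1) & (hplc == 0))
    fn = np.sum((field == 0) & (hplc == 1))

    sensitivity = tp / (tp + fn)      # substandard samples correctly flagged
    specificity = tn / (tn + fp)      # acceptable samples correctly passed
    accuracy = (tp + tn) / len(hplc)  # overall agreement with HPLC
    print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  accuracy={accuracy:.2f}")
    ```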

  15. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
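
    A hedged sketch of the two complementary analyses described above, applied to invented paired assay outputs: Bland-Altman bias with 95% limits of agreement, and Deming regression (error-variance ratio assumed equal to 1) to separate constant error (intercept) from proportional error (slope).

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical paired quantitative outputs (e.g., variant allele fractions) from a
    # reference assay (x) and the NGS assay under validation (y).
    x = rng.uniform(0.05, 0.95, 60)
    y = 0.02 + 1.05 * x + rng.normal(0, 0.03, x.size)   # small constant + proportional error

    # Bland-Altman: bias and 95% limits of agreement on the paired differences.
    diff = y - x
    bias = diff.mean()
    loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
    print(f"Bland-Altman bias={bias:.4f}, limits of agreement={loa}")

    # Deming regression with error-variance ratio assumed = 1: the intercept captures
    # constant error, the slope captures proportional error.
    sxx = np.var(x, ddof=1); syy = np.var(y, ddof=1); sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    print(f"Deming regression: intercept={intercept:.4f}, slope={slope:.4f}")
    ```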

  16. Deriving Criteria-supporting Benchmark Values from Empirical Response Relationships: Comparison of Statistical Techniques and Effect of Log-transforming the Nutrient Variable

    EPA Science Inventory

    In analyses supporting the development of numeric nutrient criteria, multiple statistical techniques can be used to extract critical values from stressor response relationships. However there is little guidance for choosing among techniques, and the extent to which log-transfor...

  17. Incorporating Nonparametric Statistics into Delphi Studies in Library and Information Science

    ERIC Educational Resources Information Center

    Ju, Boryung; Jin, Tao

    2013-01-01

    Introduction: The Delphi technique is widely used in library and information science research. However, many researchers in the field fail to employ standard statistical tests when using this technique. This makes the technique vulnerable to criticisms of its reliability and validity. The general goal of this article is to explore how…

  18. SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit

    PubMed Central

    Chu, Annie; Cui, Jenny; Dinov, Ivo D.

    2011-01-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994

  19. Direct Detection of Singlet-Triplet Interconversion in OLED Magnetoelectroluminescence with a Metal-Free Fluorescence-Phosphorescence Dual Emitter

    NASA Astrophysics Data System (ADS)

    Ratzke, Wolfram; Bange, Sebastian; Lupton, John M.

    2018-05-01

    We demonstrate that a simple phenazine derivative can serve as a dual emitter for organic light-emitting diodes, showing simultaneous luminescence from the singlet and triplet excited states at room temperature without the need of heavy-atom substituents. Although devices made with this emitter achieve only low quantum efficiencies of <0.2%, changes in fluorescence and phosphorescence intensity on the subpercent scale caused by an external magnetic field of up to 30 mT are clearly resolved with an ultra-low-noise optical imaging technique. The results demonstrate the concept of using simple reporter molecules, available commercially, to optically detect the spin of excited states formed in an organic light-emitting diode and thereby probe the underlying spin statistics of recombining electron-hole pairs. A clear anticorrelation of the magnetic-field dependence of singlet and triplet emission shows that it is the spin interconversion between singlet and triplet which dominates the magnetoluminescence response: the phosphorescence intensity decreases by the same amount as the fluorescence intensity increases. The concurrent detection of singlet and triplet emission as well as device resistance at cryogenic and room temperature constitute a useful tool to disentangle the effects of spin-dependent recombination from spin-dependent transport mechanisms.

  20. Simple surgical approach with high-frequency radio-wave electrosurgery for conjunctivochalasis.

    PubMed

    Youm, Dong Ju; Kim, Joon Mo; Choi, Chul Young

    2010-11-01

    To introduce a new simple surgical approach with high-frequency radio-wave electrosurgery to reduce conjunctivochalasis (CCh). Prospective, noncomparative, interventional case series analysis. Twelve patients (20 eyes) with CCh were recruited from the outpatient service of the Department of Ophthalmology, Kangbuk Samsung Hospital, Seoul, Korea. On the inferior bulbar conjunctiva, subconjunctival coagulation was performed with a fine-needle electrode using a high-frequency radio-wave electrosurgical unit (Ellman Surgitron; Ellman International, Inc., Hewlett, NY) in coagulation mode. Conjunctivochalasis grade; epiphora and dry eye symptoms (the Ocular Surface Disease Index [OSDI]; Allergan Inc., Irvine, CA, holds the copyright); and intraoperative and postoperative complications. Eighteen eyes (90%) recovered a smooth, wet, and noninflamed conjunctival surface within 1 month and remained stable for a follow-up period of 3 months. At 3 months postoperatively, 18 eyes (90%) had grade 0 CCh. There was a statistically significant decrease of the OSDI score at 3 months postoperatively (P < 0.001). A surgical approach with high-frequency radio-wave electrosurgery produced a significant reduction in CCh and an improvement in symptoms. Radio-wave surgical techniques represent a favorable alternative to surgical treatment of CCh. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2010 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
