Parolini, Giuditta
2015-01-01
During the twentieth century statistical methods have transformed research in the experimental and social sciences. Qualitative evidence has largely been replaced by quantitative results and the tools of statistical inference have helped foster a new ideal of objectivity in scientific knowledge. The paper will investigate this transformation by considering the genesis of analysis of variance and experimental design, statistical methods nowadays taught in every elementary course of statistics for the experimental and social sciences. These methods were developed by the mathematician and geneticist R. A. Fisher during the 1920s, while he was working at Rothamsted Experimental Station, where agricultural research was in turn reshaped by Fisher's methods. Analysis of variance and experimental design required new practices and instruments in field and laboratory research, and imposed a redistribution of expertise among statisticians, experimental scientists and the farm staff. On the other hand the use of statistical methods in agricultural science called for a systematization of information management and made computing an activity integral to the experimental research done at Rothamsted, permanently integrating the statisticians' tools and expertise into the station research programme. Fisher's statistical methods did not remain confined within agricultural research and by the end of the 1950s they had come to stay in psychology, sociology, education, chemistry, medicine, engineering, economics, quality control, just to mention a few of the disciplines which adopted them.
Experimental Mathematics and Computational Statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, David H.; Borwein, Jonathan M.
2009-04-30
The field of statistics has long been noted for techniques to detect patterns and regularities in numerical data. In this article we explore connections between statistics and the emerging field of 'experimental mathematics'. These include both applications of experimental mathematics in statistics and statistical methods applied to computational mathematics.
Statistical Methodologies to Integrate Experimental and Computational Research
NASA Technical Reports Server (NTRS)
Parker, P. A.; Johnson, R. T.; Montgomery, D. C.
2008-01-01
Development of advanced algorithms for simulating engine flow paths requires the integration of fundamental experiments with the validation of enhanced mathematical models. In this paper, we provide an overview of statistical methods to strategically and efficiently conduct experiments and computational model refinement. Moreover, the integration of experimental and computational research efforts is emphasized. With a statistical engineering perspective, scientific and engineering expertise is combined with statistical sciences to gain deeper insights into experimental phenomena and code development performance, supporting the overall research objectives. The particular statistical methods discussed are design of experiments, response surface methodology, and uncertainty analysis and planning. Their application is illustrated with a coaxial free jet experiment and a turbulence model refinement investigation. Our goal is to provide an overview, focusing on concepts rather than practice, to demonstrate the benefits of using statistical methods in research and development, thereby encouraging their broader and more systematic application.
1989-03-01
statistical energy analysis, the finite element method, and the power flow method. Experimental solutions are the most common in the literature. The authors of...to the added weights and inertias of the transducers attached to an experimental structure. Statistical energy analysis (SEA) is a computational method...Analysis and Diagnosis," Journal of Sound and Vibration, Vol. 115, No. 3, pp. 405-422 (1987). 8. Lyon, R.L., Statistical Energy Analysis of Dynamical Systems
A Mechanical Power Flow Capability for the Finite Element Code NASTRAN
1989-07-01
experimental methods, statistical energy analysis, the finite element method, and a finite element analogy using heat conduction equations. Experimental...weights and inertias of the transducers attached to an experimental structure may produce accuracy problems. Statistical energy analysis (SEA) is a...405-422 (1987). 8. Lyon, R.L., Statistical Energy Analysis of Dynamical Systems, The M.I.T. Press, (1975). 9. Mickol, J.D., and R.J. Bernhard, "An
2015-06-30
7. Building Statistical Metamodels using Simulation Experimental Designs ... 7.1. Statistical Design...system design drivers across several different domain models, our methodology uses statistical metamodeling to approximate the simulations' behavior. A...output. We build metamodels using a number of statistical methods that include stepwise regression, boosted trees, neural nets, and bootstrap forest
ERIC Educational Resources Information Center
Karadag, Engin
2010-01-01
To assess research methods and analysis of statistical techniques employed by educational researchers, this study surveyed unpublished doctoral dissertations from 2003 to 2007. Frequently used research methods consisted of experimental research, surveys, correlational studies, and case studies. Descriptive statistics, t-test, ANOVA, factor…
Alsaggaf, Rotana; O'Hara, Lyndsay M; Stafford, Kristen A; Leekha, Surbhi; Harris, Anthony D
2018-02-01
OBJECTIVE A systematic review of quasi-experimental studies in the field of infectious diseases was published in 2005. The aim of this study was to assess improvements in the design and reporting of quasi-experiments 10 years after the initial review. We also aimed to report the statistical methods used to analyze quasi-experimental data. DESIGN Systematic review of articles published from January 1, 2013, to December 31, 2014, in 4 major infectious disease journals. METHODS Quasi-experimental studies focused on infection control and antibiotic resistance were identified and classified based on 4 criteria: (1) type of quasi-experimental design used, (2) justification of the use of the design, (3) use of correct nomenclature to describe the design, and (4) statistical methods used. RESULTS Of 2,600 articles, 173 (7%) featured a quasi-experimental design, compared to 73 of 2,320 articles (3%) in the previous review (P<.01). Moreover, 21 articles (12%) utilized a study design with a control group; 6 (3.5%) justified the use of a quasi-experimental design; and 68 (39%) identified their design using the correct nomenclature. In addition, 2-group statistical tests were used in 75 studies (43%); 58 studies (34%) used standard regression analysis; 18 (10%) used segmented regression analysis; 7 (4%) used standard time-series analysis; 5 (3%) used segmented time-series analysis; and 10 (6%) did not utilize statistical methods for comparisons. CONCLUSIONS While some progress occurred over the decade, it is crucial to continue improving the design and reporting of quasi-experimental studies in the fields of infection control and antibiotic resistance to better evaluate the effectiveness of important interventions. Infect Control Hosp Epidemiol 2018;39:170-176.
Experimental design and statistical methods for improved hit detection in high-throughput screening.
Malo, Nathalie; Hanley, James A; Carlile, Graeme; Liu, Jing; Pelletier, Jerry; Thomas, David; Nadon, Robert
2010-09-01
Identification of active compounds in high-throughput screening (HTS) contexts can be substantially improved by applying classical experimental design and statistical inference principles to all phases of HTS studies. The authors present both experimental and simulated data to illustrate how true-positive rates can be maximized without increasing false-positive rates by the following analytical process. First, the use of robust data preprocessing methods reduces unwanted variation by removing row, column, and plate biases. Second, replicate measurements allow estimation of the magnitude of the remaining random error and the use of formal statistical models to benchmark putative hits relative to what is expected by chance. Receiver Operating Characteristic (ROC) analyses revealed superior power for data preprocessed by a trimmed-mean polish method combined with the RVM t-test, particularly for small- to moderate-sized biological hits.
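As a rough illustration of the workflow described above (not the authors' exact trimmed-mean polish or RVM t-test), the sketch below removes row and column plate biases with a median polish and benchmarks wells against chance with a plain one-sample t-test; the plate data, replicate count, and significance threshold are all hypothetical.

    # Minimal sketch: correct plate position biases, then flag wells whose
    # replicate measurements depart from the plate background.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    plates = rng.normal(0.0, 1.0, size=(6, 8, 12))   # 6 replicate plates of 8x12 wells
    plates[:, 2, 5] += 4.0                            # one synthetic "hit"

    def median_polish(plate, n_iter=10):
        """Iteratively remove additive row and column effects from one plate."""
        resid = plate.copy()
        for _ in range(n_iter):
            resid -= np.median(resid, axis=1, keepdims=True)   # row effects
            resid -= np.median(resid, axis=0, keepdims=True)   # column effects
        return resid

    resid = np.stack([median_polish(p) for p in plates])

    # Benchmark each well against zero, the expected value for inactive compounds.
    t_stat, p_val = stats.ttest_1samp(resid, popmean=0.0, axis=0)
    print("putative hits (row, col):", np.argwhere(p_val < 0.001))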
Explorations in Statistics: Permutation Methods
ERIC Educational Resources Information Center
Curran-Everett, Douglas
2012-01-01
Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This eighth installment of "Explorations in Statistics" explores permutation methods, empiric procedures we can use to assess an experimental result--to test a null hypothesis--when we are reluctant to trust statistical…
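A minimal sketch of the permutation idea explored in this installment, using made-up two-group data rather than the article's own examples:

    # Permutation test: shuffle group labels and recompute the mean difference.
    import numpy as np

    rng = np.random.default_rng(1)
    control = np.array([4.1, 3.8, 5.0, 4.4, 4.7])
    treated = np.array([5.2, 5.9, 4.8, 6.1, 5.5])

    observed = treated.mean() - control.mean()
    pooled = np.concatenate([control, treated])
    n_treated = len(treated)

    n_perm = 10_000
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[:n_treated].mean() - perm[n_treated:].mean()
        if abs(diff) >= abs(observed):
            exceed += 1

    p_value = (exceed + 1) / (n_perm + 1)   # add-one rule avoids p = 0
    print(f"observed difference = {observed:.2f}, permutation p = {p_value:.4f}")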
ERIC Educational Resources Information Center
Hedges, Sarai
2017-01-01
The statistics education community continues to explore the differences in performance outcomes and in student attitudes between online and face-to-face delivery methods of statistics courses. In this quasi-experimental study student persistence, exam, quiz, and homework scores were compared between delivery methods, class status, and programs of…
ERIC Educational Resources Information Center
Geske, Jenenne A.; Mickelson, William T.; Bandalos, Deborah L.; Jonson, Jessica; Smith, Russell W.
The bulk of experimental research related to reforms in the teaching of statistics concentrates on the effects of alternative teaching methods on statistics achievement. This study expands on that research by including an examination of the effects of instructor and the interaction between instructor and method on achievement as well as attitudes,…
The Shock and Vibration Digest. Volume 16, Number 1
1984-01-01
investigation of the measure- ment of frequency band average loss factors of structural components for use in the statistical energy analysis method of...stiffness. Matrix methods Key Words: Finite element technique. Statistical energy analysis . Experimental techniques. Framed structures, Com- puter...programs In order to further understand the practical application of the statistical energy analysis , a two section plate-like frame structure is
[Review of research design and statistical methods in Chinese Journal of Cardiology].
Zhang, Li-jun; Yu, Jin-ming
2009-07-01
To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional design (34%), prospective design (21%) and experimental design (25%). Of all the articles, 49 (25%) used incorrect statistical methods, 29 (15%) lacked some form of statistical analysis, and 23 (12%) described their statistical methods inconsistently. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated-measurement data were not analyzed with repeated-measures methods. Many problems exist in the Chinese Journal of Cardiology. Better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.
Shardell, Michelle; Harris, Anthony D; El-Kamary, Samer S; Furuno, Jon P; Miller, Ram R; Perencevich, Eli N
2007-10-01
Quasi-experimental study designs are frequently used to assess interventions that aim to limit the emergence of antimicrobial-resistant pathogens. However, previous studies using these designs have often used suboptimal statistical methods, which may result in researchers making spurious conclusions. Methods used to analyze quasi-experimental data include 2-group tests, regression analysis, and time-series analysis, and they all have specific assumptions, data requirements, strengths, and limitations. An example of a hospital-based intervention to reduce methicillin-resistant Staphylococcus aureus infection rates and reduce overall length of stay is used to explore these methods.
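One of the approaches named here, segmented regression of an interrupted time series, can be sketched as follows; the monthly infection rates and the intervention month are invented for illustration and are not from the cited example.

    # Segmented regression: allow a level change and a slope change at the
    # intervention point, estimated by ordinary least squares.
    import numpy as np
    import statsmodels.api as sm

    rates = np.array([8.2, 7.9, 8.4, 8.1, 8.3, 8.0, 6.1, 5.8, 5.5, 5.6, 5.2, 5.0])
    months = np.arange(len(rates))
    intervention = 6                                   # first post-intervention month

    post = (months >= intervention).astype(float)      # level-change indicator
    time_after = np.where(post == 1, months - intervention, 0)  # slope-change term

    X = sm.add_constant(np.column_stack([months, post, time_after]))
    fit = sm.OLS(rates, X).fit()
    print(fit.params)   # intercept, pre-trend, level change, change in trend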
Reliability approach to rotating-component design. [fatigue life and stress concentration
NASA Technical Reports Server (NTRS)
Kececioglu, D. B.; Lalli, V. R.
1975-01-01
A probabilistic methodology for designing rotating mechanical components using reliability to relate stress to strength is explained. The experimental test machines and data obtained for steel to verify this methodology are described. A sample mechanical rotating component design problem is solved by comparing a deterministic design method with the new design-by-reliability approach. The new method shows that a smaller size and weight can be obtained for a specified rotating shaft life and reliability, and uses the statistical distortion-energy theory with statistical fatigue diagrams for optimum shaft design. Statistical methods are presented for (1) determining strength distributions for steel experimentally, (2) determining a failure theory for stress variations in a rotating shaft subjected to reversed bending and steady torque, and (3) relating strength to stress by reliability.
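When both stress and strength are modelled as normal distributions, the stress-strength reliability described above has a simple closed form; the sketch below uses invented means and standard deviations purely for illustration.

    # Reliability = P(strength > stress) for independent normal stress and strength:
    # R = Phi((mu_S - mu_s) / sqrt(sigma_S**2 + sigma_s**2)).
    from scipy.stats import norm

    mu_strength, sd_strength = 620.0, 35.0   # hypothetical values, e.g. MPa
    mu_stress, sd_stress = 480.0, 50.0

    z = (mu_strength - mu_stress) / (sd_strength**2 + sd_stress**2) ** 0.5
    print(f"estimated reliability = {norm.cdf(z):.4f}")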
[Triple-type theory of statistics and its application in the scientific research of biomedicine].
Hu, Liang-ping; Liu, Hui-gang
2005-07-20
The aim is to point out the crux of why so many people fail to grasp statistics and to put forward a "triple-type theory of statistics" to solve the problem in a creative way. Based on long experience in teaching and research in statistics, the triple-type theory was formulated and clarified. Examples are provided to demonstrate that the three types, i.e., the expressive type, the prototype and the standardized type, are essential for people to apply statistics rationally in both theory and practice; moreover, it is demonstrated by some instances that the three types are correlated with each other. The theory helps people to see the essence of a problem when interpreting and analyzing experimental designs and statistical analyses in medical research. Investigations reveal that for some questions the three types are mutually identical; for some questions the prototype is the standardized type; and for others the three types are distinct from each other. In some multifactor experimental studies the standardized type corresponding to the prototype does not exist at all, because the researchers have committed the mistake of "incomplete control" in setting up the experimental groups; this is a problem which should be solved by the concept and method of "division". Once the triple type for each question is clarified, a proper experimental design and statistical method can be chosen easily. The triple-type theory of statistics can help people to avoid statistical mistakes, or at least to decrease the misuse rate dramatically, and to improve the quality, level and speed of biomedical research when applying statistics. It can also help to improve the quality of statistical textbooks and the teaching of statistics, and it demonstrates how to advance biomedical statistics.
NASA Astrophysics Data System (ADS)
Koparan, Timur
2016-02-01
In this study, the effect of dynamic statistics software on the achievement and attitudes of prospective teachers is examined. To this end, an achievement test, an attitude scale for statistics and interviews were used as data collection tools. The achievement test comprises 8 problems based on statistical data, and the attitude scale comprises 13 Likert-type items. The study was carried out in the fall semester of the 2014-2015 academic year at a university in Turkey. The study, which employed the pre-test/post-test control group design of the quasi-experimental research method, was carried out with a group of 80 prospective teachers, 40 in the control group and 40 in the experimental group. Both groups had four-hour classes about descriptive statistics. The classes with the control group were carried out through traditional methods while dynamic statistics software was used in the experimental group. Five prospective teachers from the experimental group were interviewed clinically after the application for a deeper examination of their views about it. The qualitative data gained are presented under various themes. At the end of the study, it was found that there is a significant difference in favour of the experimental group in terms of achievement and attitudes, and that the prospective teachers have an affirmative approach to the use of dynamic software and see it as an effective tool to enrich maths classes. In accordance with the findings of the study, it is suggested that dynamic software, which offers unique opportunities, be used in classes by teachers and students.
In pursuit of a science of agriculture: the role of statistics in field experiments.
Parolini, Giuditta
2015-09-01
Since the beginning of the twentieth century statistics has reshaped the experimental cultures of agricultural research, taking part in the subtle dialectic between the epistemic and the material that is proper to experimental systems. This transformation has become especially relevant in field trials, and the paper will examine the British agricultural institution, Rothamsted Experimental Station, where statistical methods nowadays popular in the planning and analysis of field experiments were developed in the 1920s. At Rothamsted statistics promoted randomisation over systematic arrangements, factorisation over one-question trials, and emphasised the importance of the experimental error in assessing field trials. These changes in methodology also transformed the material culture of agricultural science, and a new body, the Field Plots Committee, was created to manage the field research of the institution. Although successful, the vision of field experimentation proposed by the Rothamsted statisticians was not unproblematic. Experimental scientists closely linked to the farming community questioned it in favour of field research that could be more easily understood by farmers. The clash between the two agendas reveals how the role attributed to statistics in field experimentation defined different pursuits of agricultural research, alternately conceived of as a scientists' science or as a farmers' science.
Study/experimental/research design: much more than statistics.
Knight, Kenneth L
2010-01-01
The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes "Methods" sections hard to read and understand. To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.
Effectiveness of Project Based Learning in Statistics for Lower Secondary Schools
ERIC Educational Resources Information Center
Siswono, Tatag Yuli Eko; Hartono, Sugi; Kohar, Ahmad Wachidul
2018-01-01
Purpose: This study aimed at investigating the effectiveness of implementing Project Based Learning (PBL) on the topic of statistics at a lower secondary school in Surabaya city, Indonesia, indicated by examining student learning outcomes, student responses, and student activity. Research Methods: A quasi experimental method was conducted over two…
Sarkar, Sumona; Lund, Steven P; Vyzasatya, Ravi; Vanguri, Padmavathy; Elliott, John T; Plant, Anne L; Lin-Gibson, Sheng
2017-12-01
Cell counting measurements are critical in the research, development and manufacturing of cell-based products, yet determining cell quantity with accuracy and precision remains a challenge. Validating and evaluating a cell counting measurement process can be difficult because of the lack of appropriate reference material. Here we describe an experimental design and statistical analysis approach to evaluate the quality of a cell counting measurement process in the absence of appropriate reference materials or reference methods. The experimental design is based on a dilution series study with replicate samples and observations as well as measurement process controls. The statistical analysis evaluates the precision and proportionality of the cell counting measurement process and can be used to compare the quality of two or more counting methods. As an illustration of this approach, cell counting measurement processes (automated and manual methods) were compared for a human mesenchymal stromal cell (hMSC) preparation. For the hMSC preparation investigated, results indicated that the automated method performed better than the manual counting methods in terms of precision and proportionality. By conducting well controlled dilution series experimental designs coupled with appropriate statistical analysis, quantitative indicators of repeatability and proportionality can be calculated to provide an assessment of cell counting measurement quality. This approach does not rely on the use of a reference material or comparison to "gold standard" methods known to have limited assurance of accuracy and precision. The approach presented here may help the selection, optimization, and/or validation of a cell counting measurement process. Published by Elsevier Inc.
A practical approach for the scale-up of roller compaction process.
Shi, Weixian; Sprockel, Omar L
2016-09-01
An alternative approach for the scale-up of ribbon formation during roller compaction was investigated, which required only one batch at the commercial scale to set the operational conditions. The scale-up of ribbon formation was based on a probability method. It was sufficient in describing the mechanism of ribbon formation at both scales. In this method, a statistical relationship between roller compaction parameters and ribbon attributes (thickness and density) was first defined with DoE using a pilot Alexanderwerk WP120 roller compactor. While the milling speed was included in the design, it has no practical effect on granule properties within the study range despite its statistical significance. The statistical relationship was then adapted to a commercial Alexanderwerk WP200 roller compactor with one experimental run. The experimental run served as a calibration of the statistical model parameters. The proposed transfer method was then confirmed by conducting a mapping study on the Alexanderwerk WP200 using a factorial DoE, which showed a match between the predictions and the verification experiments. The study demonstrates the applicability of the roller compaction transfer method using the statistical model from the development scale calibrated with one experiment point at the commercial scale. Copyright © 2016 Elsevier B.V. All rights reserved.
Görgen, Kai; Hebart, Martin N; Allefeld, Carsten; Haynes, John-Dylan
2017-12-27
Standard neuroimaging data analysis based on traditional principles of experimental design, modelling, and statistical inference is increasingly complemented by novel analysis methods, driven, e.g., by machine learning. While these novel approaches provide new insights into neuroimaging data, they often have unexpected properties, generating a growing literature on possible pitfalls. We propose to meet this challenge by adopting a habit of systematic testing of experimental design, analysis procedures, and statistical inference. Specifically, we suggest applying the analysis method used for experimental data also to aspects of the experimental design, simulated confounds, simulated null data, and control data. We stress the importance of keeping the analysis method the same in the main and test analyses, because only in this way can possible confounds and unexpected properties be reliably detected and avoided. We describe and discuss this Same Analysis Approach in detail, and demonstrate it in two worked examples using multivariate decoding. With these examples, we reveal two sources of error: a mismatch between counterbalancing (crossover designs) and cross-validation, which leads to systematic below-chance accuracies, and linear decoding of a nonlinear effect, a difference in variance. Copyright © 2017 Elsevier Inc. All rights reserved.
Wang, Quanfu; Hou, Yanhua; Yan, Peisheng
2012-06-01
Statistical experimental designs were employed to optimize culture conditions for cold-adapted lysozyme production by the psychrophilic yeast Debaryomyces hansenii. In the first step of optimization using a Plackett-Burman design (PBD), peptone, glucose, temperature, and NaCl were identified as significant variables that affected lysozyme production; the formula was further optimized using a four-factor central composite design (CCD) to understand their interaction and to determine their optimal levels. A quadratic model was developed and validated. Compared to the initial level (18.8 U/mL), the maximum lysozyme production observed (65.8 U/mL) was increased approximately 3.5-fold under the optimized conditions. Cold-adapted lysozyme production was thus optimized for the first time using statistical experimental methods, and a 3.5-fold enhancement of microbial lysozyme was gained after optimization. Such improved production will facilitate the application of microbial lysozyme. Thus, D. hansenii lysozyme may be a good new resource for the industrial production of cold-adapted lysozymes. © 2012 Institute of Food Technologists®
Effects of special composite stretching on the swing of amateur golf players
Lee, Joong-chul; Lee, Sung-wan; Yeo, Yun-ghi; Park, Gi Duck
2015-01-01
[Purpose] The study investigated stretching for a safer golf swing, compared with existing stretching methods for proper swings, in order to examine the effects of stretching exercises on golf swings. [Subjects] The subjects were 20 amateur golf club members who were divided into two groups: an experimental group which performed stretching, and a control group which did not. The subjects had no bone deformity, muscle weakness, muscle soreness, or neurological problems. [Methods] A swing analyzer and a ROM measuring instrument were used as the measuring tools. The swing analyzer was a GS400 golf hit-ball analyzer (Korea) and the ROM measuring instrument was a goniometer (Korea). [Results] The experimental group showed a statistically significant improvement in driving distance. After the special stretching training for golf, a statistically significant difference in hit-ball direction deviation after swings was found between the groups, with the experimental group showing a statistically significant decrease in hit-ball direction deviation. Statistically significant differences in hit-ball speed were also found between the groups, with the experimental group showing a significant increase in hit-ball speed. [Conclusion] To examine the effects of a special stretching program for golf on golf swing-related factors, 20 male amateur golf club members performed a 12-week stretching training program. After the golf stretching training, statistically significant differences were found between the groups in hit-ball driving distance, direction deviation, deflection distance, and speed. PMID:25995553
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t-test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
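A minimal sketch of the parametric/nonparametric choice discussed in the chapter, with invented sample values: a t-test on log-transformed data alongside a rank-based alternative.

    import numpy as np
    from scipy import stats

    group_a = np.array([1.2, 1.9, 2.4, 3.1, 2.2, 1.7])
    group_b = np.array([2.8, 3.5, 4.9, 3.9, 5.2, 4.1])

    # Parametric: Student's t-test, here on log-transformed (skew-reducing) data.
    t_stat, p_t = stats.ttest_ind(np.log(group_a), np.log(group_b))

    # Nonparametric alternative when normality is doubtful: Mann-Whitney U test.
    u_stat, p_u = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")

    print(f"t-test p = {p_t:.4f}, Mann-Whitney p = {p_u:.4f}")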
Statistical tools for transgene copy number estimation based on real-time PCR.
Yuan, Joshua S; Burris, Jason; Stewart, Nathan R; Mentewab, Ayalew; Stewart, C Neal
2007-11-01
As compared with traditional transgene copy number detection technologies such as Southern blot analysis, real-time PCR provides a fast, inexpensive and high-throughput alternative. However, real-time PCR based transgene copy number estimation tends to be ambiguous and subjective, stemming from the lack of proper statistical analysis and data quality control to render a reliable estimation of copy number with a prediction value. Despite recent progress in the statistical analysis of real-time PCR, few publications have integrated these advancements in real-time PCR based transgene copy number determination. Three experimental designs and four data-quality-control integrated statistical models are presented. For the first method, external calibration curves are established for the transgene based on serially diluted templates. The Ct numbers from a control transgenic event and a putative transgenic event are compared to derive the transgene copy number or zygosity estimation. Simple linear regression and two-group t-test procedures were combined to model the data from this design. For the second experimental design, standard curves were generated for both an internal reference gene and the transgene, and the copy number of the transgene was compared with that of the internal reference gene. Multiple regression models and ANOVA models can be employed to analyze the data and perform quality control for this approach. In the third experimental design, transgene copy number is compared with the reference gene without a standard curve, based directly on fluorescence data. Two different multiple regression models were proposed to analyze the data based on two different approaches to amplification efficiency integration. Our results highlight the importance of proper statistical treatment and quality control integration in real-time PCR-based transgene copy number determination. These statistical methods allow real-time PCR-based transgene copy number estimation to be more reliable and precise. Proper confidence intervals are necessary for unambiguous prediction of transgene copy number. The four different statistical methods are compared for their advantages and disadvantages. Moreover, the statistical methods can also be applied to other real-time PCR-based quantification assays, including transfection efficiency analysis and pathogen quantification.
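The first design described above (an external standard curve plus a comparison of Ct values) can be sketched roughly as below; all Ct values, dilutions, and the single-copy control are invented, and the delta-Ct step assumes near-100% amplification efficiency.

    # Fit a standard curve of Ct versus log10(template), then compare a putative
    # event with a single-copy control by delta-Ct.
    import numpy as np
    from scipy import stats

    log_template = np.log10([1e1, 1e2, 1e3, 1e4, 1e5])     # dilution series
    ct_values = np.array([33.1, 29.8, 26.4, 23.0, 19.7])    # measured Ct

    fit = stats.linregress(log_template, ct_values)
    efficiency = 10 ** (-1.0 / fit.slope) - 1.0             # amplification efficiency
    print(f"slope = {fit.slope:.2f}, efficiency = {efficiency:.1%}")

    ct_control, ct_event = 24.9, 23.9                        # control vs putative event
    copy_ratio = 2 ** (ct_control - ct_event)                # assumes ~100% efficiency
    print(f"estimated copy number relative to control = {copy_ratio:.1f}")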
Application of Turchin's method of statistical regularization
NASA Astrophysics Data System (ADS)
Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey
2018-04-01
During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.
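As a generic stand-in for the regularized deconvolution problem described here (a Tikhonov-style smoothing penalty rather than Turchin's full Bayesian formulation), a small sketch with an invented Gaussian apparatus function and regularization parameter:

    import numpy as np

    n = 100
    x = np.linspace(0, 1, n)
    signal = np.exp(-((x - 0.4) / 0.05) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.08) ** 2)

    # Apparatus function: Gaussian smearing matrix with unit row sums.
    A = np.exp(-((x[:, None] - x[None, :]) / 0.03) ** 2)
    A /= A.sum(axis=1, keepdims=True)

    rng = np.random.default_rng(2)
    measured = A @ signal + rng.normal(0, 0.01, n)   # convolved, noisy data

    # Second-difference operator penalizes rough solutions.
    L = np.diff(np.eye(n), n=2, axis=0)
    lam = 1e-3
    restored = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ measured)
    print("max deviation from true signal:", np.abs(restored - signal).max())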
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. A problem with using conventional multivariate statistical approaches for classification of data of multiple types is in general that a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution free since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of the problem involving how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
Statistical reporting inconsistencies in experimental philosophy
Colombo, Matteo; Duev, Georgi; Nuijten, Michèle B.; Sprenger, Jan
2018-01-01
Experimental philosophy (x-phi) is a young field of research in the intersection of philosophy and psychology. It aims to make progress on philosophical questions by using experimental methods traditionally associated with the psychological and behavioral sciences, such as null hypothesis significance testing (NHST). Motivated by recent discussions about a methodological crisis in the behavioral sciences, questions have been raised about the methodological standards of x-phi. Here, we focus on one aspect of this question, namely the rate of inconsistencies in statistical reporting. Previous research has examined the extent to which published articles in psychology and other behavioral sciences present statistical inconsistencies in reporting the results of NHST. In this study, we used the R package statcheck to detect statistical inconsistencies in x-phi, and compared rates of inconsistencies in psychology and philosophy. We found that rates of inconsistencies in x-phi are lower than in the psychological and behavioral sciences. From the point of view of statistical reporting consistency, x-phi seems to do no worse, and perhaps even better, than psychological science. PMID:29649220
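The kind of consistency check that statcheck automates can be sketched in a few lines: recompute the p-value from a reported test statistic and degrees of freedom, and compare it with the reported p-value. The reported numbers and the rounding tolerance below are invented.

    from scipy import stats

    # e.g. a result reported as "t(30) = 2.10, p = .030"
    reported_t, reported_df, reported_p = 2.10, 30, 0.030

    recomputed_p = 2 * stats.t.sf(abs(reported_t), df=reported_df)   # two-sided
    inconsistent = abs(recomputed_p - reported_p) > 0.01             # crude tolerance
    print(f"recomputed p = {recomputed_p:.3f}, flagged = {inconsistent}")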
A pedagogical derivation of the matrix element method in particle physics data analysis
NASA Astrophysics Data System (ADS)
Sumowidagdo, Suharyo
2018-03-01
The matrix element method provides a direct connection between the underlying theory of particle physics processes and detector-level physical observables. I present a pedagogically oriented derivation of the matrix element method, drawing on elementary concepts from probability theory, statistics, and the process of experimental measurement. The level of treatment should be suitable for a beginning research student in phenomenology or experimental high energy physics.
Highly Efficient Design-of-Experiments Methods for Combining CFD Analysis and Experimental Data
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.; Haller, Harold S.
2009-01-01
It is the purpose of this study to examine the impact of "highly efficient" Design-of-Experiments (DOE) methods for combining sets of CFD-generated analysis data with smaller sets of experimental test data in order to accurately predict performance results where experimental test data were not obtained. The study examines the impact of micro-ramp flow control on the shock wave boundary layer (SWBL) interaction, where a complete paired set of data exists from both CFD analysis and experimental measurements. By combining the complete set of CFD analysis data, composed of fifteen (15) cases, with a smaller subset of experimental test data containing four/five (4/5) cases, compound data sets (CFD/EXP) were generated which allow the prediction of the complete set of experimental results. No statistical differences were found to exist between the combined (CFD/EXP) generated data sets and the complete experimental data set composed of fifteen (15) cases. The same optimal micro-ramp configuration was obtained using the (CFD/EXP) generated data as with the complete set of experimental data, and the DOE response surfaces generated by the two data sets were also not statistically different.
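A compound CFD/experimental data set of this kind is typically summarized with a second-order response surface; the sketch below fits such a surface by least squares to invented factor settings and responses (coded units, hypothetical performance metric).

    import numpy as np

    rng = np.random.default_rng(3)
    x1 = rng.uniform(-1, 1, 15)     # e.g. first micro-ramp factor, coded units
    x2 = rng.uniform(-1, 1, 15)     # e.g. second micro-ramp factor, coded units
    y = (1.0 - 0.3 * x1 + 0.2 * x2 - 0.5 * x1**2 - 0.2 * x2**2
         + 0.1 * x1 * x2 + rng.normal(0, 0.02, 15))   # synthetic response

    # Full second-order model: intercept, linear, interaction, quadratic terms.
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("fitted response-surface coefficients:", np.round(coef, 3))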
Maric, Marija; de Haan, Else; Hogendoorn, Sanne M; Wolters, Lidewij H; Huizenga, Hilde M
2015-03-01
Single-case experimental designs are useful methods in clinical research practice to investigate individual client progress. Their proliferation might have been hampered by methodological challenges such as the difficulty applying existing statistical procedures. In this article, we describe a data-analytic method to analyze univariate (i.e., one symptom) single-case data using the common package SPSS. This method can help the clinical researcher to investigate whether an intervention works as compared with a baseline period or another intervention type, and to determine whether symptom improvement is clinically significant. First, we describe the statistical method in a conceptual way and show how it can be implemented in SPSS. Simulation studies were performed to determine the number of observation points required per intervention phase. Second, to illustrate this method and its implications, we present a case study of an adolescent with anxiety disorders treated with cognitive-behavioral therapy techniques in an outpatient psychotherapy clinic, whose symptoms were regularly assessed before each session. We provide a description of the data analyses and results of this case study. Finally, we discuss the advantages and shortcomings of the proposed method. Copyright © 2014. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Zhu, Meng-Hua; Liu, Liang-Gang; You, Zhong; Xu, Ao-Ao
2009-03-01
In this paper, a heuristic approach based on Slavic's peak searching method has been employed to estimate the width of peak regions for background removal. Synthetic and experimental data are used to test this method. Using the peak regions estimated by the proposed method across the whole spectrum, we find it simple and effective enough to be used together with the Statistics-sensitive Nonlinear Iterative Peak-Clipping method.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
NASA Astrophysics Data System (ADS)
Zhang, Y.; Li, F.; Zhang, S.; Hao, W.; Zhu, T.; Yuan, L.; Xiao, F.
2017-09-01
In this paper, a Statistical Distribution based Conditional Random Fields (STA-CRF) algorithm is exploited to improve marginal ice-water classification. Pixel-level ice concentration is presented as the basis for comparing the CRF-based methods. Furthermore, in order to explore an effective statistical distribution model to integrate into STA-CRF, five statistical distribution models are investigated. The STA-CRF methods are tested on two scenes around Prydz Bay and the Adélie Depression, which contain a variety of ice types during the melt season. Experimental results indicate that the proposed method can resolve the sea ice edge well in the Marginal Ice Zone (MIZ) and shows a robust distinction of ice and water.
Applied statistics in agricultural, biological, and environmental sciences.
USDA-ARS?s Scientific Manuscript database
Agronomic research often involves measurement and collection of multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate statistical methods encompass the simultaneous analysis of all random variables measured on each experimental or s...
Khan, Mohammad Jakir Hossain; Hussain, Mohd Azlan; Mujtaba, Iqbal Mohammed
2014-01-01
Polypropylene is one type of plastic that is widely used in our everyday life. This study focuses on the identification and justification of the optimum process parameters for polypropylene production in a novel pilot-plant-based fluidized bed reactor. This first-of-its-kind statistical modeling with experimental validation for the process parameters of polypropylene production was conducted by applying the ANOVA (analysis of variance) method to Response Surface Methodology (RSM). Three important process variables, i.e., reaction temperature, system pressure and hydrogen percentage, were considered as the important input factors for polypropylene production in the analysis performed. In order to examine the effect of process parameters and their interactions, the ANOVA method was utilized among a range of other statistical diagnostic tools such as the correlation between actual and predicted values, the residuals and predicted response, outlier t plot, and 3D response surface and contour analysis plots. The statistical analysis showed that the proposed quadratic model had a good fit with the experimental results. At optimum conditions with a temperature of 75°C, system pressure of 25 bar and hydrogen percentage of 2%, the highest polypropylene production obtained is 5.82% per pass. Hence it is concluded that the developed experimental design and proposed model can be successfully employed with over a 95% confidence level for optimum polypropylene production in a fluidized bed catalytic reactor (FBCR). PMID:28788576
ERIC Educational Resources Information Center
Schwabe, Robert A.
Interest in Total Quality Management (TQM) at institutions of higher education has been stressed in recent years as an important area of activity for institutional researchers. Two previous AIR Forum papers have presented some of the statistical and graphical methods used for TQM. This paper, the third in the series, first discusses some of the…
NASA Astrophysics Data System (ADS)
Colone, L.; Hovgaard, M. K.; Glavind, L.; Brincker, R.
2018-07-01
A method for mass change detection on wind turbine blades using natural frequencies is presented. The approach is based on two statistical tests. The first test decides if there is a significant mass change and the second test is a statistical group classification based on Linear Discriminant Analysis. The frequencies are identified by means of Operational Modal Analysis using natural excitation. Based on the assumption of Gaussianity of the frequencies, a multi-class statistical model is developed by combining finite element model sensitivities in 10 classes of change location on the blade, the smallest area being 1/5 of the span. The method is experimentally validated for a full scale wind turbine blade in a test setup and loaded by natural wind. Mass change from natural causes was imitated with sand bags and the algorithm was observed to perform well with an experimental detection rate of 1, localization rate of 0.88 and mass estimation rate of 0.72.
The Effect of Project Based Learning on the Statistical Literacy Levels of Student 8th Grade
ERIC Educational Resources Information Center
Koparan, Timur; Güven, Bülent
2014-01-01
This study examines the effect of project based learning on 8th grade students' statistical literacy levels. A performance test was developed for this aim. A quasi-experimental research model was used in this article. In this context, statistics was taught with the traditional method in the control group and it was taught using project based…
The Effect on the 8th Grade Students' Attitude towards Statistics of Project Based Learning
ERIC Educational Resources Information Center
Koparan, Timur; Güven, Bülent
2014-01-01
This study investigates the effect of the project based learning approach on 8th grade students' attitude towards statistics. With this aim, an attitude scale towards statistics was developed. Quasi-experimental research model was used in this study. Following this model in the control group the traditional method was applied to teach statistics…
Statistical analysis of experimental multifragmentation events in 64Zn+112Sn at 40 MeV/nucleon
NASA Astrophysics Data System (ADS)
Lin, W.; Zheng, H.; Ren, P.; Liu, X.; Huang, M.; Wada, R.; Chen, Z.; Wang, J.; Xiao, G. Q.; Qu, G.
2018-04-01
A statistical multifragmentation model (SMM) is applied to the experimentally observed multifragmentation events in an intermediate heavy-ion reaction. Using the temperature and symmetry energy extracted from the isobaric yield ratio (IYR) method based on the modified Fisher model (MFM), SMM is applied to the reaction 64Zn+112Sn at 40 MeV/nucleon. The experimental isotope distribution and mass distribution of the primary reconstructed fragments are compared without afterburner and they are well reproduced. The extracted temperature T and symmetry energy coefficient asym from SMM simulated events, using the IYR method, are also consistent with those from the experiment. These results strongly suggest that in the multifragmentation process there is a freezeout volume, in which the thermal and chemical equilibrium is established before or at the time of the intermediate-mass fragments emission.
Statistical process control in nursing research.
Polit, Denise F; Chaboyer, Wendy
2012-02-01
In intervention studies in which randomization to groups is not possible, researchers typically use quasi-experimental designs. Time series designs are strong quasi-experimental designs but are seldom used, perhaps because of technical and analytic hurdles. Statistical process control (SPC) is an alternative analytic approach to testing hypotheses about intervention effects using data collected over time. SPC, like traditional statistical methods, is a tool for understanding variation and involves the construction of control charts that distinguish between normal, random fluctuations (common cause variation), and statistically significant special cause variation that can result from an innovation. The purpose of this article is to provide an overview of SPC and to illustrate its use in a study of a nursing practice improvement intervention. Copyright © 2011 Wiley Periodicals, Inc.
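A minimal sketch of the control-chart construction underlying SPC, using an individuals chart with limits derived from the average moving range; the weekly values and the practice-change point are invented.

    import numpy as np

    baseline = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7])
    post = np.array([11.5, 11.2, 10.9, 10.6, 10.8])    # after the practice change

    center = baseline.mean()
    avg_moving_range = np.abs(np.diff(baseline)).mean()
    sigma_hat = avg_moving_range / 1.128                # d2 constant for subgroups of 2
    ucl, lcl = center + 3 * sigma_hat, center - 3 * sigma_hat

    special_cause = (post < lcl) | (post > ucl)
    print(f"center = {center:.2f}, limits = ({lcl:.2f}, {ucl:.2f})")
    print("special-cause signals:", special_cause)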
2011-01-01
Background Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. Methods The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. Results The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. Conclusions The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance. PMID:21599963
Statistical assessment of crosstalk enrichment between gene groups in biological networks.
McCormack, Theodore; Frings, Oliver; Alexeyenko, Andrey; Sonnhammer, Erik L L
2013-01-01
Analyzing groups of functionally coupled genes or proteins in the context of global interaction networks has become an important aspect of bioinformatic investigations. Assessing the statistical significance of crosstalk enrichment between or within groups of genes can be a valuable tool for functional annotation of experimental gene sets. Here we present CrossTalkZ, a statistical method and software to assess the significance of crosstalk enrichment between pairs of gene or protein groups in large biological networks. We demonstrate that the standard z-score is generally an appropriate and unbiased statistic. We further evaluate the ability of four different methods to reliably recover crosstalk within known biological pathways. We conclude that the methods preserving the second-order topological network properties perform best. Finally, we show how CrossTalkZ can be used to annotate experimental gene sets using known pathway annotations and that its performance at this task is superior to gene enrichment analysis (GEA). CrossTalkZ (available at http://sonnhammer.sbc.su.se/download/software/CrossTalkZ/) is implemented in C++, easy to use, fast, accepts various input file formats, and produces a number of statistics. These include z-score, p-value, false discovery rate, and a test of normality for the null distributions.
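The core z-score idea can be sketched on a toy network: count edges linking two gene groups and compare the count with a permutation null. The network, groups, and the simple label-shuffling null below are invented; CrossTalkZ itself uses more refined network randomization.

    import random

    edges = {("A", "B"), ("A", "C"), ("B", "D"), ("C", "E"), ("D", "E"),
             ("E", "F"), ("F", "G"), ("B", "G"), ("C", "G")}
    nodes = sorted({n for e in edges for n in e})
    group1, group2 = {"A", "B", "C"}, {"E", "F", "G"}

    def crosstalk(g1, g2, edge_set):
        return sum(1 for u, v in edge_set
                   if (u in g1 and v in g2) or (u in g2 and v in g1))

    observed = crosstalk(group1, group2, edges)

    random.seed(0)
    null = []
    for _ in range(2000):
        relabel = dict(zip(nodes, random.sample(nodes, len(nodes))))
        null.append(crosstalk(group1, group2,
                              {(relabel[u], relabel[v]) for u, v in edges}))

    mean = sum(null) / len(null)
    var = sum((c - mean) ** 2 for c in null) / (len(null) - 1)
    z = (observed - mean) / var ** 0.5
    print(f"observed cross-links = {observed}, z = {z:.2f}")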
NASA Technical Reports Server (NTRS)
Benediktsson, Jon A.; Swain, Philip H.; Ersoy, Okan K.
1990-01-01
Neural network learning procedures and statistical classification methods are applied and compared empirically in the classification of multisource remote sensing and geographic data. Statistical multisource classification by means of a method based on Bayesian classification theory is also investigated and modified. The modifications permit control of the influence of the data sources involved in the classification process. Reliability measures are introduced to rank the quality of the data sources. The data sources are then weighted according to these rankings in the statistical multisource classification. Four data sources are used in experiments: Landsat MSS data and three forms of topographic data (elevation, slope, and aspect). Experimental results show that the two different approaches have unique advantages and disadvantages in this classification application.
Hybrid statistics-simulations based method for atom-counting from ADF STEM images.
De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra
2017-06-01
A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Wootten, A.; Dixon, K. W.; Lanzante, J. R.; Mcpherson, R. A.
2017-12-01
Empirical statistical downscaling (ESD) approaches attempt to refine global climate model (GCM) information via statistical relationships between observations and GCM simulations. The aim of such downscaling efforts is to create added-value climate projections by adding finer spatial detail and reducing biases. The results of statistical downscaling exercises are often used in impact assessments under the assumption that past performance provides an indicator of future results. Given prior research describing the danger of this assumption with regard to temperature, this study expands the perfect model experimental design from previous case studies to test the stationarity assumption with respect to precipitation. Assuming stationarity implies that the performance of ESD methods is similar for future projections and the historical training period. Case study results from four quantile-mapping based ESD methods demonstrate violations of the stationarity assumption for both central tendency and extremes of precipitation. These violations vary geographically and seasonally. For the four ESD methods tested, the greatest challenges for downscaling of daily total precipitation projections occur in regions with limited precipitation and for extremes of precipitation along Southeast coastal regions. We conclude with a discussion of future expansion of the perfect model experimental design and the implications for improving ESD methods and providing guidance on the use of ESD techniques for impact assessments and decision-support.
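The quantile-mapping family of ESD methods evaluated here can be illustrated with a minimal empirical quantile-mapping sketch; the array names and the gamma-distributed precipitation used in the example are assumptions, not the study's data.

```python
# Sketch: empirical quantile mapping of GCM output onto the observed distribution.
import numpy as np

def quantile_map(gcm_hist, obs_hist, gcm_future):
    """Map future GCM values through the historical GCM -> observation quantile relation."""
    # Empirical non-exceedance probability of each future value in the historical GCM distribution.
    probs = np.searchsorted(np.sort(gcm_hist), gcm_future) / float(len(gcm_hist))
    probs = np.clip(probs, 0.0, 1.0)
    # Read off the same quantiles from the observed distribution.
    return np.quantile(obs_hist, probs)

# Example: correct a dry bias in simulated daily precipitation.
rng = np.random.default_rng(1)
obs = rng.gamma(shape=0.8, scale=6.0, size=3000)     # observed precipitation
gcm = rng.gamma(shape=0.8, scale=4.0, size=3000)     # biased GCM precipitation (training)
future = rng.gamma(shape=0.8, scale=4.5, size=1000)  # future GCM projection
corrected = quantile_map(gcm, obs, future)
```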
Gokyildiz Surucu, Sule; Ozturk, Melike; Avcibay Vurgec, Burcu; Alan, Sultan; Akbas, Meltem
2018-02-01
This study aims at analyzing the effect of music on the pain and anxiety felt by women in labor during their first pregnancy. When the pregnant women in the experimental group progressed into the active phase of labor, they were made to listen to music in Acemasiran mode with earplugs for 3 h (20 min of listening with 10-min breaks). After the first hour, women in the experimental group reported statistically significantly less pain. Trait anxiety scores of the women in labor were similar for the experimental and control groups. Following the intervention, mean state anxiety scores were lower in the experimental group, and this difference was statistically significant. Music therapy, a non-pharmacological method, is an effective, simple and economical way to facilitate women's coping with labor pain and to improve their wellbeing during labor. Copyright © 2017 Elsevier Ltd. All rights reserved.
Mohamadirizi, Soheila; Fahami, Fariba; Bahadoran, Parvin; Ehsanpour, Soheila
2015-01-01
Background: An active teaching method has been used widely in medical education. The aim of this study was to determine the effectiveness of the four-phase teaching method on midwifery students’ emotional intelligence (EQ) in managing childbirth. Materials and Methods: This was an experimental study performed in 2013 at Isfahan University of Medical Sciences. Thirty midwifery students were involved in this study and selected through a random sampling method. The EQ questionnaire (43Q) was completed by both groups, before and after the education. The collected data were analyzed using SPSS 14, the independent t-test, and the paired t-test. The level of statistical significance was set at <0.05. Results: The independent t-test did not show any significant difference between the EQ scores of the experimental and control groups before the intervention, whereas a statistically significant difference was observed between the scores of the two groups after the intervention (P = 0.009). The paired t-test showed a statistically significant within-group change in EQ scores after the intervention in both the four-phase and the control group (P = 0.005 and P = 0.018, respectively). Furthermore, self-efficacy increased by 66% in the experimental group and by 13% in the control group (P = 0.024). Conclusion: The four-phase teaching method can increase the EQ levels of midwifery students. Therefore, this educational model is recommended as an effective learning method. PMID:26097861
Protein Sectors: Statistical Coupling Analysis versus Conservation
Teşileanu, Tiberiu; Colwell, Lucy J.; Leibler, Stanislas
2015-01-01
Statistical coupling analysis (SCA) is a method for analyzing multiple sequence alignments that was used to identify groups of coevolving residues termed “sectors”. The method applies spectral analysis to a matrix obtained by combining correlation information with sequence conservation. It has been asserted that the protein sectors identified by SCA are functionally significant, with different sectors controlling different biochemical properties of the protein. Here we reconsider the available experimental data and note that it involves almost exclusively proteins with a single sector. We show that in this case sequence conservation is the dominating factor in SCA, and can alone be used to make statistically equivalent functional predictions. Therefore, we suggest shifting the experimental focus to proteins for which SCA identifies several sectors. Correlations in protein alignments, which have been shown to be informative in a number of independent studies, would then be less dominated by sequence conservation. PMID:25723535
Optimal Experimental Design for Model Discrimination
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it possible to determine these values, and thereby identify an optimal experimental design. After describing the method, it is demonstrated in two content areas in cognitive psychology in which models are highly competitive: retention (i.e., forgetting) and categorization. The optimal design is compared with the quality of designs used in the literature. The findings demonstrate that design optimization has the potential to increase the informativeness of the experimental method. PMID:19618983
Experimental and Computational Analysis of Modes in a Partially Constrained Plate
2004-03-01
way to quantify a structure. One technique utilizing an energy method is Statistical Energy Analysis (SEA). The SEA process involves regarding... B.R. Mace, “Statistical Energy Analysis of Two Edge-Coupled Rectangular Plates: Ensemble Averages,” Journal of Sound and Vibration, 193(4): 793-822.
Non-homogeneous updates for the iterative coordinate descent algorithm
NASA Astrophysics Data System (ADS)
Yu, Zhou; Thibault, Jean-Baptiste; Bouman, Charles A.; Sauer, Ken D.; Hsieh, Jiang
2007-02-01
Statistical reconstruction methods show great promise for improving resolution, and reducing noise and artifacts in helical X-ray CT. In fact, statistical reconstruction seems to be particularly valuable in maintaining reconstructed image quality when the dosage is low and the noise is therefore high. However, high computational cost and long reconstruction times remain as a barrier to the use of statistical reconstruction in practical applications. Among the various iterative methods that have been studied for statistical reconstruction, iterative coordinate descent (ICD) has been found to have relatively low overall computational requirements due to its fast convergence. This paper presents a novel method for further speeding the convergence of the ICD algorithm, and therefore reducing the overall reconstruction time for statistical reconstruction. The method, which we call non-homogeneous iterative coordinate descent (NH-ICD), uses spatially non-homogeneous updates to speed convergence by focusing computation where it is most needed. Experimental results with real data indicate that the method speeds reconstruction by roughly a factor of two for typical 3D multi-slice geometries.
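A rough sketch of the idea behind non-homogeneous coordinate updates, applied to a generic regularized least-squares problem rather than the paper's full CT reconstruction objective; the function names, the selection rule and the quadratic penalty are simplifying assumptions.

```python
# Sketch: coordinate descent where updates concentrate on the coordinates
# whose last update changed most, instead of sweeping all coordinates uniformly.
# Objective: min_x ||y - A x||^2 + lam ||x||^2.
import numpy as np

def nh_icd(A, y, lam=0.1, sweeps=20, frac=0.25):
    m, n = A.shape
    x = np.zeros(n)
    r = y - A @ x                        # running residual
    col_sq = (A * A).sum(axis=0) + lam   # per-coordinate curvature
    last_change = np.full(n, np.inf)     # drives non-homogeneous selection
    for _ in range(sweeps):
        # Visit the fraction of coordinates with the largest recent change.
        k = max(1, int(frac * n))
        idx = np.argsort(-last_change)[:k]
        for j in idx:
            # Exact 1D minimizer for coordinate j with the others fixed.
            step = (A[:, j] @ r - lam * x[j]) / col_sq[j]
            x[j] += step
            r -= step * A[:, j]
            last_change[j] = abs(step)
    return x
```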
Comparisons of non-Gaussian statistical models in DNA methylation analysis.
Ma, Zhanyu; Teschendorff, Andrew E; Yu, Hong; Taghia, Jalil; Guo, Jun
2014-06-16
As a key regulatory mechanism of gene expression, DNA methylation patterns are widely altered in many complex genetic diseases, including cancer. DNA methylation is naturally quantified by bounded support data; therefore, it is non-Gaussian distributed. In order to capture such properties, we introduce some non-Gaussian statistical models to perform dimension reduction on DNA methylation data. Afterwards, non-Gaussian statistical model-based unsupervised clustering strategies are applied to cluster the data. Comparisons and analysis of different dimension reduction strategies and unsupervised clustering methods are presented. Experimental results show that the non-Gaussian statistical model-based methods are superior to the conventional Gaussian distribution-based method. They are meaningful tools for DNA methylation analysis. Moreover, among several non-Gaussian methods, the one that captures the bounded nature of DNA methylation data reveals the best clustering performance.
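One way to honor the bounded support of methylation beta-values is sketched below under the assumption of a two-component beta mixture fitted by a simple EM loop; this illustrates the general non-Gaussian idea, not the specific models compared in the paper.

```python
# Sketch: cluster methylation beta-values in (0, 1) with a two-component beta mixture.
import numpy as np
from scipy.stats import beta

def beta_mixture_em(x, n_iter=200, eps=1e-6):
    x = np.clip(x, eps, 1 - eps)
    # Crude initialization: one hypo- and one hyper-methylated component.
    params = [(2.0, 8.0), (8.0, 2.0)]
    weights = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each value.
        dens = np.stack([w * beta.pdf(x, a, b) for w, (a, b) in zip(weights, params)])
        resp = dens / dens.sum(axis=0)
        # M-step: method-of-moments beta fit weighted by responsibilities.
        new_params = []
        for k in range(2):
            r = resp[k]
            m = np.average(x, weights=r)
            v = np.average((x - m) ** 2, weights=r)
            common = m * (1 - m) / v - 1
            new_params.append((m * common, (1 - m) * common))
        params = new_params
        weights = resp.mean(axis=1)
    return params, weights, resp.argmax(axis=0)  # hard cluster labels
```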
Polat, Hatice; Ergüney, Seher
The purpose of this study was to determine the effect of reflexology on reducing dyspnea and fatigue in patients with chronic obstructive pulmonary disease (COPD). The study was conducted as a pretest-posttest experimental design. The population of the study consisted of 60 patients (30 in the experimental group and 30 in the control group). The Patient Description Form, Baseline Dyspnea Index (BDI) and Visual Analogue Scale-Fatigue (VAS-F) were used to collect the data. The difference between pretest-posttest dyspnea and fatigue mean scores of patients in the experimental group was statistically significant (p < .01). The difference between pretest-posttest dyspnea and fatigue mean scores of patients in the control group was not statistically significant (p > .05). It was determined that reflexology reduced dyspnea and fatigue in patients with COPD. Complementary methods such as reflexology should be used alongside pharmacological methods to reduce dyspnea and fatigue in COPD patients.
Experimental statistics for biological sciences.
Bang, Heejung; Davidian, Marie
2010-01-01
In this chapter, we cover basic and fundamental principles and methods in statistics - from "What are Data and Statistics?" to "ANOVA and linear regression," which are the basis of any statistical thinking and undertaking. Readers can easily find the selected topics in most introductory statistics textbooks, but we have tried to assemble and structure them in a succinct and reader-friendly manner in a stand-alone chapter. This text has long been used in real classroom settings for both undergraduate and graduate students who do or do not major in statistical sciences. We hope that from this chapter, readers will understand the key statistical concepts and terminology, how to design a study (experimental or observational), how to analyze the data (e.g., describe the data and/or estimate the parameter(s) and make inferences), and how to interpret the results. This text is most useful as supplemental material while readers take their own statistics courses, or as a reference to accompany the manual of any statistical software package for self-teaching.
NASA Astrophysics Data System (ADS)
Flores-Marquez, Leticia Elsa; Ramirez Rojaz, Alejandro; Telesca, Luciano
2015-04-01
Two statistical approaches are applied to two different types of data sets: the seismicity generated by the subduction processes that occurred along the south Pacific coast of Mexico between 2005 and 2012, and the synthetic seismic data generated by a stick-slip experimental model. The statistical methods used in the present study are the visibility graph, to investigate the time dynamics of the series, and the scaled probability density function in the natural time domain, to investigate the critical order of the system. The purpose of this comparison is to show the similarities between the dynamical behaviors of the two types of data sets from the point of view of critical systems. The observed behaviors allow us to conclude that the experimental setup globally reproduces the behavior revealed by the statistical analyses of the seismicity of the subduction zone. The present study was supported by the Bilateral Project Italy-Mexico "Experimental stick-slip models of tectonic faults: innovative statistical approaches applied to synthetic seismic sequences", jointly funded by MAECI (Italy) and AMEXCID (Mexico) in the framework of the Bilateral Agreement for Scientific and Technological Cooperation PE 2014-2016.
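A minimal sketch of the natural visibility graph construction commonly used in such time-dynamics analyses (an assumption about the standard algorithm, not the authors' code):

```python
# Sketch: natural visibility graph of an event series.
import numpy as np

def natural_visibility_graph(y):
    """Return the edge list of the natural visibility graph of series y."""
    n = len(y)
    edges = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            # Nodes i and j are connected if every intermediate sample lies
            # strictly below the straight line joining (i, y[i]) and (j, y[j]).
            visible = all(
                y[k] < y[j] + (y[i] - y[j]) * (j - k) / (j - i)
                for k in range(i + 1, j)
            )
            if visible:
                edges.append((i, j))
    return edges

# The degree distribution of this graph is what visibility-graph analyses
# typically examine (e.g., for scale-free behaviour in seismic sequences).
series = np.random.default_rng(0).exponential(size=200)
edges = natural_visibility_graph(series)
```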
NASA Astrophysics Data System (ADS)
Santiago-Lona, Cynthia V.; Hernández-Montes, María del Socorro; Mendoza-Santoyo, Fernando; Esquivel-Tejeda, Jesús
2018-02-01
The study and quantification of the tympanic membrane (TM) displacements add important information to advance the knowledge about the hearing process. A comparative statistical analysis between two commonly used demodulation methods employed to recover the optical phase in digital holographic interferometry, namely the fast Fourier transform and phase-shifting interferometry, is presented as applied to study thin tissues such as the TM. The resulting experimental TM surface displacement data are used to contrast both methods through the analysis of variance and F tests. Data are gathered when the TMs are excited with continuous sound stimuli at levels 86, 89 and 93 dB SPL for the frequencies of 800, 1300 and 2500 Hz under the same experimental conditions. The statistical analysis shows repeatability in z-direction displacements with a standard deviation of 0.086, 0.098 and 0.080 μm using the Fourier method, and 0.080, 0.104 and 0.055 μm with the phase-shifting method at a 95% confidence level for all frequencies. The precision and accuracy are evaluated by means of the coefficient of variation; the results with the Fourier method are 0.06143, 0.06125, 0.06154 and 0.06154, 0.06118, 0.06111 with phase-shifting. The relative error between both methods is 7.143, 6.250 and 30.769%. On comparing the measured displacements, the results indicate that there is no statistically significant difference between both methods for frequencies at 800 and 1300 Hz; however, errors and other statistics increase at 2500 Hz.
Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby
2016-06-01
This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through 'doing' quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a 'magic bullet' and that a wider programme of content and assessment diversification across the curriculum is preferential.
NASA Astrophysics Data System (ADS)
Gunton, James D.; Shiryayev, Andrey; Pagan, Daniel L.
2007-09-01
Preface; 1. Introduction; 2. Globular protein structure; 3. Experimental methods; 4. Thermodynamics and statistical mechanics; 5. Protein-protein interactions; 6. Theoretical studies of equilibrium; 7. Nucleation theory; 8. Experimental studies of nucleation; 9. Lysozyme; 10. Some other globular proteins; 11. Membrane proteins; 12. Crystallins and cataracts; 13. Sickle hemoglobin and sickle cell anemia; 14. Alzheimer's disease; Index.
NASA Astrophysics Data System (ADS)
Gunton, James D.; Shiryayev, Andrey; Pagan, Daniel L.
2014-07-01
Preface; 1. Introduction; 2. Globular protein structure; 3. Experimental methods; 4. Thermodynamics and statistical mechanics; 5. Protein-protein interactions; 6. Theoretical studies of equilibrium; 7. Nucleation theory; 8. Experimental studies of nucleation; 9. Lysozyme; 10. Some other globular proteins; 11. Membrane proteins; 12. Crystallins and cataracts; 13. Sickle hemoglobin and sickle cell anemia; 14. Alzheimer's disease; Index.
Optimal Experimental Design for Model Discrimination
ERIC Educational Resources Information Center
Myung, Jay I.; Pitt, Mark A.
2009-01-01
Models of a psychological process can be difficult to discriminate experimentally because it is not easy to determine the values of the critical design variables (e.g., presentation schedule, stimulus structure) that will be most informative in differentiating them. Recent developments in sampling-based search methods in statistics make it…
Quasi-Experimental Analysis: A Mixture of Methods and Judgment.
ERIC Educational Resources Information Center
Cordray, David S.
1986-01-01
The role of human judgment in the development and synthesis of evidence has not been adequately developed or acknowledged within quasi-experimental analysis. Corrective solutions need to confront the fact that causal analysis within complex environments will require a more active assessment that entails reasoning and statistical modeling.…
A Web-Based Learning Tool Improves Student Performance in Statistics: A Randomized Masked Trial
ERIC Educational Resources Information Center
Gonzalez, Jose A.; Jover, Lluis; Cobo, Erik; Munoz, Pilar
2010-01-01
Background: e-status is a web-based tool able to generate different statistical exercises and to provide immediate feedback to students' answers. Although the use of Information and Communication Technologies (ICTs) is becoming widespread in undergraduate education, there are few experimental studies evaluating its effects on learning. Method: All…
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Statistical tools, previously developed for nonlinear least-squares estimation of multivariate sensor calibration parameters and the associated calibration uncertainty analysis, have been applied to single- and multiple-axis inertial model attitude sensors used in wind tunnel testing to measure angle of attack and roll angle. The analysis provides confidence and prediction intervals of calibrated sensor measurement uncertainty as functions of applied input pitch and roll angles. A comparative performance study of various experimental designs for inertial sensor calibration is presented along with corroborating experimental data. The importance of replicated calibrations over extended time periods has been emphasized; replication provides independent estimates of calibration precision and bias uncertainties, statistical tests for calibration or modeling bias uncertainty, and statistical tests for sensor parameter drift over time. A set of recommendations for a new standardized model attitude sensor calibration method and usage procedures is included. The statistical information provided by these procedures is necessary for the uncertainty analysis of aerospace test results now required by users of industrial wind tunnel test facilities.
Blind image quality assessment based on aesthetic and statistical quality-aware features
NASA Astrophysics Data System (ADS)
Jenadeleh, Mohsen; Masaeli, Mohammad Masood; Moghaddam, Mohsen Ebrahimi
2017-07-01
The main goal of image quality assessment (IQA) methods is the emulation of human perceptual image quality judgments. Therefore, the correlation between objective scores of these methods and human perceptual scores is considered their performance metric. Human judgment of image quality implicitly includes many factors, such as aesthetics, semantics, context, and various types of visual distortions. The main idea of this paper is to use a host of features that are commonly employed in image aesthetics assessment in order to improve the accuracy of blind image quality assessment (BIQA) methods. We propose an approach that enriches the features of BIQA methods by integrating a host of aesthetics image features with natural image statistics features derived from multiple domains. The proposed features have been used to augment five different state-of-the-art BIQA methods, which use natural scene statistics features. Experiments were performed on seven benchmark image quality databases. The experimental results showed significant improvement in the accuracy of the methods.
Experimental and environmental factors affect spurious detection of ecological thresholds
Daily, Jonathan P.; Hitt, Nathaniel P.; Smith, David; Snyder, Craig D.
2012-01-01
Threshold detection methods are increasingly popular for assessing nonlinear responses to environmental change, but their statistical performance remains poorly understood. We simulated linear change in stream benthic macroinvertebrate communities and evaluated the performance of commonly used threshold detection methods based on model fitting (piecewise quantile regression [PQR]), data partitioning (nonparametric change point analysis [NCPA]), and a hybrid approach (significant zero crossings [SiZer]). We demonstrated that false detection of ecological thresholds (type I errors) and inferences on threshold locations are influenced by sample size, rate of linear change, and frequency of observations across the environmental gradient (i.e., sample-environment distribution, SED). However, the relative importance of these factors varied among statistical methods and between inference types. False detection rates were influenced primarily by user-selected parameters for PQR (τ) and SiZer (bandwidth) and secondarily by sample size (for PQR) and SED (for SiZer). In contrast, the location of reported thresholds was influenced primarily by SED. Bootstrapped confidence intervals for NCPA threshold locations revealed strong correspondence to SED. We conclude that the choice of statistical methods for threshold detection should be matched to experimental and environmental constraints to minimize false detection rates and avoid spurious inferences regarding threshold location.
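The type-I-error question can be illustrated with a small simulation in the spirit of the study: generate a purely linear response, search for a breakpoint with a naive broken-stick fit, and count how often a threshold is spuriously declared. The fit and the F-style criterion below are simplifying assumptions, not the PQR, NCPA or SiZer procedures evaluated in the paper.

```python
# Sketch: spurious threshold detection on truly linear data.
import numpy as np

def broken_stick_sse(x, y, bp):
    """SSE of a two-segment linear fit with a breakpoint at bp."""
    basis = np.column_stack([np.ones_like(x), x, np.maximum(x - bp, 0.0)])
    coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
    resid = y - basis @ coef
    return float(resid @ resid)

rng = np.random.default_rng(0)
false_hits = 0
for _ in range(200):                           # simulated data sets
    x = rng.uniform(0, 1, size=50)             # environmental gradient
    y = 2.0 * x + rng.normal(0, 0.5, 50)       # truly linear response
    sse_lin = broken_stick_sse(x, y, bp=x.min())    # reduces to a straight line
    sse_thr = min(broken_stick_sse(x, y, bp)
                  for bp in np.quantile(x, np.linspace(0.1, 0.9, 17)))
    # Naive F-style criterion for one extra parameter; the breakpoint search
    # itself inflates the false-detection rate, which is the paper's point.
    f = (sse_lin - sse_thr) / (sse_thr / (len(x) - 3))
    false_hits += f > 4.05                     # approx F(1, 47) critical value at alpha = 0.05
print("spurious threshold rate:", false_hits / 200)
```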
Effect of simulation on the ability of first year nursing students to learn vital signs.
Eyikara, Evrim; Baykara, Zehra Göçmen
2018-01-01
Nursing requires the acquisition of cognitive, affective and psychomotor knowledge and skills, which is made possible via interactive teaching methods such as simulation. This study was conducted to identify the impact of simulation on first-year nursing students' ability to learn vital signs. A convenience sample of 90 first-year nursing students enrolled at a university in Ankara in 2014-2015 was used; the students enrolled in the lessons on the "Fundamentals of Nursing" were identified using a simple random sampling method. The students were taught vital signs theory via traditional methods. They were then assigned to experimental group 1, experimental group 2 and a control group, of 30 students each. Students in experimental group 1 attended sessions on simulation and those in experimental group 2 attended laboratory work sessions followed by simulation. The control group were taught via traditional methods and only attended the laboratory work sessions. The students' cognitive knowledge acquisition was evaluated using a knowledge test before and after the lessons. The ability to measure vital signs in adults (healthy ones and patients) was evaluated using a skill control list. There was no statistically significant difference between the groups in terms of average pre-test knowledge scores (p>0.050). Groups exposed to simulation obtained statistically significantly higher post-test knowledge scores than the control group (p<0.050). The groups exposed to simulation were also more successful than the control group at measuring vital signs in healthy adults and patients (p<0.050), a statistically significant difference. Simulation had a positive effect on the ability of nursing students to measure vital signs. Thus, simulation should be included in the mainstream curriculum in order to effectively impart nursing knowledge and skills. Copyright © 2017 Elsevier Ltd. All rights reserved.
Variational Method in the Statistical Theory of Turbulence
1991-01-01
...where δ and ε are the Kronecker and Levi-Civita tensors, respectively; the functions Z1, Z2 and Z3 are invariant under rotations. Calculations of the mean flow profile and certain two-point correlation functions of a cylindrically symmetric free jet were carried out using the Rayleigh-Ritz method, with reasonable agreement with experimental data. Subject terms: Two Phase Flow, Turbulence, Statistical Turbulence.
Sujithra, S
2014-01-01
An experimental study was conducted among 60 menopausal women who met the inclusion criteria, 30 each in the experimental and control groups. The menopausal women were identified in both groups and the level of depression was assessed using the Cornell Dysthymia Rating Scale. A simple random sampling technique by the lottery method was used for selecting the sample. Autogenic relaxation was practiced by the menopausal women for four weeks. The findings revealed that in the experimental group, after the autogenic relaxation intervention, 23 women (76.7%) had mild depression. The intervention was statistically significantly effective in the experimental group (p < 0.05). There was also a statistically significant association between the post-intervention effectiveness of autogenic relaxation on depression among menopausal women and the type of family (p < 0.05).
Return to Our Roots: Raising Radishes to Teach Experimental Design. Methods and Techniques.
ERIC Educational Resources Information Center
Stallings, William M.
1993-01-01
Reviews research in teaching applied statistics. Concludes that students should analyze data from studies they have designed and conducted. Describes an activity in which students study germination and growth of radish seeds. Includes a table providing student instructions for both the experimental procedure and data analysis. (CFR)
Williams, Malcolm; Sloan, Luke; Cheung, Sin Yi; Sutton, Carole; Stevens, Sebastian; Runham, Libby
2015-01-01
This paper reports on a quasi-experiment in which quantitative methods (QM) are embedded within a substantive sociology module. Through measuring student attitudes before and after the intervention alongside control group comparisons, we illustrate the impact that embedding has on the student experience. Our findings are complex and even contradictory. Whilst the experimental group were less likely to be distrustful of statistics and appreciate how QM inform social research, they were also less confident about their statistical abilities, suggesting that through ‘doing’ quantitative sociology the experimental group are exposed to the intricacies of method and their optimism about their own abilities is challenged. We conclude that embedding QM in a single substantive module is not a ‘magic bullet’ and that a wider programme of content and assessment diversification across the curriculum is preferential. PMID:27330225
Inference of missing data and chemical model parameters using experimental statistics
NASA Astrophysics Data System (ADS)
Casey, Tiernan; Najm, Habib
2017-11-01
A method for determining the joint parameter density of Arrhenius rate expressions through the inference of missing experimental data is presented. This approach proposes noisy hypothetical data sets from target experiments and accepts those which agree with the reported statistics, in the form of nominal parameter values and their associated uncertainties. The data exploration procedure is formalized using Bayesian inference, employing maximum entropy and approximate Bayesian computation methods to arrive at a joint density on data and parameters. The method is demonstrated in the context of reactions in the H2-O2 system for predictive modeling of combustion systems of interest. Work supported by the US DOE BES CSGB. Sandia National Labs is a multimission lab managed and operated by Nat. Technology and Eng'g Solutions of Sandia, LLC., a wholly owned subsidiary of Honeywell Intl, for the US DOE NCSA under contract DE-NA-0003525.
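A hedged sketch of the data-exploration idea, reduced to plain rejection-based approximate Bayesian computation: propose hypothetical noisy data sets, keep those whose summary reproduces the reported nominal value and uncertainty, and accumulate the corresponding Arrhenius parameters. All priors, temperatures and rate values are illustrative assumptions.

```python
# Sketch: ABC rejection for Arrhenius parameters consistent with reported statistics.
import numpy as np

R = 8.314  # J/(mol K)

def arrhenius(T, A, Ea):
    return A * np.exp(-Ea / (R * T))

rng = np.random.default_rng(0)
T = np.array([900.0, 1000.0, 1100.0, 1200.0])    # hypothetical experiment temperatures
k_nominal, k_sigma = 2.0e3, 0.3e3                # reported rate and uncertainty at 1000 K

accepted = []
for _ in range(20000):
    logA = rng.uniform(5, 12)                    # prior on log10 pre-exponential factor
    Ea = rng.uniform(5e4, 2.5e5)                 # prior on activation energy (J/mol)
    k_sim = arrhenius(T, 10 ** logA, Ea)
    noisy = k_sim * (1 + rng.normal(0, 0.05, size=T.size))   # proposed noisy data set
    # Accept if the summary at 1000 K reproduces the reported statistics.
    if abs(noisy[1] - k_nominal) < k_sigma:
        accepted.append((logA, Ea))

posterior = np.array(accepted)                   # sample of the joint density over (logA, Ea)
```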
Assessment of Current Jet Noise Prediction Capabilities
NASA Technical Reports Server (NTRS)
Hunter, Craid A.; Bridges, James E.; Khavaran, Abbas
2008-01-01
An assessment was made of the capability of jet noise prediction codes over a broad range of jet flows, with the objective of quantifying current capabilities and identifying areas requiring future research investment. Three separate codes in NASA's possession, representative of two classes of jet noise prediction codes, were evaluated: one empirical and two statistical. The empirical code is the Stone Jet Noise Module (ST2JET) contained within the ANOPP aircraft noise prediction code. It is well documented, and represents the state of the art in semi-empirical acoustic prediction codes where virtual sources are attributed to various aspects of noise generation in each jet. These sources, in combination, predict the spectral directivity of a jet plume. A total of 258 jet noise cases were examined on the ST2JET code, each run requiring only fractions of a second to complete. Two statistical jet noise prediction codes were also evaluated, JeNo v1 and Jet3D. Fewer cases were run for the statistical prediction methods because they require substantially more resources, typically a Reynolds-Averaged Navier-Stokes solution of the jet, volume integration of the source statistical models over the entire plume, and a numerical solution of the governing propagation equation within the jet. In the evaluation process, substantial justification of the experimental datasets used in the evaluations was made. In the end, none of the current codes can predict jet noise within experimental uncertainty. The empirical code came within 2 dB on a 1/3-octave spectral basis for a wide range of flows. The statistical code Jet3D was within experimental uncertainty at broadside angles for hot supersonic jets, but errors in peak frequency and amplitude put it out of experimental uncertainty at cooler, lower speed conditions. Jet3D did not predict changes in directivity at the downstream angles. The statistical code JeNo v1 was within experimental uncertainty predicting noise from cold subsonic jets at all angles, but did not predict changes with heating of the jet and did not account for directivity changes at supersonic conditions. Shortcomings addressed here give direction for future work relevant to the statistical-based prediction methods. A full report will be released as a chapter in a NASA publication assessing the state of the art in aircraft noise prediction.
ERIC Educational Resources Information Center
Schweizer, Karl; Steinwascher, Merle; Moosbrugger, Helfried; Reiss, Siegbert
2011-01-01
The development of research methodology competency is a major aim of the psychology curriculum at universities. Usually, three courses concentrating on basic statistics, advanced statistics and experimental methods, respectively, serve the achievement of this aim. However, this traditional curriculum-based course structure gives rise to the…
The Adequacy of Different Robust Statistical Tests in Comparing Two Independent Groups
ERIC Educational Resources Information Center
Pero-Cebollero, Maribel; Guardia-Olmos, Joan
2013-01-01
In the current study, we evaluated various robust statistical methods for comparing two independent groups. Two scenarios for simulation were generated: one of equality and another of population mean differences. In each of the scenarios, 33 experimental conditions were used as a function of sample size, standard deviation and asymmetry. For each…
Experimental Determination of Dynamical Lee-Yang Zeros
NASA Astrophysics Data System (ADS)
Brandner, Kay; Maisi, Ville F.; Pekola, Jukka P.; Garrahan, Juan P.; Flindt, Christian
2017-05-01
Statistical physics provides the concepts and methods to explain the phase behavior of interacting many-body systems. Investigations of Lee-Yang zeros—complex singularities of the free energy in systems of finite size—have led to a unified understanding of equilibrium phase transitions. The ideas of Lee and Yang, however, are not restricted to equilibrium phenomena. Recently, Lee-Yang zeros have been used to characterize nonequilibrium processes such as dynamical phase transitions in quantum systems after a quench or dynamic order-disorder transitions in glasses. Here, we experimentally realize a scheme for determining Lee-Yang zeros in such nonequilibrium settings. We extract the dynamical Lee-Yang zeros of a stochastic process involving Andreev tunneling between a normal-state island and two superconducting leads from measurements of the dynamical activity along a trajectory. From the short-time behavior of the Lee-Yang zeros, we predict the large-deviation statistics of the activity which is typically difficult to measure. Our method paves the way for further experiments on the statistical mechanics of many-body systems out of equilibrium.
Experimental Keratitis Due to Pseudomonas aeruginosa: Model for Evaluation of Antimicrobial Drugs
Davis, Starkey D.; Chandler, John W.
1975-01-01
An improved method for experimental keratitis due to Pseudomonas aeruginosa is described. Essential features of the method are use of inbred guinea pigs, intracorneal injection of bacteria, subconjunctival injection of antibiotics, “blind” evaluation of results, and statistical analysis of data. Untreated ocular infections were most severe 5 to 7 days after infection. Sterilized bacterial suspensions caused no abnormalities on day 5. Tobramycin and polymyxin B were more active than gentamicin against two strains of Pseudomonas. This model is suitable for many types of quantitative studies on experimental keratitis. PMID:810084
NASA Astrophysics Data System (ADS)
Pardimin, H.; Arcana, N.
2018-01-01
Many types of research in the field of mathematics education apply the quasi-experimental method and use the t-test for statistical analysis. The quasi-experiment has a weakness in that it is difficult to fulfil "the law of a single independent variable". The t-test also has a weakness in that the generalization of the conclusions obtained is less powerful. This research aimed to find ways to reduce the weaknesses of the quasi-experimental method and to improve the generalizability of the research results. The method applied in the research was a non-interactive qualitative method of the concept-analysis type. The concepts analysed were those of statistics, educational research methods, and research reports. The result was a way to overcome the weaknesses of quasi-experiments and the t-test: applying a combination of Factorial Design and Balanced Design, which the authors refer to as Factorial-Balanced Design. The advantages of this design are: (1) it almost fulfils "the law of a single independent variable", so there is no need to test the similarity of academic ability; (2) the sample sizes of the experimental group and the control group become larger and equal, so the design is robust to violations of the assumptions of the ANOVA test.
Plant Taxonomy as a Field Study
ERIC Educational Resources Information Center
Dalby, D. H.
1970-01-01
Suggests methods of teaching plant identification and taxonomic theory using keys, statistical analyses, and biometrics. Population variation, genotype-environment interaction and experimental taxonomy are used in laboratory and field. (AL)
ERIC Educational Resources Information Center
Ercolani, Gianfranco
2005-01-01
The finite-difference boundary-value method is a numerical method suited for the solution of the one-dimensional Schrodinger equation encountered in problems of hindered rotation. Further, the application of the method, in combination with experimental results, to the evaluation of the rotational energy barrier in ethane is presented.
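A minimal finite-difference sketch of the hindered-rotor eigenvalue problem the article addresses, using a periodic grid and a threefold cosine barrier; units and parameter values are illustrative assumptions.

```python
# Sketch: lowest eigenvalues of -B psi'' + V(theta) psi = E psi with
# V(theta) = (V3/2)(1 - cos(3 theta)) and periodic boundary conditions,
# where B = hbar^2 / (2 I) in the same (arbitrary) energy units as V3.
import numpy as np

def hindered_rotor_levels(V3=1.0, B=0.05, n=400, n_levels=5):
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    h = theta[1] - theta[0]
    V = 0.5 * V3 * (1.0 - np.cos(3.0 * theta))
    # Periodic second-difference operator approximating psi''.
    lap = (np.diag(np.full(n, -2.0))
           + np.diag(np.ones(n - 1), 1)
           + np.diag(np.ones(n - 1), -1))
    lap[0, -1] = lap[-1, 0] = 1.0
    H = -B * lap / h**2 + np.diag(V)
    return np.linalg.eigvalsh(H)[:n_levels]

print(hindered_rotor_levels())
```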
SEGMENTING CT PROSTATE IMAGES USING POPULATION AND PATIENT-SPECIFIC STATISTICS FOR RADIOTHERAPY.
Feng, Qianjin; Foskey, Mark; Tang, Songyuan; Chen, Wufan; Shen, Dinggang
2009-08-07
This paper presents a new deformable model using both population and patient-specific statistics to segment the prostate from CT images. There are two novelties in the proposed method. First, a modified scale invariant feature transform (SIFT) local descriptor, which is more distinctive than general intensity and gradient features, is used to characterize the image features. Second, an online training approach is used to build the shape statistics for accurately capturing intra-patient variation, which is more important than inter-patient variation for prostate segmentation in clinical radiotherapy. Experimental results show that the proposed method is robust and accurate, suitable for clinical application.
Small, J R
1993-01-01
This paper is a study into the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic errors are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and, under all conditions studied, that the fitting method, even under conditions where the assumptions underlying the fitted function do not hold, outperformed the graph method. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
The Shock and Vibration Digest. Volume 15, Number 7
1983-07-01
systems noise -- for example, from a specific metal, chain-driven ... an important analytical tool, the statistical energy analysis method, has been the subject ... Cited works include "Experimental Determination of Vibration Parameters Required in the Statistical Energy Analysis Method"; Dubowsky, S. and Morris, T.L., "An..."; "Coupling Loss Factors for Statistical Energy Analysis of Sound Transmission"; and Upton, R., "Sound Intensity - A Powerful New Measurement Tool," S/V, Sound and Vibration.
Diestelkamp, Wiebke S; Krane, Carissa M; Pinnell, Margaret F
2011-05-20
Energy-based surgical scalpels are designed to efficiently transect and seal blood vessels using thermal energy to promote protein denaturation and coagulation. Assessment and design improvement of ultrasonic scalpel performance relies on both in vivo and ex vivo testing. The objective of this work was to design and implement a robust, experimental test matrix with randomization restrictions and predictive statistical power, which allowed for identification of those experimental variables that may affect the quality of the seal obtained ex vivo. The design of the experiment included three factors: temperature (two levels); the type of solution used to perfuse the artery during transection (three types); and artery type (two types) resulting in a total of twelve possible treatment combinations. Burst pressures of porcine carotid and renal arteries sealed ex vivo were assigned as the response variable. The experimental test matrix was designed and carried out as a split-plot experiment in order to assess the contributions of several variables and their interactions while accounting for randomization restrictions present in the experimental setup. The statistical software package SAS was utilized and PROC MIXED was used to account for the randomization restrictions in the split-plot design. The combination of temperature, solution, and vessel type had a statistically significant impact on seal quality. The design and implementation of a split-plot experimental test-matrix provided a mechanism for addressing the existing technical randomization restrictions of ex vivo ultrasonic scalpel performance testing, while preserving the ability to examine the potential effects of independent factors or variables. This method for generating the experimental design and the statistical analyses of the resulting data are adaptable to a wide variety of experimental problems involving large-scale tissue-based studies of medical or experimental device efficacy and performance.
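The split-plot analysis idea (a random effect for the whole-plot unit to respect the randomization restriction) can be sketched outside SAS as well; the mixed model below uses statsmodels on synthetic data, and every column name and effect size is an illustrative assumption rather than the authors' PROC MIXED setup.

```python
# Sketch: split-plot style mixed model with a random intercept per whole plot.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for day in range(6):                                   # whole plots (e.g., test sessions)
    temp = ["low", "high"][day % 2]                    # whole-plot factor
    day_effect = rng.normal(0, 5)                      # random whole-plot deviation
    for solution in ["saline", "albumin", "blood"]:    # sub-plot factor
        for vessel in ["carotid", "renal"]:            # sub-plot factor
            burst = (300 + (40 if temp == "high" else 0)
                     + (25 if vessel == "carotid" else 0)
                     + day_effect + rng.normal(0, 20))
            rows.append(dict(day=day, temperature=temp, solution=solution,
                             vessel=vessel, burst=burst))
data = pd.DataFrame(rows)

# The random intercept for each whole plot accounts for the randomization restriction.
model = smf.mixedlm("burst ~ temperature + solution + vessel", data, groups=data["day"])
print(model.fit().summary())
```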
Learning physics: A comparative analysis between instructional design methods
NASA Astrophysics Data System (ADS)
Mathew, Easow
The purpose of this research was to determine if there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22) who participated in a PBL based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20) who participated in the traditional lecture teaching methodology. Both the courses were taught by experienced professors who have qualifications at the doctoral level. The results indicated statistically significant differences (p < .01) in academic performance between students who participated in traditional (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) versus collaborative (i.e., higher physics posttest scores, and higher differences between pre- and posttest scores) instructional design approaches to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age) there were statistically significant (p = .04) differences between female average academic improvement which was much higher than male average academic improvement (˜63%) in the control group which may indicate that traditional teaching methods are more effective in females, whereas there was no significant difference noted in the experimental group between male and female participants. There was a statistically significant and negative relationship (r = -.61, p = .01) between age and physics pretest scores in the control group. No statistical analyses yielded significantly different average academic performance values in either group as delineated by ethnicity.
1990-02-01
Amalgamated for all tank runs: (1) significant wave height and modal period of the achieved wave condition; (2) mean and RMS motions ... experimental conditions. It is impossible to set standard run lengths for all experimental conditions, and so a method should be developed to analyse the
Full statistical mode reconstruction of a light field via a photon-number-resolved measurement
NASA Astrophysics Data System (ADS)
Burenkov, I. A.; Sharma, A. K.; Gerrits, T.; Harder, G.; Bartley, T. J.; Silberhorn, C.; Goldschmidt, E. A.; Polyakov, S. V.
2017-05-01
We present a method to reconstruct the complete statistical mode structure and optical losses of multimode conjugated optical fields using an experimentally measured joint photon-number probability distribution. We demonstrate that this method evaluates classical and nonclassical properties using a single measurement technique and is well suited for quantum mesoscopic state characterization. We obtain a nearly perfect reconstruction of a field comprised of up to ten modes based on a minimal set of assumptions. To show the utility of this method, we use it to reconstruct the mode structure of an unknown bright parametric down-conversion source.
Khodadadi, Esmail; Ebrahimi, Hossein; Moghaddasian, Sima; Babapour, Jalil
2013-01-01
Introduction: Having an effective relationship with the patient in the process of treatment is essential. Nurses must have communication skills in order to establish effective relationships with patients. This study evaluated the impact of communication skills training on quality of care, self-efficacy, job satisfaction and communication skills of nurses. Methods: This is an experimental study with a control group, conducted in 2012. The study sample consisted of 73 nurses who work in hospitals of Tabriz; they were selected by a proportional randomization method. The intervention was conducted only on the experimental group. In order to measure the quality of care, 160 patients who had received care from these nurses participated in the study. The data were analyzed with SPSS (ver. 13). Results: Comparing the mean scores of communication skills showed a statistically significant difference between the control and experimental groups after the intervention. The paired t-test showed a statistically significant difference in the experimental group before and after the intervention. The independent t-test showed a statistically significant difference in the quality of care between patients of the control and experimental groups after the intervention. Conclusion: The results showed that communication skills training can improve nurses' communication skills and raise the quality of nursing care. Therefore, in order to improve the quality of nursing care, it is recommended that communication skills be established and taught as a separate course in nursing education. PMID:25276707
On-chip generation of Einstein-Podolsky-Rosen states with arbitrary symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gräfe, Markus; Heilmann, René; Nolte, Stefan
We experimentally demonstrate a method for integrated-optical generation of two-photon Einstein-Podolsky-Rosen states featuring arbitrary symmetries. In our setting, we employ detuned directional couplers to impose a freely tailorable phase between the two modes of the state. Our results allow us to mimic the quantum random walk statistics of bosons, fermions, and anyons, particles with fractional exchange statistics.
A Hybrid Template-Based Composite Classification System
2009-02-01
Hybrid Classifier: Forced Decision; 5.3.2 Forced Decision Experimental Results; 5.3.3 Test for Statistical Significance; ... Results; 5.4.2 Test for Statistical Significance: NDEC Option; 5.5 Implementing the Hybrid Classifier with OOL Targets ... complementary in nature. Complementary classifiers are observed by finding an optimal method for partitioning the problem space. For example, the
Validating Coherence Measurements Using Aligned and Unaligned Coherence Functions
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2006-01-01
This paper describes a novel approach based on the use of coherence functions and statistical theory for sensor validation in a harsh environment. By the use of aligned and unaligned coherence functions and statistical theory, one can test for sensor degradation, total sensor failure or changes in the signal. This advanced diagnostic approach and the novel data processing methodology discussed provide a single number that conveys this information. This number, as calculated with standard statistical procedures for comparing the means of two distributions, is compared with results obtained using Yuen's robust statistical method to create confidence intervals. Examination of experimental data from Kulite pressure transducers mounted in a Pratt & Whitney PW4098 combustor using spectrum analysis methods on aligned and unaligned time histories has verified the effectiveness of the proposed method. All the procedures produce good results, which demonstrates the robustness of the technique.
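A minimal sketch of the aligned-versus-unaligned coherence comparison, assuming the standard Welch-type coherence estimate from SciPy; the signals, sampling rate and segment lengths are illustrative, not the engine-test processing chain.

```python
# Sketch: aligned coherence between two sensors versus the unaligned noise floor.
import numpy as np
from scipy.signal import coherence

fs = 5000.0
n = int(4 * fs)
rng = np.random.default_rng(0)
common = rng.normal(size=n)                    # shared (broadband) source signal
s1 = common + 0.5 * rng.normal(size=n)         # sensor 1 record
s2 = common + 0.5 * rng.normal(size=n)         # sensor 2 record

f, c_aligned = coherence(s1, s2, fs=fs, nperseg=1024)
shift = 4096                                   # unaligned: offset by several whole segments
f, c_unaligned = coherence(s1[:-shift], s2[shift:], fs=fs, nperseg=1024)

# Aligned coherence should sit well above the unaligned (noise-floor) level;
# a collapse toward it would flag sensor degradation or a change in the signal.
print(c_aligned.mean(), c_unaligned.mean())
```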
On the Spike Train Variability Characterized by Variance-to-Mean Power Relationship.
Koyama, Shinsuke
2015-07-01
We propose a statistical method for modeling the non-Poisson variability of spike trains observed in a wide range of brain regions. Central to our approach is the assumption that the variance and the mean of interspike intervals are related by a power function characterized by two parameters: the scale factor and exponent. It is shown that this single assumption allows the variability of spike trains to have an arbitrary scale and various dependencies on the firing rate in the spike count statistics, as well as in the interval statistics, depending on the two parameters of the power function. We also propose a statistical model for spike trains that exhibits the variance-to-mean power relationship. Based on this, a maximum likelihood method is developed for inferring the parameters from rate-modulated spike trains. The proposed method is illustrated on simulated and experimental spike trains.
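The variance-to-mean power relationship itself is easy to illustrate: pool interspike intervals across firing-rate conditions and fit Var(ISI) = phi * Mean(ISI)^alpha on log-log axes. The gamma-interval simulation and least-squares fit below are simplifying assumptions, not the paper's statistical model or its maximum likelihood procedure.

```python
# Sketch: estimate the scale factor and exponent of the variance-to-mean power law.
import numpy as np

rng = np.random.default_rng(0)
means, variances = [], []
for rate in [2.0, 5.0, 10.0, 20.0, 40.0]:          # firing-rate conditions (Hz)
    # Non-Poisson interspike intervals with mean 1/rate (gamma shape < 1).
    isi = rng.gamma(shape=0.7, scale=1.0 / (0.7 * rate), size=2000)
    means.append(isi.mean())
    variances.append(isi.var(ddof=1))

# Fit log Var = log phi + alpha * log Mean.
alpha, log_phi = np.polyfit(np.log(means), np.log(variances), deg=1)
print("exponent alpha:", alpha, "scale factor phi:", np.exp(log_phi))
```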
Peng, Fei; Li, Jiao-ting; Long, Min
2015-03-01
To discriminate the acquisition pipelines of digital images, a novel scheme for the identification of natural images and computer-generated graphics is proposed based on statistical and textural features. First, the differences between them are investigated from the viewpoint of statistics and texture, and a 31-dimensional feature vector is extracted for identification. Then, LIBSVM is used for the classification. Finally, the experimental results are presented. The results show that the method achieves an identification accuracy of 97.89% for computer-generated graphics and 97.75% for natural images. The analyses also demonstrate that the proposed method has excellent performance compared with existing methods based only on statistical features or other features. The method has great potential to be implemented for the identification of natural images and computer-generated graphics. © 2014 American Academy of Forensic Sciences.
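As a hedged illustration of the pipeline (statistical/textural features followed by an SVM), the sketch below uses a handful of toy features and scikit-learn's libsvm-backed SVC; these are not the paper's 31-dimensional feature set, and the placeholder images and labels are assumptions.

```python
# Sketch: simple statistical/textural features plus an SVM classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def simple_features(img):
    """Toy statistical/textural features for a grayscale image array."""
    gx, gy = np.gradient(img.astype(float))
    grad = np.hypot(gx, gy)
    return np.array([img.mean(), img.std(),
                     ((img - img.mean()) ** 3).mean() / (img.std() ** 3 + 1e-9),  # skewness
                     grad.mean(), grad.std()])

# X: feature matrix for a set of images, y: 1 = natural, 0 = computer-generated.
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(64, 64)) for _ in range(40)]   # placeholder images
X = np.stack([simple_features(im) for im in images])
y = rng.integers(0, 2, size=len(images))                            # placeholder labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
```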
An experimental comparison of various methods of nearfield acoustic holography
Chelliah, Kanthasamy; Raman, Ganesh; Muehleisen, Ralph T.
2017-05-19
An experimental comparison of four different methods of nearfield acoustic holography (NAH) is presented in this study for planar acoustic sources. The four NAH methods considered in this study are based on: (1) spatial Fourier transform, (2) equivalent sources model, (3) boundary element methods and (4) statistically optimized NAH. Two dimensional measurements were obtained at different distances in front of a tonal sound source and the NAH methods were used to reconstruct the sound field at the source surface. Reconstructed particle velocity and acoustic pressure fields presented in this study showed that the equivalent sources model based algorithm along with Tikhonov regularization provided the best localization of the sources. Reconstruction errors were found to be smaller for the equivalent sources model based algorithm and the statistically optimized NAH algorithm. The effect of hologram distance on the performance of the various algorithms is discussed in detail. The study also compares the computational time required by each algorithm. Four different regularization parameter choice methods were compared. The L-curve method provided more accurate reconstructions than the generalized cross validation and the Morozov discrepancy principle. Finally, the performance of fixed parameter regularization was comparable to that of the L-curve method.
Imaging of neural oscillations with embedded inferential and group prevalence statistics.
Donhauser, Peter W; Florin, Esther; Baillet, Sylvain
2018-02-01
Magnetoencephalography and electroencephalography (MEG, EEG) are essential techniques for studying distributed signal dynamics in the human brain. In particular, the functional role of neural oscillations remains to be clarified. For that reason, imaging methods need to identify distinct brain regions that concurrently generate oscillatory activity, with adequate separation in space and time. Yet, spatial smearing and inhomogeneous signal-to-noise are challenging factors to source reconstruction from external sensor data. The detection of weak sources in the presence of stronger regional activity nearby is a typical complication of MEG/EEG source imaging. We propose a novel, hypothesis-driven source reconstruction approach to address these methodological challenges. The imaging with embedded statistics (iES) method is a subspace scanning technique that constrains the mapping problem to the actual experimental design. A major benefit is that, regardless of signal strength, the contributions from all oscillatory sources, which activity is consistent with the tested hypothesis, are equalized in the statistical maps produced. We present extensive evaluations of iES on group MEG data, for mapping 1) induced oscillations using experimental contrasts, 2) ongoing narrow-band oscillations in the resting-state, 3) co-modulation of brain-wide oscillatory power with a seed region, and 4) co-modulation of oscillatory power with peripheral signals (pupil dilation). Along the way, we demonstrate several advantages of iES over standard source imaging approaches. These include the detection of oscillatory coupling without rejection of zero-phase coupling, and detection of ongoing oscillations in deeper brain regions, where signal-to-noise conditions are unfavorable. We also show that iES provides a separate evaluation of oscillatory synchronization and desynchronization in experimental contrasts, which has important statistical advantages. The flexibility of iES allows it to be adjusted to many experimental questions in systems neuroscience.
Özdelikara, Afitap; Tan, Mehtap
2017-01-01
Objective: Patients receiving chemotherapy struggle with the side effects of this treatment. These side effects oblige patients to use not only pharmacological methods but also non-pharmacological relaxation methods. This study was conducted to determine the effect of reflexology on chemotherapy-induced nausea, vomiting, and fatigue in breast cancer patients. Methods: The study used a pretest–posttest experimental design and was conducted with sixty patients, thirty in the control group and thirty in the experimental group. A sociodemographic form, the Rhodes index of nausea, vomiting, and retching (INVR), and the Brief Fatigue Inventory (BFI) were used to collect the data. Analysis of variance, t-tests, percentage calculations, and Chi-square tests were used to evaluate the data. The data obtained were assessed using the "Statistical Package for Social Science 21.0" software. Results: The difference between the total mean INVR scores of the experimental and control groups was significant at the onset and at the first and second measurements, and the difference in the total mean scores of development and distress between the groups was statistically significant at the third measurement (P < 0.05). The results of the study showed that the BFI mean scores of patients in the experimental group gradually decreased over the first, second, and third measurements (P < 0.05). Conclusions: The present study proved that reflexology decreased the experience, development, and distress of nausea, vomiting, and retching as well as fatigue in the experimental group. Hence, the use of reflexology is recommended for chemotherapy-induced nausea, vomiting, and fatigue. PMID:28695171
GenomeGraphs: integrated genomic data visualization with R.
Durinck, Steffen; Bullard, James; Spellman, Paul T; Dudoit, Sandrine
2009-01-06
Biological studies involve a growing number of distinct high-throughput experiments to characterize samples of interest. There is a lack of methods to visualize these different genomic datasets in a versatile manner. In addition, genomic data analysis requires integrated visualization of experimental data along with constantly changing genomic annotation and statistical analyses. We developed GenomeGraphs, as an add-on software package for the statistical programming environment R, to facilitate integrated visualization of genomic datasets. GenomeGraphs uses the biomaRt package to perform on-line annotation queries to Ensembl and translates these to gene/transcript structures in viewports of the grid graphics package. This allows genomic annotation to be plotted together with experimental data. GenomeGraphs can also be used to plot custom annotation tracks in combination with different experimental data types together in one plot using the same genomic coordinate system. GenomeGraphs is a flexible and extensible software package which can be used to visualize a multitude of genomic datasets within the statistical programming environment R.
Texture Classification by Texton: Statistical versus Binary
Guo, Zhenhua; Zhang, Zhongcheng; Li, Xiu; Li, Qin; You, Jane
2014-01-01
Using statistical textons for texture classification has shown great success recently. The maximal response 8 (Statistical_MR8), image patch (Statistical_Joint) and locally invariant fractal (Statistical_Fractal) are typical statistical texton algorithms and state-of-the-art texture classification methods. However, there are two limitations when using these methods. First, they need a training stage to build a texton library, so the recognition accuracy is highly dependent on the training samples; second, during feature extraction, each local feature is assigned to a texton by searching for the nearest texton in the whole library, which is time consuming when the library is large and the feature dimension is high. To address these two issues, three binary texton counterpart methods are proposed in this paper: Binary_MR8, Binary_Joint, and Binary_Fractal. These methods do not require any training step but encode local features directly into a binary representation. The experimental results on the CUReT, UIUC and KTH-TIPS databases show that binary textons achieve sound results with fast feature extraction, especially when the image size is not large and the image quality is not poor. PMID:24520346
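The nearest-texton search identified above as the bottleneck can be made concrete with a short sketch of the assignment-and-histogram step of the statistical texton pipeline. The texton library and the "filter-bank responses" below are random placeholders (the actual methods use MR8, image-patch or fractal features), so this only illustrates the step whose cost the binary textons are designed to avoid.

```python
import numpy as np

def texton_histogram(features, textons):
    """Assign each local feature vector to its nearest texton (Euclidean
    distance) and return the normalized texton histogram used as the
    texture descriptor.  features: (N, d), textons: (K, d)."""
    # (N, K) distance matrix via broadcasting; the argmin is the costly
    # search step that binary textons bypass.
    d2 = ((features[:, None, :] - textons[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()

# Toy usage with random data standing in for filter-bank responses.
rng = np.random.default_rng(1)
textons = rng.standard_normal((32, 8))      # hypothetical learned library
features = rng.standard_normal((5000, 8))   # local responses from one image
print(texton_histogram(features, textons)[:5])
```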
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilson, William; Krakowiak, Konrad J.; Ulm, Franz-Josef, E-mail: ulm@mit.edu
2014-01-15
According to recent developments in cement clinker engineering, the optimization of chemical substitutions in the main clinker phases offers a promising approach to improve both reactivity and grindability of clinkers. Thus, monitoring the chemistry of the phases may become part of the quality control at the cement plants, along with the usual measurements of the abundance of the mineralogical phases (quantitative X-ray diffraction) and the bulk chemistry (X-ray fluorescence). This paper presents a new method to assess these three complementary quantities with a single experiment. The method is based on electron microprobe spot analyses, performed over a grid located on a representative surface of the sample and interpreted with advanced statistical tools. This paper describes the method and the experimental program performed on industrial clinkers to establish the accuracy in comparison to conventional methods. -- Highlights: •A new method of clinker characterization •Combination of electron probe technique with cluster analysis •Simultaneous assessment of phase abundance, composition and bulk chemistry •Experimental validation performed on industrial clinkers.
The Performance of Methods to Test Upper-Level Mediation in the Presence of Nonnormal Data
ERIC Educational Resources Information Center
Pituch, Keenan A.; Stapleton, Laura M.
2008-01-01
A Monte Carlo study compared the statistical performance of standard and robust multilevel mediation analysis methods to test indirect effects for a cluster randomized experimental design under various departures from normality. The performance of these methods was examined for an upper-level mediation process, where the indirect effect is a fixed…
Study/Experimental/Research Design: Much More Than Statistics
Knight, Kenneth L.
2010-01-01
Abstract Context: The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand. Objective: To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs. Description: The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style. At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary. Advantages: Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results. PMID:20064054
Vertical Enhancement of Second-Year Psychology Research
ERIC Educational Resources Information Center
Morys-Carter, Wakefield L.; Paltoglou, Aspasia E.; Davies, Emma L.
2015-01-01
Statistics and Research Methods modules are often unpopular with psychology students; however, at Oxford Brookes University the seminar component of the second-year research methods module tends to get very positive feedback. Over half of the seminars work towards the submission of a research-based experimental lab report. This article introduces…
ERIC Educational Resources Information Center
Ferrari, Pier Alda; Barbiero, Alessandro
2012-01-01
The increasing use of ordinal variables in different fields has led to the introduction of new statistical methods for their analysis. The performance of these methods needs to be investigated under a number of experimental conditions. Procedures to simulate from ordinal variables are then required. In this article, we deal with simulation from…
NASA Astrophysics Data System (ADS)
Kerr, Laura T.; Adams, Aine; O'Dea, Shirley; Domijan, Katarina; Cullen, Ivor; Hennelly, Bryan M.
2014-05-01
Raman microspectroscopy can be applied to the urinary bladder for highly accurate classification and diagnosis of bladder cancer. This technique can be applied in vitro to bladder epithelial cells obtained from urine cytology or in vivo as an "optical biopsy" to provide results in real-time with higher sensitivity and specificity than current clinical methods. However, there exists a high degree of variability across experimental parameters which need to be standardised before this technique can be utilized in an everyday clinical environment. In this study, we investigate different laser wavelengths (473 nm and 532 nm), sample substrates (glass, fused silica and calcium fluoride) and multivariate statistical methods in order to gain insight into how these various experimental parameters impact on the sensitivity and specificity of Raman cytology.
Noise removing in encrypted color images by statistical analysis
NASA Astrophysics Data System (ADS)
Islam, N.; Puech, W.
2012-03-01
Cryptographic techniques are used to secure confidential data from unauthorized access, but these techniques are very sensitive to noise. A single bit change in encrypted data can have a catastrophic impact on the decrypted data. This paper addresses the problem of removing bit errors in visual data encrypted using the AES algorithm in CBC mode. In order to remove the noise, a method is proposed which is based on the statistical analysis of each block during decryption. The proposed method exploits local statistics of the visual data and the confusion/diffusion properties of the encryption algorithm to remove the errors. Experimental results show that the proposed method can be used at the receiving end as a possible solution for noise removal in visual data in the encrypted domain.
Students perception on the usage of PowerPoint in learning calculus
NASA Astrophysics Data System (ADS)
Othman, Zarith Sofiah; Tarmuji, Nor Habibah; Hilmi, Zulkifli Ab Ghani
2017-04-01
Mathematics is a core subject in most science and technology courses and in some social science programmes. However, the low achievement of students in the subject, especially in topics such as Differentiation and Integration, is a persistent issue. Many factors contribute to the low performance, such as motivation, environment, method of learning, and academic background. The purpose of this paper is to determine students' perception of learning mathematics using PowerPoint for Integration concepts at the undergraduate level, with respect to mathematics anxiety, learning enjoyment, mobility and learning satisfaction. The main content of the PowerPoint presentation focused on integration methods, with historical elements as an added value. The study was conducted on 48 students randomly selected from the computer and applied sciences programmes as the experimental group. Questionnaires were distributed to the students to explore their learning experiences. Another 51 students, taught using the traditional chalkboard method, served as the control group. Both groups were given a test on Integration. The statistical methods used were descriptive statistics and an independent-samples t-test between the experimental and control groups. The findings showed that most students responded positively to the PowerPoint presentations with respect to mobility and learning satisfaction. The experimental group performed better than the control group.
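A hedged sketch of the between-group comparison described here, using synthetic scores in place of the study's actual integration-test data (the group sizes follow the abstract, but the means and spreads are invented).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical integration-test scores; the real study had n=48 and n=51.
experimental = rng.normal(72, 10, 48)   # PowerPoint group (assumed values)
control = rng.normal(65, 10, 51)        # chalkboard group (assumed values)

t, p = stats.ttest_ind(experimental, control)   # classic equal-variance t-test
print(f"t = {t:.2f}, p = {p:.4f}")
# Welch's variant (no equal-variance assumption), often a safer default:
t_w, p_w = stats.ttest_ind(experimental, control, equal_var=False)
print(f"Welch t = {t_w:.2f}, p = {p_w:.4f}")
```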
NASA Astrophysics Data System (ADS)
Decraene, Carolina; Dijckmans, Arne; Reynders, Edwin P. B.
2018-05-01
A method is developed for computing the mean and variance of the diffuse field sound transmission loss of finite-sized layered wall and floor systems that consist of solid, fluid and/or poroelastic layers. This is achieved by coupling a transfer matrix model of the wall or floor to statistical energy analysis subsystem models of the adjacent room volumes. The modal behavior of the wall is approximately accounted for by projecting the wall displacement onto a set of sinusoidal lateral basis functions. This hybrid modal transfer matrix-statistical energy analysis method is validated on multiple wall systems: a thin steel plate, a polymethyl methacrylate panel, a thick brick wall, a sandwich panel, a double-leaf wall with poro-elastic material in the cavity, and a double glazing. The predictions are compared with experimental data and with results obtained using alternative prediction methods such as the transfer matrix method with spatial windowing, the hybrid wave based-transfer matrix method, and the hybrid finite element-statistical energy analysis method. These comparisons confirm the prediction accuracy of the proposed method and the computational efficiency against the conventional hybrid finite element-statistical energy analysis method.
Goudouri, Ourania-Menti; Kontonasaki, Eleana; Papadopoulou, Lambrini; Manda, Marianthi; Kavouras, Panagiotis; Triantafyllidis, Konstantinos S; Stefanidou, Maria; Koidis, Petros; Paraskevopoulos, Konstantinos M
2017-02-01
The aim of this study was to evaluate the textural characteristics of an experimental sol-gel derived feldspathic dental ceramic, which has already been proven bioactive, and to investigate its flexural strength through Weibull statistical analysis. The null hypothesis was that the flexural strength of the experimental and the commercial dental ceramic would be of the same order, resulting in a dental ceramic with apatite forming ability and adequate mechanical integrity. Although the flexural strength of the experimental ceramics was not statistically significantly different from that of the commercial one, the amount of blind pores due to processing was greater. The textural characteristics of the experimental ceramic were in accordance with the standard low porosity levels reported for dental ceramics used for fixed prosthetic restorations. Feldspathic dental ceramics with typical textural characteristics and advanced mechanical properties as well as enhanced apatite forming ability can be synthesized through the sol-gel method. Copyright © 2016 Elsevier Ltd. All rights reserved.
Experimental design and quantitative analysis of microbial community multiomics.
Mallick, Himel; Ma, Siyuan; Franzosa, Eric A; Vatanen, Tommi; Morgan, Xochitl C; Huttenhower, Curtis
2017-11-30
Studies of the microbiome have become increasingly sophisticated, and multiple sequence-based, molecular methods as well as culture-based methods exist for population-scale microbiome profiles. To link the resulting host and microbial data types to human health, several experimental design considerations, data analysis challenges, and statistical epidemiological approaches must be addressed. Here, we survey current best practices for experimental design in microbiome molecular epidemiology, including technologies for generating, analyzing, and integrating microbiome multiomics data. We highlight studies that have identified molecular bioactives that influence human health, and we suggest steps for scaling translational microbiome research to high-throughput target discovery across large populations.
NASA Astrophysics Data System (ADS)
Ma, Nan; Jena, Debdeep
2015-03-01
In this work, the consequence of the high band-edge density of states on the carrier statistics and quantum capacitance in transition metal dichalcogenide two-dimensional semiconductor devices is explored. The study questions the validity of commonly used expressions for extracting carrier densities and field-effect mobilities from the transfer characteristics of transistors with such channel materials. By comparison to experimental data, a new method for the accurate extraction of carrier densities and mobilities is outlined. The work thus highlights a fundamental difference between these materials and traditional semiconductors that must be considered in future experimental measurements.
NASA Astrophysics Data System (ADS)
Gao, Jike
2018-01-01
Using the methods of literature review, instrument measurement, questionnaires and mathematical statistics, this paper analyses the current situation of mass sports in the Tibetan plateau areas of Gansu Province. Experimental measurements of air pollutants and meteorological indices in the Tibetan areas of Gansu Province were taken as the basis, compared against the relevant national standards and exercise science guidelines, and analysed statistically, in order to provide residents of the Tibetan plateau areas of Gansu Province with scientific methods and appropriate times for participating in physical exercise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.
The precision of double-beta (ββ) decay experimental half-lives and their uncertainties is reanalyzed. The method of Benford's distributions has been applied to nuclear reaction, structure and decay data sets. The first-digit distribution trend for ββ-decay two-neutrino half-lives T1/2(2ν) is consistent with that of large nuclear reaction and structure data sets and provides a validation of the experimental half-lives. A complementary analysis of the decay uncertainties indicates deficiencies due to the small size of statistical samples and incomplete collection of experimental information. Further experimental and theoretical efforts would lead toward more precise values of ββ-decay half-lives and nuclear matrix elements.
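A first-digit (Benford) check of the kind described can be sketched as follows. The synthetic "half-life" values are only placeholders, and the chi-square comparison is one common way to quantify agreement with Benford's law, not necessarily the statistic used in the paper.

```python
import numpy as np
from scipy import stats

def benford_test(values):
    """Compare the first-digit distribution of positive values against
    Benford's law using a chi-square goodness-of-fit test."""
    values = np.asarray(values, dtype=float)
    first = np.array([int(f"{v:.6e}"[0]) for v in values if v > 0])
    observed = np.bincount(first, minlength=10)[1:10]
    expected = np.log10(1 + 1 / np.arange(1, 10)) * len(first)
    chi2, p = stats.chisquare(observed, expected)
    return observed, expected, chi2, p

# Toy usage: log-uniformly distributed "half-lives" (roughly Benford-like).
rng = np.random.default_rng(3)
halflives = 10 ** rng.uniform(18, 26, size=500)
print(benford_test(halflives)[2:])   # chi-square statistic and p-value
```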
Parametric study of the swimming performance of a fish robot propelled by a flexible caudal fin.
Low, K H; Chong, C W
2010-12-01
In this paper, we aim to study the swimming performance of fish robots by using a statistical approach. A fish robot employing a carangiform swimming mode had been used as an experimental platform for the performance study. The experiments conducted aim to investigate the effect of various design parameters on the thrust capability of the fish robot with a flexible caudal fin. The controllable parameters associated with the fin include frequency, amplitude of oscillation, aspect ratio and the rigidity of the caudal fin. The significance of these parameters was determined in the first set of experiments by using a statistical approach. A more detailed parametric experimental study was then conducted with only those significant parameters. As a result, the parametric study could be completed with a reduced number of experiments and time spent. With the obtained experimental result, we were able to understand the relationship between various parameters and a possible adjustment of parameters to obtain a higher thrust. The proposed statistical method for experimentation provides an objective and thorough analysis of the effects of individual or combinations of parameters on the swimming performance. Such an efficient experimental design helps to optimize the process and determine factors that influence variability.
Forecasting runout of rock and debris avalanches
Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.
2006-01-01
Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.
Gong, Caixia; Yan, Miao; Jiang, Fei; Chen, Zehua; Long, Yuan; Chen, Lixian; Zheng, Qian; Shi, Bing
2014-06-01
This study aimed to observe the postoperative pain rate and degree of pain in preschool children with cleft lip and palate, and investigate the effect of nursing intervention on pain relief. A total of 120 hospitalized cases of three- to seven-year-old preschool children with cleft lip and palate were selected from May to October 2011. The subjects were randomly divided into the control group and experimental groups 1, 2, and 3. The control group used conventional nursing methods, experimental group 1 used analgesic drug treatment, experimental group 2 used psychological nursing interventions, and experimental group 3 used both psychological nursing intervention and analgesic drug treatment. After 6, 12, 24, and 48 h, pain self-assessment, pain parent-assessment, and pain nurse-assessment were calculated for the four groups using the pain assessment forms, and their ratings were compared. The postoperative pain rates of the four groups ranged from 50.0% to 73.3%. The difference among the four groups was statistically significant (P < 0.001). The differences among the control group and experimental groups 1 and 2 were not statistically significant (P = 0.871), whereas the differences among experimental group 3 and the other groups were statistically significant (P < 0.001). Postoperative pain in preschool children with cleft lip and palate is common. Psychological nursing intervention with analgesic treatment is effective in relieving postoperative pain.
Petersson, K M; Nichols, T E; Poline, J B; Holmes, A P
1999-01-01
Functional neuroimaging (FNI) provides experimental access to the intact living brain making it possible to study higher cognitive functions in humans. In this review and in a companion paper in this issue, we discuss some common methods used to analyse FNI data. The emphasis in both papers is on assumptions and limitations of the methods reviewed. There are several methods available to analyse FNI data indicating that none is optimal for all purposes. In order to make optimal use of the methods available it is important to know the limits of applicability. For the interpretation of FNI results it is also important to take into account the assumptions, approximations and inherent limitations of the methods used. This paper gives a brief overview over some non-inferential descriptive methods and common statistical models used in FNI. Issues relating to the complex problem of model selection are discussed. In general, proper model selection is a necessary prerequisite for the validity of the subsequent statistical inference. The non-inferential section describes methods that, combined with inspection of parameter estimates and other simple measures, can aid in the process of model selection and verification of assumptions. The section on statistical models covers approaches to global normalization and some aspects of univariate, multivariate, and Bayesian models. Finally, approaches to functional connectivity and effective connectivity are discussed. In the companion paper we review issues related to signal detection and statistical inference. PMID:10466149
Schwämmle, Veit; León, Ileana Rodríguez; Jensen, Ole Nørregaard
2013-09-06
Large-scale quantitative analyses of biological systems are often performed with few replicate experiments, leading to multiple nonidentical data sets due to missing values. For example, mass spectrometry driven proteomics experiments are frequently performed with few biological or technical replicates due to sample scarcity, duty-cycle or sensitivity constraints, or limited capacity of the available instrumentation, leading to incomplete results where detection of significant feature changes becomes a challenge. This problem is further exacerbated for the detection of significant changes on the peptide level, for example in phospho-proteomics experiments. In order to assess the extent of this problem and the implications for large-scale proteome analysis, we investigated and optimized the performance of three statistical approaches by using simulated and experimental data sets with varying numbers of missing values. We applied three tools, the standard t test, the moderated t test (also known as limma), and rank products, for the detection of significantly changing features in simulated and experimental proteomics data sets with missing values. The rank product method was improved to work with data sets containing missing values. Extensive analysis of simulated and experimental data sets revealed that the performance of the statistical analysis tools depended on simple properties of the data sets. High-confidence results were obtained by using the limma and rank products methods for analyses of triplicate data sets that exhibited more than 1000 features and more than 50% missing values. The maximum number of differentially represented features was identified by using the limma and rank products methods in a complementary manner. We therefore recommend combined usage of these methods as a novel and optimal way to detect significantly changing features in these data sets. This approach is suitable for large quantitative data sets from stable isotope labeling and mass spectrometry experiments and should be applicable to large data sets of any type. An R script that implements the improved rank products algorithm and the combined analysis is available.
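For illustration, a compact version of the rank-product statistic with a simple permutation p-value is sketched below on synthetic log fold changes. It omits the paper's extension for missing values, and the moderated t-test (limma) comparison would normally be run in R; all data here are invented.

```python
import numpy as np

def rank_products(logfc, n_perm=200, seed=0):
    """Rank-product statistic for up-regulation on a (features x replicates)
    matrix of log fold changes, with a simple permutation p-value.
    Smaller rank products indicate consistently high-ranked features."""
    rng = np.random.default_rng(seed)
    n, k = logfc.shape

    def rp(mat):
        # rank 1 = largest fold change within each replicate column
        ranks = (-mat).argsort(axis=0).argsort(axis=0) + 1
        return np.exp(np.log(ranks).mean(axis=1))   # geometric mean of ranks

    observed = rp(logfc)
    null = np.empty((n_perm, n))
    for b in range(n_perm):
        # shuffle values within each replicate to break feature association
        perm = np.column_stack([rng.permutation(logfc[:, j]) for j in range(k)])
        null[b] = rp(perm)
    pvals = (1 + (null <= observed).sum(axis=0)) / (n_perm + 1)
    return observed, pvals

# Toy data: 100 features, 3 replicates, first 5 features truly up-regulated.
rng = np.random.default_rng(4)
data = rng.normal(0, 1, (100, 3))
data[:5] += 2.0
obs, p = rank_products(data)
print(p[:5], p[5:10])
```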
NASA Astrophysics Data System (ADS)
Lin, Y. Q.; Ren, W. X.; Fang, S. E.
2011-11-01
Although most vibration-based damage detection methods can acquire satisfactory verification on analytical or numerical structures, most of them may encounter problems when applied to real-world structures under varying environments. The damage detection methods that directly extract damage features from the periodically sampled dynamic time history response measurements are desirable but relevant research and field application verification are still lacking. In this second part of a two-part paper, the robustness and performance of the statistics-based damage index using the forward innovation model by stochastic subspace identification of a vibrating structure proposed in the first part have been investigated against two prestressed reinforced concrete (RC) beams tested in the laboratory and a full-scale RC arch bridge tested in the field under varying environments. Experimental verification is focused on temperature effects. It is demonstrated that the proposed statistics-based damage index is insensitive to temperature variations but sensitive to the structural deterioration or state alteration. This makes it possible to detect the structural damage for the real-scale structures experiencing ambient excitations and varying environmental conditions.
Revisiting the Scale-Invariant, Two-Dimensional Linear Regression Method
ERIC Educational Resources Information Center
Patzer, A. Beate C.; Bauer, Hans; Chang, Christian; Bolte, Jan; Su¨lzle, Detlev
2018-01-01
The scale-invariant way to analyze two-dimensional experimental and theoretical data with statistical errors in both the independent and dependent variables is revisited by using what we call the triangular linear regression method. This is compared to the standard least-squares fit approach by applying it to typical simple sets of example data…
Petukh, Marharyta; Li, Minghui; Alexov, Emil
2015-07-01
A new methodology termed Single Amino Acid Mutation based change in Binding free Energy (SAAMBE) was developed to predict the changes of the binding free energy caused by mutations. The method utilizes 3D structures of the corresponding protein-protein complexes and takes advantage of both sequence- and structure-based approaches. The method has two components: a MM/PBSA-based component, and an additional set of statistical terms derived from a statistical investigation of physico-chemical properties of protein complexes. While the approach is a rigid-body approach and does not explicitly consider plausible conformational changes caused by binding, the effect of conformational changes, including changes away from the binding interface, on electrostatics is mimicked with amino acid specific dielectric constants. This provides a significant improvement of SAAMBE predictions, as indicated by a better match against experimentally determined binding free energy changes over 1300 mutations in 43 proteins. The final benchmarking resulted in a very good agreement with experimental data (correlation coefficient 0.624), while the algorithm is fast enough to allow for large-scale calculations (the average time is less than a minute per mutation).
Kinetic analysis of single molecule FRET transitions without trajectories
NASA Astrophysics Data System (ADS)
Schrangl, Lukas; Göhring, Janett; Schütz, Gerhard J.
2018-03-01
Single molecule Förster resonance energy transfer (smFRET) is a popular tool to study biological systems that undergo topological transitions on the nanometer scale. smFRET experiments typically require recording of long smFRET trajectories and subsequent statistical analysis to extract parameters such as the states' lifetimes. Alternatively, analysis of probability distributions exploits the shapes of smFRET distributions at well chosen exposure times and hence works without the acquisition of time traces. Here, we describe a variant that utilizes statistical tests to compare experimental datasets with Monte Carlo simulations. For a given model, parameters are varied to cover the full realistic parameter space. As output, the method yields p-values which quantify the likelihood for each parameter setting to be consistent with the experimental data. The method provides suitable results even if the actual lifetimes differ by an order of magnitude. We also demonstrate the robustness of the method to inaccurately determined input parameters. As proof of concept, the new method was applied to the determination of transition rate constants for Holliday junctions.
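The parameter-scanning idea can be sketched as follows: simulate the smFRET efficiency distribution for each candidate parameter setting and compare it with the measured one using a two-sample Kolmogorov-Smirnov test. The two-state "simulator" and its single parameter are placeholders, and the KS test stands in for whatever statistical tests the authors actually employed.

```python
import numpy as np
from scipy import stats

def pvalue_map(experimental, simulate, param_grid, n_samples=5000, seed=0):
    """For each candidate parameter setting, simulate an smFRET-efficiency
    distribution and compare it with the experimental one via a two-sample
    KS test.  Returns one p-value per setting; high p-values mark settings
    consistent with the data."""
    rng = np.random.default_rng(seed)
    return np.array([stats.ks_2samp(experimental,
                                    simulate(theta, n_samples, rng)).pvalue
                     for theta in param_grid])

# Toy "simulator": two-state system; the parameter is the fraction of time
# spent in the high-FRET state (a stand-in for transition rate constants).
def simulate(frac_high, n, rng):
    high = rng.normal(0.8, 0.05, n)
    low = rng.normal(0.3, 0.05, n)
    return np.where(rng.random(n) < frac_high, high, low)

rng = np.random.default_rng(5)
exp_data = simulate(0.4, 2000, rng)            # pretend this was measured
grid = np.linspace(0.1, 0.9, 9)
print(np.round(pvalue_map(exp_data, simulate, grid), 3))
```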
Tenan, Matthew S; Tweedell, Andrew J; Haynes, Courtney A
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms for determining muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope, and sample entropy were the established methods evaluated, while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold-standard onset and for their 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations in which the prior parameter (p0) was zero. The best performing Bayesian algorithms used p0 = 0 and a posterior probability for onset determination of 60-90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine whether this class of algorithms performs equally well when the time series has multiple bursts of muscle activity.
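As a point of reference for the algorithms compared above, the following sketch implements the classical baseline-threshold onset rule on a rectified, smoothed EMG envelope (related to the linear-envelope approach evaluated in the study). It is not the Bayesian changepoint method that performed best, and all parameter values and the toy signal are illustrative assumptions.

```python
import numpy as np

def emg_onset(emg, fs, baseline_s=0.5, k=3.0, min_above_s=0.025):
    """Return the index where the rectified, box-smoothed EMG first exceeds
    baseline mean + k*SD for at least `min_above_s` seconds."""
    win = int(0.05 * fs)
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    nb = int(baseline_s * fs)                       # baseline segment
    thr = envelope[:nb].mean() + k * envelope[:nb].std()
    need = int(min_above_s * fs)
    above = envelope > thr
    for i in range(nb, len(above) - need):
        if above[i:i + need].all():
            return i
    return None

# Toy signal: 1 s of baseline noise, then a burst starting at sample 1000.
fs = 1000
rng = np.random.default_rng(6)
emg = rng.normal(0, 0.05, 2 * fs)
emg[fs:] += rng.normal(0, 0.4, fs)
print("detected onset ~", emg_onset(emg, fs), "samples (true onset at", fs, ")")
```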
Analysis of Flexural Fatigue Strength of Self Compacting Fibre Reinforced Concrete Beams
NASA Astrophysics Data System (ADS)
Murali, G.; Sudar Celestina, J. P. Arul; Subhashini, N.; Vigneshwari, M.
2017-07-01
This study presents an extensive statistical investigation of variations in the flexural fatigue life of self-compacting fibrous concrete (FC) beams. For this purpose, the experimental data of earlier researchers were examined with the two-parameter Weibull distribution. Two methods, namely the graphical method and the method of moments, were used to analyse the variations in the experimental data, and the results are presented in the form of probability of survival. The Weibull parameter values obtained from the graphical method and the method of moments are precise. At a stress level of 0.7, the fatigue life is 59,861 cycles for a reliability of 90%.
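A minimal sketch of the two-parameter Weibull "graphical" fit mentioned above: median-rank plotting positions, a least-squares line on the linearized CDF, and the fatigue life at a chosen reliability. The fatigue-life values below are invented, not the study's data.

```python
import numpy as np

def weibull_graphical(lives):
    """Two-parameter Weibull fit by the graphical method: median-rank
    plotting positions and a least-squares line on the linearized CDF
    ln(-ln(1-F)) = beta*ln(N) - beta*ln(eta).  Returns (beta, eta)."""
    n = len(lives)
    x = np.sort(np.asarray(lives, dtype=float))
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)        # median ranks
    X, Y = np.log(x), np.log(-np.log(1.0 - F))
    beta, intercept = np.polyfit(X, Y, 1)
    eta = np.exp(-intercept / beta)
    return beta, eta

def life_at_reliability(beta, eta, R):
    """Fatigue life N with probability of survival R."""
    return eta * (-np.log(R)) ** (1.0 / beta)

# Hypothetical fatigue lives (cycles) at one stress level.
lives = [21000, 34000, 52000, 78000, 96000, 130000, 175000, 240000]
beta, eta = weibull_graphical(lives)
print(f"beta = {beta:.2f}, eta = {eta:.0f} cycles, "
      f"N(R=0.90) = {life_at_reliability(beta, eta, 0.90):.0f} cycles")
```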
Mathematics in modern immunology
Castro, Mario; Lythe, Grant; Molina-París, Carmen; ...
2016-02-19
Mathematical and statistical methods enable multidisciplinary approaches that catalyse discovery. Together with experimental methods, they identify key hypotheses, define measurable observables and reconcile disparate results. Here, we collect a representative sample of studies in T-cell biology that illustrate the benefits of modelling–experimental collaborations and that have proven valuable or even groundbreaking. Furthermore, we conclude that it is possible to find excellent examples of synergy between mathematical modelling and experiment in immunology, which have brought significant insight that would not be available without these collaborations, but that much remains to be discovered.
NASA Astrophysics Data System (ADS)
Batic, Matej; Begalli, Marcia; Han, Min Cheol; Hauf, Steffen; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Han Sung; Grazia Pia, Maria; Saracco, Paolo; Weidenspointner, Georg
2014-06-01
A systematic review of methods and data for the Monte Carlo simulation of photon interactions is in progress: it concerns a wide set of theoretical modeling approaches and data libraries available for this purpose. Models and data libraries are assessed quantitatively with respect to an extensive collection of experimental measurements documented in the literature to determine their accuracy; this evaluation exploits rigorous statistical analysis methods. The computational performance of the associated modeling algorithms is evaluated as well. An overview of the assessment of photon interaction models and results of the experimental validation are presented.
Statistical reconstruction for cosmic ray muon tomography.
Schultz, Larry J; Blanpied, Gary S; Borozdin, Konstantin N; Fraser, Andrew M; Hengartner, Nicolas W; Klimenko, Alexei V; Morris, Christopher L; Orum, Chris; Sossong, Michael J
2007-08-01
Highly penetrating cosmic ray muons constantly shower the earth at a rate of about 1 muon per cm2 per minute. We have developed a technique which exploits the multiple Coulomb scattering of these particles to perform nondestructive inspection without the use of artificial radiation. In prior work [1]-[3], we have described heuristic methods for processing muon data to create reconstructed images. In this paper, we present a maximum likelihood/expectation maximization tomographic reconstruction algorithm designed for the technique. This algorithm borrows much from techniques used in medical imaging, particularly emission tomography, but the statistics of muon scattering dictates differences. We describe the statistical model for multiple scattering, derive the reconstruction algorithm, and present simulated examples. We also propose methods to improve the robustness of the algorithm to experimental errors and events departing from the statistical model.
Rákosi, Csilla
2018-01-22
This paper proposes the use of the tools of statistical meta-analysis as a method of conflict resolution with respect to experiments in cognitive linguistics. With the help of statistical meta-analysis, the effect sizes of similar experiments can be compared, a well-founded and robust synthesis of the experimental data can be achieved, and possible causes of any divergence(s) in the outcomes can be revealed. This application of statistical meta-analysis offers a novel method for dealing with diverging evidence. The workability of this idea is exemplified by a case study dealing with a series of experiments conducted as non-exact replications of Thibodeau and Boroditsky (PLoS ONE 6(2):e16782, 2011. https://doi.org/10.1371/journal.pone.0016782 ).
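One standard way to synthesize effect sizes from several replications, as proposed here, is a random-effects model. The DerSimonian-Laird sketch below uses invented effect sizes and variances and is not tied to the Thibodeau and Boroditsky replications analysed in the paper.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling (DerSimonian-Laird) of per-study effect sizes
    and their within-study variances.  Returns the pooled effect, its
    standard error, and the between-study variance tau^2."""
    y = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)              # fixed-effect estimate
    Q = np.sum(w * (y - fixed) ** 2)               # heterogeneity statistic
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(y) - 1)) / C)
    w_star = 1.0 / (v + tau2)
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes (e.g., log odds ratios) from five replications.
effects = [0.42, 0.10, 0.35, -0.05, 0.25]
variances = [0.04, 0.03, 0.06, 0.05, 0.02]
pooled, se, tau2 = dersimonian_laird(effects, variances)
print(f"pooled = {pooled:.3f} +/- {1.96 * se:.3f}, tau^2 = {tau2:.3f}")
```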
Ing, Alex; Schwarzbauer, Christian
2014-01-01
Functional connectivity has become an increasingly important area of research in recent years. At a typical spatial resolution, approximately 300 million connections link each voxel in the brain with every other. This pattern of connectivity is known as the functional connectome. Connectivity is often compared between experimental groups and conditions. Standard methods used to control the type 1 error rate are likely to be insensitive when comparisons are carried out across the whole connectome, due to the huge number of statistical tests involved. To address this problem, two new cluster based methods--the cluster size statistic (CSS) and cluster mass statistic (CMS)--are introduced to control the family wise error rate across all connectivity values. These methods operate within a statistical framework similar to the cluster based methods used in conventional task based fMRI. Both methods are data driven, permutation based and require minimal statistical assumptions. Here, the performance of each procedure is evaluated in a receiver operator characteristic (ROC) analysis, utilising a simulated dataset. The relative sensitivity of each method is also tested on real data: BOLD (blood oxygen level dependent) fMRI scans were carried out on twelve subjects under normal conditions and during the hypercapnic state (induced through the inhalation of 6% CO2 in 21% O2 and 73% N2). Both CSS and CMS detected significant changes in connectivity between normal and hypercapnic states. A family wise error correction carried out at the individual connection level exhibited no significant changes in connectivity.
NASA Astrophysics Data System (ADS)
Liu, X.; Zhang, J. X.; Zhao, Z.; Ma, A. D.
2015-06-01
Synthetic aperture radar (SAR) is being applied more and more widely in remote sensing because of its all-time and all-weather operation, and feature extraction from high resolution SAR images has become a topic of considerable interest. In particular, with the continuous improvement of airborne SAR image resolution, image texture information has become more abundant, which is of great significance for classification and extraction. In this paper, a novel method for built-up area extraction using both statistical and structural features is proposed, based on the texture characteristics of built-up areas. First, statistical texture features and structural features are extracted with the classical gray level co-occurrence matrix and the variogram function respectively, taking direction information into account. Next, feature weights are calculated according to the Bhattacharyya distance, and all features are fused using these weights. Finally, the fused image is classified with the K-means method and the built-up areas are extracted after a post-classification process. The proposed method was tested on domestic airborne P-band polarimetric SAR images, and two comparison experiments using the statistical texture method and the structural texture method alone were carried out. In addition to qualitative analysis, a quantitative analysis based on manually delineated built-up areas was performed: in the relatively simple test area the detection rate exceeded 90%, and in the relatively complex test area the detection rate was also higher than that of the other two methods. The results show that the proposed method can effectively and accurately extract built-up areas from high resolution airborne SAR imagery.
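The statistical-texture part of the pipeline can be illustrated with a small gray level co-occurrence matrix computation. The sketch below uses a single horizontal offset and a random speckle-like patch, whereas the paper also uses directional variogram features and Bhattacharyya-distance weighting that are not shown here.

```python
import numpy as np

def glcm_features(img, levels=16):
    """Gray-level co-occurrence matrix for the horizontal neighbour offset
    (0, 1) and two classical Haralick statistics (contrast, energy).
    The image is quantized to `levels` gray levels first."""
    q = np.floor(img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()      # pixel and its right neighbour
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)                      # accumulate co-occurrences
    glcm /= glcm.sum()
    i, j = np.indices(glcm.shape)
    contrast = np.sum(glcm * (i - j) ** 2)
    energy = np.sum(glcm ** 2)
    return contrast, energy

# Toy usage: a speckle-like random patch standing in for a SAR image window.
rng = np.random.default_rng(7)
patch = rng.rayleigh(1.0, (64, 64))
print(glcm_features(patch))
```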
Su, Cheng; Zhou, Lei; Hu, Zheng; Weng, Winnie; Subramani, Jayanthi; Tadkod, Vineet; Hamilton, Kortney; Bautista, Ami; Wu, Yu; Chirmule, Narendra; Zhong, Zhandong Don
2015-10-01
Biotherapeutics can elicit immune responses, which can alter the exposure, safety, and efficacy of the therapeutics. A well-designed and robust bioanalytical method is critical for the detection and characterization of relevant anti-drug antibodies (ADA) and the success of an immunogenicity study. As a fundamental criterion in immunogenicity testing, assay cut points need to be statistically established with a risk-based approach to reduce subjectivity. This manuscript describes the development of a validated, web-based, multi-tier customized assay statistical tool (CAST) for assessing cut points of ADA assays. The tool provides an intuitive web interface that allows users to import experimental data generated from a standardized experimental design, select the assay factors, run the standardized analysis algorithms, and generate tables, figures, and listings (TFL). It allows bioanalytical scientists to perform complex statistical analysis at the click of a button to produce reliable assay parameters in support of immunogenicity studies. Copyright © 2015 Elsevier B.V. All rights reserved.
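A minimal sketch of one common parametric screening cut-point calculation (mean plus 1.645 standard deviations of log-transformed drug-naive signals, targeting a 5% false-positive rate). This is only a generic illustration under stated assumptions: CAST's validated multi-tier workflow includes outlier removal, normality checks and confirmatory/titration cut points that are not reproduced here, and the donor signals are simulated.

```python
import numpy as np
from scipy import stats

def screening_cut_point(signals, fp_rate=0.05, log_transform=True):
    """Parametric screening cut point from drug-naive sample signals:
    mean + z(1 - fp_rate) * SD, optionally on log-transformed data."""
    x = np.log(signals) if log_transform else np.asarray(signals, float)
    z = stats.norm.ppf(1 - fp_rate)        # 1.645 for a 5% false-positive rate
    cp = x.mean() + z * x.std(ddof=1)
    return np.exp(cp) if log_transform else cp

# Hypothetical assay signals from 50 treatment-naive donors.
rng = np.random.default_rng(8)
naive_signals = rng.lognormal(mean=1.0, sigma=0.25, size=50)
print(f"screening cut point ~ {screening_cut_point(naive_signals):.2f}")
```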
A Weibull characterization for tensile fracture of multicomponent brittle fibers
NASA Technical Reports Server (NTRS)
Barrows, R. G.
1977-01-01
A statistical characterization for multicomponent brittle fibers is presented. The method, which is an extension of the usual Weibull distribution procedures, statistically considers the components making up a fiber (e.g., substrate, sheath, and surface) as separate entities and taken together, as in a fiber. Tensile data for a silicon carbide fiber and for an experimental carbon-boron alloy fiber are evaluated in terms of the proposed multicomponent Weibull characterization.
Becker, Betsy Jane; Aloe, Ariel M; Duvendack, Maren; Stanley, T D; Valentine, Jeffrey C; Fretheim, Atle; Tugwell, Peter
2017-09-01
To outline issues of importance to analytic approaches to the synthesis of quasi-experiments (QEs) and to provide a statistical model for use in analysis. We drew on studies of statistics, epidemiology, and social-science methodology to outline methods for synthesis of QE studies. The design and conduct of QEs, effect sizes from QEs, and moderator variables for the analysis of those effect sizes were discussed. Biases, confounding, design complexities, and comparisons across designs offer serious challenges to syntheses of QEs. Key components of meta-analyses of QEs were identified, including the aspects of QE study design to be coded and analyzed. Of utmost importance are the design and statistical controls implemented in the QEs. Such controls and any potential sources of bias and confounding must be modeled in analyses, along with aspects of the interventions and populations studied. Because of such controls, effect sizes from QEs are more complex than those from randomized experiments. A statistical meta-regression model that incorporates important features of the QEs under review was presented. Meta-analyses of QEs provide particular challenges, but thorough coding of intervention characteristics and study methods, along with careful analysis, should allow for sound inferences. Copyright © 2017 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Garrido, Mariquita; Payne, David A.
Minimum competency cut-off scores on a statistics exam were estimated under four conditions: the Angoff judging method with item data available (n=20) and without item data available (n=19), and the Modified Angoff method with (n=19) and without (n=19) item data available to judges. The Angoff method required free-response percentage estimates (0-100 percent),…
The role of causal criteria in causal inferences: Bradford Hill's "aspects of association".
Ward, Andrew C
2009-06-17
As noted by Wesley Salmon and many others, causal concepts are ubiquitous in every branch of theoretical science, in the practical disciplines and in everyday life. In the theoretical and practical sciences especially, people often base claims about causal relations on applications of statistical methods to data. However, the source and type of data place important constraints on the choice of statistical methods as well as on the warrant attributed to the causal claims based on the use of such methods. For example, much of the data used by people interested in making causal claims come from non-experimental, observational studies in which random allocations to treatment and control groups are not present. Thus, one of the most important problems in the social and health sciences concerns making justified causal inferences using non-experimental, observational data. In this paper, I examine one method of justifying such inferences that is especially widespread in epidemiology and the health sciences generally - the use of causal criteria. I argue that while the use of causal criteria is not appropriate for either deductive or inductive inferences, they do have an important role to play in inferences to the best explanation. As such, causal criteria, exemplified by what Bradford Hill referred to as "aspects of [statistical] associations", have an indispensible part to play in the goal of making justified causal claims.
The effect of teaching medical ethics on medical students' moral reasoning.
Self, D J; Wolinsky, F D; Baldwin, D C
1989-12-01
A study assessed the effect of incorporating medical ethics into the medical curriculum and the relative effects of two methods of implementing that curriculum, namely, lecture and case-study discussions. Results indicate a statistically significant increase (p less than or equal to .0001) in the level of moral reasoning of students exposed to the medical ethics course, regardless of format. Moreover, the unadjusted posttest scores indicated that the case-study method was significantly (p less than or equal to .03) more effective than the lecture method in increasing students' level of moral reasoning. When adjustments were made for the pretest scores, however, this difference was not statistically significant (p less than or equal to .18). Regression analysis by linear panel techniques revealed that age, gender, undergraduate grade-point average, and scores on the Medical College Admission Test were not related to the changes in moral-reasoning scores. All of the variance that could be explained was due to the students' being in one of the two experimental groups. In comparison with the control group, the change associated with each experimental format was statistically significant (lecture, p less than or equal to .004; case study, p less than or equal to .0001). Various explanations for these findings and their implications are given.
Blind identification of image manipulation type using mixed statistical moments
NASA Astrophysics Data System (ADS)
Jeong, Bo Gyu; Moon, Yong Ho; Eom, Il Kyu
2015-01-01
We present a blind identification of image manipulation types such as blurring, scaling, sharpening, and histogram equalization. Motivated by the fact that image manipulations can change the frequency characteristics of an image, we introduce three types of feature vectors composed of statistical moments. The proposed statistical moments are generated from separated wavelet histograms, the characteristic functions of the wavelet variance, and the characteristic functions of the spatial image. Our method can solve the n-class classification problem. Through experimental simulations, we demonstrate that our proposed method can achieve high performance in manipulation type detection. The average rate of the correctly identified manipulation types is as high as 99.22%, using 10,800 test images and six manipulation types including the authentic image.
Experimental analysis of computer system dependability
NASA Technical Reports Server (NTRS)
Iyer, Ravishankar, K.; Tang, Dong
1993-01-01
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: design phase, prototype phase, and operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by discussion of these methods or topics as well as representative studies. The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced. The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis. The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
Scene-based nonuniformity correction with reduced ghosting using a gated LMS algorithm.
Hardie, Russell C; Baxley, Frank; Brys, Brandon; Hytla, Patrick
2009-08-17
In this paper, we present a scene-based nonuniformity correction (NUC) method using a modified adaptive least mean square (LMS) algorithm with a novel gating operation on the updates. The gating is designed to significantly reduce ghosting artifacts produced by many scene-based NUC algorithms by halting updates when temporal variation is lacking. We define the algorithm and present a number of experimental results to demonstrate the efficacy of the proposed method in comparison to several previously published methods including other LMS and constant statistics based methods. The experimental results include simulated imagery and a real infrared image sequence. We show that the proposed method significantly reduces ghosting artifacts, but has a slightly longer convergence time. (c) 2009 Optical Society of America
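A minimal sketch of the gating idea is shown below; it is an offset-only LMS update with an assumed 3x3 spatial-mean scene estimate and a frame-difference gate, not the published algorithm.

```python
# Sketch of a gated LMS nonuniformity correction (offset-only for brevity).
# Assumptions: desired image = 3x3 spatial mean of the corrected frame,
# gating = skip pixels whose frame-to-frame change is below a threshold.
import numpy as np
from scipy.ndimage import uniform_filter

def gated_lms_nuc(frames, mu=0.05, gate_thresh=2.0):
    offset = np.zeros_like(frames[0], dtype=float)
    prev = frames[0].astype(float)
    for raw in frames[1:]:
        raw = raw.astype(float)
        corrected = raw - offset
        desired = uniform_filter(corrected, size=3)   # crude scene estimate
        error = corrected - desired
        gate = np.abs(raw - prev) > gate_thresh       # update only where the scene changed
        offset[gate] += mu * error[gate]              # LMS step, halted elsewhere
        prev = raw
        yield corrected

# Usage: iterate gated_lms_nuc(list_of_frames) to obtain corrected frames one by one.
```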
Air ions and respiratory function outcomes: a comprehensive review
2013-01-01
Background From a mechanistic or physical perspective there is no basis to suspect that electric charges on clusters of air molecules (air ions) would have beneficial or deleterious effects on respiratory function. Yet, there is a large lay and scientific literature spanning 80 years that asserts exposure to air ions affects the respiratory system and has other biological effects. Aims This review evaluates the scientific evidence in published human experimental studies regarding the effects of exposure to air ions on respiratory performance and symptoms. Methods We identified 23 studies (published 1933–1993) that met our inclusion criteria. Relevant data pertaining to study population characteristics, study design, experimental methods, statistical techniques, and study results were assessed. Where relevant, random effects meta-analysis models were utilized to quantify similar exposure and outcome groupings. Results The included studies examined the therapeutic benefits of exposure to negative air ions on respiratory outcomes, such as ventilatory function and asthmatic symptoms. Study specific sample sizes ranged between 7 and 23, and studies varied considerably by subject characteristics (e.g., infants with asthma, adults with emphysema), experimental method, outcomes measured (e.g., subjective symptoms, sensitivity, clinical pulmonary function), analytical design, and statistical reporting. Conclusions Despite numerous experimental and analytical differences across studies, the literature does not clearly support a beneficial effect of exposure to negative air ions on respiratory function or asthmatic symptom alleviation. Further, collectively, the human experimental studies do not indicate a significant detrimental effect of exposure to positive air ions on respiratory measures. Exposure to negative or positive air ions does not appear to play an appreciable role in respiratory function. PMID:24016271
ERIC Educational Resources Information Center
Guerin, Stephen M.; Guerin, Clark L.
1979-01-01
Discusses a phenomenon called Extrasensory Perception (ESP) whereby information is gained directly by the mind without the use of the ordinary senses. Experiments in ESP and the basic equipment and methods are presented. Statistical evaluation of ESP experimental results is also included. (HM)
Bayesian methods in reliability
NASA Astrophysics Data System (ADS)
Sander, P.; Badoux, R.
1991-11-01
The present proceedings from a course on Bayesian methods in reliability encompasses Bayesian statistical methods and their computational implementation, models for analyzing censored data from nonrepairable systems, the traits of repairable systems and growth models, the use of expert judgment, and a review of the problem of forecasting software reliability. Specific issues addressed include the use of Bayesian methods to estimate the leak rate of a gas pipeline, approximate analyses under great prior uncertainty, reliability estimation techniques, and a nonhomogeneous Poisson process. Also addressed are the calibration sets and seed variables of expert judgment systems for risk assessment, experimental illustrations of the use of expert judgment for reliability testing, and analyses of the predictive quality of software-reliability growth models such as the Weibull order statistics.
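As a minimal illustration of the Bayesian machinery such a course covers (not an example from the proceedings), a conjugate Beta-Binomial update of a component failure probability looks like this; the prior parameters are assumed.

```python
# Minimal Bayesian reliability sketch: Beta-Binomial update of a failure probability.
# The prior parameters (a=1, b=9, i.e. roughly 10% prior mean) are illustrative assumptions.
from scipy.stats import beta

a, b = 1.0, 9.0           # Beta prior encoding expert judgment
failures, trials = 2, 50  # observed test data
post = beta(a + failures, b + trials - failures)
print("posterior mean failure prob:", post.mean())
print("95% credible interval:", post.interval(0.95))
```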
What Is Design-Based Causal Inference for RCTs and Why Should I Use It? NCEE 2017-4025
ERIC Educational Resources Information Center
Schochet, Peter Z.
2017-01-01
Design-based methods have recently been developed as a way to analyze data from impact evaluations of interventions, programs, and policies. The impact estimators are derived using the building blocks of experimental designs with minimal assumptions, and have good statistical properties. The methods apply to randomized controlled trials (RCTs) and…
A Comparison of Methods to Test for Mediation in Multisite Experiments
ERIC Educational Resources Information Center
Pituch, Keenan A.; Whittaker, Tiffany A.; Stapleton, Laura M.
2005-01-01
A Monte Carlo study extended the research of MacKinnon, Lockwood, Hoffman, West, and Sheets (2002) for single-level designs by examining the statistical performance of four methods to test for mediation in a multilevel experimental design. The design studied was a two-group experiment that was replicated across several sites, included a single…
Lingner, Thomas; Kataya, Amr R. A.; Reumann, Sigrun
2012-01-01
We recently developed the first algorithms specifically for plants to predict proteins carrying peroxisome targeting signals type 1 (PTS1) from genome sequences. As validated experimentally, the prediction methods are able to correctly predict unknown peroxisomal Arabidopsis proteins and to infer novel PTS1 tripeptides. The high prediction performance is primarily determined by the large number and sequence diversity of the underlying positive example sequences, which were mainly derived from EST databases. However, a few constructs remained cytosolic in experimental validation studies, indicating sequencing errors in some ESTs. To identify erroneous sequences, we validated subcellular targeting of additional positive example sequences in the present study. Moreover, we analyzed the distribution of prediction scores separately for each orthologous group of PTS1 proteins, which generally resembled normal distributions with group-specific mean values. The cytosolic sequences commonly represented outliers of low prediction scores and were located at the very tail of a fitted normal distribution. Three statistical methods for identifying outliers were compared in terms of sensitivity and specificity. Their combined application allows elimination of erroneous ESTs from positive example data sets. This new post-validation method will further improve the prediction accuracy of both PTS1 and PTS2 protein prediction models for plants, fungi, and mammals. PMID:22415050
Evaluating the decision accuracy and speed of clinical data visualizations.
Pieczkiewicz, David S; Finkelstein, Stanley M
2010-01-01
Clinicians face an increasing volume of biomedical data. Assessing the efficacy of systems that enable accurate and timely clinical decision making merits corresponding attention. This paper discusses the multiple-reader multiple-case (MRMC) experimental design and linear mixed models as means of assessing and comparing decision accuracy and latency (time) for decision tasks in which clinician readers must interpret visual displays of data. These experimental and statistical techniques, used extensively in radiology imaging studies, offer a number of practical and analytic advantages over more traditional quantitative methods such as percent-correct measurements and ANOVAs, and are recommended for their statistical efficiency and generalizability. An example analysis using readily available, free, and commercial statistical software is provided as an appendix. While these techniques are not appropriate for all evaluation questions, they can provide a valuable addition to the evaluative toolkit of medical informatics research.
NASA Astrophysics Data System (ADS)
Guala, M.; Liu, M.
2017-12-01
The kinematics of sediment particles is investigated by non-intrusive imaging methods to provide a statistical description of bedload transport in conditions near the threshold of motion. In particular, we focus on the cyclic transition between motion and rest regimes to quantify the waiting time statistics inferred to be responsible for anomalous diffusion, and so far elusive. Despite obvious limitations in the spatio-temporal domain of the observations, we are able to identify the probability distributions of the particle step time and length, velocity, acceleration, waiting time, and thus distinguish which quantities exhibit well converged mean values, based on the thickness of their respective tails. The experimental results shown here for four different transport conditions highlight the importance of the waiting time distribution and represent a benchmark dataset for the stochastic modeling of bedload transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Labby, Z.
Physicists are often expected to have a solid grounding in experimental design and statistical analysis, sometimes filling in when biostatisticians or other experts are not available for consultation. Unfortunately, graduate education on these topics is seldom emphasized and few opportunities for continuing education exist. Clinical physicists incorporate new technology and methods into their practice based on published literature. A poor understanding of experimental design and analysis could result in inappropriate use of new techniques. Clinical physicists also improve current practice through quality initiatives that require sound experimental design and analysis. Academic physicists with a poor understanding of design and analysis may produce ambiguous (or misleading) results. This can result in unnecessary rewrites, publication rejection, and experimental redesign (wasting time, money, and effort). This symposium will provide a practical review of error and uncertainty, common study designs, and statistical tests. Instruction will primarily focus on practical implementation through examples and answer questions such as: where would you typically apply the test/design and where is the test/design typically misapplied (i.e., common pitfalls)? An analysis of error and uncertainty will also be explored using biological studies and associated modeling as a specific use case. Learning Objectives: (1) Understand common experimental testing and clinical trial designs, what questions they can answer, and how to interpret the results; (2) Determine where specific statistical tests are appropriate and identify common pitfalls; (3) Understand how uncertainty and error are addressed in biological testing and associated biological modeling.
2011-10-14
landscapes. It is motivated by statistical learning arguments and unifies the tasks of biasing the molecular dynamics to escape free energy wells and estimating the free energy...experimentally, to characterize global changes as well as investigate relative stabilities. In most applications, a brute-force computation based on
NASA Technical Reports Server (NTRS)
Lee, Sangsan; Lele, Sanjiva K.; Moin, Parviz
1992-01-01
For the numerical simulation of inhomogeneous turbulent flows, a method is developed for generating stochastic inflow boundary conditions with a prescribed power spectrum. Turbulence statistics from spatial simulations using this method with a low fluctuation Mach number are in excellent agreement with the experimental data, which validates the procedure. Turbulence statistics from spatial simulations are also compared to those from temporal simulations using Taylor's hypothesis. Statistics such as turbulence intensity, vorticity, and velocity derivative skewness compare favorably with the temporal simulation. However, the statistics of dilatation show a significant departure from those obtained in the temporal simulation. To directly check the applicability of Taylor's hypothesis, space-time correlations of fluctuations in velocity, vorticity, and dilatation are investigated. Convection velocities based on vorticity and velocity fluctuations are computed as functions of the spatial and temporal separations. The profile of the space-time correlation of dilatation fluctuations is explained via a wave propagation model.
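The sketch below illustrates the general idea in one dimension, assuming random Fourier phases with amplitudes fixed by a prescribed spectrum; the actual method generates multi-dimensional, time-dependent inflow fields.

```python
# 1-D illustration: synthesize a random signal with a prescribed power spectrum
# by assigning random phases to Fourier modes. The spectrum shape is an assumption.
import numpy as np

def synthesize(n, spectrum, seed=0):
    """spectrum: array of length n//2+1 giving target power at each rfft frequency."""
    rng = np.random.default_rng(seed)
    amplitude = np.sqrt(spectrum)
    phases = rng.uniform(0, 2 * np.pi, size=amplitude.size)
    coeffs = amplitude * np.exp(1j * phases)
    coeffs[0] = 0.0                      # zero mean
    return np.fft.irfft(coeffs, n=n)

n = 1024
k = np.arange(n // 2 + 1, dtype=float)
target = np.zeros_like(k)
target[1:] = k[1:] ** (-5.0 / 3.0)       # Kolmogorov-like slope, for illustration only
u = synthesize(n, target)
```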
NASA Astrophysics Data System (ADS)
Kovalevsky, Louis; Langley, Robin S.; Caro, Stephane
2016-05-01
Due to the high cost of experimental EMI measurements, significant attention has been focused on numerical simulation. Classical methods such as the Method of Moments or Finite Difference Time Domain are not well suited for this type of problem, as they require a fine discretisation of space and fail to take into account uncertainties. In this paper, the authors show that Statistical Energy Analysis (SEA) is well suited for this type of application. The SEA is a statistical approach employed to solve high-frequency problems of electromagnetically reverberant cavities at a reduced computational cost. The key aspects of this approach are (i) to consider an ensemble of systems that share the same gross parameters, and (ii) to avoid solving Maxwell's equations inside the cavity, using the power balance principle. The output is an estimate of the field magnitude distribution in each cavity. The method is applied to a typical aircraft structure.
Statistical methods for thermonuclear reaction rates and nucleosynthesis simulations
NASA Astrophysics Data System (ADS)
Iliadis, Christian; Longland, Richard; Coc, Alain; Timmes, F. X.; Champagne, Art E.
2015-03-01
Rigorous statistical methods for estimating thermonuclear reaction rates and nucleosynthesis are becoming increasingly established in nuclear astrophysics. The main challenge being faced is that experimental reaction rates are highly complex quantities derived from a multitude of different measured nuclear parameters (e.g., astrophysical S-factors, resonance energies and strengths, particle and γ-ray partial widths). We discuss the application of the Monte Carlo method to two distinct, but related, questions. First, given a set of measured nuclear parameters, how can one best estimate the resulting thermonuclear reaction rates and associated uncertainties? Second, given a set of appropriate reaction rates, how can one best estimate the abundances from nucleosynthesis (i.e., reaction network) calculations? The techniques described here provide probability density functions that can be used to derive statistically meaningful reaction rates and final abundances for any desired coverage probability. Examples are given for applications to s-process neutron sources, core-collapse supernovae, classical novae, and Big Bang nucleosynthesis.
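A toy version of the Monte Carlo propagation described might look like the following; the resonance energies, strengths, and lognormal uncertainty factors are invented for illustration and the rate is kept in relative units.

```python
# Toy Monte Carlo propagation of resonance-strength uncertainties to a reaction rate.
# Resonance energies/strengths and lognormal uncertainty factors are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
E = np.array([0.15, 0.30, 0.45])        # resonance energies (MeV), assumed
wg = np.array([1e-6, 5e-5, 2e-4])       # resonance strengths (MeV), assumed
f_u = np.array([1.5, 1.2, 1.3])         # lognormal uncertainty factors, assumed
kT = 0.03                               # temperature in energy units, assumed

samples = []
for _ in range(10000):
    wg_s = wg * rng.lognormal(mean=0.0, sigma=np.log(f_u))
    samples.append(np.sum(wg_s * np.exp(-E / kT)))   # narrow-resonance sum (relative units)

lo, med, hi = np.percentile(samples, [16, 50, 84])
print(f"rate (relative): {med:.3e} (+{hi - med:.1e} / -{med - lo:.1e})")
```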
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
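A rough sketch of the sampling step described, under one common convention for the degrees of freedom and scaling (not necessarily the paper's), is given below; the data and output function are placeholders.

```python
# Sketch: generate realizations of the population mean and covariance of input
# variables given limited data, then propagate each realization through an output
# function to see how data scarcity inflates the variance of the estimated output mean.
import numpy as np
from scipy.stats import wishart, multivariate_t

data = np.random.default_rng(2).normal(size=(15, 3))   # placeholder experimental data
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

def output(x):            # placeholder response function of the inputs
    return x[0] + 2.0 * x[1] * x[2]

means = []
for _ in range(2000):
    cov = wishart(df=n - 1, scale=S / (n - 1)).rvs()                 # covariance realization
    mean = np.asarray(multivariate_t(loc=xbar, shape=cov / n, df=n - 1).rvs()).reshape(-1)
    x = np.random.default_rng().multivariate_normal(mean, cov, size=500)
    means.append(np.apply_along_axis(output, 1, x).mean())

print("variance of the estimated output mean:", np.var(means))
```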
Multivariate analysis: greater insights into complex systems
USDA-ARS?s Scientific Manuscript database
Many agronomic researchers measure and collect multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate (MV) statistical methods encompass the simultaneous analysis of all random variables (RV) measured on each experimental or sampling ...
Crans, Gerald G; Shuster, Jonathan J
2008-08-15
The debate as to which statistical methodology is most appropriate for the analysis of the two-sample comparative binomial trial has persisted for decades. Practitioners who favor the conditional methods of Fisher, Fisher's exact test (FET), claim that only experimental outcomes containing the same amount of information should be considered when performing analyses. Hence, the total number of successes should be fixed at its observed level in hypothetical repetitions of the experiment. Using conditional methods in clinical settings can pose interpretation difficulties, since results are derived using conditional sample spaces rather than the set of all possible outcomes. Perhaps more importantly from a clinical trial design perspective, this test can be too conservative, resulting in greater resource requirements and more subjects exposed to an experimental treatment. The actual significance level attained by FET (the size of the test) has not been reported in the statistical literature. Berger (J. R. Statist. Soc. D (The Statistician) 2001; 50:79-85) proposed assessing the conservativeness of conditional methods using p-value confidence intervals. In this paper we develop a numerical algorithm that calculates the size of FET for sample sizes, n, up to 125 per group at the two-sided significance level, alpha = 0.05. Additionally, this numerical method is used to define new significance levels alpha(*) = alpha+epsilon, where epsilon is a small positive number, for each n, such that the size of the test is as close as possible to the pre-specified alpha (0.05 for the current work) without exceeding it. Lastly, a sample size and power calculation example are presented, which demonstrates the statistical advantages of implementing the adjustment to FET (using alpha(*) instead of alpha) in the two-sample comparative binomial trial. 2008 John Wiley & Sons, Ltd
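A brute-force version of the size computation, workable only for small per-group sample sizes and not the authors' optimized algorithm, can be written as:

```python
# Brute-force computation of the attained size of Fisher's exact test for a
# two-sample binomial trial with n subjects per group (illustrative, small n only).
import numpy as np
from scipy.stats import binom, fisher_exact

def fet_size(n, alpha=0.05, p_grid=np.linspace(0.01, 0.99, 99)):
    # Precompute which outcomes (x1, x2) the test rejects at level alpha.
    reject = np.zeros((n + 1, n + 1), dtype=bool)
    for x1 in range(n + 1):
        for x2 in range(n + 1):
            _, p = fisher_exact([[x1, n - x1], [x2, n - x2]], alternative="two-sided")
            reject[x1, x2] = p <= alpha
    # Size = maximum rejection probability over the null (p1 = p2 = p).
    size = 0.0
    for p in p_grid:
        pmf = binom.pmf(np.arange(n + 1), n, p)
        size = max(size, float(pmf @ reject @ pmf))
    return size

print(fet_size(20))   # typically well below the nominal 0.05, illustrating conservativeness
```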
Tweedell, Andrew J.; Haynes, Courtney A.
2017-01-01
The timing of muscle activity is a commonly applied analytic method to understand how the nervous system controls movement. This study systematically evaluates six classes of standard and statistical algorithms to determine muscle onset in both experimental surface electromyography (EMG) and simulated EMG with a known onset time. Eighteen participants had EMG collected from the biceps brachii and vastus lateralis while performing a biceps curl or knee extension, respectively. Three established methods and three statistical methods for EMG onset were evaluated. Linear envelope, Teager-Kaiser energy operator + linear envelope and sample entropy were the established methods evaluated while general time series mean/variance, sequential and batch processing of parametric and nonparametric tools, and Bayesian changepoint analysis were the statistical techniques used. Visual EMG onset (experimental data) and objective EMG onset (simulated data) were compared with algorithmic EMG onset via root mean square error and linear regression models for stepwise elimination of inferior algorithms. The top algorithms for both data types were analyzed for their mean agreement with the gold standard onset and evaluation of 95% confidence intervals. The top algorithms were all Bayesian changepoint analysis iterations where the parameter of the prior (p0) was zero. The best performing Bayesian algorithms were p0 = 0 and a posterior probability for onset determination at 60–90%. While existing algorithms performed reasonably, the Bayesian changepoint analysis methodology provides greater reliability and accuracy when determining the singular onset of EMG activity in a time series. Further research is needed to determine if this class of algorithms perform equally well when the time series has multiple bursts of muscle activity. PMID:28489897
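The sketch below is a deliberately simplified changepoint posterior (plug-in variance estimates, uniform prior over onset location); it is not the specific Bayesian changepoint implementation evaluated in the study and only illustrates the idea.

```python
# Simplified EMG-onset sketch: posterior over a single variance changepoint in a
# signal, using plug-in variance estimates and a uniform prior over onset location.
import numpy as np

def onset_posterior(x, min_seg=20):
    x = np.asarray(x, dtype=float)
    n = x.size
    loglik = np.full(n, -np.inf)
    for tau in range(min_seg, n - min_seg):
        v0 = x[:tau].var() + 1e-12
        v1 = x[tau:].var() + 1e-12
        loglik[tau] = -0.5 * (tau * np.log(v0) + (n - tau) * np.log(v1))
    w = np.exp(loglik - loglik.max())
    return w / w.sum()          # posterior probability of onset at each sample

rng = np.random.default_rng(3)
signal = np.concatenate([rng.normal(0, 1, 500), rng.normal(0, 5, 500)])  # synthetic burst
post = onset_posterior(signal)
print("most probable onset sample:", int(post.argmax()))
```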
Bounding the Set of Classical Correlations of a Many-Body System
NASA Astrophysics Data System (ADS)
Fadel, Matteo; Tura, Jordi
2017-12-01
We present a method to certify the presence of Bell correlations in experimentally observed statistics, and to obtain new Bell inequalities. Our approach is based on relaxing the conditions defining the set of correlations obeying a local hidden variable model, yielding a convergent hierarchy of semidefinite programs (SDP's). Because the size of these SDP's is independent of the number of parties involved, this technique allows us to characterize correlations in many-body systems. As an example, we illustrate our method with the experimental data presented in Science 352, 441 (2016), 10.1126/science.aad8665.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khromov, K. Yu.; Vaks, V. G., E-mail: vaks@mbslab.kiae.ru; Zhuravlev, I. A.
2013-02-15
The previously developed ab initio model and the kinetic Monte Carlo method (KMCM) are used to simulate precipitation in a number of iron-copper alloys with different copper concentrations x and temperatures T. The same simulations are also made using an improved version of the previously suggested stochastic statistical method (SSM). The results obtained enable us to make a number of general conclusions about the dependences of the decomposition kinetics in Fe-Cu alloys on x and T. We also show that the SSM usually describes the precipitation kinetics in good agreement with the KMCM, and using the SSM in conjunction with the KMCM allows extending the KMC simulations to the longer evolution times. The results of simulations seem to agree with available experimental data for Fe-Cu alloys within statistical errors of simulations and the scatter of experimental results. Comparison of simulation results with experiments for some multicomponent Fe-Cu-based alloys allows making certain conclusions about the influence of alloying elements in these alloys on the precipitation kinetics at different stages of evolution.
Plackett-Burman experimental design to facilitate syntactic foam development
Smith, Zachary D.; Keller, Jennie R.; Bello, Mollie; ...
2015-09-14
The use of an eight-experiment Plackett–Burman method can assess six experimental variables and eight responses in a polysiloxane-glass microsphere syntactic foam. The approach aims to decrease the time required to develop a tunable polymer composite by identifying a reduced set of variables and responses suitable for future predictive modeling. The statistical design assesses the main effects of mixing process parameters, polymer matrix composition, microsphere density and volume loading, and the blending of two grades of microspheres, using a dummy factor in statistical calculations. Responses cover rheological, physical, thermal, and mechanical properties. The cure accelerator content of the polymer matrix and the volume loading of the microspheres have the largest effects on foam properties. These factors are the most suitable for controlling the gel point of the curing foam, and the density of the cured foam. The mixing parameters introduce widespread variability and therefore should be fixed at effective levels during follow-up testing. Some responses may require greater contrast in microsphere-related factors. As a result, compared to other possible statistical approaches, the run economy of the Plackett–Burman method makes it a valuable tool for rapidly characterizing new foams.
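For concreteness, an 8-run, 7-column two-level Plackett-Burman design (six real factors plus the dummy factor) can be generated from a Sylvester Hadamard matrix; this construction is one standard route and is not taken from the paper.

```python
# Construct an 8-run, 7-column two-level Plackett-Burman design from a Sylvester
# Hadamard matrix. Six columns can carry real factors and one the dummy factor.
import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(np.kron(H2, H2), H2)       # 8x8 Hadamard matrix
design = H8[:, 1:]                      # drop the all-ones column -> 8 runs x 7 factors

for run in design:
    print(" ".join(f"{v:+d}" for v in run))
```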
A Sub-filter Scale Noise Equation for Hybrid LES Simulations
NASA Technical Reports Server (NTRS)
Goldstein, Marvin E.
2006-01-01
Hybrid LES/subscale modeling approaches have an important advantage over the current noise prediction methods in that they only involve modeling of the relatively universal subscale motion and not the configuration dependent larger scale turbulence. Previous hybrid approaches use approximate statistical techniques or extrapolation methods to obtain the requisite information about the sub-filter scale motion. An alternative approach would be to adopt the modeling techniques used in the current noise prediction methods and determine the unknown stresses from experimental data. The present paper derives an equation for predicting the subscale sound from information that can be obtained with currently available experimental procedures. The resulting prediction method would then be intermediate between the current noise prediction codes and previously proposed hybrid techniques.
Microscopic saw mark analysis: an empirical approach.
Love, Jennifer C; Derrick, Sharon M; Wiersema, Jason M; Peters, Charles
2015-01-01
Microscopic saw mark analysis is a well published and generally accepted qualitative analytical method. However, little research has focused on identifying and mitigating potential sources of error associated with the method. The presented study proposes the use of classification trees and random forest classifiers as an optimal, statistically sound approach to mitigate the potential for error of variability and outcome error in microscopic saw mark analysis. The statistical model was applied to 58 experimental saw marks created with four types of saws. The saw marks were made in fresh human femurs obtained through anatomical gift and were analyzed using a Keyence digital microscope. The statistical approach weighed the variables based on discriminatory value and produced decision trees with an associated outcome error rate of 8.62-17.82%. © 2014 American Academy of Forensic Sciences.
Application of multivariate statistical techniques in microbial ecology
Paliy, O.; Shankar, V.
2016-01-01
Recent advances in high-throughput methods of molecular analyses have led to an explosion of studies generating large-scale ecological datasets. An especially noticeable effect has been attained in the field of microbial ecology, where new experimental approaches provided in-depth assessments of the composition, functions, and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces large amounts of data, powerful statistical techniques of multivariate analysis are well suited to analyze and interpret these datasets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular dataset. In this review we describe and compare the most widely used multivariate statistical techniques including exploratory, interpretive, and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and dataset structure. PMID:26786791
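As a minimal example of one exploratory technique covered by such reviews, the sketch below runs a PCA ordination on a toy sample-by-taxon abundance table; the data and the Hellinger preprocessing are placeholder assumptions.

```python
# Minimal exploratory ordination sketch: PCA of a toy sample-by-taxon abundance table.
# The Hellinger transform shown is one common preprocessing choice for count data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
counts = rng.poisson(lam=5.0, size=(20, 50))        # 20 samples x 50 taxa (placeholder)
rel = counts / counts.sum(axis=1, keepdims=True)
hellinger = np.sqrt(rel)                             # Hellinger transformation

pca = PCA(n_components=2)
scores = pca.fit_transform(hellinger)                # sample coordinates for an ordination plot
print("variance explained:", pca.explained_variance_ratio_)
```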
NASA Astrophysics Data System (ADS)
Ha, Vu Thi Thanh; Hung, Vu Van; Hanh, Pham Thi Minh; Tuyen, Nguyen Viet; Hai, Tran Thi; Hieu, Ho Khac
2018-03-01
The thermodynamic and mechanical properties of III-V zinc-blende AlP, InP semiconductors and their alloys have been studied in detail from the statistical moment method, taking into account the anharmonicity effects of the lattice vibrations. The nearest neighbor distance, thermal expansion coefficient, bulk moduli, and specific heats at constant volume and constant pressure of the zinc-blende AlP, InP and AlyIn1-yP alloys are calculated as functions of the temperature. The statistical moment method calculations are performed by using the many-body Stillinger-Weber potential. The concentration dependences of the thermodynamic quantities of zinc-blende AlyIn1-yP crystals have also been discussed and compared with those of the experimental results. Our results are in reasonable agreement with earlier density functional theory calculations and can provide useful qualitative information for future experiments. The moment method can then be developed extensively for studying the atomistic structure and thermodynamic properties of nanoscale materials as well.
Calculation of recoil implantation profiles using known range statistics
NASA Technical Reports Server (NTRS)
Fung, C. D.; Avila, R. E.
1985-01-01
A method has been developed to calculate the depth distribution of recoil atoms that result from ion implantation onto a substrate covered with a thin surface layer. The calculation includes first order recoils considering projected range straggles, and lateral straggles of recoils but neglecting lateral straggles of projectiles. Projectile range distributions at intermediate energies in the surface layer are deduced from look-up tables of known range statistics. A great saving of computing time and human effort is thus attained in comparison with existing procedures. The method is used to calculate recoil profiles of oxygen from implantation of arsenic through SiO2 and of nitrogen from implantation of phosphorus through Si3N4 films on silicon. The calculated recoil profiles are in good agreement with results obtained by other investigators using the Boltzmann transport equation and they also compare very well with available experimental results in the literature. The deviation between calculated and experimental results is discussed in relation to lateral straggles. From this discussion, a range of surface layer thickness for which the method applies is recommended.
Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh
2011-01-01
Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659
A Comparison of Single Sample and Bootstrap Methods to Assess Mediation in Cluster Randomized Trials
ERIC Educational Resources Information Center
Pituch, Keenan A.; Stapleton, Laura M.; Kang, Joo Youn
2006-01-01
A Monte Carlo study examined the statistical performance of single sample and bootstrap methods that can be used to test and form confidence interval estimates of indirect effects in two cluster randomized experimental designs. The designs were similar in that they featured random assignment of clusters to one of two treatment conditions and…
Code of Federal Regulations, 2012 CFR
2012-07-01
... rigorous statistical experimental design and interpretation (Reference 16.4). 14.0Pollution Prevention 14... fluids. 1.4This method has been designed to show positive contamination for 5% of representative crude....1Sample collection bottles/jars—New, pre-cleaned bottles/jars, lot-certified to be free of artifacts...
Code of Federal Regulations, 2013 CFR
2013-07-01
... rigorous statistical experimental design and interpretation (Reference 16.4). 14.0Pollution Prevention 14... fluids. 1.4This method has been designed to show positive contamination for 5% of representative crude....1Sample collection bottles/jars—New, pre-cleaned bottles/jars, lot-certified to be free of artifacts...
Code of Federal Regulations, 2014 CFR
2014-07-01
... rigorous statistical experimental design and interpretation (Reference 16.4). 14.0Pollution Prevention 14... fluids. 1.4This method has been designed to show positive contamination for 5% of representative crude....1Sample collection bottles/jars—New, pre-cleaned bottles/jars, lot-certified to be free of artifacts...
Code of Federal Regulations, 2011 CFR
2011-07-01
... rigorous statistical experimental design and interpretation (Reference 16.4). 14.0Pollution Prevention 14... oil contamination in drilling fluids. 1.4This method has been designed to show positive contamination....1Sample collection bottles/jars—New, pre-cleaned bottles/jars, lot-certified to be free of artifacts...
The US EPA ToxCast program aims to develop methods for mechanistically-based chemical prioritization using a suite of high throughput, in vitro assays that probe relevant biological pathways, and coupling them with statistical and machine learning methods that produce predictive ...
MODELING A MIXTURE: PBPK/PD APPROACHES FOR PREDICTING CHEMICAL INTERACTIONS.
Since environmental chemical exposures generally involve multiple chemicals, there are both regulatory and scientific drivers to develop methods to predict outcomes of these exposures. Even using efficient statistical and experimental designs, it is not possible to test in vivo a...
Scientific computations section monthly report, November 1993
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buckner, M.R.
1993-12-30
This progress report from the Savannah River Technology Center contains abstracts from papers from the computational modeling, applied statistics, applied physics, experimental thermal hydraulics, and packaging and transportation groups. Specific topics covered include: engineering modeling and process simulation, criticality methods and analysis, plutonium disposition.
Hobbs, Brian P.; Carlin, Bradley P.; Mandrekar, Sumithra J.; Sargent, Daniel J.
2011-01-01
Summary Bayesian clinical trial designs offer the possibility of a substantially reduced sample size, increased statistical power, and reductions in cost and ethical hazard. However when prior and current information conflict, Bayesian methods can lead to higher than expected Type I error, as well as the possibility of a costlier and lengthier trial. This motivates an investigation of the feasibility of hierarchical Bayesian methods for incorporating historical data that are adaptively robust to prior information that reveals itself to be inconsistent with the accumulating experimental data. In this paper, we present several models that allow for the commensurability of the information in the historical and current data to determine how much historical information is used. A primary tool is elaborating the traditional power prior approach based upon a measure of commensurability for Gaussian data. We compare the frequentist performance of several methods using simulations, and close with an example of a colon cancer trial that illustrates a linear models extension of our adaptive borrowing approach. Our proposed methods produce more precise estimates of the model parameters, in particular conferring statistical significance to the observed reduction in tumor size for the experimental regimen as compared to the control regimen. PMID:21361892
Improving UWB-Based Localization in IoT Scenarios with Statistical Models of Distance Error.
Monica, Stefania; Ferrari, Gianluigi
2018-05-17
Interest in the Internet of Things (IoT) is rapidly increasing, as the number of connected devices is exponentially growing. One of the application scenarios envisaged for IoT technologies involves indoor localization and context awareness. In this paper, we focus on a localization approach that relies on a particular type of communication technology, namely Ultra Wide Band (UWB). UWB technology is an attractive choice for indoor localization, owing to its high accuracy. Since localization algorithms typically rely on estimated inter-node distances, the goal of this paper is to evaluate the improvement brought by a simple (linear) statistical model of the distance error. On the basis of an extensive experimental measurement campaign, we propose a general analytical framework, based on a Least Square (LS) method, to derive a novel statistical model for the range estimation error between a pair of UWB nodes. The proposed statistical model is then applied to improve the performance of a few illustrative localization algorithms in various realistic scenarios. The obtained experimental results show that the use of the proposed statistical model improves the accuracy of the considered localization algorithms with a reduction of the localization error up to 66%.
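The core idea can be sketched as a least-squares fit of a linear range-error model followed by bias correction; the calibration distances below are placeholders, not the paper's measurement campaign.

```python
# Sketch: least-squares fit of a linear model of UWB ranging error vs. distance,
# then bias correction of new range estimates. Calibration data are placeholders.
import numpy as np

true_d = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])        # known calibration distances (m)
meas_d = np.array([1.12, 2.18, 3.21, 4.31, 5.36, 6.44])  # measured UWB ranges (m)

err = meas_d - true_d
A = np.column_stack([meas_d, np.ones_like(meas_d)])
(a, b), *_ = np.linalg.lstsq(A, err, rcond=None)          # err ~ a*d_meas + b

def correct(d_meas):
    return d_meas - (a * d_meas + b)                      # bias-corrected range

print("corrected 3.21 m reading:", correct(3.21))
# Corrected ranges would then feed the localization algorithm (e.g., trilateration).
```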
Evaporation residue cross-section measurements for 48Ti-induced reactions
NASA Astrophysics Data System (ADS)
Sharma, Priya; Behera, B. R.; Mahajan, Ruchi; Thakur, Meenu; Kaur, Gurpreet; Kapoor, Kushal; Rani, Kavita; Madhavan, N.; Nath, S.; Gehlot, J.; Dubey, R.; Mazumdar, I.; Patel, S. M.; Dhibar, M.; Hosamani, M. M.; Khushboo; Kumar, Neeraj; Shamlath, A.; Mohanto, G.; Pal, Santanu
2017-09-01
Background: A significant research effort is currently aimed at understanding the synthesis of heavy elements. For this purpose, heavy ion induced fusion reactions are used and various experimental observations have indicated the influence of shell and deformation effects in the compound nucleus (CN) formation. There is a need to understand these two effects. Purpose: To investigate the effect of proton shell closure and deformation through the comparison of evaporation residue (ER) cross sections for the systems involving heavy compound nuclei around the Z_CN = 82 region. Methods: A systematic study of ER cross-section measurements was carried out for the 48Ti + 142,150Nd, 144Sm systems in the energy range of 140-205 MeV. The measurement has been performed using the gas-filled mode of the hybrid recoil mass analyzer present at the Inter University Accelerator Centre (IUAC), New Delhi. Theoretical calculations based on a statistical model were carried out incorporating an adjustable barrier scaling factor to fit the experimental ER cross section. Coupled-channel calculations were also performed using the ccfull code to obtain the spin distribution of the CN, which was used as an input in the calculations. Results: Experimental ER cross sections for 48Ti + 142,150Nd were found to be considerably smaller than the statistical model predictions whereas experimental and statistical model predictions for 48Ti + 144Sm were of comparable magnitudes. Conclusion: Though comparison of experimental ER cross sections with statistical model predictions indicates considerable non-compound-nuclear processes for 48Ti + 142,150Nd reactions, no such evidence is found for the 48Ti + 144Sm system. Further investigations are required to understand the difference in fusion probabilities of the 48Ti + 142Nd and 48Ti + 144Sm systems.
Factors that Affect Operational Reliability of Turbojet Engines
NASA Technical Reports Server (NTRS)
1956-01-01
The problem of improving operational reliability of turbojet engines is studied in a series of papers. Failure statistics for this engine are presented, the theory and experimental evidence on how engine failures occur are described, and the methods available for avoiding failure in operation are discussed. The individual papers of the series are Objectives, Failure Statistics, Foreign-Object Damage, Compressor Blades, Combustor Assembly, Nozzle Diaphragms, Turbine Buckets, Turbine Disks, Rolling Contact Bearings, Engine Fuel Controls, and Summary Discussion.
Study on Hybrid Image Search Technology Based on Texts and Contents
NASA Astrophysics Data System (ADS)
Wang, H. T.; Ma, F. L.; Yan, C.; Pan, H.
2018-05-01
Text-based and content-based image search were studied first. For text-based search, an image feature extraction method integrating statistical and topic features of words was proposed, addressing the limitation of extracting keywords from statistical word features alone. For content-based search, a search-by-image method based on multi-feature fusion was proposed to overcome the imprecision of content-based search using a single feature. Because the text-based and content-based methods differ and are difficult to fuse directly, a layered search method was then proposed that relies primarily on text-based image search and uses content-based image search as a supplement. The feasibility and effectiveness of the hybrid search algorithm were verified experimentally.
Rydberg Atoms in Strong Fields: a Testing Ground for Quantum Chaos.
NASA Astrophysics Data System (ADS)
Courtney, Michael
1995-01-01
Rydberg atoms in strong static electric and magnetic fields provide experimentally accessible systems for studying the connections between classical chaos and quantum mechanics in the semiclassical limit. This experimental accessibility has motivated the development of reliable quantum mechanical solutions. This thesis uses both experimental and computed quantum spectra to test the central approaches to quantum chaos. These central approaches consist mainly of developing methods to compute the spectra of quantum systems in non-perturbative regimes, correlating statistical descriptions of eigenvalues with the classical behavior of the same Hamiltonian, and the development of semiclassical methods such as periodic-orbit theory. Particular emphasis is given to identifying the spectral signature of recurrences: quantum wave packets which follow classical orbits. The new findings include: the breakdown of the connection between energy-level statistics and classical chaos in odd-parity diamagnetic lithium, the discovery of the signature of very long period orbits in atomic spectra, quantitative evidence for the scattering of recurrences by the alkali-metal core, quantitative description of the behavior of recurrences near bifurcations, and a semiclassical interpretation of the evolution of continuum Stark spectra. (Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.).
Uysal, Tancan; Amasyali, Mihri; Enhos, Sukru; Sonmez, Mehmet Fatih; Sagdic, Deniz
2009-01-01
Objectives The aim of this experimental study was to evaluate the effects of ED-71, a new active vitamin D analog, on bone regeneration in response to expansion of the mid-palatal suture, in rats, histomorphometrically. Methods Sixteen male 50–60-day-old Wistar rats were separated into two equal groups (control and experimental). Both groups were subjected to expansion, and 30 grams of force was applied to the maxillary incisors with a helical-spring. Experimental group was treated with single-dose ED-71 (0.8 μg/kg body weight) in the mid-palatal suture locally and eight control animals received vehicle solution. Bone regeneration in the mid-palatal suture was evaluated by bone histomorphometric method and mineralized area (Md.Ar), fibrosis area (Fb.Ar), mineralized area/fibrosis area (Md.Ar/Fb.Ar), bone area (B.Ar) and osteoblast number (N.Ob) parameters were evaluated. The Mann-Whitney U test was used for statistical evaluation at the P<.05 level. Results Statistical analysis showed significant differences between groups for all investigated histomorphometric parameters. Md.Ar (P<.001), Md.Ar/Fb.Ar (P<.001), B.Ar (P<.01) and N.Ob (P<.001) parameters were significantly increased and Fb.Ar (P<.001) measurement was significantly decreased in experimental group. ED-71 group with a mean of 24.55±6.47 showed statistically higher N.Ob than the control group (mean N.Ob: 12.82±5.81). Conclusions ED-71 has positive effects on early phase of bone regeneration in the mid-palatal suture in response to expansion and may be beneficial in routine maxillary expansion procedures. PMID:19756189
NASA Astrophysics Data System (ADS)
Klemmer, Cynthia Davis
Science literacy refers to a basic knowledge and understanding of science concepts and processes needed to consider issues and make choices on a daily basis in an increasingly technology-driven society. A critical precursor to producing science literate adults is actively involving children in science while they are young. National and state (TX) science standards advocate the use of constructivist methods including hands-on, experiential activities that foster the development of science process skills through real-world investigations. School gardens show promise as a tool for implementing these guidelines by providing living laboratories for active science. Gardens offer opportunities for a variety of hands-on investigations, enabling students to apply and practice science skills. School gardens are increasing in popularity; however, little research data exists attesting to their actual effectiveness in enhancing students' science achievement. The study used a quasi-experimental posttest-only research design to assess the effects of a school gardening program on the science achievement of 3rd, 4th, and 5th grade elementary students. The sample consisted of 647 students from seven elementary schools in Temple, Texas. The experimental group participated in school gardening activities as part of their science curriculum. The control group did not garden and were taught using traditional classroom-based methods. Results showed higher scores for students in the experimental group which were statistically significant. Post-hoc tests using Scheffe's method revealed that these differences were attributed to the 5th grade. No statistical significance was found between girls and boys in the experimental group, indicating that gardening was equally effective for both genders. Within each gender, statistical significance was found between males in the experimental and control groups at all three grade levels, and for females in the 5 th grade. This research indicated that gardening was a successful teaching method for raising science achievement scores for boys in 3rd, 4 th, and 5th grades, and for girls in the 5th grade. The finding for girls may be important because it mediated a trend of decreasing scores in the control group at an age just prior to the onset of adolescence, when achievement and interest in science typically decrease.
Review of research designs and statistical methods employed in dental postgraduate dissertations.
Shirahatti, Ravi V; Hegde-Shetiya, Sahana
2015-01-01
There is a need to evaluate the quality of postgraduate dissertations of dentistry submitted to the university in the light of international standards of reporting. We conducted the review with the objective of documenting the use of sampling methods, measurement standardization, blinding, methods to eliminate bias, appropriate use of statistical tests, and appropriate use of data presentation in postgraduate dental research, and to suggest and recommend modifications. The public access database of the dissertations from Rajiv Gandhi University of Health Sciences was reviewed. Three hundred and thirty-three eligible dissertations underwent preliminary evaluation followed by detailed evaluation of 10% of randomly selected dissertations. The dissertations were assessed based on international reporting guidelines such as Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Consolidated Standards of Reporting Trials (CONSORT), and other scholarly resources. The data were compiled using MS Excel and SPSS 10.0. Numbers and percentages were used for describing the data. The "in vitro" studies were the most common type of research (39%), followed by observational (32%) and experimental studies (29%). The disciplines conservative dentistry (92%) and prosthodontics (75%) reported high proportions of in vitro research. The disciplines oral surgery (80%) and periodontics (67%) had conducted experimental studies as a major share of their research. Lacunae in the studies included observational studies not following random sampling (70%), experimental studies not following random allocation (75%), not mentioning blinding, confounding variables and calibrations in measurements, misrepresenting the data by inappropriate data presentation, errors in reporting probability values and not reporting confidence intervals. Few studies showed grossly inappropriate choice of statistical tests and many studies needed additional tests. Overall observations indicated the need to comply with standard guidelines of reporting research.
VoroMQA: Assessment of protein structure quality using interatomic contact areas.
Olechnovič, Kliment; Venclovas, Česlovas
2017-06-01
In the absence of experimentally determined protein structure many biological questions can be addressed using computational structural models. However, the utility of protein structural models depends on their quality. Therefore, the estimation of the quality of predicted structures is an important problem. One of the approaches to this problem is the use of knowledge-based statistical potentials. Such methods typically rely on the statistics of distances and angles of residue-residue or atom-atom interactions collected from experimentally determined structures. Here, we present VoroMQA (Voronoi tessellation-based Model Quality Assessment), a new method for the estimation of protein structure quality. Our method combines the idea of statistical potentials with the use of interatomic contact areas instead of distances. Contact areas, derived using Voronoi tessellation of protein structure, are used to describe and seamlessly integrate both explicit interactions between protein atoms and implicit interactions of protein atoms with solvent. VoroMQA produces scores at atomic, residue, and global levels, all in the fixed range from 0 to 1. The method was tested on the CASP data and compared to several other single-model quality assessment methods. VoroMQA showed strong performance in the recognition of the native structure and in the structural model selection tests, thus demonstrating the efficacy of interatomic contact areas in estimating protein structure quality. The software implementation of VoroMQA is freely available as a standalone application and as a web server at http://bioinformatics.lt/software/voromqa. Proteins 2017; 85:1131-1145. © 2017 Wiley Periodicals, Inc.
Bubbles and denaturation in DNA
NASA Astrophysics Data System (ADS)
van Erp, T. S.; Cuesta-López, S.; Peyrard, M.
2006-08-01
The local opening of DNA is an intriguing phenomenon from a statistical-physics point of view, but is also essential for its biological function. For instance, the transcription and replication of our genetic code cannot take place without the unwinding of the DNA double helix. Although these biological processes are driven by proteins, there might well be a relation between these biological openings and the spontaneous bubble formation due to thermal fluctuations. Mesoscopic models, like the Peyrard-Bishop-Dauxois (PBD) model, have fairly accurately reproduced some experimental denaturation curves and the sharp phase transition in the thermodynamic limit. It is, hence, tempting to see whether these models could be used to predict the biological activity of DNA. In a previous study, we introduced a method that allows to obtain very accurate results on this subject, which showed that some previous claims in this direction, based on molecular-dynamics studies, were premature. This could either imply that the present PBD model should be improved or that biological activity can only be predicted in a more complex framework that involves interactions with proteins and super helical stresses. In this article, we give a detailed description of the statistical method introduced before. Moreover, for several DNA sequences, we give a thorough analysis of the bubble-statistics as a function of position and bubble size and the so-called l-denaturation curves that can be measured experimentally. These show that some important experimental observations are missing in the present model. We discuss how the present model could be improved.
Behnammoghadam, Mohammad; Alamdari, Ali Karam; Behnammoghadam, Aziz; Darban, Fatemeh
2015-01-01
Background: Coronary heart disease is the most important cause of death and disability in all communities. Depressive symptoms are frequent among post-myocardial infarction (MI) patients and may have negative effects on cardiac prognosis. This study was conducted to assess the efficacy of EMDR on depression in patients with MI. Methods: This study is a clinical trial. Sixty patients with MI were selected by simple sampling and were randomly separated into experimental and control groups. To collect data, a demographic questionnaire and the Beck Depression Questionnaire were used. In the experimental group, EMDR therapy was performed in three sessions on alternate days, each lasting 45–90 minutes, during the four months after their MI. The depression level of patients was measured before, and a week after, EMDR therapy. Data were analyzed using the paired t-test, t-test, and Chi-square. Results: The mean depression level in the experimental group was 27.26 ± 6.41 before the intervention and 11.76 ± 3.71 after the intervention, a statistically significant difference (P<0.001). The mean depression level in the control group was 24.53 ± 5.81 before the intervention and 31.66 ± 6.09 after the intervention, also a statistically significant difference (P<0.001). The comparison of mean post-treatment depression levels between the two groups showed a statistically significant difference (P<0.001). Conclusion: EMDR is an effective, useful, efficient, and non-invasive method for treating and reducing depression in patients with MI. PMID:26153191
Biophotons, coherence and photocount statistics: A critical review
NASA Astrophysics Data System (ADS)
Cifra, Michal; Brouder, Christian; Nerudová, Michaela; Kučera, Ondřej
2015-08-01
Biological samples continuously emit ultra-weak photon emission (UPE, or "biophotons"), which stems from electronic excited states generated chemically during oxidative metabolism and stress. Thus, UPE can potentially serve as a method for non-invasive diagnostics of oxidative processes or, if discovered, also of other processes capable of electron excitation. While the fundamental generating mechanisms of UPE are fairly well elucidated, together with the approximate ranges of intensities and spectra, the statistical properties of UPE remain a highly challenging topic. Here we review claims about nontrivial statistical properties of UPE, such as coherence and squeezed states of light. After an introduction to the necessary theory, we categorize the experimental works of all authors into those with solid, conventional interpretation and those with unconventional and even speculative interpretation. The conclusion of our review is twofold: while the phenomenon of UPE from biological systems can be considered experimentally well established, no reliable evidence for the coherence or nonclassicality of UPE has been obtained to date. Furthermore, we propose promising avenues in the research of statistical properties of biological UPE.
Comment on: 'A Poisson resampling method for simulating reduced counts in nuclear medicine images'.
de Nijs, Robin
2015-07-21
In order to be able to calculate half-count images from already acquired data, White and Lawson published their method based on Poisson resampling. They verified their method experimentally by measurements with a Co-57 flood source. In this comment their results are reproduced and confirmed by a direct numerical simulation in Matlab. Not only Poisson resampling, but also two direct redrawing methods were investigated. The redrawing methods were based on a Poisson and a Gaussian distribution. Mean, standard deviation, skewness and excess kurtosis half-count/full-count ratios were determined for all methods and compared to the theoretical values for a Poisson distribution. The statistical parameters showed the same behavior as in the original note and showed the superiority of the Poisson resampling method. Rounding off before saving the half-count image had a severe impact on counting statistics for counts below 100. Only Poisson resampling was unaffected by this, while Gaussian redrawing was less affected by it than Poisson redrawing. Poisson resampling is the method of choice when simulating half-count (or less) images from full-count images. It correctly simulates the statistical properties, also in the case of rounding off of the images.
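The statistical point of the comparison can be illustrated with a small numerical sketch. Assuming the full-count pixels are Poisson distributed, binomial thinning of the recorded counts (one reading of "Poisson resampling") yields an exactly Poisson half-count image, whereas redrawing from a Poisson or Gaussian distribution with the observed count as the mean inflates the variance. The snippet below illustrates that contrast and is not the authors' implementation.

```python
# Hypothetical illustration: three ways to simulate a half-count image from a
# full-count image, assuming the full-count pixels are Poisson distributed.
import numpy as np

rng = np.random.default_rng(0)
full = rng.poisson(lam=50.0, size=(256, 256))     # synthetic full-count image

# Binomial thinning: each recorded count is kept with probability 1/2.
# If full ~ Poisson(lambda), the thinned image is exactly Poisson(lambda/2).
half_thinned = rng.binomial(full, 0.5)

# Direct redrawing from a Poisson with mean = observed count / 2.
half_poisson = rng.poisson(full / 2.0)

# Direct redrawing from a Gaussian with matching mean and variance, rounded to
# integer counts (rounding matters most for low counts).
half_gauss = np.rint(rng.normal(full / 2.0, np.sqrt(full / 2.0))).clip(min=0)

for name, img in [("thinning", half_thinned),
                  ("Poisson redraw", half_poisson),
                  ("Gaussian redraw", half_gauss)]:
    print(f"{name}: mean ratio {img.mean() / full.mean():.3f}, "
          f"variance ratio {img.var() / full.var():.3f}")
```

Under these assumptions the thinned image reproduces both the halved mean and the halved variance, while the redrawing schemes inflate the variance ratio, consistent with the superiority of the resampling approach reported above.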
Text mining by Tsallis entropy
NASA Astrophysics Data System (ADS)
Jamaati, Maryam; Mehri, Ali
2018-01-01
Long-range correlations between the elements of natural languages enable them to convey very complex information. The complex structure of human language, as a manifestation of natural languages, motivates us to apply nonextensive statistical mechanics in text mining. Tsallis entropy appropriately ranks terms' relevance to the document subject by taking advantage of their spatial correlation length. We apply this statistical concept as a new, powerful word-ranking metric in order to extract keywords from a single document. We carry out an experimental evaluation, which shows the capability of the presented method in keyword extraction. We find that Tsallis entropy has reliable word-ranking performance, at the same level as the best previous ranking methods.
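As a rough illustration of the idea, a word's relevance can be scored by the Tsallis entropy of its occurrence distribution across document segments, so that spatially clustered terms rank ahead of uniformly spread ones. The sketch below assumes this simplified scoring and a hypothetical input file; the entropic index q, the segmentation, and the exact metric used by the authors may differ.

```python
# Minimal sketch (not the authors' exact metric): score each word by the Tsallis
# entropy of its occurrence distribution across equal-sized document segments.
# Words concentrated in a few segments (low entropy) are treated as more relevant.
import numpy as np

def tsallis_entropy(p, q=1.5):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def rank_words(text, n_segments=20, q=1.5, min_count=5):
    words = text.lower().split()
    seg_len = max(1, len(words) // n_segments)
    counts = {}                                   # word -> occurrence counts per segment
    for i, w in enumerate(words):
        seg = min(i // seg_len, n_segments - 1)
        counts.setdefault(w, np.zeros(n_segments))[seg] += 1
    scores = {w: tsallis_entropy(c / c.sum(), q)
              for w, c in counts.items() if c.sum() >= min_count}
    return sorted(scores, key=scores.get)         # most localized (candidate keywords) first

# keywords = rank_words(open("document.txt").read())[:10]   # hypothetical usage
```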
Guidelines for the Investigation of Mediating Variables in Business Research.
MacKinnon, David P; Coxe, Stefany; Baraldi, Amanda N
2012-03-01
Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized.
NASA Astrophysics Data System (ADS)
Boichuk, T. M.; Bachinskiy, V. T.; Vanchuliak, O. Ya.; Minzer, O. P.; Garazdiuk, M.; Motrich, A. V.
2014-08-01
This research presents the results of an investigation of laser polarization fluorescence of biological layers (histological sections of the myocardium). The polarized structure of autofluorescence images of biological tissue layers was detected and investigated. A model describing the formation of polarization-inhomogeneous autofluorescence images of optically anisotropic biological layers is proposed. On this basis, the method of laser autofluorescence polarimetry was justified analytically and tested experimentally. The effectiveness of this method in the post-mortem diagnosis of myocardial infarction was analyzed. Objective criteria (statistical moments) for differentiating autofluorescence images of histological sections of the myocardium were defined. The operational characteristics (sensitivity, specificity, accuracy) of this technique were determined.
Effect of plasma spraying modes on material properties of internal combustion engine cylinder liners
NASA Astrophysics Data System (ADS)
Timokhova, O. M.; Burmistrova, O. N.; Sirina, E. A.; Timokhov, R. S.
2018-03-01
The paper analyses different methods of remanufacturing worn-out machine parts in order to get the best performance characteristics. One of the most promising of them is the plasma spraying method. The mathematical models presented in the paper are intended to anticipate the results of plasma spraying and its effect on the properties of the material of internal combustion engine cylinder liners under repair. The experimental data and research results have been computer processed with the Statistica 10.0 software package. The pair correlation coefficient values (R) and the F-statistic criterion are given to confirm the statistical properties and adequacy of the obtained regression equations.
Mynaugh, P A
1991-09-01
This study examined the effects of two methods of teaching perineal massage on the rates of practice of perineal massage, of episiotomy, and of lacerations in primiparas at birth. Couples in 20 randomly selected sections of four prenatal class series received routine printed and verbal instruction plus a 12-minute video demonstration of perineal massage, or only the routine printed and verbal instruction. Women reported their practice rates in daily diary records, which were mailed to the researcher weekly. Hospital records provided delivery data. Of the 83 women, 23 (28%) practiced perineal massage: 16 (35.6%) in the experimental group and 7 (18.4%) in the control group. Even though the rate of practice almost doubled among experimental group women, the effect of the videotape instruction method was not statistically significant. Episiotomy and laceration rates were not affected by teaching method. More severe lacerations occurred among the experimental group; however, the control group had almost four times as many severe (21%) as minor (5.3%) lacerations, while the experimental group had twice as many severe (28.9%) as minor (13.3%) lacerations. These results were also nonsignificant.
The Application of Probabilistic Methods to the Mistuning Problem
NASA Technical Reports Server (NTRS)
Griffin, J. H.; Rossi, M. R.; Feiner, D. M.
2004-01-01
FMM is a reduced-order model for efficiently calculating the forced response of a mistuned bladed disk. FMM ID is a companion program which determines the mistuning in a particular rotor. Together, these methods provide a way to acquire data on the mistuning in a population of bladed disks and then simulate the forced response of the fleet. This process is tested experimentally, and the simulated results are compared with laboratory measurements of a fleet of test rotors. The method is shown to work quite well. It is found that the accuracy of the results depends on two factors: the quality of the statistical model used to characterize mistuning, and how sensitive the system is to errors in the statistical modeling.
NASA Astrophysics Data System (ADS)
Wang, Dong
2016-03-01
Gears are the most commonly used components in mechanical transmission systems. Their failures may cause transmission system breakdown and result in economic loss. Identification of different gear crack levels is important to prevent any unexpected gear failure, because gear cracks lead to gear tooth breakage. Signal processing based methods mainly require expertise to interpret gear fault signatures, which ordinary users usually cannot easily achieve. In order to automatically identify different gear crack levels, intelligent gear crack identification methods should be developed. Previous case studies experimentally proved that K-nearest neighbors based methods exhibit high prediction accuracies for identification of 3 different gear crack levels under different motor speeds and loads. In this short communication, to further enhance the prediction accuracies of existing K-nearest neighbors based methods and to extend identification of 3 different gear crack levels to identification of 5 different gear crack levels, redundant statistical features are constructed by using the Daubechies 44 (db44) binary wavelet packet transform at different wavelet decomposition levels, prior to the use of a K-nearest neighbors method. The dimensionality of the redundant statistical features is 620, which provides richer gear fault signatures. Since many of these statistical features are redundant and highly correlated with each other, dimensionality reduction of the redundant statistical features is conducted to obtain new significant statistical features. Finally, the K-nearest neighbors method is used to identify 5 different gear crack levels under different motor speeds and loads. A case study including 3 experiments is investigated to demonstrate that the developed method provides higher prediction accuracies than the existing K-nearest neighbors based methods for recognizing different gear crack levels under different motor speeds and loads. Based on the new significant statistical features, some other popular statistical models, including linear discriminant analysis, quadratic discriminant analysis, classification and regression trees and the naive Bayes classifier, are compared with the developed method. The results show that the developed method has the highest prediction accuracies among these statistical models. Additionally, selection of the number of new significant features and parameter selection for K-nearest neighbors are thoroughly investigated.
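A minimal sketch of the classification stage is given below, assuming the 620-dimensional wavelet-packet statistical feature matrix X and crack-level labels y have already been computed as described; PCA stands in for the paper's dimensionality-reduction step, and the random placeholder data are for illustration only.

```python
# Minimal sketch, not the authors' exact pipeline: reduce a redundant statistical
# feature matrix and classify gear crack levels with K-nearest neighbors. In the
# paper, X would hold the 620 wavelet-packet statistical features per sample and
# y the crack level; random placeholders are used here so the sketch runs.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 620))            # placeholder for real features
y = rng.integers(0, 5, size=200)           # 5 crack levels

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=20),              # dimensionality-reduction step
                    KNeighborsClassifier(n_neighbors=5))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```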
ELEVEN BROADCASTING EXPERIMENTS.
ERIC Educational Resources Information Center
PERRATON, HILARY D.
A REVIEW IS MADE OF EXPERIMENTAL COURSES COMBINING THE USE OF RADIO, TELEVISION, AND CORRESPONDENCE STUDY AND GIVEN BY THE NATIONAL EXTENSION COLLEGE IN ENGLAND. COURSES INCLUDED ENGLISH, MATHEMATICS, SOCIAL WORK, PHYSICS, STATISTICS, AND COMPUTERS. TWO METHODS OF LINKING CORRESPONDENCE COURSES TO BROADCASTS WERE USED--IN MATHEMATICS AND SOCIAL…
Electronic holographic moire in the micron range
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Sciammarella, Federico M.
2001-06-01
The basic theory behind microscopic electronic holographic moire is presented. Conditions of observation are discussed, and optimal parameters are established. An application is presented as an example in which experimental results are statistically analyzed and successfully correlated with an independent method of measurement of the same quantity.
Muir, Bob; Quick, Suzanne; Slater, Ben J; Cooper, David B; Moran, Mary C; Timperley, Christopher M; Carrick, Wendy A; Burnell, Christopher K
2005-03-18
Thermal desorption with gas chromatography-mass spectrometry (TD-GC-MS) remains the technique of choice for analysis of trace concentrations of analytes in air samples. This paper describes the development and application of a method for analysing the vesicant compounds sulfur mustard and Lewisites I-III. 3,4-Dimercaptotoluene and butanethiol were used to spike sorbent tubes and vesicant vapours sampled; Lewisite I and II reacted with the thiols while sulfur mustard and Lewisite III did not. Statistical experimental design was used to optimise thermal desorption parameters and the optimum method used to determine vesicant compounds in headspace samples taken from a decontamination trial. 3,4-Dimercaptotoluene reacted with Lewisites I and II to give a common derivative with a limit of detection (LOD) of 260 microg m(-3), while the butanethiol gave distinct derivatives with limits of detection around 30 microg m(-3).
Gaussian process regression for sensor networks under localization uncertainty
Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming
2013-01-01
In this paper, we formulate Gaussian process regression with observations under the localization uncertainty that arises in resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly consider the localization uncertainty. Finally, we have applied the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof-of-concept tests and to evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
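A minimal sketch of the Monte Carlo approximation is given below for a one-dimensional field with a squared-exponential kernel: the GP posterior mean is averaged over sensor locations sampled from the localization-noise model. The kernel, noise levels, and data are illustrative assumptions, not the authors' settings.

```python
# Sketch (not the authors' formulation): approximate the posterior predictive mean
# of a GP when sensor locations are only known up to Gaussian localization noise,
# by Monte Carlo averaging over sampled locations.
import numpy as np

def se_kernel(a, b, ell=0.5, sf=1.0):
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
x_nominal = np.linspace(0, 5, 15)                  # reported sensor positions
loc_sigma = 0.2                                    # localization uncertainty
y = np.sin(x_nominal) + 0.1 * rng.normal(size=15)  # noisy observations
x_star = np.linspace(0, 5, 100)                    # prediction grid

n_mc, noise = 200, 0.1
mu_mc = np.zeros_like(x_star)
for _ in range(n_mc):
    x_s = x_nominal + loc_sigma * rng.normal(size=x_nominal.shape)  # sampled true locations
    K = se_kernel(x_s, x_s) + noise**2 * np.eye(len(x_s))
    Ks = se_kernel(x_star, x_s)
    mu_mc += Ks @ np.linalg.solve(K, y)
mu_mc /= n_mc                                      # MC approximation of the predictive mean
```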
Data-driven sensitivity inference for Thomson scattering electron density measurement systems.
Fujii, Keisuke; Yamada, Ichihiro; Hasuo, Masahiro
2017-01-01
We developed a method to infer the calibration parameters of multichannel measurement systems, such as channel variations of sensitivity and noise amplitude, from experimental data. We regard such uncertainties of the calibration parameters as dependent noise. The statistical properties of the dependent noise and that of the latent functions were modeled and implemented in the Gaussian process kernel. Based on their statistical difference, both parameters were inferred from the data. We applied this method to the electron density measurement system by Thomson scattering for the Large Helical Device plasma, which is equipped with 141 spatial channels. Based on the 210 sets of experimental data, we evaluated the correction factor of the sensitivity and noise amplitude for each channel. The correction factor varies by ≈10%, and the random noise amplitude is ≈2%, i.e., the measurement accuracy increases by a factor of 5 after this sensitivity correction. The certainty improvement in the spatial derivative inference was demonstrated.
On vital aid: the why, what and how of validation
Kleywegt, Gerard J.
2009-01-01
Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968
Genetic algorithm for the optimization of features and neural networks in ECG signals classification
NASA Astrophysics Data System (ADS)
Li, Hongqiang; Yuan, Danyang; Ma, Xiangdong; Cui, Dianyin; Cao, Lu
2017-01-01
Feature extraction and classification of electrocardiogram (ECG) signals are necessary for the automatic diagnosis of cardiac diseases. In this study, a novel method based on genetic algorithm-back propagation neural network (GA-BPNN) for classifying ECG signals with feature extraction using wavelet packet decomposition (WPD) is proposed. WPD combined with the statistical method is utilized to extract the effective features of ECG signals. The statistical features of the wavelet packet coefficients are calculated as the feature sets. GA is employed to decrease the dimensions of the feature sets and to optimize the weights and biases of the back propagation neural network (BPNN). Thereafter, the optimized BPNN classifier is applied to classify six types of ECG signals. In addition, an experimental platform is constructed for ECG signal acquisition to supply the ECG data for verifying the effectiveness of the proposed method. The GA-BPNN method with the MIT-BIH arrhythmia database achieved a dimension reduction of nearly 50% and produced good classification results with an accuracy of 97.78%. The experimental results based on the established acquisition platform indicated that the GA-BPNN method achieved a high classification accuracy of 99.33% and could be efficiently applied in the automatic identification of cardiac arrhythmias.
A statistical method for the conservative adjustment of false discovery rate (q-value).
Lai, Yinglei
2017-03-14
q-value is a widely used statistical method for estimating false discovery rate (FDR), which is a conventional significance measure in the analysis of genome-wide expression data. q-value is a random variable and it may underestimate FDR in practice. An underestimated FDR can lead to unexpected false discoveries in the follow-up validation experiments. This issue has not been well addressed in literature, especially in the situation when the permutation procedure is necessary for p-value calculation. We proposed a statistical method for the conservative adjustment of q-value. In practice, it is usually necessary to calculate p-value by a permutation procedure. This was also considered in our adjustment method. We used simulation data as well as experimental microarray or sequencing data to illustrate the usefulness of our method. The conservativeness of our approach has been mathematically confirmed in this study. We have demonstrated the importance of conservative adjustment of q-value, particularly in the situation that the proportion of differentially expressed genes is small or the overall differential expression signal is weak.
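For orientation, the sketch below computes standard Benjamini-Hochberg-style q-values from a vector of p-values; it shows the quantity that the paper proposes to adjust conservatively, not the adjustment itself or the permutation-based p-value calculation.

```python
# Background sketch only: standard (unadjusted) q-values in the Benjamini-Hochberg
# style, assuming the proportion of true nulls is 1. The paper's conservative
# adjustment is not reproduced here.
import numpy as np

def q_values(pvals):
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order] * m / np.arange(1, m + 1)        # BH step-up ratios
    q = np.minimum.accumulate(ranked[::-1])[::-1]      # enforce monotonicity
    out = np.empty(m)
    out[order] = np.clip(q, 0, 1)
    return out

# q_values([0.001, 0.01, 0.02, 0.2, 0.5])
```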
Khodadadi, Esmail; Ebrahimi, Hossein; Moghaddasian, Sima; Babapour, Jalil
2013-03-01
Having an effective relationship with the patient in the process of treatment is essential. Nurses must have communication skills in order to establish effective relationships with patients. This study evaluated the impact of communication skills training on the quality of care, self-efficacy, job satisfaction, and communication skills of nurses. This is an experimental study with a control group conducted in 2012. The study sample consisted of 73 nurses working in hospitals of Tabriz, selected by a proportional random sampling method. The intervention was conducted only in the experimental group. In order to measure the quality of care, 160 patients who had received care from these nurses participated in the study. The data were analyzed by SPSS (ver. 13). Comparing the mean scores of communication skills showed a statistically significant difference between the control and experimental groups after the intervention. The paired t-test showed a statistically significant difference in the experimental group before and after the intervention. The independent t-test showed a statistically significant difference in the quality of care reported by patients between the control and experimental groups after the intervention. The results showed that communication skills training can increase nurses' communication skills and improve the quality of nursing care. Therefore, in order to improve the quality of nursing care, it is recommended that communication skills be established and taught as a separate course in nursing education.
Navarro-Fontestad, Carmen; González-Álvarez, Isabel; Fernández-Teruel, Carlos; Bermejo, Marival; Casabó, Vicente Germán
2012-01-01
The aim of the present work was to develop a new mathematical method for estimating the area under the curve (AUC) and its variability that could be applied in different preclinical experimental designs and would be amenable to implementation in standard calculation worksheets. In order to assess the usefulness of the new approach, different experimental scenarios were studied and the results were compared with those obtained with commonly used software: WinNonlin® and Phoenix WinNonlin®. The results do not show statistical differences among the AUC values obtained by both procedures, but the new method appears to be a better estimator of the AUC standard error, measured as the coverage of the 95% confidence interval. In this way, the newly proposed method proves to be as useful as WinNonlin® software where the latter is applicable. Copyright © 2011 John Wiley & Sons, Ltd.
Statistical analysis and digital processing of the Mössbauer spectra
NASA Astrophysics Data System (ADS)
Prochazka, Roman; Tucek, Pavel; Tucek, Jiri; Marek, Jaroslav; Mashlan, Miroslav; Pechousek, Jiri
2010-02-01
This work is focused on using statistical methods and developing filtration procedures for signal processing in Mössbauer spectroscopy. Statistical tools for noise filtering in measured spectra are used in many scientific areas. The use of a purely statistical approach to the filtration of accumulated Mössbauer spectra is described. In Mössbauer spectroscopy, the noise can be considered a Poisson statistical process with a Gaussian distribution for high numbers of observations. This noise is a superposition of the non-resonant photon counting, electronic noise (from γ-ray detection and discrimination units), and the velocity system quality, which can be characterized by the velocity nonlinearities. The possibility of a noise-reducing process using a new design of statistical filter procedure is described. This mathematical procedure improves the signal-to-noise ratio and thus makes it easier to determine the hyperfine parameters of the given Mössbauer spectra. The filter procedure is based on a periodogram method that makes it possible to identify the statistically important components in the spectral domain. The significance level for these components is then feedback-controlled using the correlation coefficient test results. The theoretical correlation coefficient level corresponding to the spectrum resolution is estimated. The correlation coefficient test is based on a comparison of the theoretical and experimental correlation coefficients given by the Spearman method. The correctness of this solution was analyzed by a series of statistical tests and confirmed by many spectra measured with increasing statistical quality for a given sample (absorber). The effect of this filter procedure depends on the signal-to-noise ratio, and the applicability of the method is subject to binding conditions.
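A heavily simplified sketch of periodogram-based filtering is shown below: Fourier components whose power falls under a crude threshold are zeroed and the spectrum is transformed back. The feedback-controlled significance level and the Spearman correlation test of the actual procedure are not reproduced; the synthetic spectrum and threshold rule are assumptions for illustration.

```python
# Simplified sketch (not the authors' feedback-controlled filter): keep only the
# Fourier components of a noisy spectrum whose periodogram power exceeds a
# significance threshold, then transform back.
import numpy as np

def periodogram_filter(signal, alpha=3.0):
    F = np.fft.rfft(signal - signal.mean())
    power = np.abs(F) ** 2
    threshold = alpha * np.median(power)        # crude stand-in for a significance level
    F_filtered = np.where(power >= threshold, F, 0.0)
    return np.fft.irfft(F_filtered, n=len(signal)) + signal.mean()

# Synthetic Moessbauer-like spectrum: smooth absorption dips plus Poisson counting noise.
rng = np.random.default_rng(0)
v = np.linspace(-10, 10, 1024)
clean = 1e5 * (1 - 0.1 * np.exp(-(v - 2) ** 2) - 0.1 * np.exp(-(v + 2) ** 2))
noisy = rng.poisson(clean).astype(float)
smoothed = periodogram_filter(noisy)
```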
2008-03-01
The second applies MVPs to a new scramjet design in support of the Hypersonic International Flight Research Experimentation (HIFiRE).
Application of Absorbable Hemostatic Materials Observed in Thyroid Operation
NASA Astrophysics Data System (ADS)
Li, Yan-Ming; Liang, Zhen-Zhen; Song, Yan
2016-05-01
To observe the application effects of absorbable hemostatic materials in thyroid operations. Methods: From May 2014 to January 2015, 100 patients undergoing thyroid surgery in our university-affiliated hospital were selected as the research subjects and randomly divided into an experimental group and a control group, with 50 cases in each group. Absorbable hemostatic materials were applied in the experimental group during the operation, while the control group received traditional mechanical hemostasis; the operation time, bleeding volume, postoperative drainage volume, complications and length of hospital stay of the two groups were observed. Results: The operation time, bleeding volume, postoperative drainage and hospital stay in the experimental group were significantly lower than in the control group, and the difference between the two groups was statistically significant (P < 0.05). The satisfaction of patients in the experimental group was significantly higher than that in the control group, and the difference between the two groups was statistically significant (P < 0.05). There was no significant difference in the incidence of wound bleeding complications between the experimental group and the control group (P > 0.05). Conclusion: Absorbable hemostatic materials can effectively shorten the operation time, reduce intraoperative blood loss and postoperative drainage, reduce the length of hospital stay, and improve the success rate of surgery and patient satisfaction, and are worthy of being popularized in clinical thyroid surgery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewis, John R.; Brooks, Dusty Marie
In pressurized water reactors, the prevention, detection, and repair of cracks within dissimilar metal welds is essential to ensure proper plant functionality and safety. Weld residual stresses, which are difficult to model and cannot be directly measured, contribute to the formation and growth of cracks due to primary water stress corrosion cracking. Additionally, the uncertainty in weld residual stress measurements and modeling predictions is not well understood, further complicating the prediction of crack evolution. The purpose of this document is to develop methodology to quantify the uncertainty associated with weld residual stress that can be applied to modeling predictions and experimental measurements. Ultimately, the results can be used to assess the current state of uncertainty and to build confidence in both modeling and experimental procedures. The methodology consists of statistically modeling the variation in the weld residual stress profiles using functional data analysis techniques. Uncertainty is quantified using statistical bounds (e.g. confidence and tolerance bounds) constructed with a semi-parametric bootstrap procedure. Such bounds describe the range in which quantities of interest, such as means, are expected to lie as evidenced by the data. The methodology is extended to provide direct comparisons between experimental measurements and modeling predictions by constructing statistical confidence bounds for the average difference between the two quantities. The statistical bounds on the average difference can be used to assess the level of agreement between measurements and predictions. The methodology is applied to experimental measurements of residual stress obtained using two strain relief measurement methods and predictions from seven finite element models developed by different organizations during a round robin study.
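The flavour of the statistical bounds can be conveyed with a plain nonparametric bootstrap over replicate stress profiles, as sketched below; the report's semi-parametric bootstrap and functional data analysis machinery are not reproduced, and the input file name is hypothetical.

```python
# Simplified sketch: pointwise bootstrap confidence bounds for the mean residual
# stress profile from replicate profiles (rows = repeated measurements, columns =
# through-thickness positions). This plain nonparametric bootstrap only
# illustrates the idea of statistical bounds, not the report's procedure.
import numpy as np

def bootstrap_mean_bounds(profiles, n_boot=2000, level=0.95, seed=0):
    rng = np.random.default_rng(seed)
    n = profiles.shape[0]
    boot_means = np.array([
        profiles[rng.integers(0, n, size=n)].mean(axis=0)
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(boot_means,
                           [(1 - level) / 2 * 100, (1 + level) / 2 * 100], axis=0)
    return profiles.mean(axis=0), lo, hi

# profiles = np.loadtxt("wrs_measurements.csv", delimiter=",")   # hypothetical file
# mean, lower, upper = bootstrap_mean_bounds(profiles)
```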
Laboratory animal science: a resource to improve the quality of science.
Forni, M
2007-08-01
The contribution of animal experimentation to biomedical research is of undoubted value, nevertheless the real usefulness of animal models is still being hotly debated. Laboratory Animal Science is a multidisciplinary approach to humane animal experimentation that allows the choice of the correct animal model and the collection of unbiased data. Refinement, Reduction and Replacement, the "3Rs rule", are now widely accepted and have a major influence on animal experimentation procedures. Refinement, namely any decrease in the incidence or severity of inhumane procedures applied to animals, has been today extended to the entire lives of the experimental animals. Reduction of the number of animals used to obtain statistically significant data may be achieved by improving experimental design and statistical analysis of data. Replacement refers to the development of validated alternative methods. A Laboratory Animal Science training program in biomedical degrees can promote the 3Rs and improve the welfare of laboratory animals as well as the quality of science with ethical, scientific and economic advantages complying with the European requirement that "persons who carry out, take part in, or supervise procedures on animals, or take care of animals used in procedures, shall have had appropriate education and training".
NASA Astrophysics Data System (ADS)
Whitcher, Carrie Lynn
2005-08-01
Adolescence is marked by many changes in the development of higher order thinking skills. As students enter high school they are expected to utilize these skills to solve problems, become abstract thinkers, and contribute to society. The goal of this study was to assess horticultural science knowledge achievement and attitude toward horticulture, science, and school in high school agriculture students. There were approximately 240 high school students in the sample, including both experimental and control groups from California and Washington. Students in the experimental group participated in an educational program called "Hands-On Hortscience" which emphasized problem solving in investigation and experimentation activities with greenhouse plants, soilless media, and fertilizers. Students in the control group were taught by the subject matter method. The activities included in the Hands-On Hortscience curriculum were created to reinforce teaching the scientific method through the context of horticulture. The objectives included evaluating whether the students participating in the Hands-On Hortscience experimental group benefited in the areas of science literacy, data acquisition and analysis, and attitude toward horticulture, science, and school. Pre-tests were administered in both the experimental and control groups prior to the research activities and post-tests were administered after completion. The survey questionnaire included a biographical section and an attitude survey. Significant increases in hortscience achievement were found from pre-test to post-test in both the control and experimental study groups. The experimental treatment group had statistically higher achievement scores than the control group in the two areas tested: scientific method (p=0.0016) and horticulture plant nutrition (p=0.0004). In addition, the students participating in the Hands-On Hortscience activities had more positive attitudes toward horticulture, science, and school (p=0.0033). Students who were more actively involved in hands-on projects had higher attitude scores compared to students who were taught by traditional methods alone. In demographic comparisons, females had more positive attitudes toward horticulture science than males, and students from varying ethnic backgrounds had statistically different achievement (p=0.0001); however, the ethnicity comparison was based on very few students in each background (8 in one group and 10 in another). Youth organization membership such as FFA or 4-H had no significant bearing on achievement or attitude.
Coscollà, Clara; Navarro-Olivares, Santiago; Martí, Pedro; Yusà, Vicent
2014-02-01
When attempting to discover the important factors and then optimise a response by tuning these factors, experimental design (design of experiments, DoE) provides a powerful suite of statistical methodology. DoE identifies significant factors and then optimises the response with respect to them during method development. In this work, a headspace-solid-phase micro-extraction (HS-SPME) combined with gas chromatography tandem mass spectrometry (GC-MS/MS) methodology for the simultaneous determination of six important organotin compounds, namely monobutyltin (MBT), dibutyltin (DBT), tributyltin (TBT), monophenyltin (MPhT), diphenyltin (DPhT) and triphenyltin (TPhT), has been optimized using a statistical design of experiments (DoE). The analytical method is based on ethylation with NaBEt4 and simultaneous headspace-solid-phase micro-extraction of the derivative compounds, followed by GC-MS/MS analysis. The main experimental parameters influencing the extraction efficiency selected for optimization were pre-incubation time, incubation temperature, agitator speed, extraction time, desorption temperature, buffer (pH, concentration and volume), headspace volume, sample salinity, preparation of standards, ultrasonic time and desorption time in the injector. The main factors (excitation voltage, excitation time, ion source temperature, isolation time and electron energy) affecting the GC-IT-MS/MS response were also optimized using the same statistical design of experiments. The proposed method presented good linearity (coefficient of determination R(2)>0.99) and repeatability (1-25%) for all the compounds under study. The accuracy of the method, measured as the average percentage recovery of the compounds in spiked surface and marine waters, was higher than 70% for all compounds studied. Finally, the optimized methodology was applied to real aqueous samples, enabling the simultaneous determination of all compounds under study in surface and marine water samples obtained from the Valencia region (Spain). © 2013 Elsevier B.V. All rights reserved.
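As a toy illustration of the screening stage, the snippet below enumerates a two-level full factorial design over a few of the HS-SPME factors named above; the factor names, levels, and design size are assumptions, and the study itself used a dedicated DoE strategy with more factors.

```python
# Minimal sketch of a two-level full factorial screening design for a handful of
# hypothetical HS-SPME factors; the real study optimized many more parameters.
import itertools
import pandas as pd

factors = {
    "extraction_time_min": (10, 40),
    "incubation_temp_C": (40, 80),
    "NaCl_percent": (0, 20),
    "desorption_temp_C": (220, 270),
}
design = pd.DataFrame(list(itertools.product(*factors.values())),
                      columns=list(factors))
print(design)   # 2**4 = 16 runs; the measured responses would be peak areas per analyte
```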
The Influence of Roughness on Gear Surface Fatigue
NASA Technical Reports Server (NTRS)
Krantz, Timothy
2005-01-01
Gear working surfaces are subjected to repeated rolling and sliding contacts, and designs often require loads sufficient to cause eventual fatigue of the surface. This research provides experimental data and analytical tools to further the understanding of the causal relationship of gear surface roughness to surface fatigue. The research included evaluations and developments of statistical tools for gear fatigue data, experimental evaluation of the surface fatigue lives of superfinished gears with a near-mirror quality, and evaluations of the experiments by analytical methods and surface inspections. Alternative statistical methods were evaluated using Monte Carlo studies, leading to a final recommendation to describe gear fatigue data using a Weibull distribution, maximum likelihood estimates of shape and scale parameters, and a presumed zero-valued location parameter. A new method was developed for comparing two datasets by extending the current methods of likelihood-ratio based statistics. The surface fatigue lives of superfinished gears were evaluated by carefully controlled experiments, and it is shown conclusively that superfinishing of gears can provide significantly greater lives relative to ground gears. The measured life improvement was approximately a factor of five. To assist with application of this finding to products, the experimental condition was evaluated. The fatigue life results were expressed in terms of specific film thickness and shown to be consistent with bearing data. Elastohydrodynamic and stress analyses were completed to relate the stress condition to fatigue. Smooth-surface models do not adequately explain the improved fatigue lives. Based on analyses using a rough surface model, it is concluded that the improved fatigue lives of superfinished gears are due to a reduced rate of near-surface micropitting fatigue processes, not to any reduced rate of spalling (sub-surface) fatigue processes. To complete the evaluations, surface inspections were completed. The surface topographies of the ground gears changed substantially due to running, but the topographies of the superfinished gears were essentially unchanged with running.
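The recommended statistical treatment can be sketched as follows: a two-parameter Weibull distribution (zero location parameter) fitted to fatigue lives by maximum likelihood, from which quantities such as an L10 life follow. The life values below are made up for illustration; only the fitting convention reflects the recommendation in the text.

```python
# Sketch of the recommended fit: a two-parameter Weibull (zero location parameter)
# fitted to fatigue lives by maximum likelihood; parameter names follow scipy's
# convention (c = shape, scale = characteristic life). The data are invented.
import numpy as np
from scipy import stats

lives = np.array([41., 55., 62., 74., 88., 97., 120., 141., 163., 212.])  # made-up lives
shape, loc, scale = stats.weibull_min.fit(lives, floc=0)   # floc=0 fixes the location at zero
print(f"Weibull shape {shape:.2f}, scale {scale:.1f}")
print("L10 life estimate:", stats.weibull_min.ppf(0.10, shape, loc=0, scale=scale))
```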
An improved swarm optimization for parameter estimation and biological model selection.
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary searching strategy employed by Chemical Reaction Optimization into the neighbouring searching strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than those of the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study will provide new insight into developing more accurate and reliable biological models based on limited and low-quality experimental data.
R. A. Fisher and his advocacy of randomization.
Hall, Nancy S
2007-01-01
The requirement of randomization in experimental design was first stated by R. A. Fisher, statistician and geneticist, in 1925 in his book Statistical Methods for Research Workers. Earlier designs were systematic and involved the judgment of the experimenter; this led to possible bias and inaccurate interpretation of the data. Fisher's dictum was that randomization eliminates bias and permits a valid test of significance. Randomization in experimenting had been used by Charles Sanders Peirce in 1885 but the practice was not continued. Fisher developed his concepts of randomizing as he considered the mathematics of small samples, in discussions with "Student," William Sealy Gosset. Fisher published extensively. His principles of experimental design were spread worldwide by the many "voluntary workers" who came from other institutions to Rothamsted Agricultural Station in England to learn Fisher's methods.
McElreath, Richard; Bell, Adrian V; Efferson, Charles; Lubell, Mark; Richerson, Peter J; Waring, Timothy
2008-11-12
The existence of social learning has been confirmed in diverse taxa, from apes to guppies. In order to advance our understanding of the consequences of social transmission and evolution of behaviour, however, we require statistical tools that can distinguish among diverse social learning strategies. In this paper, we advance two main ideas. First, social learning is diverse, in the sense that individuals can take advantage of different kinds of information and combine them in different ways. Examining learning strategies for different information conditions illuminates the more detailed design of social learning. We construct and analyse an evolutionary model of diverse social learning heuristics, in order to generate predictions and illustrate the impact of design differences on an organism's fitness. Second, in order to eventually escape the laboratory and apply social learning models to natural behaviour, we require statistical methods that do not depend upon tight experimental control. Therefore, we examine strategic social learning in an experimental setting in which the social information itself is endogenous to the experimental group, as it is in natural settings. We develop statistical models for distinguishing among different strategic uses of social information. The experimental data strongly suggest that most participants employ a hierarchical strategy that uses both average observed pay-offs of options as well as frequency information, the same model predicted by our evolutionary analysis to dominate a wide range of conditions.
Effects of neck exercise on high-school students' neck-shoulder posture.
Lee, Myoung-Hyo; Park, Su-Jin; Kim, Jin-Sang
2013-05-01
[Purpose] This study examined the effects of deep flexor muscle-strengthening exercise on the neck-shoulder posture, and the strength and endurance of the deep flexor muscles of high-school students. [Subjects] The subjects were 30 seventeen-year-old female high-school students who complained about bad posture and chronic neck-shoulder pain. They were randomly divided into an experimental group of 15 subjects, who performed a deep flexor muscle-strengthening exercise and a control group of 15 subjects, who performed a basic stretching exercise. [Methods] The experimental group of 15 subjects performed a deep flexor muscle-strengthening exercise consisting of low-load training of the cranio-cervical flexor muscle, and the control group of 15 subjects performed a basic stretching exercise consisting of seven motions. [Results] The experimental group showed statistically significant changes in head tilt angle, neck flexion angle, forward shoulder angle, and the result of the cranio-cervical flexion test after the training. In contrast, the control group showed no statistically significant changes in these measures following the training. When the results of the groups were compared, statistically significant differences were found for all items between the experimental group and the control group. [Conclusion] Strengthening cranio-cervical flexor muscles is important for the adjustment of neck posture, and maintaining their stability is required to improve neck-shoulder posture.
Humeniuk, Stephan; Büchler, Hans Peter
2017-12-08
We present a method for computing the full probability distribution function of quadratic observables such as particle number or magnetization for the Fermi-Hubbard model within the framework of determinantal quantum Monte Carlo calculations. Especially in cold atom experiments with single-site resolution, such a full counting statistics can be obtained from repeated projective measurements. We demonstrate that the full counting statistics can provide important information on the size of preformed pairs. Furthermore, we compute the full counting statistics of the staggered magnetization in the repulsive Hubbard model at half filling and find excellent agreement with recent experimental results. We show that current experiments are capable of probing the difference between the Hubbard model and the limiting Heisenberg model.
Statistical methods for quantitative mass spectrometry proteomic experiments with labeling.
Oberg, Ann L; Mahoney, Douglas W
2012-01-01
Mass Spectrometry utilizing labeling allows multiple specimens to be subjected to mass spectrometry simultaneously. As a result, between-experiment variability is reduced. Here we describe use of fundamental concepts of statistical experimental design in the labeling framework in order to minimize variability and avoid biases. We demonstrate how to export data in the format that is most efficient for statistical analysis. We demonstrate how to assess the need for normalization, perform normalization, and check whether it worked. We describe how to build a model explaining the observed values and test for differential protein abundance along with descriptive statistics and measures of reliability of the findings. Concepts are illustrated through the use of three case studies utilizing the iTRAQ 4-plex labeling protocol.
Pilpel, Avital
2007-09-01
This paper is concerned with the role of rational belief change theory in the philosophical understanding of experimental error. Today, philosophers seek insight about error in the investigation of specific experiments, rather than in general theories. Nevertheless, rational belief change theory adds to our understanding of just such cases: R. A. Fisher's criticism of Mendel's experiments being a case in point. After an historical introduction, the main part of this paper investigates Fisher's paper from the point of view of rational belief change theory: what changes of belief about Mendel's experiment does Fisher go through and with what justification. It leads to surprising insights about what Fisher had done right and wrong, and, more generally, about the limits of statistical methods in detecting error.
No-Reference Video Quality Assessment Based on Statistical Analysis in 3D-DCT Domain.
Li, Xuelong; Guo, Qun; Lu, Xiaoqiang
2016-05-13
It is an important task to design models for universal no-reference video quality assessment (NR-VQA) in multiple video processing and computer vision applications. However, most existing NR-VQA metrics are designed for specific distortion types, which are often not known in practical applications. A further deficiency is that the spatial and temporal information of videos is hardly considered simultaneously. In this paper, we propose a new NR-VQA metric based on the spatiotemporal natural video statistics (NVS) in the 3D discrete cosine transform (3D-DCT) domain. In the proposed method, a set of features is first extracted based on the statistical analysis of 3D-DCT coefficients to characterize the spatiotemporal statistics of videos in different views. These features are then used to predict the perceived video quality via an efficient linear support vector regression (SVR) model. The contributions of this paper are: 1) we explore the spatiotemporal statistics of videos in the 3D-DCT domain, which has an inherent spatiotemporal encoding advantage over other widely used 2D transformations; 2) we extract a small set of simple but effective statistical features for video visual quality prediction; 3) the proposed method is universal for multiple types of distortions and robust to different databases. The proposed method is tested on four widely used video databases. Extensive experimental results demonstrate that the proposed method is competitive with the state-of-the-art NR-VQA metrics and the top-performing FR-VQA and RR-VQA metrics.
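A stripped-down sketch of such a pipeline is given below: 3D-DCT coefficients are computed on small spatiotemporal cubes, crude coefficient statistics form the feature vector, and a linear SVR maps features to quality scores. The block size, the statistics, and the variables `videos` and `mos` are assumptions; the paper's NVS features and SVR configuration are richer.

```python
# Simplified sketch: 3D-DCT statistics of video cubes fed to a linear SVR, not the
# paper's full NVS feature set. `videos` is assumed to be a list of arrays shaped
# (frames, height, width) and `mos` the corresponding subjective quality scores.
import numpy as np
from scipy.fft import dctn
from sklearn.svm import LinearSVR

def dct3d_features(video, block=8):
    coeffs = []
    t, h, w = (d - d % block for d in video.shape)   # crop to whole blocks
    v = video[:t, :h, :w]
    for i in range(0, t, block):
        for j in range(0, h, block):
            for k in range(0, w, block):
                c = dctn(v[i:i+block, j:j+block, k:k+block], norm="ortho")
                coeffs.append(np.abs(c).ravel())
    coeffs = np.array(coeffs)
    # crude spatiotemporal statistics: mean and std of |coefficient| per frequency
    return np.concatenate([coeffs.mean(axis=0), coeffs.std(axis=0)])

# X = np.array([dct3d_features(v) for v in videos])   # hypothetical data
# model = LinearSVR().fit(X, mos)
```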
Nakagami-based total variation method for speckle reduction in thyroid ultrasound images.
Koundal, Deepika; Gupta, Savita; Singh, Sukhwinder
2016-02-01
A good statistical model is necessary for the reduction in speckle noise. The Nakagami model is more general than the Rayleigh distribution for statistical modeling of speckle in ultrasound images. In this article, the Nakagami-based noise removal method is presented to enhance thyroid ultrasound images and to improve clinical diagnosis. The statistics of log-compressed image are derived from the Nakagami distribution following a maximum a posteriori estimation framework. The minimization problem is solved by optimizing an augmented Lagrange and Chambolle's projection method. The proposed method is evaluated on both artificial speckle-simulated and real ultrasound images. The experimental findings reveal the superiority of the proposed method both quantitatively and qualitatively in comparison with other speckle reduction methods reported in the literature. The proposed method yields an average signal-to-noise ratio gain of more than 2.16 dB over the non-convex regularizer-based speckle noise removal method, 3.83 dB over the Aubert-Aujol model, 1.71 dB over the Shi-Osher model and 3.21 dB over the Rudin-Lions-Osher model on speckle-simulated synthetic images. Furthermore, visual evaluation of the despeckled images shows that the proposed method suppresses speckle noise well while preserving the textures and fine details. © IMechE 2015.
Machine Learning Methods for Attack Detection in the Smart Grid.
Ozay, Mete; Esnaola, Inaki; Yarman Vural, Fatos Tunay; Kulkarni, Sanjeev R; Poor, H Vincent
2016-08-01
Attack detection problems in the smart grid are posed as statistical learning problems for different attack scenarios in which the measurements are observed in batch or online settings. In this approach, machine learning algorithms are used to classify measurements as being either secure or attacked. An attack detection framework is provided to exploit any available prior knowledge about the system and surmount constraints arising from the sparse structure of the problem in the proposed approach. Well-known batch and online learning algorithms (supervised and semisupervised) are employed with decision- and feature-level fusion to model the attack detection problem. The relationships between statistical and geometric properties of attack vectors employed in the attack scenarios and learning algorithms are analyzed to detect unobservable attacks using statistical learning methods. The proposed algorithms are examined on various IEEE test systems. Experimental analyses show that machine learning algorithms can detect attacks with performances higher than attack detection algorithms that employ state vector estimation methods in the proposed attack detection framework.
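A generic sketch of the batch supervised setting is given below, with measurement vectors labelled secure or attacked and an SVM trained to separate them; the synthetic "attack" perturbation is purely illustrative and does not follow the structured attack vectors analysed in the paper.

```python
# Generic sketch of the supervised setting: label measurement vectors as secure (0)
# or attacked (1) and train a batch classifier. Real attack vectors would be built
# from the grid topology; here they are random perturbations for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 1000, 30                          # samples, measurement dimension
secure = rng.normal(size=(n, d))
attacked = secure + rng.normal(0.5, 0.2, size=(n, d)) * (rng.random((n, d)) < 0.1)
X = np.vstack([secure, attacked])
y = np.r_[np.zeros(n), np.ones(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("detection accuracy:", clf.score(X_te, y_te))
```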
NASA Technical Reports Server (NTRS)
Sprowls, D. O.; Bucci, R. J.; Ponchel, B. M.; Brazill, R. L.; Bretz, P. E.
1984-01-01
A technique is demonstrated for accelerated stress corrosion testing of high strength aluminum alloys. The method offers better precision and shorter exposure times than traditional pass/fail procedures. The approach uses data from tension tests performed on replicate groups of smooth specimens after various lengths of exposure to static stress. The breaking strength measures degradation in the test specimen's load-carrying ability due to the environmental attack. Analysis of breaking load data by extreme value statistics enables the calculation of survival probabilities and a statistically defined threshold stress applicable to the specific test conditions. A fracture mechanics model is given which quantifies depth of attack in the stress corroded specimen by an effective flaw size calculated from the breaking stress and the material strength and fracture toughness properties. Comparisons are made with experimental results from three tempers of 7075 alloy plate tested by the breaking load method and by traditional tests of statically loaded smooth tension bars and conventional precracked specimens.
TIME-DOMAIN METHODS FOR DIFFUSIVE TRANSPORT IN SOFT MATTER
Fricks, John; Yao, Lingxing; Elston, Timothy C.; Forest, M. Gregory
2015-01-01
Passive microrheology [12] utilizes measurements of noisy, entropic fluctuations (i.e., diffusive properties) of micron-scale spheres in soft matter to infer bulk frequency-dependent loss and storage moduli. Here, we are concerned exclusively with diffusion of Brownian particles in viscoelastic media, for which the Mason-Weitz theoretical-experimental protocol is ideal, and the more challenging inference of bulk viscoelastic moduli is decoupled. The diffusive theory begins with a generalized Langevin equation (GLE) with a memory drag law specified by a kernel [7, 16, 22, 23]. We start with a discrete formulation of the GLE as an autoregressive stochastic process governing microbead paths measured by particle tracking. For the inverse problem (recovery of the memory kernel from experimental data) we apply time series analysis (maximum likelihood estimators via the Kalman filter) directly to bead position data, an alternative to formulas based on mean-squared displacement statistics in frequency space. For direct modeling, we present statistically exact GLE algorithms for individual particle paths as well as statistical correlations for displacement and velocity. Our time-domain methods rest upon a generalization of well-known results for a single-mode exponential kernel [1, 7, 22, 23] to an arbitrary M-mode exponential series, for which the GLE is transformed to a vector Ornstein-Uhlenbeck process. PMID:26412904
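For the single-mode exponential kernel mentioned above, the GLE velocity reduces to an Ornstein-Uhlenbeck process whose exact discretization is an AR(1) recursion; the sketch below simulates such a path, with parameter values chosen arbitrarily. The multi-mode case generalizes this to the vector OU process discussed in the text.

```python
# Sketch: exact discrete-time simulation of the single-mode (exponential-kernel)
# case, where the velocity follows an Ornstein-Uhlenbeck process dv = -theta*v*dt
# + sigma*dW and its exact discretization is an AR(1) recursion. Parameters are
# arbitrary illustrative values.
import numpy as np

theta, sigma, dt, n = 2.0, 1.0, 1e-2, 10_000
rng = np.random.default_rng(0)
v = np.zeros(n)
a = np.exp(-theta * dt)                            # AR(1) coefficient
s = sigma * np.sqrt((1 - a**2) / (2 * theta))      # exact innovation standard deviation
for i in range(1, n):
    v[i] = a * v[i - 1] + s * rng.normal()
x = np.cumsum(v) * dt                              # bead position path
```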
Testing for nonlinearity in time series: The method of surrogate data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Theiler, J.; Galdrikian, B.; Longtin, A.
1991-01-01
We describe a statistical approach for identifying nonlinearity in time series; in particular, we want to avoid claims of chaos when simpler models (such as linearly correlated noise) can explain the data. The method requires a careful statement of the null hypothesis which characterizes a candidate linear process, the generation of an ensemble of "surrogate" data sets which are similar to the original time series but consistent with the null hypothesis, and the computation of a discriminating statistic for the original and for each of the surrogate data sets. The idea is to test the original time series against the null hypothesis by checking whether the discriminating statistic computed for the original time series differs significantly from the statistics computed for each of the surrogate sets. We present algorithms for generating surrogate data under various null hypotheses, and we show the results of numerical experiments on artificial data using correlation dimension, Lyapunov exponent, and forecasting error as discriminating statistics. Finally, we consider a number of experimental time series -- including sunspots, electroencephalogram (EEG) signals, and fluid convection -- and evaluate the statistical significance of the evidence for nonlinear structure in each case. 56 refs., 8 figs.
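One widely used recipe for generating surrogates consistent with a linear Gaussian null hypothesis is phase randomization, sketched below; the discriminating statistic (correlation dimension, Lyapunov exponent, or forecasting error) would then be compared between the original series and the surrogate ensemble. The data here are a random placeholder.

```python
# Sketch of one common surrogate-generation scheme (phase randomization): the
# surrogates share the power spectrum of the original series and are consistent
# with a null hypothesis of linearly correlated Gaussian noise.
import numpy as np

def phase_randomized_surrogate(x, rng):
    X = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=X.shape)
    Xs = np.abs(X) * np.exp(1j * phases)
    Xs[0] = X[0]                          # keep the mean exactly
    if len(x) % 2 == 0:
        Xs[-1] = X[-1]                    # keep the Nyquist component real
    return np.fft.irfft(Xs, n=len(x))

rng = np.random.default_rng(0)
x = rng.normal(size=1024)                 # placeholder for an experimental series
surrogates = [phase_randomized_surrogate(x, rng) for _ in range(39)]
# A discriminating statistic computed on x would then be compared with its
# distribution over the surrogate ensemble to assess significance.
```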
Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M; Grün, Sonja
2017-01-01
Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis.
Quaglio, Pietro; Yegenoglu, Alper; Torre, Emiliano; Endres, Dominik M.; Grün, Sonja
2017-01-01
Repeated, precise sequences of spikes are largely considered a signature of activation of cell assemblies. These repeated sequences are commonly known under the name of spatio-temporal patterns (STPs). STPs are hypothesized to play a role in the communication of information in the computational process operated by the cerebral cortex. A variety of statistical methods for the detection of STPs have been developed and applied to electrophysiological recordings, but such methods scale poorly with the current size of available parallel spike train recordings (more than 100 neurons). In this work, we introduce a novel method capable of overcoming the computational and statistical limits of existing analysis techniques in detecting repeating STPs within massively parallel spike trains (MPST). We employ advanced data mining techniques to efficiently extract repeating sequences of spikes from the data. Then, we introduce and compare two alternative approaches to distinguish statistically significant patterns from chance sequences. The first approach uses a measure known as conceptual stability, of which we investigate a computationally cheap approximation for applications to such large data sets. The second approach is based on the evaluation of pattern statistical significance. In particular, we provide an extension to STPs of a method we recently introduced for the evaluation of statistical significance of synchronous spike patterns. The performance of the two approaches is evaluated in terms of computational load and statistical power on a variety of artificial data sets that replicate specific features of experimental data. Both methods provide an effective and robust procedure for detection of STPs in MPST data. The method based on significance evaluation shows the best overall performance, although at a higher computational cost. We name the novel procedure the spatio-temporal Spike PAttern Detection and Evaluation (SPADE) analysis. PMID:28596729
Experimental Analysis of Cell Function Using Cytoplasmic Streaming
ERIC Educational Resources Information Center
Janssens, Peter; Waldhuber, Megan
2012-01-01
This laboratory exercise investigates the phenomenon of cytoplasmic streaming in the fresh water alga "Nitella". Students use the fungal toxin cytochalasin D, an inhibitor of actin polymerization, to investigate the mechanism of streaming. Students use simple statistical methods to analyze their data. Typical student data are provided. (Contains 3…
A Hybrid Computer Simulation to Generate the DNA Distribution of a Cell Population.
ERIC Educational Resources Information Center
Griebling, John L.; Adams, William S.
1981-01-01
Described is a method of simulating the formation of a DNA distribution, on which statistical results and experimentally measured parameters from DNA distribution and percent-labeled mitosis studies are combined. An EAI-680 and DECSystem-10 Hybrid Computer configuration are used. (Author/CS)
An Experimental Ecological Study of a Garden Compost Heap.
ERIC Educational Resources Information Center
Curds, Tracy
1985-01-01
A quantitative study of the fauna of a garden compost heap shows it to be similar to that of organisms found in soil and leaf litter. Materials, methods, and results are discussed and extensive tables of fauna lists, wet/dry masses, and statistical analyses are presented. (Author/DH)
The Infeasibility of Experimental Quantification of Life-Critical Software Reliability
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Finelli, George B.
1991-01-01
This paper affirms that quantification of life-critical software reliability is infeasible using statistical methods, whether applied to standard software or fault-tolerant software. The key assumption of software fault tolerance, that separately programmed versions fail independently, is shown to be problematic. This assumption cannot be justified by experimentation in the ultra-reliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. The implications of recent multi-version software experiments also support this conclusion.
Robust matching for voice recognition
NASA Astrophysics Data System (ADS)
Higgins, Alan; Bahler, L.; Porter, J.; Blais, P.
1994-10-01
This paper describes an automated method of comparing a voice sample of an unknown individual with samples from known speakers in order to establish or verify the individual's identity. The method is based on a statistical pattern matching approach that employs a simple training procedure, requires no human intervention (transcription, word or phonetic marking, etc.), and makes no assumptions regarding the expected form of the statistical distributions of the observations. The content of the speech material (vocabulary, grammar, etc.) is not assumed to be constrained in any way. An algorithm is described which incorporates frame pruning and channel equalization processes designed to achieve robust performance with reasonable computational resources. An experimental implementation demonstrating the feasibility of the concept is described.
Guidelines for the Investigation of Mediating Variables in Business Research
Coxe, Stefany; Baraldi, Amanda N.
2013-01-01
Business theories often specify the mediating mechanisms by which a predictor variable affects an outcome variable. In the last 30 years, investigations of mediating processes have become more widespread with corresponding developments in statistical methods to conduct these tests. The purpose of this article is to provide guidelines for mediation studies by focusing on decisions made prior to the research study that affect the clarity of conclusions from a mediation study, the statistical models for mediation analysis, and methods to improve interpretation of mediation results after the research study. Throughout this article, the importance of a program of experimental and observational research for investigating mediating mechanisms is emphasized. PMID:25237213
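As a hedged illustration of the product-of-coefficients logic that underlies many of the statistical models for mediation referred to above (the simulated data, effect sizes and the use of statsmodels are illustrative assumptions, not the authors' worked example):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal(n)                       # predictor
m = 0.5 * x + rng.standard_normal(n)             # mediator
y = 0.4 * m + 0.1 * x + rng.standard_normal(n)   # outcome

# Path a: predictor -> mediator
a_fit = sm.OLS(m, sm.add_constant(x)).fit()
# Path b (and direct effect c'): mediator and predictor -> outcome
b_fit = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()

a, b = a_fit.params[1], b_fit.params[2]
indirect = a * b                                  # mediated (indirect) effect
# Sobel standard error for the indirect effect
se = np.sqrt(a ** 2 * b_fit.bse[2] ** 2 + b ** 2 * a_fit.bse[1] ** 2)
print(f"indirect effect {indirect:.3f}, Sobel z = {indirect / se:.2f}")
```

Bootstrap confidence intervals for the indirect effect are generally preferred over the Sobel test; the sketch only shows the basic estimation step.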
The Effect of Yoga on Functional Recovery Level in Schizophrenic Patients.
Kavak, Funda; Ekinci, Mine
2016-12-01
The objective of this study is to determine the effect of yoga on the functional recovery level of schizophrenic patients. The study was conducted in a quasi-experimental design with a pretest-posttest control group. The population of the study consisted of schizophrenic patients registered with the Malatya and Elazığ Community Mental Health Centers and regularly attending these centers. The sample consisted of 100 patients in total, 50 in the experimental group and 50 in the control group, specified through power analysis and chosen from this population by random sampling. The data were collected between April 2015 and August 2015 using a 'Patient Description Form' and the FROGS scale. Yoga was applied to patients in the experimental group; no intervention was made for patients in the control group. Percentage distribution, arithmetic mean, standard deviation, chi-square, independent samples t test, and paired t test were used to assess the data. The pretest subscale and total mean scores of FROGS were low in both the control and experimental groups. In the posttest, the subscale and total mean scores of FROGS in the experimental group were higher than in the control group, and the differences between them were statistically significant (p<0.05). In the experimental group, the differences between pretest and posttest subscale and total mean scores of FROGS were also statistically significant (p<0.05). Yoga applied to schizophrenic patients was found to increase the level of functional recovery. It can be suggested that yoga be used as a complementary method in nursing practice in order to increase the effectiveness of treatment. Copyright © 2016 Elsevier Inc. All rights reserved.
Experimental aspect of solid-state nuclear magnetic resonance studies of biomaterials such as bones.
Singh, Chandan; Rai, Ratan Kumar; Sinha, Neeraj
2013-01-01
Solid-state nuclear magnetic resonance (SSNMR) spectroscopy is increasingly becoming a popular technique to probe micro-structural details of biomaterials such as bone with picometer resolution. Because of the high-resolution structural details probed by SSNMR methods, the handling of bone samples and the experimental protocol are crucial aspects of such studies. We present here the first report of the effect of various experimental protocols and handling methods of bone samples on measured SSNMR parameters. Various popular SSNMR experiments were performed on intact cortical bone collected from a fresh animal, immediately after removal from the animal, and the results were compared with bone samples preserved under different conditions. We find that the best experimental conditions for SSNMR parameters of bone correspond to preservation at -20 °C and in 70% ethanol solution. Various other SSNMR parameters were compared across the different experimental conditions. Our study has helped identify the best experimental protocol for SSNMR studies of bone, and will be of further help in the application of SSNMR to large animal models of bone disease for statistically significant results. © 2013 Elsevier Inc. All rights reserved.
Application of multivariate statistical techniques in microbial ecology.
Paliy, O; Shankar, V
2016-03-01
Recent advances in high-throughput methods of molecular analysis have led to an explosion of studies generating large-scale ecological data sets. A particularly noticeable effect has been attained in the field of microbial ecology, where new experimental approaches have provided in-depth assessments of the composition, functions and dynamic changes of complex microbial communities. Because even a single high-throughput experiment produces a large amount of data, powerful statistical techniques of multivariate analysis are well suited to analyse and interpret these data sets. Many different multivariate techniques are available, and often it is not clear which method should be applied to a particular data set. In this review, we describe and compare the most widely used multivariate statistical techniques, including exploratory, interpretive and discriminatory procedures. We consider several important limitations and assumptions of these methods, and we present examples of how these approaches have been utilized in recent studies to provide insight into the ecology of the microbial world. Finally, we offer suggestions for the selection of appropriate methods based on the research question and data set structure. © 2016 John Wiley & Sons Ltd.
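A minimal sketch of one exploratory ordination technique commonly covered in such reviews, principal component analysis of a community abundance table (the toy abundance matrix and the Hellinger pre-treatment are illustrative assumptions, not taken from the review):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# toy abundance table: 12 samples x 20 taxa, two simulated community types
group_a = rng.poisson(lam=np.r_[np.full(10, 5.0), np.full(10, 1.0)], size=(6, 20))
group_b = rng.poisson(lam=np.r_[np.full(10, 1.0), np.full(10, 5.0)], size=(6, 20))
counts = np.vstack([group_a, group_b])

# Hellinger transformation is a common pre-treatment before PCA on abundance data
rel = counts / counts.sum(axis=1, keepdims=True)
hellinger = np.sqrt(rel)

pca = PCA(n_components=2)
scores = pca.fit_transform(hellinger)
print("variance explained:", np.round(pca.explained_variance_ratio_, 3))
print("sample scores on PC1/PC2:\n", np.round(scores, 2))
```

The two simulated community types separate along the first principal component; interpretive and discriminatory procedures (e.g. constrained ordination) would add explanatory variables on top of this.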
Exploiting Complexity Information for Brain Activation Detection
Zhang, Yan; Liang, Jiali; Lin, Qiang; Hu, Zhenghui
2016-01-01
We present a complexity-based approach for the analysis of fMRI time series, in which sample entropy (SampEn) is introduced as a quantification of voxel complexity. The hypothesis is that voxel complexity is modulated by pertinent cognitive tasks and therefore changes across experimental paradigms. We calculate the complexity of sequential fMRI data for each voxel in two distinct experimental paradigms and use a nonparametric statistical strategy, the Wilcoxon signed rank test, to evaluate the difference in complexity between them. The results are compared with the well-known general linear model based Statistical Parametric Mapping package (SPM12), and a marked difference is observed: the SampEn method detects brain complexity changes between the two experimental conditions and, being data-driven, evaluates just the complexity of the specific sequential fMRI data. Moreover, larger and smaller SampEn values carry different meanings, and the neutral-blank design produces higher predictability than threat-neutral. Complexity information can be considered a complementary method to existing fMRI analysis strategies, and it may help improve the understanding of human brain functions from a different perspective. PMID:27045838
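A minimal sketch of a sample-entropy calculation of the kind described above (a simplified template-counting implementation applied to toy signals; not the authors' code):

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy SampEn(m, r): negative log of the conditional probability
    that sequences matching for m points also match for m + 1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(len(x) - dim)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev distance to all later templates (self-matches excluded)
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count

    B = count_matches(m)        # matches of length m
    A = count_matches(m + 1)    # matches of length m + 1
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))
noisy = rng.standard_normal(500)
print("SampEn (regular):", round(sample_entropy(regular), 3))
print("SampEn (noisy):  ", round(sample_entropy(noisy), 3))
```

The regular signal yields a much lower SampEn than the noise, illustrating how the measure separates predictable from unpredictable time courses before any voxel-wise nonparametric testing.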
González-Rodríguez, M L; Barros, L B; Palma, J; González-Rodríguez, P L; Rabasco, A M
2007-06-07
In this paper, we used statistical experimental design to investigate the effect of several factors on the process of coating lidocaine hydrochloride (LID) liposomes with a biodegradable polymer (chitosan, CH). These variables were the concentration of the CH coating solution, the dripping rate of this solution onto the liposome colloidal dispersion, the stirring rate, the time from liposome production to liposome coating and, finally, the amount of drug entrapped in the liposomes. The selected response variables were drug encapsulation efficiency (EE, %), coating efficiency (CE, %) and zeta potential. Liposomes were obtained by the thin-layer evaporation method. They were subsequently coated with CH according to the experimental plan provided by a fractional factorial (2(5-1)) screening matrix. Spectroscopic methods were used to determine the zeta potential values. The EE (%) assay was carried out in dialysis bags, and the brilliant red probe was used to determine CE (%) owing to its property of forming molecular complexes with CH. The graphic analysis of the effects allowed the identification of the main formulation and technological factors through analysis of the selected responses, and permitted the determination of the proper level of these factors for response improvement. Moreover, the fractional design allowed quantification of the interactions between the factors, which will be considered in subsequent experiments. The results obtained indicate that the LID amount was the predominant factor increasing the drug entrapment capacity (EE). The CE (%) response was mainly affected by the concentration of the CH solution and the stirring rate, although all the interactions between the main factors were statistically significant.
PhenStat | Informatics Technology for Cancer Research (ITCR)
PhenStat is a freely available R package that provides a variety of statistical methods for the identification of phenotypic associations in model organisms, developed for the International Mouse Phenotyping Consortium (IMPC at www.mousephenotype.org). The methods have been developed for high-throughput phenotyping pipelines implemented across various experimental designs, with an emphasis on managing temporal variation, and are being adapted for analysis with PDX mouse strains.
Modeling and optimization of dough recipe for breadsticks
NASA Astrophysics Data System (ADS)
Krivosheev, A. Yu; Ponomareva, E. I.; Zhuravlev, A. A.; Lukina, S. I.; Alekhina, N. N.
2018-05-01
In this work, the authors studied the combined effect of non-traditional raw materials on quality indicators of breadsticks, applying mathematical methods of experiment planning. The main factors chosen were the dosages of flaxseed flour and grape seed oil. The output parameters were the swelling factor of the products and their strength. Optimization of the formulation composition of the dough for breadsticks was carried out by experimental-statistical methods. As a result of the experiment, mathematical models were constructed in the form of regression equations adequately describing the process under study. The statistical processing of the experimental data was carried out using the Student, Cochran and Fisher criteria (with a confidence probability of 0.95). A mathematical interpretation of the regression equations was given. Optimization of the dough formulation for breadsticks was carried out by the method of undetermined Lagrange multipliers. The rational values of the factors were determined: a dosage of flaxseed flour of 14.22% and of grape seed oil of 7.8%, ensuring the production of products with the best combination of swelling ratio and strength. On the basis of the data obtained, a recipe and a method for the production of breadsticks "Idea" were proposed (TU (Russian Technical Specifications) 9117-443-02068106-2017).
Online and offline tools for head movement compensation in MEG.
Stolk, Arjen; Todorovic, Ana; Schoffelen, Jan-Mathijs; Oostenveld, Robert
2013-03-01
Magnetoencephalography (MEG) is measured above the head, which makes it sensitive to variations of the head position with respect to the sensors. Head movements blur the topography of the neuronal sources of the MEG signal, increase localization errors, and reduce statistical sensitivity. Here we describe two novel and readily applicable methods that compensate for the detrimental effects of head motion on the statistical sensitivity of MEG experiments. First, we introduce an online procedure that continuously monitors head position. Second, we describe an offline analysis method that takes into account the head position time-series. We quantify the performance of these methods in the context of three different experimental settings, involving somatosensory, visual and auditory stimuli, assessing both individual and group-level statistics. The online head localization procedure allowed for optimal repositioning of the subjects over multiple sessions, resulting in a 28% reduction of the variance in dipole position and an improvement of up to 15% in statistical sensitivity. Offline incorporation of the head position time-series into the general linear model resulted in improvements of group-level statistical sensitivity between 15% and 29%. These tools can substantially reduce the influence of head movement within and between sessions, increasing the sensitivity of many cognitive neuroscience experiments. Copyright © 2012 Elsevier Inc. All rights reserved.
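A hedged sketch of the offline idea of including head-position time series as nuisance regressors in a general linear model (simulated single-sensor trial amplitudes; the regressor layout is an illustrative assumption, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials = 300
design = rng.integers(0, 2, n_trials).astype(float)                      # condition regressor (0/1)
head_pos = np.cumsum(rng.standard_normal((n_trials, 3)), axis=0) * 0.1   # slow x, y, z drift (cm)

# simulated sensor amplitude: condition effect + head-position confound + noise
amplitude = 2.0 * design + head_pos @ np.array([0.5, -0.3, 0.8]) + rng.standard_normal(n_trials)

def condition_t_value(y, X):
    """Ordinary least squares fit; returns the t-value of the first (condition) regressor."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = res[0] / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

print("t without head-position regressors:", round(condition_t_value(amplitude, design), 2))
print("t with head-position regressors:   ",
      round(condition_t_value(amplitude, np.column_stack([design, head_pos])), 2))
```

Absorbing the head-position-related variance into nuisance regressors leaves less residual noise for the condition contrast, which is the statistical-sensitivity gain the abstract quantifies.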
NASA Astrophysics Data System (ADS)
Bouhaj, M.; von Estorff, O.; Peiffer, A.
2017-09-01
In the application of Statistical Energy Analysis (SEA) to complex assembled structures, a purely predictive model often exhibits errors. These errors are mainly due to a lack of accurate modelling of the power transmission mechanism described through the Coupling Loss Factors (CLFs). Experimental SEA (ESEA) is used in practice by the automotive and aerospace industries to verify and update the model, or to derive the CLFs for use in an SEA predictive model when analytical estimates cannot be made. This work is particularly motivated by the lack of procedures for estimating the variance and confidence intervals of the statistical quantities obtained with the ESEA technique. The aim of this paper is to introduce procedures enabling a statistical description of measured power input, vibration energies and the derived SEA parameters. Particular emphasis is placed on the identification of structural CLFs of complex built-up structures, comparing different methods. By adopting a Stochastic Energy Model (SEM), the ensemble average in ESEA is also addressed. For this purpose, expressions are obtained to randomly perturb the energy matrix elements and generate individual samples for the Monte Carlo (MC) technique applied to derive the ensemble-averaged CLF. For ESEA tests conducted on an aircraft fuselage section, the SEM approach yields better estimates of the CLFs than classical matrix inversion methods. The expected range of CLF values and the synthesized energy are used as quality criteria for the matrix inversion, allowing the assessment of critical SEA subsystems which might require a more refined statistical description of the excitation and response fields. Moreover, the impact of the variance of the normalized vibration energy on the uncertainty of the derived CLFs is outlined.
Ganger, Michael T; Dietz, Geoffrey D; Ewing, Sarah J
2017-12-01
qPCR has established itself as the technique of choice for the quantification of gene expression. Procedures for conducting qPCR have received significant attention; however, more rigorous approaches to the statistical analysis of qPCR data are needed. Here we develop a mathematical model, termed the Common Base Method, for analysis of qPCR data based on threshold cycle values (Cq) and efficiencies of reactions (E). The Common Base Method keeps all calculations in the log scale as long as possible by working with log10(E)·Cq, which we call the efficiency-weighted Cq value; subsequent statistical analyses are then applied in the log scale. We show how efficiency-weighted Cq values may be analyzed using a simple paired or unpaired experimental design and develop blocking methods to help reduce unexplained variation. The Common Base Method has several advantages. It allows for the incorporation of well-specific efficiencies and multiple reference genes. The method does not necessitate the pairing of samples that must be performed using traditional analysis methods in order to calculate relative expression ratios. Our method is also simple enough to be implemented in any spreadsheet or statistical software without additional scripts or proprietary components.
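A minimal sketch of the efficiency-weighted Cq idea (toy Cq and efficiency values; normalization to a single reference gene and an unpaired t-test are illustrative simplifications, not the authors' worked example):

```python
import numpy as np
from scipy import stats

# toy data: Cq values and per-well efficiencies (E) for a gene of interest (GOI)
# and a reference gene, in control vs. treated samples
cq_goi_ctrl = np.array([24.1, 24.5, 23.9, 24.3]); e_goi_ctrl = np.array([1.95, 1.93, 1.96, 1.94])
cq_goi_trt  = np.array([22.0, 22.4, 21.8, 22.1]); e_goi_trt  = np.array([1.94, 1.95, 1.93, 1.96])
cq_ref_ctrl = np.array([18.2, 18.4, 18.1, 18.3]); e_ref_ctrl = np.array([1.98, 1.97, 1.99, 1.98])
cq_ref_trt  = np.array([18.3, 18.2, 18.4, 18.1]); e_ref_trt  = np.array([1.97, 1.98, 1.96, 1.99])

# efficiency-weighted Cq: log10(E) * Cq, keeping everything in log scale
w = lambda e, cq: np.log10(e) * cq

# relative expression (GOI normalized to the reference gene), still in log10 units
delta_ctrl = w(e_ref_ctrl, cq_ref_ctrl) - w(e_goi_ctrl, cq_goi_ctrl)
delta_trt  = w(e_ref_trt,  cq_ref_trt)  - w(e_goi_trt,  cq_goi_trt)

t, p = stats.ttest_ind(delta_trt, delta_ctrl)   # unpaired design; ttest_rel for paired samples
log10_fold_change = delta_trt.mean() - delta_ctrl.mean()
print(f"fold change ~ {10 ** log10_fold_change:.2f}, p = {p:.4f}")
```

Keeping the comparison in log scale until the very end, and converting to a fold change only for reporting, follows the spirit of the method described above.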
Tichy, Diana; Pickl, Julia Maria Anna; Benner, Axel; Sültmann, Holger
2017-03-31
The identification of microRNA (miRNA) target genes is crucial for understanding miRNA function. Many methods for genome-wide miRNA target identification have been developed in recent years; however, they have several limitations, including the dependence on low-confidence prediction programs and artificial miRNA manipulations. Ago-RNA immunoprecipitation combined with high-throughput sequencing (Ago-RIP-Seq) is a promising alternative. However, appropriate statistical data analysis algorithms taking into account the experimental design and the inherent noise of such experiments are largely lacking. Here, we investigate the experimental design for Ago-RIP-Seq and examine biostatistical methods to identify de novo miRNA target genes. Statistical approaches considered are either based on a negative binomial model fit to the read count data or applied to transformed data using a normal distribution-based generalized linear model. We compare them by a real data simulation study using plasmode data sets and evaluate the suitability of the approaches to detect true miRNA targets by sensitivity and false discovery rates. Our results suggest that simple approaches like linear regression models on (appropriately) transformed read count data are preferable. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
AGR-1 Thermocouple Data Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeff Einerson
2012-05-01
This report documents an effort to analyze measured and simulated data obtained in the Advanced Gas Reactor (AGR) fuel irradiation test program conducted in the INL's Advanced Test Reactor (ATR) to support the Next Generation Nuclear Plant (NGNP) R&D program. The work follows up on a previous study (Pham and Einerson, 2010), in which statistical analysis methods were applied for AGR-1 thermocouple data qualification. The present work exercises the idea that, while recognizing uncertainties inherent in physics and thermal simulations of the AGR-1 test, results of the numerical simulations can be used in combination with the statistical analysis methods to further improve qualification of measured data. Additionally, the combined analysis of measured and simulation data can generate insights about simulation model uncertainty that can be useful for model improvement. This report also describes an experimental control procedure to maintain fuel target temperature in future AGR tests using regression relationships that include simulation results. The report is organized into four chapters. Chapter 1 introduces the AGR Fuel Development and Qualification program, the AGR-1 test configuration and test procedure, an overview of AGR-1 measured data, and an overview of physics and thermal simulation, including modeling assumptions and uncertainties. A brief summary of statistical analysis methods developed in (Pham and Einerson 2010) for AGR-1 measured data qualification within the NGNP Data Management and Analysis System (NDMAS) is also included for completeness. Chapters 2-3 describe and discuss cases in which the combined use of experimental and simulation data is realized. A set of issues associated with measurement and modeling uncertainties resulting from the combined analysis is identified. This includes demonstration that such a combined analysis led to important insights for reducing uncertainty in the presentation of AGR-1 measured data (Chapter 2) and the interpretation of simulation results (Chapter 3). The statistics-based, simulation-aided experimental control procedure for future AGR tests is developed and demonstrated in Chapter 4. The procedure for controlling the target fuel temperature (capsule peak or average) is based on regression functions of thermocouple readings and other relevant parameters, accounting for possible changes in both physical and thermal conditions and in instrument performance.
Quantum tomography for collider physics: illustrations with lepton-pair production
NASA Astrophysics Data System (ADS)
Martens, John C.; Ralston, John P.; Takaki, J. D. Tapia
2018-01-01
Quantum tomography is a method to experimentally extract all that is observable about a quantum mechanical system. We introduce quantum tomography to collider physics with the illustration of the angular distribution of lepton pairs. The tomographic method bypasses much of the field-theoretic formalism to concentrate on what can be observed with experimental data. We provide a practical, experimentally driven guide to model-independent analysis using density matrices at every step. Comparison with traditional methods of analyzing angular correlations of inclusive reactions finds many advantages in the tomographic method, which include manifest Lorentz covariance, direct incorporation of positivity constraints, exhaustively complete polarization information, and new invariants free from frame conventions. For example, experimental data can determine the entanglement entropy of the production process. We give reproducible numerical examples and provide a supplemental standalone computer code that implements the procedure. We also highlight a property of complex positivity that guarantees in a least-squares type fit that a local minimum of a χ2 statistic will be a global minimum: there are no isolated local minima. This property, with an automated implementation of positivity, promises to mitigate issues relating to multiple minima and convention dependence that have been problematic in previous work on angular distributions.
Nonparametric entropy estimation using kernel densities.
Lake, Douglas E
2009-01-01
The entropy of experimental data from the biological and medical sciences provides additional information over summary statistics. Calculating entropy involves estimates of probability density functions, which can be effectively accomplished using kernel density methods. Kernel density estimation has been widely studied and a univariate implementation is readily available in MATLAB. The traditional definition of Shannon entropy is part of a larger family of statistics, called Renyi entropy, which are useful in applications that require a measure of the Gaussianity of data. Of particular note is the quadratic entropy which is related to the Friedman-Tukey (FT) index, a widely used measure in the statistical community. One application where quadratic entropy is very useful is the detection of abnormal cardiac rhythms, such as atrial fibrillation (AF). Asymptotic and exact small-sample results for optimal bandwidth and kernel selection to estimate the FT index are presented and lead to improved methods for entropy estimation.
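A hedged sketch of kernel-density-based entropy estimation, including the Renyi quadratic entropy related to the Friedman-Tukey index mentioned above (a univariate Gaussian kernel density via SciPy and toy samples; the asymptotic bandwidth results of the paper are not reproduced here):

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.integrate import trapezoid

def kde_shannon_entropy(x, n_grid=1000):
    """Differential Shannon entropy estimate: numerically integrate -f(x) log f(x)
    over a grid, with f obtained from a Gaussian kernel density fit."""
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), n_grid)
    f = np.clip(kde(grid), 1e-300, None)          # avoid log(0)
    return -trapezoid(f * np.log(f), grid)

def kde_quadratic_entropy(x, n_grid=1000):
    """Renyi quadratic entropy, -log integral of f(x)^2, related to the Friedman-Tukey index."""
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), n_grid)
    f = kde(grid)
    return -np.log(trapezoid(f ** 2, grid))

rng = np.random.default_rng(5)
gaussian_sample = rng.standard_normal(2000)
bimodal_sample = np.concatenate([rng.normal(-2, 0.5, 1000), rng.normal(2, 0.5, 1000)])
print("Shannon entropy:  ", round(kde_shannon_entropy(gaussian_sample), 3),
      round(kde_shannon_entropy(bimodal_sample), 3))
print("Quadratic entropy:", round(kde_quadratic_entropy(gaussian_sample), 3),
      round(kde_quadratic_entropy(bimodal_sample), 3))
```

Comparing the two samples illustrates how the entropy measures respond to departures from Gaussianity, the property exploited in applications such as atrial fibrillation detection.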
Lozoya, Oswaldo A.; Santos, Janine H.; Woychik, Richard P.
2018-01-01
To life scientists, one important feature offered by RNAseq, a next-generation sequencing tool used to estimate changes in gene expression levels, lies in its unprecedented resolution. It can score countable differences in transcript numbers among thousands of genes and between experimental groups, all at once. However, its high cost limits experimental designs to very small sample sizes, usually N = 3, which often results in statistically underpowered analysis and poor reproducibility. All these issues are compounded by the presence of experimental noise, which is harder to distinguish from instrumental error when sample sizes are limiting (e.g., small-budget pilot tests), experimental populations exhibit biologically heterogeneous or diffuse expression phenotypes (e.g., patient samples), or when discriminating among transcriptional signatures of closely related experimental conditions (e.g., toxicological modes of action, or MOAs). Here, we present a leveraged signal-to-noise ratio (LSTNR) thresholding method, founded on generalized linear modeling (GLM) of aligned read detection limits to extract differentially expressed genes (DEGs) from noisy low-replication RNAseq data. The LSTNR method uses an agnostic independent filtering strategy to define the dynamic range of detected aggregate read counts per gene, and assigns statistical weights that prioritize genes with better sequencing resolution in differential expression analyses. To assess its performance, we implemented the LSTNR method to analyze three separate datasets: first, using a systematically noisy in silico dataset, we demonstrated that LSTNR can extract pre-designed patterns of expression and discriminate between “noise” and “true” differentially expressed pseudogenes at a 100% success rate; then, we illustrated how the LSTNR method can assign patient-derived breast cancer specimens correctly to one out of their four reported molecular subtypes (luminal A, luminal B, Her2-enriched and basal-like); and last, we showed the ability to retrieve five different modes of action (MOA) elicited in livers of rats exposed to three toxicants under three nutritional routes by using the LSTNR method. By combining differential measurements with resolving power to detect DEGs, the LSTNR method offers an alternative approach to interrogate noisy and low-replication RNAseq datasets, which handles multiple biological conditions at once, and defines benchmarks to validate RNAseq experiments with standard benchtop assays. PMID:29868123
The effects of guided inquiry instruction on student achievement in high school biology
NASA Astrophysics Data System (ADS)
Vass, Laszlo
The purpose of this quantitative, quasi-experimental study was to measure the effect of a student-centered instructional method called guided inquiry on the achievement of students in a unit of study in high school biology. The study used a non-random sample of 109 students: the control group of 55 students enrolled at high school one received teacher-centered instruction, while the experimental group of 54 students enrolled at high school two received student-centered, guided inquiry instruction. The pretest-posttest design of the study analyzed scores using an independent t-test, a dependent t-test (p < .001), an ANCOVA (p = .007), a mixed-method ANOVA (p = .024) and hierarchical linear regression (p < .001). The experimental group that received guided inquiry instruction had statistically significantly higher achievement than the control group.
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1990-01-01
A structural power flow approach for the analysis of structure-borne transmission of vibrations is used to analyze the influence of structural parameters on transmitted power. The parametric analysis is also performed using the Statistical Energy Analysis approach, and the results are compared with those obtained using the power flow approach. The advantages of structural power flow analysis are demonstrated by comparing the type of results that are obtained by the two analytical methods. Also, to demonstrate that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental study of structural power flow is presented. This experimental study presents results for an L-shaped beam for which a solution was already available. Various methods to measure vibrational power flow are compared to study their advantages and disadvantages.
ToNER: A tool for identifying nucleotide enrichment signals in feature-enriched RNA-seq data.
Promworn, Yuttachon; Kaewprommal, Pavita; Shaw, Philip J; Intarapanich, Apichart; Tongsima, Sissades; Piriyapongsa, Jittima
2017-01-01
Biochemical methods are available for enriching 5' ends of RNAs in prokaryotes, which are employed in the differential RNA-seq (dRNA-seq) and the more recent Cappable-seq protocols. Computational methods are needed to locate RNA 5' ends from these data by statistical analysis of the enrichment. Although statistics-based analysis methods have been developed for dRNA-seq, they may not be suitable for Cappable-seq data. The more efficient enrichment method employed in Cappable-seq compared with dRNA-seq could affect data distribution and thus algorithm performance. We present Transformation of Nucleotide Enrichment Ratios (ToNER), a tool for statistical modeling of enrichment from RNA-seq data obtained from enriched and unenriched libraries. The tool calculates nucleotide enrichment scores and determines the global transformation for fitting to the normal distribution using the Box-Cox procedure. From the transformed distribution, sites of significant enrichment are identified. To increase power of detection, meta-analysis across experimental replicates is offered. We tested the tool on Cappable-seq and dRNA-seq data for identifying Escherichia coli transcript 5' ends and compared the results with those from the TSSAR tool, which is designed for analyzing dRNA-seq data. When combining results across Cappable-seq replicates, ToNER detects more known transcript 5' ends than TSSAR. In general, the transcript 5' ends detected by ToNER but not TSSAR occur in regions which cannot be locally modeled by TSSAR. ToNER uses a simple yet robust statistical modeling approach, which can be used for detecting RNA 5' ends from Cappable-seq data, in particular when combining information from experimental replicates. The ToNER tool could potentially be applied for analyzing other RNA-seq datasets in which enrichment for other structural features of RNA is employed. The program is freely available for download at the ToNER webpage (http://www4a.biotec.or.th/GI/tools/toner) and GitHub repository (https://github.com/PavitaKae/ToNER).
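A minimal sketch of the general enrichment-ratio plus Box-Cox workflow described above (simulated read counts, an ad hoc pseudocount and a z-score cutoff; this is not the ToNER implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# toy per-nucleotide read counts for an enriched and an unenriched library
unenriched = rng.negative_binomial(5, 0.3, 5000) + 1
enriched = unenriched.copy()
true_sites = rng.choice(5000, 50, replace=False)
enriched[true_sites] *= 8                       # strong enrichment at 50 "true" 5' ends

# nucleotide enrichment ratio, then a Box-Cox transformation toward normality
ratio = (enriched + 0.5) / (unenriched + 0.5)
transformed, lam = stats.boxcox(ratio)
print(f"fitted Box-Cox lambda = {lam:.2f}")

# call significantly enriched positions from the fitted normal distribution
z = (transformed - transformed.mean()) / transformed.std(ddof=1)
pvals = stats.norm.sf(z)
called = np.where(pvals < 1e-4)[0]
print(f"{len(called)} positions called, {np.isin(called, true_sites).sum()} of them true")
```

Meta-analysis across replicates, as offered by the tool, would combine such per-replicate p-values before calling enriched positions.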
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer-implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from the selected experimental and model electron density maps by clustering and averaging values of electron density in a spherical region about each point in the grid that defines each selected map. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of the density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.
Gene Profiling in Experimental Models of Eye Growth: Clues to Myopia Pathogenesis
Stone, Richard A.; Khurana, Tejvir S.
2010-01-01
To understand the complex regulatory pathways that underlie the development of refractive errors, expression profiling has evaluated gene expression in ocular tissues of well-characterized experimental models that alter postnatal eye growth and induce refractive errors. Derived from a variety of platforms (e.g. differential display, spotted microarrays or Affymetrix GeneChips), gene expression patterns are now being identified in species that include chicken, mouse and primate. Reconciling available results is hindered by varied experimental designs and analytical/statistical features. Continued application of these methods offers promise to provide the much-needed mechanistic framework to develop therapies to normalize refractive development in children. PMID:20363242
A Tutorial on Adaptive Design Optimization
Myung, Jay I.; Cavagnaro, Daniel R.; Pitt, Mark A.
2013-01-01
Experimentation is ubiquitous in the field of psychology and fundamental to the advancement of its science, and one of the biggest challenges for researchers is designing experiments that can conclusively discriminate the theoretical hypotheses or models under investigation. The recognition of this challenge has led to the development of sophisticated statistical methods that aid in the design of experiments and that are within the reach of everyday experimental scientists. This tutorial paper introduces the reader to an implementable experimentation methodology, dubbed Adaptive Design Optimization, that can help scientists to conduct “smart” experiments that are maximally informative and highly efficient, which in turn should accelerate scientific discovery in psychology and beyond. PMID:23997275
Detecting circular RNAs: bioinformatic and experimental challenges
Szabo, Linda; Salzman, Julia
2017-01-01
The pervasive expression of circular RNAs (circRNAs) is a recently discovered feature of gene expression in highly diverged eukaryotes. Numerous algorithms that are used to detect genome-wide circRNA expression from RNA sequencing (RNA-seq) data have been developed in the past few years, but there is little overlap in their predictions and no clear gold-standard method to assess the accuracy of these algorithms. We review sources of experimental and bioinformatic biases that complicate the accurate discovery of circRNAs and discuss statistical approaches to address these biases. We conclude with a discussion of the current experimental progress on the topic. PMID:27739534
[Analysis of master degree thesis of otolaryngology head and neck surgery in Xinjiang].
Ayiheng, Qukuerhan; Niliapaer, Alimu; Yalikun, Yasheng
2010-12-01
To understand the basic situation and the development of the knowledge structure and abilities of master's degree students in Otolaryngology Head and Neck Surgery in the Xinjiang region, in order to provide a reference for further improving the quality of postgraduate education. Forty-six master's degree theses in Otolaryngology from the period 1998-2009 in the Xinjiang region were reviewed at random in terms of type, subject selection range and statistical methods, in order to analyze their advantages and characteristics and to suggest solutions for their shortcomings. Of the 46 degree theses, nine were scientific research dissertations, accounting for 19.57%, and 37 were clinical professional degree theses, accounting for 80.43%. Five were experimental research papers, 30 were clinical research papers, 10 were combined clinical and experimental research papers, and one was an experimental epidemiology research paper. The theses covered diseases from every subspecialty of ENT and employed a variety of statistical methods; they cited 37.46 references on average, of which 19.55 were foreign-language references and 13.57 had been published within the preceding 5 years. The postgraduate students came from four ethnic groups, and their tutors had a high level of teaching and professional expertise. Clinical research should be emphasized in order to further the study of common ENT diseases, the application of advanced research methods, full use of the latest literature, high-level supervision, and the training of students of various nationalities; basic research needs to be innovative, should reflect the characteristics of the discipline, and should avoid excessive duplication of research.
Bastistella, Luciane; Rousset, Patrick; Aviz, Antonio; Caldeira-Pires, Armando; Humbert, Gilles; Nogueira, Manoel
2018-02-09
New experimental techniques, as well as modern variants on known methods, have recently been employed to investigate the fundamental reactions underlying the oxidation of biochar. The purpose of this paper was to experimentally and statistically study how the relative humidity of air, mass, and particle size of four biochars influenced the adsorption of water and the increase in temperature. A random factorial design was employed using the intuitive statistical software Xlstat. A simple linear regression model and an analysis of variance with a pairwise comparison were performed. The experimental study was carried out on the wood of Quercus pubescens, Cyclobalanopsis glauca, Trigonostemon huangmosun, and Bambusa vulgaris, and involved five relative humidity conditions (22, 43, 75, 84, and 90%), two mass samples (0.1 and 1 g), and two particle sizes (powder and piece). Two response variables, water adsorption and temperature increase, were analyzed and discussed. The temperature did not increase linearly with the adsorption of water. Temperature was modeled by nine explanatory variables, while water adsorption was modeled by eight. Five variables, including factors and their interactions, were found to be common to the two models. Sample mass and relative humidity influenced the two qualitative variables, while particle size and biochar type only influenced the temperature.
Vesterinen, Hanna V; Egan, Kieren; Deister, Amelie; Schlattmann, Peter; Macleod, Malcolm R; Dirnagl, Ulrich
2011-01-01
Translating experimental findings into clinically effective therapies is one of the major bottlenecks of modern medicine. As this has been particularly true for cerebrovascular research, attention has turned to the quality and validity of experimental cerebrovascular studies. We set out to assess the study design, statistical analyses, and reporting of cerebrovascular research. We assessed all original articles published in the Journal of Cerebral Blood Flow and Metabolism during the year 2008 against a checklist designed to capture the key attributes relating to study design, statistical analyses, and reporting. A total of 156 original publications were included (animal, in vitro, human). Few studies reported a primary research hypothesis, statement of purpose, or measures to safeguard internal validity (such as randomization, blinding, exclusion or inclusion criteria). Many studies lacked sufficient information regarding methods and results to form a reasonable judgment about their validity. In nearly 20% of studies, statistical tests were either not appropriate or information to allow assessment of appropriateness was lacking. This study identifies a number of factors that should be addressed if the quality of research in basic and translational biomedicine is to be improved. We support the widespread implementation of the ARRIVE (Animal Research Reporting In Vivo Experiments) statement for the reporting of experimental studies in biomedicine, for improving training in proper study design and analysis, and that reviewers and editors adopt a more constructively critical approach in the assessment of manuscripts for publication. PMID:21157472
Statistical error propagation in ab initio no-core full configuration calculations of light nuclei
Navarro Pérez, R.; Amaro, J. E.; Ruiz Arriola, E.; ...
2015-12-28
We propagate the statistical uncertainty of experimental NN scattering data into the binding energies of 3H and 4He. We also study the sensitivity of the magnetic moment and proton radius of 3H to changes in the NN interaction. The calculations are made with the no-core full configuration method in a sufficiently large harmonic oscillator basis. For those light nuclei we obtain ΔE_stat(3H) = 0.015 MeV and ΔE_stat(4He) = 0.055 MeV.
Uncertainty Quantification and Statistical Engineering for Hypersonic Entry Applications
NASA Technical Reports Server (NTRS)
Cozmuta, Ioana
2011-01-01
NASA has invested significant resources in developing and validating a mathematical construct for TPS margin management: a) Tailorable for low/high reliability missions; b) Tailorable for ablative/reusable TPS; c) Uncertainty Quantification and Statistical Engineering are valuable tools not exploited enough; and d) Need to define strategies combining both Theoretical Tools and Experimental Methods. The main reason for this lecture is to give a flavor of where UQ and SE could contribute and hope that the broader community will work with us to improve in these areas.
van de Streek, Jacco; Neumann, Marcus A
2010-10-01
This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
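A minimal sketch of the r.m.s. Cartesian displacement indicator used above (toy coordinates; it assumes the experimental and minimized structures share the same crystal frame, so no superposition step is included):

```python
import numpy as np

def rmsd_heavy_atoms(coords_exp, coords_min):
    """Root-mean-square Cartesian displacement (Å) between experimental and
    energy-minimized coordinates of the non-hydrogen atoms."""
    diff = np.asarray(coords_exp) - np.asarray(coords_min)
    return np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))

# toy 5-atom fragment: experimental vs. minimized Cartesian coordinates in Å
experimental = np.array([[0.00, 0.00, 0.00], [1.52, 0.00, 0.00], [2.10, 1.35, 0.00],
                         [3.62, 1.38, 0.05], [4.20, 2.70, 0.10]])
minimized = experimental + np.random.default_rng(7).normal(0, 0.05, experimental.shape)

value = rmsd_heavy_atoms(experimental, minimized)
print(f"r.m.s. displacement = {value:.3f} Å -> {'plausible' if value < 0.25 else 'check structure'}")
```

The 0.25 Å threshold used in the comparison mirrors the cutoff the paper reports as separating well-reproduced structures from problematic ones.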
Revision by means of computer-mediated peer discussions
NASA Astrophysics Data System (ADS)
Soong, Benson; Mercer, Neil; Er, Siew Shin
2010-05-01
In this article, we provide a discussion on our revision method (termed prescriptive tutoring) aimed at revealing students' misconceptions and misunderstandings by getting them to solve physics problems with an anonymous partner via the computer. It is currently being implemented and evaluated in a public secondary school in Singapore, and statistical analysis of our initial small-scale study shows that students in the experimental group significantly outperformed students in both the control and alternative intervention groups. In addition, students in the experimental group perceived that they had gained improved understanding of the physics concepts covered during the intervention, and reported that they would like to continue revising physics concepts using the intervention methods.
Development and Characterization of a Low-Pressure Calibration System for Hypersonic Wind Tunnels
NASA Technical Reports Server (NTRS)
Green, Del L.; Everhart, Joel L.; Rhode, Matthew N.
2004-01-01
Minimization of uncertainty is essential for accurate ESP measurements at very low free-stream static pressures found in hypersonic wind tunnels. Statistical characterization of environmental error sources requires a well defined and controlled calibration method. A calibration system has been constructed and environmental control software developed to control experimentation to eliminate human induced error sources. The initial stability study of the calibration system shows a high degree of measurement accuracy and precision in temperature and pressure control. Control manometer drift and reference pressure instabilities induce uncertainty into the repeatability of voltage responses measured from the PSI System 8400 between calibrations. Methods of improving repeatability are possible through software programming and further experimentation.
Statistical Modeling of Single Target Cell Encapsulation
Moon, SangJun; Ceyhan, Elvan; Gurkan, Umut Atakan; Demirci, Utkan
2011-01-01
High-throughput drop-on-demand systems for the separation and encapsulation of individual target cells from heterogeneous mixtures of multiple cell types are an emerging method in biotechnology with broad applications in tissue engineering and regenerative medicine, genomics, and cryobiology. However, cell encapsulation in droplets is a random process that is hard to control. Statistical models can provide an understanding of the underlying processes and estimation of the relevant parameters, and enable reliable and repeatable control over the encapsulation of cells in droplets during the isolation process with a high confidence level. We have modeled and experimentally verified a microdroplet-based cell encapsulation process for various combinations of cell loading and target cell concentrations. Here, we explain theoretically and validate experimentally a model to isolate and pattern single target cells from heterogeneous mixtures without using complex peripheral systems. PMID:21814548
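A hedged sketch of the kind of statistical model involved, assuming independent Poisson loading of target and non-target cells into droplets (the loading parameters below are illustrative, not the authors' fitted values):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Poisson probability of observing exactly k events with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

def single_target_cell_fraction(mean_cells_per_droplet, target_fraction):
    """Probability that a droplet contains exactly one target cell and no other cells,
    assuming independent Poisson loading of target and non-target cells."""
    lam_target = mean_cells_per_droplet * target_fraction
    lam_other = mean_cells_per_droplet * (1.0 - target_fraction)
    return poisson_pmf(1, lam_target) * poisson_pmf(0, lam_other)

for lam in (0.1, 0.5, 1.0, 2.0):
    p = single_target_cell_fraction(lam, target_fraction=0.1)
    print(f"mean cells/droplet = {lam:.1f}: P(single target, nothing else) = {p:.4f}")
```

The trade-off visible in the output, where dilute loading gives cleaner single-target droplets but fewer of them, is the kind of behaviour such models let experimenters control with a stated confidence level.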
Bayesian Treed Calibration: An Application to Carbon Capture With AX Sorbent
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konomi, Bledar A.; Karagiannis, Georgios; Lai, Kevin
2017-01-02
In cases where field or experimental measurements are not available, computer models can model real physical or engineering systems to reproduce their outcomes. They are usually calibrated in light of experimental data to create a better representation of the real system. Statistical methods, based on Gaussian processes, for calibration and prediction have been especially important when the computer models are expensive and experimental data limited. In this paper, we develop the Bayesian treed calibration (BTC) as an extension of standard Gaussian process calibration methods to deal with non-stationarity computer models and/or their discrepancy from the field (or experimental) data. Our proposed method partitions both the calibration and observable input space, based on a binary tree partitioning, into sub-regions where existing model calibration methods can be applied to connect a computer model with the real system. The estimation of the parameters in the proposed model is carried out using Markov chain Monte Carlo (MCMC) computational techniques. Different strategies have been applied to improve mixing. We illustrate our method in two artificial examples and a real application that concerns the capture of carbon dioxide with AX amine based sorbents. The source code and the examples analyzed in this paper are available as part of the supplementary materials.
Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S; Sinha, Saurabh
2011-12-01
Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, 'enhancers'), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for 'motif-blind' CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to 'supervise' the search. We propose a new statistical method, based on 'Interpolated Markov Models', for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. © The Author(s) 2011. Published by Oxford University Press.
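For reference, the standard hypergeometric enrichment test that the paper's extended test builds on can be computed as follows (toy counts; the paper's own test additionally accounts for variability in intergenic lengths):

```python
from scipy.stats import hypergeom

# toy enrichment test: of 15000 genes, 400 show the expected expression pattern;
# 30 genes neighbor predicted CRMs, and 8 of those show the pattern
N_genes, K_pattern, n_neighbors, k_hits = 15000, 400, 30, 8

# P(X >= k_hits) under the standard hypergeometric null of no enrichment
p_value = hypergeom.sf(k_hits - 1, N_genes, K_pattern, n_neighbors)
print(f"hypergeometric enrichment p-value = {p_value:.2e}")
```

The in silico evaluation described above asks this kind of question for each set of predicted CRMs, with the correction for intergenic-length variability replacing the plain hypergeometric null.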
Experimental design data for the biosynthesis of citric acid using Central Composite Design method.
Kola, Anand Kishore; Mekala, Mallaiah; Goli, Venkat Reddy
2017-06-01
In the present investigation, we report the statistical design and optimization of significant variables for the microbial production of citric acid from sucrose in the presence of the filamentous fungus A. niger NCIM 705. Various combinations of experiments were designed with the Central Composite Design (CCD) of Response Surface Methodology (RSM) for the production of citric acid as a function of six variables: initial sucrose concentration, initial pH of the medium, fermentation temperature, incubation time, stirrer rotational speed, and oxygen flow rate. From the experimental data, a statistical model for this process has been developed. The optimum conditions reported in the present article are an initial sucrose concentration of 163.6 g/L, an initial medium pH of 5.26, a stirrer rotational speed of 247.78 rpm, an incubation time of 8.18 days, a fermentation temperature of 30.06 °C and an oxygen flow rate of 1.35 lpm. Under optimum conditions the predicted maximum citric acid concentration is 86.42 g/L. The experimental validation carried out under the optimal values gave a citric acid concentration of 82.0 g/L. The model is able to represent the experimental data, and the agreement between the model and the experimental data is good.
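A minimal sketch of fitting a second-order response-surface model to a central-composite design and locating its stationary point (two coded factors and toy responses for brevity; the actual study used six variables):

```python
import numpy as np

# toy central-composite-style design in two coded factors (e.g. sucrose, pH),
# other factors held at their centre levels; responses are illustrative only
X1 = np.array([-1, -1,  1,  1, -1.41, 1.41,  0,    0,    0, 0, 0])
X2 = np.array([-1,  1, -1,  1,  0,    0,    -1.41, 1.41, 0, 0, 0])
y  = np.array([61, 66, 70, 72, 58,   74,    63,   69,   82, 81, 83])  # citric acid, g/L

# full quadratic response-surface model: b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2
A = np.column_stack([np.ones_like(X1), X1, X2, X1 * X2, X1 ** 2, X2 ** 2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients:", np.round(coef, 2))

# stationary point of the fitted quadratic surface (candidate optimum in coded units)
b = coef[1:3]
B = np.array([[coef[4], coef[3] / 2],
              [coef[3] / 2, coef[5]]])
x_star = np.linalg.solve(-2 * B, b)
print("stationary point (coded factors):", np.round(x_star, 2))
```

Converting the stationary point from coded back to natural units, and checking that the quadratic surface is concave there, gives the kind of optimum operating conditions the abstract reports.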
Using Performance Methods to Enhance Students' Reading Fluency
ERIC Educational Resources Information Center
Young, Chase; Valadez, Corinne; Gandara, Cori
2016-01-01
The quasi-experimental study examined the effects of pairing Rock and Read with Readers Theater and only Rock and Read on second grade students' reading fluency scores. The 51 subjects were pre- and post-tested on five different reading fluency measures. A series of 3 × 2 repeated measures ANOVAs revealed statistically significant interaction…
Edge Detection and Geometric Methods in Computer Vision,
1985-02-01
(enlightening discussion); derivations of Eqs. 3.29, 3.31, 3.32 (some statistics); experimental results (pictures) -- not very informative, extensive or useful. ... neurophysiology and hardware design. If one views the state space as a free vector space on the labels over the field of weights (which we take to be R), then
Bayesian statistics: estimating plant demographic parameters
James S. Clark; Michael Lavine
2001-01-01
There are times when external information should be brought to bear on an ecological analysis. Experiments are never conducted in a knowledge-free context. The inference we draw from an observation may depend on everything else we know about the process. Bayesian analysis is a method that brings outside evidence into the analysis of experimental and observational data...
USDA FS
1982-01-01
Instructions, illustrated with examples and experimental results, are given for the controlled-environment propagation and selection of poplar clones. Greenhouse and growth-room culture of poplar stock plants and scions are described, and statistical techniques for discriminating among clones on the basis of growth variables are emphasized.
Electronic contributions to the sigma(p) parameter of the Hammett equation.
Domingo, Luis R; Pérez, Patricia; Contreras, Renato
2003-07-25
A statistical procedure to obtain the intrinsic electronic contributions to the Hammett substituent constant sigma(p) is reported. The method is based on the comparison between the experimental sigma(p) values and the electronic electrophilicity index omega evaluated for a series of 42 functional groups commonly present in organic compounds.
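As a concrete illustration of the kind of comparison described, regressing experimental sigma(p) values on a computed electrophilicity index, here is a minimal Python sketch; the numerical values are invented placeholders, not the 42-group data set of the paper.

```python
import numpy as np
from scipy.stats import linregress

# Illustrative (made-up) values: experimental sigma(p) and computed omega
sigma_p = np.array([-0.27, 0.00, 0.23, 0.45, 0.78])   # e.g. OMe, H, Cl, CF3, NO2
omega   = np.array([0.95, 1.10, 1.30, 1.55, 1.90])     # electrophilicity index (eV)

fit = linregress(omega, sigma_p)
print(f"slope={fit.slope:.3f}, intercept={fit.intercept:.3f}, r={fit.rvalue:.3f}")
# Residuals of this regression would isolate contributions to sigma(p)
# that are not captured by the electronic index omega.
```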
Kiss High Blood Pressure Goodbye: The Relationship between Dark Chocolate and Hypertension
ERIC Educational Resources Information Center
Nordmoe, Eric D.
2008-01-01
This article reports on a delicious finding from a recent study claiming a causal link between dark chocolate consumption and blood pressure reductions. In the article, I provide ideas for using this study to whet student appetites for a discussion of statistical ideas, including experimental design, measurement error and inference methods.
Topographic ERP analyses: a step-by-step tutorial review.
Murray, Micah M; Brunet, Denis; Michel, Christoph M
2008-06-01
In this tutorial review, we detail both the rationale for as well as the implementation of a set of analyses of surface-recorded event-related potentials (ERPs) that uses the reference-free spatial (i.e. topographic) information available from high-density electrode montages to render statistical information concerning modulations in response strength, latency, and topography both between and within experimental conditions. In these and other ways these topographic analysis methods allow the experimenter to glean additional information and neurophysiologic interpretability beyond what is available from canonical waveform analyses. In this tutorial we present the example of somatosensory evoked potentials (SEPs) in response to stimulation of each hand to illustrate these points. For each step of these analyses, we provide the reader with both a conceptual and mathematical description of how the analysis is carried out, what it yields, and how to interpret its statistical outcome. We show that these topographic analysis methods are intuitive and easy-to-use approaches that can remove much of the guesswork often confronting ERP researchers and also assist in identifying the information contained within high-density ERP datasets.
Bayesian Estimation of Thermonuclear Reaction Rates for Deuterium+Deuterium Reactions
NASA Astrophysics Data System (ADS)
Gómez Iñesta, Á.; Iliadis, C.; Coc, A.
2017-11-01
The study of d+d reactions is of major interest since their reaction rates affect the predicted abundances of D, 3He, and 7Li. In particular, recent measurements of primordial D/H ratios call for reduced uncertainties in the theoretical abundances predicted by Big Bang nucleosynthesis (BBN). Different authors have studied reactions involved in BBN by incorporating new experimental data and a careful treatment of systematic and probabilistic uncertainties. To analyze the experimental data, Coc et al. used results of ab initio models for the theoretical calculation of the energy dependence of S-factors in conjunction with traditional statistical methods based on χ² minimization. Bayesian methods have now spread to many scientific fields and provide numerous advantages in data analysis. Astrophysical S-factors and reaction rates using Bayesian statistics were calculated by Iliadis et al. Here we present a similar analysis for two d+d reactions, d(d, n)3He and d(d, p)3H, which translates into a total decrease of 0.16% in the predicted D/H value.
NASA Astrophysics Data System (ADS)
Zakynthinaki, Maria S.; Stirling, James R.; Cordente Martínez, Carlos A.; Díaz de Durana, Alfonso López; Quintana, Manuel Sillero; Romo, Gabriel Rodríguez; Molinuevo, Javier Sampedro
2010-03-01
We present a method of modeling the basin of attraction as a three-dimensional function describing a two-dimensional manifold on which the dynamics of the system evolves from experimental time series data. Our method is based on the density of the data set and uses numerical optimization and data modeling tools. We also show how to obtain analytic curves that describe both the contours and the boundary of the basin. Our method is applied to the problem of regaining balance after perturbation from quiet vertical stance using data of an elite athlete. Our method goes beyond the statistical description of the experimental data, providing a function that describes the shape of the basin of attraction. To test its robustness, our method has also been applied to two different data sets of a second subject and no significant differences were found between the contours of the calculated basin of attraction for the different data sets. The proposed method has many uses in a wide variety of areas, not just human balance for which there are many applications in medicine, rehabilitation, and sport.
The Taguchi Method Application to Improve the Quality of a Sustainable Process
NASA Astrophysics Data System (ADS)
Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.
2018-06-01
Taguchi's method has long been used to improve the quality of the processes and products analyzed. This research addresses an unusual situation: the modeling of selected technical parameters in a process intended to be sustainable, improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of the interaction between the principles of agricultural sustainability and the application of Taguchi's method. The experimental method used in this practical study combines engineering techniques with statistical experimental modeling to achieve rapid improvement of quality costs, in effect seeking optimization of the existing processes and of the main technical parameters. The paper is a technical study that promotes a technical experiment using the Taguchi method, considered effective because it allows 70 to 90% of the desired optimization of the technical parameters to be achieved rapidly. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying Taguchi's method allowed the simultaneous study, in the same experiment, of the influence factors considered most important in different combinations and, at the same time, the determination of each factor's contribution.
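To make the mechanics of a Taguchi-style analysis concrete, the sketch below computes larger-is-better signal-to-noise ratios and factor effects for a small hypothetical orthogonal array; the array, responses and factor labels are assumptions for illustration only, not the parameters of this agricultural study.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio for a 'larger-is-better' response."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Hypothetical L4(2^3) orthogonal array (3 factors, 2 levels) with replicates
L4 = np.array([[1, 1, 1],
               [1, 2, 2],
               [2, 1, 2],
               [2, 2, 1]])
responses = [[68, 70], [74, 73], [71, 69], [80, 78]]  # two replicates per run
sn = np.array([sn_larger_is_better(r) for r in responses])

# Main effect of each factor on S/N: mean at level 2 minus mean at level 1
for f in range(L4.shape[1]):
    effect = sn[L4[:, f] == 2].mean() - sn[L4[:, f] == 1].mean()
    print(f"factor {f + 1}: effect on S/N = {effect:.2f} dB")
```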
Robust Combining of Disparate Classifiers Through Order Statistics
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Ghosh, Joydeep
2001-01-01
Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In this article we investigate a family of combiners based on order statistics, for robust handling of situations where there are large discrepancies in performance of individual classifiers. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when simple output combination methods based on the median, the maximum and, in general, the ith order statistic are used. Furthermore, we analyze the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that in the presence of uneven classifier performance, they often provide substantial gains over both linear and simple order statistics combiners. Experimental results on both real-world data and standard public domain data sets corroborate these findings.
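A minimal sketch of the basic idea, combining class scores from several classifiers via an order statistic such as the median, is shown below; the scores are invented, and the trim and spread variants analyzed in the article are not implemented here.

```python
import numpy as np

def order_statistic_combiner(outputs, i=None):
    """Combine classifier outputs via the i-th order statistic.

    outputs -- array of shape (n_classifiers, n_classes) with class scores
    i       -- index of the order statistic (0 = min, -1 = max);
               defaults to the median.
    """
    if i is None:
        combined = np.median(outputs, axis=0)      # median combiner
    else:
        combined = np.sort(outputs, axis=0)[i]     # i-th order statistic per class
    return np.argmax(combined)                     # predicted class index

# Three hypothetical classifiers scoring four classes
scores = np.array([[0.10, 0.60, 0.20, 0.10],
                   [0.05, 0.70, 0.15, 0.10],
                   [0.40, 0.30, 0.20, 0.10]])      # one poorly calibrated classifier
print(order_statistic_combiner(scores))            # median is robust to the outlier
```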
Kinter, Elizabeth T; Prior, Thomas J; Carswell, Christopher I; Bridges, John F P
2012-01-01
While the application of conjoint analysis and discrete-choice experiments in health are now widely accepted, a healthy debate exists around competing approaches to experimental design. There remains, however, a paucity of experimental evidence comparing competing design approaches and their impact on the application of these methods in patient-centered outcomes research. Our objectives were to directly compare the choice-model parameters and predictions of an orthogonal and a D-efficient experimental design using a randomized trial (i.e., an experiment on experiments) within an application of conjoint analysis studying patient-centered outcomes among outpatients diagnosed with schizophrenia in Germany. Outpatients diagnosed with schizophrenia were surveyed and randomized to receive choice tasks developed using either an orthogonal or a D-efficient experimental design. The choice tasks elicited judgments from the respondents as to which of two patient profiles (varying across seven outcomes and process attributes) was preferable from their own perspective. The results from the two survey designs were analyzed using the multinomial logit model, and the resulting parameter estimates and their robust standard errors were compared across the two arms of the study (i.e., the orthogonal and D-efficient designs). The predictive performances of the two resulting models were also compared by computing their percentage of survey responses classified correctly, and the potential for variation in scale between the two designs of the experiments was tested statistically and explored graphically. The results of the two models were statistically identical. No difference was found using an overall chi-squared test of equality for the seven parameters (p = 0.69) or via uncorrected pairwise comparisons of the parameter estimates (p-values ranged from 0.30 to 0.98). The D-efficient design resulted in directionally smaller standard errors for six of the seven parameters, of which only two were statistically significant, and no differences were found in the observed D-efficiencies of their standard errors (p = 0.62). The D-efficient design resulted in poorer predictive performance, but this was not significant (p = 0.73); there was some evidence that the parameters of the D-efficient design were biased marginally towards the null. While no statistical difference in scale was detected between the two designs (p = 0.74), the D-efficient design had a higher relative scale (1.06). This could be observed when the parameters were explored graphically, as the D-efficient parameters were lower. Our results indicate that orthogonal and D-efficient experimental designs have produced results that are statistically equivalent. This said, we have identified several qualitative findings that speak to the potential differences in these results that may have been statistically identified in a larger sample. While more comparative studies focused on the statistical efficiency of competing design strategies are needed, a more pressing research problem is to document the impact the experimental design has on respondent efficiency.
Kazdal, Hizir; Kanat, Ayhan; Aydin, Mehmet Dumlu; Yazar, Ugur; Guvercin, Ali Riza; Calik, Muhammet; Gundogdu, Betul
2017-01-01
Context: Sudden death from subarachnoid hemorrhage (SAH) is not uncommon. Aims: The goal of this study is to elucidate the effect of the cervical spinal roots and the related dorsal root ganglions (DRGs) on cardiorespiratory arrest following SAH. Settings and Design: This was an experimental study conducted on rabbits. Materials and Methods: This study was conducted on 22 rabbits that were randomly divided into three groups: control (n = 5), physiologic serum saline (SS; n = 6), and SAH (n = 11) groups. Experimental SAH was performed. Seven of 11 rabbits with SAH died within the first 2 weeks. After 20 days, the remaining animals were sacrificed. The anterior spinal arteries, arteriae nervorum of cervical nerve roots (C6–C8), DRGs, and lungs were histopathologically examined and estimated stereologically. Statistical Analysis Used: Statistical analysis was performed using PASW Statistics 18.0 for Windows (SPSS Inc., Chicago, Illinois, USA). Intergroup differences were assessed using a one-way ANOVA. Statistical significance was set at P < 0.05. Results: In the SAH group, histopathologically, severe anterior spinal artery (ASA) and arteriae nervorum vasospasm, axonal and neuronal degeneration, and neuronal apoptosis were observed. Vasospasm of the ASA did not occur in the SS and control groups. There was a statistically significant increase in the degenerated neuron density in the SAH group as compared to the control and SS groups (P < 0.05). Cardiorespiratory disturbances, arrest, and lung edema developed more commonly in animals in the SAH group. Conclusion: Interestingly, we noted that C6–C8 DRG degeneration was secondary to vasospasm of the ASA following SAH. These mechanisms can explain cardiorespiratory disturbances or arrest. PMID:28250634
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method based on Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR, using dilutions of an integration standard and samples from 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need for a standard dilution curve. Implementation of the confidence interval (CI) estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
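The core of a single-dilution Poisson quantification, estimating the mean number of templates per replicate from the fraction of negative reactions and propagating a confidence interval from the binomial count, can be sketched as follows; this is the textbook estimator under those assumptions, not necessarily the authors' exact implementation, and the replicate counts shown are illustrative.

```python
import math
from scipy.stats import beta

def poisson_estimate(n_total, n_negative, conf=0.95):
    """Mean number of templates per replicate from the fraction of negatives.

    Assumes templates are Poisson-distributed across replicates, so
    P(negative) = exp(-lam) and lam = -ln(n_negative / n_total).
    The CI is propagated from a Clopper-Pearson interval on the negative
    fraction (requires 0 < n_negative < n_total).
    """
    p_hat = n_negative / n_total
    lam = -math.log(p_hat)
    a = 1.0 - conf
    p_lo = beta.ppf(a / 2, n_negative, n_total - n_negative + 1)
    p_hi = beta.ppf(1 - a / 2, n_negative + 1, n_total - n_negative)
    return lam, -math.log(p_hi), -math.log(p_lo)   # estimate, lower, upper

print(poisson_estimate(n_total=42, n_negative=17))
```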
NASA Astrophysics Data System (ADS)
Düsterer, S.; Rehders, M.; Al-Shemmary, A.; Behrens, C.; Brenner, G.; Brovko, O.; DellAngela, M.; Drescher, M.; Faatz, B.; Feldhaus, J.; Frühling, U.; Gerasimova, N.; Gerken, N.; Gerth, C.; Golz, T.; Grebentsov, A.; Hass, E.; Honkavaara, K.; Kocharian, V.; Kurka, M.; Limberg, Th.; Mitzner, R.; Moshammer, R.; Plönjes, E.; Richter, M.; Rönsch-Schulenburg, J.; Rudenko, A.; Schlarb, H.; Schmidt, B.; Senftleben, A.; Schneidmiller, E. A.; Siemer, B.; Sorgenfrei, F.; Sorokin, A. A.; Stojanovic, N.; Tiedtke, K.; Treusch, R.; Vogt, M.; Wieland, M.; Wurth, W.; Wesch, S.; Yan, M.; Yurkov, M. V.; Zacharias, H.; Schreiber, S.
2014-12-01
One of the most challenging tasks for extreme ultraviolet, soft and hard x-ray free-electron laser photon diagnostics is the precise determination of the photon pulse duration, which is typically in the sub 100 fs range. Nine different methods, able to determine such ultrashort photon pulse durations, were compared experimentally at FLASH, the self-amplified spontaneous emission free-electron laser at DESY in Hamburg, in order to identify advantages and disadvantages of different methods. Radiation pulses at a wavelength of 13.5 and 24.0 nm together with the corresponding electron bunch duration were measured by indirect methods like analyzing spectral correlations, statistical fluctuations, and energy modulations of the electron bunch and also by direct methods like autocorrelation techniques, terahertz streaking, or reflectivity changes of solid state samples. In this paper, we present a comprehensive overview of the various techniques and a comparison of the individual experimental results. The information gained is of utmost importance for the future development of reliable pulse duration monitors indispensable for successful experiments with ultrashort extreme ultraviolet pulses.
The secret lives of experiments: methods reporting in the fMRI literature.
Carp, Joshua
2012-10-15
Replication of research findings is critical to the progress of scientific understanding. Accordingly, most scientific journals require authors to report experimental procedures in sufficient detail for independent researchers to replicate their work. To what extent do research reports in the functional neuroimaging literature live up to this standard? The present study evaluated methods reporting and methodological choices across 241 recent fMRI articles. Many studies did not report critical methodological details with regard to experimental design, data acquisition, and analysis. Further, many studies were underpowered to detect any but the largest statistical effects. Finally, data collection and analysis methods were highly flexible across studies, with nearly as many unique analysis pipelines as there were studies in the sample. Because the rate of false positive results is thought to increase with the flexibility of experimental designs, the field of functional neuroimaging may be particularly vulnerable to false positives. In sum, the present study documented significant gaps in methods reporting among fMRI studies. Improved methodological descriptions in research reports would yield significant benefits for the field. Copyright © 2012 Elsevier Inc. All rights reserved.
Walker, Martin; Basáñez, María-Gloria; Ouédraogo, André Lin; Hermsen, Cornelus; Bousema, Teun; Churcher, Thomas S
2015-01-16
Quantitative molecular methods (QMMs) such as quantitative real-time polymerase chain reaction (q-PCR), reverse-transcriptase PCR (qRT-PCR) and quantitative nucleic acid sequence-based amplification (QT-NASBA) are increasingly used to estimate pathogen density in a variety of clinical and epidemiological contexts. These methods are often classified as semi-quantitative, yet estimates of reliability or sensitivity are seldom reported. Here, a statistical framework is developed for assessing the reliability (uncertainty) of pathogen densities estimated using QMMs and the associated diagnostic sensitivity. The method is illustrated with quantification of Plasmodium falciparum gametocytaemia by QT-NASBA. The reliability of pathogen (e.g. gametocyte) densities, and the accompanying diagnostic sensitivity, estimated by two contrasting statistical calibration techniques are compared: a traditional method and a mixed-model Bayesian approach. The latter accounts for statistical dependence of QMM assays run under identical laboratory protocols and permits structural modelling of experimental measurements, allowing precision to vary with pathogen density. Traditional calibration cannot account for inter-assay variability arising from imperfect QMMs and generates estimates of pathogen density that have poor reliability, are variable among assays and inaccurately reflect diagnostic sensitivity. The Bayesian mixed model approach assimilates information from replica QMM assays, improving reliability and inter-assay homogeneity, providing an accurate appraisal of quantitative and diagnostic performance. Bayesian mixed model statistical calibration supersedes traditional techniques in the context of QMM-derived estimates of pathogen density, offering the potential to improve substantially the depth and quality of clinical and epidemiological inference for a wide variety of pathogens.
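For orientation, the traditional calibration that the Bayesian mixed model is compared against amounts to fitting a standard curve to dilution standards and inverting it; a minimal Python sketch is below, with entirely hypothetical standards and signal values, and without the inter-assay variance components that the paper's Bayesian approach adds.

```python
import numpy as np

# Hypothetical standard dilution series: known gametocyte densities and the
# corresponding QT-NASBA time-to-positivity (arbitrary units)
log10_density = np.log10([10, 100, 1000, 10000, 100000])
signal = np.array([62.0, 55.5, 48.9, 42.7, 36.1])

# Traditional calibration: straight line fitted to the standards ...
slope, intercept = np.polyfit(log10_density, signal, deg=1)

# ... then inverse prediction of an unknown sample's density from its signal
def estimate_density(sample_signal):
    return 10 ** ((sample_signal - intercept) / slope)

print(f"estimated density: {estimate_density(50.2):.0f} gametocytes/uL")
```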
A Robust Adaptive Autonomous Approach to Optimal Experimental Design
NASA Astrophysics Data System (ADS)
Gu, Hairong
Experimentation is the fundamental tool of scientific inquiry for understanding the laws governing nature and human behavior. Many complex real-world experimental scenarios, particularly those in quest of prediction accuracy, encounter difficulties in conducting experiments with existing experimental procedures, for two reasons. First, the existing experimental procedures require a parametric model to serve as the proxy of the latent data structure or data-generating mechanism at the beginning of an experiment. However, for those experimental scenarios of concern, a sound model is often unavailable before an experiment. Second, those experimental scenarios usually contain a large number of design variables, which potentially leads to a lengthy and costly data collection cycle. Moreover, the existing experimental procedures are unable to optimize large-scale experiments so as to minimize experimental length and cost. Facing these two challenges, the aim of the present study is to develop a new experimental procedure that allows an experiment to be conducted without the assumption of a parametric model while still achieving satisfactory prediction, and that optimizes experimental designs to improve the efficiency of an experiment. The new experimental procedure developed in the present study is named robust adaptive autonomous system (RAAS). RAAS is a procedure for sequential experiments composed of multiple experimental trials, which performs function estimation, variable selection, reverse prediction and design optimization on each trial. Directly addressing the challenges in those experimental scenarios of concern, function estimation and variable selection are performed by data-driven modeling methods to generate a predictive model from data collected during the course of an experiment, thus removing the requirement of a parametric model at the beginning of an experiment; design optimization is performed to select experimental designs on the fly, based on their usefulness, so that the fewest designs are needed to reach useful inferential conclusions. Technically, function estimation is realized by Bayesian P-splines, variable selection is realized by a Bayesian spike-and-slab prior, reverse prediction is realized by grid search and design optimization is realized by concepts from active learning. The present study demonstrated that RAAS achieves statistical robustness by making accurate predictions without the assumption of a parametric model serving as the proxy of the latent data structure, whereas the existing procedures can draw poor statistical inferences if a misspecified model is assumed; RAAS also achieves inferential efficiency by taking fewer designs to acquire useful statistical inferences than non-optimal procedures. Thus, RAAS is expected to be a principled solution for real-world experimental scenarios pursuing robust prediction and efficient experimentation.
Stated Choice design comparison in a developing country: recall and attribute nonattendance
2014-01-01
Background Experimental designs constitute a vital component of all Stated Choice (aka discrete choice experiment) studies. However, there exists limited empirical evaluation of the statistical benefits of Stated Choice (SC) experimental designs that employ non-zero prior estimates in constructing non-orthogonal constrained designs. This paper statistically compares the performance of contrasting SC experimental designs. In so doing, the effect of respondent literacy on patterns of Attribute non-Attendance (ANA) across fractional factorial orthogonal and efficient designs is also evaluated. The study uses a ‘real’ SC design to model consumer choice of primary health care providers in rural north India. A total of 623 respondents were sampled across four villages in Uttar Pradesh, India. Methods Comparison of orthogonal and efficient SC experimental designs is based on several measures. Appropriate comparison of each design’s respective efficiency measure is made using D-error results. Standardised Akaike Information Criteria are compared between designs and across recall periods. Comparisons control for stated and inferred ANA. Coefficient and standard error estimates are also compared. Results The added complexity of the efficient SC design, theorised elsewhere, is reflected in higher estimated amounts of ANA among illiterate respondents. However, controlling for ANA using stated and inferred methods consistently shows that the efficient design performs statistically better. Modelling SC data from the orthogonal and efficient design shows that model-fit of the efficient design outperform the orthogonal design when using a 14-day recall period. The performance of the orthogonal design, with respect to standardised AIC model-fit, is better when longer recall periods of 30-days, 6-months and 12-months are used. Conclusions The effect of the efficient design’s cognitive demand is apparent among literate and illiterate respondents, although, more pronounced among illiterate respondents. This study empirically confirms that relaxing the orthogonality constraint of SC experimental designs increases the information collected in choice tasks, subject to the accuracy of the non-zero priors in the design and the correct specification of a ‘real’ SC recall period. PMID:25386388
An Improved Swarm Optimization for Parameter Estimation and Biological Model Selection
Abdullah, Afnizanfaizal; Deris, Safaai; Mohamad, Mohd Saberi; Anwar, Sohail
2013-01-01
One of the key aspects of computational systems biology is the investigation of the dynamic biological processes within cells. Computational models are often required to elucidate the mechanisms and principles driving the processes because of their nonlinearity and complexity. The models usually incorporate a set of parameters that signify the physical properties of the actual biological systems. In most cases, these parameters are estimated by fitting the model outputs to the corresponding experimental data. However, this is a challenging task because the available experimental data are frequently noisy and incomplete. In this paper, a new hybrid optimization method is proposed to estimate these parameters from the noisy and incomplete experimental data. The proposed method, called Swarm-based Chemical Reaction Optimization, integrates the evolutionary search strategy employed by Chemical Reaction Optimization into the neighbourhood search strategy of the Firefly Algorithm. The effectiveness of the method was evaluated using a simulated nonlinear model and two biological models: synthetic transcriptional oscillators, and extracellular protease production models. The results showed that the accuracy and computational speed of the proposed method were better than the existing Differential Evolution, Firefly Algorithm and Chemical Reaction Optimization methods. The reliability of the estimated parameters was statistically validated, which suggests that the model outputs produced by these parameters were valid even when noisy and incomplete experimental data were used. Additionally, the Akaike Information Criterion was employed to evaluate model selection, which highlighted the capability of the proposed method in choosing a plausible model based on the experimental data. In conclusion, this paper presents the effectiveness of the proposed method for parameter estimation and model selection problems using noisy and incomplete experimental data. It is hoped that this study provides new insight into developing more accurate and reliable biological models from limited and low-quality experimental data. PMID:23593445
NASA Astrophysics Data System (ADS)
Most, S.; Nowak, W.; Bijeljic, B.
2014-12-01
Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (i) a statistical dependence between increments, modelled as an order-k Markov process, that reduces to order 1 (this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start); (ii) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (iii) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
Prediction of Enzyme Mutant Activity Using Computational Mutagenesis and Incremental Transduction
Basit, Nada; Wechsler, Harry
2011-01-01
Wet laboratory mutagenesis to determine enzyme activity changes is expensive and time consuming. This paper expands on standard one-shot learning by proposing an incremental transductive method (T2bRF) for the prediction of enzyme mutant activity during mutagenesis using Delaunay tessellation and 4-body statistical potentials for representation. Incremental learning is in tune with both eScience and actual experimentation, as it accounts for cumulative annotation effects of enzyme mutant activity over time. The experimental results reported, using cross-validation, show that overall the incremental transductive method proposed, using random forest as base classifier, yields better results compared to one-shot learning methods. T2bRF is shown to yield 90% on T4 and LAC (and 86% on HIV-1). This is significantly better than state-of-the-art competing methods, whose performance yield is at 80% or less using the same datasets. PMID:22007208
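The representation step, tessellating residue positions into tetrahedra whose residue-type quadruplets index a 4-body statistical potential, can be illustrated with a short Python sketch; the coordinates and residue labels below are random placeholders, and the actual potential lookup derived from known structures is omitted.

```python
import numpy as np
from scipy.spatial import Delaunay

# Hypothetical C-alpha coordinates of a small protein (n_residues x 3)
coords = np.random.rand(60, 3) * 40.0
residue_types = np.random.randint(0, 20, size=60)   # placeholder residue labels

tess = Delaunay(coords)

# Each simplex is a tetrahedron of four residues; in the 4-body statistical
# potential approach, the quadruplet of residue types would index a
# log-likelihood score derived from a database of known structures.
quadruplets = [tuple(sorted(residue_types[s])) for s in tess.simplices]
print(len(quadruplets), quadruplets[:3])
```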
Examination of the Ovarian Reserve after Generation of Unilateral Rudimentary Uterine Horns in Rats
Toyganözü, Hasan; Nazik, Hakan; Narin, Raziye; Satar, Deniz; Narin, Mehmet Ali; Büyüknacar, Sinem; Api, Murat; Aytan, Hakan
2014-01-01
Objective. The purpose of this experimental rat model study is to evaluate the changes in the ovarian environment after excision of the rudimentary horn. Methods. Ten female Wistar albino rats were used in this study. One cm of right uterine horn length was excised in the first operation. Two months after the first operation, all animals were sacrificed to obtain ovaries for histological examination. The Mann-Whitney U test and Student's t-test were used for statistical analysis. Statistical significance was defined as P < 0.005. Results. The number of primordial follicles (P = 0.415), primary follicles (P = 0.959), preantral follicles (P = 0.645), antral follicles (P = 0.328), and Graafian follicles (P = 0.721) was decreased and the number of atretic follicles (P = 0.374) increased on the right ovarian side. However, this difference was not found to be statistically significant. Conclusion. The results of this experimental rat model study suggest that the excision of the rudimentary horn could have negative effects on ipsilateral ovarian function. PMID:24672393
Dynamic causal modelling: a critical review of the biophysical and statistical foundations.
Daunizeau, J; David, O; Stephan, K E
2011-09-15
The goal of dynamic causal modelling (DCM) of neuroimaging data is to study experimentally induced changes in functional integration among brain regions. This requires (i) biophysically plausible and physiologically interpretable models of neuronal network dynamics that can predict distributed brain responses to experimental stimuli and (ii) efficient statistical methods for parameter estimation and model comparison. These two key components of DCM have been the focus of more than thirty methodological articles since the seminal work of Friston and colleagues published in 2003. In this paper, we provide a critical review of the current state-of-the-art of DCM. We inspect the properties of DCM in relation to the most common neuroimaging modalities (fMRI and EEG/MEG) and the specificity of inference on neural systems that can be made from these data. We then discuss both the plausibility of the underlying biophysical models and the robustness of the statistical inversion techniques. Finally, we discuss potential extensions of the current DCM framework, such as stochastic DCMs, plastic DCMs and field DCMs. Copyright © 2009 Elsevier Inc. All rights reserved.
A persuasive concept of research-oriented teaching in Soil Biochemistry
NASA Astrophysics Data System (ADS)
Blagodatskaya, Evgenia; Kuzyakova, Irina
2013-04-01
One of the main problems of existing bachelor programs is the disconnection of basic and experimental education: even during practical training, the methods learned are not related to the characterization of soil field experiments and observed soil processes. We introduce a multi-level research-oriented teaching system involving Bachelor students in four semesters of active study by integrating basic knowledge, experimental techniques, statistical approaches, project design and its realization. The novelty of the research-oriented teaching system is based (1) on linking an ongoing experiment to the study of statistical methods and (2) on the students' own responsibility for interpreting the soil chemical and biochemical characteristics obtained at the very beginning of their study by analysing a set of soil samples that allows full-factorial data treatment. This experimental data set is related to a specific soil stand and is used as the backbone of the teaching system, stimulating the students' interest in soil studies and motivating them to apply basic knowledge from lecture courses. The multi-level system includes: 1) a basic lecture course on soil biochemistry with analysis of research questions; 2) a practical training course on laboratory analytics, where small groups of students are responsible for the analysis of soil samples related to a specific land use/forest type/forest age; 3) a training course on biotic (e.g. respiration) - abiotic (e.g. temperature, moisture, fire) interactions in the same soil samples; 4) theoretical seminars where students present and make a first attempt to explain the soil characteristics of various soil stands as affected by abiotic factors (first semester); 5) a lecture and seminar course on soil statistics, where students apply newly learned statistical methods to support their conclusions and to find relationships between the soil characteristics obtained during the first semester; 6) a seminar course on project design, where students develop scientific projects to study the uncertainties revealed in soil responses to abiotic factors (second and third semesters); 7) lecture, seminar and training courses on the estimation of active microbial biomass in soil, where students realize their projects by applying the new knowledge to the soils from the stands they are responsible for (fourth semester). Thus, during four semesters the students continuously combine theoretical knowledge from the lectures with their own experimental experience, compare and discuss the results of the various groups during seminars, and acquire skills in project design. The successful application of the research-oriented teaching system at the University of Göttingen allowed each student to reveal knowledge gaps at an early stage, accelerated their involvement in ongoing research projects, and motivated them to begin their own scientific careers.
Bredbenner, Todd L.; Eliason, Travis D.; Francis, W. Loren; McFarland, John M.; Merkle, Andrew C.; Nicolella, Daniel P.
2014-01-01
Cervical spinal injuries are a significant concern in all trauma injuries. Recent military conflicts have demonstrated the substantial risk of spinal injury for the modern warfighter. Finite element models used to investigate injury mechanisms often fail to examine the effects of variation in geometry or material properties on mechanical behavior. The goals of this study were to model geometric variation for a set of cervical spines, to extend this model to a parametric finite element model, and, as a first step, to validate the parametric model against experimental data for low-loading conditions. Individual finite element models were created using cervical spine (C3–T1) computed tomography data for five male cadavers. Statistical shape modeling (SSM) was used to generate a parametric finite element model incorporating variability of spine geometry, and soft-tissue material property variation was also included. The probabilistic loading response of the parametric model was determined under flexion-extension, axial rotation, and lateral bending and validated by comparison to experimental data. Based on qualitative and quantitative comparison of the experimental loading response and model simulations, we suggest that the model performs adequately under relatively low-level loading conditions in multiple loading directions. In conclusion, SSM methods coupled with finite element analyses within a probabilistic framework, along with the ability to statistically validate the overall model performance, provide innovative and important steps toward describing the differences in vertebral morphology, spinal curvature, and variation in material properties. We suggest that these methods, with additional investigation and validation under injurious loading conditions, will lead to understanding and mitigating the risks of injury in the spine and other musculoskeletal structures. PMID:25506051
NASA Astrophysics Data System (ADS)
Navarrete, Álvaro; Wang, Wenyuan; Xu, Feihu; Curty, Marcos
2018-04-01
The experimental characterization of multi-photon quantum interference effects in optical networks is essential in many applications of photonic quantum technologies, which include quantum computing and quantum communication as two prominent examples. However, such characterization often requires technologies which are beyond our current experimental capabilities, and today's methods suffer from errors due to the use of imperfect sources and photodetectors. In this paper, we introduce a simple experimental technique to characterize multi-photon quantum interference by means of practical laser sources and threshold single-photon detectors. Our technique is based on well-known methods in quantum cryptography which use decoy settings to tightly estimate the statistics provided by perfect devices. As an illustration of its practicality, we use this technique to obtain a tight estimation of both the generalized Hong‑Ou‑Mandel dip in a beamsplitter with six input photons and the three-photon coincidence probability at the output of a tritter.
Guide to Using Onionskin Analysis Code (U)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fugate, Michael Lynn; Morzinski, Jerome Arthur
2016-09-15
This document is a guide to using R-code written for the purpose of analyzing onionskin experiments. We expect the user to be very familiar with statistical methods and the R programming language. For more details about onionskin experiments and the statistical methods mentioned in this document see Storlie, Fugate, et al. (2013). Engineers at LANL experiment with detonators and high explosives to assess performance. The experimental unit, called an onionskin, is a hemisphere consisting of a detonator and a booster pellet surrounded by explosive material. When the detonator explodes, a streak camera mounted above the pole of the hemisphere records when the shock wave arrives at the surface. The output from the camera is a two-dimensional image that is transformed into a curve that shows the arrival time as a function of polar angle. The statistical challenge is to characterize a baseline population of arrival time curves and to compare the baseline curves to curves from a new, so-called, test series. The hope is that the new test series of curves is statistically similar to the baseline population.
NASA Astrophysics Data System (ADS)
Dixon, K. W.; Balaji, V.; Lanzante, J.; Radhakrishnan, A.; Hayhoe, K.; Stoner, A. K.; Gaitan, C. F.
2013-12-01
Statistical downscaling (SD) methods may be viewed as generating a value-added product - a refinement of global climate model (GCM) output designed to add finer scale detail and to address GCM shortcomings via a process that gleans information from a combination of observations and GCM-simulated climate change responses. Making use of observational data sets and GCM simulations representing the same historical period, cross-validation techniques allow one to assess how well an SD method meets this goal. However, lacking observations of the future, the extent to which a particular SD method's skill might degrade when applied to future climate projections cannot be assessed in the same manner. Here we illustrate and describe extensions to a 'perfect model' experimental design that seeks to quantify aspects of SD method performance both for a historical period (1979-2008) and for late 21st century climate projections. Examples highlighting cases in which downscaling performance deteriorates in future climate projections will be discussed. Also, results will be presented showing how synthetic datasets having known statistical properties may be used to further isolate factors responsible for degradations in SD method skill under changing climatic conditions. We will describe a set of input files used to conduct these analyses that are being made available to researchers who wish to utilize this experimental framework to evaluate SD methods they have developed. The gridded data sets cover a region centered on the contiguous 48 United States with a grid spacing of approximately 25km, have daily time resolution (e.g., maximum and minimum near-surface temperature and precipitation), and represent a total of 120 years of model simulations. This effort is consistent with the 2013 National Climate Predictions and Projections Platform Quantitative Evaluation of Downscaling Workshop goal of supporting a community approach to promote the informed use of downscaled climate projections.
Inferring gene regression networks with model trees
2010-01-01
Background Novel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes, building the so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful in order to determine whether two genes have a strong global similarity but do not detect local similarities. Results We propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph from all the relationships among output and input genes is built taking into account whether the pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: a Saccharomyces cerevisiae and an E. coli data set. First, the biological coherence of the results is tested. Second, the E. coli transcriptional network (in the Regulon database) is used as a control to compare the results with those of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth and first-order correlation-based methods. Conclusions REGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are often more precise than linear regression models because they can fit different linear regressions to separate areas of the search space, favoring the inference of localized similarities over a more global similarity. Furthermore, experimental results show the good performance of REGNET. PMID:20950452
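As an illustration of the general tree-per-gene idea (not REGNET itself, which uses model trees with linear regressions in the leaves and a false-discovery-rate test), here is a minimal Python sketch that fits a plain regression tree for each target gene and keeps high-importance predictors as candidate edges; the expression matrix and cutoff are placeholders.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def tree_based_edges(expr, gene_names, max_depth=3, importance_cutoff=0.2):
    """For each target gene, fit a regression tree on all other genes and
    keep predictors whose importance exceeds a cutoff (a stand-in for the
    statistical significance filtering used by REGNET).

    expr -- samples x genes expression matrix.
    """
    edges = []
    for j, target in enumerate(gene_names):
        X = np.delete(expr, j, axis=1)
        predictors = [g for k, g in enumerate(gene_names) if k != j]
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, expr[:, j])
        for g, imp in zip(predictors, tree.feature_importances_):
            if imp > importance_cutoff:
                edges.append((g, target, imp))
    return edges

expr = np.random.rand(100, 10)                      # fake expression data
genes = [f"g{i}" for i in range(10)]
print(tree_based_edges(expr, genes)[:5])
```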
Morris, Jeffrey S
2012-01-01
In recent years, developments in molecular biotechnology have led to the increased promise of detecting and validating biomarkers, or molecular markers that relate to various biological or medical outcomes. Proteomics, the direct study of proteins in biological samples, plays an important role in the biomarker discovery process. These technologies produce complex, high dimensional functional and image data that present many analytical challenges that must be addressed properly for effective comparative proteomics studies that can yield potential biomarkers. Specific challenges include experimental design, preprocessing, feature extraction, and statistical analysis accounting for the inherent multiple testing issues. This paper reviews various computational aspects of comparative proteomic studies, and summarizes contributions I along with numerous collaborators have made. First, there is an overview of comparative proteomics technologies, followed by a discussion of important experimental design and preprocessing issues that must be considered before statistical analysis can be done. Next, the two key approaches to analyzing proteomics data, feature extraction and functional modeling, are described. Feature extraction involves detection and quantification of discrete features like peaks or spots that theoretically correspond to different proteins in the sample. After an overview of the feature extraction approach, specific methods for mass spectrometry ( Cromwell ) and 2D gel electrophoresis ( Pinnacle ) are described. The functional modeling approach involves modeling the proteomic data in their entirety as functions or images. A general discussion of the approach is followed by the presentation of a specific method that can be applied, wavelet-based functional mixed models, and its extensions. All methods are illustrated by application to two example proteomic data sets, one from mass spectrometry and one from 2D gel electrophoresis. While the specific methods presented are applied to two specific proteomic technologies, MALDI-TOF and 2D gel electrophoresis, these methods and the other principles discussed in the paper apply much more broadly to other expression proteomics technologies.
Comparison of two surface temperature measurement using thermocouples and infrared camera
NASA Astrophysics Data System (ADS)
Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena
This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method used to study the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy. The experimental error and the method accuracy were taken into account. The comparative analysis showed that the values and distributions of the surface temperatures obtained with the two methods were similar, but both methods had certain limitations.
Indoor Location Sensing with Invariant Wi-Fi Received Signal Strength Fingerprinting
Husen, Mohd Nizam; Lee, Sukhan
2016-01-01
A method of location fingerprinting based on the Wi-Fi received signal strength (RSS) in an indoor environment is presented. The method aims to overcome the RSS instability due to varying channel disturbances in time by introducing the concept of invariant RSS statistics. The invariant RSS statistics represent here the RSS distributions collected at individual calibration locations under minimal random spatiotemporal disturbances in time. The invariant RSS statistics thus collected serve as the reference pattern classes for fingerprinting. Fingerprinting is carried out at an unknown location by identifying the reference pattern class that maximally supports the spontaneous RSS sensed from individual Wi-Fi sources. A design guideline is also presented as a rule of thumb for estimating the number of Wi-Fi signal sources required to be available for any given number of calibration locations under a certain level of random spatiotemporal disturbances. Experimental results show that the proposed method not only provides 17% higher success rate than conventional ones but also removes the need for recalibration. Furthermore, the resolution is shown finer by 40% with the execution time more than an order of magnitude faster than the conventional methods. These results are also backed up by theoretical analysis. PMID:27845711
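A simplified sketch of the fingerprinting step, matching an observed RSS scan against per-location reference statistics via a maximum-likelihood rule, is given below; the independent-Gaussian likelihood, the access-point count and the calibration data are illustrative assumptions, not the invariant-statistics formulation of the paper.

```python
import numpy as np

def fit_reference_statistics(calibration):
    """calibration: {location: array of shape (n_scans, n_aps)} of RSS in dBm.
    Returns per-location mean and standard deviation for each access point."""
    return {loc: (scans.mean(axis=0), scans.std(axis=0) + 1e-6)
            for loc, scans in calibration.items()}

def locate(rss, reference):
    """Pick the calibration location whose reference RSS statistics best
    support the observed scan (independent-Gaussian log-likelihood per AP)."""
    best, best_ll = None, -np.inf
    for loc, (mu, sigma) in reference.items():
        ll = -0.5 * np.sum(((rss - mu) / sigma) ** 2
                           + np.log(2 * np.pi * sigma ** 2))
        if ll > best_ll:
            best, best_ll = loc, ll
    return best

calib = {"roomA": np.random.normal(-55, 3, (30, 4)),
         "roomB": np.random.normal(-70, 3, (30, 4))}
ref = fit_reference_statistics(calib)
print(locate(np.array([-56, -54, -57, -53]), ref))   # expected: "roomA"
```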
Revising the lower statistical limit of x-ray grating-based phase-contrast computed tomography.
Marschner, Mathias; Birnbacher, Lorenz; Willner, Marian; Chabior, Michael; Herzen, Julia; Noël, Peter B; Pfeiffer, Franz
2017-01-01
Phase-contrast x-ray computed tomography (PCCT) is currently investigated as an interesting extension of conventional CT, providing high soft-tissue contrast even if examining weakly absorbing specimen. Until now, the potential for dose reduction was thought to be limited compared to attenuation CT, since meaningful phase retrieval fails for scans with very low photon counts when using the conventional phase retrieval method via phase stepping. In this work, we examine the statistical behaviour of the reverse projection method, an alternative phase retrieval approach and compare the results to the conventional phase retrieval technique. We investigate the noise levels in the projections as well as the image quality and quantitative accuracy of the reconstructed tomographic volumes. The results of our study show that this method performs better in a low-dose scenario than the conventional phase retrieval approach, resulting in lower noise levels, enhanced image quality and more accurate quantitative values. Overall, we demonstrate that the lower statistical limit of the phase stepping procedure as proposed by recent literature does not apply to this alternative phase retrieval technique. However, further development is necessary to overcome experimental challenges posed by this method which would enable mainstream or even clinical application of PCCT.
Board Game in Physics Classes—a Proposal for a New Method of Student Assessment
NASA Astrophysics Data System (ADS)
Dziob, Daniel
2018-03-01
The aim of this study was to examine the impact of assessing students' achievements in a physics course in the form of a group board game. Research was conducted in two groups of 131 high school students in Poland. In each school, the research sample was divided into experimental and control groups. Each group was taught by the same teacher and participated in the same courses and tests before the game. Just after finishing the course on waves and vibrations (school 1) and optics (school 2), the experimental groups took part in a group board game to assess their knowledge. One week after the game, the experimental and control groups (not involved in the game) took part in the post-tests. Students from the experimental groups performed better in the game than in the tests given before the game. Their post-test results were also statistically significantly higher than those of students from the control groups. Simultaneously, the opinions of students in the experimental groups about the board game as an assessment method were collected in an open descriptive form and in a short questionnaire, and analyzed. Results showed that students had a positive attitude toward the assessment method, reduced test anxiety and increased motivation for learning.
NASA Astrophysics Data System (ADS)
Nam, Kyoung Won; Kim, In Young; Kang, Ho Chul; Yang, Hee Kyung; Yoon, Chang Ki; Hwang, Jeong Min; Kim, Young Jae; Kim, Tae Yun; Kim, Kwang Gi
2012-10-01
Accurate measurement of binocular misalignment between the two eyes is important for proper preoperative management, surgical planning, and postoperative evaluation of patients with strabismus. In this study, we proposed a new computerized diagnostic algorithm that can calculate the angle of binocular eye misalignment photographically by using a dedicated three-dimensional eye model mimicking the structure of the natural human eye. To evaluate the performance of the proposed algorithm, eight healthy volunteers and eight individuals with strabismus were recruited; the horizontal deviation angle, vertical deviation angle, and angle of eye misalignment were calculated, and the angular differences between the healthy and strabismus groups were evaluated using the nonparametric Mann-Whitney test and the Pearson correlation test. The experimental results demonstrated a statistically significant difference between the healthy and strabismus groups (p = 0.015 < 0.05), but no statistically significant difference between the proposed method and the Krimsky test (p = 0.912 > 0.05). The measurements of the two methods were highly correlated (r = 0.969, p < 0.05). From the experimental results, we believe that the proposed diagnostic method has the potential to be a diagnostic tool that measures the physical disorder of the human eye and non-invasively assesses the severity of strabismus.
Computer-Assisted Drug Formulation Design: Novel Approach in Drug Delivery.
Metwally, Abdelkader A; Hathout, Rania M
2015-08-03
We hypothesize that, by using several chemo/bio informatics tools and statistical computational methods, we can study and then predict the behavior of several drugs in model nanoparticulate lipid and polymeric systems. Accordingly, two different matrices comprising tripalmitin, a core component of solid lipid nanoparticles (SLN), and PLGA were first modeled using molecular dynamics simulation, and then the interaction of drugs with these systems was studied by means of computing the free energy of binding using the molecular docking technique. These binding energies were hence correlated with the loadings of these drugs in the nanoparticles obtained experimentally from the available literature. The obtained relations were verified experimentally in our laboratory using curcumin as a model drug. Artificial neural networks were then used to establish the effect of the drugs' molecular descriptors on the binding energies and hence on the drug loading. The results showed that the used soft computing methods can provide an accurate method for in silico prediction of drug loading in tripalmitin-based and PLGA nanoparticulate systems. These results have the prospective of being applied to other nano drug-carrier systems, and this integrated statistical and chemo/bio informatics approach offers a new toolbox to the formulation science by proposing what we present as computer-assisted drug formulation design (CADFD).
Radioactivity Registered With a Small Number of Events
NASA Astrophysics Data System (ADS)
Zlokazov, Victor; Utyonkov, Vladimir
2018-02-01
The synthesis of superheavy elements requires the analysis of low-statistics experimental data, presumably obeying an unknown exponential distribution, and a decision on whether they originate from one source or contain admixtures. Here we analyze predictions following from non-parametric methods, employing only such fundamental sample properties as the sample mean, the median and the mode.
Establishing Interventions via a Theory-Driven Single Case Design Research Cycle
ERIC Educational Resources Information Center
Kilgus, Stephen P.; Riley-Tillman, T. Chris; Kratochwill, Thomas R.
2016-01-01
Recent studies have suggested single case design (SCD) intervention research is subject to publication bias, wherein studies are more likely to be published if they possess large or statistically significant effects and use rigorous experimental methods. The nature of SCD and the purposes for which it might be used could suggest that large effects…
SAR Speckle Noise Reduction Using Wiener Filter
NASA Technical Reports Server (NTRS)
Joo, T. H.; Held, D. N.
1983-01-01
Synthetic aperture radar (SAR) images are degraded by speckle. A multiplicative speckle noise model for SAR images is presented. Using this model, a Wiener filter is derived by minimizing the mean-squared error using the known speckle statistics. Implementation of the Wiener filter is discussed and experimental results are presented. Finally, possible improvements to this method are explored.
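The following is a minimal sketch, not the Wiener filter derived in the paper: a locally adaptive linear MMSE (Lee-type) despeckling filter under the same multiplicative, unit-mean speckle model, with synthetic data and an assumed speckle variance.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_mmse_despeckle(img, window=7, noise_var=0.05):
    """Locally adaptive linear MMSE (Lee-type) filter for multiplicative,
    unit-mean speckle: I = R * n with E[n] = 1 and Var[n] = noise_var."""
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var_i = np.maximum(mean_sq - mean**2, 0.0)
    # Estimated variance of the underlying reflectivity R.
    var_r = np.maximum((var_i - noise_var * mean**2) / (1.0 + noise_var), 0.0)
    gain = np.where(var_i > 0, var_r / np.maximum(var_i, 1e-12), 0.0)
    return mean + gain * (img - mean)

# Hypothetical usage with synthetic speckled data (unit-mean gamma speckle).
rng = np.random.default_rng(0)
clean = np.ones((128, 128))
clean[32:96, 32:96] = 4.0
speckled = clean * rng.gamma(shape=20.0, scale=1.0 / 20.0, size=clean.shape)
filtered = lee_mmse_despeckle(speckled, window=7, noise_var=1.0 / 20.0)
```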
Bayesian Hypothesis Testing for Psychologists: A Tutorial on the Savage-Dickey Method
ERIC Educational Resources Information Center
Wagenmakers, Eric-Jan; Lodewyckx, Tom; Kuriyal, Himanshu; Grasman, Raoul
2010-01-01
In the field of cognitive psychology, the "p"-value hypothesis test has established a stranglehold on statistical reporting. This is unfortunate, as the "p"-value provides at best a rough estimate of the evidence that the data provide for the presence of an experimental effect. An alternative and arguably more appropriate measure of evidence is…
Fisher, Sir Ronald Aylmer (1890-1962)
NASA Astrophysics Data System (ADS)
Murdin, P.
2000-11-01
Statistician, born in London, England. After studying astronomy using AIRY's manual on the Theory of Errors he became interested in statistics, and laid the foundation of randomization in experimental design, the analysis of variance and the use of data in estimating the properties of the parent population from which it was drawn. Invented the maximum likelihood method for estimating from random ...
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
Gupta, Rajesh K; Reddy, Pooja S
2013-10-01
Jasminum grandiflorum belongs to the family Oleaceae and is known to have anti-inflammatory, antimicrobial, antioxidant, and antiulcer activities. The present study was undertaken to study its analgesic and anticonvulsant effects in rats and mice. The antinociceptive activity of the hydroalcoholic extract of J. grandiflorum leaves (HEJGL) was studied using tail flick and acetic acid - induced writhing method. Similarly, its anticonvulsant activity was observed by maximal electroshock (MES) method and pentylenetetrazol (PTZ) method. Statistical analysis was performed using one-way analysis of variance (ANOVA) followed by Dunnett's test. At doses of 50, 100, and 200 mg/kg, HEJGL showed significant analgesic and anticonvulsant effects in experimental animals. In view of its analgesic and anticonvulsant activity, the JGL extract can be used in painful conditions as well as in seizure disorders.
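As a hedged sketch of the analysis pipeline named above (one-way ANOVA followed by Dunnett's comparisons against a control), the snippet below uses SciPy with invented response values; note that scipy.stats.dunnett is only available in SciPy 1.11 or later.

```python
import numpy as np
from scipy import stats

# Hypothetical tail-flick latencies (s) for a control group and three dose groups.
control = np.array([3.1, 2.9, 3.4, 3.0, 3.2, 2.8])
dose50  = np.array([4.0, 4.3, 3.8, 4.1, 4.4, 3.9])
dose100 = np.array([5.1, 4.8, 5.3, 5.0, 4.9, 5.2])
dose200 = np.array([6.2, 6.0, 6.5, 5.9, 6.3, 6.1])

# Overall group effect.
f_stat, p_anova = stats.f_oneway(control, dose50, dose100, dose200)
print(f"one-way ANOVA: F={f_stat:.2f}, p={p_anova:.2g}")

# Dunnett's test: each treated group versus the control group.
res = stats.dunnett(dose50, dose100, dose200, control=control)
print("Dunnett p-values:", res.pvalue)
```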
Gupta, Rajesh K.; Reddy, Pooja S.
2013-01-01
Jasminum grandiflorum belongs to the family Oleaceae and is known to have anti-inflammatory, antimicrobial, antioxidant, and antiulcer activities. The present study was undertaken to study its analgesic and anticonvulsant effects in rats and mice. The antinociceptive activity of the hydroalcoholic extract of J. grandiflorum leaves (HEJGL) was studied using tail flick and acetic acid – induced writhing method. Similarly, its anticonvulsant activity was observed by maximal electroshock (MES) method and pentylenetetrazol (PTZ) method. Statistical analysis was performed using one-way analysis of variance (ANOVA) followed by Dunnett's test. At doses of 50, 100, and 200 mg/kg, HEJGL showed significant analgesic and anticonvulsant effects in experimental animals. In view of its analgesic and anticonvulsant activity, the JGL extract can be used in painful conditions as well as in seizure disorders. PMID:24174823
Time Series Modeling of Human Operator Dynamics in Manual Control Tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.
ANTIPLAQUE AND ANTIGINGIVITIS EFFECT OF LIPPIA SIDOIDES. A DOUBLE-BLIND CLINICAL STUDY IN HUMANS
Rodrigues, Ítalo Sarto Carvalho; Tavares, Vinícius Nascimento; Pereira, Sérgio Luís da Silva; da Costa, Flávio Nogueira
2009-01-01
Objectives: The antiplaque and antigingivitis effect of Lippia Sidoides (LS) was evaluated in this in vivo investigation. Material and Methods: Twenty-three subjects participated in a cross-over, double-blind clinical study, using 21-day partial-mouth experimental model of gingivitis. A toothshield was constructed for each volunteer, avoiding the brushing of the 4 experimental posterior teeth in the lower left quadrant. The subjects were randomly assigned initially to use either the placebo gel (control group) or the test gel, containing 10% LS (test group). Results: The clinical results showed statistically significant differences for plaque index (PLI) (p<0.01) between days 0 and 21 in both groups, however only the control group showed statistically significant difference (p<0.01) for the bleeding (IB) and gingival (GI) index within the experimental period of 21 days. On day 21, the test group presented significantly better results than the control group with regard to the GI (p<0.05). Conclusions: The test gel containing 10% LS was effective in the control of gingivitis. PMID:19936516
[Dilemma of the null hypothesis in the experimental testing of ecological hypotheses].
Li, Ji
2016-06-01
Experimental testing is one of the major methods for testing ecological hypotheses, though there are many arguments about the null hypothesis. Quinn and Dunham (1983) analyzed the hypothesis deduction model of Platt (1964) and concluded that there is no null hypothesis in ecology that can be strictly tested by experiments. Fisher's falsificationism and Neyman-Pearson (N-P) non-decisivity prevent statistical null hypotheses from being strictly tested. Moreover, since the null hypothesis H0 (α=1, β=0) and the alternative hypothesis H1' (α'=1, β'=0) in ecological processes differ from those in classical physics, the ecological null hypothesis cannot be strictly tested experimentally either. These dilemmas of the null hypothesis can be relieved via reduction of the P value, careful selection of the null hypothesis, non-centralization of the non-null hypothesis, and two-tailed tests. However, statistical null hypothesis significance testing (NHST) should not be equated with the logical test of causality in ecological hypotheses. Hence, findings and conclusions of methodological studies and experimental tests based on NHST are not always logically reliable.
Experimental statistical signature of many-body quantum interference
NASA Astrophysics Data System (ADS)
Giordani, Taira; Flamini, Fulvio; Pompili, Matteo; Viggianiello, Niko; Spagnolo, Nicolò; Crespi, Andrea; Osellame, Roberto; Wiebe, Nathan; Walschaers, Mattia; Buchleitner, Andreas; Sciarrino, Fabio
2018-03-01
Multi-particle interference is an essential ingredient for fundamental quantum mechanics phenomena and for quantum information processing to provide a computational advantage, as recently emphasized by boson sampling experiments. Hence, developing a reliable and efficient technique to witness its presence is pivotal in achieving the practical implementation of quantum technologies. Here, we experimentally identify genuine many-body quantum interference via a recent efficient protocol, which exploits statistical signatures at the output of a multimode quantum device. We successfully apply the test to validate three-photon experiments in an integrated photonic circuit, providing an extensive analysis on the resources required to perform it. Moreover, drawing upon established techniques of machine learning, we show how such tools help to identify the—a priori unknown—optimal features to witness these signatures. Our results provide evidence on the efficacy and feasibility of the method, paving the way for its adoption in large-scale implementations.
Tang, Qi-Yi; Zhang, Chuan-Xi
2013-04-01
A comprehensive but simple-to-use software package called DPS (Data Processing System) has been developed to execute a range of standard numerical analyses and operations used in experimental design, statistics and data mining. This program runs on standard Windows computers. Many of the functions are specific to entomological and other biological research and are not found in standard statistical software. This paper presents applications of DPS to experimental design, statistical analysis and data mining in entomology. © 2012 The Authors Insect Science © 2012 Institute of Zoology, Chinese Academy of Sciences.
Rydzy, M; Deslauriers, R; Smith, I C; Saunders, J K
1990-08-01
A systematic study was performed to optimize the accuracy of kinetic parameters derived from magnetization transfer measurements. Three techniques were investigated: time-dependent saturation transfer (TDST), saturation recovery (SRS), and inversion recovery (IRS). In the last two methods, one of the resonances undergoing exchange is saturated throughout the experiment. The three techniques were compared with respect to the accuracy of the kinetic parameters derived from experiments performed in a given, fixed, amount of time. Stochastic simulation of magnetization transfer experiments was performed to optimize experimental design. General formulas for the relative accuracies of the unidirectional rate constant (k) were derived for each of the three experimental methods. It was calculated that for k values between 0.1 and 1.0 s⁻¹, T1 values between 1 and 10 s, and relaxation delays appropriate for the creatine kinase reaction, the SRS method yields more accurate values of k than does the IRS method. The TDST method is more accurate than the SRS method for reactions where T1 is long and k is large, within the range of k and T1 values examined. Experimental verification of the method was carried out on a solution in which the forward (PCr→ATP) rate constant (kf) of the creatine kinase reaction was measured.
Theory of atomic spectral emission intensity
NASA Astrophysics Data System (ADS)
Yngström, Sten
1994-07-01
The theoretical derivation of a new spectral line intensity formula for atomic radiative emission is presented. The theory is based on first principles of quantum physics, electrodynamics, and statistical physics. Quantum rules lead to revision of the conventional principle of local thermal equilibrium of matter and radiation. Study of electrodynamics suggests absence of spectral emission from fractions of the numbers of atoms and ions in a plasma due to radiative inhibition caused by electromagnetic force fields. Statistical probability methods are extended by the statement: A macroscopic physical system develops in the most probable of all conceivable ways consistent with the constraining conditions for the system. The crucial role of statistical physics in transforming quantum logic into common sense logic is stressed. The theory is strongly supported by experimental evidence.
Statistical Characterization and Classification of Edge-Localized Plasma Instabilities
NASA Astrophysics Data System (ADS)
Webster, A. J.; Dendy, R. O.
2013-04-01
The statistics of edge-localized plasma instabilities (ELMs) in toroidal magnetically confined fusion plasmas are considered. From first principles, standard experimentally motivated assumptions are shown to determine a specific probability distribution for the waiting times between ELMs: the Weibull distribution. This is confirmed empirically by a statistically rigorous comparison with a large data set from the Joint European Torus. The successful characterization of ELM waiting times enables future work to progress in various ways. Here we present a quantitative classification of ELM types, complementary to phenomenological approaches. It also informs us about the nature of ELM processes, such as whether they are random or deterministic. The methods are extremely general and can be applied to numerous other quasiperiodic intermittent phenomena.
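A minimal sketch of the core statistical step, fitting a Weibull distribution to a sample of waiting times and checking the fit, with synthetic data rather than the JET measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical ELM waiting times (ms), Weibull-distributed by construction.
waits = stats.weibull_min.rvs(c=1.8, scale=12.0, size=500, random_state=rng)

# Fit shape and scale with the location pinned at zero (waiting times are positive).
shape, loc, scale = stats.weibull_min.fit(waits, floc=0)

# Crude goodness-of-fit check; note the KS p-value is optimistic when the
# parameters are estimated from the same data.
ks_stat, ks_p = stats.kstest(waits, "weibull_min", args=(shape, loc, scale))
print(f"shape={shape:.2f}, scale={scale:.2f}, KS statistic={ks_stat:.3f}, p={ks_p:.3f}")
```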
Heintze, S D; Zellweger, G; Cavalleri, A; Ferracane, J
2006-02-01
The aim of the study was to evaluate two ceramic materials as possible substitutes for enamel using two wear simulation methods, and to compare both methods with regard to the wear results for different materials. Flat specimens (OHSU n=6, Ivoclar n=8) of one compomer and three composite materials (Dyract AP, Tetric Ceram, Z250, experimental composite) were fabricated and subjected to wear using two different wear testing methods and two pressable ceramic materials as stylus (Empress, experimental ceramic). For the OHSU method, enamel styli of the same dimensions as the ceramic stylus were fabricated additionally. Both wear testing methods differ with regard to loading force, lateral movement of stylus, stylus dimension, number of cycles, thermocycling and abrasive medium. In the OHSU method, the wear facets (mean vertical loss) were measured using a contact profilometer, while in the Ivoclar method (maximal vertical loss) a laser scanner was used for this purpose. Additionally, the vertical loss of the ceramic stylus was quantified for the Ivoclar method. The results obtained from each method were compared by ANOVA and Tukey's test (p<0.05). To compare both wear methods, the log-transformed data were used to establish relative ranks between material/stylus combinations and assessed by applying the Pearson correlation coefficient. The experimental ceramic material generated significantly less wear in Tetric Ceram and Z250 specimens compared to the Empress stylus in the Ivoclar method, whereas with the OHSU method, no difference between the two ceramic antagonists was found with regard to abrasion or attrition. The wear generated by the enamel stylus was not statistically different from that generated by the other two ceramic materials in the OHSU method. With the Ivoclar method, wear of the ceramic stylus was only statistically different when in contact with Tetric Ceram. There was a close correlation between the attrition wear of the OHSU and the wear of the Ivoclar method (Pearson coefficient 0.83, p=0.01). Pressable ceramic materials can be used as a substitute for enamel in wear testing machines. However, material ranking may be affected by the type of ceramic material chosen. The attrition wear of the OHSU method was comparable with the wear generated with the Ivoclar method.
NASA Astrophysics Data System (ADS)
Reuter, Matthew; Tschudi, Stephen
When investigating the electrical response properties of molecules, experiments often measure conductance whereas computation predicts transmission probabilities. Although the Landauer-Büttiker theory relates the two in the limit of coherent scattering through the molecule, a direct comparison between experiment and computation can still be difficult. Experimental data (specifically those from break junctions) are statistical, whereas computational results are deterministic. Many studies compare the most probable experimental conductance with computation, but such an analysis discards almost all of the experimental statistics. In this work we develop tools to decipher the Landauer-Büttiker transmission function directly from experimental statistics and then apply them to enable a fairer comparison between experimental and computational results.
Asymptotic formulae for likelihood-based tests of new physics
NASA Astrophysics Data System (ADS)
Cowan, Glen; Cranmer, Kyle; Gross, Eilam; Vitells, Ofer
2011-02-01
We describe likelihood-based statistical tests for use in high energy physics for the discovery of new phenomena and for construction of confidence intervals on model parameters. We focus on the properties of the test procedures that allow one to account for systematic uncertainties. Explicit formulae for the asymptotic distributions of test statistics are derived using results of Wilks and Wald. We motivate and justify the use of a representative data set, called the "Asimov data set", which provides a simple method to obtain the median experimental sensitivity of a search or measurement as well as fluctuations about this expectation.
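As a concrete illustration in the spirit of this work (a standard asymptotic formula from this literature, not a reproduction of the paper's full results), the snippet below evaluates the median discovery significance of a simple counting experiment using the Asimov data set n = s + b.

```python
import math

def asimov_discovery_significance(s: float, b: float) -> float:
    """Median significance for rejecting the background-only hypothesis,
    Z_A = sqrt(2*((s + b)*ln(1 + s/b) - s)), evaluated on the Asimov data set n = s + b."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical expected signal and background yields.
print(asimov_discovery_significance(s=10.0, b=100.0))   # ~0.98 sigma
print(asimov_discovery_significance(s=50.0, b=100.0))   # ~4.65 sigma
```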
Polarization-interference Jones-matrix mapping of biological crystal networks
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Dubolazov, O. V.; Pidkamin, L. Y.; Sidor, M. I.; Pavlyukovich, N.; Pavlyukovich, O.
2018-01-01
The paper consists of two parts. The first part presents the short theoretical basics of the method of Jones-matrix mapping with the help of a reference wave. Experimentally measured coordinate distributions of the moduli of the Jones-matrix elements of a polycrystalline bile film are provided, and the values and ranges of variation of the statistical moments characterizing these distributions are determined. The second part presents a statistical analysis of the distributions of the matrix elements of polycrystalline urine films from donors and from patients with albuminuria, from which objective criteria for the differentiation of albuminuria are defined.
Logo image clustering based on advanced statistics
NASA Astrophysics Data System (ADS)
Wei, Yi; Kamel, Mohamed; He, Yiwei
2007-11-01
In recent years, there has been a growing interest in the research of image content description techniques. Among those, image clustering is one of the most frequently discussed topics. Similar to image recognition, image clustering is also a high-level representation technique. However it focuses on the coarse categorization rather than the accurate recognition. Based on wavelet transform (WT) and advanced statistics, the authors propose a novel approach that divides various shaped logo images into groups according to the external boundary of each logo image. Experimental results show that the presented method is accurate, fast and insensitive to defects.
Gender discrimination and prediction on the basis of facial metric information.
Fellous, J M
1997-07-01
Horizontal and vertical facial measurements are statistically independent. Discriminant analysis shows that five such normalized distances explain over 95% of the gender differences in the "training" samples and predict the gender of 90% of novel test faces exhibiting various facial expressions. The robustness of the method and its results are assessed. It is argued that these distances (termed fiducial) are compatible with those found experimentally by psychophysical and neurophysiological studies. In consequence, partial explanations for the effects observed in these experiments can be found in the intrinsic statistical nature of the facial stimuli used.
Rollins, Derrick K; Teh, Ailing
2010-12-17
Microarray data sets provide relative expression levels for thousands of genes for a small number, in comparison, of different experimental conditions called assays. Data mining techniques are used to extract specific information about genes as they relate to the assays. The multivariate statistical technique of principal component analysis (PCA) has proven useful in providing effective data mining methods. This article extends the PCA approach of Rollins et al. to the development of a method for ranking the genes of microarray data sets whose expression differs most between two biologically different groupings of assays. This method is evaluated on real and simulated data and compared to a current approach on the basis of false discovery rate (FDR) and statistical power (SP), the ability to correctly identify important genes. This work developed and evaluated two new test statistics based on PCA and compared them to a popular method that is not PCA based. Both test statistics were found to be effective as evaluated in three case studies: (i) exposing E. coli cells to two different ethanol levels; (ii) application of myostatin to two groups of mice; and (iii) a simulated data study derived from the properties of (ii). The proposed method (PM) effectively identified critical genes in these studies based on comparison with the current method (CM). The simulation study supports higher identification accuracy for PM over CM for both proposed test statistics when the gene variance is constant, and for one of the test statistics when the gene variance is non-constant. PM compares quite favorably to CM in terms of lower FDR and much higher SP. Thus, PM can be quite effective in producing accurate signatures from large microarray data sets for differential expression between assay groups identified in a preliminary step of the PCA procedure and is, therefore, recommended for use in these applications.
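The sketch below illustrates only the general idea, PCA applied to a gene-by-assay matrix followed by ranking genes on a leading component; it is not the article's specific test statistics, and the data and the choice of PC1 are placeholder assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
n_genes, n_assays = 1000, 10
X = rng.normal(size=(n_genes, n_assays))   # rows: genes, columns: assays
X[:50, 5:] += 3.0                          # 50 "differentially expressed" genes in assays 5-9

# PCA over assays: each gene gets a score on components describing assay-group patterns.
pca = PCA(n_components=2)
scores = pca.fit_transform(X)

# Rank genes by |score| on the component assumed to separate the assay groups (PC1 here).
ranking = np.argsort(-np.abs(scores[:, 0]))
print("top 10 candidate genes:", ranking[:10])
```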
The effect of neuro-linguistic programming on occupational stress in critical care nurses
HemmatiMaslakpak, Masumeh; Farhadi, Masumeh; Fereidoni, Javid
2016-01-01
Background: The use of coping strategies in reducing the adverse effects of stress can be helpful. Neuro-linguistic programming (NLP) is one of the modern methods of psychotherapy. This study aimed to determine the effect of NLP on occupational stress in nurses working in critical care units of Urmia. Materials and Methods: This study was carried out quasi-experimentally (before–after) with control and experimental groups. Of all the nurses working in the critical care units of Urmia Imam Khomeini and Motahari educational/therapeutic centers, 60 people participated in this survey. Eighteen sessions of intervention were done, each for 180 min. The experimental group received the NLP program (including goal setting, time management, assertiveness skills, representational systems, and neurological levels, as well as some practical and useful NLP techniques). The Expanded Nursing Stress Scale (ENSS) was used as the data gathering tool. Data were analyzed using SPSS version 16. Descriptive statistics and the Chi-square test, Mann–Whitney test, and independent t-test were used to analyze the data. Results: The baseline score average of job stress was 120.88 and 121.36 for the intervention and control groups, respectively (P = 0.65). After the intervention, the average job stress score decreased to 64.53 in the experimental group while that of the control group remained relatively unchanged (120.96). Mann–Whitney test results showed that the difference in stress scores between the two groups was statistically significant (P = 0.0001). Conclusions: The results showed that the use of NLP can increase coping with stressful situations and reduce the adverse effects of occupational stress. PMID:26985221
Experimental toxicology: Issues of statistics, experimental design, and replication.
Briner, Wayne; Kirwan, Jeral
2017-01-01
The difficulty of replicating experiments has drawn considerable attention. Issues with replication occur for a variety of reasons ranging from experimental design to laboratory errors to inappropriate statistical analysis. Here we review a variety of guidelines for statistical analysis, design, and execution of experiments in toxicology. In general, replication can be improved by using hypothesis driven experiments with adequate sample sizes, randomization, and blind data collection techniques. Copyright © 2016 Elsevier B.V. All rights reserved.
Identification of the isomers using principal component analysis (PCA) method
NASA Astrophysics Data System (ADS)
Kepceoǧlu, Abdullah; Gündoǧdu, Yasemin; Ledingham, Kenneth William David; Kilic, Hamdi Sukur
2016-03-01
In this work, we have carried out a detailed statistical analysis of experimental mass-spectral data from xylene isomers. Principal Component Analysis (PCA) was used to identify the isomers, which cannot be distinguished using conventional statistical methods for the interpretation of their mass spectra. Experiments were carried out using a linear TOF-MS coupled to a femtosecond laser system as the energy source for the ionisation processes. We performed experiments and collected data which were analysed and interpreted using PCA as a multivariate analysis of these spectra. This demonstrates the strength of the method in providing insight for distinguishing isomers which cannot be identified using conventional mass analysis of the dissociative ionisation processes of these molecules. The PCA results, depending on the laser pulse energy and the background pressure in the spectrometer, are presented in this work.
Multi-classification of cell deformation based on object alignment and run length statistic.
Li, Heng; Liu, Zhiwen; An, Xing; Shi, Yonggang
2014-01-01
Cellular morphology is widely applied in digital pathology and is essential for improving our understanding of the basic physiological processes of organisms. One of the main issues of application is to develop efficient methods for cell deformation measurement. We propose an innovative indirect approach to analyze dynamic cell morphology in image sequences. The proposed approach considers both the cellular shape change and cytoplasm variation, and takes each frame in the image sequence into account. The cell deformation is measured by the minimum energy function of object alignment, which is invariant to object pose. Then an indirect analysis strategy is employed to overcome the limitation of gradual deformation by run length statistic. We demonstrate the power of the proposed approach with one application: multi-classification of cell deformation. Experimental results show that the proposed method is sensitive to the morphology variation and performs better than standard shape representation methods.
Active Learning with Rationales for Identifying Operationally Significant Anomalies in Aviation
NASA Technical Reports Server (NTRS)
Sharma, Manali; Das, Kamalika; Bilgic, Mustafa; Matthews, Bryan; Nielsen, David Lynn; Oza, Nikunj C.
2016-01-01
A major focus of the commercial aviation community is discovery of unknown safety events in flight operations data. Data-driven unsupervised anomaly detection methods are better at capturing unknown safety events compared to rule-based methods which only look for known violations. However, not all statistical anomalies that are discovered by these unsupervised anomaly detection methods are operationally significant (e.g., represent a safety concern). Subject Matter Experts (SMEs) have to spend significant time reviewing these statistical anomalies individually to identify a few operationally significant ones. In this paper we propose an active learning algorithm that incorporates SME feedback in the form of rationales to build a classifier that can distinguish between uninteresting and operationally significant anomalies. Experimental evaluation on real aviation data shows that our approach improves detection of operationally significant events by as much as 75% compared to the state-of-the-art. The learnt classifier also generalizes well to additional validation data sets.
Erfanian, Parham; Tenzif, Siamak; Guerriero, Rocco C
2004-01-01
Objective To determine the effects of a semi-customized experimental cervical pillow on symptomatic adults with chronic neck pain (with and without headache) during a four-week study. Design A randomized controlled trial. Sample size Thirty-six adults were recruited for the trial and randomly assigned to experimental or non-experimental groups of 17 and 19 participants, respectively. Subjects Adults with chronic biomechanical neck pain who were recruited from the Canadian Memorial Chiropractic College (CMCC) Walk-in Clinic. Outcome measures Subjective findings were assessed using a mail-in self-report daily pain diary and the CMCC Neck Disability Index (NDI). Statistical analysis Using repeated measures analysis of variance, weekly NDI scores and average weekly AM and PM pain scores were compared between the experimental and non-experimental groups throughout the study. Results The experimental group had statistically significantly lower NDI scores (p < 0.05) than the non-experimental group. The average weekly AM scores were also statistically significantly lower (p < 0.05) in the experimental group. The PM scores in the experimental group were lower than in the non-experimental group, but the difference was not statistically significant. Conclusions The study results show that, compared to conventional pillows, this experimental semi-customized cervical pillow was effective in reducing low-level neck pain intensity, especially in the morning following its use in a 4-week-long study. PMID:17549216
Angelovičová, Mária; Kunová, Simona; Čapla, Jozef; Zajac, Peter; Bučko, Ondřej; Angelovič, Marek
2016-01-01
The purpose of this study was an experimental investigation and a statistical evaluation of the influence of various additives in feed mixtures of broiler chickens on fatty acid content and fatty acid ratios in breast and thigh muscles. The first feed additive consisted of narasin, nicarbazin and salinomycin sodium, and the other five additives were of phytogenic origin. The in vivo experiment was carried out at a poultry experimental station with a deep-litter breeding system. A total of 300 one-day-old hybrid Cobb 500 chickens divided into six groups were used for the experiment. The experimental period was divided into four phases, i.e. Starter, Grower 1, Grower 2 and Final, according to the application of a commercial feed mixture of soy-cereal type. The additive substances used in the feed mixtures differed for each group, while the basic feed mixtures were the same for all groups. The fatty acid profile of breast and thigh muscles was measured by FT-IR (Nicolet 6700). The investigated additive substances in the feed mixtures did not have a statistically significant effect on fatty acid content or on the omega-6/omega-3 polyunsaturated fatty acid (PUFA) ratio in breast and thigh muscles. A strong, statistically significant relation between omega-6 PUFAs and total PUFAs was demonstrated by the experiment. A relation between omega-3 PUFAs and total PUFAs was found only in the group with the Biocitro additive.
TREATMENT SWITCHING: STATISTICAL AND DECISION-MAKING CHALLENGES AND APPROACHES.
Latimer, Nicholas R; Henshall, Chris; Siebert, Uwe; Bell, Helen
2016-01-01
Treatment switching refers to the situation in a randomized controlled trial where patients switch from their randomly assigned treatment onto an alternative. Often, switching is from the control group onto the experimental treatment. In this instance, a standard intention-to-treat analysis does not identify the true comparative effectiveness of the treatments under investigation. We aim to describe statistical methods for adjusting for treatment switching in a comprehensible way for nonstatisticians, and to summarize views on these methods expressed by stakeholders at the 2014 Adelaide International Workshop on Treatment Switching in Clinical Trials. We describe three statistical methods used to adjust for treatment switching: marginal structural models, two-stage adjustment, and rank preserving structural failure time models. We draw upon discussion heard at the Adelaide International Workshop to explore the views of stakeholders on the acceptability of these methods. Stakeholders noted that adjustment methods are based on assumptions, the validity of which may often be questionable. There was disagreement on the acceptability of adjustment methods, but consensus that when these are used, they should be justified rigorously. The utility of adjustment methods depends upon the decision being made and the processes used by the decision-maker. Treatment switching makes estimating the true comparative effect of a new treatment challenging. However, many decision-makers have reservations with adjustment methods. These, and how they affect the utility of adjustment methods, require further exploration. Further technical work is required to develop adjustment methods to meet real world needs, to enhance their acceptability to decision-makers.
Guidelines for Genome-Scale Analysis of Biological Rhythms.
Hughes, Michael E; Abruzzi, Katherine C; Allada, Ravi; Anafi, Ron; Arpat, Alaaddin Bulak; Asher, Gad; Baldi, Pierre; de Bekker, Charissa; Bell-Pedersen, Deborah; Blau, Justin; Brown, Steve; Ceriani, M Fernanda; Chen, Zheng; Chiu, Joanna C; Cox, Juergen; Crowell, Alexander M; DeBruyne, Jason P; Dijk, Derk-Jan; DiTacchio, Luciano; Doyle, Francis J; Duffield, Giles E; Dunlap, Jay C; Eckel-Mahan, Kristin; Esser, Karyn A; FitzGerald, Garret A; Forger, Daniel B; Francey, Lauren J; Fu, Ying-Hui; Gachon, Frédéric; Gatfield, David; de Goede, Paul; Golden, Susan S; Green, Carla; Harer, John; Harmer, Stacey; Haspel, Jeff; Hastings, Michael H; Herzel, Hanspeter; Herzog, Erik D; Hoffmann, Christy; Hong, Christian; Hughey, Jacob J; Hurley, Jennifer M; de la Iglesia, Horacio O; Johnson, Carl; Kay, Steve A; Koike, Nobuya; Kornacker, Karl; Kramer, Achim; Lamia, Katja; Leise, Tanya; Lewis, Scott A; Li, Jiajia; Li, Xiaodong; Liu, Andrew C; Loros, Jennifer J; Martino, Tami A; Menet, Jerome S; Merrow, Martha; Millar, Andrew J; Mockler, Todd; Naef, Felix; Nagoshi, Emi; Nitabach, Michael N; Olmedo, Maria; Nusinow, Dmitri A; Ptáček, Louis J; Rand, David; Reddy, Akhilesh B; Robles, Maria S; Roenneberg, Till; Rosbash, Michael; Ruben, Marc D; Rund, Samuel S C; Sancar, Aziz; Sassone-Corsi, Paolo; Sehgal, Amita; Sherrill-Mix, Scott; Skene, Debra J; Storch, Kai-Florian; Takahashi, Joseph S; Ueda, Hiroki R; Wang, Han; Weitz, Charles; Westermark, Pål O; Wijnen, Herman; Xu, Ying; Wu, Gang; Yoo, Seung-Hee; Young, Michael; Zhang, Eric Erquan; Zielinski, Tomasz; Hogenesch, John B
2017-10-01
Genome biology approaches have made enormous contributions to our understanding of biological rhythms, particularly in identifying outputs of the clock, including RNAs, proteins, and metabolites, whose abundance oscillates throughout the day. These methods hold significant promise for future discovery, particularly when combined with computational modeling. However, genome-scale experiments are costly and laborious, yielding "big data" that are conceptually and statistically difficult to analyze. There is no obvious consensus regarding design or analysis. Here we discuss the relevant technical considerations to generate reproducible, statistically sound, and broadly useful genome-scale data. Rather than suggest a set of rigid rules, we aim to codify principles by which investigators, reviewers, and readers of the primary literature can evaluate the suitability of different experimental designs for measuring different aspects of biological rhythms. We introduce CircaInSilico, a web-based application for generating synthetic genome biology data to benchmark statistical methods for studying biological rhythms. Finally, we discuss several unmet analytical needs, including applications to clinical medicine, and suggest productive avenues to address them.
Guidelines for Genome-Scale Analysis of Biological Rhythms
Hughes, Michael E.; Abruzzi, Katherine C.; Allada, Ravi; Anafi, Ron; Arpat, Alaaddin Bulak; Asher, Gad; Baldi, Pierre; de Bekker, Charissa; Bell-Pedersen, Deborah; Blau, Justin; Brown, Steve; Ceriani, M. Fernanda; Chen, Zheng; Chiu, Joanna C.; Cox, Juergen; Crowell, Alexander M.; DeBruyne, Jason P.; Dijk, Derk-Jan; DiTacchio, Luciano; Doyle, Francis J.; Duffield, Giles E.; Dunlap, Jay C.; Eckel-Mahan, Kristin; Esser, Karyn A.; FitzGerald, Garret A.; Forger, Daniel B.; Francey, Lauren J.; Fu, Ying-Hui; Gachon, Frédéric; Gatfield, David; de Goede, Paul; Golden, Susan S.; Green, Carla; Harer, John; Harmer, Stacey; Haspel, Jeff; Hastings, Michael H.; Herzel, Hanspeter; Herzog, Erik D.; Hoffmann, Christy; Hong, Christian; Hughey, Jacob J.; Hurley, Jennifer M.; de la Iglesia, Horacio O.; Johnson, Carl; Kay, Steve A.; Koike, Nobuya; Kornacker, Karl; Kramer, Achim; Lamia, Katja; Leise, Tanya; Lewis, Scott A.; Li, Jiajia; Li, Xiaodong; Liu, Andrew C.; Loros, Jennifer J.; Martino, Tami A.; Menet, Jerome S.; Merrow, Martha; Millar, Andrew J.; Mockler, Todd; Naef, Felix; Nagoshi, Emi; Nitabach, Michael N.; Olmedo, Maria; Nusinow, Dmitri A.; Ptáček, Louis J.; Rand, David; Reddy, Akhilesh B.; Robles, Maria S.; Roenneberg, Till; Rosbash, Michael; Ruben, Marc D.; Rund, Samuel S.C.; Sancar, Aziz; Sassone-Corsi, Paolo; Sehgal, Amita; Sherrill-Mix, Scott; Skene, Debra J.; Storch, Kai-Florian; Takahashi, Joseph S.; Ueda, Hiroki R.; Wang, Han; Weitz, Charles; Westermark, Pål O.; Wijnen, Herman; Xu, Ying; Wu, Gang; Yoo, Seung-Hee; Young, Michael; Zhang, Eric Erquan; Zielinski, Tomasz; Hogenesch, John B.
2017-01-01
Genome biology approaches have made enormous contributions to our understanding of biological rhythms, particularly in identifying outputs of the clock, including RNAs, proteins, and metabolites, whose abundance oscillates throughout the day. These methods hold significant promise for future discovery, particularly when combined with computational modeling. However, genome-scale experiments are costly and laborious, yielding “big data” that are conceptually and statistically difficult to analyze. There is no obvious consensus regarding design or analysis. Here we discuss the relevant technical considerations to generate reproducible, statistically sound, and broadly useful genome-scale data. Rather than suggest a set of rigid rules, we aim to codify principles by which investigators, reviewers, and readers of the primary literature can evaluate the suitability of different experimental designs for measuring different aspects of biological rhythms. We introduce CircaInSilico, a web-based application for generating synthetic genome biology data to benchmark statistical methods for studying biological rhythms. Finally, we discuss several unmet analytical needs, including applications to clinical medicine, and suggest productive avenues to address them. PMID:29098954
NASA Astrophysics Data System (ADS)
Wang, Lixia; Pei, Jihong; Xie, Weixin; Liu, Jinyuan
2018-03-01
Large-scale oceansat remote sensing images cover a large area of sea surface, whose fluctuation can be considered a non-stationary process. The Short-Time Fourier Transform (STFT) is a suitable analysis tool for time-varying non-stationary signals. In this paper, a novel ship detection method using 2-D STFT sea-background statistical modeling for large-scale oceansat remote sensing images is proposed. First, the paper divides the large-scale oceansat remote sensing image into small sub-blocks, and 2-D STFT is applied to each sub-block individually. Second, the 2-D STFT spectrum of the sub-blocks is studied and a clear difference in characteristics between sea background and non-sea background is found. Finally, a statistical model for all valid frequency points in the STFT spectrum of the sea background is given, and a ship detection method based on 2-D STFT spectrum modeling is proposed. The experimental results show that the proposed algorithm can detect ship targets with a high recall rate and a low miss rate.
Robust approaches to quantification of margin and uncertainty for sparse data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hund, Lauren; Schroeder, Benjamin B.; Rumsey, Kelin
Characterizing the tails of probability distributions plays a key role in quantification of margins and uncertainties (QMU), where the goal is characterization of low probability, high consequence events based on continuous measures of performance. When data are collected using physical experimentation, probability distributions are typically fit using statistical methods based on the collected data, and these parametric distributional assumptions are often used to extrapolate about the extreme tail behavior of the underlying probability distribution. In this project, we characterize the risk associated with such tail extrapolation. Specifically, we conducted a scaling study to demonstrate the large magnitude of the risk; then, we developed new methods for communicating risk associated with tail extrapolation from unvalidated statistical models; lastly, we proposed a Bayesian data-integration framework to mitigate tail extrapolation risk through integrating additional information. We conclude that decision-making using QMU is a complex process that cannot be achieved using statistical analyses alone.
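To make the tail-extrapolation risk concrete, here is a small illustration in the spirit of the report (not its actual scaling study): a normal model is fit to a sparse sample drawn from a heavier-tailed distribution, and the extrapolated extreme quantile is compared with the true value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
true_dist = stats.lognorm(s=0.6, scale=1.0)         # heavier-tailed "truth"
sample = true_dist.rvs(size=30, random_state=rng)   # sparse data, as in many QMU settings

# Fit a (mis-specified) normal model and extrapolate into the far tail.
mu, sigma = stats.norm.fit(sample)
q_fit = stats.norm(mu, sigma).ppf(0.999)
q_true = true_dist.ppf(0.999)
print(f"extrapolated 99.9% quantile: {q_fit:.2f}   true value: {q_true:.2f}")
```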
Fragment size distribution statistics in dynamic fragmentation of laser shock-loaded tin
NASA Astrophysics Data System (ADS)
He, Weihua; Xin, Jianting; Zhao, Yongqiang; Chu, Genbai; Xi, Tao; Shui, Min; Lu, Feng; Gu, Yuqiu
2017-06-01
This work investigates a geometric statistics method to characterize the size distribution of tin fragments produced in the laser shock-loaded dynamic fragmentation process. In the shock experiments, the ejecta of the tin sample, which has a V-shaped groove etched in its free surface, are collected by a soft-recovery technique. Subsequently, the produced fragments are automatically detected with fine post-shot analysis techniques including X-ray micro-tomography and an improved watershed method. To characterize the size distributions of the fragments, a theoretical random geometric statistics model based on Poisson mixtures is derived for the dynamic heterogeneous fragmentation problem, which yields a linear combination of exponential distributions. The experimental data on the fragment size distributions of the laser shock-loaded tin sample are examined with the proposed theoretical model, and its fitting performance is compared with that of other state-of-the-art fragment size distribution models. The comparison results show that the proposed model provides a far more reasonable fit for the laser shock-loaded tin.
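A minimal sketch of fitting the suggested form, a linear combination of exponentials, to fragment-size data by maximum likelihood; the two-component choice, the data, and the starting values are assumptions, not the paper's actual analysis.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
# Hypothetical fragment sizes drawn from a two-component exponential mixture.
sizes = np.concatenate([rng.exponential(5.0, 800), rng.exponential(25.0, 200)])

def neg_log_lik(params):
    """Negative log-likelihood of a two-component exponential mixture."""
    w, s1, s2 = params
    pdf = w * np.exp(-sizes / s1) / s1 + (1.0 - w) * np.exp(-sizes / s2) / s2
    return -np.sum(np.log(pdf + 1e-300))

res = minimize(neg_log_lik, x0=[0.5, 2.0, 40.0],
               bounds=[(1e-3, 1 - 1e-3), (1e-3, None), (1e-3, None)])
w, s1, s2 = res.x
print(f"weight={w:.2f}, scales={s1:.1f} and {s2:.1f}")
```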
Frequent statistics of link-layer bit stream data based on AC-IM algorithm
NASA Astrophysics Data System (ADS)
Cao, Chenghong; Lei, Yingke; Xu, Yiming
2017-08-01
At present, there is much research on data processing using classical pattern matching and its improved algorithms, but little on frequent statistics of link-layer bit stream data. Because classical multi-pattern matching algorithms such as the AC algorithm have high computational complexity and low efficiency and cannot be applied directly to binary bit stream data, this paper adopts a frequent-statistics method for link-layer bit stream data based on the AC-IM algorithm. The method's maximum jump distance over the pattern tree is the length of the shortest pattern string plus 3, without missing any matches. The paper first gives a theoretical analysis of the principle of the algorithm's construction; the experimental results then show that the algorithm can adapt to the binary bit stream environment and extract frequent sequences more accurately, with an obvious effect. Meanwhile, compared with the classical AC algorithm and other improved algorithms, the AC-IM algorithm has a greater maximum jump distance and is less time-consuming.
Evaluation of airborne lidar data to predict vegetation Presence/Absence
Palaseanu-Lovejoy, M.; Nayegandhi, A.; Brock, J.; Woodman, R.; Wright, C.W.
2009-01-01
This study evaluates the capabilities of the Experimental Advanced Airborne Research Lidar (EAARL) in delineating vegetation assemblages in Jean Lafitte National Park, Louisiana. Five-meter-resolution grids of bare earth, canopy height, canopy-reflection ratio, and height of median energy were derived from EAARL data acquired in September 2006. Ground-truth data were collected along transects to assess species composition, canopy cover, and ground cover. To decide which model is more accurate, comparisons of general linear models and generalized additive models were conducted using conventional evaluation methods (i.e., sensitivity, specificity, Kappa statistics, and area under the curve) and two new indexes, net reclassification improvement and integrated discrimination improvement. Generalized additive models were superior to general linear models in modeling presence/absence in training vegetation categories, but no statistically significant differences between the two models were achieved in determining the classification accuracy at validation locations using conventional evaluation methods, although statistically significant improvements in net reclassifications were observed. © 2009 Coastal Education and Research Foundation.
Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC
2008-01-01
Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biological relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biological meaningful phenotypes, differentiating novel phenotypes from known ones and clarifying novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as reference distribution in gap statistics. This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable to adaptively process new images which are continuously added to existing datasets. Validations were carried out on different dataset, including published RNAi screening using Drosophila embryos [Additional files 1, 2], dataset for cell cycle phase identification using HeLa cells [Additional files 1, 3, 4] and synthetic dataset using polygons, our methods tackled three aforementioned tasks effectively with an accuracy range of 85%–90%. When our method is implemented in the context of a Drosophila genome-scale RNAi image-based screening of cultured cells aimed to identifying the contribution of individual genes towards the regulation of cell-shape, it efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on one-class SVM, so that it can be used to online phenotype discovery. In different conditions, we compared the SVM based method with our method using various datasets and our methods consistently outperformed SVM based method in at least two of three tasks by 2% to 5%. These results demonstrate that our methods can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. 
Experimental results also validate that our method performs consistently under different orders of image input, variations in starting conditions (including the number and composition of existing phenotypes), and datasets from different screens. Based on these findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
Salient target detection based on pseudo-Wigner-Ville distribution and Rényi entropy.
Xu, Yuannan; Zhao, Yuan; Jin, Chenfei; Qu, Zengfeng; Liu, Liping; Sun, Xiudong
2010-02-15
We present what we believe to be a novel method based on the pseudo-Wigner-Ville distribution (PWVD) and Rényi entropy for salient target detection. Building on a study of the statistical properties of Rényi entropy computed via the PWVD, a residual-entropy-based saliency map of an input image can be obtained. From the saliency map, target detection is completed by simple and convenient threshold segmentation. Experimental results demonstrate that the proposed method can detect targets effectively in complex ground scenes.
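The sketch below shows only the Rényi-entropy ingredient (order-α entropy of a normalized local distribution) applied to simple patch histograms; it omits the PWVD step and the residual-saliency construction, and all data are synthetic.

```python
import numpy as np

def renyi_entropy(p, alpha=3.0):
    """Rényi entropy H_alpha = log(sum p^alpha) / (1 - alpha) of a normalized distribution p."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    p = p / p.sum()
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Hypothetical usage: entropy map over non-overlapping image patches.
rng = np.random.default_rng(5)
image = rng.random((128, 128))
image[40:60, 70:90] += 2.0                 # a brighter synthetic "target"
patch = 16
ent_map = np.zeros((image.shape[0] // patch, image.shape[1] // patch))
for i in range(ent_map.shape[0]):
    for j in range(ent_map.shape[1]):
        block = image[i * patch:(i + 1) * patch, j * patch:(j + 1) * patch]
        hist, _ = np.histogram(block, bins=32)
        ent_map[i, j] = renyi_entropy(hist, alpha=3.0)
print(ent_map.round(2))
```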
NASA Astrophysics Data System (ADS)
Goodman, Joseph W.
2000-07-01
The Wiley Classics Library consists of selected books that have become recognized classics in their respective fields. With these new unabridged and inexpensive editions, Wiley hopes to extend the life of these important works by making them available to future generations of mathematicians and scientists. Currently available in the Series: T. W. Anderson The Statistical Analysis of Time Series T. S. Arthanari & Yadolah Dodge Mathematical Programming in Statistics Emil Artin Geometric Algebra Norman T. J. Bailey The Elements of Stochastic Processes with Applications to the Natural Sciences Robert G. Bartle The Elements of Integration and Lebesgue Measure George E. P. Box & Norman R. Draper Evolutionary Operation: A Statistical Method for Process Improvement George E. P. Box & George C. Tiao Bayesian Inference in Statistical Analysis R. W. Carter Finite Groups of Lie Type: Conjugacy Classes and Complex Characters R. W. Carter Simple Groups of Lie Type William G. Cochran & Gertrude M. Cox Experimental Designs, Second Edition Richard Courant Differential and Integral Calculus, Volume I RIchard Courant Differential and Integral Calculus, Volume II Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume I Richard Courant & D. Hilbert Methods of Mathematical Physics, Volume II D. R. Cox Planning of Experiments Harold S. M. Coxeter Introduction to Geometry, Second Edition Charles W. Curtis & Irving Reiner Representation Theory of Finite Groups and Associative Algebras Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume I Charles W. Curtis & Irving Reiner Methods of Representation Theory with Applications to Finite Groups and Orders, Volume II Cuthbert Daniel Fitting Equations to Data: Computer Analysis of Multifactor Data, Second Edition Bruno de Finetti Theory of Probability, Volume I Bruno de Finetti Theory of Probability, Volume 2 W. Edwards Deming Sample Design in Business Research
A simple method for processing data with least square method
NASA Astrophysics Data System (ADS)
Wang, Chunyan; Qi, Liqun; Chen, Yongxiang; Pang, Guangning
2017-08-01
The least square method is widely used in data processing and error estimation. It has become an essential technique for parameter estimation, data processing, regression analysis and experimental data fitting, and has become a criterion tool for statistical inference. In measurement data analysis, complicated distributions are usually handled on the basis of the least square principle, i.e., matrices are used to solve for the final estimate and to improve its accuracy. In this paper, a new way of obtaining the least square solution is presented, which is based on algebraic computation and is relatively straightforward and easy to understand. The practicability of this method is illustrated by a concrete example.
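For contrast with the paper's algebraic approach, here is a brief hedged example of the standard matrix formulation it refers to: fitting a straight line via the normal equations and, equivalently, with NumPy's built-in least-squares solver.

```python
import numpy as np

# Hypothetical measurements y at points x, to be fit by y = a + b*x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0])

A = np.column_stack([np.ones_like(x), x])           # design matrix
beta_normal = np.linalg.solve(A.T @ A, A.T @ y)     # normal equations: (A^T A) beta = A^T y
beta_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)  # numerically preferable SVD-based route

print(beta_normal, beta_lstsq)                      # both ~ [intercept, slope]
```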
A Realistic Experimental Design and Statistical Analysis Project
ERIC Educational Resources Information Center
Muske, Kenneth R.; Myers, John A.
2007-01-01
A realistic applied chemical engineering experimental design and statistical analysis project is documented in this article. This project has been implemented as part of the professional development and applied statistics courses at Villanova University over the past five years. The novel aspects of this project are that the students are given a…
Evaluation of normalization methods in mammalian microRNA-Seq data
Garmire, Lana Xia; Subramaniam, Shankar
2012-01-01
Simple total tag count normalization is inadequate for microRNA sequencing data generated from the next generation sequencing technology. However, so far systematic evaluation of normalization methods on microRNA sequencing data is lacking. We comprehensively evaluate seven commonly used normalization methods including global normalization, Lowess normalization, Trimmed Mean Method (TMM), quantile normalization, scaling normalization, variance stabilization, and invariant method. We assess these methods on two individual experimental data sets with the empirical statistical metrics of mean square error (MSE) and Kolmogorov-Smirnov (K-S) statistic. Additionally, we evaluate the methods with results from quantitative PCR validation. Our results consistently show that Lowess normalization and quantile normalization perform the best, whereas TMM, a method applied to the RNA-Sequencing normalization, performs the worst. The poor performance of TMM normalization is further evidenced by abnormal results from the test of differential expression (DE) of microRNA-Seq data. Comparing with the models used for DE, the choice of normalization method is the primary factor that affects the results of DE. In summary, Lowess normalization and quantile normalization are recommended for normalizing microRNA-Seq data, whereas the TMM method should be used with caution. PMID:22532701
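A compact, generic implementation of one of the better-performing methods named above, quantile normalization, written directly with NumPy; this is a textbook sketch (ties handled crudely), not the evaluation code used in the study.

```python
import numpy as np

def quantile_normalize(counts):
    """Quantile-normalize a (features x samples) matrix: every column is forced
    to share the same distribution (the mean of the per-column sorted values)."""
    order = np.argsort(counts, axis=0)                  # per-column sort order
    ranks = np.argsort(order, axis=0)                   # rank of each entry within its column
    mean_sorted = np.sort(counts, axis=0).mean(axis=1)  # reference distribution
    return mean_sorted[ranks]

# Hypothetical miRNA count matrix: 6 miRNAs x 3 libraries with different sequencing depths.
raw = np.array([[10., 100., 30.],
                [ 5.,  50., 15.],
                [20., 200., 60.],
                [ 1.,  10.,  3.],
                [50., 500., 150.],
                [ 2.,  20.,  6.]])
print(quantile_normalize(raw))
```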
EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.
Diykh, Mohammed; Li, Yan; Wen, Peng
2016-11-01
Electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, structural graph similarity and K-means (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments; the size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
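The sketch below illustrates the general idea of time-domain statistical feature extraction from EEG sub-segments followed by K-means clustering; the segment lengths, feature set and simulated signals are assumptions for illustration and do not reproduce the SGSKM pipeline itself.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.cluster import KMeans

def time_domain_features(segment, n_sub=4):
    """Split one EEG segment into sub-segments and collect simple
    time-domain statistics from each (illustrative feature set)."""
    feats = []
    for sub in np.array_split(segment, n_sub):
        feats += [sub.mean(), sub.std(), skew(sub), kurtosis(sub),
                  np.max(sub) - np.min(sub)]
    return feats

# Hypothetical single-channel EEG: 100 segments of 3000 samples each.
rng = np.random.default_rng(0)
segments = rng.standard_normal((100, 3000))
X = np.array([time_domain_features(s) for s in segments])

# Cluster the feature vectors into six candidate sleep stages.
labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(X)
print(labels[:10])
```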
NASA Astrophysics Data System (ADS)
Barreiro, Andrea K.; Ly, Cheng
2017-08-01
Rapid experimental advances now enable simultaneous electrophysiological recording of neural activity at single-cell resolution across large regions of the nervous system. Models of this neural network activity will necessarily increase in size and complexity, thus increasing the computational cost of simulating them and the challenge of analyzing them. Here we present a method to approximate the activity and firing statistics of a general firing rate network model (of the Wilson-Cowan type) subject to noisy correlated background inputs. The method requires solving a system of transcendental equations and is fast compared to Monte Carlo simulations of coupled stochastic differential equations. We implement the method with several examples of coupled neural networks and show that the results are quantitatively accurate even with moderate coupling strengths and an appreciable amount of heterogeneity in many parameters. This work should be useful for investigating how various neural attributes qualitatively affect the spiking statistics of coupled neural networks.
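The method described amounts to solving a self-consistent (transcendental) system for the steady-state rates. The following sketch solves such a fixed-point system for a generic Wilson-Cowan-type rate network with scipy; the coupling matrix, transfer function and background inputs are chosen purely for illustration and are not the authors' model.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical 3-population rate network with a sigmoidal transfer function.
W = np.array([[ 0.5, -1.0,  0.2],
              [ 0.8, -0.5,  0.1],
              [ 0.3,  0.4, -0.6]])   # coupling matrix (assumed)
I_ext = np.array([1.0, 0.5, 0.2])    # mean background input (assumed)

def transfer(x):
    return 1.0 / (1.0 + np.exp(-x))  # sigmoidal rate function

def fixed_point_eqs(r):
    # Steady state of tau*dr/dt = -r + F(W r + I): solve r = F(W r + I).
    return r - transfer(W @ r + I_ext)

r0 = np.full(3, 0.5)                 # initial guess
rates = fsolve(fixed_point_eqs, r0)
print("steady-state rates:", rates)
```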
Taylor, Lisa; Poland, Fiona; Harrison, Peter; Stephenson, Richard
2011-01-01
To evaluate a systematic treatment programme developed by the researcher that targeted aspects of visual functioning affected by visual field deficits following stroke. The study design was a non-equivalent control (conventional) group pretest-posttest quasi-experimental feasibility design, using multisite data collection methods at specified stages. The study was undertaken within three acute hospital settings as outpatient follow-up sessions. Individuals who had visual field deficits three months post stroke were studied. A conventional treatment group received routine occupational therapy, and an experimental group received, in addition, a systematic treatment programme. The treatment phase of both groups lasted six weeks. The Nottingham Adjustment Scale, a measure developed specifically for visual impairment, was used as the primary outcome measure. The change in Nottingham Adjustment Scale score was compared between the experimental (n = 7) and conventional (n = 8) treatment groups using the Wilcoxon signed ranks test. The result of Z = -2.028 (P = 0.043) showed a statistically significant difference in the change in Nottingham Adjustment Scale score between the two groups. The introduction of the systematic treatment programme resulted in a statistically significant change in the scores of the Nottingham Adjustment Scale.
ERIC Educational Resources Information Center
Alvarez-Nunez, Tanya Mae
2012-01-01
Scope and Method of Study: This quantitative, non-experimental study sought to determine if a statistically significant difference existed in student achievement on the PSE exam in Belizean primary schools for students who have teachers with varying levels of self-efficacy (high, medium and low). The Teacher Efficacy Scale (TES), which captures…
Determination of the refractive index of dehydrated cells by means of digital holographic microscopy
NASA Astrophysics Data System (ADS)
Belashov, A. V.; Zhikhoreva, A. A.; Bespalov, V. G.; Vasyutinskii, O. S.; Zhilinskaya, N. T.; Novik, V. I.; Semenova, I. V.
2017-10-01
Spatial distributions of the integral refractive index in dehydrated cells of human oral cavity epithelium are obtained by means of digital holographic microscopy, and mean refractive index of the cells is determined. The statistical analysis of the data obtained is carried out, and absolute errors of the method are estimated for different experimental conditions.
Variational stereo imaging of oceanic waves with statistical constraints.
Gallego, Guillermo; Yezzi, Anthony; Fedele, Francesco; Benetazzo, Alvise
2013-11-01
An image processing observational technique for the stereoscopic reconstruction of the waveform of oceanic sea states is developed. The technique incorporates the enforcement of any given statistical wave law modeling the quasi-Gaussianity of oceanic waves observed in nature. The problem is posed in a variational optimization framework, where the desired waveform is obtained as the minimizer of a cost functional that combines image observations, smoothness priors and a weak statistical constraint. The minimizer is obtained by combining gradient descent and multigrid methods on the necessary optimality equations of the cost functional. Robust photometric error criteria and a spatial intensity compensation model are also developed to improve the performance of the presented image matching strategy. The weak statistical constraint is thoroughly evaluated in combination with other elements presented to reconstruct and enforce constraints on experimental stereo data, demonstrating the improvement in the estimation of the observed ocean surface.
Steganalysis based on reducing the differences of image statistical characteristics
NASA Astrophysics Data System (ADS)
Wang, Ran; Niu, Shaozhang; Ping, Xijian; Zhang, Tao
2018-04-01
Compared with the process of embedding, the image contents make a more significant impact on the differences of image statistical characteristics. This makes image steganalysis a classification problem with larger within-class scatter distances and smaller between-class scatter distances. As a result, the steganalysis features become difficult to separate because of the differences in image statistical characteristics. In this paper, a new steganalysis framework which can reduce the differences of image statistical characteristics caused by various contents and processing methods is proposed. The given images are segmented into several sub-images according to their texture complexity. Steganalysis features are separately extracted from each subset with the same or similar texture complexity to build a classifier. The final steganalysis result is obtained through a weighted fusion process. The theoretical analysis and experimental results demonstrate the validity of the framework.
Mantini, Dante; Petrucci, Francesca; Del Boccio, Piero; Pieragostino, Damiana; Di Nicola, Marta; Lugaresi, Alessandra; Federici, Giorgio; Sacchetta, Paolo; Di Ilio, Carmine; Urbani, Andrea
2008-01-01
Independent component analysis (ICA) is a signal processing technique that can be utilized to recover independent signals from a set of their linear mixtures. We propose ICA for the analysis of signals obtained from large proteomics investigations such as clinical multi-subject studies based on MALDI-TOF MS profiling. The method is validated on simulated and experimental data, demonstrating its capability to correctly extract protein profiles from MALDI-TOF mass spectra. A comparison of peak detection with one open-source and two commercial methods shows its superior reliability in reducing the false discovery rate of protein peak masses. Moreover, the integration of ICA and statistical tests for detecting the differences in peak intensities between experimental groups allows the identification of protein peaks that could be indicators of a diseased state. This data-driven approach proves to be a promising tool for biomarker-discovery studies based on MALDI-TOF MS technology. The MATLAB implementation of the method described in the article and both simulated and experimental data are freely available at http://www.unich.it/proteomica/bioinf/.
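The authors provide a MATLAB implementation; a rough Python sketch of the same idea using scikit-learn's FastICA on simulated mixtures of spectrum-like sources is shown below. The peak shapes, mixing matrix and noise level are all hypothetical.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
mz = np.linspace(1000, 12000, 2000)          # hypothetical m/z axis

def peak(center, width):
    """A Gaussian peak on the m/z axis (stand-in for a protein signal)."""
    return np.exp(-0.5 * ((mz - center) / width) ** 2)

# Two hypothetical independent "protein profiles".
sources = np.vstack([peak(3000, 40) + peak(8000, 60),
                     peak(5000, 50) + peak(9500, 80)])

# Spectra from 20 subjects = linear mixtures of the profiles + noise.
mixing = rng.uniform(0.2, 1.0, size=(20, 2))
spectra = mixing @ sources + 0.01 * rng.standard_normal((20, mz.size))

# Treat each m/z position as an observation of the 20 mixed channels and
# recover two independent components (candidate protein profiles).
ica = FastICA(n_components=2, random_state=0, max_iter=1000)
profiles = ica.fit_transform(spectra.T)      # shape (2000, 2)
print(profiles.shape)
```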
"Using Power Tables to Compute Statistical Power in Multilevel Experimental Designs"
ERIC Educational Resources Information Center
Konstantopoulos, Spyros
2009-01-01
Power computations for one-level experimental designs that assume simple random samples are greatly facilitated by power tables such as those presented in Cohen's book about statistical power analysis. However, in education and the social sciences experimental designs have naturally nested structures and multilevel models are needed to compute the…
NASA Astrophysics Data System (ADS)
Galich, Nikolay E.; Filatov, Michael V.
2008-07-01
This communication describes the immunology experiments and the treatment of the experimental data. New nonlinear methods for the statistical analysis of the immunofluorescence of peripheral blood neutrophils have been developed. We used the respiratory burst reaction of DNA fluorescence in neutrophil cell nuclei caused by oxidative activity. Histograms of the photon count statistics of radiant neutrophil populations in flow cytometry experiments are considered, and distributions of the fluorescence flash frequency as functions of the fluorescence intensity are analyzed. Statistical peculiarities of the histogram sets for healthy and unhealthy donors allow all histograms to be divided into three classes. The classification is based on three different types of smoothed, long-range-scale averaged immunofluorescence distributions and their bifurcations. Heterogeneity of the long-range-scale immunofluorescence distributions likewise divides the histograms into three groups: the first group belongs to healthy donors, while the other two belong to donors with autoimmune and inflammatory diseases, some of which are not diagnosed by standard biochemical methods. Medical standards and the statistical data of the immunofluorescence histograms used to identify health and illness are interconnected. The possibilities of using immunofluorescence statistics, and their alterations, for the registration, diagnostics and monitoring of different diseases under various medical treatments have been demonstrated. Health or illness criteria are connected with statistical features of the immunofluorescence histograms, and the fluorescence of neutrophil populations is a sensitive, clear indicator of health status.
Ozturk, Cengiz; Kanat, Ayhan; Aydin, Mehmet Dumlu; Yolas, Coskun; Kabalar, Mehmet Esref; Gundogdu, Betul; Duman, Aslihan; Kanat, Ilyas Ferit; Gundogdu, Cemal
2015-01-01
Context: Somato-sensitive innervation of the bowels is maintained by the lower segments of the spinal cord, and the blood supply of the lower spinal cord is heavily dependent on the Adamkiewicz artery. Although bowel problems are sometimes seen in subarachnoid hemorrhage, neither Adamkiewicz artery spasm nor spinal cord ischemia has been elucidated as a cause of bowel dilatation so far. Aims: The goal of this study was to examine the effects of Adamkiewicz artery (AKA) vasospasm in lumbar subarachnoid hemorrhage (SAH) on the severity of bowel dilatation. Settings and Design: An experimental rabbit study. Materials and Methods: The study was conducted on 25 rabbits, which were randomly divided into three groups: spinal SAH (N = 13), serum saline (SS; N = 7) and control (N = 5) groups. Experimental spinal SAH was performed. After 21 days, volume values of the descending parts of the large bowels and the degenerated neuron density of the L5DRG were analyzed. Statistical Analysis Used: Statistical analysis was performed using PASW Statistics 18.0 for Windows (SPSS Inc., Chicago, Illinois). The two-tailed t-test and Mann-Whitney U-test were used. Statistical significance was set at P < 0.05. Results: The mean volume of the imaginary descending colons was estimated as 93 ± 12 cm3 in the control group, 121 ± 26 cm3 in the SS group and 176 ± 49 cm3 in the SAH group. Volume augmentation of the descending colons and degenerated neuron density of the L5DRG were significantly different between the SAH and the other two groups (P < 0.05). Conclusion: An inverse relationship between the living neuronal density of the L5DRG and the volume of the imaginary descending colon was observed. Our findings will aid in the planning of future experimental studies and in determining the clinical relevance of such studies. PMID:25972712
Girndt, Antje; Cockburn, Glenn; Sánchez-Tójar, Alfredo; Løvlie, Hanne; Schroeder, Julia
2017-01-01
Birds are model organisms in sperm biology. Previous work in zebra finches suggested that sperm sampled from males' faeces and ejaculates do not differ in size. Here, we tested this assumption in a captive population of house sparrows, Passer domesticus. We compared sperm length in samples from three collection techniques: female dummy, faecal and abdominal massage samples. We found that sperm were significantly shorter in faecal than in abdominal massage samples, which was explained by shorter heads and midpieces, but not flagella. This result might indicate that faecal-sampled sperm could be less mature than sperm collected by abdominal massage. The female dummy method resulted in an insufficient number of experimental ejaculates because most males ignored it. In light of these results, we recommend abdominal massage as the preferred method for avian sperm sampling. Where avian sperm cannot be collected by abdominal massage alone, we advise controlling for the sperm sampling protocol statistically.
Assessing noninferiority in a three-arm trial using the Bayesian approach.
Ghosh, Pulak; Nathoo, Farouk; Gönen, Mithat; Tiwari, Ram C
2011-07-10
Non-inferiority trials, which aim to demonstrate that a test product is not worse than a competitor by more than a pre-specified small amount, are of great importance to the pharmaceutical community. As a result, methodology for designing and analyzing such trials is required, and developing new methods for such analysis is an important area of statistical research. The three-arm trial consists of a placebo, a reference and an experimental treatment, and simultaneously tests the superiority of the reference over the placebo along with comparing this reference to an experimental treatment. In this paper, we consider the analysis of non-inferiority trials using Bayesian methods which incorporate both parametric as well as semi-parametric models. The resulting testing approach is both flexible and robust. The benefit of the proposed Bayesian methods is assessed via simulation, based on a study examining home-based blood pressure interventions. Copyright © 2011 John Wiley & Sons, Ltd.
Designing Studies That Would Address the Multilayered Nature of Health Care
Pennell, Michael; Rhoda, Dale; Hade, Erinn M.; Paskett, Electra D.
2010-01-01
We review design and analytic methods available for multilevel interventions in cancer research with particular attention to study design, sample size requirements, and potential to provide statistical evidence for causal inference. The most appropriate methods will depend on the stage of development of the research and whether randomization is possible. Early on, fractional factorial designs may be used to screen intervention components, particularly when randomization of individuals is possible. Quasi-experimental designs, including time-series and multiple baseline designs, can be useful once the intervention is designed because they require few sites and can provide the preliminary evidence to plan efficacy studies. In efficacy and effectiveness studies, group-randomized trials are preferred when randomization is possible and regression discontinuity designs are preferred otherwise if assignment based on a quantitative score is possible. Quasi-experimental designs may be used, especially when combined with recent developments in analytic methods to reduce bias in effect estimates. PMID:20386057
Standard deviation and standard error of the mean.
Lee, Dong Kyu; In, Junyong; Lee, Sangseok
2015-06-01
In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally muddle the distinctive usage between the SD and SEM in the medical literature. Because the process of calculating the SD and SEM includes different statistical inferences, each of them has its own meaning. SD is the dispersion of data in a normal distribution. In other words, SD indicates how accurately the mean represents sample data. However, the meaning of SEM involves statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). While either SD or SEM can be applied to describe data and statistical results, one should be aware of reasonable methods with which to use SD and SEM. We aim to elucidate the distinctions between SD and SEM and to provide proper usage guidelines for both, which summarize data and describe statistical results.
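A minimal numerical illustration of the distinction drawn in the abstract is sketched below; the sample values are hypothetical.

```python
import numpy as np

# Hypothetical sample of measurements.
sample = np.array([4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2, 4.7])

n = sample.size
sd = sample.std(ddof=1)          # sample standard deviation (describes spread)
sem = sd / np.sqrt(n)            # standard error of the mean (precision of the mean)

print(f"mean = {sample.mean():.3f}, SD = {sd:.3f}, SEM = {sem:.3f}")
```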
A polychromatic adaption of the Beer-Lambert model for spectral decomposition
NASA Astrophysics Data System (ADS)
Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.
2017-03-01
We present a semi-empirical forward-model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimum calibration effort to make the method applicable in routine clinical set-ups with the need for periodic re-calibration. In this work we present an experimental verification of our proposed method. The proposed method uses an adapted Beer-Lambert model, describing the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model demonstrates an accurate prediction of the registered counts for an attenuated polychromatic spectrum. Thereby, deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward-model. The experimental data also show that the model is capable of handling possible spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model provide a viable forward-model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
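To make the idea of "additional exponential terms" concrete, the sketch below fits a generic two-term exponential (polychromatic Beer-Lambert-like) forward model to hypothetical calibration counts; the functional form, thicknesses and counts are illustrative assumptions, not the authors' calibration model or data.

```python
import numpy as np
from scipy.optimize import curve_fit

# A monochromatic Beer-Lambert model would be N = N0 * exp(-mu * t);
# a polychromatic spectrum is approximated here by a sum of exponential terms.
def poly_beer_lambert(t, N0, w1, mu1, mu2):
    w2 = 1.0 - w1
    return N0 * (w1 * np.exp(-mu1 * t) + w2 * np.exp(-mu2 * t))

# Hypothetical calibration measurements: counts vs. basis-material thickness (cm).
t = np.array([0.0, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
counts = np.array([100000, 74000, 56000, 34000, 22000, 15000, 10500])

popt, _ = curve_fit(poly_beer_lambert, t, counts, p0=[1e5, 0.6, 0.35, 0.15])
print("fitted N0, w1, mu1, mu2:", popt)
```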
A Theoretical and Experimental Investigation of 1/f Noise in the Alpha Decay Rates of Americium-241.
NASA Astrophysics Data System (ADS)
Pepper, Gary T.
New experimental methods and data analysis techniques were used to investigate the hypothesis of the existence of 1/f noise in alpha particle emission rates for ^{241}Am. Experimental estimates of the flicker floor were found to be almost two orders of magnitude less than Handel's theoretical prediction and previous measurements. The existence of a flicker floor for ^{57}Co decay, a process in which no charged particles are emitted, indicates that instrumental instability is likely responsible for the values of the flicker floor obtained. The experimental results and the theoretical arguments presented indicate that a re-examination of Handel's theory of 1/f noise is appropriate. Methods for the numerical simulation of noise processes with a 1/f^n power spectral density were developed. These were used to investigate various statistical aspects of 1/f^n noise. The probability density function of the Allan variance was investigated in order to establish confidence limits for the observations made. The effect of using grouped (correlated) data for evaluating the Allan variance was also investigated.
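A minimal sketch of the two ingredients named here, generating noise with a 1/f^n power spectral density by spectral shaping and computing a non-overlapping Allan variance, is given below; the sample and group sizes are arbitrary, and this is not the thesis's specific simulation procedure.

```python
import numpy as np

def one_over_f_noise(n_samples, exponent, rng):
    """Generate noise with power spectral density ~ 1/f**exponent by
    shaping the spectrum of white Gaussian noise."""
    white = rng.standard_normal(n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                      # avoid division by zero at DC
    spectrum *= freqs ** (-exponent / 2.0)   # amplitude ~ f^(-n/2)
    return np.fft.irfft(spectrum, n=n_samples)

def allan_variance(x, m):
    """Non-overlapping Allan variance for group size m."""
    n_groups = len(x) // m
    means = x[:n_groups * m].reshape(n_groups, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(means) ** 2)

rng = np.random.default_rng(0)
x = one_over_f_noise(2 ** 16, exponent=1.0, rng=rng)
for m in (1, 4, 16, 64, 256):
    print(m, allan_variance(x, m))
```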
ToNER: A tool for identifying nucleotide enrichment signals in feature-enriched RNA-seq data
Promworn, Yuttachon; Kaewprommal, Pavita; Shaw, Philip J.; Intarapanich, Apichart; Tongsima, Sissades
2017-01-01
Background Biochemical methods are available for enriching 5′ ends of RNAs in prokaryotes, which are employed in the differential RNA-seq (dRNA-seq) and the more recent Cappable-seq protocols. Computational methods are needed to locate RNA 5′ ends from these data by statistical analysis of the enrichment. Although statistics-based analysis methods have been developed for dRNA-seq, they may not be suitable for Cappable-seq data. The more efficient enrichment method employed in Cappable-seq compared with dRNA-seq could affect data distribution and thus algorithm performance. Results We present Transformation of Nucleotide Enrichment Ratios (ToNER), a tool for statistical modeling of enrichment from RNA-seq data obtained from enriched and unenriched libraries. The tool calculates nucleotide enrichment scores and determines the global transformation for fitting to the normal distribution using the Box-Cox procedure. From the transformed distribution, sites of significant enrichment are identified. To increase the power of detection, meta-analysis across experimental replicates is offered. We tested the tool on Cappable-seq and dRNA-seq data for identifying Escherichia coli transcript 5′ ends and compared the results with those from the TSSAR tool, which is designed for analyzing dRNA-seq data. When combining results across Cappable-seq replicates, ToNER detects more known transcript 5′ ends than TSSAR. In general, the transcript 5′ ends detected by ToNER but not TSSAR occur in regions which cannot be locally modeled by TSSAR. Conclusion ToNER uses a simple yet robust statistical modeling approach, which can be used for detecting RNA 5′ ends from Cappable-seq data, in particular when combining information from experimental replicates. The ToNER tool could potentially be applied for analyzing other RNA-seq datasets in which enrichment for other structural features of RNA is employed. The program is freely available for download at the ToNER webpage (http://www4a.biotec.or.th/GI/tools/toner) and the GitHub repository (https://github.com/PavitaKae/ToNER). PMID:28542466
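The Box-Cox step described above can be illustrated with scipy: the sketch below transforms hypothetical enrichment ratios, fits the transformation parameter by maximum likelihood and flags high-scoring sites under a normal model. The simulated ratios and the 0.01 cutoff are assumptions, not ToNER's actual defaults.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical per-nucleotide enrichment ratios (enriched / unenriched),
# which are positive and strongly right-skewed before transformation.
ratios = rng.lognormal(mean=0.0, sigma=1.0, size=5000)

transformed, lam = stats.boxcox(ratios)   # fits the optimal lambda by MLE
print(f"fitted Box-Cox lambda = {lam:.3f}")

# Flag sites whose transformed score is unusually high under a fitted normal.
z = (transformed - transformed.mean()) / transformed.std(ddof=1)
p_values = stats.norm.sf(z)               # one-sided upper-tail p-values
print("significant sites:", np.sum(p_values < 0.01))
```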
NASA Astrophysics Data System (ADS)
Avakyan, L. A.; Heinz, M.; Skidanenko, A. V.; Yablunovski, K. A.; Ihlemann, J.; Meinertz, J.; Patzig, C.; Dubiel, M.; Bugaev, L. A.
2018-01-01
The formation of a localized surface plasmon resonance (SPR) spectrum of randomly distributed gold nanoparticles in the surface layer of silicate float glass, generated and implanted by UV ArF-excimer laser irradiation of a thin gold layer sputter-coated on the glass surface, was studied by the T-matrix method, which enables particle agglomeration to be taken into account. The experimental technique used is promising for the production of submicron patterns of plasmonic nanoparticles (given by laser masks or gratings) without damage to the glass surface. Analysis of the applicability of the multi-spheres T-matrix (MSTM) method to the studied material was performed through calculations of SPR characteristics for differently arranged and structured gold nanoparticles (gold nanoparticles in solution, particles pairs, and core-shell silver-gold nanoparticles) for which either experimental data or results of the modeling by other methods are available. For the studied gold nanoparticles in glass, it was revealed that the theoretical description of their SPR spectrum requires consideration of the plasmon coupling between particles, which can be done effectively by MSTM calculations. The obtained statistical distributions over particle sizes and over interparticle distances demonstrated the saturation behavior with respect to the number of particles under consideration, which enabled us to determine the effective aggregate of particles, sufficient to form the SPR spectrum. The suggested technique for the fitting of an experimental SPR spectrum of gold nanoparticles in glass by varying the geometrical parameters of the particles aggregate in the recurring calculations of spectrum by MSTM method enabled us to determine statistical characteristics of the aggregate: the average distance between particles, average size, and size distribution of the particles. The fitting strategy of the SPR spectrum presented here can be applied to nanoparticles of any nature and in various substances, and, in principle, can be extended for particles with non-spherical shapes, like ellipsoids, rod-like and other T-matrix-solvable shapes.
Nguyen, Van-Nui; Huang, Kai-Yao; Huang, Chien-Hsun; Chang, Tzu-Hao; Bretaña, Neil; Lai, K; Weng, Julia; Lee, Tzong-Yi
2015-01-01
In eukaryotes, ubiquitin-conjugation is an important mechanism underlying proteasome-mediated degradation of proteins, and as such, plays an essential role in the regulation of many cellular processes. In the ubiquitin-proteasome pathway, E3 ligases play important roles by recognizing a specific protein substrate and catalyzing the attachment of ubiquitin to a lysine (K) residue. As more and more experimental data on ubiquitin conjugation sites become available, it becomes possible to develop prediction models that can be scaled to big data. However, no method focusing on the investigation of ubiquitinated substrate specificities has been developed. Herein, we present an approach that exploits an iterative statistical method to identify ubiquitin conjugation sites with substrate site specificities. In this investigation, a total of 6259 experimentally validated ubiquitinated proteins were obtained from dbPTM. After having filtered out homologous fragments with 40% sequence identity, the training data set contained 2658 ubiquitination sites (positive data) and 5532 non-ubiquitinated sites (negative data). Due to the difficulty in characterizing the substrate site specificities of E3 ligases by conventional sequence logo analysis, a recursive statistical method has been applied to obtain significant conserved motifs. The profile hidden Markov model (profile HMM) was adopted to construct the predictive models learned from the identified substrate motifs. A five-fold cross validation was then used to evaluate the predictive model, achieving sensitivity, specificity, and accuracy of 73.07%, 65.46%, and 67.93%, respectively. Additionally, an independent testing set, completely blind to the training data of the predictive model, was used to demonstrate that the proposed method could provide a promising accuracy (76.13%) and outperform other ubiquitination site prediction tools. A case study demonstrated the effectiveness of the characterized substrate motifs for identifying ubiquitination sites. The proposed method presents a practical means of preliminary analysis and greatly diminishes the total number of potential targets required for further experimental confirmation. This method may help unravel the mechanisms and roles of these motifs in E3 recognition and ubiquitin-mediated protein degradation.
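The five-fold cross-validation protocol mentioned above is illustrated generically below with scikit-learn; a logistic-regression stand-in replaces the profile HMM, and the feature matrix is simulated, so the numbers produced have no relation to the reported 73.07%/65.46%/67.93%.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)

# Hypothetical feature matrix: 2658 positive and 5532 negative sites.
X = rng.standard_normal((2658 + 5532, 20))
y = np.concatenate([np.ones(2658, dtype=int), np.zeros(5532, dtype=int)])
X[y == 1] += 0.4                      # give positives a weak signal

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accs, sens, spec = [], [], []
for train, test in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    pred = clf.predict(X[test])
    accs.append(accuracy_score(y[test], pred))
    sens.append(recall_score(y[test], pred, pos_label=1))   # sensitivity
    spec.append(recall_score(y[test], pred, pos_label=0))   # specificity

print(f"accuracy={np.mean(accs):.3f}  sensitivity={np.mean(sens):.3f}  "
      f"specificity={np.mean(spec):.3f}")
```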
Teodoro, P E; Torres, F E; Santos, A D; Corrêa, A M; Nascimento, M; Barroso, L M A; Ceccon, G
2016-05-09
The aim of this study was to evaluate the suitability of statistics as experimental precision degree measures for trials with cowpea (Vigna unguiculata L. Walp.) genotypes. Cowpea genotype yields were evaluated in 29 trials conducted in Brazil between 2005 and 2012. The genotypes were evaluated with a randomized block design with four replications. Ten statistics that were estimated for each trial were compared using descriptive statistics, Pearson correlations, and path analysis. According to the class limits established, selective accuracy and F-test values for genotype, heritability, and the coefficient of determination adequately estimated the degree of experimental precision. Using these statistics, 86.21% of the trials had adequate experimental precision. Selective accuracy and the F-test values for genotype, heritability, and the coefficient of determination were directly related to each other, and were more suitable than the coefficient of variation and the least significant difference (by the Tukey test) to evaluate experimental precision in trials with cowpea genotypes.
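One of the statistics discussed, selective accuracy, is commonly computed from the genotype F-test value as sqrt(1 - 1/F) in the crop-trial literature; the snippet below sketches that calculation for a few hypothetical F values. The formula is the usual one from that literature, not quoted from this paper.

```python
import numpy as np

def selective_accuracy(f_value):
    """Selective accuracy from the genotype F-test value, using the common
    formulation SA = sqrt(1 - 1/F) for F >= 1 (assumed, not from the paper)."""
    return np.sqrt(1.0 - 1.0 / f_value) if f_value >= 1.0 else 0.0

# Hypothetical genotype F values from a set of cowpea trials.
for f in (1.5, 2.0, 5.0, 10.0):
    print(f"F = {f:5.1f}  ->  selective accuracy = {selective_accuracy(f):.3f}")
```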
ERIC Educational Resources Information Center
Perrett, Jamis J.
2012-01-01
This article demonstrates how textbooks differ in their description of the term "experimental unit". Advanced Placement Statistics teachers and students are often limited in their statistical knowledge by the information presented in their classroom textbook. Definitions and descriptions differ among textbooks as well as among different…
Pretreatment of rapeseed straw by sodium hydroxide.
Kang, Kyeong Eop; Jeong, Gwi-Taek; Park, Don-Hee
2012-06-01
A sodium hydroxide pretreatment method for rapeseed straw was investigated for the production of bioethanol and biobutanol. Various pretreatment parameters, including temperature, time, and sodium hydroxide concentration, were optimized using a statistical method, a central composite design from response surface methodology. For sodium hydroxide pretreatment, the optimal conditions were found to be 7.9% sodium hydroxide concentration, 5.5 h of reaction time, and a reaction temperature of 68.4 °C. The maximum glucose yield that can be recovered by enzymatic hydrolysis at the optimum conditions was predicted to be 95.7%, and the experimental result was 94.0 ± 4.8%, in agreement with the model prediction. An increase in surface area and pore size of the sodium hydroxide-pretreated rapeseed straw was observed by scanning electron microscopy.
Experimental and numerical characterization of expanded glass granules
NASA Astrophysics Data System (ADS)
Chaudry, Mohsin Ali; Woitzik, Christian; Düster, Alexander; Wriggers, Peter
2018-07-01
In this paper, the material response of expanded glass granules at different scales and under different boundary conditions is investigated. At grain scale, single particle tests can be used to determine properties like Young's modulus or crushing strength. With experiments like triaxial and oedometer tests, it is possible to examine the bulk mechanical behaviour of the granular material. Our experimental investigation is complemented by a numerical simulation where the discrete element method is used to compute the mechanical behaviour of such materials. In order to improve the simulation quality, effects such as rolling resistance, inelastic behaviour, damage, and crushing are also included in the discrete element method. Furthermore, the variation of the material properties of granules is modelled by a statistical distribution and included in our numerical simulation.
NASA Technical Reports Server (NTRS)
Abbey, Craig K.; Eckstein, Miguel P.
2002-01-01
We consider estimation and statistical hypothesis testing on classification images obtained from the two-alternative forced-choice experimental paradigm. We begin with a probabilistic model of task performance for simple forced-choice detection and discrimination tasks. Particular attention is paid to general linear filter models because these models lead to a direct interpretation of the classification image as an estimate of the filter weights. We then describe an estimation procedure for obtaining classification images from observer data. A number of statistical tests are presented for testing various hypotheses from classification images based on some more compact set of features derived from them. As an example of how the methods we describe can be used, we present a case study investigating detection of a Gaussian bump profile.
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with matters concerned with the problem of determining the statistical characteristics of variable parameters (the variation range and distribution law) in analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to uncertainty in input data is presented. The need to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular, in the closing correlations of the loop thermal hydraulics block, is shown. Such a method shall feature the minimal degree of subjectivism and must be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law in the above-mentioned range provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated taking as an example the problem of estimating the uncertainty of a parameter appearing in the model describing transition to post-burnout heat transfer that is used in the thermal-hydraulic computer code KORSAR. The performed study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law in the above-mentioned range by the Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, application of the method can make it possible to achieve a smaller degree of conservatism in the expert estimates of uncertainties pertinent to the model parameters used in computer codes.
Climate Considerations of the Electricity Supply Systems in Industries
NASA Astrophysics Data System (ADS)
Asset, Khabdullin; Zauresh, Khabdullina
2014-12-01
The study is focused on the analysis of climate considerations of electricity supply systems in the pellet industry. The developed analysis model consists of two modules: a module for the statistical evaluation of active power losses and a module for the evaluation of climate aspects. The statistical module is presented as a universal mathematical model of electrical systems and components of industrial load, and it forms a basis for a detailed accounting of power losses across the voltage levels. On the basis of the universal model, a set of programs is designed to perform the calculations and experimental research. It helps to obtain the statistical characteristics of the power losses and loads of the electricity supply systems and to define the nature of changes in these characteristics. Within the module, several methods and algorithms are developed for calculating the parameters of equivalent circuits of low- and high-voltage ADC and SD with a massive smooth rotor with laminated poles. The climate aspects module includes an analysis of the experimental data of the power supply system in pellet production. It allows the identification of GHG emission reduction parameters: operation hours, type of electrical motors, values of load factor and deviation from the standard value of voltage.
NASA Astrophysics Data System (ADS)
Jia, Huizhen; Sun, Quansen; Ji, Zexuan; Wang, Tonghan; Chen, Qiang
2014-11-01
The goal of no-reference/blind image quality assessment (NR-IQA) is to devise a perceptual model that can accurately predict the quality of a distorted image as human opinions, in which feature extraction is an important issue. However, the features used in the state-of-the-art "general purpose" NR-IQA algorithms are usually natural scene statistics (NSS) based or are perceptually relevant; therefore, the performance of these models is limited. To further improve the performance of NR-IQA, we propose a general purpose NR-IQA algorithm which combines NSS-based features with perceptually relevant features. The new method extracts features in both the spatial and gradient domains. In the spatial domain, we extract the point-wise statistics for single pixel values which are characterized by a generalized Gaussian distribution model to form the underlying features. In the gradient domain, statistical features based on neighboring gradient magnitude similarity are extracted. Then a mapping is learned to predict quality scores using a support vector regression. The experimental results on the benchmark image databases demonstrate that the proposed algorithm correlates highly with human judgments of quality and leads to significant performance improvements over state-of-the-art methods.
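The point-wise spatial statistics described here are typically summarized by fitting a generalized Gaussian distribution; a minimal sketch with scipy's gennorm on simulated coefficients is shown below. The simulated data and the choice to fix the location at zero are illustrative assumptions, not the authors' exact feature set.

```python
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(0)

# Hypothetical normalized luminance coefficients from one image
# (heavy-tailed, roughly Laplacian, i.e. shape parameter near 1).
coeffs = rng.laplace(loc=0.0, scale=0.6, size=50_000)

# Fit a generalized Gaussian: beta is the shape, scale the spread.
beta, loc, scale = gennorm.fit(coeffs, floc=0.0)
print(f"shape (beta) = {beta:.3f}, scale = {scale:.3f}")

# The fitted (beta, scale) pair can serve as two NSS features of the image.
features = np.array([beta, scale])
```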
Bringing Clouds into Our Lab! - The Influence of Turbulence on the Early Stage Rain Droplets
NASA Astrophysics Data System (ADS)
Yavuz, Mehmet Altug; Kunnen, Rudie; Heijst, Gertjan; Clercx, Herman
2015-11-01
We are investigating a droplet-laden flow in an air-filled turbulence chamber, forced by speaker-driven air jets. The speakers are running in a random manner; yet they allow us to control and define the statistics of the turbulence. We study the motion of droplets with tunable size (Stokes numbers ~ 0.13 - 9) in a turbulent flow, mimicking the early stages of raindrop formation. 3D Particle Tracking Velocimetry (PTV) together with Laser Induced Fluorescence (LIF) methods are chosen as the experimental methods to track the droplets and collect data for statistical analysis. Thereby it is possible to study the spatial distribution of the droplets in turbulence using the so-called Radial Distribution Function (RDF), a statistical measure to quantify the clustering of particles. Additionally, the 3D-PTV technique allows us to measure velocity statistics of the droplets and the influence of the turbulence on droplet trajectories, both individually and collectively. In this contribution, we will present the clustering probability quantified by the RDF for different Stokes numbers. We will explain the physics underlying the influence of turbulence on droplet cluster behavior. This study is supported by FOM/NWO Netherlands.
High Order Entropy-Constrained Residual VQ for Lossless Compression of Images
NASA Technical Reports Server (NTRS)
Kossentini, Faouzi; Smith, Mark J. T.; Scales, Allen
1995-01-01
High order entropy coding is a powerful technique for exploiting high order statistical dependencies. However, the exponentially high complexity associated with such a method often discourages its use. In this paper, an entropy-constrained residual vector quantization method is proposed for lossless compression of images. The method consists of first quantizing the input image using a high order entropy-constrained residual vector quantizer and then coding the residual image using a first order entropy coder. The distortion measure used in the entropy-constrained optimization is essentially the first order entropy of the residual image. Experimental results show very competitive performance.
Improved Adaptive LSB Steganography Based on Chaos and Genetic Algorithm
NASA Astrophysics Data System (ADS)
Yu, Lifang; Zhao, Yao; Ni, Rongrong; Li, Ting
2010-12-01
We propose a novel steganographic method in JPEG images with high performance. Firstly, we propose improved adaptive LSB steganography, which can achieve high capacity while preserving the first-order statistics. Secondly, in order to minimize visual degradation of the stego image, we shuffle the bit order of the message based on a chaotic map whose parameters are selected by a genetic algorithm. Shuffling the message's bit order provides us with a new way to improve the performance of steganography. Experimental results show that our method outperforms classical steganographic methods in image quality, while preserving the characteristics of the histogram and providing high capacity.
The non-statistical dynamics of the 18O + 32O2 isotope exchange reaction at two energies
NASA Astrophysics Data System (ADS)
Van Wyngarden, Annalise L.; Mar, Kathleen A.; Quach, Jim; Nguyen, Anh P. Q.; Wiegel, Aaron A.; Lin, Shi-Ying; Lendvay, Gyorgy; Guo, Hua; Lin, Jim J.; Lee, Yuan T.; Boering, Kristie A.
2014-08-01
The dynamics of the 18O(3P) + 32O2 isotope exchange reaction were studied using crossed atomic and molecular beams at collision energies (Ecoll) of 5.7 and 7.3 kcal/mol, and experimental results were compared with quantum statistical (QS) and quasi-classical trajectory (QCT) calculations on the O3(X1A') potential energy surface (PES) of Babikov et al. [D. Babikov, B. K. Kendrick, R. B. Walker, R. T. Pack, P. Fleurat-Lesard, and R. Schinke, J. Chem. Phys. 118, 6298 (2003)]. In both QS and QCT calculations, agreement with experiment was markedly improved by performing calculations with the experimental distribution of collision energies instead of fixed at the average collision energy. At both collision energies, the scattering displayed a forward bias, with a smaller bias at the lower Ecoll. Comparisons with the QS calculations suggest that 34O2 is produced with a non-statistical rovibrational distribution that is hotter than predicted, and the discrepancy is larger at the lower Ecoll. If this underprediction of rovibrational excitation by the QS method is not due to PES errors and/or to non-adiabatic effects not included in the calculations, then this collision energy dependence is opposite to what might be expected based on collision complex lifetime arguments and opposite to that measured for the forward bias. While the QCT calculations captured the experimental product vibrational energy distribution better than the QS method, the QCT results underpredicted rotationally excited products, overpredicted forward-bias and predicted a trend in the strength of forward-bias with collision energy opposite to that measured, indicating that it does not completely capture the dynamic behavior measured in the experiment. Thus, these results further underscore the need for improvement in theoretical treatments of dynamics on the O3(X1A') PES and perhaps of the PES itself in order to better understand and predict non-statistical effects in this reaction and in the formation of ozone (in which the intermediate O3* complex is collisionally stabilized by a third body). The scattering data presented here at two different collision energies provide important benchmarks to guide these improvements.
AISLE: an automatic volumetric segmentation method for the study of lung allometry.
Ren, Hongliang; Kazanzides, Peter
2011-01-01
We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.
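Segmentation accuracy "in terms of volumetric overlap metric" is commonly reported as a Dice coefficient; the sketch below computes it between an automatic mask and a ground-truth mask, both simulated here for illustration (the abstract does not specify that Dice is the exact metric used).

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice coefficient 2*|A and B| / (|A| + |B|), a standard volumetric overlap metric."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical 3D segmentation masks (automatic vs. radiologist ground truth).
rng = np.random.default_rng(0)
truth = rng.random((64, 64, 32)) > 0.7
auto = truth.copy()
flip = rng.random(truth.shape) > 0.98      # simulate small disagreements
auto[flip] = ~auto[flip]

print(f"Dice overlap = {dice_overlap(auto, truth):.3f}")
```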
Kholeif, S A
2001-06-01
A new method that belongs to the differential category for determining the end points from potentiometric titration curves is presented. It uses a preprocess to find first derivative values by fitting four data points in and around the region of inflection to a non-linear function, and then locate the end point, usually as a maximum or minimum, using an inverse parabolic interpolation procedure that has an analytical solution. The behavior and accuracy of the sigmoid and cumulative non-linear functions used are investigated against three factors. A statistical evaluation of the new method using linear least-squares method validation and multifactor data analysis are covered. The new method is generally applied to symmetrical and unsymmetrical potentiometric titration curves, and the end point is calculated using numerical procedures only. It outperforms the "parent" regular differential method in almost all factors levels and gives accurate results comparable to the true or estimated true end points. Calculated end points from selected experimental titration curves compatible with the equivalence point category of methods, such as Gran or Fortuin, are also compared with the new method.
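The closed-form step of inverse parabolic interpolation can be sketched as follows: the vertex of a parabola through three points on the first-derivative curve gives the end-point estimate analytically. The three-point formula and the example derivative values below are generic illustrations, not the paper's four-point non-linear preprocessing.

```python
import numpy as np

def parabola_vertex(x, y):
    """Abscissa of the vertex of the parabola through three points
    (x[0], y[0]), (x[1], y[1]), (x[2], y[2]) -- inverse parabolic
    interpolation with a closed-form (analytical) solution."""
    num = (x[1] - x[0]) ** 2 * (y[1] - y[2]) - (x[1] - x[2]) ** 2 * (y[1] - y[0])
    den = (x[1] - x[0]) * (y[1] - y[2]) - (x[1] - x[2]) * (y[1] - y[0])
    return x[1] - 0.5 * num / den

# Hypothetical first-derivative values (dE/dV) around the inflection region.
volume = np.array([10.2, 10.4, 10.6])            # titrant volume, mL
slope = np.array([85.0, 132.0, 118.0])           # first derivative of the curve

end_point = parabola_vertex(volume, slope)
print(f"estimated end point: {end_point:.3f} mL")
```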
Pourkhalili, Azin; Mirlohi, Maryam; Rahimi, Ebrahim
2013-01-01
Lamb meat is regarded as an important source of highly bioavailable iron (heme iron) in the Iranian diet. The main objective of this study is to evaluate the effect of traditional cooking methods on the iron changes in lamb meat. Four published experimental methods for the determination of heme iron were assessed analytically and statistically. Samples were selected from lambs' loin. Standard methods (AOAC) were used for proximate analysis. For measuring heme iron, the results of the four experimental methods were compared regarding their compliance with the Ferrozine method, which was used for the determination of nonheme iron. Among the three cooking methods, the lowest total iron and heme iron were found with the boiling method. The proportions of heme iron to total iron in raw, boiled and grilled lamb meat were 65.70%, 67.75%, and 76.01%, respectively. In measuring heme iron, the comparison of the methods showed that the method in which the heme extraction solution was composed of 90% acetone, 18% water, and 2% hydrochloric acid was more appropriate and correlated better with the heme iron content calculated from the difference between total iron and nonheme iron.
Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS
Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.
2016-01-01
We introduce a new method for normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust variabilities in ion intensities that are not caused by biological differences but experimental bias. There are different sources of bias including variabilities during sample collection and sample storage, poor experimental design, noise, etc. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC) with parameters considered as a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332
Crosta, Fernando; Nishiwaki-Dantas, Maria Cristina; Silvino, Wilmar; Dantas, Paulo Elias Correa
2005-01-01
To verify the frequency of study design, applied statistical analysis and approval by institutional review offices (Ethics Committees) of articles published in the "Arquivos Brasileiros de Oftalmologia" during a 10-year interval, with a subsequent comparative and critical analysis with respect to some of the main international journals in the field of Ophthalmology. A systematic review without meta-analysis was performed. Scientific papers published in the "Arquivos Brasileiros de Oftalmologia" between January 1993 and December 2002 were reviewed by two independent reviewers and classified according to the applied study design, statistical analysis and approval by the institutional review offices. To categorize those variables, a descriptive statistical analysis was used. After applying inclusion and exclusion criteria, 584 articles were reviewed for the evaluation of statistical analysis and 725 articles for the evaluation of study design. Contingency table (23.10%) was the most frequently applied statistical method, followed by non-parametric tests (18.19%), Student's t test (12.65%), central tendency measures (10.60%) and analysis of variance (9.81%). Of the 584 reviewed articles, 291 (49.82%) presented no statistical analysis. Observational case series (26.48%) was the most frequently used type of study design, followed by interventional case series (18.48%), observational case description (13.37%), non-random clinical study (8.96%) and experimental study (8.55%). We found a higher frequency of observational clinical studies and a lack of statistical analysis in almost half of the published papers. An increase in studies with approval by an institutional review Ethics Committee was noted after this became mandatory in 1996.
The analgesic effects of exogenous melatonin in humans.
Andersen, Lars Peter Holst
2016-10-01
The hormone melatonin is produced with a circadian rhythm by the pineal gland in humans. The melatonin rhythm provides an endogenous synchronizer, modulating e.g. blood pressure, body temperature, cortisol rhythm, the sleep-wake cycle, immune function and anti-oxidative defence. Interestingly, a number of experimental animal studies demonstrate significant dose-dependent anti-nociceptive effects of exogenous melatonin. Similarly, recent experimental and clinical studies in humans indicate significant analgesic effects. In study I, we systematically reviewed all randomized studies investigating clinical effects of perioperative melatonin. Meta-analyses demonstrated significant analgesic and anxiolytic effects of melatonin in surgical patients, equating to reductions of 20 mm and 19 mm, respectively, on a VAS, compared with placebo. Profound heterogeneity between the included studies was, however, present. In study II, we aimed to investigate the analgesic, anti-hyperalgesic and anti-inflammatory effects of exogenous melatonin in a validated human inflammatory pain model, the human burn model. The study was performed as a randomized, double-blind, placebo-controlled crossover study. Primary outcomes were pain during the burn injury and areas of secondary hyperalgesia. No significant effects of exogenous melatonin were observed with respect to primary or secondary outcomes, compared to placebo. Studies III and IV estimated the pharmacokinetic variables of exogenous melatonin. Oral melatonin demonstrated a tmax value of 41 minutes. Bioavailability of oral melatonin was only 3%. The elimination t1/2 was approximately 45 minutes following both oral and intravenous administration. High-dose intravenous melatonin was not associated with increased sedation, in terms of simple reaction times, compared to placebo. Similarly, no other adverse effects were reported. In Study V, we aimed to re-analyse data obtained from a randomized analgesic drug trial using a selection of standard statistical tests. Furthermore, we presented an integrated assessment method for longitudinally measured pain intensity and opioid consumption. Our analyses documented that the employed statistical method impacted the statistical significance of post-operative analgesic outcomes. Furthermore, the novel integrated assessment method combines two interdependent outcomes, lowers the risk of type 2 errors, increases the statistical power, and provides a more accurate description of post-operative analgesic efficacy. Exogenous melatonin may offer an effective and safe analgesic drug. At this moment, however, the results of human studies have been contradictory. High-quality randomized experimental and clinical studies are still needed to establish a "genuine" analgesic effect of the drug in humans. Other perioperative effects of exogenous melatonin should also be investigated before melatonin can be introduced for routine clinical use in surgical patients. Despite promising experimental and clinical findings, several unanswered questions also relate to the optimal dosage, timing of administration and administration route of exogenous melatonin.
Parametric and experimental analysis using a power flow approach
NASA Technical Reports Server (NTRS)
Cuschieri, J. M.
1988-01-01
Having defined and developed a structural power flow approach for the analysis of structure-borne transmission of structural vibrations, the technique is used to perform an analysis of the influence of structural parameters on the transmitted energy. As a base for comparison, the parametric analysis is first performed using a Statistical Energy Analysis approach and the results compared with those obtained using the power flow approach. The advantages of using structural power flow are thus demonstrated by comparing the type of results obtained by the two methods. Additionally, to demonstrate the advantages of using the power flow method and to show that the power flow results represent a direct physical parameter that can be measured on a typical structure, an experimental investigation of structural power flow is also presented. Results are presented for an L-shaped beam for which an analytical solution has already been obtained. Furthermore, the various methods available to measure vibrational power flow are compared to investigate the advantages and disadvantages of each method.
NASA Astrophysics Data System (ADS)
Smolin, I. Yu.; Kulkov, A. S.; Makarov, P. V.; Tunda, V. A.; Krasnoveikin, V. A.; Eremin, M. O.; Bakeev, R. A.
2017-12-01
The aim of the paper is to analyze experimental data on the dynamic response of a marble specimen in uniaxial compression. To do so, we use methods of mathematical statistics. The lateral surface velocity evolution obtained by the laser Doppler vibrometer represents the data for analysis. The registered data were regarded as a time series that reflects the deformation evolution of the specimen loaded up to failure. The revealed changes in statistical parameters were considered precursors of failure. It is shown that before failure the deformation response is autocorrelated and reflects states of dynamic chaos and self-organized criticality.
Stokes-correlometry of polarization-inhomogeneous objects
NASA Astrophysics Data System (ADS)
Ushenko, O. G.; Dubolazov, A.; Bodnar, G. B.; Bachynskiy, V. T.; Vanchulyak, O.
2018-01-01
The paper consists of two parts. The first part presents short theoretical basics of the Stokes-correlometry method for describing the optical anisotropy of biological tissues. Experimentally measured coordinate distributions of the modulus (MSV) and phase (PhSV) of the complex Stokes vector of skeletal muscle tissue are provided. The values and ranges of change of the statistical moments of the 1st to 4th orders, which characterize the distributions of MSV and PhSV values, are defined. The second part presents the statistical analysis of the distributions of MSV and PhSV. Objective criteria for the differentiation of samples with urinary incontinence are defined.
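The statistical moments of the 1st to 4th orders used to characterize the MSV and PhSV distributions are the ordinary mean, variance, skewness and kurtosis. A minimal sketch of their computation on a synthetic map is given below; the beta-distributed stand-in for a measured MSV map is purely illustrative.

```python
import numpy as np
from scipy import stats

def first_four_moments(values):
    """Mean, variance, skewness and excess kurtosis of a value distribution."""
    v = np.asarray(values, dtype=float).ravel()
    return {
        "M1 (mean)": np.mean(v),
        "M2 (variance)": np.var(v),
        "M3 (skewness)": stats.skew(v),
        "M4 (kurtosis)": stats.kurtosis(v),
    }

# Synthetic stand-in for a measured MSV map (the real maps come from the polarimeter)
rng = np.random.default_rng(1)
msv_map = rng.beta(2.0, 5.0, size=(256, 256))
for name, value in first_four_moments(msv_map).items():
    print(f"{name}: {value:.4f}")
```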
Huang, Bo; Kuan, Pei Fen
2014-11-01
Delayed dose-limiting toxicities (i.e. beyond the first cycle of treatment) are a challenge for phase I trials. The time-to-event continual reassessment method (TITE-CRM) is a Bayesian dose-finding design to address the issue of long observation times and early patient drop-out. It uses a weighted binomial likelihood with weights assigned to observations by the unknown time-to-toxicity distribution, and is open to accrual continually. To avoid dosing at overly toxic levels while retaining accuracy and efficiency for DLT evaluation that involves multiple cycles, we propose an adaptive weight function that incorporates cyclical data of the experimental treatment with parameters updated continually. This provides a reasonable estimate for the time-to-toxicity distribution by accounting for inter-cycle variability, and maintains the statistical properties of consistency and coherence. A case study of a first-in-human trial in cancer for an experimental biologic is presented using the proposed design. Design calibrations for the clinical and statistical parameters are conducted to ensure good operating characteristics. Simulation results show that the proposed TITE-CRM design with the adaptive weight function yields significantly shorter trial duration, does not expose patients to additional risk, is competitive against the existing weighting methods, and possesses some desirable properties. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
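For orientation, the sketch below implements the core of a standard TITE-CRM step (linear time-to-event weights, a one-parameter power model and a grid posterior); it does not reproduce the adaptive cycle-based weight function proposed in the paper, and the skeleton, prior, target and patient data are illustrative assumptions.

```python
import numpy as np

# Skeleton of prior DLT probabilities per dose level (illustrative)
skeleton = np.array([0.05, 0.10, 0.20, 0.30, 0.45])
target = 0.25

def tite_crm_next_dose(doses, tox, follow_up, obs_window, a_grid=np.linspace(-3, 3, 601)):
    """Posterior-mean DLT estimates with time-to-event weights; returns the next dose index.

    doses: assigned dose index per patient; tox: 1 if a DLT was observed, else 0;
    follow_up: time observed so far; obs_window: full DLT observation window.
    """
    w = np.clip(np.asarray(follow_up) / obs_window, 0.0, 1.0)   # standard linear TITE weights
    w[np.asarray(tox) == 1] = 1.0                               # full weight once a DLT occurs
    prior = np.exp(-0.5 * (a_grid / 1.34) ** 2)                 # N(0, 1.34^2) prior on a
    p = skeleton[np.asarray(doses)][:, None] ** np.exp(a_grid)[None, :]  # power model
    y = np.asarray(tox)[:, None]
    lik = np.prod(np.where(y == 1, p, 1.0 - w[:, None] * p), axis=0)
    post = lik * prior
    post /= post.sum()
    p_hat = (skeleton[:, None] ** np.exp(a_grid)[None, :] * post[None, :]).sum(axis=1)
    return int(np.argmin(np.abs(p_hat - target)))

# Three patients part-way through a multi-cycle (say 63-day) observation window
next_dose = tite_crm_next_dose(doses=[0, 1, 1], tox=[0, 0, 1],
                               follow_up=[63, 40, 21], obs_window=63)
print("recommended next dose level:", next_dose)
```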
Papanikolaou, Yannis; Tsoumakas, Grigorios; Laliotis, Manos; Markantonatos, Nikos; Vlahavas, Ioannis
2017-09-22
In this paper we present the approach that we employed to deal with large-scale multi-label semantic indexing of biomedical papers. This work was mainly implemented within the context of the BioASQ challenge (2013-2017), a challenge concerned with biomedical semantic indexing and question answering. Our main contribution is a MUlti-Label Ensemble method (MULE) that incorporates a McNemar statistical significance test in order to validate the combination of the constituent machine learning algorithms. Secondary contributions include a study of the temporal aspects of the BioASQ corpus (the observations also apply to BioASQ's super-set, the PubMed articles collection) and the proper parametrization of the algorithms used to deal with this challenging classification task. The ensemble method that we developed is compared to other approaches in experimental scenarios with subsets of the BioASQ corpus, giving positive results. In our participation in the BioASQ challenge we obtained first place in 2013 and second place in the four following years, steadily outperforming MTI, the indexing system of the National Library of Medicine (NLM). The results of our experimental comparisons suggest that employing a statistical significance test to validate the ensemble method's choices is the optimal approach for ensembling multi-label classifiers, especially in contexts with many rare labels.
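A McNemar test of the kind the ensemble uses to decide whether two constituent classifiers differ significantly can be sketched as follows; the continuity-corrected chi-square version is shown, and the toy predictions and the decision printed at the end are illustrative, not the MULE ensemble logic itself.

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_test(y_true, pred_a, pred_b):
    """McNemar's test (continuity-corrected) on the discordant predictions of two classifiers."""
    correct_a = np.asarray(pred_a) == np.asarray(y_true)
    correct_b = np.asarray(pred_b) == np.asarray(y_true)
    b = np.sum(correct_a & ~correct_b)   # A right, B wrong
    c = np.sum(~correct_a & correct_b)   # B right, A wrong
    stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
    return stat, 1.0 - chi2.cdf(stat, df=1)

# Toy binary relevance decisions for one label (illustrative only)
rng = np.random.default_rng(2)
truth = rng.integers(0, 2, 500)
clf_a = np.where(rng.random(500) < 0.85, truth, 1 - truth)
clf_b = np.where(rng.random(500) < 0.80, truth, 1 - truth)
stat, p = mcnemar_test(truth, clf_a, clf_b)
print(f"McNemar chi2 = {stat:.2f}, p = {p:.3f}")
```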
Sea Ice Drift Monitoring in the Bohai Sea Based on GF4 Satellite
NASA Astrophysics Data System (ADS)
Zhao, Y.; Wei, P.; Zhu, H.; Xing, B.
2018-04-01
The Bohai Sea is the inland sea with the highest latitude in China. In winter, freezing occurs in the Bohai Sea due to frequent cold wave influxes. According to historical records, there have been three serious sea ice events in the Bohai Sea in the past 50 years, which caused heavy economic losses. It is therefore of great significance to monitor sea ice and sea ice drift in the Bohai Sea. GF4 images have the advantages of short imaging time and high spatial resolution. Based on GF4 satellite images, three methods, SIFT (scale-invariant feature transform), MCC (maximum cross-correlation) and SIFT combined with MCC, are used to monitor sea ice drift and to calculate the speed and direction of the drift; the three sets of results are compared and analyzed against expert interpretation and historical statistical data. The experimental results show that the results of all three methods are consistent with expert interpretation and historical statistics. Therefore, GF4 remote sensing satellite images are capable of monitoring sea ice drift and can be used for drift monitoring of sea ice in the Bohai Sea.
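A minimal sketch of the MCC (maximum cross-correlation) idea used for drift estimation is shown below: a template from the first acquisition is matched against shifted windows of the second. The exhaustive integer-shift search and the synthetic scene are simplifying assumptions; operational processing of GF4 imagery involves georeferencing and sub-pixel refinement not shown here.

```python
import numpy as np

def mcc_drift(patch_t0, image_t1, max_shift=20):
    """Maximum cross-correlation: find the (dy, dx) shift of a template between two acquisitions."""
    best, best_shift = -np.inf, (0, 0)
    h, w = patch_t0.shape
    t = (patch_t0 - patch_t0.mean()) / patch_t0.std()
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            win = image_t1[max_shift + dy:max_shift + dy + h, max_shift + dx:max_shift + dx + w]
            wn = (win - win.mean()) / (win.std() + 1e-12)
            r = np.mean(t * wn)                      # normalized cross-correlation
            if r > best:
                best, best_shift = r, (dy, dx)
    return best_shift, best

# Synthetic example: a 64x64 "ice" patch displaced by (3, -5) pixels between images
rng = np.random.default_rng(3)
scene = rng.random((200, 200))
p0 = scene[80:144, 80:144]
image_t1 = np.roll(np.roll(scene, 3, axis=0), -5, axis=1)[60:184, 60:184]
shift, score = mcc_drift(p0, image_t1, max_shift=20)
print("estimated drift (dy, dx):", shift, "correlation:", round(score, 3))
```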
Mechanistic analysis of challenge-response experiments.
Shotwell, M S; Drake, K J; Sidorov, V Y; Wikswo, J P
2013-09-01
We present an application of mechanistic modeling and nonlinear longitudinal regression in the context of biomedical response-to-challenge experiments, a field where these methods are underutilized. In this type of experiment, a system is studied by imposing an experimental challenge, and then observing its response. The combination of mechanistic modeling and nonlinear longitudinal regression has brought new insight, and revealed an unexpected opportunity for optimal design. Specifically, the mechanistic aspect of our approach enables the optimal design of experimental challenge characteristics (e.g., intensity, duration). This article lays some groundwork for this approach. We consider a series of experiments wherein an isolated rabbit heart is challenged with intermittent anoxia. The heart responds to the challenge onset, and recovers when the challenge ends. The mean response is modeled by a system of differential equations that describe a candidate mechanism for cardiac response to anoxia challenge. The cardiac system behaves more variably when challenged than when at rest. Hence, observations arising from this experiment exhibit complex heteroscedasticity and sharp changes in central tendency. We present evidence that an asymptotic statistical inference strategy may fail to adequately account for statistical uncertainty. Two alternative methods are critiqued qualitatively (i.e., for utility in the current context), and quantitatively using an innovative Monte-Carlo method. We conclude with a discussion of the exciting opportunities in optimal design of response-to-challenge experiments. © 2013, The International Biometric Society.
NASA Astrophysics Data System (ADS)
Zhang, Zhu; Li, Hongbin; Tang, Dengping; Hu, Chen; Jiao, Yang
2017-10-01
Metering performance is the key parameter of an electronic voltage transformer (EVT), and it requires high accuracy. The conventional off-line calibration method using a standard voltage transformer is not suitable for this key equipment in a smart substation, which needs on-line monitoring. In this article, we propose a method for monitoring the metering performance of an EVT on-line based on cyber-physical correlation analysis. Exploiting the electrical and physical properties of a substation operating in three-phase symmetry, the principal component analysis method is used to separate the metering deviation caused by primary-side fluctuations from that caused by an EVT anomaly. The characteristic statistics of the measured data during operation are extracted, and the metering performance of the EVT is evaluated by analyzing the changes in these statistics. The experimental results show that the method accurately monitors the metering deviation of a Class 0.2 EVT. The method thus enables accurate on-line monitoring of the metering performance of an EVT without a standard voltage transformer.
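The separation of common-mode primary fluctuations from an EVT-specific deviation can be illustrated with a small PCA sketch on three-phase ratio data; the simulated drift on phase B, the SVD-based implementation and the residual statistic are illustrative assumptions rather than the paper's monitoring algorithm.

```python
import numpy as np

# Illustrative: ratios of three-phase EVT secondary readings to their nominal values.
# A primary-side fluctuation moves all three phases together (common mode);
# a metering drift in one EVT moves only its own phase.
rng = np.random.default_rng(4)
n = 2000
common = 1.0 + 0.01 * np.sin(np.linspace(0, 20, n)) + 0.002 * rng.normal(size=n)
phases = np.column_stack([common, common, common]) + 0.0005 * rng.normal(size=(n, 3))
phases[n // 2:, 1] += np.linspace(0, 0.004, n - n // 2)      # slow drift on phase B

X = phases - phases.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)             # PCA via SVD
scores = X @ Vt.T
# The first component captures the common primary fluctuation; the residual
# components carry the EVT-specific deviation, whose statistics can be monitored.
residual = X - np.outer(scores[:, 0], Vt[0])
print("residual std per phase:", np.round(residual.std(axis=0), 5))
```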
Schukken, Y H; Rauch, B J; Morelli, J
2013-04-01
The objective of this paper was to define standardized protocols for determining the efficacy of a postmilking teat disinfectant following experimental exposure of teats to both Staphylococcus aureus and Streptococcus agalactiae. The standardized protocols describe the selection of cows and herds and define the critical points in performing experimental exposure, performing bacterial culture, evaluating the culture results, and finally performing statistical analyses and reporting of the results. The protocols define both negative control and positive control trials. For negative control trials, the protocol states that an efficacy of reducing new intramammary infections (IMI) of at least 40% is required for a teat disinfectant to be considered effective. For positive control trials, noninferiority to a control disinfectant with a published efficacy of reducing new IMI of at least 70% is required. Sample sizes for both negative and positive control trials are calculated. Positive control trials are expected to require a large trial size. Statistical analysis methods are defined and, in the proposed methods, the rate of IMI may be analyzed using generalized linear mixed models. The efficacy of the test product can be evaluated while controlling for important covariates and confounders in the trial. Finally, standards for reporting are defined and reporting considerations are discussed. The use of the defined protocol is shown through presentation of the results of a recent trial of a test product against a negative control. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
Gene set analysis approaches for RNA-seq data: performance evaluation and application guideline
Rahmatallah, Yasir; Emmert-Streib, Frank
2016-01-01
Transcriptome sequencing (RNA-seq) is gradually replacing microarrays for high-throughput studies of gene expression. The main challenge of analyzing microarray data is not in finding differentially expressed genes, but in gaining insights into the biological processes underlying phenotypic differences. To interpret experimental results from microarrays, gene set analysis (GSA) has become the method of choice, in particular because it incorporates pre-existing biological knowledge (in the form of functionally related gene sets) into the analysis. Here we provide a brief review of several statistically different GSA approaches (competitive and self-contained) that can be adapted from microarray practice as well as those specifically designed for RNA-seq. We evaluate their performance (in terms of Type I error rate, power, robustness to sample size and heterogeneity, as well as sensitivity to different types of selection biases) on simulated and real RNA-seq data. Not surprisingly, the performance of the various GSA approaches depends only on the statistical hypothesis they test and does not depend on whether the test was developed for microarray or RNA-seq data. Interestingly, we found that competitive methods have lower power as well as lower robustness to sample heterogeneity than self-contained methods, leading to poor reproducibility of results. We also found that the power of unsupervised competitive methods depends on the balance between up- and down-regulated genes in the tested gene sets. These properties of competitive methods have been overlooked before. Our evaluation provides a concise guideline for selecting the GSA approaches that perform best under particular experimental settings in the context of RNA-seq. PMID:26342128
A Primer on Bayesian Analysis for Experimental Psychopathologists
Krypotos, Angelos-Miltiadis; Blanken, Tessa F.; Arnaudova, Inna; Matzke, Dora; Beckers, Tom
2016-01-01
The principal goals of experimental psychopathology (EPP) research are to offer insights into the pathogenic mechanisms of mental disorders and to provide a stable ground for the development of clinical interventions. The main message of the present article is that those goals are better served by the adoption of Bayesian statistics than by the continued use of null-hypothesis significance testing (NHST). In the first part of the article we list the main disadvantages of NHST and explain why those disadvantages limit the conclusions that can be drawn from EPP research. Next, we highlight the advantages of Bayesian statistics. To illustrate, we then pit NHST and Bayesian analysis against each other using an experimental data set from our lab. Finally, we discuss some challenges when adopting Bayesian statistics. We hope that the present article will encourage experimental psychopathologists to embrace Bayesian statistics, which could strengthen the conclusions drawn from EPP research. PMID:28748068
Palmisano, Aldo N.; Elder, N.E.
2001-01-01
We examined, under standardized conditions, seawater survival of chinook salmon Oncorhynchus tshawytscha at the smolt stage to evaluate the experimental hatchery practices applied to their rearing. The experimental rearing practices included rearing fish at different densities; attempting to control bacterial kidney disease with broodstock segregation, erythromycin injection, and an experimental diet; rearing fish on different water sources; and freeze branding the fish. After application of experimental rearing practices in hatcheries, smolts were transported to a rearing facility for about 2-3 months of seawater rearing. Of 16 experiments, 4 yielded statistically significant differences in seawater survival. In general we found that high variability among replicates, plus the low numbers of replicates available, resulted in low statistical power. We recommend including four or five replicates and using α = 0.10 in 1-tailed tests of hatchery experiments to try to increase the statistical power to 0.80.
ERIC Educational Resources Information Center
Mascaró, Maite; Sacristán, Ana Isabel; Rufino, Marta M.
2016-01-01
For the past 4 years, we have been involved in a project that aims to enhance the teaching and learning of experimental analysis and statistics, of environmental and biological sciences students, through computational programming activities (using R code). In this project, through an iterative design, we have developed sequences of R-code-based…
An adaptive multi-feature segmentation model for infrared image
NASA Astrophysics Data System (ADS)
Zhang, Tingting; Han, Jin; Zhang, Yi; Bai, Lianfa
2016-04-01
Active contour models (ACMs) have been extensively applied to image segmentation; however, conventional region-based active contour models utilize only global or local single-feature information to minimize the energy functional that drives the contour evolution. Considering the limitations of the original ACMs, an adaptive multi-feature segmentation model is proposed to handle infrared images with blurred boundaries and low contrast. In the proposed model, several essential local statistical features are introduced to construct a multi-feature signed pressure function (MFSPF). In addition, we use an adaptive weight coefficient to modify the level set formulation, which is formed by integrating the MFSPF with local statistical features and a signed pressure function with global information. Experimental results demonstrate that the proposed method makes up for the inadequacy of the original methods and achieves desirable results in segmenting infrared images.
Torkamani, Fatemeh; Aghayousefi, Alireza; Alipour, Ahmad; Nami, Mohammad
Based on existing psychoneuroimmunological insights, the present study aimed at investigating possible effects of a single-session group mantra-meditation on salivary immunoglobulin A (s-IgA) and affective states. A controlled pretest-posttest study enrolled 30 healthy women (mean age 44 ± 3 years) through a multi-stage random sampling method from yoga clubs in Shiraz (Feb-Dec, 2016). Subjects were randomly assigned to experimental (n = 15) and control (n = 15) groups. Participants in both the groups attended a structured introductory lecture about mantra-meditation after which those in the experimental group meditated for 20min. Saliva samples were collected after the intervention, and the participants' affective states were examined by a qualified clinical psychologist blinded to the intervention using the positive and negative affect schedule questionnaire at sequential time-points, i.e., baseline, post-meditation, and one hour later. Similar assessments were done for the control group subjects. The enzyme-linked immunosorbent assay was used to test saliva samples for the IgA titer. The s-IgA and the positive and negative affect schedule (PANAS) test results were statistically evaluated using an analysis of variance. The mean s-IgA titer in the experimental group at 'post-meditation' and '1-hour later' time-points were found to be statistically different from those of the control group (P < .05). In addition, results indicated a significant change in affect among experimental group subjects as compared to controls (P < .05). Our findings suggest that "group mantra-meditation" training even for a single session may positively influence some immunological components and improve affective states. As a simple and low-cost psychoneurobehavioral intervention, this method may offer mental-health benefits at nursing homes as well as group-therapies. Copyright © 2018. Published by Elsevier Inc.
Ebrahiminejad, Shima; Poursharifi, Hamid; Bakhshiour Roodsari, Abbas; Zeinodini, Zahra; Noorbakhsh, Simasadat
2016-01-01
Background Social anxiety is one of the most common psychological disorders that exists among children and adolescents, and it has profound effects on their psychological states and academic achievements. Objectives The aim of this study was to determine the effectiveness of mindfulness-based cognitive therapy (MBCT) on diminishing social anxiety disorder symptoms and improving the self-esteem of female adolescents suffering from social anxiety. Patients and Methods Semi-experimental research was conducted on 30 female students diagnosed with social anxiety. From the population of female students who were studying in Tehran’s high schools in the academic year of 2013 - 2014, 30 students fulfilling the DSM-5 criteria were selected using the convenience sampling method and were randomly assigned to control and experimental groups. The experimental group received eight sessions of MBCT treatment. The control group received no treatment. All participants completed the social phobia inventory (SPIN) and Rosenberg self-esteem scale (RSES) twice as pre- and post-treatment tests. Results The results from the experimental group indicated a statistically reliable difference between the mean scores from SPIN (t (11) = 5.246, P = 0.000) and RSES (t (11) = -2.326, P = 0.040) pre-treatment and post-treatment. On the other hand, the results of the control group failed to reveal a statistically reliable difference between the mean scores from SPIN (t (12) = 1.089, P = 0.297) and RSES pre-treatment and post-treatment (t (12) = 1.089, P = 0.000). Conclusions The results indicate that MBCT is effective on both the improvement of self-esteem and the decrease of social anxiety. The results are in accordance with prior studies performed on adolescents. PMID:28191335
Samadi, Parisa; Alipour, Zahra; Lamyian, Minoor
2018-01-01
Background: Labor pain is the most severe pain women experience, and it can lead to a loss of emotional control that plays a key role in creating a traumatic delivery experience and psychological disorders. The goal of this study was to evaluate the effect of acupressure on anxiety level and on sedative and analgesic consumption in women during labor. Materials and Methods: This study was a randomized, single-blind clinical trial performed at Maryam Hospital in Tehran, Iran. One hundred and thirty-one pregnant women in the labor ward were selected by convenience sampling. Subjects were randomly allocated to three groups, namely an experimental group (pressure group), control group 1 (touch group) and control group 2 (routine care group). The study data were gathered using a demographic information form and assessed with the Faces Anxiety Scale (FAS). For participants in the experimental group, pressure was applied to the Spleen 6 acupoint for 30 min, and for those in control group 1, only light touch was applied to the Spleen 6 acupoint for 30 min. The collected data were analyzed using the Statistical Package for the Social Sciences 16 and descriptive statistics. Results: There was a significant difference between the three groups in mean anxiety 30 min after starting the intervention and 30 min after termination of the intervention; the anxiety of the experimental group was significantly decreased (p = 0.03). Sedative and analgesic consumption was significantly lower in the experimental group compared to the other groups (p = 0.006). Conclusions: This study showed that compression of the Spleen 6 acupoint was an effective complementary method to decrease maternal anxiety and analgesic consumption, especially pethidine. PMID:29628954
NASA Astrophysics Data System (ADS)
Sadi, Maryam
2018-01-01
In this study, a group method of data handling (GMDH) model has been successfully developed to predict the heat capacity of ionic liquid based nanofluids by considering the reduced temperature, acentric factor and molecular weight of the ionic liquids, and the nanoparticle concentration as input parameters. In order to accomplish the modeling, 528 experimental data points extracted from the literature were divided into training and testing subsets. The training set was used to estimate the model coefficients and the testing set was applied for model validation. The ability and accuracy of the developed model were evaluated by comparing the model predictions with experimental values using different statistical parameters such as the coefficient of determination, mean square error and mean absolute percentage error. The mean absolute percentage errors of the developed model for the training and testing sets are 1.38% and 1.66%, respectively, which indicates excellent agreement between model predictions and experimental data. Also, the results estimated by the developed GMDH model exhibit a higher accuracy when compared to the available theoretical correlations.
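The statistical parameters used for validation (coefficient of determination, mean square error, mean absolute percentage error) are standard; a minimal sketch of their computation is given below, with made-up heat-capacity values in place of the 528 literature data points.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Coefficient of determination, mean square error and mean absolute percentage error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return {
        "R2": 1.0 - ss_res / ss_tot,
        "MSE": np.mean((y_true - y_pred) ** 2),
        "MAPE %": 100.0 * np.mean(np.abs((y_true - y_pred) / y_true)),
    }

# Illustrative heat-capacity values (J/(g K)) vs. hypothetical model predictions
cp_exp = np.array([1.52, 1.61, 1.70, 1.78, 1.85, 1.93])
cp_pred = np.array([1.50, 1.63, 1.69, 1.80, 1.83, 1.95])
print(regression_metrics(cp_exp, cp_pred))
```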
Annotating novel genes by integrating synthetic lethals and genomic information
Schöner, Daniel; Kalisch, Markus; Leisner, Christian; Meier, Lukas; Sohrmann, Marc; Faty, Mahamadou; Barral, Yves; Peter, Matthias; Gruissem, Wilhelm; Bühlmann, Peter
2008-01-01
Background Large scale screening for synthetic lethality serves as a common tool in yeast genetics to systematically search for genes that play a role in specific biological processes. Often the amounts of data resulting from a single large scale screen far exceed the capacities of experimental characterization of every identified target. Thus, there is need for computational tools that select promising candidate genes in order to reduce the number of follow-up experiments to a manageable size. Results We analyze synthetic lethality data for arp1 and jnm1, two spindle migration genes, in order to identify novel members in this process. To this end, we use an unsupervised statistical method that integrates additional information from biological data sources, such as gene expression, phenotypic profiling, RNA degradation and sequence similarity. Different from existing methods that require large amounts of synthetic lethal data, our method merely relies on synthetic lethality information from two single screens. Using a Multivariate Gaussian Mixture Model, we determine the best subset of features that assign the target genes to two groups. The approach identifies a small group of genes as candidates involved in spindle migration. Experimental testing confirms the majority of our candidates and we present she1 (YBL031W) as a novel gene involved in spindle migration. We applied the statistical methodology also to TOR2 signaling as another example. Conclusion We demonstrate the general use of Multivariate Gaussian Mixture Modeling for selecting candidate genes for experimental characterization from synthetic lethality data sets. For the given example, integration of different data sources contributes to the identification of genetic interaction partners of arp1 and jnm1 that play a role in the same biological process. PMID:18194531
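A minimal sketch of fitting a two-component multivariate Gaussian mixture to candidate-gene features, in the spirit of the approach described above, is shown below using scikit-learn; the feature columns and the simulated values are illustrative assumptions, not the study's data sources.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical feature matrix for the hits of a synthetic-lethality screen:
# columns could be expression correlation, phenotype-profile similarity, sequence similarity, ...
rng = np.random.default_rng(5)
candidates = np.vstack([
    rng.normal(loc=[0.6, 0.5, 0.4], scale=0.15, size=(25, 3)),   # genes behaving like pathway members
    rng.normal(loc=[0.1, 0.1, 0.1], scale=0.15, size=(75, 3)),   # background hits
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(candidates)
labels = gmm.predict(candidates)
# Call the component with the higher mean feature values the "candidate" group
candidate_group = np.argmax(gmm.means_.mean(axis=1))
print("number of prioritized candidates:", np.sum(labels == candidate_group))
```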
Ginovart, Marta; Carbó, Rosa; Blanco, Mónica; Portell, Xavier
2017-01-01
Nowadays, control of the growth of Saccharomyces to obtain biomass or cell wall components is crucial for specific industrial applications. The general aim of this contribution is to use experimental data obtained from yeast cells and from yeast cultures to integrate the two levels of information, individual and population, in order to progress in the control of yeast biotechnological processes through the overall analysis of this set of experimental data, and to assist in the improvement of an individual-based model, namely INDISIM-Saccha. Populations of S. cerevisiae growing in liquid batch culture, in aerobic and microaerophilic conditions, were studied. A set of digital images was taken during the population growth, and a protocol for the treatment and analysis of the images obtained was established. The piecewise linear model of Buchanan was fitted to the temporal evolution of the yeast populations to determine the kinetic parameters and the changes of growth phase. In parallel, for all the yeast cells analyzed, values of direct morphological parameters, such as area, perimeter, major diameter and minor diameter, and derived ones, such as circularity and elongation, were obtained. Graphical and numerical methods from descriptive statistics were applied to these data to characterize the growth phases and the budding state of the yeast cells in both experimental conditions, and inferential statistical methods were used to compare the diverse groups of data obtained. Oxidative metabolism of yeast in a medium with oxygen available and a low initial sugar concentration can be taken into account in order to obtain a greater number of cells or larger cells. Morphological parameters were analyzed statistically to identify which were the most useful for discriminating the different states, according to budding and/or growth phase, in aerobic and microaerophilic conditions. The use of the experimental data for subsequent modeling work is then discussed and compared to simulation results generated with INDISIM-Saccha, which allowed us to advance the development of this yeast model and illustrated the utility of data at different levels of observation and the needs and logic behind the development of a microbial individual-based model.
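The Buchanan piecewise linear growth model mentioned above can be fitted to log-transformed counts as sketched below; the parameterization (n0, nmax, lag, mu), the synthetic counts and the use of scipy's curve_fit are illustrative assumptions rather than the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def buchanan(t, n0, nmax, lag, mu):
    """Buchanan three-phase (piecewise linear) growth model in log10 counts."""
    t_stat = lag + (nmax - n0) / mu          # time at which the stationary phase starts
    return np.piecewise(
        t,
        [t <= lag, (t > lag) & (t < t_stat), t >= t_stat],
        [lambda t: n0, lambda t: n0 + mu * (t - lag), lambda t: nmax],
    )

# Illustrative log10(cells/mL) counts from a batch culture (not the study's data)
t_obs = np.array([0, 2, 4, 6, 8, 10, 12, 14, 16, 20, 24], dtype=float)
n_obs = np.array([6.0, 6.0, 6.1, 6.6, 7.2, 7.7, 8.2, 8.5, 8.6, 8.6, 8.6])

params, _ = curve_fit(buchanan, t_obs, n_obs, p0=[6.0, 8.6, 3.0, 0.3])
n0, nmax, lag, mu = params
print(f"lag = {lag:.1f} h, mu = {mu:.2f} log10 units/h, Nmax = {nmax:.2f}")
```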
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing four different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as pre-processing to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After PCA analysis of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar spines, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
NASA Astrophysics Data System (ADS)
Taştan, Özgecan; Yalçınkaya, Eylem; Boz, Yezdan
2008-10-01
The aim of this study is to compare the effectiveness of conceptual change text (CCT) instruction with that of traditional instruction in the context of energy in chemical reactions. The subjects of the study were 60 tenth-grade students at a high school, who were in two different classes and taught by the same teacher. One of the classes was randomly selected as the experimental group, in which CCT instruction was applied, and the other as the control group, in which the traditional teaching method was used. The data were obtained through the use of the Energy Concept Test (ECT), the Attitude Scale towards Chemistry (ASC) and the Science Process Skill Test (SPST). In order to find out the effect of the conceptual change text on students' learning of the energy concept, independent sample t-tests, ANCOVA (analysis of covariance) and ANOVA (analysis of variance) were used. Results revealed that there was a statistically significant mean difference between the experimental and control groups in terms of students' ECT total mean scores; however, there was no statistically significant difference between the experimental and control groups in terms of students' attitude towards chemistry. These findings suggest that conceptual change text instruction enhances understanding and achievement.
Code of Federal Regulations, 2014 CFR
2014-07-01
... comparative data to other methods and SRM materials are presented in Reference 23 of Section 16.0. 13... Plasma, Anal. Chem. 52:1965, 1980. 20. Deming, S.N. and S.L. Morgan. Experimental Design for Quality and... Statistical Designs, 9941 Rowlett, Suite 6, Houston, TX 77075, 1989. 21. Winefordner, J.D., Trace Analysis...
A Bootstrap Algorithm for Mixture Models and Interval Data in Inter-Comparisons
2001-07-01
parametric bootstrap. The present algorithm will be applied to a thermometric inter-comparison, where data cannot be assumed to be normally distributed. 2 Data...experimental methods, used in each laboratory) often imply that the statistical assumptions are not satisfied, as for example in several thermometric ...triangular). Indeed, in thermometric experiments these three probabilistic models can represent several common stochastic variabilities for
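Although the record above is only a fragment, the technique it names, a parametric bootstrap for inter-comparison data whose errors are not normally distributed (e.g. uniform or triangular), can be sketched as follows; the laboratory values, weights and distribution assignments are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical inter-comparison: each laboratory reports a temperature value and a
# half-width, with a non-Gaussian (uniform or triangular) error model.
labs = [
    {"value": 273.1602, "half_width": 0.0030, "model": "uniform"},
    {"value": 273.1598, "half_width": 0.0020, "model": "triangular"},
    {"value": 273.1610, "half_width": 0.0045, "model": "uniform"},
]

def draw(lab, size):
    """Sample from the laboratory's assumed error distribution."""
    if lab["model"] == "uniform":
        return rng.uniform(lab["value"] - lab["half_width"], lab["value"] + lab["half_width"], size)
    return rng.triangular(lab["value"] - lab["half_width"], lab["value"],
                          lab["value"] + lab["half_width"], size)

# Parametric bootstrap of the reference value (weighted mean) and its uncertainty
B = 10000
weights = np.array([1.0 / lab["half_width"] ** 2 for lab in labs])
samples = np.column_stack([draw(lab, B) for lab in labs])
ref = samples @ weights / weights.sum()
lo, hi = np.percentile(ref, [2.5, 97.5])
print(f"bootstrap reference value: {ref.mean():.4f} K, 95% interval: [{lo:.4f}, {hi:.4f}]")
```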
Barreiro, Andrea K; Gautam, Shree Hari; Shew, Woodrow L; Ly, Cheng
2017-10-01
Determining how synaptic coupling within and between regions is modulated during sensory processing is an important topic in neuroscience. Electrophysiological recordings provide detailed information about neural spiking but have traditionally been confined to a particular region or layer of cortex. Here we develop new theoretical methods to study interactions between and within two brain regions, based on experimental measurements of spiking activity simultaneously recorded from the two regions. By systematically comparing experimentally-obtained spiking statistics to (efficiently computed) model spike rate statistics, we identify regions in model parameter space that are consistent with the experimental data. We apply our new technique to dual micro-electrode array in vivo recordings from two distinct regions: olfactory bulb (OB) and anterior piriform cortex (PC). Our analysis predicts that: i) inhibition within the afferent region (OB) has to be weaker than the inhibition within PC, ii) excitation from PC to OB is generally stronger than excitation from OB to PC, iii) excitation from PC to OB and inhibition within PC have to both be relatively strong compared to presynaptic inputs from OB. These predictions are validated in a spiking neural network model of the OB-PC pathway that satisfies the many constraints from our experimental data. We find when the derived relationships are violated, the spiking statistics no longer satisfy the constraints from the data. In principle this modeling framework can be adapted to other systems and be used to investigate relationships between other neural attributes besides network connection strengths. Thus, this work can serve as a guide to further investigations into the relationships of various neural attributes within and across different regions during sensory processing.
Confounding in statistical mediation analysis: What it is and how to address it.
Valente, Matthew J; Pelham, William E; Smyth, Heather; MacKinnon, David P
2017-11-01
Psychology researchers are often interested in mechanisms underlying how randomized interventions affect outcomes such as substance use and mental health. Mediation analysis is a common statistical method for investigating psychological mechanisms that has benefited from exciting new methodological improvements over the last 2 decades. One of the most important new developments is methodology for estimating causal mediated effects using the potential outcomes framework for causal inference. Potential outcomes-based methods developed in epidemiology and statistics have important implications for understanding psychological mechanisms. We aim to provide a concise introduction to and illustration of these new methods and emphasize the importance of confounder adjustment. First, we review the traditional regression approach for estimating mediated effects. Second, we describe the potential outcomes framework. Third, we define what a confounder is and how the presence of a confounder can provide misleading evidence regarding mechanisms of interventions. Fourth, we describe experimental designs that can help rule out confounder bias. Fifth, we describe new statistical approaches to adjust for measured confounders of the mediator-outcome relation and sensitivity analyses to probe effects of unmeasured confounders on the mediated effect. All approaches are illustrated with application to a real counseling intervention dataset. Counseling psychologists interested in understanding the causal mechanisms of their interventions can benefit from incorporating the most up-to-date techniques into their mediation analyses. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
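The traditional regression (product-of-coefficients) approach to mediation reviewed in the article can be sketched in a few lines; the simulated treatment, mediator and outcome below are illustrative, and the potential-outcomes estimators and sensitivity analyses the authors emphasize are not reproduced here.

```python
import numpy as np

def mediated_effect(x, m, y):
    """Traditional product-of-coefficients mediation: a*b from two OLS regressions."""
    X1 = np.column_stack([np.ones_like(x), x])            # M ~ 1 + X
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]
    X2 = np.column_stack([np.ones_like(x), x, m])         # Y ~ 1 + X + M
    coefs = np.linalg.lstsq(X2, y, rcond=None)[0]
    b, direct = coefs[2], coefs[1]
    return a * b, direct

# Simulated randomized intervention (X), mediator (M) and outcome (Y); numbers are illustrative
rng = np.random.default_rng(7)
n = 300
x = rng.integers(0, 2, n).astype(float)     # randomized treatment assignment
m = 0.5 * x + rng.normal(size=n)            # mediator partially caused by treatment
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome caused by mediator and treatment
ab, direct = mediated_effect(x, m, y)
print(f"mediated (indirect) effect a*b = {ab:.3f}, direct effect c' = {direct:.3f}")
```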
Unicomb, Rachael; Colyvas, Kim; Harrison, Elisabeth; Hewat, Sally
2015-06-01
Case-study methodology is often used to study change in the field of speech-language pathology, but it can be criticized for not being statistically robust. Yet with the heterogeneous nature of many communication disorders, case studies allow clinicians and researchers to closely observe and report on change. Such information is valuable and can further inform large-scale experimental designs. In this research note, a statistical analysis for case-study data is outlined that employs a modification to the Reliable Change Index (Jacobson & Truax, 1991). The relationship between reliable change and clinical significance is discussed. Example data are used to guide the reader through the use and application of this analysis. A method of analysis is detailed that is suitable for assessing change in measures with binary categorical outcomes. The analysis is illustrated using data from one individual, measured before and after treatment for stuttering. The application of this approach to assess change in categorical, binary data has potential application in speech-language pathology. It enables clinicians and researchers to analyze results from case studies for their statistical and clinical significance. This new method addresses a gap in the research design literature, that is, the lack of analysis methods for noncontinuous data (such as counts, rates, proportions of events) that may be used in case-study designs.
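For reference, the unmodified Jacobson and Truax (1991) Reliable Change Index for a continuous measure is computed as below; the paper's modification for binary categorical outcomes is not reproduced, and the pre/post scores, standard deviation and reliability are illustrative values.

```python
import numpy as np

def reliable_change_index(pre, post, sd_pre, reliability):
    """Jacobson & Truax (1991) Reliable Change Index for a single case."""
    se_measurement = sd_pre * np.sqrt(1.0 - reliability)
    se_diff = np.sqrt(2.0) * se_measurement
    return (post - pre) / se_diff

# Illustrative continuous-measure example (the research note itself extends RCI to binary outcomes)
rci = reliable_change_index(pre=14.0, post=6.0, sd_pre=4.5, reliability=0.85)
print(f"RCI = {rci:.2f}  ->  reliable change if |RCI| > 1.96")
```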
Shang, Xiaoyan; Carlson, Michelle C; Tang, Xiaoying
2018-04-30
Total intracranial volume (TIV) is often used as a measure of brain size to correct for individual variability in magnetic resonance imaging (MRI) based morphometric studies. An adjustment of TIV can greatly increase the statistical power of brain morphometry methods. As such, an accurate and precise TIV estimation is of great importance in MRI studies. In this paper, we compared three automated TIV estimation methods (multi-atlas likelihood fusion (MALF), Statistical Parametric Mapping 8 (SPM8) and FreeSurfer (FS)) using longitudinal T1-weighted MR images in a cohort of 70 older participants at elevated sociodemographic risk for Alzheimer's disease. Statistical group comparisons in terms of four different metrics were performed. Furthermore, sex, education level, and intervention status were investigated separately for their impacts on the TIV estimation performance of each method. According to our experimental results, MALF was the least susceptible to atrophy, while SPM8 and FS suffered a loss in precision. In group-wise analysis, MALF was the least sensitive method to group variation, whereas SPM8 was particularly sensitive to sex and FS was unstable with respect to education level. In terms of effectiveness, both MALF and SPM8 delivered a user-friendly performance, while FS was relatively computationally intensive. Copyright © 2018 Elsevier B.V. All rights reserved.
Yi, Zhou; Manil-Ségalen, Marion; Sago, Laila; Glatigny, Annie; Redeker, Virginie; Legouis, Renaud; Mucchielli-Giorgi, Marie-Hélène
2016-05-06
Affinity purifications followed by mass spectrometric analysis are used to identify protein-protein interactions. Because quantitative proteomic data are noisy, it is necessary to develop statistical methods to eliminate false-positives and identify true partners. We present here a novel approach for filtering false interactors, named "SAFER" for mass Spectrometry data Analysis by Filtering of Experimental Replicates, which is based on the reproducibility of the replicates and the fold-change of the protein intensities between bait and control. To identify regulators or targets of autophagy, we characterized the interactors of LGG-1, a ubiquitin-like protein involved in autophagosome formation in C. elegans. LGG-1 partners were purified by affinity, analyzed by nanoLC-MS/MS mass spectrometry, and quantified by a label-free proteomic approach based on the mass spectrometric signal intensity of peptide precursor ions. Because the selection of confident interactions depends on the method used for statistical analysis, we compared SAFER with several statistical tests and different scoring algorithms on this set of data. We show that SAFER recovers high-confidence interactors that have been ignored by the other methods and identified new candidates involved in the autophagy process. We further validated our method on a public data set and conclude that SAFER notably improves the identification of protein interactors.
Karahan, Nazım; Arslan, İlyas; Orak, Müfit; Midi, Ahmet; Yücel, İstemi
2017-01-01
Aim: The histological effects of intra-articular polydeoxyribonucleic acid and hyaluronic acid in experimentally induced osteoarthritis of the knee joints of rats were investigated. Methods: Thirty rats were divided into three groups, i.e. a polydeoxyribonucleic acid group, a hyaluronic acid group and a saline group. Osteoarthritis of the knee joints of the rats was induced by ACL transection. The polydeoxyribonucleic acid group was injected with 100 µg (0.05 cc) polydeoxyribonucleic acid. The hyaluronic acid group was injected with 100 µg (0.05 cc) hyaluronic acid, and the saline group was injected with 50 µl (0.05 cc) of 0.9% sodium chloride solution. All of the rats were sacrificed on day 29, and the right knee joints were prepared and evaluated histologically by the Mankin classification. Findings: The differences in total Mankin scores between the three groups were statistically significant (P < 0.01). The differences in total Mankin scores between the polydeoxyribonucleic acid group and the hyaluronic acid group were statistically significant (P < 0.01). The differences in total Mankin scores between the hyaluronic acid group and the saline group were statistically significant (P < 0.01). Tidemark continuity in all the specimens of the polydeoxyribonucleic acid group was noteworthy. Conclusion: The present study shows that more chondroprotective effect and less degeneration were observed with intra-articularly delivered polydeoxyribonucleic acid compared to hyaluronic acid and saline solution in experimentally induced osteoarthritis of the knee joints of rats.
Selmke, Markus; Khadka, Utsab; Bregulla, Andreas P; Cichos, Frank; Yang, Haw
2018-04-18
Photon nudging is a new experimental method which enables the force-free manipulation and localization of individual self-propelled artificial micro-swimmers in fluidic environments. It uses a weak laser to stochastically and adaptively turn the swimmer's propulsion on and off when the swimmer, through rotational diffusion, points towards or away from its target, respectively. This contribution presents a theoretical framework for the statistics of both 2D and 3D controls. The main results are: the on- and off-time distributions for the controlling laser, the arrival time statistics for the swimmer to reach a remote target, and how the experimentally accessible control parameters influence the control, e.g., the optimal acceptance angle for directed transport. The results are general in that they are independent of the propulsion or the actuation mechanisms. They provide a concrete physical picture of how a single artificial micro-swimmer can be navigated under thermal fluctuations; these insights could also be useful for understanding biological micro-swimmers.
NASA Astrophysics Data System (ADS)
Cadeville, M. C.; Pierron-Bohnes, V.; Bouzidi, L.; Sanchez, J. M.
1993-01-01
Local and average electronic and magnetic properties of transition metal alloys are strongly correlated with the distribution of atoms on the lattice sites. The ability of some systems to form long range ordered structures at low temperature allows their properties to be discussed in terms of well identified occupation operators such as those related to long range order (LRO) parameters. We show that, using theoretical determinations of these LRO parameters through statistical models such as the cluster variation method (CVM) developed to simulate the experimental phase diagrams, we are able to reproduce many physical properties. In this paper we focus on two points: (i) a comparison between CVM results and an experimental determination of the LRO parameter by NMR at 59Co in a CoPt3 compound, and (ii) the modelling of the resistivity of ferromagnetic and paramagnetic intermetallic compounds belonging to the Co-Pt, Ni-Pt and Fe-Al systems. All experiments were performed on samples in identified thermodynamic states, implying that kinetic effects are thoroughly taken into account.
Integrative pipeline for profiling DNA copy number and inferring tumor phylogeny.
Urrutia, Eugene; Chen, Hao; Zhou, Zilu; Zhang, Nancy R; Jiang, Yuchao
2018-06-15
Copy number variation is an important and abundant source of variation in the human genome, which has been associated with a number of diseases, especially cancer. Massively parallel next-generation sequencing allows copy number profiling with fine resolution. Such efforts, however, have met with mixed successes, with setbacks arising partly from the lack of reliable analytical methods to meet the diverse and unique challenges arising from the myriad experimental designs and study goals in genetic studies. In cancer genomics, detection of somatic copy number changes and profiling of allele-specific copy number (ASCN) are complicated by experimental biases and artifacts as well as normal cell contamination and cancer subclone admixture. Furthermore, careful statistical modeling is warranted to reconstruct tumor phylogeny by both somatic ASCN changes and single nucleotide variants. Here we describe a flexible computational pipeline, MARATHON, which integrates multiple related statistical software for copy number profiling and downstream analyses in disease genetic studies. MARATHON is publicly available at https://github.com/yuchaojiang/MARATHON. Supplementary data are available at Bioinformatics online.
KAVEH, MOHAMMAD HOSSIEN; MORADI, LEILA; GHAHREMANI, LEILA; TABATABAEE, HAMID REZA
2014-01-01
Introduction: One of the main determinants of adolescents' life satisfaction is parenting skills. Due to the lack of educational trials in this field, this research was done to evaluate the effect of a parenting education program on girls' life satisfaction in governmental guidance schools of Shiraz. Methods: This study is an educational randomized controlled trial. At first, 152 female students in the 2nd grade of governmental guidance schools and 304 parents (152 mothers and 152 fathers) were selected by the multistage random cluster sampling method. Then, they were categorized into experimental and control groups. Before and after the intervention, data were collected from the two groups using the multidimensional students' life satisfaction scale, with stability (Cronbach's alpha = 0.89) assessed by test-retest and correlation coefficient (r = 0.70). Educational intervention for parents was performed in the experimental group through presentations with question and answer, discussion in small groups and distribution of educational booklets in 5 volumes. Finally, the data were analyzed using SPSS 14 and the Mann-Whitney test, Chi-square test, Fisher's exact test and Wilcoxon test. Results: Before the intervention, the experimental and control groups did not show a statistically significant difference in the demographic variables. The total life satisfaction scores and their subscales in the experimental and control groups, before and six weeks after the educational intervention, showed a statistically significant difference (p<0.001). The differences in scores (pre-test/post-test) in total life satisfaction between the experimental and control groups were statistically significant (p<0.001). Conclusion: Given the low scores of the students in the pre-test, especially in the control group which did not undergo any educational program, holding a scheduled educational intervention is necessary. This study not only supports the effectiveness of educational intervention but also recommends further educational research to develop knowledge regarding patterns of parenting education. PMID:25512913
Shen, Xiaomeng; Hu, Qiang; Li, Jun; Wang, Jianmin; Qu, Jun
2015-10-02
Comprehensive and accurate evaluation of data quality and false-positive biomarker discovery is critical to direct the method development/optimization for quantitative proteomics, which nonetheless remains challenging largely due to the high complexity and unique features of proteomic data. Here we describe an experimental null (EN) method to address this need. Because the method experimentally measures the null distribution (either technical or biological replicates) using the same proteomic samples, the same procedures and the same batch as the case-vs-contol experiment, it correctly reflects the collective effects of technical variability (e.g., variation/bias in sample preparation, LC-MS analysis, and data processing) and project-specific features (e.g., characteristics of the proteome and biological variation) on the performances of quantitative analysis. To show a proof of concept, we employed the EN method to assess the quantitative accuracy and precision and the ability to quantify subtle ratio changes between groups using different experimental and data-processing approaches and in various cellular and tissue proteomes. It was found that choices of quantitative features, sample size, experimental design, data-processing strategies, and quality of chromatographic separation can profoundly affect quantitative precision and accuracy of label-free quantification. The EN method was also demonstrated as a practical tool to determine the optimal experimental parameters and rational ratio cutoff for reliable protein quantification in specific proteomic experiments, for example, to identify the necessary number of technical/biological replicates per group that affords sufficient power for discovery. Furthermore, we assessed the ability of EN method to estimate levels of false-positives in the discovery of altered proteins, using two concocted sample sets mimicking proteomic profiling using technical and biological replicates, respectively, where the true-positives/negatives are known and span a wide concentration range. It was observed that the EN method correctly reflects the null distribution in a proteomic system and accurately measures false altered proteins discovery rate (FADR). In summary, the EN method provides a straightforward, practical, and accurate alternative to statistics-based approaches for the development and evaluation of proteomic experiments and can be universally adapted to various types of quantitative techniques.
Healing X-ray scattering images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jiliang; Lhermitte, Julien; Tian, Ye
X-ray scattering images contain numerous gaps and defects arising from detector limitations and experimental configuration. Here, we present a method to heal X-ray scattering images, filling gaps in the data and removing defects in a physically meaningful manner. Unlike generic inpainting methods, this method is closely tuned to the expected structure of reciprocal-space data. In particular, we exploit statistical tests and symmetry analysis to identify the structure of an image; we then copy, average and interpolate measured data into gaps in a way that respects the identified structure and symmetry. Importantly, the underlying analysis methods provide useful characterization of structures present in the image, including the identification of diffuse versus sharp features, anisotropy and symmetry. The presented method leverages known characteristics of reciprocal space, enabling physically reasonable reconstruction even with large image gaps. The method will correspondingly fail for images that violate these underlying assumptions. The method assumes point symmetry and is thus applicable to small-angle X-ray scattering (SAXS) data, but only to a subset of wide-angle data. Our method succeeds in filling gaps and healing defects in experimental images, including extending data beyond the original detector borders.
Assessing the performance of sewer rehabilitation on the reduction of infiltration and inflow.
Staufer, P; Scheidegger, A; Rieckermann, J
2012-10-15
Inflow and Infiltration (I/I) into sewer systems is generally unwanted, because, among other things, it decreases the performance of wastewater treatment plants and increases combined sewage overflows. As sewer rehabilitation to reduce I/I is very expensive, water managers not only need methods to accurately measure I/I, but also they need sound approaches to assess the actual performance of implemented rehabilitation measures. However, such performance assessment is rarely performed. On the one hand, it is challenging to adequately take into account the variability of influential factors, such as hydro-meteorological conditions. On the other hand, it is currently not clear how experimental data can indeed support robust evidence for reduced I/I. In this paper, we therefore statistically assess the performance of rehabilitation measures to reduce I/I. This is possible by using observations in a suitable reference catchment as a control group and assessing the significance of the observed effect by regression analysis, which is well established in other disciplines. We successfully demonstrate the usefulness of the approach in a case study, where rehabilitation reduced groundwater infiltration by 23.9%. A reduction of stormwater inflow of 35.7%, however, was not statistically significant. Investigations into the experimental design of monitoring campaigns confirmed that the variability of the data as well as the number of observations collected before the rehabilitation impact the detection limit of the effect. This implies that it is difficult to improve the data quality after the rehabilitation has been implemented. Therefore, future practical applications should consider a careful experimental design. Further developments could employ more sophisticated monitoring methods, such as stable environmental isotopes, to directly observe the individual infiltration components. In addition, water managers should develop strategies to effectively communicate statistically not significant I/I reduction ratios to decision makers. Copyright © 2012 Elsevier Ltd. All rights reserved.
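A regression analysis with a reference catchment as control group, as described above, can be sketched as an interaction (difference-in-differences style) OLS model; the weekly infiltration volumes, effect size and variable names below are illustrative assumptions, not the case-study measurements.

```python
import numpy as np

# Illustrative weekly infiltration volumes (m3) for a rehabilitated and a control catchment,
# observed before and after the rehabilitation works.
rng = np.random.default_rng(8)
n_pre, n_post = 40, 40
period = np.r_[np.zeros(2 * n_pre), np.ones(2 * n_post)]                            # 0 = before, 1 = after
rehab = np.r_[np.ones(n_pre), np.zeros(n_pre), np.ones(n_post), np.zeros(n_post)]   # 1 = rehabilitated catchment
base = 100 + 10 * rng.normal(size=period.size)
infiltration = base - 24 * period * rehab + 5 * period                              # true rehab effect: -24 m3

# OLS with an interaction term: the rehabilitation effect is the coefficient of period*rehab
X = np.column_stack([np.ones_like(period), period, rehab, period * rehab])
beta, *_ = np.linalg.lstsq(X, infiltration, rcond=None)
print(f"estimated rehabilitation effect: {beta[3]:.1f} m3 per week "
      f"({100 * -beta[3] / (beta[0] + beta[2]):.1f}% reduction)")
```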
Pathway Towards Fluency: Using 'disaggregate instruction' to promote science literacy
NASA Astrophysics Data System (ADS)
Brown, Bryan A.; Ryoo, Kihyun; Rodriguez, Jamie
2010-07-01
This study examines the impact of Disaggregate Instruction on students' science learning. Disaggregate Instruction is the idea that science teaching and learning can be separated into conceptual and discursive components. Using randomly assigned experimental and control groups, 49 fifth-grade students received web-based science lessons on photosynthesis using our experimental approach. We supplemented quantitative statistical comparisons of students' performance on pre- and post-test questions (multiple choice and short answer) with a qualitative analysis of students' post-test interviews. The results revealed that students in the experimental group outscored their control group counterparts across all measures. In addition, students taught using the experimental method demonstrated an improved ability to write using scientific language as well as an improved ability to provide oral explanations using scientific language. This study has important implications for how science educators can prepare teachers to teach diverse student populations.
NASA Astrophysics Data System (ADS)
van Poppel, Bret; Owkes, Mark; Nelson, Thomas; Lee, Zachary; Sowell, Tyler; Benson, Michael; Vasquez Guzman, Pablo; Fahrig, Rebecca; Eaton, John; Kurman, Matthew; Kweon, Chol-Bum; Bravo, Luis
2014-11-01
In this work, we present high-fidelity Computational Fluid Dynamics (CFD) results of liquid fuel injection from a pressure-swirl atomizer and compare the simulations to experimental results obtained using both shadowgraphy and phase-averaged X-ray computed tomography (CT) scans. The CFD and experimental results focus on the dense near-nozzle region to identify the dominant mechanisms of breakup during primary atomization. Simulations are performed using the NGA code of Desjardins et al (JCP 227 (2008)) and employ the volume of fluid (VOF) method proposed by Owkes and Desjardins (JCP 270 (2013)), a second order accurate, un-split, conservative, three-dimensional VOF scheme providing second order density fluxes and capable of robust and accurate high density ratio simulations. Qualitative features and quantitative statistics are assessed and compared for the simulation and experimental results, including the onset of atomization, spray cone angle, and drop size and distribution.
What can we learn from noise? — Mesoscopic nonequilibrium statistical physics —
KOBAYASHI, Kensuke
2016-01-01
Mesoscopic systems — small electric circuits working in the quantum regime — offer us a unique experimental stage to explore quantum transport in a tunable and precise way. The purpose of this Review is to show how they can contribute to statistical physics. We introduce the significance of fluctuation, or equivalently noise, as noise measurement enables us to address the fundamental aspects of a physical system. The significance of the fluctuation theorem (FT) in statistical physics is noted. We explain what information can be deduced from the current noise measurement in mesoscopic systems. As an important application of the noise measurement to statistical physics, we describe our experimental work on the current and current noise in an electron interferometer, which is the first experimental test of FT in the quantum regime. Our attempt will shed new light on the research field of mesoscopic quantum statistical physics. PMID:27477456
Using simulation to interpret experimental data in terms of protein conformational ensembles.
Allison, Jane R
2017-04-01
In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averages or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling and comparison of the more diverse conformational ensembles thus generated have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference are promising to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.
Ladar imaging detection of salient map based on PWVD and Rényi entropy
NASA Astrophysics Data System (ADS)
Xu, Yuannan; Zhao, Yuan; Deng, Rong; Dong, Yanbing
2013-10-01
Spatial-frequency information of a given image can be extracted by associating the grey-level spatial data with one of the well-known spatial/spatial-frequency distributions. The Wigner-Ville distribution (WVD) has the useful property that images can be represented jointly in the spatial and spatial-frequency domains. For intensity and range images of ladar, the statistical properties of the Rényi entropy are studied through the pseudo Wigner-Ville distribution (PWVD) using a one- or two-dimensional window. We also analyze how the statistical properties of the Rényi entropy change in the ladar intensity and range images when man-made objects appear. On this basis, a novel method for generating a saliency map based on the PWVD and Rényi entropy is proposed. Target detection is then completed when the saliency map is segmented using a simple and convenient threshold method. For the ladar intensity and range images, experimental results show that the proposed method can effectively detect military vehicles against a complex terrain background with a low false-alarm rate.
NASA Astrophysics Data System (ADS)
Kang, DongYel; Wang, Alex; Volgger, Veronika; Chen, Zhongping; Wong, Brian J. F.
2015-07-01
Detection of an early stage of subglottic edema is vital for airway management and prevention of stenosis, a life-threatening condition in critically ill neonates. As an observer for the task of diagnosing edema in vivo, we investigated the spatiotemporal correlation (STC) of full-range optical coherence tomography (OCT) images acquired in the rabbit airway with experimentally simulated edema. Operating the STC observer on OCT images generates STC coefficients as test statistics for the statistical decision task. From these, receiver operating characteristic (ROC) curves for the diagnosis of airway edema with full-range OCT in vivo images were extracted, and the areas under the ROC curves were calculated. These statistically quantified results demonstrate the potential clinical feasibility of the STC method as a means to identify early airway edema.
Catto, James W F; Linkens, Derek A; Abbod, Maysam F; Chen, Minyou; Burton, Julian L; Feeley, Kenneth M; Hamdy, Freddie C
2003-09-15
New techniques for the prediction of tumor behavior are needed, because statistical analysis has a poor accuracy and is not applicable to the individual. Artificial intelligence (AI) may provide these suitable methods. Whereas artificial neural networks (ANN), the best-studied form of AI, have been used successfully, their hidden networks remain an obstacle to their acceptance. Neuro-fuzzy modeling (NFM), another AI method, has a transparent functional layer and is without many of the drawbacks of ANN. We have compared the predictive accuracies of NFM, ANN, and traditional statistical methods, for the behavior of bladder cancer. Experimental molecular biomarkers, including p53 and the mismatch repair proteins, and conventional clinicopathological data were studied in a cohort of 109 patients with bladder cancer. For all three of the methods, models were produced to predict the presence and timing of a tumor relapse. Both methods of AI predicted relapse with an accuracy ranging from 88% to 95%. This was superior to statistical methods (71-77%; P < 0.0006). NFM appeared better than ANN at predicting the timing of relapse (P = 0.073). The use of AI can accurately predict cancer behavior. NFM has a similar or superior predictive accuracy to ANN. However, unlike the impenetrable "black-box" of a neural network, the rules of NFM are transparent, enabling validation from clinical knowledge and the manipulation of input variables to allow exploratory predictions. This technique could be used widely in a variety of areas of medicine.
NASA Technical Reports Server (NTRS)
Ankenman, Bruce; Ermer, Donald; Clum, James A.
1994-01-01
Modules dealing with statistical experimental design (SED), process modeling and improvement, and response surface methods have been developed and tested in two laboratory courses. One course was a manufacturing processes course in Mechanical Engineering and the other course was a materials processing course in Materials Science and Engineering. Each module is used as an 'experiment' in the course with the intent that subsequent course experiments will use SED methods for analysis and interpretation of data. Evaluation of the modules' effectiveness has been done by both survey questionnaires and inclusion of the module methodology in course examination questions. Results of the evaluation have been very positive. Those evaluation results and details of the modules' content and implementation are presented. The modules represent an important component for updating laboratory instruction and for providing training in quality for improved engineering practice.
NASA Astrophysics Data System (ADS)
Powell, M. A.; Rawlinson, K. S.
A kinetic Stirling cycle engine, the Stirling Thermal Motors (STM) STM4-120, was tested at the Sandia National Laboratories Engine Test Facility (ETF) from March 1989-August 1992. Sandia is interested in determining this engine's potential for solar-thermal-electric applications. The last round of testing was conducted from July-August 1992 using Sandia-designed gas-fired heat pipe evaporators as the heat input system to the engine. The STM4-120 was performance mapped over a range of sodium vapor temperatures, cooling water temperatures, and cycle pressures. The resulting shaft power output levels ranged from 5-9 kW. The engine demonstrated high conversion efficiency (24-31%) even though the power output level was less than 40% of the rated output of 25 kW. The engine had been previously derated from 25 kW to 10 kW shaft power due to mechanical limitations that were identified by STM during parallel testing at their facility in Ann Arbor, MI. A statistical method was used to design the experiment, to choose the experimental points, and to generate correlation equations describing the engine performance given the operating parameters. The testing was truncated due to a failure of the heat pipe system caused by entrainment of liquid sodium in the condenser section of the heat pipes. Enough data was gathered to generate the correlations and to demonstrate the experimental technique. The correlation is accurate in the experimental space and is simple enough for use in hand calculations and spreadsheet-based system models. Use of this method can simplify the construction of accurate performance and economic models of systems in which the engine is a component. The purpose of this paper is to present the method used to design the experiments and to analyze the performance data.
PCA as a practical indicator of OPLS-DA model reliability.
Worley, Bradley; Powers, Robert
Principal Component Analysis (PCA) and Orthogonal Projections to Latent Structures Discriminant Analysis (OPLS-DA) are powerful statistical modeling tools that provide insights into separations between experimental groups based on high-dimensional spectral measurements from NMR, MS or other analytical instrumentation. However, when used without validation, these tools may lead investigators to statistically unreliable conclusions. This danger is especially real for Partial Least Squares (PLS) and OPLS, which aggressively force separations between experimental groups. As a result, OPLS-DA is often used as an alternative method when PCA fails to expose group separation, but this practice is highly dangerous. Without rigorous validation, OPLS-DA can easily yield statistically unreliable group separation. A Monte Carlo analysis of PCA group separations and OPLS-DA cross-validation metrics was performed on NMR datasets with statistically significant separations in scores-space. A linearly increasing amount of Gaussian noise was added to each data matrix followed by the construction and validation of PCA and OPLS-DA models. With increasing added noise, the PCA scores-space distance between groups rapidly decreased and the OPLS-DA cross-validation statistics simultaneously deteriorated. A decrease in correlation between the estimated loadings (added noise) and the true (original) loadings was also observed. While the validity of the OPLS-DA model diminished with increasing added noise, the group separation in scores-space remained basically unaffected. Supported by the results of Monte Carlo analyses of PCA group separations and OPLS-DA cross-validation metrics, we provide practical guidelines and cross-validatory recommendations for reliable inference from PCA and OPLS-DA models.
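The core of the Monte Carlo experiment described above can be illustrated with a short sketch: add increasing Gaussian noise to a two-group data matrix and track how the PCA scores-space distance between group centroids degrades. This is a minimal illustration with an assumed synthetic data set, not the authors' code or their OPLS-DA cross-validation machinery.

```python
# Minimal sketch (synthetic data, assumed shapes): PCA group separation under added noise.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_per_group, n_vars = 20, 200
group_a = rng.normal(0.0, 1.0, (n_per_group, n_vars))
group_b = rng.normal(0.5, 1.0, (n_per_group, n_vars))   # true mean shift between groups
X = np.vstack([group_a, group_b])
labels = np.array([0] * n_per_group + [1] * n_per_group)

for noise_sd in [0.0, 1.0, 2.0, 4.0]:
    X_noisy = X + rng.normal(0.0, noise_sd, X.shape)     # linearly increasing added noise
    scores = PCA(n_components=2).fit_transform(X_noisy)
    centroid_a = scores[labels == 0].mean(axis=0)
    centroid_b = scores[labels == 1].mean(axis=0)
    dist = np.linalg.norm(centroid_a - centroid_b)
    print(f"noise sd={noise_sd:.1f}  PCA centroid distance={dist:.2f}")
```

In this toy setting the centroid distance shrinks as noise grows, mirroring the paper's observation that PCA scores-space separation is a conservative indicator while an aggressively supervised OPLS-DA model can still appear to separate the groups.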
Statistical mechanics of complex neural systems and high dimensional data
NASA Astrophysics Data System (ADS)
Advani, Madhu; Lahiri, Subhaneil; Ganguli, Surya
2013-03-01
Recent experimental advances in neuroscience have opened new vistas into the immense complexity of neuronal networks. This proliferation of data challenges us on two parallel fronts. First, how can we form adequate theoretical frameworks for understanding how dynamical network processes cooperate across widely disparate spatiotemporal scales to solve important computational problems? Second, how can we extract meaningful models of neuronal systems from high dimensional datasets? To aid in these challenges, we give a pedagogical review of a collection of ideas and theoretical methods arising at the intersection of statistical physics, computer science and neurobiology. We introduce the interrelated replica and cavity methods, which originated in statistical physics as powerful ways to quantitatively analyze large highly heterogeneous systems of many interacting degrees of freedom. We also introduce the closely related notion of message passing in graphical models, which originated in computer science as a distributed algorithm capable of solving large inference and optimization problems involving many coupled variables. We then show how both the statistical physics and computer science perspectives can be applied in a wide diversity of contexts to problems arising in theoretical neuroscience and data analysis. Along the way we discuss spin glasses, learning theory, illusions of structure in noise, random matrices, dimensionality reduction and compressed sensing, all within the unified formalism of the replica method. Moreover, we review recent conceptual connections between message passing in graphical models, and neural computation and learning. Overall, these ideas illustrate how statistical physics and computer science might provide a lens through which we can uncover emergent computational functions buried deep within the dynamical complexities of neuronal networks.
The applications of statistical quantification techniques in nanomechanics and nanoelectronics.
Mai, Wenjie; Deng, Xinwei
2010-10-08
Although nanoscience and nanotechnology have been developing for approximately two decades and have achieved numerous breakthroughs, the experimental results from nanomaterials with a higher noise level and poorer repeatability than those from bulk materials still remain as a practical issue, and challenge many techniques of quantification of nanomaterials. This work proposes a physical-statistical modeling approach and a global fitting statistical method to use all the available discrete data or quasi-continuous curves to quantify a few targeted physical parameters, which can provide more accurate, efficient and reliable parameter estimates, and give reasonable physical explanations. In the resonance method for measuring the elastic modulus of ZnO nanowires (Zhou et al 2006 Solid State Commun. 139 222-6), our statistical technique gives E = 128.33 GPa instead of the original E = 108 GPa, and unveils a negative bias adjustment f(0). The causes are suggested by the systematic bias in measuring the length of the nanowires. In the electronic measurement of the resistivity of a Mo nanowire (Zach et al 2000 Science 290 2120-3), the proposed new method automatically identified the importance of accounting for the Ohmic contact resistance in the model of the Ohmic behavior in nanoelectronics experiments. The 95% confidence interval of resistivity in the proposed one-step procedure is determined to be (3.57 +/- 0.0274) x 10^-5 ohm cm, which should be a more reliable and precise estimate. The statistical quantification technique should find wide applications in obtaining better estimations from various systematic errors and biased effects that become more significant at the nanoscale.
SHARE: system design and case studies for statistical health information release
Gardner, James; Xiong, Li; Xiao, Yonghui; Gao, Jingjing; Post, Andrew R; Jiang, Xiaoqian; Ohno-Machado, Lucila
2013-01-01
Objectives We present SHARE, a new system for statistical health information release with differential privacy. We present two case studies that evaluate the software on real medical datasets and demonstrate the feasibility and utility of applying the differential privacy framework on biomedical data. Materials and Methods SHARE releases statistical information in electronic health records with differential privacy, a strong privacy framework for statistical data release. It includes a number of state-of-the-art methods for releasing multidimensional histograms and longitudinal patterns. We performed a variety of experiments on two real datasets, the surveillance, epidemiology and end results (SEER) breast cancer dataset and the Emory electronic medical record (EeMR) dataset, to demonstrate the feasibility and utility of SHARE. Results Experimental results indicate that SHARE can deal with heterogeneous data present in medical data, and that the released statistics are useful. The Kullback–Leibler divergence between the released multidimensional histograms and the original data distribution is below 0.5 and 0.01 for seven-dimensional and three-dimensional data cubes generated from the SEER dataset, respectively. The relative error for longitudinal pattern queries on the EeMR dataset varies between 0 and 0.3. While the results are promising, they also suggest that challenges remain in applying statistical data release using the differential privacy framework for higher dimensional data. Conclusions SHARE is one of the first systems to provide a mechanism for custodians to release differentially private aggregate statistics for a variety of use cases in the medical domain. This proof-of-concept system is intended to be applied to large-scale medical data warehouses. PMID:23059729
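A central building block of differentially private statistics release of the kind SHARE performs is the Laplace mechanism applied to histogram counts. The sketch below is an illustrative stand-in with toy data, not the SHARE implementation or its multidimensional and longitudinal methods.

```python
# Illustrative sketch: epsilon-differentially private release of a 1-D histogram
# via the Laplace mechanism (each record affects one bin by 1, so sensitivity = 1).
import numpy as np

def dp_histogram(values, bins, epsilon, rng=None):
    """Return noisy, non-negative histogram counts and the bin edges."""
    rng = rng or np.random.default_rng()
    counts, edges = np.histogram(values, bins=bins)
    noisy = counts + rng.laplace(loc=0.0, scale=1.0 / epsilon, size=counts.shape)
    return np.clip(noisy, 0, None), edges   # clip negatives for usability

rng = np.random.default_rng(1)
ages = rng.integers(20, 90, size=1000)                  # toy patient ages
noisy_counts, edges = dp_histogram(ages, bins=7, epsilon=0.5, rng=rng)
print(np.round(noisy_counts, 1))
```

Smaller epsilon gives stronger privacy but noisier counts, which is exactly the utility trade-off the SEER and EeMR experiments quantify with Kullback-Leibler divergence and relative error.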
α-induced reactions on 115In: Cross section measurements and statistical model analysis
NASA Astrophysics Data System (ADS)
Kiss, G. G.; Szücs, T.; Mohr, P.; Török, Zs.; Huszánk, R.; Gyürky, Gy.; Fülöp, Zs.
2018-05-01
Background: α-nucleus optical potentials are basic ingredients of statistical model calculations used in nucleosynthesis simulations. While the nucleon+nucleus optical potential is fairly well known, for the α+nucleus optical potential several different parameter sets exist and large deviations, reaching sometimes even an order of magnitude, are found between the cross section predictions calculated using different parameter sets. Purpose: A measurement of the radiative α-capture and the α-induced reaction cross sections on the nucleus 115In at low energies allows a stringent test of statistical model predictions. Since experimental data are scarce in this mass region, this measurement can be an important input to test the global applicability of α+nucleus optical model potentials and further ingredients of the statistical model. Methods: The reaction cross sections were measured by means of the activation method. The produced activities were determined by off-line detection of the γ rays and characteristic x rays emitted during the electron capture decay of the produced Sb isotopes. The 115In(α,γ)119Sb and 115In(α,n)118mSb reaction cross sections were measured between Ec.m. = 8.83 and 15.58 MeV, and the 115In(α,n)118gSb reaction was studied between Ec.m. = 11.10 and 15.58 MeV. The theoretical analysis was performed within the statistical model. Results: The simultaneous measurement of the (α,γ) and (α,n) cross sections allowed us to determine a best-fit combination of all parameters for the statistical model. The α+nucleus optical potential is identified as the most important input for the statistical model. The best fit is obtained for the new Atomki-V1 potential, and good reproduction of the experimental data is also achieved for the first version of the Demetriou potentials and the simple McFadden-Satchler potential. The nucleon optical potential, the γ-ray strength function, and the level density parametrization are also constrained by the data although there is no unique best-fit combination. Conclusions: The best-fit calculations allow us to extrapolate the low-energy (α,γ) cross section of 115In to the astrophysical Gamow window with reasonable uncertainties. However, still further improvements of the α-nucleus potential are required for a global description of elastic (α,α) scattering and α-induced reactions in a wide range of masses and energies.
The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison
Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth
2006-01-01
Background Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filter, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results The Expresso analysis using MIV data consistently identifies more genes as differentially expressed, when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497
Jenkinson, Garrett; Abante, Jordi; Feinberg, Andrew P; Goutsias, John
2018-03-07
DNA methylation is a stable form of epigenetic memory used by cells to control gene expression. Whole genome bisulfite sequencing (WGBS) has emerged as a gold-standard experimental technique for studying DNA methylation by producing high resolution genome-wide methylation profiles. Statistical modeling and analysis is employed to computationally extract and quantify information from these profiles in an effort to identify regions of the genome that demonstrate crucial or aberrant epigenetic behavior. However, the performance of most currently available methods for methylation analysis is hampered by their inability to directly account for statistical dependencies between neighboring methylation sites, thus ignoring significant information available in WGBS reads. We present a powerful information-theoretic approach for genome-wide modeling and analysis of WGBS data based on the 1D Ising model of statistical physics. This approach takes into account correlations in methylation by utilizing a joint probability model that encapsulates all information available in WGBS methylation reads and produces accurate results even when applied on single WGBS samples with low coverage. Using the Shannon entropy, our approach provides a rigorous quantification of methylation stochasticity in individual WGBS samples genome-wide. Furthermore, it utilizes the Jensen-Shannon distance to evaluate differences in methylation distributions between a test and a reference sample. Differential performance assessment using simulated and real human lung normal/cancer data demonstrate a clear superiority of our approach over DSS, a recently proposed method for WGBS data analysis. Critically, these results demonstrate that marginal methods become statistically invalid when correlations are present in the data. This contribution demonstrates clear benefits and the necessity of modeling joint probability distributions of methylation using the 1D Ising model of statistical physics and of quantifying methylation stochasticity using concepts from information theory. By employing this methodology, substantial improvement of DNA methylation analysis can be achieved by effectively taking into account the massive amount of statistical information available in WGBS data, which is largely ignored by existing methods.
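Two of the information-theoretic quantities named above, the Shannon entropy and the Jensen-Shannon distance, are easy to illustrate on toy methylation-level distributions. The sketch below is only that; it does not reproduce the authors' 1D Ising model or their genome-wide machinery.

```python
# Hedged sketch: Shannon entropy of a methylation-level distribution and the
# Jensen-Shannon distance between a test and a reference sample (toy histograms).
import numpy as np
from scipy.spatial.distance import jensenshannon
from scipy.stats import entropy

# toy probability distributions over methylation levels within one genomic region
reference = np.array([0.70, 0.20, 0.05, 0.03, 0.02])
test      = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

print("Shannon entropy (reference):", entropy(reference, base=2))   # methylation stochasticity
print("Shannon entropy (test):     ", entropy(test, base=2))
print("Jensen-Shannon distance:    ", jensenshannon(reference, test, base=2))
```

A higher entropy indicates a more stochastic methylation state, and a larger Jensen-Shannon distance flags regions whose methylation distribution differs between the test and reference samples.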
Moeyaert, Mariola; Ugille, Maaike; Ferron, John M; Beretvas, S Natasha; Van den Noortgate, Wim
2014-09-01
The quantitative methods for analyzing single-subject experimental data have expanded during the last decade, including the use of regression models to statistically analyze the data, but still a lot of questions remain. One question is how to specify predictors in a regression model to account for the specifics of the design and estimate the effect size of interest. These quantitative effect sizes are used in retrospective analyses and allow synthesis of single-subject experimental study results which is informative for evidence-based decision making, research and theory building, and policy discussions. We discuss different design matrices that can be used for the most common single-subject experimental designs (SSEDs), namely, the multiple-baseline designs, reversal designs, and alternating treatment designs, and provide empirical illustrations. The purpose of this article is to guide single-subject experimental data analysts interested in analyzing and meta-analyzing SSED data. © The Author(s) 2014.
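One way to make the design-matrix discussion concrete is a simple AB (baseline/treatment) phase model with columns for level and trend changes; the effect size of interest is then a regression coefficient. The sketch below uses an assumed, illustrative column layout rather than the specific matrices recommended in the article.

```python
# Hedged sketch of a possible design matrix for a single-subject AB phase design:
# intercept, baseline trend, immediate level change, and change in trend after intervention.
import numpy as np

n_obs, t_intervention = 12, 6                    # 6 baseline + 6 treatment sessions
time = np.arange(n_obs)
phase = (time >= t_intervention).astype(float)   # 0 = baseline, 1 = treatment
X = np.column_stack([
    np.ones(n_obs),                              # intercept (baseline level)
    time,                                        # baseline trend
    phase,                                       # immediate level change (effect of interest)
    phase * (time - t_intervention),             # change in trend after intervention
])
y = 2 + 0.1 * time + 1.5 * phase + np.random.default_rng(0).normal(0, 0.3, n_obs)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated level change:", round(beta[2], 2))
```

For multiple-baseline, reversal, or alternating-treatment designs the same idea applies, but the phase and interaction columns are coded differently, which is exactly the specification question the article addresses.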
Boonyasopun, Umaporn; Aree, Patcharaporn; Avant, Kay C
2008-06-01
This quasi-experimental study examined the effects of an empowerment-based nutrition promotion program on food consumption and serum lipid levels among hyperlipidemic Thai elderly. Fifty-six experimental subjects received the program; 48 control subjects maintained their habitual lifestyle. The statistical methods used were the t-test, Z-test, and chi2/Fisher's exact test. After the program, the consumption of high saturated fat, cholesterol, and simple sugar diets was significantly lower for the experimental group than for the control group. The percentage change of the serum total cholesterol of the experimental subjects was significantly higher than that of the control subjects. The number of experimental subjects that changed from hyperlipidemia to normolipidemia significantly increased compared to that for the control subjects. The implementation of this program was related to an improvement in food consumption and serum lipid levels among hyperlipidemic Thai elderly and, therefore, has implications for practice.
ERIC Educational Resources Information Center
Li, Yuan H.; Yang, Yu N.; Tompkins, Leroy J.; Modarresi, Shahpar
2005-01-01
The statistical technique, "Zero-One Linear Programming," that has successfully been used to create multiple tests with similar characteristics (e.g., item difficulties, test information and test specifications) in the area of educational measurement, was deemed to be a suitable method for creating multiple sets of matched samples to be…
Volatile Organic Carbon Emissions. Phase 2.
1987-02-01
Use of Unlabeled Samples for Mitigating the Hughes Phenomenon
NASA Technical Reports Server (NTRS)
Landgrebe, David A.; Shahshahani, Behzad M.
1993-01-01
The use of unlabeled samples in improving the performance of classifiers is studied. When the number of training samples is fixed and small, additional feature measurements may reduce the performance of a statistical classifier. It is shown that by using unlabeled samples, estimates of the parameters can be improved and therefore this phenomenon may be mitigated. Various methods for using unlabeled samples are reviewed and experimental results are provided.
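The idea of refining parameter estimates with unlabeled samples can be sketched for one-dimensional Gaussian classes with an EM-style update; this is a toy illustration under assumed distributions (known unit variance, equal priors), not the estimators analyzed in the paper.

```python
# Minimal sketch: refine class-mean estimates with unlabeled samples via EM updates.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu_true = {0: -1.0, 1: 1.0}
x_lab = {c: rng.normal(mu_true[c], 1.0, 5) for c in (0, 1)}                  # few labeled samples
x_unl = np.concatenate([rng.normal(mu_true[c], 1.0, 500) for c in (0, 1)])   # many unlabeled

mu = {c: x_lab[c].mean() for c in (0, 1)}            # labeled-only estimate (high variance)
for _ in range(20):                                  # EM updates using the unlabeled pool
    lik = {c: norm.pdf(x_unl, mu[c], 1.0) for c in (0, 1)}
    resp1 = lik[1] / (lik[0] + lik[1])               # posterior P(class 1 | x), equal priors
    resp = {0: 1.0 - resp1, 1: resp1}
    mu = {c: (x_lab[c].sum() + (resp[c] * x_unl).sum()) /
             (len(x_lab[c]) + resp[c].sum()) for c in (0, 1)}
print({c: round(mu[c], 3) for c in (0, 1)})          # typically closer to the true means
```

With only a handful of labeled samples per class, the unlabeled data pulls the mean estimates toward their true values, which is the mechanism by which the Hughes phenomenon can be mitigated.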
2013-03-21
Time series response data and a comparison of response surface designs are discussed with respect to evaluating the efficiency of the parameter estimates. In the past, the most popular form of response surface design used the D-optimality criterion. A model can refer to almost anything in math, statistics, or computer science; it can be any "physical, mathematical, or logical…"
Nam, Mi-Hyun; Chun, Myung-Sun
2018-01-01
The aim of this study is to evaluate the reporting quality of animal experiments in Korea using the Animals in Research: Reporting In Vivo Experiments (ARRIVE) guideline developed in 2010 to overcome the reproducibility problem and to encourage compliance with replacement, refinement and reduction of animals in research (3R's principle). We reviewed 50 papers published by a Korean research group from 2013 to 2016 and scored their conformity with the 20-item ARRIVE guideline. The median conformity score was 39.50%. For more precise evaluation, the 20 items were subdivided into 57 sub-items. Among the sub-items, the status of experimental animals, housing and husbandry were described below the average level. Microenvironment sub-items, such as enrichment, bedding material, cage type, number of companions, scored under 10%. Although statistical methods used for the studies were given in most publications (84%), sample size calculation and statistical assumptions were rarely described. Most publications mentioned the IACUC approval, but only 8% mentioned welfare-related assessments and interventions, and only 4% mentioned any implications of experimental methods or findings for the 3Rs. We recommend revising the present IACUC proposal to collect more detailed information and improving the educational program for animal researchers according to the ARRIVE guideline. PMID:29628972
RAId_DbS: Peptide Identification using Database Searches with Realistic Statistics
Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo
2007-01-01
Background The key to mass-spectrometry-based proteomics is peptide identification. A major challenge in peptide identification is to obtain realistic E-values when assigning statistical significance to candidate peptides. Results Using a simple scoring scheme, we propose a database search method with theoretically characterized statistics. Taking into account possible skewness in the random variable distribution and the effect of finite sampling, we provide a theoretical derivation for the tail of the score distribution. For every experimental spectrum examined, we collect the scores of peptides in the database, and find good agreement between the collected score statistics and our theoretical distribution. Using Student's t-tests, we quantify the degree of agreement between the theoretical distribution and the score statistics collected. The T-tests may be used to measure the reliability of reported statistics. When combined with reported P-value for a peptide hit using a score distribution model, this new measure prevents exaggerated statistics. Another feature of RAId_DbS is its capability of detecting multiple co-eluted peptides. The peptide identification performance and statistical accuracy of RAId_DbS are assessed and compared with several other search tools. The executables and data related to RAId_DbS are freely available upon request. PMID:17961253
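The general recipe of per-spectrum score statistics turned into an E-value can be sketched as follows; a simple Gaussian tail stands in for the skew-corrected, finite-sampling-aware distribution derived in the paper, so this is an illustration of the bookkeeping rather than the RAId_DbS statistics.

```python
# Hedged illustration: p-value from a fitted tail of candidate-peptide scores,
# converted to an E-value (expected number of random hits at least this good).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
candidate_scores = rng.normal(10.0, 2.0, 5000)   # scores of all database peptides for one spectrum
best_score = 20.0                                # score of the top-ranked candidate peptide

mu, sigma = candidate_scores.mean(), candidate_scores.std(ddof=1)
p_value = stats.norm.sf(best_score, loc=mu, scale=sigma)   # tail probability under fitted model
e_value = p_value * candidate_scores.size
print(f"p = {p_value:.2e}, E = {e_value:.2e}")
```

The quality of such E-values rests entirely on how well the assumed tail matches the collected score statistics, which is what the paper's goodness-of-fit t-tests are designed to check.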
Coloc-stats: a unified web interface to perform colocalization analysis of genomic features.
Simovski, Boris; Kanduri, Chakravarthi; Gundersen, Sveinung; Titov, Dmytro; Domanska, Diana; Bock, Christoph; Bossini-Castillo, Lara; Chikina, Maria; Favorov, Alexander; Layer, Ryan M; Mironov, Andrey A; Quinlan, Aaron R; Sheffield, Nathan C; Trynka, Gosia; Sandve, Geir K
2018-06-05
Functional genomics assays produce sets of genomic regions as one of their main outputs. To biologically interpret such region-sets, researchers often use colocalization analysis, where the statistical significance of colocalization (overlap, spatial proximity) between two or more region-sets is tested. Existing colocalization analysis tools vary in the statistical methodology and analysis approaches, thus potentially providing different conclusions for the same research question. As the findings of colocalization analysis are often the basis for follow-up experiments, it is helpful to use several tools in parallel and to compare the results. We developed the Coloc-stats web service to facilitate such analyses. Coloc-stats provides a unified interface to perform colocalization analysis across various analytical methods and method-specific options (e.g. colocalization measures, resolution, null models). Coloc-stats helps the user to find a method that supports their experimental requirements and allows for a straightforward comparison across methods. Coloc-stats is implemented as a web server with a graphical user interface that assists users with configuring their colocalization analyses. Coloc-stats is freely available at https://hyperbrowser.uio.no/coloc-stats/.
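A bare-bones version of colocalization testing is a permutation test on region overlap; the sketch below uses a trivial uniform-shuffle null on one chromosome, which is only one of many possible null models and is not the methodology of any particular tool wrapped by Coloc-stats.

```python
# Toy permutation sketch: observed overlap between two region-sets versus overlaps
# after randomly re-placing the query regions along a single chromosome.
import numpy as np

rng = np.random.default_rng(0)
chrom_len, width = 1_000_000, 1000
ref = np.sort(rng.integers(0, chrom_len - width, 200))     # reference region starts
query = ref[:100] + rng.integers(-500, 500, 100)           # query regions placed near references

def overlap_count(starts_a, starts_b, w):
    """Count query intervals [s, s+w) overlapping any reference interval."""
    hits = 0
    for s in starts_b:
        if np.any((starts_a < s + w) & (starts_a + w > s)):
            hits += 1
    return hits

observed = overlap_count(ref, query, width)
null = [overlap_count(ref, rng.integers(0, chrom_len - width, query.size), width)
        for _ in range(1000)]
p = (1 + sum(n >= observed for n in null)) / (1 + len(null))
print(f"observed overlaps = {observed}, permutation p = {p:.3f}")
```

Real tools differ precisely in how they preserve region lengths, chromosome structure, GC content or other covariates in the null model, which is why running several of them side by side, as Coloc-stats enables, is informative.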
Online Denoising Based on the Second-Order Adaptive Statistics Model.
Yi, Sheng-Lun; Jin, Xue-Bo; Su, Ting-Li; Tang, Zhen-Yun; Wang, Fa-Fa; Xiang, Na; Kong, Jian-Lei
2017-07-20
Online denoising is motivated by real-time applications in the industrial process, where the data must be utilizable soon after it is collected. Since the noise in practical processes is usually colored, it is quite a challenge for denoising techniques. In this paper, a novel online denoising method is proposed to achieve the processing of practical measurement data with colored noise, and the characteristics of the colored noise are considered in the dynamic model via an adaptive parameter. The proposed method consists of two parts within a closed loop: the first is to estimate the system state based on the second-order adaptive statistics model, and the other is to update the adaptive parameter in the model using the Yule-Walker algorithm. Specifically, the state estimation process is implemented via the Kalman filter in a recursive way, and the online purpose is therefore attained. Experimental data from a reinforced concrete structure test were used to verify the effectiveness of the proposed method. Results show that the proposed method not only dealt with signals containing colored noise, but also achieved a tradeoff between efficiency and accuracy.
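The two ingredients named above, a Yule-Walker estimate of the noise correlation and a recursive Kalman update, can be sketched in a scalar toy. The detrending step, the AR(1) noise model, and the way the estimated parameter inflates the measurement-noise variance are all simplifying assumptions of this sketch, not the authors' closed-loop model.

```python
# Simplified sketch: Yule-Walker AR(1) estimate of colored measurement noise,
# feeding a basic scalar Kalman filter (random-walk state model).
import numpy as np

rng = np.random.default_rng(0)
n = 500
true_signal = np.sin(np.linspace(0, 4 * np.pi, n))
colored = np.zeros(n)                                        # AR(1) colored noise
for k in range(1, n):
    colored[k] = 0.8 * colored[k - 1] + rng.normal(0, 0.2)
z = true_signal + colored                                    # measurements

# Yule-Walker for AR(1): phi = r(1)/r(0) of the (roughly detrended) residual
resid = z - np.convolve(z, np.ones(20) / 20, mode="same")    # crude detrend
phi = np.dot(resid[1:], resid[:-1]) / np.dot(resid, resid)

x_est, p_est = 0.0, 1.0
q, r = 1e-3, 0.05 / (1 - phi ** 2)                           # R inflated for strongly colored noise
estimates = []
for zk in z:                                                 # recursive (online) updates
    p_pred = p_est + q
    gain = p_pred / (p_pred + r)
    x_est = x_est + gain * (zk - x_est)
    p_est = (1 - gain) * p_pred
    estimates.append(x_est)
print(f"estimated AR(1) parameter: {phi:.2f}")
```

In the actual method the adaptive parameter is re-estimated inside the loop so the filter tracks changes in the noise color over time.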
Perception-based road hazard identification with Internet support.
Tarko, Andrew P; DeSalle, Brian R
2003-01-01
One of the most important tasks faced by highway agencies is identifying road hazards. Agencies use crash statistics to detect road intersections and segments where the frequency of crashes is excessive. With the crash-based method, a dangerous intersection or segment can be pointed out only after a sufficient number of crashes occur. A more proactive method is needed, and motorist complaints may be able to assist agencies in detecting road hazards before crashes occur. This paper investigates the quality of safety information reported by motorists and the effectiveness of hazard identification based on motorist reports, which were collected with an experimental Internet website. It demonstrates that the intersections pointed out by motorists tended to have more crashes than other intersections. The safety information collected through the website was comparable to 2-3 months of crash data. It was concluded that although the Internet-based method could not substitute for the traditional crash-based methods, its joint use with crash statistics might be useful in detecting new hazards where crash data had been collected for a short time.
Halloran, L
1995-01-01
Computers increasingly are being integrated into nursing education. One method of integration is through computer managed instruction (CMI). Recently, technology has become available that allows the integration of keypad questions into CMI. This brings a new type of interactivity between students and teachers into the classroom. The purpose of this study was to evaluate differences in achievement between a control group taught by traditional classroom lecture (TCL) and an experimental group taught using CMI and keypad questions. Both control and experimental groups consisted of convenience samples of junior nursing students in a baccalaureate program taking a medical/surgical nursing course. Achievement was measured by three instructor-developed multiple choice examinations. Findings showed that although the experimental group demonstrated increasingly higher test scores as the semester progressed, no statistical difference was found in achievement between the two groups. One reason for this may be the phenomenon of 'vampire video': initially, the method of presentation overshadowed the content. As students became desensitized to the method, they were able to focus on and absorb more content. This study suggests that CMI and keypads are a viable teaching option for nursing education. The approach is equal to TCL in student achievement and provides a new level of interaction in the classroom setting.
Numerical Modeling of Fluorescence Emission Energy Dispersion in Luminescent Solar Concentrator
NASA Astrophysics Data System (ADS)
Li, Lanfang; Sheng, Xing; Rogers, John; Nuzzo, Ralph
2013-03-01
We present a numerical modeling method and the corresponding experimental results to address fluorescence emission dispersion for applications such as luminescent solar concentrators and light emitting diode color correction. Previously established modeling methods utilized a statistical-thermodynamic theory (Kennard-Stepanov, etc.) that required a thorough understanding of the free energy landscape of the fluorophores. Some more recent work used an empirical approximation of the measured emission energy dispersion profile without considering anti-Stokes shifting during absorption and emission. In this work we present a technique for modeling fluorescence absorption and emission that utilizes the experimentally measured spectrum, approximates the observable Franck-Condon vibronic states as a continuum, and takes into account thermodynamic energy relaxation by allowing thermal fluctuations. This new approximation method relaxes the requirement for knowledge of the fluorophore system and reduces demand on computing resources while still capturing the essence of the physical process. We present simulation results of the energy distribution of emitted photons and compare them with experimental results, with good agreement in terms of peak red-shift and intensity attenuation in a luminescent solar concentrator. This work is supported by the DOE `Light-Material Interactions in Energy Conversion' Energy Frontier Research Center under grant DE-SC0001293.
Effects of Mobile Learning in Medical Education: A Counterfactual Evaluation.
Briz-Ponce, Laura; Juanes-Méndez, Juan Antonio; García-Peñalvo, Francisco José; Pereira, Anabela
2016-06-01
The aim of this research is to contribute to the general education system by providing new insights and resources. This study performs a quasi-experimental study at the University of Salamanca with 30 students to compare results between using an anatomic app for learning and the formal traditional method conducted by a teacher. The findings of the investigation suggest that the performance of learners using mobile apps is statistically better than that of students using the traditional method. However, mobile devices should be considered as an additional tool to complement the teachers' explanation, and it is necessary to overcome different barriers and challenges to adopt these pedagogical methods at university.
Reconstruction of ensembles of coupled time-delay systems from time series.
Sysoev, I V; Prokhorov, M D; Ponomarenko, V I; Bezruchko, B P
2014-06-01
We propose a method to recover from time series the parameters of coupled time-delay systems and the architecture of couplings between them. The method is based on a reconstruction of model delay-differential equations and estimation of statistical significance of couplings. It can be applied to networks composed of nonidentical nodes with an arbitrary number of unidirectional and bidirectional couplings. We test our method on chaotic and periodic time series produced by model equations of ensembles of diffusively coupled time-delay systems in the presence of noise, and apply it to experimental time series obtained from electronic oscillators with delayed feedback coupled by resistors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single layer ZrO{sub 2} device with a double layer ZnO/ZrO{sub 2} one, and obtain results which are in good agreement with experimental data.
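The time-free comparison described above rests on the Metropolis acceptance rule: a stack with a larger effective Gibbs free-energy barrier accepts fewer state-disrupting moves at a fixed temperature. The sketch below uses purely illustrative barrier heights and a generic acceptance test; it is not the paper's device model.

```python
# Highly simplified sketch of a Metropolis-style retention comparison.
import numpy as np

K_B_EV = 8.617e-5          # Boltzmann constant, eV/K

def accepted_fraction(delta_g_ev, temperature_k, n_trials=100_000, rng=None):
    """Fraction of proposed (energy-raising) moves accepted by the Metropolis criterion."""
    rng = rng or np.random.default_rng(0)
    p_accept = np.exp(-delta_g_ev / (K_B_EV * temperature_k))
    return np.mean(rng.random(n_trials) < p_accept)

# illustrative (assumed) effective barriers, eV; not values from the paper
for label, barrier in [("ZrO2 single layer", 0.10), ("ZnO/ZrO2 double layer", 0.15)]:
    print(label, accepted_fraction(barrier, temperature_k=400))
```

Comparing accepted fractions (or the number of iterations needed to disrupt a filament-like configuration) gives a relative retention ranking without simulating any time evolution, which is the efficiency argument made above.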
New approach in the quantum statistical parton distribution
NASA Astrophysics Data System (ADS)
Sohaily, Sozha; Vaziri (Khamedi), Mohammad
2017-12-01
An attempt to find simple parton distribution functions (PDFs) based on a quantum statistical approach is presented. The PDFs described by the statistical model have very interesting physical properties which help to understand the structure of partons. The longitudinal portion of the distribution functions is given by applying the maximum entropy principle. An interesting and simple approach to determine the statistical variables exactly, without fitting and fixing parameters, is surveyed. Analytic expressions of the x-dependent PDFs are obtained in the whole x region [0, 1], and the computed distributions are consistent with the experimental observations. The agreement with experimental data gives a robust confirmation of the presented statistical model.
Angland, P.; Haberberger, D.; Ivancic, S. T.; ...
2017-10-30
Here, a new method of analysis for angular filter refractometry images was developed to characterize laser-produced, long-scale-length plasmas using an annealing algorithm to iteratively converge upon a solution. Angular filter refractometry (AFR) is a novel technique used to characterize the density profiles of laser-produced, long-scale-length plasmas. A synthetic AFR image is constructed from a user-defined density profile described by eight parameters, and the algorithm systematically alters the parameters until the comparison is optimized. The optimization and statistical uncertainty calculation are based on a minimization of the χ² test statistic. The algorithm was successfully applied to experimental data of plasma expanding from a flat, laser-irradiated target, resulting in an average uncertainty in the density profile of 5-10% in the region of interest.
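The annealing loop at the heart of such a fit can be sketched with a two-parameter toy profile in place of the eight-parameter AFR density model: perturb the parameters, accept or reject by the Metropolis rule on the χ² statistic, and cool the temperature. The profile shape, step sizes, and cooling schedule below are all assumptions of this sketch.

```python
# Hedged sketch: simulated annealing minimizing chi-squared between a parameterized
# model profile and noisy "measured" data.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)
data = 5.0 * np.exp(-x / 0.2) + rng.normal(0, 0.2, x.size)   # stand-in for measured signal
sigma = 0.2

def chi2(params):
    amp, scale = params
    return np.sum(((data - amp * np.exp(-x / scale)) / sigma) ** 2)

params = np.array([1.0, 0.5])                      # initial guess
best, best_chi2, temp = params.copy(), chi2(params), 50.0
for step in range(5000):
    trial = np.maximum(params + rng.normal(0, 0.02, 2), 1e-3)   # keep parameters positive
    d = chi2(trial) - chi2(params)
    if d < 0 or rng.random() < np.exp(-d / temp):  # Metropolis acceptance
        params = trial
        if chi2(params) < best_chi2:
            best, best_chi2 = params.copy(), chi2(params)
    temp *= 0.999                                  # cooling schedule
print("best-fit amplitude, scale length:", np.round(best, 3))
```

The spread of accepted parameter values around the best fit at low temperature gives a handle on the statistical uncertainty, which parallels the χ²-based uncertainty estimate described above.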
Vargas-Rodriguez, Everardo; Guzman-Chavez, Ana Dinora; Baeza-Serrato, Roberto
2018-06-04
In this work, a novel tailored algorithm to enhance the overall sensitivity of gas concentration sensors based on the Direct Absorption Tunable Laser Absorption Spectroscopy (DA-ATLAS) method is presented. By using this algorithm, the sensor sensitivity can be custom-designed to be quasi-constant over a much larger dynamic range compared with that obtained by typical methods based on a single statistics feature of the sensor signal output (peak amplitude, area under the curve, mean or RMS). Additionally, it is shown that with our algorithm, an optimal function can be tailored to get a quasi-linear relationship between the concentration and some specific statistics features over a wider dynamic range. In order to test the viability of our algorithm, a basic C2H2 sensor based on DA-ATLAS was implemented, and its experimental measurements support the simulated results provided by our algorithm.
Can the behavioral sciences self-correct? A social epistemic study.
Romero, Felipe
2016-12-01
Advocates of the self-corrective thesis argue that scientific method will refute false theories and find closer approximations to the truth in the long run. I discuss a contemporary interpretation of this thesis in terms of frequentist statistics in the context of the behavioral sciences. First, I identify experimental replications and systematic aggregation of evidence (meta-analysis) as the self-corrective mechanism. Then, I present a computer simulation study of scientific communities that implement this mechanism to argue that frequentist statistics may converge upon a correct estimate or not depending on the social structure of the community that uses it. Based on this study, I argue that methodological explanations of the "replicability crisis" in psychology are limited and propose an alternative explanation in terms of biases. Finally, I conclude suggesting that scientific self-correction should be understood as an interaction effect between inference methods and social structures. Copyright © 2016 Elsevier Ltd. All rights reserved.
Shah, Dipali Yogesh; Dadpe, Ashwini Manish; Kalra, Dheeraj Deepak; Garcha, Vikram P
2015-12-01
The aim of this study was to investigate if a videotaped feedback method enhanced teaching and learning outcomes in a preclinical operative laboratory setting for novice learners. In 2013, 60 dental students at a dental school in India were randomly assigned to two groups: control (n=30) and experimental (n=30). The control group prepared a Class II tooth preparation for amalgam after receiving a video demonstration of the exercise. The experimental group received the same video demonstration as the control group, but they also participated in a discussion and analysis of the control group's videotaped performance and then performed the same exercise. The self-evaluation scores (SS) and examiner evaluation scores (ES) of the two groups were compared using the unpaired t-test. The experimental group also used a five-point Likert scale to rate each item on the feedback form. The means of SS (13.65±2.43) and ES (14.75±1.97) of the experimental group were statistically higher than the means of SS (11.55±2.09) and ES (11.60±1.82) of the control group. Most students in the experimental group perceived that this technique enhanced their learning experience. Within the limits of this study, the videotaped feedback using both ideal and non-ideal examples enhanced the students' performance.
Statistical Analyses of Raw Material Data for MTM45-1/CF7442A-36% RW: CMH Cure Cycle
NASA Technical Reports Server (NTRS)
Coroneos, Rula; Pai, Shantaram, S.; Murthy, Pappu
2013-01-01
This report describes statistical characterization of physical properties of the composite material system MTM45-1/CF7442A, which has been tested and is currently being considered for use on spacecraft structures. This composite system is made of 6K plain weave graphite fibers in a highly toughened resin system. This report summarizes the distribution types and statistical details of the tests and the conditions for the experimental data generated. These distributions will be used in multivariate regression analyses to help determine material and design allowables for similar material systems and to establish a procedure for other material systems. Additionally, these distributions will be used in future probabilistic analyses of spacecraft structures. The specific properties that are characterized are the ultimate strength, modulus, and Poisson's ratio by using a commercially available statistical package. Results are displayed using graphical and semigraphical methods and are included in the accompanying appendixes.
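The kind of distribution-type characterization described above can be illustrated with open-source tools on synthetic strength data; the report itself uses a commercial statistics package, and the data, candidate distributions, and percentile reported below are illustrative assumptions only.

```python
# Illustrative sketch: fit candidate distributions to toy ultimate-strength data
# and check goodness of fit before using the distribution in allowables or
# probabilistic analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
strength = rng.normal(620.0, 35.0, 30)            # toy ultimate-strength data, MPa

for name, dist in [("normal", stats.norm), ("weibull_min", stats.weibull_min)]:
    params = dist.fit(strength)
    ks_stat, p = stats.kstest(strength, dist.name, args=params)
    print(f"{name:12s} KS p-value = {p:.3f}")

# a design-allowable calculation would use a lower tolerance bound on the chosen
# distribution; here we simply report the 1st-percentile estimate of the fitted normal.
mu, sd = stats.norm.fit(strength)
print("fitted 1st percentile:", round(stats.norm.ppf(0.01, mu, sd), 1), "MPa")
```

The choice of distribution type (normal, Weibull, lognormal) drives the lower-tail estimates that matter for allowables, which is why the report documents the distribution type for each property and condition.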
Scharfenberger, Christian; Wong, Alexander; Clausi, David A
2015-01-01
We propose a simple yet effective structure-guided statistical textural distinctiveness approach to salient region detection. Our method uses a multilayer approach to analyze the structural and textural characteristics of natural images as important features for salient region detection from a scale point of view. To represent the structural characteristics, we abstract the image using structured image elements and extract rotational-invariant neighborhood-based textural representations to characterize each element by an individual texture pattern. We then learn a set of representative texture atoms for sparse texture modeling and construct a statistical textural distinctiveness matrix to determine the distinctiveness between all representative texture atom pairs in each layer. Finally, we determine saliency maps for each layer based on the occurrence probability of the texture atoms and their respective statistical textural distinctiveness and fuse them to compute a final saliency map. Experimental results using four public data sets and a variety of performance evaluation metrics show that our approach provides promising results when compared with existing salient region detection approaches.
A Novel Method to Decontaminate Surgical Instruments for Operational and Austere Environments.
Knox, Randy W; Demons, Samandra T; Cunningham, Cord W
2015-12-01
The purpose of this investigation was to test a field-expedient, cost-effective method to decontaminate, sterilize, and package surgical instruments in an operational (combat) or austere environment using chlorhexidine sponges, ultraviolet C (UVC) light, and commercially available vacuum sealing. This was a bench study of 4 experimental groups and 1 control group of 120 surgical instruments. Experimental groups were inoculated with a 10(6) concentration of common wound bacteria. The control group was vacuum sealed without inoculum. Groups 1, 2, and 3 were first scrubbed with a chlorhexidine sponge, rinsed, and dried. Group 1 was then packaged; group 2 was irradiated with UVC light, then packaged; group 3 was packaged, then irradiated with UVC light through the bag; and group 4 was packaged without chlorhexidine scrubbing or UVC irradiation. The UVC was not tested by itself, as it does not grossly clean. The instruments were stored overnight and tested for remaining colony forming units (CFU). Data analysis was conducted using analysis of variance and group comparisons using the Tukey method. Group 4 CFU was statistically greater (P < .001) than the control group and groups 1 through 3. There was no statistically significant difference between the control group and groups 1 through 3. Vacuum sealing of chlorhexidine-scrubbed contaminated instruments with and without handheld UVC irradiation appears to be an acceptable method of field decontamination. Chlorhexidine scrubbing alone achieved a 99.9% reduction in CFU, whereas adding UVC before packaging achieved sterilization or 100% reduction in CFU, and UVC through the bag achieved disinfection. Published by Elsevier Inc.
Mueller matrix mapping of biological polycrystalline layers using reference wave
NASA Astrophysics Data System (ADS)
Dubolazov, A.; Ushenko, O. G.; Ushenko, Yu. O.; Pidkamin, L. Y.; Sidor, M. I.; Grytsyuk, M.; Prysyazhnyuk, P. V.
2018-01-01
The paper consists of two parts. The first part is devoted to the short theoretical basics of the method of differential Mueller-matrix description of the properties of partially depolarizing layers. Experimentally measured maps of the first-order differential matrix of the polycrystalline structure of a histological section of brain tissue are provided. The statistical moments of the 1st-4th orders, which characterize the distribution of matrix elements, are defined. The second part of the paper provides the data of a statistical analysis of birefringence and dichroism of histological sections of mice liver tissue (normal and with diabetes). Objective criteria for the differential diagnostics of diabetes are defined.
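The 1st-4th order statistical moments used above to characterize the distribution of matrix-element values can be computed directly from a map; one common convention (mean, variance, skewness, excess kurtosis) is sketched below on a synthetic map standing in for the measured one.

```python
# Minimal sketch: 1st-4th order statistical moments of a map of matrix-element values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
element_map = rng.gamma(shape=2.0, scale=0.5, size=(256, 256)).ravel()  # toy map values

moments = {
    "mean (1st)":      np.mean(element_map),
    "variance (2nd)":  np.var(element_map),
    "skewness (3rd)":  stats.skew(element_map),
    "kurtosis (4th)":  stats.kurtosis(element_map),   # excess kurtosis
}
for name, value in moments.items():
    print(f"{name:15s} {value:.3f}")
```

Differences in the higher-order moments between normal and pathological tissue maps are what serve as the objective diagnostic criteria referred to above.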
Removal of EMG and ECG artifacts from EEG based on wavelet transform and ICA.
Zhou, Weidong; Gotman, Jean
2004-01-01
In this study, the methods of wavelet threshold de-noising and independent component analysis (ICA) are introduced. ICA is a novel signal processing technique based on higher-order statistics and is used to separate independent components from measurements. The extended ICA algorithm does not need to calculate the higher-order statistics, converges fast, and can be used to separate sub-Gaussian and super-Gaussian sources. A pre-whitening procedure is performed to de-correlate the mixed signals before extracting sources. The experimental results indicate that the electromyogram (EMG) and electrocardiogram (ECG) artifacts in the electroencephalogram (EEG) can be removed by a combination of wavelet threshold de-noising and ICA.
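The two-stage idea can be sketched on toy signals: soft-threshold the wavelet coefficients of each channel, then apply ICA to separate the artifact component. The signals, wavelet choice, and threshold rule below are assumptions of this sketch, and FastICA stands in for whichever extended ICA variant the study uses.

```python
# Hedged sketch: wavelet soft-thresholding per channel followed by FastICA separation.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 2, 1000)
eeg = np.sin(2 * np.pi * 10 * t)                       # toy "brain" rhythm
ecg = (np.mod(t, 0.8) < 0.05).astype(float)            # toy spiky cardiac artifact
mixed = np.array([eeg + 0.6 * ecg, 0.5 * eeg + ecg]) + rng.normal(0, 0.05, (2, t.size))

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745     # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(x.size))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: x.size]

denoised = np.array([wavelet_denoise(ch) for ch in mixed])
sources = FastICA(n_components=2, random_state=0).fit_transform(denoised.T).T
print("recovered source shapes:", sources.shape)       # inspect and zero the artifact component
```

After separation, the component identified as EMG or ECG is zeroed and the remaining components are mixed back to obtain the cleaned EEG channels.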
Statistical theory of chromatography: new outlooks for affinity chromatography.
Denizot, F C; Delaage, M A
1975-01-01
We have developed further the statistical approach to chromatography initiated by Giddings and Eyring, and applied it to affinity chromatography. By means of a convenient expression of moments the convergence towards the Laplace-Gauss distribution has been established. The Gaussian character is not preserved if other causes of dispersion are taken into account, but expressions of moments can be obtained in a generalized form. A simple procedure is deduced for expressing the fundamental constants of the model in terms of purely experimental quantities. Thus, affinity chromatography can be used to determine rate constants of association and dissociation in a range considered as the domain of the stopped-flow methods. PMID:1061072
Kosor, Begüm Yerci; Artunç, Celal; Şahan, Heval
2015-07-01
A key factor for an implant-retained facial prosthesis is the success of the bonding between the substructure and the silicone elastomer. Little has been reported on the bonding of fiber-reinforced composite (FRC) to silicone elastomers. An experimental FRC could be an alternative substructure material for facial prostheses to light-activated aliphatic urethane acrylate, orthodontic acrylic resin, or commercially available FRCs. The purpose of this study was to evaluate the bonding of the experimental FRC, orthodontic acrylic resin, and light-activated aliphatic urethane acrylate to a commercially available high-temperature vulcanizing silicone elastomer. Shear and 180-degree peel bond strengths of 3 different substructures (experimental FRC, orthodontic acrylic resin, light-activated aliphatic urethane acrylate) (n=15) to a high-temperature vulcanizing maxillofacial silicone elastomer (M511) with a primer (G611) were assessed after 200 hours of accelerated artificial light-aging. The specimens were tested in a universal testing machine at a cross-head speed of 10 mm/min. Data were collected and statistically analyzed by 1-way ANOVA, followed by the Bonferroni correction and the Dunnett post hoc test (α=.05). Modes of failure were visually determined, categorized as adhesive, cohesive, or mixed, and statistically analyzed with the chi-squared goodness-of-fit test (α=.05). When the mean shear bond strength values were evaluated statistically, no difference was found among the experimental FRC, aliphatic urethane acrylate, and orthodontic acrylic resin subgroups (P>.05). The mean peel bond strengths of the experimental FRC and aliphatic urethane acrylate were not statistically different (P>.05). The mean peel bond strength of the orthodontic acrylic resin subgroup was statistically lower (P<.05). Shear test failure types were statistically different (P<.05), whereas 180-degree peel test failure types were not (P>.05). Shear forces predominantly produced cohesive failure (64.4%), whereas peel forces predominantly produced adhesive failure (93.3%). The mean shear bond strengths of the experimental FRC and aliphatic urethane acrylate groups were not statistically different (P>.05). The mean 180-degree peel strength of the orthodontic acrylic resin group was lower (P<.05). Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
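For the categorical part of the analysis, the chi-squared goodness-of-fit test on failure-mode counts can be sketched with SciPy as follows; the counts used here are illustrative, not the study's data.

    from scipy.stats import chisquare

    # Illustrative failure-mode counts (adhesive, cohesive, mixed) for one test type
    observed = [29, 12, 4]
    expected = [sum(observed) / 3.0] * 3  # equal-frequency null hypothesis
    stat, p = chisquare(f_obs=observed, f_exp=expected)
    print(f"chi2 = {stat:.2f}, p = {p:.4f}")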
Statistical detection of EEG synchrony using empirical bayesian inference.
Singh, Archana K; Asoh, Hideki; Takeda, Yuji; Phillips, Steven
2015-01-01
There is growing interest in understanding how the brain utilizes synchronized oscillatory activity to integrate information across functionally connected regions. Computing phase-locking values (PLV) between EEG signals is a popular method for quantifying such synchronizations and elucidating their role in cognitive tasks. However, the high dimensionality of PLV data incurs a serious multiple testing problem. Standard multiple testing methods in neuroimaging research (e.g., false discovery rate, FDR) suffer severe loss of power, because they fail to exploit the complex dependence structure between hypotheses that vary in the spectral, temporal and spatial dimensions. Previously, we showed that hierarchical FDR and optimal discovery procedures could be effectively applied to PLV analysis to provide better power than FDR. In this article, we revisit the multiple comparison problem from a new Empirical Bayes perspective and propose the application of the local FDR method (locFDR; Efron, 2001) to PLV synchrony analysis, computing FDR as the posterior probability that an observed statistic belongs to the null hypothesis. We demonstrate the application of Efron's Empirical Bayes approach to PLV synchrony analysis for the first time. We use simulations to validate the specificity and sensitivity of locFDR, and a real EEG dataset from a visual search study for experimental validation. We also compare locFDR with hierarchical FDR and optimal discovery procedures in both the simulation and experimental analyses. Our simulation results show that locFDR can effectively control false positives without compromising the power of PLV synchrony inference. Applying locFDR to the experimental data detected more significant discoveries than our previously proposed methods, whereas the standard FDR method failed to detect any significant discoveries.
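The core locFDR idea, the posterior probability that a statistic comes from the null, can be sketched as below. This is a deliberately simplified version: it uses the theoretical N(0,1) null and a Gaussian kernel density estimate of the mixture density, whereas Efron's procedure typically fits an empirical null and a Poisson-regression spline to the z-score histogram.

    import numpy as np
    from scipy.stats import norm, gaussian_kde

    def local_fdr(z, p0=1.0):
        """Minimal local FDR: fdr(z) = p0 * f0(z) / f(z)."""
        z = np.asarray(z, dtype=float)
        f = gaussian_kde(z)(z)   # estimated mixture density of all z-scores
        f0 = norm.pdf(z)         # theoretical null density
        return np.clip(p0 * f0 / f, 0.0, 1.0)

    # z-scores derived from PLV statistics (hypothetical); declare synchrony where fdr < 0.2
    z_scores = np.random.randn(5000)
    significant = local_fdr(z_scores) < 0.2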
Level set method for image segmentation based on moment competition
NASA Astrophysics Data System (ADS)
Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai
2015-05-01
We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.
NASA Astrophysics Data System (ADS)
Mousavi Anzehaee, Mohammad; Adib, Ahmad; Heydarzadeh, Kobra
2015-10-01
The manner of microtremor data collection and filtering, as well as the processing method used, have a considerable effect on the accuracy of the estimated dynamic soil parameters. In this paper, a running variance method was used to improve the automatic detection of data sections affected by local perturbations. In this method, the running variance of the microtremor data is computed using a sliding window, and the resulting signal is used to remove the perturbation-affected ranges from the original data. Additionally, to determine the fundamental frequency of a site, this study proposes a method based on statistical characteristics: the probability density graph and the average and standard deviation of all the frequencies corresponding to the maximum peaks in the H/V spectra of all data windows are used to differentiate real peaks from false peaks caused by perturbations. The methods have been applied to data recorded for the City of Meybod in central Iran. Experimental results show that the applied methods successfully reduce the effects of extensive local perturbations on microtremor data and ultimately estimate the fundamental frequency more accurately than other common methods.
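A minimal sketch of the running-variance screening step is given below. The window length and the rejection rule (a multiple of the median running variance) are illustrative assumptions, since the abstract does not specify them.

    import numpy as np

    def running_variance(x, window=256):
        """Sliding-window variance of a microtremor trace."""
        x = np.asarray(x, dtype=float)
        kernel = np.ones(window) / window
        mean = np.convolve(x, kernel, mode="same")
        mean_sq = np.convolve(x ** 2, kernel, mode="same")
        return np.maximum(mean_sq - mean ** 2, 0.0)

    def remove_perturbed_sections(x, window=256, k=3.0):
        """Drop samples whose running variance exceeds k times its median level."""
        rv = running_variance(x, window)
        return x[rv < k * np.median(rv)]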
Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin
2018-04-26
Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance.
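A compact PyTorch sketch of the general architecture described, parallel convolutional paths feeding a bidirectional recurrent layer with a residual connection, is given below. Layer sizes, kernel widths, and the use of a GRU are illustrative assumptions and do not reproduce the authors' exact network.

    import torch
    import torch.nn as nn

    class MultiPathConvRNN(nn.Module):
        def __init__(self, n_channels=64, hidden=128, n_outputs=2):
            super().__init__()
            # two parallel temporal-convolution paths with different kernel sizes
            self.path_a = nn.Sequential(nn.Conv1d(n_channels, 64, 3, padding=1), nn.ReLU())
            self.path_b = nn.Sequential(nn.Conv1d(n_channels, 64, 7, padding=3), nn.ReLU())
            self.rnn = nn.GRU(128, hidden, batch_first=True, bidirectional=True)
            self.proj = nn.Linear(128, 2 * hidden)  # match widths for the residual connection
            self.head = nn.Linear(2 * hidden, n_outputs)

        def forward(self, x):                      # x: (batch, channels, time)
            feats = torch.cat([self.path_a(x), self.path_b(x)], dim=1)  # (batch, 128, time)
            feats = feats.transpose(1, 2)                               # (batch, time, 128)
            rnn_out, _ = self.rnn(feats)
            rnn_out = rnn_out + self.proj(feats)   # residual connection around the recurrent layer
            return self.head(rnn_out[:, -1])       # workload estimate from the final time step

    # e.g. a batch of 16 windows, 64 EEG channels, 512 samples each
    print(MultiPathConvRNN()(torch.randn(16, 64, 512)).shape)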
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chu, Tsong-Lun; Varuttamaseni, Athi; Baek, Joo-Seok
The U.S. Nuclear Regulatory Commission (NRC) encourages the use of probabilistic risk assessment (PRA) technology in all regulatory matters, to the extent supported by the state-of-the-art in PRA methods and data. Although much has been accomplished in the area of risk-informed regulation, risk assessment for digital systems has not been fully developed. The NRC established a plan for research on digital systems to identify and develop methods, analytical tools, and regulatory guidance for (1) including models of digital systems in the PRAs of nuclear power plants (NPPs), and (2) incorporating digital systems in the NRC's risk-informed licensing and oversight activities. Under NRC's sponsorship, Brookhaven National Laboratory (BNL) explored approaches for addressing the failures of digital instrumentation and control (I and C) systems in the current NPP PRA framework. Specific areas investigated included PRA modeling of digital hardware, development of a philosophical basis for defining software failure, and identification of desirable attributes of quantitative software reliability methods. Based on the earlier research, statistical testing is considered a promising method for quantifying software reliability. This paper describes a statistical software testing approach for quantifying software reliability and applies it to the loop-operating control system (LOCS) of an experimental loop of the Advanced Test Reactor (ATR) at Idaho National Laboratory (INL).
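One standard building block of statistical software testing is a confidence bound on the per-demand failure probability obtained from a set of independent, operationally representative tests. The sketch below computes a Clopper-Pearson upper bound; it illustrates the general idea only and is not the BNL procedure described in the paper.

    from scipy.stats import beta

    def failure_prob_upper_bound(n_tests, n_failures=0, confidence=0.95):
        """Clopper-Pearson upper bound on the per-demand failure probability.
        With zero observed failures it reduces to 1 - (1 - confidence) ** (1 / n_tests)."""
        return beta.ppf(confidence, n_failures + 1, n_tests - n_failures)

    # e.g. about 5,000 failure-free tests support a 95% bound of roughly 6e-4 per demand
    print(failure_prob_upper_bound(5000))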
Hefron, Ryan; Borghetti, Brett; Schubert Kabban, Christine; Christensen, James; Estepp, Justin
2018-01-01
Applying deep learning methods to electroencephalograph (EEG) data for cognitive state assessment has yielded improvements over previous modeling methods. However, research focused on cross-participant cognitive workload modeling using these techniques is underrepresented. We study the problem of cross-participant state estimation in a non-stimulus-locked task environment, where a trained model is used to make workload estimates on a new participant who is not represented in the training set. Using experimental data from the Multi-Attribute Task Battery (MATB) environment, a variety of deep neural network models are evaluated in the trade-space of computational efficiency, model accuracy, variance and temporal specificity yielding three important contributions: (1) The performance of ensembles of individually-trained models is statistically indistinguishable from group-trained methods at most sequence lengths. These ensembles can be trained for a fraction of the computational cost compared to group-trained methods and enable simpler model updates. (2) While increasing temporal sequence length improves mean accuracy, it is not sufficient to overcome distributional dissimilarities between individuals’ EEG data, as it results in statistically significant increases in cross-participant variance. (3) Compared to all other networks evaluated, a novel convolutional-recurrent model using multi-path subnetworks and bi-directional, residual recurrent layers resulted in statistically significant increases in predictive accuracy and decreases in cross-participant variance. PMID:29701668
Vinciotti, Veronica; Liu, Xiaohui; Turk, Rolf; de Meijer, Emile J; 't Hoen, Peter A C
2006-04-03
The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan and gamma-sarcoglycan deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it allows fast interpretation of the temporal behaviour of genes. The R code is available from V.V. The microarray data have been submitted to GEO under series GSE1574 and GSE3523.
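The two main ingredients, polynomial fitting of each replicate's temporal profile and a Hotelling T2 comparison of the fitted coefficients, can be sketched as below for the two-condition case. The polynomial degree and the use of the classical two-sample T2 with its F approximation are illustrative choices; the paper's test handles more than two conditions.

    import numpy as np
    from scipy.stats import f as f_dist

    def coeffs_per_replicate(timepoints, profiles, degree=2):
        """Fit a polynomial to each replicate profile; rows are coefficient vectors."""
        return np.array([np.polyfit(timepoints, y, degree) for y in profiles])

    def hotelling_t2_two_sample(X1, X2):
        """Two-sample Hotelling T2 on coefficient vectors, with the F-based p-value."""
        n1, n2, p = len(X1), len(X2), X1.shape[1]
        d = X1.mean(axis=0) - X2.mean(axis=0)
        S = ((n1 - 1) * np.cov(X1, rowvar=False) +
             (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
        t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(S, d)
        F = t2 * (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p)
        return t2, f_dist.sf(F, p, n1 + n2 - p - 1)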
Vinciotti, Veronica; Liu, Xiaohui; Turk, Rolf; de Meijer, Emile J; 't Hoen, Peter AC
2006-01-01
Background The identification of biologically interesting genes in a temporal expression profiling dataset is challenging and complicated by high levels of experimental noise. Most statistical methods used in the literature do not fully exploit the temporal ordering in the dataset and are not suited to the case where temporal profiles are measured for a number of different biological conditions. We present a statistical test that makes explicit use of the temporal order in the data by fitting polynomial functions to the temporal profile of each gene and for each biological condition. A Hotelling T2-statistic is derived to detect the genes for which the parameters of these polynomials are significantly different from each other. Results We validate the temporal Hotelling T2-test on muscular gene expression data from four mouse strains which were profiled at different ages: dystrophin-, beta-sarcoglycan and gamma-sarcoglycan deficient mice, and wild-type mice. The first three are animal models for different muscular dystrophies. Extensive biological validation shows that the method is capable of finding genes with temporal profiles significantly different across the four strains, as well as identifying potential biomarkers for each form of the disease. The added value of the temporal test compared to an identical test which does not make use of temporal ordering is demonstrated via a simulation study, and through confirmation of the expression profiles from selected genes by quantitative PCR experiments. The proposed method maximises the detection of the biologically interesting genes, whilst minimising false detections. Conclusion The temporal Hotelling T2-test is capable of finding relatively small and robust sets of genes that display different temporal profiles between the conditions of interest. The test is simple, it can be used on gene expression data generated from any experimental design and for any number of conditions, and it allows fast interpretation of the temporal behaviour of genes. The R code is available from V.V. The microarray data have been submitted to GEO under series GSE1574 and GSE3523. PMID:16584545
Kumar, G. Naveen Vital; Murthy, K. Raja Venkatesh
2013-01-01
Objective: The objective of this study was to clinically evaluate and compare the efficacy of platelet concentrate graft (PCG) with that of subepithelial connective tissue graft (SCTG) using a coronally advanced flap technique in the treatment of gingival recession. Materials and Methods: Twelve patients with a total of 24 gingival recession defects were selected and randomly assigned either to experimental site-A (SCTG) or experimental site-B (PCG). The clinical parameters were recorded from baseline up to 12 months post-operatively and compared. Results: The mean vertical recession depth (VRD) decreased statistically significantly from 2.50 ± 0.48 mm to 0.54 ± 0.50 mm with PCG and from 2.75 ± 0.58 mm to 0.54 ± 0.45 mm with SCTG at 12 months. No statistically significant differences between the treatments were found for VRD and clinical attachment level (CAL), while the difference in keratinized tissue width (KTW) gain was statistically significant. Conclusion: Both the SCTG and PCG groups resulted in a significant amount of root coverage. The PCG technique was less invasive and required less time and clinical maneuvering. It resulted in a superior aesthetic outcome and lower post-surgical discomfort at the 12-month follow-up. PMID:24554889
2011-01-01
Background Although many biological databases are applying semantic web technologies, meaningful biological hypothesis testing cannot be easily achieved. Database-driven high-throughput genomic hypothesis testing requires both the capability of obtaining semantically relevant experimental data and that of performing relevant statistical testing on the retrieved data. Tissue Microarray (TMA) data are semantically rich and contain many biologically important hypotheses waiting for high-throughput conclusions. Methods An application-specific ontology was developed for managing TMA and DNA microarray databases with semantic web technologies. Data were represented as Resource Description Framework (RDF) according to the framework of the ontology. Applications for hypothesis testing (Xperanto-RDF) for TMA data were designed and implemented by (1) formulating the syntactic and semantic structures of the hypotheses derived from TMA experiments, (2) formulating SPARQL queries to reflect the semantic structures of the hypotheses, and (3) performing statistical tests on the result sets returned by the SPARQL queries. Results When a user designs a hypothesis in Xperanto-RDF and submits it, the hypothesis can be tested against TMA experimental data stored in Xperanto-RDF. When we evaluated four previously validated hypotheses as an illustration, all the hypotheses were supported by Xperanto-RDF. Conclusions We demonstrated the utility of high-throughput biological hypothesis testing. We believe that preliminary investigation of this kind, before performing highly controlled experiments, can be beneficial. PMID:21342584
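The workflow of steps (2) and (3), querying RDF-encoded TMA data and feeding the result set into a statistical test, can be sketched with rdflib and SciPy. The file name, ontology prefix, and predicate names below are hypothetical placeholders, not the actual Xperanto-RDF vocabulary.

    from rdflib import Graph
    from scipy.stats import ttest_ind

    g = Graph().parse("tma_data.rdf")  # hypothetical RDF export of TMA staining scores

    query = """
    PREFIX x: <http://example.org/xperanto#>
    SELECT ?score ?status WHERE {
        ?core x:stainingScore ?score ;
              x:diseaseStatus ?status .
    }
    """
    rows = [(float(r.score), str(r.status)) for r in g.query(query)]
    tumor = [s for s, status in rows if status == "tumor"]
    normal = [s for s, status in rows if status == "normal"]
    print(ttest_ind(tumor, normal, equal_var=False))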
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
The internal resistance of a PEM fuel cell depends on the operating conditions and on the current delivered by the cell. This work's goal is to obtain a semiempirical model able to reproduce the effect of the operating current on the internal resistance of an individual cell of a commercial PEM fuel cell stack, and to perform a statistical analysis of the effect of the operating temperature and the inlet humidities on the parameters of the model. First, the internal resistance of the individual fuel cell operating under different conditions was experimentally measured for different DC currents, using the high-frequency intercept of the impedance spectra. Then, a semiempirical model based on Springer and co-workers' model was proposed. This model successfully reproduces the experimental trends. Subsequently, the curves of resistance versus DC current obtained for different operating conditions were fitted to the semiempirical model, and an analysis of variance (ANOVA) was performed to determine which factors have a statistically significant effect on each model parameter. Finally, a response surface method was applied to obtain a regression model.
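A minimal sketch of the statistical part, an ANOVA on a fitted model parameter across the operating factors followed by a second-order response surface fit, is given below with statsmodels. The parameter values and factor levels are invented for illustration, and the semiempirical resistance model itself is not reproduced.

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # Hypothetical table: one fitted model parameter per operating condition
    df = pd.DataFrame({
        "T":     [50, 50, 60, 60, 70, 70, 50, 60, 70],      # cell temperature, deg C
        "RH":    [40, 80, 40, 80, 40, 80, 60, 60, 60],      # inlet relative humidity, %
        "param": [0.21, 0.17, 0.19, 0.15, 0.18, 0.13, 0.19, 0.17, 0.15],
    })

    # ANOVA: do temperature and humidity significantly affect the fitted parameter?
    print(anova_lm(smf.ols("param ~ C(T) + C(RH)", data=df).fit()))

    # Second-order response surface in the two factors
    rsm = smf.ols("param ~ T + RH + I(T**2) + I(RH**2) + T:RH", data=df).fit()
    print(rsm.params)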