Goedert, Kelly M.; Boston, Raymond C.; Barrett, A. M.
2013-01-01
Valid research on neglect rehabilitation demands a statistical approach commensurate with the characteristics of neglect rehabilitation data: neglect arises from impairment in distinct brain networks, leading to large between-subject variability in baseline symptoms and recovery trajectories. Studies enrolling medically ill, disabled patients may suffer from missing, unbalanced data and small sample sizes. Finally, assessment of rehabilitation requires a description of continuous recovery trajectories. Unfortunately, the statistical method currently employed in most studies of neglect treatment [repeated measures analysis of variance (ANOVA), rANOVA] does not address these issues well. Here we review an alternative, mixed linear modeling (MLM), that is more appropriate for assessing change over time. MLM better accounts for between-subject heterogeneity in baseline neglect severity and in recovery trajectory. MLM does not require complete or balanced data, nor does it make strict assumptions regarding the data structure. Furthermore, because MLM better models between-subject heterogeneity, it often results in increased power to observe treatment effects with smaller samples. After reviewing current practices in the field, and the assumptions of rANOVA, we provide an introduction to MLM. We review its assumptions, uses, advantages, and disadvantages. Using real and simulated data, we illustrate how MLM may improve the ability to detect effects of treatment over rANOVA, particularly with the small samples typical of neglect research. Furthermore, our simulation analyses result in recommendations for the design of future rehabilitation studies. Because between-subject heterogeneity is one important reason why studies of neglect treatments often yield conflicting results, employing statistical procedures that model this heterogeneity more accurately will increase the efficiency of our efforts to find treatments to improve the lives of individuals with neglect.
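The between-subject heterogeneity at the heart of the abstract above can be illustrated with a small stdlib-only Python sketch. This is not a mixed-model fit; the subject count, score scale, and parameter values are all invented. Each simulated patient gets their own baseline severity and recovery rate, and the spread of per-subject least-squares slopes is exactly the variation that MLM captures with random intercepts and slopes while rANOVA ignores:

```python
import random
import statistics

random.seed(1)

def ols_slope(xs, ys):
    # ordinary least-squares slope of one subject's scores over time
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

weeks = [0, 1, 2, 3, 4]
subjects = []
for _ in range(12):
    base = random.gauss(60, 15)   # heterogeneous baseline neglect severity
    rate = random.gauss(-5, 3)    # heterogeneous recovery rate per week
    subjects.append([base + rate * w + random.gauss(0, 2) for w in weeks])

slopes = [ols_slope(weeks, s) for s in subjects]
sd_slope = statistics.stdev(slopes)
print(f"mean recovery slope: {statistics.mean(slopes):.2f}, "
      f"between-subject SD: {sd_slope:.2f}")
```

A large between-subject SD of slopes is the signal that a random-slope model is needed; rANOVA would fold this variation into its error term.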
Catalkaya, Ebru Cokay; Kargi, Fikret
2007-09-01
Advanced oxidation of diuron in aqueous solution by Fenton's reagent, using FeSO4 as the source of Fe(II), was investigated in the absence of light. The effects of the operating parameters, namely the concentrations of pesticide (diuron), H2O2, and Fe(II), on oxidation of diuron were investigated using Box-Behnken statistical experiment design and surface response analysis. Diuron oxidation by the Fenton reagent was evaluated by determining the total organic carbon (TOC), diuron, and adsorbable organic halogen (AOX) removals. Concentration ranges of the reagents resulting in the highest level of diuron oxidation were determined. Diuron removal increased with increasing H2O2 and Fe(II) concentrations up to a certain level. Diuron concentration had a more profound effect than H2O2 and Fe(II) on removal of diuron, TOC, and AOX from the aqueous solution. Nearly complete (98.5%) disappearance of diuron was achieved after a 15 min reaction period. However, only 58% of diuron was mineralized after 240 min under optimal operating conditions, indicating formation of some intermediate products. The optimal H2O2/Fe(II)/diuron ratio resulting in the maximum diuron removal (98.5%) was found to be 302/38/20 (mg/L).
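The Box-Behnken layout used in the study above is easy to generate by hand. The sketch below builds the standard three-factor coded design (levels -1/0/+1); mapping the coded levels to actual H2O2, Fe(II), and diuron concentrations is not specified in the abstract and is left out here:

```python
from itertools import combinations

def box_behnken(k, center_points=3):
    # edge-midpoint runs: +/-1 in each pair of factors, all others at 0,
    # plus replicated center-point runs
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0] * k] * center_points)
    return runs

design = box_behnken(3)
print(len(design))  # 12 edge runs + 3 center points = 15
```

For three factors this gives the classic 15-run design; the response (e.g. diuron removal) measured at each run is then fitted with a quadratic response-surface model.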
Intermediate/Advanced Research Design and Statistics
NASA Technical Reports Server (NTRS)
Ploutz-Snyder, Robert
2009-01-01
The purpose of this module is to provide Institutional Researchers (IRs) with an understanding of the principles of advanced research design and of the intermediate/advanced statistical procedures consistent with such designs.
Introductory Statistics: Questions, Content, and Approach.
ERIC Educational Resources Information Center
Weaver, Frederick Stirton
1989-01-01
An introductory statistics course is a common requirement for undergraduate economics, psychology, and sociology majors. An approach to statistics that seeks to encourage habits of systematic, critical quantitative thinking by focusing on descriptive statistics is discussed. (MLW)
Recent advances in statistical energy analysis
NASA Technical Reports Server (NTRS)
Heron, K. H.
1992-01-01
Statistical Energy Analysis (SEA) has traditionally been developed using a modal summation and averaging approach, which has led to the need for many restrictive SEA assumptions. The assumption of 'weak coupling' is particularly unacceptable when attempts are made to apply SEA to structural coupling. It is now believed that this assumption is more a function of the modal formulation than a necessary feature of SEA. The present analysis ignores this restriction and describes a wave approach to the calculation of plate-plate coupling loss factors. Predictions based on this method are compared with results obtained from experiments using point excitation on one side of an irregular six-sided box structure. Conclusions show that the use and calculation of infinite transmission coefficients is the way forward for the development of a purely predictive SEA code.
Boltzmann's Approach to Statistical Mechanics
NASA Astrophysics Data System (ADS)
Goldstein, Sheldon
In the last quarter of the nineteenth century, Ludwig Boltzmann explained how irreversible macroscopic laws, in particular the second law of thermodynamics, originate in the time-reversible laws of microscopic physics. Boltzmann's analysis, the essence of which I shall review here, is basically correct. The most famous criticisms of Boltzmann's later work on the subject have little merit. Most twentieth century innovations - such as the identification of the state of a physical system with a probability distribution ϱ on its phase space, of its thermodynamic entropy with the Gibbs entropy of ϱ, and the invocation of the notions of ergodicity and mixing for the justification of the foundations of statistical mechanics - are thoroughly misguided.
Teaching Classical Statistical Mechanics: A Simulation Approach.
ERIC Educational Resources Information Center
Sauer, G.
1981-01-01
Describes a one-dimensional model for an ideal gas to study development of disordered motion in Newtonian mechanics. A Monte Carlo procedure for simulation of the statistical ensemble of an ideal gas with fixed total energy is developed. Compares both approaches for a pseudoexperimental foundation of statistical mechanics. (Author/JN)
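The fixed-total-energy ensemble described above can be sketched in a few lines of stdlib Python (particle count and number of Monte Carlo moves are arbitrary choices, not taken from the article): pairs of particles repeatedly re-split their joint energy at random, which conserves the total while driving the per-particle energies toward a Boltzmann-like exponential distribution.

```python
import random

random.seed(42)

N, E_total = 1000, 1000.0
energies = [E_total / N] * N          # start: equal share per particle

for _ in range(200000):
    i, j = random.randrange(N), random.randrange(N)
    if i == j:
        continue
    pool = energies[i] + energies[j]
    energies[i] = random.uniform(0.0, pool)   # re-split the pair's energy
    energies[j] = pool - energies[i]

print(f"total energy: {sum(energies):.1f}")
frac_below_mean = sum(e < E_total / N for e in energies) / N
# for an exponential distribution, P(E < mean) = 1 - 1/e, roughly 0.63
print(f"fraction below mean energy: {frac_below_mean:.2f}")
```

The equal-share starting state is maximally ordered; the drift of `frac_below_mean` toward 1 - 1/e is the "development of disordered motion" made quantitative.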
Using Hypertext To Develop an Algorithmic Approach to Teaching Statistics.
ERIC Educational Resources Information Center
Halavin, James; Sommer, Charles
Hypertext, and its more advanced form hypermedia, represents a powerful authoring tool with great potential for allowing statistics teachers to develop documents to assist students in an algorithmic fashion. An introduction to Hypertext is presented, with an example of its use. Hypertext is an approach to information management in which…
Writing to Learn Statistics in an Advanced Placement Statistics Course
ERIC Educational Resources Information Center
Northrup, Christian Glenn
2012-01-01
This study investigated the use of writing in a statistics classroom to learn if writing provided a rich description of problem-solving processes of students as they solved problems. Through analysis of 329 written samples provided by students, it was determined that writing provided a rich description of problem-solving processes and enabled…
Root approach for estimation of statistical distributions
NASA Astrophysics Data System (ADS)
Bogdanov, Yu. I.; Bogdanova, N. A.
2014-12-01
Application of the root density estimator to problems of statistical data analysis is demonstrated. Four sets of basis functions based on Chebyshev-Hermite, Laguerre, Kravchuk and Charlier polynomials are considered. The sets may be used for numerical analysis in problems of reconstructing statistical distributions from experimental data. Based on the root approach to reconstruction of statistical distributions and quantum states, we study a family of statistical distributions in which the probability density is the product of a Gaussian distribution and an even-degree polynomial. Examples of numerical modeling are given.
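A toy Python illustration of the key structural property behind the root approach (the coefficients below are hypothetical and no fitting to data is performed): writing the density as the square of a Gaussian-times-polynomial "root" guarantees nonnegativity by construction, and normalization reduces to a single numerical integral.

```python
import math

def psi(x, c=(1.0, 0.4)):
    # the "root" of the density: a Gaussian factor times an even polynomial
    return (c[0] + c[1] * x * x) * math.exp(-x * x / 2.0)

# p(x) = psi(x)^2 is nonnegative by construction; normalize it numerically
# on a grid with the trapezoidal rule
xs = [-8.0 + 0.001 * k for k in range(16001)]
raw = [psi(x) ** 2 for x in xs]
Z = sum(0.001 * (a + b) / 2.0 for a, b in zip(raw, raw[1:]))
p = [r / Z for r in raw]

print(f"normalization constant Z = {Z:.4f}")
print(f"density nonnegative everywhere: {all(v >= 0.0 for v in p)}")
```

In the actual method the expansion coefficients of the root are estimated from data (e.g. by maximum likelihood in a polynomial basis); the sketch shows only why estimating the root, rather than the density itself, can never produce a negative density.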
Enhanced bio-manufacturing through advanced multivariate statistical technologies.
Martin, E B; Morris, A J
2002-11-13
The paper describes the interrogation of data, from a reaction vessel producing an active pharmaceutical ingredient (API), using advanced multivariate statistical techniques. Due to the limited number of batches available, data augmentation was used to increase the number of batches, thereby enabling the extraction of more subtle process behaviour from the data. A second methodology investigated was that of multi-group modelling. This allowed between-cluster variability to be removed, thus allowing attention to focus on within-process variability. The paper describes how the different approaches enabled a better understanding of the factors causing the onset of impurity formation, as well as demonstrating the power of multivariate statistical data analysis techniques to provide an enhanced understanding of the process.
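Of the two methodologies named above, data augmentation is the simplest to sketch. The toy Python below (profile values, replicate count, and noise level are invented for illustration, not taken from the paper) enlarges a small set of batch trajectories with noise-perturbed replicates so that a multivariate model has more batches to learn from:

```python
import random

random.seed(0)

def augment(batches, copies=5, noise_sd=0.02):
    # enlarge a small set of batch trajectories with noisy replicates
    out = list(batches)
    for traj in batches:
        for _ in range(copies):
            out.append([v + random.gauss(0.0, noise_sd) for v in traj])
    return out

# hypothetical yield profiles of two real batches at four time points
batches = [[0.10, 0.35, 0.60, 0.80], [0.12, 0.33, 0.58, 0.82]]
augmented = augment(batches)
print(len(augmented))  # 2 originals + 2*5 replicates = 12
```

The noise level is a modeling choice: too small and the replicates add nothing, too large and they wash out the real batch-to-batch differences the model should detect.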
Conceptualizing a Framework for Advanced Placement Statistics Teaching Knowledge
ERIC Educational Resources Information Center
Haines, Brenna
2015-01-01
The purpose of this article is to sketch a conceptualization of a framework for Advanced Placement (AP) Statistics Teaching Knowledge. Recent research continues to problematize the lack of knowledge and preparation among secondary level statistics teachers. The College Board's AP Statistics course continues to grow and gain popularity, but is a…
Advanced Algorithms and Statistics for MOS Surveys
NASA Astrophysics Data System (ADS)
Bolton, A. S.
2016-10-01
This paper presents an individual view on the current state of computational data processing and statistics for inference and discovery in multi-object spectroscopic surveys, supplemented by a historical perspective and a few present-day applications. It is more op-ed than review, and hopefully more readable as a result.
Advance Report of Final Mortality Statistics, 1985.
ERIC Educational Resources Information Center
Monthly Vital Statistics Report, 1987
1987-01-01
This document presents mortality statistics for 1985 for the entire United States. Data analysis and discussion of these factors is included: death and death rates; death rates by age, sex, and race; expectation of life at birth and at specified ages; causes of death; infant mortality; and maternal mortality. Highlights reported include: (1) the…
Statistical approach to nuclear level density
Sen'kov, R. A.; Horoi, M.; Zelevinsky, V. G.
2014-10-15
We discuss the level density in a finite many-body system with strong interaction between the constituents. Our primary object of application is the atomic nucleus, but the same techniques can be applied to other mesoscopic systems. We calculate and compare nuclear level densities for given quantum numbers obtained by different methods: the nuclear shell model (the most successful microscopic approach), our main instrument, the moments method (a statistical approach), and the Fermi-gas model; the calculation with the moments method can use any shell-model Hamiltonian, excluding the spurious states of center-of-mass motion. Our goal is to investigate statistical properties of the nuclear level density, define its phenomenological parameters, and offer an affordable and reliable way of calculation.
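Of the three methods compared, the Fermi-gas model is simple enough to state in a few lines. The sketch below evaluates one common textbook form of the Bethe Fermi-gas formula for the total level density, rho(E) = (sqrt(pi)/12) exp(2 sqrt(aE)) / (a^(1/4) E^(5/4)); the value of the level-density parameter `a` is illustrative, not taken from the paper:

```python
import math

def fermi_gas_density(E, a):
    # one common form of the Bethe Fermi-gas total level density (per MeV)
    return math.sqrt(math.pi) / 12.0 * math.exp(2.0 * math.sqrt(a * E)) \
        / (a ** 0.25 * E ** 1.25)

a = 12.0  # level-density parameter in 1/MeV (illustrative value)
for E in (2.0, 5.0, 10.0):
    print(f"rho({E} MeV) = {fermi_gas_density(E, a):.3e} per MeV")
```

The exponential growth with excitation energy is the characteristic feature that any microscopic or statistical calculation, including the moments method, must reproduce.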
Advanced statistical methods for the definition of new staging models.
Kates, Ronald; Schmitt, Manfred; Harbeck, Nadia
2003-01-01
Adequate staging procedures are the prerequisite for individualized therapy concepts in cancer, particularly in the adjuvant setting. Molecular staging markers tend to characterize specific, fundamental disease processes to a greater extent than conventional staging markers. At the biological level, the course of the disease will almost certainly involve interactions between multiple underlying processes. Since new therapeutic strategies tend to target specific processes as well, their impact will also involve interactions. Hence, assessment of the prognostic impact of new markers and their utilization for prediction of response to therapy will require increasingly sophisticated statistical tools that are capable of detecting and modeling complicated interactions. Because they are designed to model arbitrary interactions, neural networks offer a promising approach to improved staging. However, the typical clinical data environment poses severe challenges to high-performance survival modeling using neural nets, particularly the key problem of maintaining good generalization. Nonetheless, it turns out that by using newly developed methods to minimize unnecessary complexity in the neural network representation of disease course, it is possible to obtain models with high predictive performance. This performance has been validated on both simulated and real patient data sets. There are important applications for design of studies involving targeted therapy concepts and for identification of the improvement in decision support resulting from new staging markers. In this article, advantages of advanced statistical methods such as neural networks for definition of new staging models will be illustrated using breast cancer as an example.
Introducing linear functions: an alternative statistical approach
NASA Astrophysics Data System (ADS)
Nolan, Caroline; Herbert, Sandra
2015-12-01
The introduction of linear functions is the turning point where many students decide if mathematics is useful or not. This means the role of parameters and variables in linear functions could be considered to be 'threshold concepts'. There is recognition that linear functions can be taught in context through the exploration of linear modelling examples, but this has its limitations. Currently, statistical data are easily attainable, and graphics or computer algebra system (CAS) calculators are common in many classrooms. The use of this technology provides ease of access to different representations of linear functions as well as the ability to fit a least-squares line to real-life data. This means these calculators could support a possible alternative approach to the introduction of linear functions. This study compares the results of an end-of-topic test for two classes of Australian middle secondary students at a regional school to determine if such an alternative approach is feasible. In this study, test questions were grouped by concept and subjected to a concept-by-concept analysis of the means of the test results of the two classes. This analysis revealed that the students following the alternative approach demonstrated greater competence with non-standard questions.
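The least-squares line that the CAS calculators fit can itself be written out in a few lines of Python; the "real-life" data below (taxi fare versus distance) are invented for illustration only:

```python
def least_squares_line(xs, ys):
    # slope b and intercept a of the least-squares line y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

# invented real-life data: taxi fare versus trip distance
km = [1, 2, 4, 6, 9]
fare = [4.5, 6.0, 9.2, 12.1, 16.8]
a, b = least_squares_line(km, fare)
print(f"fare = {a:.2f} + {b:.2f} * km")
```

Read off the classroom interpretation directly: the intercept is the flagfall and the slope is the cost per kilometre, which is exactly the parameter/variable distinction the study treats as a threshold concept.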
Robot Trajectories Comparison: A Statistical Approach
Ansuategui, A.; Arruti, A.; Susperregi, L.; Yurramendi, Y.; Jauregi, E.; Lazkano, E.; Sierra, B.
2014-01-01
The task of planning a collision-free trajectory from a start to a goal position is fundamental for an autonomous mobile robot. Although path planning has been extensively investigated since the beginning of robotics, there is no agreement on how to measure the performance of a motion algorithm. This paper presents a new approach to perform robot trajectories comparison that could be applied to any kind of trajectories and in both simulated and real environments. Given an initial set of features, it automatically selects the most significant ones and performs a statistical comparison using them. Additionally, a graphical data visualization named polygraph which helps to better understand the obtained results is provided. The proposed method has been applied, as an example, to compare two different motion planners, FM2 and WaveFront, using different environments, robots, and local planners. PMID:25525618
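The full pipeline above (feature extraction, automatic feature selection, statistical comparison) is not reproduced here, but its core idea can be sketched with one feature and one test in stdlib Python. The two "planners", their trajectories, and the noise model are invented, and a permutation test on a single feature stands in for the multi-feature comparison the authors perform:

```python
import math
import random

random.seed(7)

def path_length(traj):
    # total Euclidean length of a 2-D trajectory given as (x, y) points
    return sum(math.dist(p, q) for p, q in zip(traj, traj[1:]))

def perm_test(a, b, n_perm=2000):
    # two-sample permutation test on the difference of feature means
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_perm

# invented planner outputs: path lengths of repeated runs of two planners
planner_a = [path_length([(0, 0), (1, 1), (2, 1.5)]) + random.gauss(0, 0.05)
             for _ in range(10)]
planner_b = [path_length([(0, 0), (1.5, 0.5), (2, 2)]) + random.gauss(0, 0.05)
             for _ in range(10)]
print(f"p-value: {perm_test(planner_a, planner_b):.3f}")
```

A small p-value says the two planners differ systematically on this feature; the paper's contribution is doing this over an automatically selected set of features with a supporting visualization.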
Multivariate analysis: A statistical approach for computations
NASA Astrophysics Data System (ADS)
Michu, Sachin; Kaushik, Vandana
2014-10-01
Multivariate analysis is a statistical approach commonly used in automotive diagnosis, education, cluster evaluation in finance, etc., and more recently in the health-related professions. The objective of the paper is to provide a detailed exploratory discussion of factor analysis (FA) in image retrieval and of correlation analysis (CA) of network traffic. Image retrieval methods aim to retrieve relevant images from a collected database, based on their content. The problem is made more difficult by the high dimension of the variable space in which the images are represented. Multivariate correlation analysis proposes an anomaly detection and analysis method based on the correlation coefficient matrix. Anomalous behaviors in the network include various attacks on the network, such as DDoS attacks and network scanning.
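The correlation-matrix idea for traffic anomalies can be sketched directly. The feature names and numbers below are invented, and a real detector would compare against a learned baseline with a calibrated threshold; the point is only that an attack which couples normally unrelated features (here, packet rate and distinct destination ports during a scan) shows up as a shift in the pairwise correlation structure:

```python
def pearson(x, y):
    # Pearson correlation coefficient between two traffic features
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def corr_matrix(cols):
    k = len(cols)
    return [[pearson(cols[i], cols[j]) for j in range(k)] for i in range(k)]

def matrix_distance(m1, m2):
    # element-wise L1 distance between two correlation matrices
    return sum(abs(a - b) for r1, r2 in zip(m1, m2) for a, b in zip(r1, r2))

# invented features per time window: packets/s, bytes/s, distinct dest. ports
normal = [[10, 12, 11, 13, 12, 11], [50, 61, 54, 66, 60, 55], [3, 3, 4, 3, 4, 3]]
scan = [[10, 55, 11, 60, 12, 58], [50, 52, 54, 51, 60, 55], [3, 40, 4, 45, 4, 42]]
d = matrix_distance(corr_matrix(normal), corr_matrix(scan))
print(f"correlation-structure shift: {d:.2f}")
```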
Intelligence and embodiment: a statistical mechanics approach.
Chinea, Alejandro; Korutcheva, Elka
2013-04-01
Evolutionary neuroscience has been mainly dominated by the principle of phylogenetic conservation, specifically, by the search for similarities in brain organization. This principle states that closely related species tend to be similar because they have a common ancestor. However, explaining, for instance, behavioral differences between humans and chimpanzees has proved notoriously difficult. In this paper, the hypothesis of a common information-processing principle exploited by brains evolved through natural evolution is explored. A model combining recent advances in cognitive psychology and evolutionary neuroscience is presented. The macroscopic effects associated with the intelligence-like structures postulated by the model are analyzed from a statistical mechanics point of view. As a result of this analysis, some plausible explanations are put forward concerning the disparities and similarities in cognitive capacities which are observed in nature across species. Furthermore, an interpretation of the efficiency of the brain's computations is also provided. These theoretical results, and their implications for modern theories of intelligence, are shown to be consistent with the formulated hypothesis. PMID:23454920
Statistical approach to partial equilibrium analysis
NASA Astrophysics Data System (ADS)
Wang, Yougui; Stanley, H. E.
2009-04-01
A statistical approach to market equilibrium and efficiency analysis is proposed in this paper. One factor that governs the exchange decisions of traders in a market, termed the willingness price, is highlighted and forms the basis of the whole theory. The supply and demand functions are formulated as the distributions of corresponding willing exchange over the willingness price. The laws of supply and demand can be derived directly from these distributions. The characteristics of the excess demand function are analyzed and the necessary conditions for the existence and uniqueness of the equilibrium point of the market are specified. The rationing rates of buyers and sellers are introduced to describe the ratio of realized exchange to willing exchange, and their dependence on the market price is studied in the cases of shortage and surplus. The realized market surplus, which is the criterion of market efficiency, can be written as a function of the distributions of willing exchange and the rationing rates. With this approach we can strictly prove that a market is efficient in the state of equilibrium.
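A discrete toy version of this construction in Python (all willingness prices are invented): supply at a price p counts the sellers whose willingness price is at or below p, demand counts the buyers at or above it, and the equilibrium is where excess demand vanishes.

```python
buyers = [9.5, 8.0, 7.2, 6.0, 5.5, 4.0]   # highest price each buyer will pay
sellers = [3.0, 4.5, 5.0, 6.5, 7.0, 8.5]  # lowest price each seller will accept

def demand(p):
    # buyers whose willingness price is at or above the market price
    return sum(b >= p for b in buyers)

def supply(p):
    # sellers whose willingness price is at or below the market price
    return sum(s <= p for s in sellers)

# scan a price grid for the point where excess demand is closest to zero
prices = [p / 10 for p in range(20, 101)]
eq = min(prices, key=lambda p: abs(demand(p) - supply(p)))
print(f"approximate equilibrium price: {eq}, "
      f"realized trades: {min(demand(eq), supply(eq))}")
```

At any non-equilibrium price, the shorter side of the market limits realized exchange; the ratio of realized to willing exchange on the longer side is the rationing rate the abstract describes.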
A Statistical Approach to Automatic Speech Summarization
NASA Astrophysics Data System (ADS)
Hori, Chiori; Furui, Sadaoki; Malkin, Rob; Yu, Hua; Waibel, Alex
2003-12-01
This paper proposes a statistical approach to automatic speech summarization. In our method, a set of words maximizing a summarization score indicating the appropriateness of summarization is extracted from automatically transcribed speech and then concatenated to create a summary. The extraction process is performed using a dynamic programming (DP) technique based on a target compression ratio. In this paper, we demonstrate how an English news broadcast transcribed by a speech recognizer is automatically summarized. We adapted our method, which was originally proposed for Japanese, to English by modifying the model for estimating word concatenation probabilities based on a dependency structure in the original speech given by a stochastic dependency context free grammar (SDCFG). We also propose a method of summarizing multiple utterances using a two-level DP technique. The automatically summarized sentences are evaluated by summarization accuracy based on a comparison with a manual summary of speech that has been correctly transcribed by human subjects. Our experimental results indicate that the method we propose can effectively extract relatively important information and remove redundant and irrelevant information from English news broadcasts.
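The extraction step can be sketched as a simple dynamic program in Python. The SDCFG-based word-concatenation scoring from the paper is replaced here by invented per-word importance scores, and the compression ratio by a fixed word budget, so this is the generic skeleton rather than the authors' method:

```python
def summarize(words, scores, budget):
    # DP over (position, words kept): best-scoring subset of exactly
    # `budget` words, preserving the original word order
    NEG = float("-inf")
    best = [[NEG] * (budget + 1) for _ in range(len(words) + 1)]
    best[0][0] = 0.0
    choice = {}
    for i, s in enumerate(scores):
        for k in range(budget + 1):
            if best[i][k] == NEG:
                continue
            if best[i][k] > best[i + 1][k]:          # skip word i
                best[i + 1][k] = best[i][k]
                choice[(i + 1, k)] = (i, k, False)
            if k < budget and best[i][k] + s > best[i + 1][k + 1]:  # keep it
                best[i + 1][k + 1] = best[i][k] + s
                choice[(i + 1, k + 1)] = (i, k, True)
    picked, state = [], (len(words), budget)          # backtrack
    while state in choice:
        i, k, kept = choice[state]
        if kept:
            picked.append(words[i])
        state = (i, k)
    return list(reversed(picked))

words = "the president announced a new climate policy on tuesday".split()
scores = [0.1, 0.9, 0.7, 0.1, 0.4, 0.9, 0.8, 0.1, 0.5]
print(" ".join(summarize(words, scores, 4)))
```

In the paper the score of a candidate additionally rewards word pairs that the dependency grammar judges likely to concatenate well, which is what keeps the extracted summary grammatical.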
Uncertainty quantification approaches for advanced reactor analyses.
Briggs, L. L.; Nuclear Engineering Division
2009-03-24
The original approach to nuclear reactor design or safety analyses was to make very conservative modeling assumptions so as to ensure meeting the required safety margins. Traditional regulation, as established by the U.S. Nuclear Regulatory Commission, required conservatisms which have subsequently been shown to be excessive. The commission has therefore moved away from excessively conservative evaluations and has determined best-estimate calculations to be an acceptable alternative to conservative models, provided the best-estimate results are accompanied by an uncertainty evaluation which can demonstrate that, when a set of analysis cases which statistically account for uncertainties of all types are generated, there is a 95% probability that at least 95% of the cases meet the safety margins. To date, nearly all published work addressing uncertainty evaluations of nuclear power plant calculations has focused on light water reactors and on large-break loss-of-coolant accident (LBLOCA) analyses. However, there is nothing in the uncertainty evaluation methodologies that is limited to a specific type of reactor or to specific types of plant scenarios. These same methodologies can be equally well applied to analyses for high-temperature gas-cooled reactors and to liquid metal reactors, and they can be applied to steady-state calculations, operational transients, or severe accident scenarios. This report reviews and compares both statistical and deterministic uncertainty evaluation approaches. Recommendations are given for selection of an uncertainty methodology and for considerations to be factored into the process of evaluating uncertainties for advanced reactor best-estimate analyses.
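The 95%/95% criterion quoted above has a well-known sample-size consequence under the first-order, one-sided Wilks formula (assuming this nonparametric tolerance-limit approach, which is the one commonly used in best-estimate-plus-uncertainty analyses): the smallest number of code runs n satisfying 1 - 0.95^n >= 0.95 is 59.

```python
def wilks_sample_size(coverage=0.95, confidence=0.95):
    # smallest n with 1 - coverage**n >= confidence
    # (first-order, one-sided Wilks tolerance limit)
    n = 1
    while 1.0 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_sample_size())  # classic result: 59 runs
```

With 59 runs, the largest observed value of a safety parameter is a 95/95 upper tolerance bound regardless of the parameter's distribution, which is why the number 59 recurs throughout this literature.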
Tools for the advancement of undergraduate statistics education
NASA Astrophysics Data System (ADS)
Schaffner, Andrew Alan
To keep pace with advances in applied statistics and to maintain literate consumers of quantitative analyses, statistics educators stress the need for change in the classroom (Cobb, 1992; Garfield, 1993, 1995; Moore, 1991a; Snee, 1993; Steinhorst and Keeler, 1995). These authors stress a more concept oriented undergraduate introductory statistics course which emphasizes true understanding over mechanical skills. Drawing on recent educational research, this dissertation attempts to realize this vision by developing tools and pedagogy to assist statistics instructors. This dissertation describes statistical facets, pieces of statistical understanding that are building blocks of knowledge, and discusses DIANA, a World-Wide Web tool for diagnosing facets. Further, I show how facets may be incorporated into course design through the development of benchmark lessons based on the principles of collaborative learning (diSessa and Minstrell, 1995; Cohen, 1994; Reynolds et al., 1995; Bruer, 1993; von Glasersfeld, 1991) and activity based courses (Jones, 1991; Yackel, Cobb and Wood, 1991). To support benchmark lessons and collaborative learning in large classes I describe Virtual Benchmark Instruction, benchmark lessons which take place on a structured hypertext bulletin board using the technology of the World-Wide Web. Finally, I present randomized experiments which suggest that these educational developments are effective in a university introductory statistics course.
ERIC Educational Resources Information Center
McGrath, April L.; Ferns, Alyssa; Greiner, Leigh; Wanamaker, Kayla; Brown, Shelley
2015-01-01
In this study we assessed the usefulness of a multifaceted teaching framework in an advanced statistics course. We sought to expand on past findings by using this framework to assess changes in anxiety and self-efficacy, and we collected focus group data to ascertain whether students attribute such changes to a multifaceted teaching approach.…
Project T.E.A.M. (Technical Education Advancement Modules). Advanced Statistical Process Control.
ERIC Educational Resources Information Center
Dunlap, Dale
This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 20-hour advanced statistical process control (SPC) and quality improvement course designed to develop the following competencies: (1) understanding quality systems; (2) knowing the process; (3) solving quality problems; and (4)…
Statistical physics approaches to Alzheimer's disease
NASA Astrophysics Data System (ADS)
Peng, Shouyong
Alzheimer's disease (AD) is the most common cause of late life dementia. In the brain of an AD patient, neurons are lost and spatial neuronal organizations (microcolumns) are disrupted. An adequate quantitative analysis of microcolumns requires that we automate the neuron recognition stage in the analysis of microscopic images of human brain tissue. We propose a recognition method based on statistical physics. Specifically, Monte Carlo simulations of an inhomogeneous Potts model are applied for image segmentation. Unlike most traditional methods, this method improves the recognition of overlapped neurons, and thus improves the overall recognition percentage. Although the exact causes of AD are unknown, as experimental advances have revealed the molecular origin of AD, they have continued to support the amyloid cascade hypothesis, which states that early stages of aggregation of amyloid beta (Abeta) peptides lead to neurodegeneration and death. X-ray diffraction studies reveal the common cross-beta structural features of the final stable aggregates-amyloid fibrils. Solid-state NMR studies also reveal structural features for some well-ordered fibrils. But currently there is no feasible experimental technique that can reveal the exact structure or the precise dynamics of assembly and thus help us understand the aggregation mechanism. Computer simulation offers a way to understand the aggregation mechanism on the molecular level. Because traditional all-atom continuous molecular dynamics simulations are not fast enough to investigate the whole aggregation process, we apply coarse-grained models and discrete molecular dynamics methods to increase the simulation speed. First we use a coarse-grained two-bead (two beads per amino acid) model. Simulations show that peptides can aggregate into multilayer beta-sheet structures, which agree with X-ray diffraction experiments. To better represent the secondary structure transition happening during aggregation, we refine the
Hidden Statistics Approach to Quantum Simulations
NASA Technical Reports Server (NTRS)
Zak, Michail
2010-01-01
Recent advances in quantum information theory have inspired an explosion of interest in new quantum algorithms for solving hard computational (quantum and non-quantum) problems. The basic principle of quantum computation is that quantum properties can be used to represent structured data, and that quantum mechanisms can be devised and built to perform operations with this data. Three basic non-classical properties of quantum mechanics - superposition, entanglement, and direct-product decomposability - were the main reasons for optimism about the capabilities of quantum computers, which promised simultaneous processing of large massifs of highly correlated data. Unfortunately, these advantages of quantum mechanics came with a high price. One major problem is keeping the components of the computer in a coherent state, as the slightest interaction with the external world would cause the system to decohere. That is why the hardware implementation of a quantum computer is still unsolved. The basic idea of this work is to create a new kind of dynamical system that would preserve the three main properties of quantum physics - superposition, entanglement, and direct-product decomposability - while allowing one to measure its state variables using classical methods. In other words, such a system would reinforce the advantages and minimize the limitations of both quantum and classical aspects. Based upon a concept of hidden statistics, a new kind of dynamical system for simulation of the Schroedinger equation is proposed. The system represents a modified Madelung version of the Schroedinger equation. It preserves superposition, entanglement, and direct-product decomposability while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for simulating quantum systems. The model includes a transitional component of quantum potential (that has been overlooked in previous treatment of the Madelung equation). The role of the
A statistical approach to root system classification.
Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter
2013-01-01
Plant root systems have a key role in ecology and agronomy. In spite of a fast increase in root studies, there is still no classification that allows distinguishing among distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for "plant functional type" identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decision on the classifiers. The study demonstrates that principal component based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture the most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong, but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. Adequacy of commonly available morphologic traits for classification is supported by field data. Rooting types emerging from the measured data mainly distinguished between diameter/weight-dominated and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We concluded that the data-defined classification is appropriate for integration of knowledge obtained with different root measurement methods and at various scales. Currently root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture details of root diversity, efforts in architectural measurement techniques are essential. PMID:23914200
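A stdlib-only sketch of the principal-component classification idea (trait values below are invented, and a real analysis would standardize variables and use many more traits and plants): the leading principal axis of the trait matrix is found by power iteration, and plants are grouped by the sign of their score on it.

```python
def mean_center(data):
    k = len(data[0])
    means = [sum(row[i] for row in data) / len(data) for i in range(k)]
    return [[row[i] - means[i] for i in range(k)] for row in data]

def first_pc(rows, iters=200):
    # power iteration v <- X^T X v (normalized) converges to the leading
    # principal axis of the mean-centered trait matrix X
    X = mean_center(rows)
    k = len(X[0])
    v = [1.0] * k
    for _ in range(iters):
        Xv = [sum(r[i] * v[i] for i in range(k)) for r in X]
        w = [sum(X[j][i] * Xv[j] for j in range(len(X))) for i in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# invented traits per plant: [mean diameter (mm), total length (m), tips per cm]
traits = [[0.90, 12.0, 3.1], [1.10, 14.0, 3.3], [0.30, 45.0, 9.0],
          [0.40, 48.0, 8.5], [1.00, 13.0, 3.0], [0.35, 46.0, 8.8]]
pc1 = first_pc(traits)
scores = [sum(r[i] * pc1[i] for i in range(3)) for r in mean_center(traits)]
types = ["thick/short" if s < 0 else "fine/long" for s in scores]
print(types)
```

The sign split on the first component is the crudest possible version of the paper's "rooting types"; the study uses several components and all measured morphological traits.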
A statistical mechanics approach to Granovetter theory
NASA Astrophysics Data System (ADS)
Barra, Adriano; Agliari, Elena
2012-05-01
In this paper we try to bridge breakthroughs in quantitative sociology/econometrics, pioneered during the last decades by McFadden, Brock-Durlauf, Granovetter and Watts-Strogatz, by introducing a minimal model able to reproduce essentially all the features of social behavior highlighted by these authors. Our model relies on a pairwise Hamiltonian for decision-maker interactions which naturally extends the multi-population approaches by shifting and biasing the pattern definitions of a Hopfield model of neural networks. Once introduced, the model is investigated through graph theory (to recover the Granovetter and Watts-Strogatz results) and statistical mechanics (to recover the McFadden and Brock-Durlauf results). Due to the internal symmetries of our model, the latter is obtained as the relaxation of a proper Markov process, allowing us to study even its out-of-equilibrium properties. The method used to solve its equilibrium is an adaptation of the Hamilton-Jacobi technique recently introduced by Guerra in the spin-glass scenario, and the picture obtained is the following: shifting the patterns from [-1,+1] → [0,+1] implies that the larger the amount of similarities among decision makers, the stronger their relative influence, and this is enough to explain both the different roles of strong and weak ties in the social network as well as its small-world properties. As a result, imitative interaction strengths seem essentially a robust requirement (enough to break the gauge symmetry in the couplings); furthermore, this naturally leads to a discrete-choice modeling when dealing with external influences and to imitative behavior à la Curie-Weiss as the one introduced by Brock and Durlauf.
Advances in Statistical Methods for Substance Abuse Prevention Research
MacKinnon, David P.; Lockwood, Chondra M.
2010-01-01
The paper describes advances in statistical methods for prevention research with a particular focus on substance abuse prevention. Standard analysis methods are extended to the typical research designs and characteristics of the data collected in prevention research. Prevention research often includes longitudinal measurement, clustering of data in units such as schools or clinics, missing data, and categorical as well as continuous outcome variables. Statistical methods to handle these features of prevention data are outlined. Developments in mediation, moderation, and implementation analysis allow for the extraction of more detailed information from a prevention study. Advancements in the interpretation of prevention research results include more widespread calculation of effect size and statistical power, the use of confidence intervals as well as hypothesis testing, detailed causal analysis of research findings, and meta-analysis. The increased availability of statistical software has contributed greatly to the use of new methods in prevention research. It is likely that the Internet will continue to stimulate the development and application of new methods. PMID:12940467
Statistical Approaches to Functional Neuroimaging Data
DuBois Bowman, F; Guo, Ying; Derado, Gordana
2007-01-01
Synopsis: The field of statistics makes valuable contributions to functional neuroimaging research by establishing procedures for the design and conduct of neuroimaging experiments and by providing tools for objectively quantifying and measuring the strength of scientific evidence provided by the data. Two common functional neuroimaging research objectives include detecting brain regions that reveal task-related alterations in measured brain activity (activations) and identifying highly correlated brain regions that exhibit similar patterns of activity over time (functional connectivity). In this article, we highlight various statistical procedures for analyzing data from activation studies and from functional connectivity studies, focusing on functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) data. We also discuss emerging statistical methods for prediction using fMRI and PET data, which stand to increase the translational significance of functional neuroimaging data to clinical practice. PMID:17983962
Supersymmetric Liouville theory: A statistical mechanical approach
Barrozo, M.C.; Belvedere, L.V.
1996-02-01
The statistical mechanical system associated with the two-dimensional supersymmetric Liouville theory is obtained through an infrared-finite perturbation expansion. Considering the system confined in a finite volume and in the presence of a uniform neutralizing background, we show that the grand-partition function of this system describes a one-component gas, in which the Boltzmann factor is weighted by an integration over the Grassmann variables. This weight function introduces the dimensional reduction phenomenon. After performing the thermodynamic limit, the resulting supersymmetric quantum theory is translationally invariant. © 1996 The American Physical Society.
Statistical mechanical approach to human language
NASA Astrophysics Data System (ADS)
Kosmidis, Kosmas; Kalampokis, Alkiviadis; Argyrakis, Panos
2006-07-01
We use the formulation of equilibrium statistical mechanics in order to study some important characteristics of language. Using a simple expression for the Hamiltonian of a language system, which is directly implied by the Zipf law, we are able to explain several characteristic features of human language that seem completely unrelated, such as the universality of the Zipf exponent, the vocabulary size of children, the reduced communication abilities of people suffering from schizophrenia, etc. While several explanations are necessarily only qualitative at this stage, we have, nevertheless, been able to derive a formula for the vocabulary size of children as a function of age, which agrees rather well with experimental data.
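The Zipf law invoked above says word frequency falls off as an inverse power of rank, f(r) ~ r^(-s) with s ≈ 1. A minimal sketch of estimating the exponent by log-log least squares, on a synthetic corpus built to follow f(r) = 100/r (the corpus and function names are illustrative, not from the paper):

```python
import math
from collections import Counter

def zipf_exponent(text):
    """Estimate the Zipf exponent s in f(r) ~ r**(-s) by least-squares
    regression of log-frequency on log-rank."""
    freqs = sorted(Counter(text.lower().split()).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope  # Zipf exponent is the negative of the log-log slope

# Synthetic corpus whose rank-frequency profile follows f(r) = 100/r
words = []
for rank in range(1, 21):
    words += [f"w{rank}"] * round(100 / rank)
s = zipf_exponent(" ".join(words))
```

On a real corpus the estimate is sensitive to the rank range used; the universality claim in the abstract concerns the value of s across languages.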
A Hierarchical Statistic Methodology for Advanced Memory System Evaluation
Sun, X.-J.; He, D.; Cameron, K.W.; Luo, Y.
1999-04-12
Advances in technology have resulted in a widening of the gap between computing speed and memory access time. Data access time has become increasingly important for computer system design. Various hierarchical memory architectures have been developed. The performance of these advanced memory systems, however, varies with applications and problem sizes. How to reach an optimal cost/performance design still eludes researchers. In this study, the authors introduce an evaluation methodology for advanced memory systems. This methodology is based on statistical factorial analysis and performance scalability analysis. It is twofold: it first determines the impact of memory systems and application programs on overall performance; it also identifies the bottleneck in a memory hierarchy and provides cost/performance comparisons via scalability analysis. Different memory systems can be compared in terms of mean performance or scalability over a range of codes and problem sizes. Experimental testing has been performed extensively on the Department of Energy's Accelerated Strategic Computing Initiative (ASCI) machines and benchmarks available at Los Alamos National Laboratory to validate this newly proposed methodology. Experimental and analytical results show this methodology is simple and effective. It is a practical tool for memory system evaluation and design. Its extension to general architectural evaluation and parallel computer systems is possible and should be further explored.
Floods: some probabilistic and statistical approaches.
Cox, D R; Isham, V S; Northrop, P J
2002-07-15
Some of the many statistical aspects of floods are discussed and some new results are given on a number of features. The probability theory of extreme values is briefly reviewed and its potential for applications outlined. The peaks-over-threshold method of estimation is compared with the direct use of annual maxima and some theoretical comparisons of efficiency are given. The effect of trend on the distribution of maxima is analysed in terms of a simple theoretical model distinguishing the effects of trend in mean level and of trend in dispersion. An empirical Bayes method for pooling information from a number of sources is described and illustrated, and related to the procedures recommended in the Flood estimation handbook. In the final section, a range of further issues is outlined.
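The contrast drawn above between annual maxima and the peaks-over-threshold method can be illustrated with a toy sketch on synthetic daily flows. The threshold and series below are made up, and a real POT analysis would also decluster exceedances to ensure independence:

```python
import random

def annual_maxima(series, days_per_year=365):
    """Block maxima: one value per year."""
    return [max(series[i:i + days_per_year])
            for i in range(0, len(series), days_per_year)]

def peaks_over_threshold(series, threshold):
    """All exceedances of the threshold (declustering omitted in this sketch)."""
    return [x for x in series if x > threshold]

random.seed(0)
# Three synthetic "years" of daily flows with exponential tails
flows = [random.expovariate(1.0) for _ in range(3 * 365)]
am = annual_maxima(flows)                     # 3 values, one per year
pot = peaks_over_threshold(flows, threshold=4.0)
```

POT typically retains many more extreme observations than block maxima from the same record, which is the source of the efficiency gains the paper quantifies.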
Statistical Physics Approaches to RNA Editing
NASA Astrophysics Data System (ADS)
Bundschuh, Ralf
2012-02-01
The central dogma of molecular biology states that DNA is transcribed base by base into RNA, which is in turn translated into proteins. However, some organisms edit their RNA before translation by inserting, deleting, or substituting individual bases or short stretches of bases. In many instances the mechanisms by which an organism recognizes the positions at which to edit, or by which it performs the actual editing, are unknown. One model system that stands out for its very high editing rate, with on average one out of 25 bases edited, is the Myxomycetes, a class of slime molds. In this talk we will show how computational methods and concepts from statistical physics can be used to analyze DNA and protein sequence data to predict editing sites in these slime molds and to guide experiments that identified previously unknown types of editing as well as the complete set of editing events in the slime mold Physarum polycephalum.
Statistical modeling approach for detecting generalized synchronization
NASA Astrophysics Data System (ADS)
Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon
2012-05-01
Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex.
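The authors model Volterra kernels with basis splines and l1/l2-regularized maximum likelihood; as a much-simplified stand-in, the sketch below detects a nonlinear (quadratic) dependence by fitting polynomial features, a low-order truncation of a Volterra expansion, with a small l2 penalty via the normal equations. The data and penalty are synthetic and this is not the authors' spline-based estimator:

```python
import random

def polynomial_features(x, degree):
    """Columns [1, x, x**2, ...]: a scalar truncated Volterra expansion."""
    return [[xi ** d for d in range(degree + 1)] for xi in x]

def ridge_fit(X, y, lam=1e-6):
    """Least squares with an l2 penalty: solve (X'X + lam*I) w = X'y
    by Gaussian elimination with partial pivoting."""
    p = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) + (lam if i == j else 0.0)
          for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(p)]
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    w = [0.0] * p
    for r in range(p - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, p))) / A[r][r]
    return w

random.seed(1)
x = [random.uniform(-1, 1) for _ in range(200)]
y = [xi ** 2 + random.gauss(0, 0.01) for xi in x]   # purely nonlinear coupling
w = ridge_fit(polynomial_features(x, 2), y)          # w ≈ [0, 0, 1]
```

A linear correlation test would miss this coupling (x and x² are uncorrelated on [-1, 1]), while the quadratic kernel coefficient recovers it, which is the point of fitting nonlinear interaction terms.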
Aftershock Energy Distribution by Statistical Mechanics Approach
NASA Astrophysics Data System (ADS)
Daminelli, R.; Marcellini, A.
2015-12-01
The aim of our work is to find the most probable distribution of the energy of aftershocks. We start from one of the fundamental principles of statistical mechanics, which, in the case of aftershock sequences, can be expressed as: the greater the number of different ways in which the energy of aftershocks can be arranged among the energy cells in phase space, the more probable the distribution. We assume that each cell in phase space has the same possibility of being occupied, and that more than one cell in the phase space can have the same energy. Since seismic energy is proportional to products of different parameters, a number of different combinations of parameters can produce the same energy (e.g., different combinations of stress drop and fault area can release the same seismic energy). Let us assume that there are gi cells in the aftershock phase space characterised by the same released energy ɛi. Therefore we can assume that Maxwell-Boltzmann statistics can be applied to aftershock sequences, with the proviso that the judgment on the validity of this hypothesis is the agreement with the data. The aftershock energy distribution can therefore be written as follows: n(ɛ) = A g(ɛ) exp(−βɛ), where n(ɛ) is the number of aftershocks with energy ɛ, and A and β are constants. Considering the above hypothesis, we can assume g(ɛ) is proportional to ɛ. We selected and analysed different aftershock sequences (data extracted from the earthquake catalogs of SCEC, of INGV-CNT and other institutions) with a minimum retained magnitude ML = 2 (in some cases ML = 2.6) and a time window of 35 days. The results of our model are in agreement with the data, except in the very low energy band, where our model resulted in a moderate overestimation.
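With g(ɛ) ∝ ɛ, the stated form n(ɛ) = A g(ɛ) exp(−βɛ) is a Gamma-type density with shape 2, so β can be recovered from data by the method of moments (the mean is 2/β). A sketch on synthetic energies; the β value and sample size are illustrative, not taken from the catalogs the authors analysed:

```python
import random

def sample_energies(beta, n, seed=0):
    """Draw energies from n(e) ∝ e * exp(-beta * e), i.e. a Gamma
    distribution with shape 2 and scale 1/beta (g(e) ∝ e case)."""
    rng = random.Random(seed)
    return [rng.gammavariate(2.0, 1.0 / beta) for _ in range(n)]

def estimate_beta(energies):
    """Method of moments: for Gamma(shape=2, scale=1/beta), mean = 2/beta."""
    return 2.0 / (sum(energies) / len(energies))

e = sample_energies(beta=0.5, n=20000)
beta_hat = estimate_beta(e)   # should recover ~0.5
```

Comparing a histogram of observed energies against A ɛ exp(−β̂ɛ) would reproduce the kind of model-versus-data check described in the abstract, including sensitivity in the low-energy band where catalogs are incomplete.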
Assay optimization: a statistical design of experiments approach.
Altekar, Maneesha; Homon, Carol A; Kashem, Mohammed A; Mason, Steven W; Nelson, Richard M; Patnaude, Lori A; Yingling, Jeffrey; Taylor, Paul B
2007-03-01
With the transition from manual to robotic HTS in the last several years, assay optimization has become a significant bottleneck. Recent advances in robotic liquid handling have made it feasible to reduce assay optimization timelines with the application of statistically designed experiments. When implemented, they can efficiently optimize assays by rapidly identifying significant factors, complex interactions, and nonlinear responses. This article focuses on the use of statistically designed experiments in assay optimization.
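A statistically designed experiment of the kind described often starts from a two-level full factorial design. A minimal sketch generating the design and estimating main effects for a hypothetical assay response; the factor names and the response model are invented for illustration, not drawn from the article:

```python
from itertools import product

def full_factorial(factors):
    """All combinations of two-level factors, coded -1/+1."""
    names = list(factors)
    return [dict(zip(names, levels))
            for levels in product([-1, +1], repeat=len(names))]

def main_effect(runs, responses, factor):
    """Average response at the high level minus at the low level."""
    hi = [y for r, y in zip(runs, responses) if r[factor] == +1]
    lo = [y for r, y in zip(runs, responses) if r[factor] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

runs = full_factorial(["enzyme", "substrate", "ph"])   # 2**3 = 8 runs
# Hypothetical assay signal: enzyme matters most, ph not at all
responses = [10 + 4 * r["enzyme"] + 1 * r["substrate"] for r in runs]
effect_enzyme = main_effect(runs, responses, "enzyme")  # = 8
effect_ph = main_effect(runs, responses, "ph")          # = 0
```

Ranking factors by main-effect size is how such designs rapidly identify the significant factors mentioned in the abstract; fractional factorials reduce the run count further when robotic liquid-handling capacity is the bottleneck.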
An Integrated, Statistical Molecular Approach to the Physical Chemistry Curriculum
ERIC Educational Resources Information Center
Cartier, Stephen F.
2009-01-01
As an alternative to the "thermodynamics first" or "quantum first" approaches to the physical chemistry curriculum, the statistical definition of entropy and the Boltzmann distribution are introduced in the first days of the course and the entire two-semester curriculum is then developed from these concepts. Once the tools of statistical mechanics…
Statistical Approach To Determination Of Texture In SAR
NASA Technical Reports Server (NTRS)
Rignot, Eric J.; Kwok, Ronald
1993-01-01
Paper presents statistical approach to analysis of texture in synthetic-aperture-radar (SAR) images. Objective: to extract intrinsic spatial variability of distributed target from overall spatial variability of SAR image.
Advanced Safeguards Approaches for New Reprocessing Facilities
Durst, Philip C.; Therios, Ike; Bean, Robert; Dougan, A.; Boyer, Brian; Wallace, Richard; Ehinger, Michael H.; Kovacic, Don N.; Tolk, K.
2007-06-24
U.S. efforts to promote the international expansion of nuclear energy through the Global Nuclear Energy Partnership (GNEP) will result in a dramatic expansion of nuclear fuel cycle facilities in the United States. New demonstration facilities, such as the Advanced Fuel Cycle Facility (AFCF), the Advanced Burner Reactor (ABR), and the Consolidated Fuel Treatment Center (CFTC) will use advanced nuclear and chemical process technologies that must incorporate increased proliferation resistance to enhance nuclear safeguards. The ASA-100 Project, “Advanced Safeguards Approaches for New Nuclear Fuel Cycle Facilities,” commissioned by the NA-243 Office of NNSA, has been tasked with reviewing and developing advanced safeguards approaches for these demonstration facilities. Because one goal of GNEP is developing and sharing proliferation-resistant nuclear technology and services with partner nations, the safeguards approaches considered are consistent with international safeguards as currently implemented by the International Atomic Energy Agency (IAEA). This first report reviews possible safeguards approaches for the new fuel reprocessing processes to be deployed at the AFCF and CFTC facilities. Similar analyses addressing the ABR and transuranic (TRU) fuel fabrication lines at AFCF and CFTC will be presented in subsequent reports.
Advanced approaches to focal plane integration
NASA Astrophysics Data System (ADS)
Nelson, R. D.; Smith, E. C., Jr.
1980-01-01
Both visible and infrared focal plane assemblies have common architectural driving parameters which guide their design approaches. The key drivers for advanced focal plane assemblies (FPA) are: the detector type and performance required; the number of detector chips; the packaging density; and the geometry. The impact of these drivers is seen to determine the engineering compromises necessary to establish FPA design approach. Several new designs are discussed which show a range of applications from single detector assemblies to monolithic detector chips with on-chip signal processing. The main objective of many advanced designs is to integrate the focal plane components in order to reduce power and reduce the number of interconnections.
Advanced Placement European History: A New Approach.
ERIC Educational Resources Information Center
Beaber, Lawrence
A new approach to the teaching of European history is being implemented in Advanced Placement secondary classes. In the late 1950s a Committee of Examiners composed of European history professors and secondary teachers formulated a course description comprising a brief outline of an introductory survey in European history. It was organized…
Chemical Approaches for Advanced Optical Imaging
NASA Astrophysics Data System (ADS)
Chen, Zhixing
Advances in optical microscopy have been constantly expanding our knowledge of biological systems. The achievements therein are a result of close collaborations between physicists/engineers who build the imaging instruments and chemists/biochemists who design the corresponding probe molecules. In this work I present a number of chemical approaches for the development of advanced optical imaging methods. Chapter 1 provides an overview of the recent advances of novel imaging approaches taking advantage of chemical tag technologies. Chapter 2 describes the second-generation covalent trimethoprim-tag as a viable tool for live cell protein-specific labeling and imaging. In Chapter 3 we present a fluorescence lifetime imaging approach to map protein-specific micro-environment in live cells using TMP-Cy3 as a chemical probe. In Chapter 4, we present a method harnessing photo-activatable fluorophores to extend the fundamental depth limit in multi-photon microscopy. Chapter 5 describes the development of isotopically edited alkyne palette for multi-color live cell vibrational imaging of cellular small molecules. These studies exemplify the impact of modern chemical approaches in the development of advanced optical microscopies.
Propensity Score Analysis: An Alternative Statistical Approach for HRD Researchers
ERIC Educational Resources Information Center
Keiffer, Greggory L.; Lane, Forrest C.
2016-01-01
Purpose: This paper aims to introduce matching in propensity score analysis (PSA) as an alternative statistical approach for researchers looking to make causal inferences using intact groups. Design/methodology/approach: An illustrative example demonstrated the varying results of analysis of variance, analysis of covariance and PSA on a heuristic…
Measuring University Students' Approaches to Learning Statistics: An Invariance Study
ERIC Educational Resources Information Center
Chiesi, Francesca; Primi, Caterina; Bilgin, Ayse Aysin; Lopez, Maria Virginia; del Carmen Fabrizio, Maria; Gozlu, Sitki; Tuan, Nguyen Minh
2016-01-01
The aim of the current study was to provide evidence that an abbreviated version of the Approaches and Study Skills Inventory for Students (ASSIST) was invariant across different languages and educational contexts in measuring university students' learning approaches to statistics. Data were collected on samples of university students attending…
NASA Astrophysics Data System (ADS)
Mountcastle, Donald B.; Bucy, Brandon R.; Thompson, John R.
2007-11-01
Equilibrium properties of macroscopic systems are highly predictable as n, the number of particles, approaches and exceeds Avogadro's number; theories of statistical physics depend on these results. Typical pedagogical devices used in statistical physics textbooks to introduce entropy (S) and multiplicity (ω) (where S = k ln(ω)) include flipping coins and/or other equivalent binary events, repeated n times. Prior to instruction, our statistical mechanics students usually gave reasonable answers about the probabilities, but not the relative uncertainties, of the predicted outcomes of such events. However, they reliably predicted that the uncertainty in a measured continuous quantity (e.g., the amount of rainfall) does decrease as the number of measurements increases. Typical textbook presentations assume that students understand that the relative uncertainty of binary outcomes will similarly decrease as the number of events increases. This is at odds with our findings, even though most of our students had previously completed mathematics courses in statistics, as well as an advanced electronics laboratory course that included statistical analysis of distributions of dart scores as n increased.
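The statistical fact behind the misconception is simple: for n flips of a coin with heads probability p, the heads count has mean np and standard deviation sqrt(np(1−p)), so the relative uncertainty std/mean = sqrt((1−p)/(pn)) shrinks like 1/sqrt(n). A small sketch making this explicit for a fair coin:

```python
import math

def relative_uncertainty(n, p=0.5):
    """Relative spread of the heads count in n coin flips:
    std/mean = sqrt(n*p*(1-p)) / (n*p) = sqrt((1-p)/(p*n))."""
    return math.sqrt(n * p * (1 - p)) / (n * p)

# 100 flips -> 10%, 10,000 flips -> 1%, 1,000,000 flips -> 0.1%
pairs = [(n, relative_uncertainty(n)) for n in (100, 10_000, 1_000_000)]
```

This is the same 1/sqrt(n) shrinkage the students accepted for repeated continuous measurements; the binary case differs only in notation, which is the gap the paper documents.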
A statistical approach to electromigration design for high performance VLSI
NASA Astrophysics Data System (ADS)
Kitchin, John; Sriram, T. S.
1998-01-01
Statistical Electromigration Budgeting (J. Kitchin, 1995 Symposium on VLSI Circuits) or SEB is based on the concepts: (a) reliable design in VLSI means achieving a chip-level reliability goal and (b) electromigration degradation is inherently statistical in nature. The SEB methodology is reviewed along with results from recent high performance VLSI designs. Two SEB-based approaches for efficiently coupling metallization reliability statistics to design options are developed. Allowable-length-at-stress design rules communicate electromigration risk budget constraints to designers without the need for sophisticated CAD tools for chip-level interconnect analysis. Electromigration risk contours allow comparison of evolving metallization reliability statistics with design requirements having multiple frequency, temperature, and voltage options, a common need in high performance VLSI product development.
A Standardization Approach to Adjusting Pretest Item Statistics.
ERIC Educational Resources Information Center
Chang, Shun-Wen; Hanson, Bradley A.; Harris, Deborah J.
This study presents and evaluates a method of standardization that may be used by test practitioners to standardize classical item statistics when sample sizes are small. The effectiveness of this standardization approach was compared through simulation with the one-parameter logistic (1PL) and three parameter logistic (3PL) models based on the…
The Poisson Distribution: An Experimental Approach to Teaching Statistics
ERIC Educational Resources Information Center
Lafleur, Mimi S.; And Others
1972-01-01
Explains an experimental approach to teaching statistics to students who are essentially either non-science and non-mathematics majors or just beginning the study of science. With everyday examples, the article illustrates the method of teaching the Poisson distribution. (PS)
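An "experimental" introduction of the Poisson distribution can be simulated in the same spirit: count rare independent events per interval and compare the empirical proportions with the Poisson pmf. The interval counts and rates below are arbitrary choices for the sketch, not from the article:

```python
import math
import random

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

random.seed(42)
# "Experimental" counts: 2000 intervals, each recording how many of
# 500 rare independent events (p = 0.006 each) land in the interval,
# so counts are approximately Poisson with mean 500 * 0.006 = 3.
counts = [sum(1 for _ in range(500) if random.random() < 0.006)
          for _ in range(2000)]
lam_hat = sum(counts) / len(counts)          # sample mean, near 3
observed_p2 = counts.count(2) / len(counts)  # empirical P(count = 2)
expected_p2 = poisson_pmf(2, 3.0)            # = 4.5 * e**-3 ≈ 0.224
```

The agreement between observed and expected proportions is what a classroom experiment (e.g., counting raisins per cookie or cars per minute) is meant to exhibit.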
Recent progress in the statistical approach of parton distributions
Soffer, Jacques
2011-07-15
We recall the physical features of the parton distributions in the quantum statistical approach of the nucleon. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results. We also consider their extension to include their transverse momentum dependence.
Reconciling Statistical and Systems Science Approaches to Public Health
ERIC Educational Resources Information Center
Ip, Edward H.; Rahmandad, Hazhir; Shoham, David A.; Hammond, Ross; Huang, Terry T. -K.; Wang, Youfa; Mabry, Patricia L.
2013-01-01
Although systems science has emerged as a set of innovative approaches to study complex phenomena, many topically focused researchers including clinicians and scientists working in public health are somewhat befuddled by this methodology that at times appears to be radically different from analytic methods, such as statistical modeling, to which…
Advances in Testing the Statistical Significance of Mediation Effects
ERIC Educational Resources Information Center
Mallinckrodt, Brent; Abraham, W. Todd; Wei, Meifen; Russell, Daniel W.
2006-01-01
P. A. Frazier, A. P. Tix, and K. E. Barron (2004) highlighted a normal theory method popularized by R. M. Baron and D. A. Kenny (1986) for testing the statistical significance of indirect effects (i.e., mediator variables) in multiple regression contexts. However, simulation studies suggest that this method lacks statistical power relative to some…
Advances in Ecological Speciation: an integrative approach.
Faria, Rui; Renaut, Sebastien; Galindo, Juan; Pinho, Catarina; Melo-Ferreira, José; Melo, Martim; Jones, Felicity; Salzburger, Walter; Schluter, Dolph; Butlin, Roger
2014-02-01
The role of natural selection in promoting reproductive isolation has received substantial renewed interest within the last two decades. As a consequence, the study of ecological speciation has become an extremely productive research area in modern evolutionary biology. Recent innovations in sequencing technologies offer an unprecedented opportunity to study the mechanisms involved in ecological speciation. Genome scans provide significant insights but have some important limitations; efforts are needed to integrate them with other approaches to make full use of the sequencing data deluge. An international conference 'Advances in Ecological Speciation' organized by the University of Porto (Portugal) aimed to review current progress in ecological speciation. Using some of the examples presented at the conference, we highlight the benefits of integrating ecological and genomic data and discuss different mechanisms of parallel evolution. Finally, future avenues of research are suggested to advance our knowledge concerning the role of natural selection in the establishment of reproductive isolation during ecological speciation.
A κ-generalized statistical mechanics approach to income analysis
NASA Astrophysics Data System (ADS)
Clementi, F.; Gallegati, M.; Kaniadakis, G.
2009-02-01
This paper proposes a statistical mechanics approach to the analysis of income distribution and inequality. A new distribution function, having its roots in the framework of κ-generalized statistics, is derived that is particularly suitable for describing the whole spectrum of incomes, from the low-middle income region up to the high income Pareto power-law regime. Analytical expressions for the shape, moments and some other basic statistical properties are given. Furthermore, several well-known econometric tools for measuring inequality, which all exist in a closed form, are considered. A method for parameter estimation is also discussed. The model is shown to fit remarkably well the data on personal income for the United States, and the analysis of inequality performed in terms of its parameters is revealed as very powerful.
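The κ-generalized framework rests on the Kaniadakis κ-exponential, exp_κ(x) = (sqrt(1 + κ²x²) + κx)^(1/κ), which reduces to the ordinary exponential as κ → 0 but has power-law rather than exponential tails; this is what lets one function cover both the bulk and the Pareto regime of incomes. A minimal sketch (the κ value is illustrative, not a fitted parameter from the paper):

```python
import math

def kappa_exp(x, kappa):
    """Kaniadakis κ-exponential; reduces to exp(x) as κ → 0."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1 + kappa ** 2 * x ** 2) + kappa * x) ** (1 / kappa)

# Near the origin the deformation is negligible; far in the tail the
# κ-exponential decays much more slowly than exp, i.e. as a power law.
ordinary = math.exp(-5.0)          # ≈ 0.0067
deformed = kappa_exp(-5.0, 0.7)    # noticeably larger: heavier tail
```

In the income model the survival function takes the form exp_κ(−βx^α), so the heavier tail directly produces the Pareto power-law regime for high incomes.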
Shukla, R.; Yu Daohai; Fulk, F.
1995-12-31
Short-term toxicity tests with aquatic organisms are a valuable measurement tool in the assessment of the toxicity of effluents, environmental samples and single chemicals. Currently, toxicity tests are utilized in a wide range of US EPA regulatory activities including effluent discharge compliance. In the current approach for determining the No Observed Effect Concentration, an effluent concentration is presumed safe if there is no statistically significant difference in toxicant response versus control response. The conclusion of a safe concentration may be due to the fact that it truly is safe, or alternatively, that the ability of the statistical test to detect an effect, given its existence, is inadequate. Results of research of a new statistical approach, the basis of which is to move away from a demonstration of no difference to a demonstration of equivalence, will be discussed. The concept of observed confidence distributions, first suggested by Cox, is proposed as a measure of the strength of evidence for practically equivalent responses between a given effluent concentration and the control. The research included determination of intervals of practically equivalent responses as a function of the variability of control response. The approach is illustrated using reproductive data from tests with Ceriodaphnia dubia and survival and growth data from tests with fathead minnow. The data are from the US EPA's National Reference Toxicant Database.
Statistics of topography: multifractal approach to describe planetary topography
NASA Astrophysics Data System (ADS)
Landais, Francois; Schmidt, Frédéric; Lovejoy, Shaun
2016-04-01
In recent decades, a huge amount of topographic data has been obtained by several techniques (laser and radar altimetry, DTM…) for different bodies in the solar system. In each case, topographic fields exhibit extremely high variability, with detail at every scale from millimeters to thousands of kilometers. In our study, we investigate the statistical properties of the topography. Our statistical approach is motivated by the well known scaling behavior of topography that has been widely studied in the past. Indeed, scaling laws are strongly present in geophysical fields and can be studied using the fractal formalism. More precisely, we expect multifractal behavior in global topographic fields. This behavior reflects the high variability and intermittency observed in topographic fields, which cannot be generated by simple scaling models. In the multifractal formalism, each statistical moment exhibits a different scaling law, characterized by a function called the moment scaling function. Previous studies conducted at regional scale demonstrated that topography presents multifractal statistics (Gagnon et al., 2006, NPG). We have obtained similar results on Mars (Landais et al. 2015) and more recently on different bodies in the solar system, including the Moon, Venus and Mercury. We present the results of different multifractal approaches performed on a global and regional basis and compare the fractal parameters from one body to another.
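The moment scaling function mentioned above can be estimated from a one-dimensional profile in a few lines: compute the q-th moments of increments over a range of lags and fit the log-log slopes. The Brownian test profile below is a synthetic stand-in for real topography (a monofractal with ζ(q) = q/2), not planetary data.

```python
import numpy as np

def moment_scaling(profile, lags, qs):
    """Estimate the moment scaling function zeta(q) of a 1-D profile from
    S_q(l) = <|h(x+l) - h(x)|^q> ~ l^zeta(q).  A nonlinear zeta(q) is the
    signature of multifractality; a linear one indicates a monofractal."""
    zetas = []
    for q in qs:
        logS = [np.log(np.mean(np.abs(profile[l:] - profile[:-l]) ** q))
                for l in lags]
        slope = np.polyfit(np.log(lags), logS, 1)[0]
        zetas.append(slope)
    return np.array(zetas)

rng = np.random.default_rng(0)
# Brownian profile: monofractal with H = 0.5, so zeta(q) should be ~ q/2
walk = np.cumsum(rng.standard_normal(200000))
lags = np.array([1, 2, 4, 8, 16, 32])
zeta = moment_scaling(walk, lags, qs=[1, 2, 3])
print(np.round(zeta, 2))  # close to [0.5, 1.0, 1.5] for this monofractal walk
```

Applied to real topographic transects, a downward-curving ζ(q) would indicate the intermittency the abstract describes.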
A new statistical approach to climate change detection and attribution
NASA Astrophysics Data System (ADS)
Ribes, Aurélien; Zwiers, Francis W.; Azaïs, Jean-Marc; Naveau, Philippe
2016-04-01
We propose here a new statistical approach to climate change detection and attribution that is based on additive decomposition and simple hypothesis testing. Most current statistical methods for detection and attribution rely on linear regression models where the observations are regressed onto expected response patterns to different external forcings. These methods do not use physical information provided by climate models regarding the expected response magnitudes to constrain the estimated responses to the forcings. Climate modelling uncertainty is difficult to take into account with regression based methods and is almost never treated explicitly. As an alternative to this approach, our statistical model is only based on the additivity assumption; the proposed method does not regress observations onto expected response patterns. We introduce estimation and testing procedures based on likelihood maximization, and show that climate modelling uncertainty can easily be accounted for. Some discussion is provided on how to practically estimate the climate modelling uncertainty based on an ensemble of opportunity. Our approach is based on the "models are statistically indistinguishable from the truth" paradigm, where the difference between any given model and the truth has the same distribution as the difference between any pair of models, but other choices might also be considered. The properties of this approach are illustrated and discussed based on synthetic data. Lastly, the method is applied to the linear trend in global mean temperature over the period 1951-2010. Consistent with the last IPCC assessment report, we find that most of the observed warming over this period (+0.65 K) is attributable to anthropogenic forcings (+0.67 ± 0.12 K, 90 % confidence range), with a very limited contribution from natural forcings (-0.01 ± 0.02 K).
Statistics of close approaches between asteroids and planets - Project Spaceguard
NASA Technical Reports Server (NTRS)
Milani, A.; Carpino, M.; Marzari, F.
1990-01-01
A data base of close approaches to the major planets has been generated via numerical integrations for a large number of planet-crossing asteroid orbits over the course of 200,000 yr; these data are then applied to such statistical theories as those of Kessler (1981) and Wetherill (1967). Attention is given to the orbits of the Toro-class asteroids, which violate the assumption of a lack of mean motion resonance locking between target planet and asteroid. A modified form of the Kessler theory is proposed which can address the problem of approaches between orbits that are either nearly coplanar or nearly tangent. A correlation analysis is used to test the assumption that the orbital elements of a planet-crossing orbit change solely due to close approaches.
A Statistical Approach to Optimizing Concrete Mixture Design
Alghamdi, Saeid A.
2014-01-01
A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405
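The 3³ full factorial design and first-order polynomial regression can be sketched as follows, using the factor levels quoted in the abstract. The strength coefficients and noise are invented for illustration; they are not the paper's fitted model.

```python
import itertools
import numpy as np

# The three factor levels from the study (3^3 full factorial design)
wc_levels = [0.38, 0.43, 0.48]   # water/cementitious materials ratio
cm_levels = [350, 375, 400]      # cementitious content, kg/m^3
fa_levels = [0.35, 0.40, 0.45]   # fine/total aggregate ratio

design = np.array(list(itertools.product(wc_levels, cm_levels, fa_levels)))
assert design.shape == (27, 3)   # 27 trial mixtures

# Hypothetical response: strength (MPa) falls with w/c, rises with content.
# These coefficients are invented for illustration, not the paper's model.
rng = np.random.default_rng(1)
strength = (90 - 80 * design[:, 0] + 0.04 * design[:, 1]
            - 10 * design[:, 2] + rng.normal(0, 0.5, 27))

# First-order polynomial regression: y = b0 + b1*wc + b2*cm + b3*fa
X = np.column_stack([np.ones(27), design])
coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
best = design[np.argmax(X @ coef)]
print(best)  # expected: lowest w/c, highest content, lowest fine fraction
```

The actual study fitted a fuller polynomial (with interactions) and explored several optimization options, but the workflow (design matrix, regression, optimum lookup) has this shape.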
Defining statistical perceptions with an empirical Bayesian approach
NASA Astrophysics Data System (ADS)
Tajima, Satohiro
2013-04-01
Extracting statistical structures (including textures or contrasts) from a natural stimulus is a central challenge in both biological and engineering contexts. This study interprets the process of statistical recognition in terms of hyperparameter estimations and free-energy minimization procedures with an empirical Bayesian approach. This mathematical interpretation resulted in a framework for relating physiological insights in animal sensory systems to the functional properties of recognizing stimulus statistics. We applied the present theoretical framework to two typical models of natural images that are encoded by a population of simulated retinal neurons, and demonstrated that the resulting cognitive performances could be quantified with the Fisher information measure. The current enterprise yielded predictions about the properties of human texture perception, suggesting that the perceptual resolution of image statistics depends on visual field angles, internal noise, and neuronal information processing pathways, such as the magnocellular, parvocellular, and koniocellular systems. Furthermore, the two conceptually similar natural-image models were found to yield qualitatively different predictions, striking a note of warning against confusing the two models when describing a natural image.
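The idea of quantifying recognition performance with the Fisher information measure can be illustrated for a population of independent Poisson neurons with Gaussian tuning curves, a standard textbook setup; the population layout and parameters below are illustrative assumptions, not the natural-image models analyzed in the paper.

```python
import numpy as np

def population_fisher_info(stimulus, centers, width, gain):
    """Fisher information of independent Poisson neurons with Gaussian
    tuning curves f_i(s):  I(s) = sum_i f_i'(s)^2 / f_i(s).
    Illustrative textbook quantity; parameters are assumptions."""
    f = gain * np.exp(-(stimulus - centers) ** 2 / (2 * width ** 2))
    fprime = f * (centers - stimulus) / width ** 2
    return float(np.sum(fprime ** 2 / f))

centers = np.linspace(-10, 10, 41)   # evenly tiled preferred stimuli
i1 = population_fisher_info(0.0, centers, width=1.0, gain=10.0)
i2 = population_fisher_info(0.0, centers, width=1.0, gain=20.0)
print(round(i2 / i1, 2))  # 2.0: information scales linearly with gain
```

This linear dependence on firing-rate gain is one way internal noise and processing pathway (e.g. magnocellular vs parvocellular gain) could shape perceptual resolution, as the abstract suggests.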
Assessing risk factors for dental caries: a statistical modeling approach.
Trottini, Mario; Bossù, Maurizio; Corridore, Denise; Ierardo, Gaetano; Luzzi, Valeria; Saccucci, Matteo; Polimeni, Antonella
2015-01-01
The problem of identifying potential determinants and predictors of dental caries is of key importance in caries research and it has received considerable attention in the scientific literature. From the methodological side, a broad range of statistical models is currently available to analyze dental caries indices (DMFT, dmfs, etc.). These models have been applied in several studies to investigate the impact of different risk factors on the cumulative severity of dental caries experience. However, in most of the cases (i) these studies focus on a very specific subset of risk factors; and (ii) in the statistical modeling only few candidate models are considered and model selection is at best only marginally addressed. As a result, our understanding of the robustness of the statistical inferences with respect to the choice of the model is very limited; the richness of the set of statistical models available for analysis is only marginally exploited; and inferences could be biased due to the omission of potentially important confounding variables in the model's specification. In this paper we argue that these limitations can be overcome considering a general class of candidate models and carefully exploring the model space using standard model selection criteria and measures of global fit and predictive performance of the candidate models. Strengths and limitations of the proposed approach are illustrated with a real data set. In our illustration the model space contains more than 2.6 million models, which require inferences to be adjusted for 'optimism'.
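Exploring a model space with a standard selection criterion can be sketched with exhaustive best-subset search under AIC. The risk-factor names and simulated data below are hypothetical, and the real study additionally used measures of global fit and optimism-corrected predictive performance.

```python
import itertools
import numpy as np

def best_subset_aic(X, y, names):
    """Exhaustively rank all 2^p - 1 predictor subsets of a Gaussian
    linear model by AIC.  A sketch of the 'explore the model space'
    idea; data and variable names are hypothetical."""
    n, p = X.shape
    best = (np.inf, None)
    for k in range(1, p + 1):
        for subset in itertools.combinations(range(p), k):
            Xs = np.column_stack([np.ones(n), X[:, subset]])
            beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
            rss = np.sum((y - Xs @ beta) ** 2)
            aic = n * np.log(rss / n) + 2 * (k + 2)  # +2: intercept, sigma
            if aic < best[0]:
                best = (aic, subset)
    return [names[i] for i in best[1]]

rng = np.random.default_rng(2)
n = 120
X = rng.standard_normal((n, 4))
# Hypothetical risk factors: only two truly influence the outcome here
y = 1.5 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(0, 1, n)
selected = best_subset_aic(X, y, ["sugar", "fluoride", "brushing", "age"])
print(selected)  # the two truly predictive factors should appear
```

At 2.6 million candidate models the exhaustive loop above is still feasible but slow; the point is the criterion-driven search, which scales to smarter enumeration strategies.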
Advances in assessing geomorphic plausibility in statistical susceptibility modelling
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Petschko, Helene; Glade, Thomas
2014-05-01
The quality, reliability and applicability of landslide susceptibility maps is regularly deduced directly by interpreting quantitative model performance measures. These quantitative estimates are usually calculated for an independent test sample of a landslide inventory. Numerous studies demonstrate that totally unbiased landslide inventories are rarely available. We assume that such biases are also inherent in the test sample used to quantitatively validate the models. Therefore we suppose that the explanatory power of statistical performance measures is limited by the quality of the inventory used to calculate these statistics. To investigate this assumption, we generated and validated 16 statistical susceptibility models by using two landslide inventories of differing qualities for the Rhenodanubian Flysch zone of Lower Austria (1,354 km²). The ALS-based (Airborne Laser Scan) Inventory (n=6,218) was mapped purposely for susceptibility modelling from a high resolution hillshade and exhibits a high positional accuracy. The less accurate building ground register (BGR; n=681) provided by the Geological Survey of Lower Austria represents reported damaging events and shows a substantially lower completeness. Both inventories exhibit differing systematic biases regarding the land cover. For instance, due to human impact on the visibility of geomorphic structures (e.g. planation), few ALS landslides could be mapped on settlements and pastures (ALS-mapping bias). In contrast, damaging events were frequently reported for settlements and pastures (BGR-report bias). Susceptibility maps were calculated by applying four multivariate classification methods, namely generalized linear model, generalized additive model, random forest and support vector machine separately for both inventories and two sets of explanatory variables (with and without land cover). Quantitative validation was performed by calculating the area under the receiver operating characteristics curve (AUROC
Statistical Methods Handbook for Advanced Gas Reactor Fuel Materials
J. J. Einerson
2005-05-01
Fuel materials such as kernels, coated particles, and compacts are being manufactured for experiments simulating service in the next generation of high temperature gas reactors. These must meet predefined acceptance specifications. Many tests are performed for quality assurance, and many of these correspond to criteria that must be met with specified confidence, based on random samples. This report describes the statistical methods to be used. The properties of the tests are discussed, including the risk of false acceptance, the risk of false rejection, and the assumption of normality. Methods for calculating sample sizes are also described.
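One standard calculation behind such acceptance criteria is the zero-failure sample size: how many randomly sampled items must pass inspection to claim, with stated confidence, that the defect fraction is below a limit. This is a generic acceptance-sampling sketch, not the handbook's specific plans or limits.

```python
import math

def zero_failure_sample_size(p_max, confidence):
    """Smallest n such that zero observed defects demonstrates, at the
    given confidence, that the true defect fraction does not exceed
    p_max:  (1 - p_max)^n <= 1 - confidence.  Generic sketch."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_max))

def false_acceptance_risk(n, p_true):
    """Probability that a lot with true defect fraction p_true passes a
    zero-failure plan of size n (the consumer's risk)."""
    return (1.0 - p_true) ** n

n = zero_failure_sample_size(p_max=0.01, confidence=0.95)
print(n)  # 299 items for a 1% limit at 95% confidence
print(false_acceptance_risk(n, 0.02))  # risk if true defect rate is 2%
```

The two functions correspond directly to the report's concerns: the sample-size rule controls the risk of false acceptance, while tightening `p_max` or `confidence` trades against the risk of false rejection of conforming lots.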
Advanced Approach of Multiagent Based Buoy Communication
Gricius, Gediminas; Drungilas, Darius; Andziulis, Arunas; Dzemydiene, Dale; Voznak, Miroslav; Kurmis, Mindaugas; Jakovlev, Sergej
2015-01-01
Usually, a hydrometeorological information system is faced with great data flows, but the data levels are often excessive, depending on the observed region of the water. The paper presents advanced buoy communication technologies based on multiagent interaction and data exchange between several monitoring system nodes. The proposed management of buoy communication is based on a clustering algorithm, which enables the performance of the hydrometeorological information system to be enhanced. The experiment is based on the design and analysis of the inexpensive but reliable Baltic Sea autonomous monitoring network (buoys), which would be able to continuously monitor and collect temperature, waviness, and other required data. The proposed approach of multiagent based buoy communication enables all the data from the coastal-based station to be monitored with limited transmission speed by setting different tasks for the agent-based buoy system according to the clustering information. PMID:26345197
A Flexible Approach for the Statistical Visualization of Ensemble Data
Potter, K.; Wilson, A.; Bremer, P.; Williams, Dean N.; Pascucci, V.; Johnson, C.
2009-09-29
Scientists are increasingly moving towards ensemble data sets to explore relationships present in dynamic systems. Ensemble data sets combine spatio-temporal simulation results generated using multiple numerical models, sampled input conditions and perturbed parameters. While ensemble data sets are a powerful tool for mitigating uncertainty, they pose significant visualization and analysis challenges due to their complexity. We present a collection of overview and statistical displays linked through a high level of interactivity to provide a framework for gaining key scientific insight into the distribution of the simulation results as well as the uncertainty associated with the data. In contrast to methods that present large amounts of diverse information in a single display, we argue that combining multiple linked statistical displays yields a clearer presentation of the data and facilitates a greater level of visual data analysis. We demonstrate this approach using driving problems from climate modeling and meteorology and discuss generalizations to other fields.
Advanced Safeguards Approaches for New Fast Reactors
Durst, Philip C.; Therios, Ike; Bean, Robert; Dougan, A.; Boyer, Brian; Wallace, Rick L.; Ehinger, Michael H.; Kovacic, Don N.; Tolk, K.
2007-12-15
This third report in the series reviews possible safeguards approaches for new fast reactors in general, and the ABR in particular. Fast-neutron spectrum reactors have been used since the early 1960s on an experimental and developmental level, generally with fertile blanket fuels to “breed” nuclear fuel such as plutonium. Whether the reactor is designed to breed plutonium, or transmute and “burn” actinides, depends mainly on the design of the reactor neutron reflector and whether the blanket fuel is “fertile” or suitable for transmutation. However, the safeguards issues are very similar, since they pertain mainly to the receipt, shipment and storage of fresh and spent plutonium and actinide-bearing “TRU”-fuel. For these reasons, the design of existing fast reactors and details concerning how they have been safeguarded were studied in developing advanced safeguards approaches for the new fast reactors. In this regard, the design of the Experimental Breeder Reactor-II “EBR-II” at the Idaho National Laboratory (INL) was of interest, because it was designed as a collocated fast reactor with a pyrometallurgical reprocessing and fuel fabrication line – a design option being considered for the ABR. Similarly, the design of the Fast Flux Test Facility (FFTF) on the Hanford Site was studied, because it was a successful prototype fast reactor that ran for two decades to evaluate fuels and the design for commercial-scale fast reactors.
Advances on interdisciplinary approaches to urban carbon
NASA Astrophysics Data System (ADS)
Romero-Lankao, P.
2015-12-01
North American urban areas are emerging as climate policy and technology innovators, urbanization process laboratories, fonts of carbon-relevant experiments, hubs for grass-roots mobilization, and centers for civil-society experiments to curb carbon emissions and avoid widespread and irreversible climate impacts. Since SOCCR, diverse lines of inquiry on urbanization, urban areas and the carbon cycle have advanced our understanding of some of the societal processes through which energy and land uses affect carbon. This presentation provides an overview of these diverse perspectives. It suggests the need for approaches that complement and combine the plethora of existing insights into interdisciplinary explorations of how different urbanization processes, and the socio-ecological and technological components of urban areas, affect the spatial and temporal patterns of carbon emissions, differentially over time and within and across cities. It also calls for a more holistic approach to examining the carbon implications of urbanization and urban areas as places, based not only on demographics or income, but also on such other interconnected features of urban development pathways as urban form, economic function, economic growth policies and climate policies.
Statistically Based Approach to Broadband Liner Design and Assessment
NASA Technical Reports Server (NTRS)
Nark, Douglas M. (Inventor); Jones, Michael G. (Inventor)
2016-01-01
A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.
Learning the Language of Statistics: Challenges and Teaching Approaches
ERIC Educational Resources Information Center
Dunn, Peter K.; Carey, Michael D.; Richardson, Alice M.; McDonald, Christine
2016-01-01
Learning statistics requires learning the language of statistics. Statistics draws upon words from general English, mathematical English, discipline-specific English and words used primarily in statistics. This leads to many linguistic challenges in teaching statistics and the way in which the language is used in statistics creates an extra layer…
Statistical Approaches for the Study of Cognitive and Brain Aging.
Chen, Huaihou; Zhao, Bingxin; Cao, Guanqun; Proges, Eric C; O'Shea, Andrew; Woods, Adam J; Cohen, Ronald A
2016-01-01
Neuroimaging studies of cognitive and brain aging often yield massive datasets that create many analytic and statistical challenges. In this paper, we discuss and address several limitations in the existing work. (1) Linear models are often used to model the age effects on neuroimaging markers, which may be inadequate in capturing the potential nonlinear age effects. (2) Marginal correlations are often used in brain network analysis, which are not efficient in characterizing a complex brain network. (3) Due to the challenge of high-dimensionality, only a small subset of the regional neuroimaging markers is considered in a prediction model, which could miss important regional markers. To overcome those obstacles, we introduce several advanced statistical methods for analyzing data from cognitive and brain aging studies. Specifically, we introduce semiparametric models for modeling age effects, graphical models for brain network analysis, and penalized regression methods for selecting the most important markers in predicting cognitive outcomes. We illustrate these methods using the healthy aging data from the Active Brain Study. PMID:27486400
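One concrete instance of the penalized regression methods named above is the lasso, fit here by cyclic coordinate descent with soft-thresholding. The marker names and simulated data are hypothetical, and this is a minimal sketch of the selection idea, not the paper's analysis of the Active Brain Study.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent with soft-thresholding.
    Minimal sketch of penalized regression for marker selection;
    columns of X are assumed roughly standardized."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]     # partial residual
            rho = X[:, j] @ r
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(6)
n, p = 150, 20
X = rng.standard_normal((n, p))                      # 20 regional markers
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 1, n)  # two true markers
beta = lasso_cd(X, y, lam=30.0)
selected = np.nonzero(np.abs(beta) > 0.2)[0].tolist()
print(selected)  # the two truly predictive markers should be selected
```

The L1 penalty zeroes out most coefficients, which is exactly how a small informative subset can be pulled out of a high-dimensional pool of regional markers.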
Statistical approach to quantifying the elastic deformation of nanomaterials
Deng, Xinwei; Joseph, V. Roshan; Mai, Wenjie; Wang, Zhong Lin; Wu, C. F. Jeff
2009-01-01
Quantifying the mechanical properties of nanomaterials is challenged by their small size, the difficulty of manipulation, the lack of reliable measurement techniques, and grossly varying measurement conditions and environments. A recently proposed approach is to estimate the elastic modulus from a force-deflection physical model based on the continuous bridged-deformation of a nanobelt/nanowire using an atomic force microscope tip under different contact forces. However, the nanobelt may have some initial bending, surface roughness and imperfect physical boundary conditions during measurement, leading to large systematic errors and uncertainty in data quantification. In this article, a statistical modeling technique, sequential profile adjustment by regression (SPAR), is proposed to account for and eliminate the various experimental errors and artifacts. SPAR can automatically detect and remove the systematic errors and therefore gives more precise estimation of the elastic modulus. This research presents an innovative approach that can potentially have a broad impact in quantitative nanomechanics and nanoelectronics. PMID:19556542
STATISTICS OF DARK MATTER HALOS FROM THE EXCURSION SET APPROACH
Lapi, A.; Salucci, P.; Danese, L.
2013-08-01
We exploit the excursion set approach in integral formulation to derive novel, accurate analytic approximations of the unconditional and conditional first crossing distributions for random walks with uncorrelated steps and general shapes of the moving barrier; we find the corresponding approximations of the unconditional and conditional halo mass functions for cold dark matter (DM) power spectra to represent very well the outcomes of state-of-the-art cosmological N-body simulations. In addition, we apply these results to derive, and confront with simulations, other quantities of interest in halo statistics, including the rates of halo formation and creation, the average halo growth history, and the halo bias. Finally, we discuss how our approach and main results change when considering random walks with correlated instead of uncorrelated steps, and warm instead of cold DM power spectra.
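For the special case of a constant barrier, the unconditional first-crossing distribution has a closed form, and a direct Monte Carlo over random walks with uncorrelated steps reproduces it; moving barriers, which the abstract treats analytically, would replace the fixed threshold below. This is an illustrative sketch, not the paper's integral-formulation approximations.

```python
import math
import numpy as np

def first_crossing_fraction(barrier, S_total, n_steps, n_walks, seed=3):
    """Monte Carlo excursion set: fraction of random walks with
    uncorrelated Gaussian steps (in 'variance time' S) that cross a
    constant barrier by S_total.  For a constant barrier the exact
    answer is erfc(B / sqrt(2 S)).  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    dS = S_total / n_steps
    steps = rng.normal(0.0, math.sqrt(dS), size=(n_walks, n_steps))
    walks = np.cumsum(steps, axis=1)
    return float(np.mean(walks.max(axis=1) >= barrier))

B, S = 1.0, 1.0
mc = first_crossing_fraction(B, S, n_steps=200, n_walks=10000)
exact = math.erfc(B / math.sqrt(2 * S))
print(round(mc, 2), round(exact, 2))  # MC slightly low due to discretization
```

In the cosmological application the first-crossing distribution in S maps directly onto the halo mass function, so accuracy of such approximations is what the comparison with N-body simulations tests.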
The statistical multifragmentation model: Origins and recent advances
NASA Astrophysics Data System (ADS)
Donangelo, R.; Souza, S. R.
2016-07-01
We review the Statistical Multifragmentation Model (SMM), which considers a generalization of the liquid-drop model for hot nuclei and allows one to calculate thermodynamic quantities characterizing the nuclear ensemble at the disassembly stage. We show how to determine probabilities of definite partitions of finite nuclei and how to determine, through Monte Carlo calculations, observables such as the caloric curve, multiplicity distributions, and heat capacity, among others. Some experimental measurements of the caloric curve confirmed SMM predictions made over 10 years earlier, leading to a surge of interest in the model. However, the experimental determination of the fragmentation temperatures relies on the yields of different isotopic species, which were not correctly calculated in the schematic liquid-drop picture employed in the SMM. This led to a series of improvements in the SMM, in particular to a more careful choice of nuclear masses and energy densities, especially for the lighter nuclei. With these improvements the SMM is able to make quantitative determinations of isotope production. We show the application of the SMM to the production of exotic nuclei through multifragmentation. These preliminary calculations demonstrate the need for a careful choice of the system size and excitation energy to attain maximum yields.
The Precautionary Principle and statistical approaches to uncertainty.
Keiding, Niels; Budtz-Jørgensen, Esben
2004-01-01
The central challenge from the Precautionary Principle to statistical methodology is to help delineate (preferably quantitatively) the possibility that some exposure is hazardous, even in cases where this is not established beyond reasonable doubt. The classical approach to hypothesis testing is unhelpful, because lack of significance can be due either to uninformative data or to genuine lack of effect (the Type II error problem). Its inversion, bioequivalence testing, might sometimes be a model for the Precautionary Principle in its ability to "prove the null hypothesis". Current procedures for setting safe exposure levels are essentially derived from these classical statistical ideas, and we outline how uncertainties in the exposure and response measurements affect the no observed adverse effect level, the Benchmark approach and the "Hockey Stick" model. A particular problem concerns model uncertainty: usually these procedures assume that the class of models describing dose/response is known with certainty; this assumption is, however, often violated, perhaps particularly often when epidemiological data form the source of the risk assessment, and regulatory authorities have occasionally resorted to some average based on competing models. The recent methodology of the Bayesian model averaging might be a systematic version of this, but is this an arena for the Precautionary Principle to come into play?
A feature refinement approach for statistical interior CT reconstruction.
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-21
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, the interior projection extrapolation based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field-of-view. Moreover, an interior feature refinement step, as an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo Micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than other conventional methods in suppressing noise, reducing truncated and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements. PMID:27362527
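A one-dimensional toy version of the PWLS-TV objective shows the structure of the method: a statistically weighted data-fidelity term plus a TV penalty. Here it is minimized by plain gradient descent on a smoothed TV term applied to a denoising problem, not the authors' modified steepest descent on truncated CT projections; all parameters are illustrative.

```python
import numpy as np

def pwls_tv_1d(y, weights, beta, n_iter=1000, eps=0.01):
    """Gradient descent on the penalized weighted least-squares objective
      sum_i w_i (x_i - y_i)^2 + beta * sum_i sqrt((x_{i+1}-x_i)^2 + eps^2),
    i.e. a smoothed 1-D total-variation penalty.  Toy stand-in for the
    paper's PWLS-TV CT solver, for illustration only."""
    x = y.copy()
    step = 1.0 / (2.0 * weights.max() + 4.0 * beta / eps)  # ~1/Lipschitz
    for _ in range(n_iter):
        grad = 2.0 * weights * (x - y)
        dx = np.diff(x)
        tv = dx / np.sqrt(dx ** 2 + eps ** 2)   # d/dx of smoothed |dx|
        grad[:-1] -= beta * tv
        grad[1:] += beta * tv
        x = x - step * grad
    return x

rng = np.random.default_rng(4)
truth = np.repeat([0.0, 1.0, 0.0], 50)          # piecewise-constant phantom
noisy = truth + rng.normal(0, 0.3, truth.size)
w = np.full(truth.size, 1.0)                    # statistical weights (uniform here)
recon = pwls_tv_1d(noisy, w, beta=0.5)
print(np.mean((noisy - truth) ** 2), np.mean((recon - truth) ** 2))
```

In the CT setting the weights come from the noise variance of each projection measurement, which is what makes the fidelity term "statistical" rather than plain least squares.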
Statistical physics approach to earthquake occurrence and forecasting
NASA Astrophysics Data System (ADS)
de Arcangelis, Lucilla; Godano, Cataldo; Grasso, Jean Robert; Lippiello, Eugenio
2016-04-01
There is striking evidence that the dynamics of the Earth's crust is controlled by a wide variety of mutually dependent mechanisms acting at different spatial and temporal scales. The interplay of these mechanisms produces instabilities in the stress field, leading to abrupt energy releases, i.e., earthquakes. As a consequence, the evolution towards instability before a single event is very difficult to monitor. On the other hand, collective behavior in stress transfer and relaxation within the Earth's crust leads to emergent properties described by stable phenomenological laws for a population of many earthquakes in the size, time and space domains. This observation has stimulated a statistical mechanics approach to earthquake occurrence, applying ideas and methods such as scaling laws, universality, fractal dimension, and the renormalization group to characterize the physics of earthquakes. In this review we first present a description of the phenomenological laws of earthquake occurrence which represent the frame of reference for a variety of statistical mechanical models, ranging from the spring-block to more complex fault models. Next, we discuss the problem of seismic forecasting in the general framework of stochastic processes, where seismic occurrence can be described as a branching process implementing space-time-energy correlations between earthquakes. In this context we show how correlations originate from dynamical scaling relations between time and energy, able to account for universality and provide a unifying description for the phenomenological power laws. Then we discuss how branching models can be implemented to forecast the temporal evolution of the earthquake occurrence probability and allow one to discriminate among different physical mechanisms responsible for earthquake triggering. In particular, the forecasting problem will be presented in a rigorous mathematical framework, discussing the relevance of the processes acting at different temporal scales for different
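The branching-process description of seismic occurrence can be reduced to its skeleton: a subcritical Galton-Watson cascade in which each event triggers a Poisson number of direct aftershocks, giving a mean cluster size of 1/(1 − n) for branching ratio n. Omori-law waiting times, magnitudes and spatial kernels, which are essential in real ETAS-type forecasts, are deliberately omitted from this illustrative sketch.

```python
import numpy as np

def aftershock_cluster_sizes(branching_ratio, n_clusters, seed=5):
    """Simulate aftershock cascades as a subcritical Galton-Watson
    branching process: each event triggers Poisson(branching_ratio)
    direct aftershocks.  Expected total cluster size (mainshock plus
    all generations) is 1 / (1 - branching_ratio).  Skeleton of
    ETAS-type models, times and magnitudes omitted."""
    rng = np.random.default_rng(seed)
    sizes = np.empty(n_clusters)
    for i in range(n_clusters):
        total, generation = 1, 1
        while generation > 0:
            # offspring of a whole generation: Poisson(n * generation)
            generation = rng.poisson(branching_ratio * generation)
            total += generation
        sizes[i] = total
    return sizes

sizes = aftershock_cluster_sizes(0.8, 20000)
print(round(float(sizes.mean()), 1))  # theory: 1 / (1 - 0.8) = 5 events
```

Forecasting with such models amounts to updating the conditional intensity as each new event adds its expected offspring, which is how the occurrence probability evolves in time.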
A feature refinement approach for statistical interior CT reconstruction
NASA Astrophysics Data System (ADS)
Hu, Zhanli; Zhang, Yunwan; Liu, Jianbo; Ma, Jianhua; Zheng, Hairong; Liang, Dong
2016-07-01
Interior tomography is clinically desired to reduce the radiation dose rendered to patients. In this work, a new statistical interior tomography approach for computed tomography is proposed. The developed design focuses on taking into account the statistical nature of local projection data and recovering fine structures which are lost in the conventional total-variation (TV)-minimization reconstruction. The proposed method falls within the compressed sensing framework of TV minimization, which only assumes that the interior ROI is piecewise constant or polynomial and does not need any additional prior knowledge. To integrate the statistical distribution property of projection data, the objective function is built under the criterion of penalized weighted least-squares (PWLS-TV). In the implementation of the proposed method, an interior-projection-extrapolation-based FBP reconstruction is first used as the initial guess to mitigate truncation artifacts and also provide an extended field of view. Moreover, an interior feature refinement step, an important processing operation, is performed after each iteration of PWLS-TV to recover the desired structure information which is lost during the TV minimization. Here, a feature descriptor is specifically designed and employed to distinguish structure from noise and noise-like artifacts. A modified steepest descent algorithm is adopted to minimize the associated objective function. The proposed method is applied to both digital phantom and in vivo micro-CT datasets, and compared to FBP, ART-TV and PWLS-TV. The reconstruction results demonstrate that the proposed method performs better than the other conventional methods in suppressing noise, reducing truncation and streak artifacts, and preserving features. The proposed approach demonstrates its potential usefulness for feature preservation of interior tomography under truncated projection measurements.
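The PWLS-TV objective combines a statistically weighted data-fidelity term with a total-variation penalty. As a rough illustration only, the sketch below minimizes a one-dimensional analogue of such an objective by gradient descent on a smoothed absolute value; the function name, the smoothing constant `eps`, and the step-size choices are our assumptions, and the actual method operates on projection data with an FBP initialization and a feature-refinement step that this toy omits.

```python
def pwls_tv_denoise_1d(y, weights, lam=0.3, steps=1000, lr=0.01, eps=1e-4):
    """Minimize sum_i w_i*(x_i - y_i)^2 + lam*sum_i |x_{i+1} - x_i|
    by gradient descent, with |d| smoothed to sqrt(d^2 + eps).
    A 1D stand-in for a PWLS-TV-style objective, not the paper's method."""
    x = list(y)
    for _ in range(steps):
        # Gradient of the weighted least-squares data term.
        grad = [2.0 * w * (xi - yi) for w, xi, yi in zip(weights, x, y)]
        # Gradient of the smoothed total-variation penalty.
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            g = lam * d / (d * d + eps) ** 0.5
            grad[i] -= g
            grad[i + 1] += g
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x

# A noisy two-level signal: denoising should reduce total variation.
noisy = [0.0, 0.2, -0.1, 1.1, 0.9, 1.0]
smooth = pwls_tv_denoise_1d(noisy, weights=[1.0] * 6)
tv = lambda v: sum(abs(v[i + 1] - v[i]) for i in range(len(v) - 1))
print(tv(smooth) < tv(noisy))   # → True
```

The statistical weights `w_i` are what distinguishes PWLS from ordinary least squares: in CT they would reflect the noise variance of each projection measurement.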
Urban pavement surface temperature. Comparison of numerical and statistical approach
NASA Astrophysics Data System (ADS)
Marchetti, Mario; Khalifa, Abderrahmen; Bues, Michel; Bouilloud, Ludovic; Martin, Eric; Chancibaut, Katia
2015-04-01
The forecast of pavement surface temperature is very specific in the context of urban winter maintenance, to manage snow plowing and salting of roads. Such forecasts mainly rely on numerical models based on a description of the energy balance between the atmosphere, the buildings and the pavement, with a canyon configuration. Nevertheless, there is a specific need for a physical description and numerical implementation of traffic in the energy flux balance. Traffic was originally considered as a constant. Many changes were made to a numerical model to describe as accurately as possible the traffic effects on this urban energy balance, such as tire friction, the pavement-air exchange coefficient, and the net infrared flux balance. Experiments based on infrared thermography and radiometry were then conducted to quantify the effect of traffic on urban pavement surfaces. Based on meteorological data, corresponding pavement temperature forecasts were calculated and compared with field measurements. Results indicated good agreement between the forecasts from the numerical model based on this energy balance approach and the measurements. A complementary forecast approach based on principal component analysis (PCA) and partial least-squares regression (PLS) was also developed, with data from thermal mapping using infrared radiometry. The forecast of pavement surface temperature from air temperature was obtained in the specific case of an urban configuration, with traffic considered in the measurements used for the statistical analysis. A comparison between results from the numerical model based on energy balance and from PCA/PLS was then conducted, indicating the advantages and limits of each approach.
Runup of tsunami waves on a plane beach: statistical approach
NASA Astrophysics Data System (ADS)
Didenkulova, Ira; Pelinovsky, Efim; Sergeeva, Anna
2010-05-01
Tsunami waves approaching the coast frequently cause extensive coastal flooding, destruction of coastal constructions and loss of lives. Their destructive potential can be intensified by seiche oscillations in bays and harbours induced by tsunami waves. Meanwhile, the process of tsunami wave runup on the coast is usually studied for incident solitary-like disturbances: this is a typical input in laboratory and numerical experiments. Theoretically, this process is studied in the framework of nonlinear shallow-water theory with the use of the hodograph (Legendre) transformation. Analytical solutions are obtained for initial disturbances in the form of a solitary wave: soliton, Gaussian and Lorentz pulses, N-waves and some other pulses of specific shapes. All these solutions (theoretical and experimental) do not take into account the real long tsunami record, which usually contains seiche oscillations. Such oscillations can be considered as a quasi-stationary random process with known statistics. In this paper the runup of irregular waves on a plane beach is studied in the framework of nonlinear shallow-water theory under an assumption of non-breaking runup. The typical period of seiche oscillations in harbours is 15-45 min, which supports the validity of this assumption. Statistical analysis of the runup characteristics (water displacement and shoreline velocity) is carried out for an incident wave field with a Gaussian distribution. The probability density function of the runup characteristics is not Gaussian, and its deviation from the Gaussian distribution can be expressed through the breaking parameter or a self-similarity parameter.
Flow Equation Approach to the Statistics of Nonlinear Dynamical Systems
NASA Astrophysics Data System (ADS)
Marston, J. B.; Hastings, M. B.
2005-03-01
The probability distribution function of non-linear dynamical systems is governed by a linear framework that resembles quantum many-body theory, in which stochastic forcing and/or averaging over initial conditions play the role of a non-zero ℏ. Besides the well-known Fokker-Planck approach, there is a related Hopf functional method [Uriel Frisch, Turbulence: The Legacy of A. N. Kolmogorov (Cambridge University Press, 1995), chapter 9.5]; in both formalisms, zero modes of linear operators describe the stationary non-equilibrium statistics. To access the statistics, we investigate the method of continuous unitary transformations [S. D. Glazek and K. G. Wilson, Phys. Rev. D 48, 5863 (1993); Phys. Rev. D 49, 4214 (1994)] (also known as the flow equation approach [F. Wegner, Ann. Phys. 3, 77 (1994)]), suitably generalized to the diagonalization of non-Hermitian matrices. Comparison to the more traditional cumulant expansion method is illustrated with low-dimensional attractors. The treatment of high-dimensional dynamical systems is also discussed.
Rate-equation approach to atomic-laser light statistics
Chusseau, Laurent; Arnaud, Jacques; Philippe, Fabrice
2002-11-01
We consider three- and four-level atomic lasers that are either incoherently (unidirectionally) or coherently (bidirectionally) pumped, the single-mode cavity being resonant with the laser transition. The intracavity Fano factor and the photocurrent spectral density are evaluated on the basis of rate equations. According to that approach, fluctuations are caused by jumps in active and detecting atoms. The algebra is simple. Whenever a comparison is made, the expressions obtained coincide with the previous results. The conditions under which the output light exhibits sub-Poissonian statistics are considered in detail. Analytical results, based on linearization, are verified by comparison with Monte Carlo simulations. An essentially exhaustive investigation of sub-Poissonian light generation by three- and four-level lasers has been performed. Only special forms were reported earlier.
Modulational Instability of Cylindrical and Spherical NLS Equations. Statistical Approach
Grecu, A. T.; Grecu, D.; Visinescu, Anca; De Nicola, S.; Fedele, R.
2010-01-21
The modulational (Benjamin-Feir) instability for cylindrical and spherical NLS equations (c/s NLS equations) is studied using a statistical approach (SAMI). A kinetic equation for a two-point correlation function is written and analyzed using the Wigner-Moyal transform. The linear stability of the Fourier transform of the two-point correlation function is studied and an implicit integral form for the dispersion relation is found. This is solved for different expressions of the initial spectrum (delta-spectrum, Lorentzian, Gaussian), and in the case of a Lorentzian spectrum the total growth of the instability is calculated. The similarities and differences with the usual one-dimensional NLS equation are emphasized.
Multilayer Approach for Advanced Hybrid Lithium Battery.
Ming, Jun; Li, Mengliu; Kumar, Pushpendra; Li, Lain-Jong
2016-06-28
Conventional intercalated rechargeable batteries have shown their capacity limit, and the development of an alternative battery system with higher capacity is strongly needed for sustainable electrical vehicles and hand-held devices. Herein, we introduce a feasible and scalable multilayer approach to fabricate a promising hybrid lithium battery with superior capacity and multivoltage plateaus. A sulfur-rich electrode (90 wt % S) is covered by a dual layer of graphite/Li4Ti5O12, where the active materials S and Li4Ti5O12 can both take part in redox reactions and thus deliver a high capacity of 572 mAh gcathode(-1) (vs the total mass of electrode) or 1866 mAh gs(-1) (vs the mass of sulfur) at 0.1C (with the definition of 1C = 1675 mA gs(-1)). The battery shows unique voltage platforms at 2.35 and 2.1 V, contributed from S, and 1.55 V from Li4Ti5O12. A high rate capability of 566 mAh gcathode(-1) at 0.25C and 376 mAh gcathode(-1) at 1C with durable cycle ability over 100 cycles can be achieved. Operando Raman and electron microscope analysis confirm that the graphite/Li4Ti5O12 layer slows the dissolution/migration of polysulfides, thereby giving rise to a higher sulfur utilization and a slower capacity decay. This advanced hybrid battery with a multilayer concept for marrying different voltage plateaus from various electrode materials opens a way of providing tunable capacity and multiple voltage platforms for energy device applications. PMID:27268064
Statistical physics approach to quantifying differences in myelinated nerve fibers
NASA Astrophysics Data System (ADS)
Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene
2014-03-01
We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.
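The feature-selection step, searching for the pair of features that best separates the two age groups, can be sketched with a brute-force pair search and a nearest-centroid classifier. This is a generic stand-in, not the authors' algorithm; the function names and the synthetic data are our assumptions.

```python
def centroid_accuracy(samples, labels, i, j):
    """Accuracy of a nearest-centroid classifier restricted to features i, j."""
    groups = {}
    for s, lab in zip(samples, labels):
        groups.setdefault(lab, []).append((s[i], s[j]))
    centroids = {lab: (sum(p[0] for p in pts) / len(pts),
                       sum(p[1] for p in pts) / len(pts))
                 for lab, pts in groups.items()}
    correct = 0
    for s, lab in zip(samples, labels):
        # Assign each sample to the nearest group centroid.
        pred = min(centroids, key=lambda g: (s[i] - centroids[g][0]) ** 2
                                          + (s[j] - centroids[g][1]) ** 2)
        correct += pred == lab
    return correct / len(samples)

def best_pair(samples, labels):
    """Brute-force search over all feature pairs for the best separator."""
    n = len(samples[0])
    return max(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda ij: centroid_accuracy(samples, labels, *ij))

# Toy data: features 0 and 2 carry the group difference, feature 1 is noise.
samples = [(0.1, 3.0, 0.2), (0.0, -1.0, 0.1), (1.0, 2.9, 0.9), (1.1, -0.8, 1.0)]
labels = ['young', 'young', 'old', 'old']
print(best_pair(samples, labels))
```

With many candidate features an exhaustive pair search stays cheap; the study's reported 94% accuracy came from exactly such a two-feature combination (occupied axon area fraction and effective local density).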
Technology Transfer Automated Retrieval System (TEKTRAN)
Associations between food patterns and adiposity are poorly understood. Two statistical approaches were used to examine the potential association between egg consumption and adiposity. Pa...
Challenges and approaches to statistical design and inference in high-dimensional investigations.
Gadbury, Gary L; Garrett, Karen A; Allison, David B
2009-01-01
Advances in modern technologies have facilitated high-dimensional experiments (HDEs) that generate tremendous amounts of genomic, proteomic, and other "omic" data. HDEs involving whole-genome sequences and polymorphisms, expression levels of genes, protein abundance measurements, and combinations thereof have become a vanguard for new analytic approaches to the analysis of HDE data. Such situations demand creative approaches to the processes of statistical inference, estimation, prediction, classification, and study design. The novel and challenging biological questions asked from HDE data have resulted in many specialized analytic techniques being developed. This chapter discusses some of the unique statistical challenges facing investigators studying high-dimensional biology and describes some approaches being developed by statistical scientists. We have included some focus on the increasing interest in questions involving testing multiple propositions simultaneously, appropriate inferential indicators for the types of questions biologists are interested in, and the need for replication of results across independent studies, investigators, and settings. A key consideration inherent throughout is the challenge in providing methods that a statistician judges to be sound and a biologist finds informative.
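One widely used answer to the multiple-testing problem raised here is false-discovery-rate control. As a minimal illustration (our choice; the chapter does not prescribe a specific procedure), the Benjamini-Hochberg step-up method can be written in a few lines:

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha.

    Standard BH step-up procedure: sort the p-values, find the largest
    rank k with p_(k) <= (k/m)*alpha, and reject the k smallest.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k_max = rank
    return sorted(order[:k_max])

# Example: five tests, two strong signals among noise.
pvals = [0.001, 0.3, 0.04, 0.8, 0.008]
print(benjamini_hochberg(pvals))   # → [0, 4]
```

Note that 0.04 is not rejected even though it is below 0.05: the step-up threshold for its rank (3/5 × 0.05 = 0.03) is stricter.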
Statistical approach to meteoroid shape estimation based on recovered meteorites
NASA Astrophysics Data System (ADS)
Vinnikov, V.; Gritsevich, M.; Turchak, L.
2014-07-01
Each meteorite sample can provide data on the chemical and physical properties of interplanetary matter. The set of recovered fragments within one meteorite fall can give additional information on the history of its parent asteroid. A reliably estimated meteoroid shape is a valuable input parameter for the atmospheric entry scenario, since the pre-entry mass, terminal meteorite mass, and fireball luminosity are proportional to the pre-entry shape factor of the meteoroid to the power of 3 [1]. We present a statistical approach to the estimation of meteoroid pre-entry shape [2], applied to the detailed data on recovered meteorite fragments. This is a development of our recent study on the fragment mass distribution functions for the Košice meteorite fall [3]. The idea of the shape estimation technique is based on experiments showing that brittle fracturing produces multiple fragments of sizes smaller than or equal to the smallest dimension of the body [2]. Such shattering has fractal properties similar to many other natural phenomena [4]. Thus, this self-similarity for scaling mass sequences can be described by power-law statistical expressions [5]. The finite mass and the number of fragments N are represented via an exponential cutoff at the maximum fragment mass m_U. The undersampling of tiny unrecoverable fragments is handled via an additional constraint on the minimum fragment mass m_L. The complementary cumulative distribution function has the form F(m) = ((N-j)/m_j) (m/m_j)^(-β_0) exp(-(m-m_j)/m_U). The parameters sought (the scaling exponent β_0 and the mass limits) are computed to fit the empirical fragment mass distribution: S(β_0, j, m_U) = Σ_{i=j}^{N} [F(m_i) - (N-j)/m_j]^2, with m_j = m_L. The scaling exponent correlates with the dimensionless shape parameter d [2], 0.13d^2 - 0.21d + 1.1 - β = 0, which, in turn, is expressed via the ratio of the linear dimensions a, b, c of the shattering body [2]: d = 1 + 2(ab + ac + bc)(a^2 + b^2 + c^2)^(-1). We apply the
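The pieces of this fit are easy to sketch in code: a power law with an exponential cutoff, a squared misfit against the empirical distribution, and the shape parameter d from the body's linear dimensions. The parameter names below are ours, and the model's prefactor and the sign inside the exponential follow our reading of the (partly garbled) abstract, so treat this purely as an illustration.

```python
import math

def shape_parameter(a, b, c):
    """Dimensionless shape parameter d of a body with linear dimensions
    a, b, c, as given in the abstract: d = 1 + 2(ab+ac+bc)/(a^2+b^2+c^2)."""
    return 1 + 2 * (a * b + a * c + b * c) / (a ** 2 + b ** 2 + c ** 2)

def ccdf_model(m, beta0, m_min, m_cut, prefactor):
    """Power law with exponential cutoff:
    prefactor * (m/m_min)^(-beta0) * exp(-(m - m_min)/m_cut).
    Parameter names (and the prefactor's normalization) are ours."""
    return prefactor * (m / m_min) ** (-beta0) * math.exp(-(m - m_min) / m_cut)

def misfit(masses, beta0, m_min, m_cut, prefactor):
    """Sum of squared deviations of the model from the empirical CCDF,
    where the empirical CCDF at the i-th smallest mass is (n-i)/n."""
    xs = sorted(masses)
    n = len(xs)
    return sum((ccdf_model(m, beta0, m_min, m_cut, prefactor) - (n - i) / n) ** 2
               for i, m in enumerate(xs))

# A cube (a = b = c) maximizes the shape parameter: d = 3.
print(shape_parameter(1.0, 1.0, 1.0))   # → 3.0
```

Minimizing `misfit` over its parameters (e.g. by grid search) yields β_0, which the quadratic relation above then converts into the shape parameter d.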
New Statistical Approach to the Analysis of Hierarchical Data
NASA Astrophysics Data System (ADS)
Neuman, S. P.; Guadagnini, A.; Riva, M.
2014-12-01
Many variables possess a hierarchical structure reflected in how their increments vary in space and/or time. Quite commonly the increments (a) fluctuate in a highly irregular manner; (b) possess symmetric, non-Gaussian frequency distributions characterized by heavy tails that often decay with separation distance or lag; (c) exhibit nonlinear power-law scaling of sample structure functions in a midrange of lags, with breakdown in such scaling at small and large lags; (d) show extended power-law scaling (ESS) at all lags; and (e) display nonlinear scaling of power-law exponent with order of sample structure function. Some interpret this to imply that the variables are multifractal, which explains neither breakdowns in power-law scaling nor ESS. We offer an alternative interpretation consistent with all above phenomena. It views data as samples from stationary, anisotropic sub-Gaussian random fields subordinated to truncated fractional Brownian motion (tfBm) or truncated fractional Gaussian noise (tfGn). The fields are scaled Gaussian mixtures with random variances. Truncation of fBm and fGn entails filtering out components below data measurement or resolution scale and above domain scale. Our novel interpretation of the data allows us to obtain maximum likelihood estimates of all parameters characterizing the underlying truncated sub-Gaussian fields. These parameters in turn make it possible to downscale or upscale all statistical moments to situations entailing smaller or larger measurement or resolution and sampling scales, respectively. They also allow one to perform conditional or unconditional Monte Carlo simulations of random field realizations corresponding to these scales. Aspects of our approach are illustrated on field and laboratory measured porous and fractured rock permeabilities, as well as soil texture characteristics and neural network estimates of unsaturated hydraulic parameters in a deep vadose zone near Phoenix, Arizona. We also use our approach
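The sample structure functions referred to in (c)-(e) are straightforward to compute from a data series; extended self-similarity then plots one order against another instead of against the lag. A minimal sketch (function name and example series are ours):

```python
def structure_function(x, lag, q):
    """Sample structure function of order q at the given lag:
    the mean of |x[i+lag] - x[i]|^q over all valid increments."""
    diffs = [abs(x[i + lag] - x[i]) ** q for i in range(len(x) - lag)]
    return sum(diffs) / len(diffs)

# For a pure linear trend every lag-2 increment equals 1, so S_2(2) = 1.
series = [0.5 * i for i in range(20)]
print(structure_function(series, 2, 2))   # → 1.0
```

Power-law scaling means S_q(lag) ~ lag^ζ(q) over a midrange of lags; nonlinearity of ζ(q) in q is the behavior in (e) that the truncated sub-Gaussian interpretation accounts for without invoking multifractality.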
Heads Up! a Calculation- & Jargon-Free Approach to Statistics
ERIC Educational Resources Information Center
Giese, Alan R.
2012-01-01
Evaluating the strength of evidence in noisy data is a critical step in scientific thinking that typically relies on statistics. Students without statistical training will benefit from heuristic models that highlight the logic of statistical analysis. The likelihood associated with various coin-tossing outcomes gives students such a model. There…
A new weight-dependent direct statistical approach model
Burn, K.W.
1997-02-01
A weight-dependent capability is inserted into the direct statistical approach (DSA) to optimize splitting and Russian roulette (RR) parameters in Monte Carlo particle transport calculations. In the new model, splitting or RR is carried out on a progenitor arriving at a surface in such a way that the weight of the progeny is fixed (for the particular surface). Thus, the model is named the DSA weight line model. In the presence of weight-dependent games, all components of the second moment, and the time, are not separable. In the absence of weight-dependent games, the component of the second moment describing the weight-dependent splitting or RR is still not separable. Two approximations are examined to render this component separable under these circumstances. One of these approximations, named the noninteger approximation, looks promising. The new DSA model with the noninteger approximation is tested on four sample problems. Comparisons with the previous weight-independent DSA model and with the MCNP (version 4a) weight window generator are made.
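The weight-line idea, forcing all progeny at a surface to carry the same fixed weight whether the game is splitting or Russian roulette, can be sketched as follows. The function name and interface are our assumptions, and the DSA optimization of the weight-line values themselves is not shown.

```python
import random

def weight_line_adjust(weight, target, rng=random):
    """Split or Russian-roulette a particle so every survivor carries
    exactly 'target' weight, preserving the expected total weight.

    Returns a list of progeny weights (possibly empty). If weight >
    target this is splitting; if weight < target it is Russian roulette.
    """
    ratio = weight / target
    n = int(ratio)
    # Handle the fractional remainder stochastically so the expected
    # number of progeny equals weight / target.
    if rng.random() < ratio - n:
        n += 1
    return [target] * n

# Expectation check: average total progeny weight ~ original weight.
rng = random.Random(1)
total = sum(sum(weight_line_adjust(0.37, 1.0, rng)) for _ in range(100000))
print(total / 100000)   # close to 0.37
```

Because every progeny leaves the surface with the same weight, the weight distribution downstream collapses, which is what makes the second-moment bookkeeping of the DSA tractable in this model.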
Statistical physics and physiology: monofractal and multifractal approaches
NASA Technical Reports Server (NTRS)
Stanley, H. E.; Amaral, L. A.; Goldberger, A. L.; Havlin, S.; Peng, C. K.
1999-01-01
Even under healthy, basal conditions, physiologic systems show erratic fluctuations resembling those found in dynamical systems driven away from a single equilibrium state. Do such "nonequilibrium" fluctuations simply reflect the fact that physiologic systems are being constantly perturbed by external and intrinsic noise? Or do these fluctuations actually contain useful, "hidden" information about the underlying nonequilibrium control mechanisms? We report some recent attempts to understand the dynamics of complex physiologic fluctuations by adapting and extending concepts and methods developed very recently in statistical physics. Specifically, we focus on interbeat interval variability as an important quantity to help elucidate possibly non-homeostatic physiologic variability because (i) the heart rate is under direct neuroautonomic control, (ii) interbeat interval variability is readily measured by noninvasive means, and (iii) analysis of these heart rate dynamics may provide important practical diagnostic and prognostic information not obtainable with current approaches. The analytic tools we discuss may be used on a wider range of physiologic signals. We first review recent progress using two analysis methods--detrended fluctuation analysis and wavelets--sufficient for quantifying monofractal structures. We then describe recent work that quantifies multifractal features of interbeat interval series, and the discovery that the multifractal structure of healthy subjects is different from that of diseased subjects.
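Detrended fluctuation analysis, the first of the two methods mentioned, integrates the signal and measures RMS deviations from local linear trends at a given window size; the scaling of this fluctuation with window size gives the monofractal exponent. A compact sketch (our implementation, not the authors' code):

```python
def dfa_fluctuation(x, window):
    """Root-mean-square fluctuation of the integrated series around a
    least-squares linear trend fitted in each non-overlapping window.

    Computing F(n) for several window sizes n and fitting log F(n)
    against log n gives the DFA scaling exponent alpha.
    """
    # Integrate the mean-subtracted series (the "profile").
    mean = sum(x) / len(x)
    profile, s = [], 0.0
    for v in x:
        s += v - mean
        profile.append(s)
    n_windows = len(profile) // window
    sq_sum, count = 0.0, 0
    for w in range(n_windows):
        seg = profile[w * window:(w + 1) * window]
        t = list(range(window))
        # Closed-form least-squares fit of a line a + b*t to the segment.
        tm, ym = sum(t) / window, sum(seg) / window
        sxx = sum((ti - tm) ** 2 for ti in t)
        sxy = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, seg))
        b = sxy / sxx
        a = ym - b * tm
        for ti, yi in zip(t, seg):
            sq_sum += (yi - (a + b * ti)) ** 2
            count += 1
    return (sq_sum / count) ** 0.5
```

For healthy interbeat series the exponent typically falls between the uncorrelated (alpha = 0.5) and Brownian (alpha = 1.5) limits, which is part of what makes it a useful diagnostic summary.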
Rapidity-dependent chemical potentials in a statistical approach
NASA Astrophysics Data System (ADS)
Broniowski, Wojciech; Biedroń, Bartłomiej
2008-04-01
We present a single-freeze-out model with thermal and geometric parameters dependent on the position within the fireball and use it to describe the rapidity distribution and transverse-momentum spectra of pions, kaons, protons and antiprotons measured at RHIC at √(s_NN) = 200 GeV by BRAHMS. THERMINATOR is used to perform the necessary simulation, which includes all resonance decays. The result of the fit to the data is the expected growth of the baryon and strange chemical potentials with the spatial rapidity α_∥. The value of the baryon chemical potential at α_∥ ~ 3 is about 200 MeV, i.e. it lies in the range of the highest SPS energies. The chosen geometry of the fireball has a decreasing transverse size as the magnitude of α_∥ is increased, which also corresponds to decreasing transverse flow. The strange chemical potential obtained from the fit to the K+/K- ratio is such that the local strangeness density in the fireball is compatible with zero. The resulting rapidity distributions of net protons are described qualitatively within the statistical approach. As a result of our study, knowledge of the 'topography' of the fireball is acquired, allowing for other analyses and predictions. Research supported by the Polish Ministry of Education and Science, grants N202 034 32/0918 and 2 P03B 02828.
Understanding Vrikshasana using body mounted sensors: A statistical approach
Yelluru, Suhas Niranjan; Shanbhag, Ranjith Ravindra; Omkar, SN
2016-01-01
Aim: A scheme for understanding how the human body organizes postural movements while performing Vrikshasana is developed in this paper. Settings and Design: The structural characteristics of the body and the geometry of the muscular actions are incorporated into a graphical representation of human movement mechanics in the frontal plane. A series of neural organizational hypotheses enables us to understand the mechanics behind the hip and ankle strategies: (1) body sway in the mediolateral direction; and (2) the influence of hip and ankle in correcting instabilities caused in the body while performing Vrikshasana. Materials and Methods: A methodological study on 10 participants was performed by mounting four inertial measurement units on the surface of the trapezius, thoracolumbar fascia, vastus lateralis, and gastrocnemius muscles. The kinematic accelerations of three mutually exclusive trials were recorded for a period of 30 s. Results: The results of every trial were processed using two statistical signal-processing approaches, namely variance and cross-correlation analysis. Conclusions drawn from both studies were in favor of the initial hypothesis. Conclusions: This study enabled us to understand the role of hip abductors and adductors, and ankle extensions and flexions, in correcting posture while performing Vrikshasana. PMID:26865765
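Of the two signal descriptors used (variance and cross-correlation), the zero-lag normalized cross-correlation between two sensor traces is the simpler to sketch; the numbers below are made up for illustration, not the study's recordings.

```python
def normalized_cross_correlation(x, y):
    """Zero-lag normalized cross-correlation (Pearson r) between two
    equally long acceleration traces: 1.0 means the two sensors move in
    perfect synchrony, 0 means no linear relationship, -1.0 means they
    move in opposition."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical trunk and ankle traces; the ankle trace is a scaled copy.
trunk = [0.0, 0.1, 0.3, 0.2, -0.1]
ankle = [2 * v for v in trunk]
print(normalized_cross_correlation(trunk, ankle))   # ≈ 1.0
```

High correlation between, say, gastrocnemius and thoracolumbar traces would indicate that ankle and hip corrections act together, which is the kind of inference the study draws.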
ERIC Educational Resources Information Center
Hassan, Mahamood M.; Schwartz, Bill N.
2014-01-01
This paper discusses a student research project that is part of an advanced cost accounting class. The project emphasizes active learning, integrates cost accounting with macroeconomics and statistics by "learning by doing" using real world data. Students analyze sales data for a publicly listed company by focusing on the company's…
ERIC Educational Resources Information Center
Averitt, Sallie D.
This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills and instructing them in basic calculator…
ERIC Educational Resources Information Center
Billings, Paul H.
This instructional guide, one of a series developed by the Technical Education Advancement Modules (TEAM) project, is a 6-hour introductory module on statistical process control (SPC), designed to develop competencies in the following skill areas: (1) identification of the three classes of SPC use; (2) understanding a process and how it works; (3)…
Resistive switching phenomena: A review of statistical physics approaches
Lee, Jae Sung; Lee, Shinbuhm; Noh, Tae Won
2015-08-31
Here we report that resistive switching (RS) phenomena are reversible changes in the metastable resistance state induced by external electric fields. After discovery ~50 years ago, RS phenomena have attracted great attention due to their potential application in next-generation electrical devices. Considerable research has been performed to understand the physical mechanisms of RS and explore the feasibility and limits of such devices. There have also been several reviews on RS that attempt to explain the microscopic origins of how regions that were originally insulators can change into conductors. However, little attention has been paid to the most important factor in determining resistance: how conducting local regions are interconnected. Here, we provide an overview of the underlying physics behind connectivity changes in highly conductive regions under an electric field. We first classify RS phenomena according to their characteristic current–voltage curves: unipolar, bipolar, and threshold switchings. Second, we outline the microscopic origins of RS in oxides, focusing on the roles of oxygen vacancies: the effect of concentration, the mechanisms of channel formation and rupture, and the driving forces of oxygen vacancies. Third, we review RS studies from the perspective of statistical physics to understand connectivity change in RS phenomena. We discuss percolation model approaches and the theory for the scaling behaviors of numerous transport properties observed in RS. Fourth, we review various switching-type conversion phenomena in RS: bipolar-unipolar, memory-threshold, figure-of-eight, and counter-figure-of-eight conversions. Finally, we review several related technological issues, such as improvement in high resistance fluctuations, sneak-path problems, and multilevel switching problems.
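The percolation picture, in which resistance drops abruptly once conducting regions first form a connected path across the film, can be illustrated with a toy connectivity check on a 2D grid of conducting (1) and insulating (0) cells; this is a generic union-find sketch of percolation, not a model taken from the review.

```python
def find(parent, i):
    """Union-find root lookup with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def percolates(grid):
    """True if conducting cells (1s) connect the top row to the bottom
    row of a 2D grid, i.e. a "filament" spans the film thickness."""
    rows, cols = len(grid), len(grid[0])
    parent = list(range(rows * cols + 2))
    top, bottom = rows * cols, rows * cols + 1   # virtual electrodes

    def union(a, b):
        parent[find(parent, a)] = find(parent, b)

    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue
            idx = r * cols + c
            if r == 0:
                union(idx, top)
            if r == rows - 1:
                union(idx, bottom)
            if r + 1 < rows and grid[r + 1][c]:
                union(idx, idx + cols)
            if c + 1 < cols and grid[r][c + 1]:
                union(idx, idx + 1)
    return find(parent, top) == find(parent, bottom)

on  = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]   # a spanning path exists
off = [[1, 0, 1], [0, 0, 0], [0, 1, 1]]   # the middle row blocks it
print(percolates(on), percolates(off))    # → True False
```

Flipping a single cell in the middle row of `off` restores percolation, mirroring how the formation or rupture of one conducting channel can switch the device between resistance states.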
Symmetries and the approach to statistical equilibrium in isotropic turbulence
NASA Astrophysics Data System (ADS)
Clark, Timothy T.; Zemach, Charles
1998-11-01
The relaxation in time of an arbitrary isotropic turbulent state to a state of statistical equilibrium is identified as a transition to a state which is invariant under a symmetry group. We deduce the allowed self-similar forms and time-decay laws for equilibrium states by applying Lie-group methods (a) to a family of scaling symmetries, for the limit of high Reynolds number, as well as (b) to a unique scaling symmetry, for nonzero viscosity or nonzero hyperviscosity. This explains why a diverse collection of turbulence models, going back half a century, arrived at the same time-decay laws, either through derivations embedded in the mechanics of a particular model, or through numerical computation. Because the models treat the same dynamical variables having the same physical dimensions, they are subject to the same scaling invariances and hence to the same time-decay laws, independent of the eccentricities of their different formulations. We show in turn, by physical argument, by an explicitly solvable analytical model, and by numerical computation in more sophisticated models, that the physical mechanism which drives (this is distinct from the mathematical circumstance which allows) the relaxation to equilibrium is the cascade of turbulence energy toward higher wave numbers, with the rate of cascade approaching zero in the low wave-number limit and approaching infinity in the high wave-number limit. Only the low-wave-number properties of the initial state can influence the equilibrium state. This supplies the physical basis, beyond simple dimensional analysis, for quantitative estimates of relaxation times. These relaxation times are estimated to be as large as hundreds or more times the initial dominant-eddy cycle times, and are determined by the large-eddy cycle times. This mode of analysis, applied to a viscous turbulent system in a wind tunnel with typical initial laboratory parameters, shows that the time necessary to reach the final stage of decay is
Ice Shelf Modeling: A Cross-Polar Bayesian Statistical Approach
NASA Astrophysics Data System (ADS)
Kirchner, N.; Furrer, R.; Jakobsson, M.; Zwally, H. J.
2010-12-01
Ice streams interlink glacial terrestrial and marine environments: embedded in a grounded inland ice such as the Antarctic Ice Sheet or the paleo ice sheets covering extensive parts of the Eurasian and Amerasian Arctic respectively, ice streams are major drainage agents facilitating the discharge of substantial portions of continental ice into the ocean. At their seaward side, ice streams can either extend onto the ocean as floating ice tongues (such as the Drygalsky Ice Tongue/East Antarctica), or feed large ice shelves (as is the case for e.g. the Siple Coast and the Ross Ice Shelf/West Antarctica). The flow behavior of ice streams has been recognized to be intimately linked with configurational changes in their attached ice shelves; in particular, ice shelf disintegration is associated with rapid ice stream retreat and increased mass discharge from the continental ice mass, contributing eventually to sea level rise. Investigations of ice stream retreat mechanisms are, however, incomplete if based on terrestrial records only: rather, the dynamics of ice shelves (and, eventually, the impact of the ocean on the latter) must be accounted for. However, since floating ice shelves leave hardly any traces behind when melting, uncertainty regarding the spatio-temporal distribution and evolution of ice shelves in times prior to instrumented and recorded observation is high, calling thus for a statistical modeling approach. Complementing ongoing large-scale numerical modeling efforts (Pollard & DeConto, 2009), we model the configuration of ice shelves by using a Bayesian Hierarchical Modeling (BHM) approach. We adopt a cross-polar perspective accounting for the fact that currently, ice shelves exist mainly along the coastline of Antarctica (and are virtually non-existing in the Arctic), while Arctic Ocean ice shelves repeatedly impacted the Arctic ocean basin during former glacial periods. Modeled Arctic ocean ice shelf configurations are compared with geological spatial
Artificial Intelligence Approach to Support Statistical Quality Control Teaching
ERIC Educational Resources Information Center
Reis, Marcelo Menezes; Paladini, Edson Pacheco; Khator, Suresh; Sommer, Willy Arno
2006-01-01
Statistical quality control--SQC (consisting of Statistical Process Control, Process Capability Studies, Acceptance Sampling and Design of Experiments) is a very important tool to obtain, maintain and improve the Quality level of goods and services produced by an organization. Despite its importance, and the fact that it is taught in technical and…
ERIC Educational Resources Information Center
Petocz, Agnes; Newbery, Glenn
2010-01-01
Statistics education in psychology often falls disappointingly short of its goals. The increasing use of qualitative approaches in statistics education research has extended and enriched our understanding of statistical cognition processes, and thus facilitated improvements in statistical education and practices. Yet conceptual analysis, a…
[Approach to dysphagia in advanced dementia].
Gómez-Busto, Fernando; Andia, Virginia; Ruiz de Alegria, Loli; Francés, Inés
2009-11-01
From the onset, dementia affects the patient's nutritional status, producing anorexia, weight loss, feeding apraxia and dysphagia. Distinct strategies are required in each of the stages of this disease, starting with awareness and knowledge of the problem and its prompt detection. In dementia, dysphagia usually appears in advanced phases, when the patient is often institutionalized. When dysphagia is suspected, the patient's tolerance must be evaluated by the volume/viscosity test, environmental and postural strategies should be introduced, and the texture of the diet should be modified. This is a complex task requiring the involvement of a properly trained interdisciplinary team, able to provide information and alternatives and integrate the family environment in the patient's care. The adapted diet should be based on the traditional diet that can also be combined with artificial supplements to provide a varied diet that increases patients', caregivers' and relatives' satisfaction. Tube feeding has shown no nutritional benefits in patients with advanced dementia. Therefore, we propose assisted oral feeding as the most natural and appropriate form of feeding in these patients, always respecting their previously expressed wishes.
An Alternative Approach to Teaching Statistics to Dental Students.
ERIC Educational Resources Information Center
Hutton, Jack G., Jr.; And Others
1982-01-01
Literature on statistics instruction in dental education indicates course guidelines are available, and computer-assisted instruction is recommended. Self-instruction with programed materials is recommended as an effective and less costly alternative. (Author/MSE)
Statistical physics approaches to understanding the firm growth problem
NASA Astrophysics Data System (ADS)
Fu, Dongfeng
This thesis applies statistical physics approaches to investigate quantitatively the size and growth of the complex system of business firms. We study the logarithm of the one-year growth rate of firms g ≡ log(S(t + 1)/S(t)), where S(t) and S(t + 1) are the sizes of firms in the years t and t + 1 measured in monetary values. Part I in this thesis reviews some main empirical results on firm size and firm growth based on different databases. They are (i) the size distribution of firms P(S) is found to be skewed (either log-normal or power-law depending on the database), (ii) the growth-rate distributions of firms P(g) are of Laplace form with power-law tails, (iii) the standard deviation of firm growth rates is related by a negative power-law to the firm size. The distribution of firm growth rates conditioned on firm size collapses onto a single curve, which implies that a universal functional form may exist to describe the distribution of firm growth rate. Part II models the Entry & Exit effect and firm proportional growth using a generalized preferential attachment model. The model assumes that a new firm enters the system at a constant rate and that a new unit enters/exits one of the existing firms preferentially; that is, the larger firms have a bigger probability of gaining the new unit and a bigger probability of losing a unit. The model successfully explains the observations: (i) the distribution of unit number P(K) in a firm is a power law with exponential tails, (ii) P(g) is of Laplace form with power-law tails with exponent 3. Part III studies the Merging & Splitting effect in the framework of Coase theory using a dynamic percolation model in a 2-dimensional lattice where each row represents a product and each column can represent a consumer; a cell is a potential transaction. The size of the firm is represented by the number of cells it covers in the lattice. The model explains the facts that P(S) is power-law, P(g) is tent
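The proportional-growth picture in Part II can be illustrated with a minimal simulation (a hypothetical sketch: the firm count, number of units, and volatility sigma are invented for illustration): each firm is a sum of units whose sizes are multiplied by independent lognormal factors, and the one-year growth rate g = log(S(t+1)/S(t)) is recorded for each firm.

```python
import math
import random

def simulate_growth_rates(n_firms=5000, n_units=20, sigma=0.3, seed=1):
    """Toy proportional (Gibrat-style) growth: a firm of n_units units has
    size S = sum of unit sizes; each unit grows by an independent lognormal
    factor, and we record g = log(S(t+1)/S(t)) for every firm."""
    rng = random.Random(seed)
    gs = []
    for _ in range(n_firms):
        units = [1.0] * n_units
        s0 = sum(units)
        s1 = sum(u * math.exp(rng.gauss(0.0, sigma)) for u in units)
        gs.append(math.log(s1 / s0))
    return gs
```

Aggregation over units narrows the growth-rate distribution relative to the per-unit volatility; making the number of units itself random (e.g. power-law distributed, as in the thesis model) is what fattens the tails of P(g) toward the Laplace-with-power-law-tails shape reported empirically.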
Advancing Instructional Communication: Integrating a Biosocial Approach
ERIC Educational Resources Information Center
Horan, Sean M.; Afifi, Tamara D.
2014-01-01
Celebrating 100 years of the National Communication Association necessitates that, as we commemorate our past, we also look toward our future. As part of a larger conversation about the future of instructional communication, this essay reinvestigates the importance of integrating biosocial approaches into instructional communication research. In…
Approaches for advancing scientific understanding of macrosystems
Levy, Ofir; Ball, Becky A.; Bond-Lamberty, Ben; Cheruvelil, Kendra S.; Finley, Andrew O.; Lottig, Noah R.; Surangi W. Punyasena,; Xiao, Jingfeng; Zhou, Jizhong; Buckley, Lauren B.; Filstrup, Christopher T.; Keitt, Tim H.; Kellner, James R.; Knapp, Alan K.; Richardson, Andrew D.; Tcheng, David; Toomey, Michael; Vargas, Rodrigo; Voordeckers, James W.; Wagner, Tyler; Williams, John W.
2014-01-01
The emergence of macrosystems ecology (MSE), which focuses on regional- to continental-scale ecological patterns and processes, builds upon a history of long-term and broad-scale studies in ecology. Scientists face the difficulty of integrating the many elements that make up macrosystems, which consist of hierarchical processes at interacting spatial and temporal scales. Researchers must also identify the most relevant scales and variables to be considered, the required data resources, and the appropriate study design to provide the proper inferences. The large volumes of multi-thematic data often associated with macrosystem studies typically require validation, standardization, and assimilation. Finally, analytical approaches need to describe how cross-scale and hierarchical dynamics and interactions relate to macroscale phenomena. Here, we elaborate on some key methodological challenges of MSE research and discuss existing and novel approaches to meet them.
Links to sources of cancer-related statistics, including the Surveillance, Epidemiology and End Results (SEER) Program, SEER-Medicare datasets, cancer survivor prevalence data, and the Cancer Trends Progress Report.
Advances and challenges in the attribution of climate impacts using statistical inference
NASA Astrophysics Data System (ADS)
Hsiang, S. M.
2015-12-01
We discuss recent advances, challenges, and debates in the use of statistical models to infer and attribute climate impacts, such as distinguishing effects of "climate" vs. "weather," accounting for simultaneous environmental changes along multiple dimensions, evaluating multiple sources of uncertainty, accounting for adaptation, and simulating counterfactual economic or social trajectories. We relate these ideas to recent findings linking temperature to economic productivity/violence and tropical cyclones to economic growth.
Advances in myelofibrosis: a clinical case approach
Mascarenhas, John O.; Orazi, Attilio; Bhalla, Kapil N.; Champlin, Richard E.; Harrison, Claire; Hoffman, Ronald
2013-01-01
Primary myelofibrosis is a member of the myeloproliferative neoplasms, a diverse group of bone marrow malignancies. Symptoms of myelofibrosis, particularly those associated with splenomegaly (abdominal distention and pain, early satiety, dyspnea, and diarrhea) and constitutional symptoms, represent a substantial burden to patients. Most patients eventually die from the disease, with a median survival ranging from approximately 5–7 years. Mutations in Janus kinase 2 (JAK2), a kinase that is essential for the normal development of erythrocytes, granulocytes, and platelets, notably the V617F mutation, have been identified in approximately 50% of patients with myelofibrosis. The approval of a JAK2 inhibitor in 2011 has improved the outlook of many patients with myelofibrosis and has changed the treatment landscape. This article focuses on some of the important issues in current myelofibrosis treatment management, including differentiation of myelofibrosis from essential thrombocythemia and polycythemia vera, updated data on the results of JAK2 inhibitor therapy, the role of epigenetic mechanisms in myelofibrosis pathogenesis, investigational therapies for myelofibrosis, and advances in hematopoietic stem cell transplant. Three myelofibrosis cases are included to underscore the issues in diagnosing and treating this complex disease. PMID:24091929
Europe's Neogene and Quaternary lake gastropod diversity - a statistical approach
NASA Astrophysics Data System (ADS)
Neubauer, Thomas A.; Georgopoulou, Elisavet; Harzhauser, Mathias; Mandic, Oleg; Kroh, Andreas
2014-05-01
During the Neogene Europe's geodynamic history gave rise to several long-lived lakes with conspicuous endemic radiations. However, such lacustrine systems are rare today as well as in the past compared to the enormous numbers of "normal" lakes. Most extant European lakes are mainly results of the Ice Ages and are due to their (geologically) temporary nature largely confined to the Pleistocene-Holocene. As glacial lakes are also geographically restricted to glacial regions (and their catchment areas) their preservation potential is fairly low. Also deposits of streams, springs, and groundwater, which today are inhabited by species-rich gastropod assemblages, are rarely preserved. Thus, the pre-Quaternary lacustrine record is biased towards long-lived systems, such as the Late Miocene Lake Pannon, the Early to Middle Miocene Dinaride Lake System, the Middle Miocene Lake Steinheim and several others. All these systems have been studied for more than 150 years concerning their mollusk inventories and the taxonomic literature is formidable. However, apart from few general overviews precise studies on the γ-diversities of the post-Oligocene European lake systems and the shifting biodiversity in European freshwater systems through space and time are entirely missing. Even for the modern faunas, literature on large-scale freshwater gastropod diversity in extant lakes is scarce and lacks a statistical approach. Our preliminary data suggest fundamental differences between modern and pre-Pleistocene freshwater biogeography in central Europe. A rather homogenous central European Pleistocene and Holocene lake fauna is contrasted by considerable provincialism during the early Middle Miocene. Aside from the ancient Dessaretes lakes of the Balkan Peninsula, Holocene lake faunas are dominated by planorbids and lymnaeids in species numbers. This composition differs considerably from many Miocene and Pliocene lake faunas, which comprise pyrgulid-, hydrobiid-, viviparid-, melanopsid
Statistical and Microscopic Approach to Gas Phase Chemical Kinetics.
ERIC Educational Resources Information Center
Perez, J. M.; Quereda, R.
1983-01-01
Describes advanced undergraduate laboratory exercise examining the dependence of the rate constants and the instantaneous concentrations with the nature and energy content in a gas-phase complex reaction. Computer program (with instructions and computation flow charts) used with the exercise is available from the author. (Author/JN)
A BAYESIAN STATISTICAL APPROACH FOR THE EVALUATION OF CMAQ
Bayesian statistical methods are used to evaluate Community Multiscale Air Quality (CMAQ) model simulations of sulfate aerosol over a section of the eastern US for 4-week periods in summer and winter 2001. The observed data come from two U.S. Environmental Protection Agency data ...
An Experimental Approach to Teaching and Learning Elementary Statistical Mechanics
ERIC Educational Resources Information Center
Ellis, Frank B.; Ellis, David C.
2008-01-01
Introductory statistical mechanics is studied for a simple two-state system using an inexpensive and easily built apparatus. A large variety of demonstrations, suitable for students in high school and introductory university chemistry courses, are possible. This article details demonstrations for exothermic and endothermic reactions, the dynamic…
BAYESIAN STATISTICAL APPROACHES FOR THE EVALUATION OF CMAQ
This research focuses on the application of spatial statistical techniques for the evaluation of the Community Multiscale Air Quality (CMAQ) model. The upcoming release version of the CMAQ model was run for the calendar year 2001 and is in the process of being evaluated by EPA an...
Analysis of Coastal Dunes: A Remote Sensing and Statistical Approach.
ERIC Educational Resources Information Center
Jones, J. Richard
1985-01-01
Remote sensing analysis and statistical methods were used to analyze the coastal dunes of Plum Island, Massachusetts. The research methodology used provides an example of a student project for remote sensing, geomorphology, or spatial analysis courses at the university level. (RM)
Statistical approaches to pharmacodynamic modeling: motivations, methods, and misperceptions.
Mick, R; Ratain, M J
1993-01-01
We have attempted to outline the fundamental statistical aspects of pharmacodynamic modeling. Unexpected yet substantial variability in effect in a group of similarly treated patients is the key motivation for pharmacodynamic investigations. Pharmacokinetic and/or pharmacodynamic factors may influence this variability. Residual variability in effect that persists after accounting for drug exposure indicates that further statistical modeling with pharmacodynamic factors is warranted. Factors that significantly predict interpatient variability in effect may then be employed to individualize the drug dose. In this paper we have emphasized the need to understand the properties of the effect measure and explanatory variables in terms of scale, distribution, and statistical relationship. The assumptions that underlie many types of statistical models have been discussed. The role of residual analysis has been stressed as a useful method to verify assumptions. We have described transformations and alternative regression methods that are employed when these assumptions are found to be in violation. Sequential selection procedures for the construction of multivariate models have been presented. The importance of assessing model performance has been underscored, most notably in terms of bias and precision. In summary, pharmacodynamic analyses are now commonly performed and reported in the oncologic literature. The content and format of these analyses have been variable. The goals of such analyses are to identify and describe pharmacodynamic relationships and, in many cases, to propose a statistical model. However, the appropriateness and performance of the proposed model are often difficult to judge. Table 1 displays suggestions (in a checklist format) for structuring the presentation of pharmacodynamic analyses, which reflect the topics reviewed in this paper. PMID:8269582
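A minimal example of the residual analysis the authors stress (illustrative only; the exposure-effect data in the test are fabricated): fit a simple linear model of effect on exposure and inspect the residuals for structure before trusting the pharmacodynamic relationship.

```python
def ols(x, y):
    """Ordinary least-squares fit of y ~ a + b*x.
    Returns the intercept a, slope b, and the residuals, whose pattern
    (curvature, fanning) signals violated assumptions and the need for a
    transformation or an alternative regression method."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    return a, b, resid
```

On exactly linear data the residuals vanish; systematic curvature in the residuals would instead suggest, for example, modeling effect against log exposure, in the spirit of the transformations the paper reviews.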
Accuracy Evaluation of a Mobile Mapping System with Advanced Statistical Methods
NASA Astrophysics Data System (ADS)
Toschi, I.; Rodríguez-Gonzálvez, P.; Remondino, F.; Minto, S.; Orlandini, S.; Fuller, A.
2015-02-01
This paper discusses a methodology to evaluate the precision and the accuracy of a commercial Mobile Mapping System (MMS) with advanced statistical methods. So far, the metric potentialities of this emerging mapping technology have been studied in a few papers, where generally the assumption is made that errors follow a normal distribution. In fact, this hypothesis should be carefully verified in advance, in order to test how well classic Gaussian statistics can adapt to datasets that are usually affected by asymmetrical gross errors. The workflow adopted in this study relies on a Gaussian assessment, followed by an outlier filtering process. Finally, non-parametric statistical models are applied, in order to achieve a robust estimation of the error dispersion. Among the different MMSs available on the market, the latest solution provided by RIEGL is here tested, i.e. the VMX-450 Mobile Laser Scanning System. The test-area is the historic city centre of Trento (Italy), selected in order to assess the system performance in dealing with a challenging and historic urban scenario. Reference measures are derived from photogrammetric and Terrestrial Laser Scanning (TLS) surveys. All datasets show a large lack of symmetry, which leads to the conclusion that the standard normal parameters are not adequate to assess this type of data. The use of non-normal statistics thus gives a more appropriate description of the data and yields results that meet the quoted a priori errors.
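The Gaussian-then-robust workflow described here can be condensed into two error summaries (a sketch, not the paper's exact estimators; the constant 1.4826 makes the median absolute deviation consistent with the standard deviation under normal errors):

```python
import statistics

def gaussian_summary(errors):
    """Classic parametric summary: mean and sample standard deviation.
    Both are strongly pulled by asymmetrical gross errors."""
    return statistics.mean(errors), statistics.stdev(errors)

def robust_summary(errors):
    """Non-parametric summary: median and normalized median absolute
    deviation (NMAD), a robust stand-in for the standard deviation when
    the error distribution is skewed or outlier-contaminated."""
    med = statistics.median(errors)
    nmad = 1.4826 * statistics.median(abs(e - med) for e in errors)
    return med, nmad
```

On a set of small residuals contaminated by a single gross error, the standard deviation explodes while the NMAD stays close to the dispersion of the inliers, which is why the robust summary better describes asymmetrical MMS error sets.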
Advancing Profiling Sensors with a Wireless Approach
Galvis, Alex; Russomanno, David J.
2012-01-01
The notion of a profiling sensor was first realized by a Near-Infrared (N-IR) retro-reflective prototype consisting of a vertical column of wired sparse detectors. This paper extends that prior work and presents a wireless version of a profiling sensor as a collection of sensor nodes. The sensor incorporates wireless sensing elements, a distributed data collection and aggregation scheme, and an enhanced classification technique. In this novel approach, a base station pre-processes the data collected from the sensor nodes and performs data re-alignment. A back-propagation neural network was also developed for the wireless version of the N-IR profiling sensor that classifies objects into the broad categories of human, animal or vehicle with an accuracy of approximately 94%. These enhancements improve deployment options as compared with the first generation of wired profiling sensors, possibly increasing the application scenarios for such sensors, including intelligent fence applications. PMID:23443371
PGT: A Statistical Approach to Prediction and Mechanism Design
NASA Astrophysics Data System (ADS)
Wolpert, David H.; Bono, James W.
One of the biggest challenges facing behavioral economics is the lack of a single theoretical framework that is capable of directly utilizing all types of behavioral data. One of the biggest challenges of game theory is the lack of a framework for making predictions and designing markets in a manner that is consistent with the axioms of decision theory. An approach in which solution concepts are distribution-valued rather than set-valued (i.e. equilibrium theory) has both capabilities. We call this approach Predictive Game Theory (or PGT). This paper outlines a general Bayesian approach to PGT. It also presents one simple example to illustrate the way in which this approach differs from equilibrium approaches in both prediction and mechanism design settings.
Statistical approach for subwavelength measurements with a conventional light microscope
Palanker, Daniel; Lewis, Aaron
1991-01-01
A method is developed theoretically that will permit subwavelength measurements of objects that differ from the surroundings by any contrast enhancing parameter, such as fluorescence, second harmonic generation, reflection et cetera, using a statistical analysis of a picture obtained with a conventional light microscope through a set of subwavelength apertures or by repeated scanning of a laser beam over a defined area. It is demonstrated that with this methodology information can be obtained on microdomains that are thirty times smaller than the diameter of the aperture. For example, for apertures that are 0.3 μm in diameter it is possible to measure the dimension of objects that are ∼10 nm. A technology is described by which it is possible to produce masks with the appropriate apertures. Instrumentation is described that would allow for the realization of these statistical methodologies with either apertures or scanning laser beams. PMID:19431808
Statistical approach for detecting cancer lesions from prostate ultrasound images
NASA Astrophysics Data System (ADS)
Houston, A. G.; Premkumar, Saganti B.; Babaian, Richard J.; Pitts, David E.
1993-07-01
Sequential digitized cross-sectional ultrasound image planes of several prostates have been studied at the pixel level during the past year. The statistical distribution of gray scale values in terms of simple statistics, sample means and sample standard deviations, have been considered for estimating the differences between cross-sectional image planes of the gland due to the presence of cancer lesions. Based on a variability measure, the results for identifying the presence of cancer lesions in the peripheral zone of the gland for 25 blind test cases were found to be 64% accurate. This accuracy is higher than that obtained by visual photo interpretation of the image data, though not as high as our earlier results were indicating. Axial-view ultrasound image planes of prostate glands were obtained from the apex to the base of the gland at 2 mm intervals. Results for the 25 different prostate glands, which include pathologically confirmed benign and cancer cases, are presented.
Batch Statistical Process Monitoring Approach to a Cocrystallization Process.
Sarraguça, Mafalda C; Ribeiro, Paulo R S; Santos, Adenilson O Dos; Lopes, João A
2015-12-01
Cocrystals are defined as crystalline structures composed of two or more compounds that are solid at room temperature and are held together by noncovalent bonds. Their main advantages are increased solubility, bioavailability, permeability, and stability, while the bioactivity of the active pharmaceutical ingredient is retained. The cocrystallization between furosemide and nicotinamide by solvent evaporation was monitored on-line using near-infrared spectroscopy (NIRS) as a process analytical technology tool. The near-infrared spectra were analyzed using principal component analysis. Batch statistical process monitoring was used to create control charts to follow the process trajectory and define control limits. Normal and non-normal operating condition batches were performed and monitored with NIRS. The use of NIRS associated with batch statistical process models allowed the detection of abnormal variations in critical process parameters, such as the amount of solvent or the amount of initial components present in the cocrystallization.
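The batch-monitoring idea can be sketched with Hotelling's T² on PCA scores (an illustrative sketch: the data, component count, and threshold below are placeholders, not the paper's NIR model): fit PCA on normal-operating-condition observations, then flag new observations whose scores lie far outside the fitted score variances.

```python
import numpy as np

def fit_pca_monitor(X, n_components=2):
    """Fit a PCA model on normal-operating-condition data X (rows =
    observations, e.g. spectra) and return the pieces needed to score
    new observations: mean, loadings, and score variances."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    P = Vt[:n_components].T                        # loadings (columns)
    var = (s[:n_components] ** 2) / (len(X) - 1)   # score variances
    return mu, P, var

def t2(x, mu, P, var):
    """Hotelling's T^2 of observation x under the fitted PCA model;
    large values indicate abnormal variation in the monitored process."""
    t = (x - mu) @ P
    return float(np.sum(t ** 2 / var))
```

Control limits would then be set from the T² distribution of the normal batches, so that a non-normal operating condition batch drives the statistic above the limit on its control chart.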
Class G cement in Brazil - A statistical approach
Rosa, F.C.; Coelho, O. Jr.; Parente, F.J. )
1993-09-01
Since 1975, Petrobras has worked with Brazilian Portland cement manufacturers to develop high-quality Class G cements. The Petrobras R and D Center has analyzed each batch of Class G cement manufactured by prequalified producers to API Spec. 10 standards and to Brazilian Assoc. of Technical Standards (ABNT) NBR 9831 standards. As a consequence, the Drilling Dept. at Petrobras now is supplied by three approved Class G cement factories strategically located in Brazil. This paper statistically analyzes test results on the basis of physical parameters of these Class G cements over 3 years. Statistical indices are reported to evaluate dispersion of the physical properties to obtain a reliability index for each Class G cement.
A statistical mechanics approach to autopoietic immune networks
NASA Astrophysics Data System (ADS)
Barra, Adriano; Agliari, Elena
2010-07-01
In this work we aim to bridge theoretical immunology and disordered statistical mechanics. We introduce a model for the behavior of B-cells which naturally merges the clonal selection theory and the autopoietic network theory as a whole. From the analysis of its features we recover several basic phenomena such as low-dose tolerance, dynamical memory of antigens and self/non-self discrimination.
Statistical approach to linewidth control in a logic fab
NASA Astrophysics Data System (ADS)
Pitter, Michael; Doleschel, Bernhard; Eibl, Ludwig; Steinkirchner, Erwin; Grassmann, Andreas
1999-04-01
We designed an adaptive line width controller specially tailored to the needs of a highly diversified logic fab. Simulations of different controller types fed with historic CD data show advantages of an SPC based controller over a Run by Run controller. This result confirms the SPC assumption that as long as a process is in statistical control, changing the process parameters will only increase the variability of the output.
A Statistical Mechanics Approach for a Rigidity Problem
NASA Astrophysics Data System (ADS)
Mesón, Alejandro; Vericat, Fernando
2007-01-01
We focus on the problem of establishing when a statistical mechanics system is determined by its free energy. A lattice system, modelled by a directed and weighted graph G (whose vertices are the spins and whose adjacency matrix M is given by the system transition rules), is considered. For a matrix A(q), depending on the system interactions, with entries in the ring Z[a^q : a ∈ R+] and such that A(0) equals the integral matrix M, the system free energy β_A(q) will be defined as the spectral radius of A(q). This kind of free energy will be related to that normally introduced in statistical mechanics as proportional to the logarithm of the partition function. Then we analyze under what conditions the following statement could be valid: if two systems have respectively matrices A, B and β_A = β_B, then the matrices are equivalent in some sense. Issues of this nature are known as rigidity problems. Our scheme, for finite interactions, closely follows that developed, within a dynamical context, by Pollicott and Weiss, but now emphasizing their statistical mechanics aspects and including a classification for Gibbs states associated to matrices A(q). Since this procedure is not applicable for infinite range interactions, we discuss a way to obtain also some rigidity results for long range potentials.
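The free-energy definition above is straightforward to evaluate numerically (a toy example: the 2x2 weighted matrix A(q) is invented for illustration, chosen so that A(0) reduces to a 0-1 adjacency matrix M as in the abstract):

```python
import numpy as np

def free_energy(A_of_q, qs):
    """beta_A(q): spectral radius of A(q), evaluated on a grid of q values."""
    return [float(max(abs(np.linalg.eigvals(A_of_q(q))))) for q in qs]

def A(q, a=2.0):
    """Toy interaction matrix with a Boltzmann-type weight a**q on one
    transition; A(0) is the integral adjacency matrix [[1, 1], [1, 0]]."""
    return np.array([[a ** q, 1.0],
                     [1.0,    0.0]])
```

At q = 0 the spectral radius of [[1, 1], [1, 0]] is the golden ratio, and increasing q (heavier weights) increases the free energy, consistent with its reading as a growth rate of the partition function.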
Advances in Quantum Trajectory Approaches to Dynamics
NASA Astrophysics Data System (ADS)
Askar, Attila
2001-03-01
The quantum fluid dynamics (QFD) formulation is based on the separation of the amplitude and phase of the complex wave function in Schrodinger's equation. The approach leads to conservation laws for an equivalent "gas continuum". The Lagrangian [1] representation corresponds to following the particles of the fluid continuum, i. e. calculating "quantum trajectories". The Eulerian [2] representation on the other hand, amounts to observing the dynamics of the gas continuum at the points of a fixed coordinate frame. The combination of several factors leads to a most encouraging computational efficiency. QFD enables the numerical analysis to deal with near monotonic amplitude and phase functions. The Lagrangian description concentrates the computation effort to regions of highest probability as an optimal adaptive grid. The Eulerian representation allows the study of multi-coordinate problems as a set of one-dimensional problems within an alternating direction methodology. An explicit time integrator limits the increase in computational effort with the number of discrete points to linear. Discretization of the space via local finite elements [1,2] and global radial functions [3] will be discussed. Applications include wave packets in four-dimensional quadratic potentials and two coordinate photo-dissociation problems for NOCl and NO2. [1] "Quantum fluid dynamics (QFD) in the Lagrangian representation with applications to photo-dissociation problems", F. Sales, A. Askar and H. A. Rabitz, J. Chem. Phys. 11, 2423 (1999) [2] "Multidimensional wave-packet dynamics within the fluid dynamical formulation of the Schrodinger equation", B. Dey, A. Askar and H. A. Rabitz, J. Chem. Phys. 109, 8770 (1998) [3] "Solution of the quantum fluid dynamics equations with radial basis function interpolation", Xu-Guang Hu, Tak-San Ho, H. A. Rabitz and A. Askar, Phys. Rev. E. 61, 5967 (2000)
Students' Attitudes toward Statistics across the Disciplines: A Mixed-Methods Approach
ERIC Educational Resources Information Center
Griffith, James D.; Adams, Lea T.; Gu, Lucy L.; Hart, Christian L.; Nichols-Whitehead, Penney
2012-01-01
Students' attitudes toward statistics were investigated using a mixed-methods approach, including a discovery-oriented qualitative methodology, among 684 undergraduate students across business, criminal justice, and psychology majors in which at least one course in statistics was required. Students were asked about their attitudes toward statistics and…
Source apportionment advances using polar plots of bivariate correlation and regression statistics
NASA Astrophysics Data System (ADS)
Grange, Stuart K.; Lewis, Alastair C.; Carslaw, David C.
2016-11-01
This paper outlines the development of enhanced bivariate polar plots that allow the concentrations of two pollutants to be compared using pair-wise statistics for exploring the sources of atmospheric pollutants. The new method combines bivariate polar plots, which provide source characteristic information, with pair-wise statistics that provide information on how two pollutants are related to one another. The pair-wise statistics implemented include weighted Pearson correlation and slope from two linear regression methods. The development uses a Gaussian kernel to locally weight the statistical calculations on a wind speed-direction surface together with variable-scaling. Example applications of the enhanced polar plots are presented by using routine air quality data for two monitoring sites in London, United Kingdom for a single year (2013). The London examples demonstrate that the combination of bivariate polar plots, correlation, and regression techniques can offer considerable insight into air pollution source characteristics, which would be missed if only scatter plots and mean polar plots were used for analysis. Specifically, using correlation and slopes as pair-wise statistics, long-range transport processes were isolated and black carbon (BC) contributions to PM2.5 for a kerbside monitoring location were quantified. Wider applications and future advancements are also discussed.
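As a rough illustration of the paper's core computation, the sketch below applies a Gaussian kernel on the wind speed-direction surface to locally weight a pairwise Pearson correlation between two pollutant series. It is a minimal stand-in, not the authors' implementation; the function name, the bandwidths `h_ws`/`h_wd`, and the synthetic data are all assumptions.

```python
import numpy as np

def kernel_weighted_corr(x, y, ws, wd, ws0, wd0, h_ws=1.0, h_wd=20.0):
    """Pearson correlation of pollutants x and y, locally weighted by a
    Gaussian kernel centred on the wind-speed/direction cell (ws0, wd0)."""
    dwd = (wd - wd0 + 180.0) % 360.0 - 180.0        # wrap angle to [-180, 180)
    w = np.exp(-0.5 * ((ws - ws0) / h_ws) ** 2 - 0.5 * (dwd / h_wd) ** 2)
    w = w / w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

rng = np.random.default_rng(0)
n = 500
ws = rng.uniform(0, 10, n)                # wind speed, m/s
wd = rng.uniform(0, 360, n)               # wind direction, degrees
x = rng.normal(size=n)                    # pollutant 1 (standardised)
y = 0.8 * x + 0.2 * rng.normal(size=n)    # pollutant 2, strongly related
r = kernel_weighted_corr(x, y, ws, wd, ws0=5.0, wd0=180.0)
```

Evaluating `kernel_weighted_corr` over a grid of (ws0, wd0) cells and colouring a polar plot by `r` is the essence of the enhanced-polar-plot surface.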
A Statistical Approach to Establishing Subsystem Environmental Test Specifications
NASA Technical Reports Server (NTRS)
Keegan, W. B.
1974-01-01
Results are presented of a research task to evaluate structural responses at various subsystem mounting locations during spacecraft level test exposures to the environments of mechanical shock, acoustic noise, and random vibration. This statistical evaluation is presented in the form of recommended subsystem test specifications for these three environments as normalized to a reference set of spacecraft test levels and are thus suitable for extrapolation to a set of different spacecraft test levels. The recommendations are dependent upon a subsystem's mounting location in a spacecraft, and information is presented on how to determine this mounting zone for a given subsystem.
Statistical Thermodynamic Approach to Vibrational Solitary Waves in Acetanilide
NASA Astrophysics Data System (ADS)
Vasconcellos, Áurea R.; Mesquita, Marcus V.; Luzzi, Roberto
1998-03-01
We analyze the behavior of the macroscopic thermodynamic state of polymers, centering on acetanilide. The nonlinear equations of evolution for the populations and the statistically averaged field amplitudes of CO-stretching modes are derived. The existence of excitations of the solitary wave type is evidenced. The infrared spectrum is calculated and compared with the experimental data of Careri et al. [Phys. Rev. Lett. 51, 104 (1983)], resulting in a good agreement. We also consider the situation of a nonthermally highly excited sample, predicting the occurrence of a large increase in the lifetime of the solitary wave excitation.
A Statistical Approach to Characterizing the Reliability of Systems Utilizing HBT Devices
NASA Technical Reports Server (NTRS)
Chen, Yuan; Wang, Qing; Kayali, Sammy
2004-01-01
This paper presents a statistical approach to characterizing the reliability of systems with HBT devices. The proposed approach utilizes the statistical reliability information of the HBT individual devices, along with the analysis on the critical paths of the system, to provide more accurate and more comprehensive reliability information about the HBT systems compared to the conventional worst-case method.
Physics-based statistical learning approach to mesoscopic model selection.
Taverniers, Søren; Haut, Terry S; Barros, Kipton; Alexander, Francis J; Lookman, Turab
2015-11-01
In materials science and many other research areas, models are frequently inferred without considering their generalization to unseen data. We apply statistical learning using cross-validation to obtain an optimally predictive coarse-grained description of a two-dimensional kinetic nearest-neighbor Ising model with Glauber dynamics (GD) based on the stochastic Ginzburg-Landau equation (sGLE). The latter is learned from GD "training" data using a log-likelihood analysis, and its predictive ability for various complexities of the model is tested on GD "test" data independent of the training data. Using two different error metrics, we perform a detailed analysis of the error between magnetization time trajectories simulated using the learned sGLE coarse-grained description and those obtained using the GD model. We show that both for equilibrium and out-of-equilibrium GD training trajectories, the standard phenomenological description using a quartic free energy does not always yield the most predictive coarse-grained model. Moreover, increasing the amount of training data can shift the optimal model complexity to higher values. Our results are promising in that they pave the way for the use of statistical learning as a general tool for materials modeling and discovery.
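The cross-validation-based model selection described above can be illustrated with a toy example: choose the polynomial degree that minimizes k-fold cross-validated error on noisy data from a cubic ground truth. This is a generic sketch of the technique, not the paper's sGLE/Glauber-dynamics setup; all names and data are hypothetical.

```python
import numpy as np

def cv_mse(x, y, degree, k=5):
    """Mean k-fold cross-validated MSE of a polynomial fit of given degree."""
    idx = np.arange(len(x))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        coef = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((y[fold] - np.polyval(coef, x[fold])) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2, 2, 80))
y = x ** 3 - x + rng.normal(scale=0.3, size=80)     # cubic truth + noise
scores = {d: cv_mse(x, y, d) for d in range(1, 8)}  # candidate complexities
best = min(scores, key=scores.get)                  # most predictive degree
```

The paper's observation that more training data can favour higher model complexity corresponds here to the CV-selected degree growing as the sample size increases.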
Fragmentation and exfoliation of 2-dimensional materials: a statistical approach.
Kouroupis-Agalou, Konstantinos; Liscio, Andrea; Treossi, Emanuele; Ortolani, Luca; Morandi, Vittorio; Pugno, Nicola Maria; Palermo, Vincenzo
2014-06-01
The main advantage for applications of graphene and related 2D materials is that they can be produced on large scales by liquid phase exfoliation. The exfoliation process can be regarded as a particular fragmentation process, in which the 2D character of the exfoliated objects significantly influences the fragmentation dynamics as compared to standard materials. Here, we used automated image processing of Atomic Force Microscopy (AFM) data to measure, one by one, the exact shape and size of thousands of nanosheets obtained by exfoliation of an important 2D material, boron nitride, and used different statistical functions to model the asymmetric distribution of nanosheet sizes typically obtained. Because the resolution of AFM is much finer than the average sheet size, the analysis could be performed directly at the nanoscale and at the single-sheet level. We find that the size distribution of the sheets at a given time follows a log-normal distribution, indicating that the exfoliation process has a "typical" scale length that changes with time and that exfoliation proceeds through the formation of a distribution of random cracks that follow Poisson statistics. The validity of this model implies that the size distribution does not depend on the different preparation methods used, but is a common feature in the exfoliation of this material and thus probably of other 2D materials.
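A minimal sketch of the statistical check implied above: if nanosheet sizes are log-normal, their logarithms should be approximately normal, and the log-mean gives the "typical" scale of the process. The data here are synthetic stand-ins for AFM-measured sheet areas, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
# stand-in for AFM-measured nanosheet areas (nm^2), log-normally distributed
areas = rng.lognormal(mean=5.0, sigma=0.7, size=2000)

logs = np.log(areas)
mu, sigma = logs.mean(), logs.std()
skew = np.mean(((logs - mu) / sigma) ** 3)   # ~0 if log-normality holds
typical_area = np.exp(mu)                    # the "typical" scale of the process
```

Tracking `mu` and `sigma` across sonication times is the kind of analysis that reveals the time-dependent "typical" scale length the abstract describes.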
Geo-Statistical Approach to Estimating Asteroid Exploration Parameters
NASA Technical Reports Server (NTRS)
Lincoln, William; Smith, Jeffrey H.; Weisbin, Charles
2011-01-01
NASA's vision for space exploration calls for a human visit to a near earth asteroid (NEA). Potential human operations at an asteroid include exploring a number of sites and analyzing and collecting multiple surface samples at each site. In this paper two approaches to formulation and scheduling of human exploration activities are compared given uncertain information regarding the asteroid prior to visit. In the first approach a probability model was applied to determine best estimates of mission duration and exploration activities consistent with exploration goals and existing prior data about the expected aggregate terrain information. These estimates were compared to a second approach or baseline plan where activities were constrained to fit within an assumed mission duration. The results compare the number of sites visited, number of samples analyzed per site, and the probability of achieving mission goals related to surface characterization for both cases.
Relative Costs and Statistical Power in the Extreme Groups Approach
ERIC Educational Resources Information Center
Abrahams, Norman M.; Alf, Edward F., Jr.
1978-01-01
The relationship between variables in applied and experimental research is often investigated by the use of extreme groups. Recent analytical work has provided an extreme group procedure that is more powerful than the standard correlational approach. The present article provides procedures to optimize power, and thus resources, in such studies.…
Statistical Approaches to Assessing Health and Healthcare Disparities.
Eberly, Lynn E; Cunanan, Kristen; Gurvich, Olga; Savik, Kay; Bliss, Donna Z; Wyman, Jean F
2015-12-01
Determining whether racial and ethnic disparities exist for a health-related outcome requires first specifying how outcomes will be measured and disparities calculated. We explain and contrast two common approaches for quantifying racial/ethnic disparities in health, with an applied example from nursing research. Data from a national for-profit chain of nursing homes in the US were analyzed to estimate racial/ethnic disparities in incidence of pressure ulcer within 90 days of nursing home admission. Two approaches were used and then compared: logistic regression and Peters-Belson. Advantages and disadvantages of each approach are given. Logistic regression can be used to quantify disparities as the odds of the outcome for one group relative to another. Peters-Belson can be used to quantify an overall disparity between groups as a risk difference and also provides the proportion of that disparity that is explained by available risk factors. Extensions to continuous outcomes, to survival outcomes, and to clustered data are outlined. Both logistic regression and Peters-Belson are easily implementable and interpretable and provide information on the predictors associated with the outcome. These disparity estimation methods have different interpretations, assumptions, strengths, and weaknesses, of which the researcher should be aware when planning an analytic approach.
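The Peters-Belson idea can be sketched in a few lines: fit the outcome model in the reference group only, use it to predict what the comparison group's rate would have been given its risk factors, and call the gap between observed and predicted the unexplained disparity. The toy logistic fit below (plain gradient ascent, synthetic data) is an illustrative assumption, not the authors' nursing-home analysis.

```python
import numpy as np

def fit_logistic(X, y, iters=3000, lr=0.1):
    """Logistic regression by plain gradient ascent (intercept included)."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)     # log-likelihood gradient
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

rng = np.random.default_rng(3)
n = 2000
group = rng.integers(0, 2, n)                 # 0 = reference, 1 = comparison
x = rng.normal(size=n) + 0.5 * group          # risk factor differs by group
logit = -1.0 + x + 0.5 * group                # true model has a group effect
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Peters-Belson: the model is fitted in the reference group only ...
w_ref = fit_logistic(x[group == 0].reshape(-1, 1), y[group == 0])
# ... and predicts what the comparison group's rate "would have been"
expected = predict(w_ref, x[group == 1].reshape(-1, 1)).mean()
observed = y[group == 1].mean()
unexplained = observed - expected             # disparity not explained by x
```

In contrast, a logistic-regression disparity would be read off the group coefficient as an odds ratio; Peters-Belson reports the gap on the risk-difference scale.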
Experimental Results on Statistical Approaches to Page Replacement Policies
LEUNG,VITUS J.; IRANI,SANDY
2000-12-08
This paper investigates the question of what statistical information about a memory request sequence is useful for making page replacement decisions. Our starting point is the Markov Request Model for page request sequences. Although the utility of modeling page request sequences with the Markov model has recently been put into doubt, we find that two previously suggested algorithms (Maximum Hitting Time and Dominating Distribution), which are based on the Markov model, work well on the trace data used in this study. Interestingly, both algorithms perform equally well despite the fact that the theoretical results for the two differ dramatically. We then develop succinct characteristics of memory access patterns in an attempt to approximate the simpler of the two algorithms. Finally, we investigate how to collect these characteristics in an online manner in order to have a purely online algorithm.
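The Maximum Hitting Time policy named above can be sketched for a known Markov request model: evict the cached page whose next request is expected to lie farthest in the future. The 3-page transition matrix below is a made-up toy, not trace data from the study.

```python
import numpy as np

def hitting_times(P, target):
    """Expected number of requests until page `target` is next requested,
    from each possible current page, under transition matrix P."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]
    h = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    out = np.zeros(n)
    out[others] = h
    return out

def evict_max_hitting_time(P, current, cached):
    """Evict the cached page expected to be re-requested farthest in the future."""
    return max(cached, key=lambda j: hitting_times(P, j)[current])

# toy 3-page request chain in which page 2 is rarely revisited
P = np.array([[0.50, 0.45, 0.05],
              [0.45, 0.50, 0.05],
              [0.50, 0.50, 0.00]])
victim = evict_max_hitting_time(P, current=0, cached=[1, 2])
```

The online question the paper raises amounts to estimating `P` (or a succinct surrogate for the hitting times) from the request stream itself.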
A statistical modeling approach for detecting generalized synchronization
Schumacher, Johannes; Haslinger, Robert; Pipa, Gordon
2012-01-01
Detecting nonlinear correlations between time series presents a hard problem for data analysis. We present a generative statistical modeling method for detecting nonlinear generalized synchronization. Truncated Volterra series are used to approximate functional interactions. The Volterra kernels are modeled as linear combinations of basis splines, whose coefficients are estimated via l1 and l2 regularized maximum likelihood regression. The regularization manages the high number of kernel coefficients and allows feature selection strategies yielding sparse models. The method's performance is evaluated on different coupled chaotic systems in various synchronization regimes and analytical results for detecting m:n phase synchrony are presented. Experimental applicability is demonstrated by detecting nonlinear interactions between neuronal local field potentials recorded in different parts of macaque visual cortex. PMID:23004851
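A reduced sketch of the method: build truncated Volterra features from lagged samples of a driver series, fit the target series with an l2-regularized (ridge) least-squares fit, and compare goodness of fit against an uncoupled control. The basis-spline kernel parameterization and the l1 feature-selection path of the actual method are omitted; names and data are hypothetical.

```python
import numpy as np

def volterra_features(x, lags=3):
    """First-order (lagged) and second-order (pairwise product) terms."""
    n = len(x)
    lin = np.column_stack([x[lags - 1 - k : n - k] for k in range(lags)])
    quad = [lin[:, i] * lin[:, j] for i in range(lags) for j in range(i, lags)]
    return np.column_stack([lin] + quad)

def ridge_r2(x, y, lags=3, lam=1e-3):
    """In-sample R^2 of an l2-regularized fit of y(t) on Volterra terms of x."""
    F = np.column_stack([np.ones(len(x) - lags + 1), volterra_features(x, lags)])
    yt = y[lags - 1:]
    w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ yt)
    resid = yt - F @ w
    return 1.0 - resid.var() / yt.var()

rng = np.random.default_rng(4)
x = rng.normal(size=1000)                         # driver series
y = 0.6 * x ** 2 + 0.3 * np.roll(x, 1) + 0.1 * rng.normal(size=1000)
r2_coupled = ridge_r2(x, y)                       # nonlinear coupling found
r2_null = ridge_r2(x, rng.normal(size=1000))      # independent series
```

A large gap between `r2_coupled` and `r2_null` flags a nonlinear functional relationship that a linear correlation on `x` and `y` alone could miss.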
Nonextensive statistical mechanics approach to electron trapping in degenerate plasmas
NASA Astrophysics Data System (ADS)
Mebrouk, Khireddine; Gougam, Leila Ait; Tribeche, Mouloud
2016-06-01
The electron trapping in a weakly nondegenerate plasma is reformulated and re-examined by incorporating the nonextensive entropy prescription. Using the q-deformed Fermi-Dirac distribution function including the quantum as well as the nonextensive statistical effects, we derive a new generalized electron density with a new contribution proportional to the electron temperature T, which may dominate the usual thermal correction (∼T2) at very low temperatures. To make the physics behind the effect of this new contribution more transparent, we analyze the modifications arising in the propagation of ion-acoustic solitary waves. Interestingly, we find that due to the nonextensive correction, our plasma model allows the possibility of existence of quantum ion-acoustic solitons with velocity higher than the Fermi ion-sound velocity. Moreover, as the nonextensive parameter q increases, the critical temperature Tc beyond which coexistence of compressive and rarefactive solitons sets in, is shifted towards higher values.
A statistical approach to the temporal development of orbital associations
NASA Astrophysics Data System (ADS)
Kastinen, D.; Kero, J.
2016-01-01
We have performed preliminary studies on the use of a Monte Carlo based statistical toolbox for small-body solar system dynamics to find trends in the temporal development of orbital associations. As part of this preliminary study, four different similarity functions were implemented and applied to the 21P/Giacobini-Zinner meteoroid stream and the resulting simulated meteor showers. The simulations indicate that the temporal behavior of orbital element distributions in the meteoroid stream and the meteor shower differ on century-long time scales. The configuration of the meteor shower remains compact for a long time and dissipates an order of magnitude more slowly than the stream. The main effect driving the shower dissipation is shown to be the addition of new trails to the stream.
Statistical mechanics approach to lock-key supramolecular chemistry interactions.
Odriozola, Gerardo; Lozada-Cassou, Marcelo
2013-03-01
In the supramolecular chemistry field, intuitive concepts such as molecular complementarity and molecular recognition are used to explain the mechanism of lock-key associations. However, these concepts lack a precise definition, and consequently this mechanism is not well defined and understood. Here we address the physical basis of this mechanism, based on formal statistical mechanics, through Monte Carlo simulation and compare our results with recent experimental data for charged or uncharged lock-key colloids. We find that, given the size range of the molecules involved in these associations, the entropy contribution, driven by the solvent, rules the interaction, over that of the enthalpy. A universal behavior for the uncharged lock-key association is found. Based on our results, we propose a supramolecular chemistry definition.
Statistical Approaches to Aerosol Dynamics for Climate Simulation
Zhu, Wei
2014-09-02
In this work, we introduce two general non-parametric regression analysis methods for errors-in-variable (EIV) models: the compound regression and the constrained regression. It is shown that these approaches are equivalent to each other and to the general parametric structural modeling approach. The advantages of these methods lie in their intuitive geometric representations, their distribution-free nature, and their ability to offer a practical solution when the ratio of the error variances is unknown. Each includes the classic non-parametric regression methods of ordinary least squares, geometric mean regression, and orthogonal regression as special cases. Both methods can be readily generalized to multiple linear regression with two or more random regressors.
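Geometric mean regression, one of the special cases named above, has a closed form: the slope is sign(r) times the ratio of standard deviations, which is symmetric in x and y and thus suited to the EIV setting where both variables carry error. A small pure-Python comparison with ordinary least squares, on made-up data:

```python
import math

def ols_slope(x, y):
    """Ordinary least squares slope (y regressed on x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    return sxy / sxx

def gmr_slope(x, y):
    """Geometric mean regression: slope = sign(r) * (sd_y / sd_x).
    Symmetric in x and y -- a practical choice when both variables are
    measured with error and the error-variance ratio is unknown."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return math.copysign(math.sqrt(syy / sxx), sxy)

x = [1.0, 2.1, 2.9, 4.2, 5.0]
y = [2.2, 3.9, 6.1, 8.0, 10.1]
b_ols = ols_slope(x, y)
b_gmr = gmr_slope(x, y)   # never smaller in magnitude than the OLS slope
```

Because the OLS slope equals r·(sd_y/sd_x) and |r| ≤ 1, OLS attenuates the slope when x is noisy; GMR is one standard way to counter that attenuation.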
Inverse problems and computational cell metabolic models: a statistical approach
NASA Astrophysics Data System (ADS)
Calvetti, D.; Somersalo, E.
2008-07-01
In this article, we give an overview of the Bayesian modelling of metabolic systems at the cellular and subcellular level. The models are based on detailed description of key biochemical reactions occurring in tissue, which may in turn be compartmentalized into cytosol and mitochondria, and of transports between the compartments. The classical deterministic approach which models metabolic systems as dynamical systems with Michaelis-Menten kinetics, is replaced by a stochastic extension where the model parameters are interpreted as random variables with an appropriate probability density. The inverse problem of cell metabolism in this setting consists of estimating the density of the model parameters. After discussing some possible approaches to solving the problem, we address the issue of how to assess the reliability of the predictions of a stochastic model by proposing an output analysis in terms of model uncertainties. Visualization modalities for organizing the large amount of information provided by the Bayesian dynamic sensitivity analysis are also illustrated.
Towards an integrated statistical approach to exoplanetary spectroscopy
NASA Astrophysics Data System (ADS)
Waldmann, Ingo Peter; Morello, Giuseppe; Rocchetto, Marco; Varley, Ryan; Tsiaras, Angelos; Tinetti, Giovanna
2015-08-01
Within merely two decades, observing the atmospheres of extrasolar worlds went from the realms of science fiction to a quotidian reality. The speed of progress is truly staggering. In the early days of atmospheric characterisation, data were often sparse with low signal-to-noise (S/N), and past analyses were somewhat heuristic. As the field matures, with successful space and ground-based instruments producing a steady increase in data, we must also upgrade our data analysis and interpretation techniques from their "ad-hoc" beginnings to a solid statistical foundation. For low to mid S/N observations, we are prone to two sources of bias: 1) prior selection in the data reduction and analysis; 2) prior constraints on the spectral retrieval. A unified set of tools addressing both points is required. To de-trend low S/N, correlated data, we demonstrated blind-source-separation (BSS) machine learning techniques to be a significant step forward, both in photometry (Waldmann 2012, Morello 2015, Morello, Waldmann et al. 2014) and spectroscopy (Waldmann 2012, 2014, Waldmann et al. 2013). BSS finds applications in fields as diverse as medical imaging and cosmology. Applied to exoplanets, it allows us to resolve de-trending biases and demonstrate consistency between data sets that were previously found to be highly discrepant and subject to much debate. For the interpretation of the de-trended data, we developed a novel Bayesian atmospheric retrieval suite, Tau-REx (Waldmann et al. 2015a,b, Rocchetto et al. 2015). Tau-REx implements unbiased prior selection via custom-built pattern recognition software. A full subsequent mapping of the likelihood space (using cluster computing) allows us, for the first time, to fully study degeneracies and biases in emission and transmission spectroscopy. The development of a coherent end-to-end infrastructure is paramount to the characterisation of ever smaller and fainter foreign worlds. In this conference, I
Application of statistical physics approaches to complex organizations
NASA Astrophysics Data System (ADS)
Matia, Kaushik
The first part of this thesis studies two different kinds of financial markets, namely, the stock market and the commodity market. Stock price fluctuations display certain scale-free statistical features that are not unlike those found in strongly-interacting physical systems. The possibility that new insights can be gained using concepts and methods developed to understand scale-free physical phenomena has stimulated considerable research activity in the physics community. In the first part of this thesis a comparative study of stocks and commodities is performed in terms of probability density function and correlations of stock price fluctuations. It is found that the probability density of the stock price fluctuation has a power law functional form with an exponent 3, which is similar across different markets around the world. We present an autoregressive model to explain the origin of the power law functional form of the probability density function of the price fluctuation. The first part also presents the discovery of unique features of the Indian economy, which we find displays a scale-dependent probability density function. In the second part of this thesis we quantify the statistical properties of fluctuations of complex systems like business firms and world scientific publications. We analyze class size of these systems mentioned above where units agglomerate to form classes. We find that the width of the probability density function of growth rate decays with the class size as a power law with an exponent beta which is universal in the sense that beta is independent of the system studied. We also identify two other scaling exponents, gamma connecting the unit size to the class size and gamma connecting the number of units to the class size, where products are units and firms are classes. Finally we propose a generalized preferential attachment model to describe the class size distribution. This model is successful in explaining the growth rate and class
Bayesian Statistical Approach To Binary Asteroid Orbit Determination
NASA Astrophysics Data System (ADS)
Dmitrievna Kovalenko, Irina; Stoica, Radu S.
2015-08-01
Orbit determination from observations is one of the classical problems in celestial mechanics. Deriving the trajectory of a binary asteroid with high precision is much more complicated than that of a single asteroid. Here we present a method of orbit determination based on a Markov chain Monte Carlo (MCMC) algorithm. This method can be used for preliminary orbit determination with a relatively small number of observations, or for adjusting a previously determined orbit. The problem consists in determining a conditional a posteriori probability density given the observations. Applying Bayesian statistics, the a posteriori probability density of the binary asteroid orbital parameters is proportional to the product of the a priori and likelihood probability densities. The likelihood function is related to the noise probability density and can be calculated from O-C deviations (Observed minus Calculated positions). The optionally used a priori probability density takes into account information about the population of discovered asteroids and is used to constrain the phase space of possible orbits. As the MCMC method, the Metropolis-Hastings algorithm has been applied, adding a globally convergent coefficient. The sequence of possible orbits is derived by sampling each orbital parameter and applying the acceptance criteria. The method allows one to determine the phase space of every possible orbit considering each parameter. It can also be used to derive the single orbit with the highest probability density of orbital elements.
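The MCMC scheme described can be sketched with a one-parameter toy: a Metropolis-Hastings random walk over a "period" parameter, with a Gaussian likelihood built from O-C residuals. The sinusoidal observation model below is a deliberate simplification standing in for a real binary-asteroid orbit model, and all parameter values are invented.

```python
import math
import random

def log_likelihood(period, times, obs, sigma=0.05):
    """Gaussian log-likelihood of O-C residuals for a toy sinusoidal
    observation model (a stand-in for a real binary-orbit model)."""
    ll = 0.0
    for t, o in zip(times, obs):
        calc = math.sin(2.0 * math.pi * t / period)   # "Calculated" position
        ll -= 0.5 * ((o - calc) / sigma) ** 2         # "Observed" minus it
    return ll

def metropolis_hastings(times, obs, p0, n_steps=6000, step=0.01, seed=5):
    rng = random.Random(seed)
    p, lp = p0, log_likelihood(p0, times, obs)
    samples = []
    for _ in range(n_steps):
        q = p + rng.gauss(0.0, step)                  # random-walk proposal
        lq = log_likelihood(q, times, obs)
        if math.log(rng.random() + 1e-300) < lq - lp: # accept/reject
            p, lp = q, lq
        samples.append(p)
    return samples

rng = random.Random(6)
true_period = 3.7
times = [0.25 * i for i in range(40)]
obs = [math.sin(2.0 * math.pi * t / true_period) + rng.gauss(0.0, 0.05)
       for t in times]
samples = metropolis_hastings(times, obs, p0=3.6)
posterior_mean = sum(samples[2000:]) / len(samples[2000:])
```

The retained post-burn-in samples map out the phase space of plausible parameter values; the full method does this jointly over all orbital elements with an optional population prior.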
Melting entropy of nanocrystals: an approach from statistical physics.
Safaei, A; Attarian Shandiz, M
2010-12-21
Considering the size effect on the equations obtained from statistical mechanical theories for the entropy of the crystal and liquid phases, a new model has been developed for the melting entropy of nanocrystals, including the effects of the quasi-harmonic, anharmonic and electronic components of the overall melting entropy. Then, using our suggested new proportionality between the melting point and the entropy temperature (θ(0)), the melting entropy of nanocrystals has been obtained in terms of their melting point. Moreover, for the first time, the size dependency of the electronic component of the overall melting entropy, arising from the change in the electronic ground state of the nanocrystal upon melting, has been taken into account to calculate the melting entropy of nanocrystals. When the effect of the electronic component is neglected, the present model corroborates the previous model for the size-dependent melting entropy of crystals presented by Jiang and Shi. The present model has been validated against the available computer simulation results for Ag and V nanoparticles. Moreover, a fairly constant function has been introduced which couples the melting temperature, the entropy temperature and the atomic density of elements to each other.
Statistical approach to anatomical landmark extraction in AP radiographs
NASA Astrophysics Data System (ADS)
Bernard, Rok; Pernus, Franjo
2001-07-01
A novel method for the automated extraction of important geometrical parameters of the pelvis and hips from APR images is presented. The shape and intensity variations in APR images are encompassed by the statistical shape and appearance models built from a set of training images for each of the three anatomies, i.e., pelvis, right and left hip, separately. The identification of the pelvis and hips is defined as a flexible object recognition problem, which is solved by generating anatomically plausible object instances and matching them to the APR image. The criterion function minimizes the resulting match error and considers the object topology. The obtained flexible object defines the positions of anatomical landmarks, which are further used to calculate the hip joint contact stress. A leave-one-out test was used to evaluate the performance of the proposed method on a set of 26 APR images. The results show the method is able to properly treat image variations and can reliably and accurately identify anatomies in the image and extract the anatomical landmarks needed in the hip joint contact stress calculation.
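The statistical shape model underlying the method is essentially PCA on training landmark configurations: new, anatomically plausible instances are generated as the mean shape plus small multiples of the leading eigenmodes, and matching such instances to the image drives the landmark extraction. A synthetic-data sketch (the landmark vectors and the single mode of variation are invented):

```python
import numpy as np

rng = np.random.default_rng(9)
# invented training set: 30 shapes, 10 landmarks each, flattened to 20-vectors
mean_shape = rng.uniform(0.0, 100.0, 20)
mode = rng.normal(size=20)                      # one true mode of variation
shapes = (mean_shape
          + rng.normal(size=(30, 1)) * 5.0 * mode
          + rng.normal(scale=0.1, size=(30, 20)))

mu = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - mu, full_matrices=False)
explained = S ** 2 / np.sum(S ** 2)             # variance per eigenmode
b = 2.0 * S[0] / np.sqrt(len(shapes))           # plausible mode-1 amplitude
instance = mu + b * Vt[0]                       # a generated shape instance
```

Constraining the mode amplitudes `b` to a few standard deviations is what keeps generated pelvis or hip instances anatomically plausible during matching.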
A statistical approach to urban stormwater detention planning
Segarra, R.I.
1988-01-01
A statistical model was developed to study the long-term behavior of a stormwater detention unit. This unit stores a portion of the incoming runoff, corresponding to the empty space available in the unit, from which runoff is pumped to a treatment plant. The objective is to avoid, as much as possible, the discharge of untreated runoff to receiving bodies of water. The model was developed by considering the arrival of independent runoff events at the urban catchment. The process variables of event depth, duration, and interevent time were treated as independent, identically distributed random variables. A storage equation was formulated from which the probability of detention unit overflow was obtained. With this distribution it was possible to define the trap efficiency of the unit in terms of the long-term fraction of the runoff volume trapped by the storage unit. A pollutant load model was also formulated, based on a first-order washoff model. This model was used to define pollutant control isoquants. Optimal values of the required storage capacity and treatment rate were obtained by treating the isoquants as production functions.
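The storage formulation can be mimicked with a small Monte Carlo: independent runoff events with exponential depths and interevent times arrive at a unit that is pumped down at a constant treatment rate, and trap efficiency is the long-run trapped fraction of runoff volume. The parameters and distributions below are illustrative assumptions, not the paper's calibrated model, which derives the overflow probability analytically.

```python
import random

def trap_efficiency(capacity, treat_rate, n_events=20000,
                    mean_depth=1.0, mean_interevent=10.0, seed=7):
    """Long-run fraction of runoff volume captured by a detention unit
    emptied at `treat_rate` between independent exponential runoff events."""
    rng = random.Random(seed)
    stored = trapped = total = 0.0
    for _ in range(n_events):
        dt = rng.expovariate(1.0 / mean_interevent)
        stored = max(0.0, stored - treat_rate * dt)  # pumped to treatment
        depth = rng.expovariate(1.0 / mean_depth)
        captured = min(depth, capacity - stored)     # excess overflows
        stored += captured
        trapped += captured
        total += depth
    return trapped / total

eff_small = trap_efficiency(capacity=0.5, treat_rate=0.01)
eff_large = trap_efficiency(capacity=5.0, treat_rate=0.5)
```

Sweeping `capacity` against `treat_rate` at a fixed target efficiency traces out exactly the kind of pollutant-control isoquant the abstract describes.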
Modeling Insurgent Dynamics Including Heterogeneity. A Statistical Physics Approach
NASA Astrophysics Data System (ADS)
Johnson, Neil F.; Manrique, Pedro; Hui, Pak Ming
2013-05-01
Despite the myriad complexities inherent in human conflict, a common pattern has been identified across a wide range of modern insurgencies and terrorist campaigns involving the severity of individual events, namely an approximate power law x^(-α) with exponent α ≈ 2.5. We recently proposed a simple toy model to explain this finding, built around the reported loose and transient nature of operational cells of insurgents or terrorists. Although it reproduces the 2.5 power law, this toy model assumes every actor is identical. Here we generalize this toy model to incorporate individual heterogeneity while retaining the model's analytic solvability. In the case of kinship or team rules guiding the cell dynamics, we find that this 2.5 analytic result persists; however, an interesting new phase transition emerges whereby the cell distribution undergoes a transition to a phase in which the individuals become isolated and hence all the cells have spontaneously disintegrated. Apart from extending our understanding of the empirical 2.5 result for insurgencies and terrorism, this work illustrates how other statistical physics models of human grouping might usefully be generalized in order to explore the effect of diverse human social, cultural or behavioral traits.
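The "loose and transient cells" mechanism can be sketched as a coalescence-fragmentation process: cells merge (picked with probability proportional to size) and occasionally disintegrate into isolated actors. The sketch below is the homogeneous-actor version with illustrative parameters; it shows the mechanics only and does not reproduce the paper's analytic results.

```python
import numpy as np

# Coalescence-fragmentation sketch: with probability nu a size-biased cell
# fragments into singletons; otherwise two size-biased cells merge.
# N and nu are illustrative assumptions.
rng = np.random.default_rng(1)

N = 1000          # total number of actors
nu = 0.05         # fragmentation probability per step, assumed
cells = [1] * N   # start from isolated actors

def pick_size_biased(cells, rng):
    sizes = np.array(cells, dtype=float)
    return rng.choice(len(cells), p=sizes / sizes.sum())

for _ in range(10_000):
    if rng.random() < nu:                       # fragmentation
        i = pick_size_biased(cells, rng)
        s = cells.pop(i)
        cells.extend([1] * s)                   # cell disintegrates into singletons
    elif len(cells) > 1:                        # coalescence
        i = pick_size_biased(cells, rng)
        j = pick_size_biased(cells, rng)
        while j == i:
            j = pick_size_biased(cells, rng)
        a, b = sorted((int(i), int(j)), reverse=True)
        merged = cells.pop(a) + cells.pop(b)
        cells.append(merged)

sizes = np.array(cells)
print("largest cell:", sizes.max(), "number of cells:", len(sizes))
```

The steady-state cell-size distribution of this kind of process is what carries the heavy tail; heterogeneity and kinship/team rules are the paper's extensions on top of it.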
Statistical approaches to short-term electricity forecasting
NASA Astrophysics Data System (ADS)
Kellova, Andrea
The study of short-term forecasting of electricity demand has played a key role in the economic optimization of the electric energy industry and is essential for power systems planning and operation. In electric energy markets, accurate short-term forecasting of electricity demand is necessary mainly for economic operations. We focus on the question of electricity demand forecasting in the Czech Republic. Firstly, we describe the current structure and organization of the Czech, as well as the European, electricity market. Secondly, we provide a comprehensive description of the most powerful external factors influencing electricity consumption. The choice of the most appropriate model is conditioned by these demand-determining factors. Thirdly, we build several types of multivariate forecasting models, both linear and nonlinear: linear regression models and artificial neural networks, respectively. Finally, we compare the forecasting power of both kinds of models using several statistical accuracy measures. Our results suggest that although electricity demand forecasting in the Czech Republic is, for the years considered, a nonlinear rather than a linear problem, for practical purposes simple linear models with nonlinear inputs can be adequate. This is confirmed by the values of the empirical loss function applied to the forecasting results.
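A "simple linear model with nonlinear inputs", as the abstract puts it, can be sketched as ordinary least squares on engineered regressors: temperature, squared temperature (capturing the U-shaped heating/cooling response), and a calendar dummy. The data below are synthetic and the coefficients are illustrative assumptions.

```python
import numpy as np

# OLS load model with a nonlinear input (squared temperature) and a weekend
# dummy, fitted to synthetic daily data. All parameters are illustrative.
rng = np.random.default_rng(2)

n = 365
temp = 10 + 12 * np.sin(2 * np.pi * np.arange(n) / 365) + rng.normal(0, 2, n)
weekend = (np.arange(n) % 7 >= 5).astype(float)
# synthetic demand: U-shaped in temperature, lower on weekends
load = 50 + 0.08 * (temp - 15) ** 2 - 6 * weekend + rng.normal(0, 1.5, n)

X = np.column_stack([np.ones(n), temp, temp ** 2, weekend])
beta, *_ = np.linalg.lstsq(X, load, rcond=None)
pred = X @ beta

mape = np.mean(np.abs((load - pred) / load)) * 100  # a common accuracy measure
print(f"in-sample MAPE: {mape:.2f}%")
```

Because the nonlinearity lives in the inputs rather than in the parameters, the model stays linear to fit, which is the practical appeal the abstract points to.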
Statistical mechanics approach to 1-bit compressed sensing
NASA Astrophysics Data System (ADS)
Xu, Yingying; Kabashima, Yoshiyuki
2013-02-01
Compressed sensing is a framework that makes it possible to recover an N-dimensional sparse vector x ∈ R^N from its linear transformation y ∈ R^M of lower dimensionality M < N. A scheme that further reduces the data size of the compressed expression by using only the sign of each entry of y to recover x was recently proposed; this is often termed 1-bit compressed sensing. Here, we analyze the typical performance of an l1-norm-based signal recovery scheme for 1-bit compressed sensing using statistical mechanics methods. We show that the signal recovery performance predicted by the replica method under the replica-symmetric ansatz, which turns out to be locally unstable for modes breaking the replica symmetry, is in good agreement with experimental results of an approximate recovery algorithm developed earlier. This suggests that the l1-based recovery problem typically has many local optima of similar recovery accuracy, which can be achieved by the approximate algorithm. We also develop another approximate recovery algorithm inspired by the cavity method. Numerical experiments show that when the density of nonzero entries in the original signal is relatively large, the new algorithm offers better performance than the abovementioned scheme and does so with a lower computational cost.
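The measurement model y = sign(Ax) can be demonstrated in a few lines. The recovery step below is a crude matched-filter baseline (not the l1-based or cavity-method schemes analyzed in the paper); it illustrates that only the direction of x is recoverable, since the 1-bit measurements discard its norm. Dimensions and sparsity are illustrative assumptions.

```python
import numpy as np

# 1-bit compressed sensing toy: keep only sign(A x). The matched-filter
# estimate A^T y / M recovers the direction of x for Gaussian A; it is a
# baseline for illustration, not the paper's recovery algorithm.
rng = np.random.default_rng(3)

N, M, k = 200, 400, 10          # dimension, measurements, nonzeros (assumed)
x = np.zeros(N)
support = rng.choice(N, k, replace=False)
x[support] = rng.normal(0, 1, k)
x /= np.linalg.norm(x)          # scale is lost in 1-bit measurements anyway

A = rng.normal(0, 1, (M, N))
y = np.sign(A @ x)              # 1-bit compressed measurements

x_hat = A.T @ y / M             # matched-filter estimate (direction only)
x_hat /= np.linalg.norm(x_hat)

corr = float(x_hat @ x)         # cosine similarity with the true direction
print(f"direction recovery (cosine): {corr:.3f}")
```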
Territorial developments based on graffiti: A statistical mechanics approach
NASA Astrophysics Data System (ADS)
Barbaro, Alethea B. T.; Chayes, Lincoln; D'Orsogna, Maria R.
2013-01-01
We study the well-known sociological phenomenon of gang aggregation and territory formation through an interacting agent system defined on a lattice. We introduce a two-gang Hamiltonian model where agents have red or blue affiliation but are otherwise indistinguishable. In this model, all interactions are indirect and occur only via graffiti markings, on-site as well as on nearest-neighbor locations. We also allow for gang proliferation and graffiti suppression. Within the context of this model, we show that gang clustering and territory formation may arise under specific parameter choices and that a phase transition may occur between well-mixed, possibly dilute configurations and well-separated, clustered ones. Using methods from statistical mechanics, we study the phase transition between these two qualitatively different scenarios. In the mean-field rendition of this model, we identify parameter regimes where the transition is first or second order. In all cases, we have found that the transitions are a consequence solely of the gang-to-graffiti couplings, implying that direct gang-to-gang interactions are not strictly necessary for gang territory formation; in particular, graffiti may be the sole driving force behind gang clustering. We further discuss possible sociological as well as ecological ramifications of our results.
Modelling parasite aggregation: disentangling statistical and ecological approaches.
Yakob, Laith; Soares Magalhães, Ricardo J; Gray, Darren J; Milinovich, Gabriel; Wardrop, Nicola; Dunning, Rebecca; Barendregt, Jan; Bieri, Franziska; Williams, Gail M; Clements, Archie C A
2014-05-01
The overdispersion in macroparasite infection intensity among host populations is commonly simulated using a constant negative binomial aggregation parameter. We describe an alternative to utilising the negative binomial approach and demonstrate important disparities in intervention efficacy projections that can come about from opting for pattern-fitting models that are not process-explicit. We present model output in the context of the epidemiology and control of soil-transmitted helminths due to the significant public health burden imposed by these parasites, but our methods are applicable to other infections with demonstrable aggregation in parasite numbers among hosts.
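The "constant negative binomial aggregation parameter" mentioned above is the k in the (mean m, aggregation k) parameterisation, with smaller k meaning stronger aggregation. The sketch below converts (m, k) into numpy's (n, p) form and confirms the overdispersion; the parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Negative binomial parasite counts with mean m and aggregation parameter k.
# numpy's negative_binomial takes (n, p); for this parameterisation
# n = k and p = k / (k + m). m and k below are illustrative assumptions.
rng = np.random.default_rng(4)

m, k = 20.0, 0.3           # mean burden and aggregation (smaller k = more aggregated)
n_hosts = 100_000

counts = rng.negative_binomial(n=k, p=k / (k + m), size=n_hosts)

mean = counts.mean()
vmr = counts.var() / mean  # variance-to-mean ratio; for NB, var = m + m^2/k
print(f"mean={mean:.1f}  var/mean={vmr:.1f}  (Poisson would give ~1)")
```

Pattern-fitting models hold k fixed; the paper's point is that process-explicit models can let aggregation emerge and change under intervention, which this fixed-k sampler cannot do.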
A Statistical Approach to Provide Individualized Privacy for Surveys.
Esponda, Fernando; Huerta, Kael; Guerrero, Victor M
2016-01-01
In this paper we propose an instrument for collecting sensitive data that allows for each participant to customize the amount of information that she is comfortable revealing. Current methods adopt a uniform approach where all subjects are afforded the same privacy guarantees; however, privacy is a highly subjective property with intermediate points between total disclosure and non-disclosure: each respondent has a different criterion regarding the sensitivity of a particular topic. The method we propose empowers respondents in this respect while still allowing for the discovery of interesting findings through the application of well-known inferential procedures.
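One classical way to see how individualized privacy can coexist with valid inference is randomized response with a per-respondent truth-telling probability p_i: each respondent picks her own privacy level, yet an unbiased population estimate is still recoverable. This is a standard mechanism used here for illustration; it is not claimed to be the authors' exact instrument.

```python
import numpy as np

# Individualized randomized response: respondent i answers truthfully with
# her chosen probability p_i, otherwise flips a fair coin. Since
# E[answer_i] = p_i*pi + (1 - p_i)/2, a per-respondent correction yields an
# unbiased prevalence estimate. All parameters are illustrative.
rng = np.random.default_rng(5)

n = 200_000
true_prev = 0.30                          # true prevalence of the sensitive trait
truth = rng.random(n) < true_prev
p = rng.uniform(0.3, 0.9, n)              # each respondent's chosen privacy level

tell_truth = rng.random(n) < p
coin = rng.random(n) < 0.5                # random answer when not telling truth
answer = np.where(tell_truth, truth, coin)

est = np.mean((answer - (1 - p) / 2) / p)  # unbiased estimate of prevalence
print(f"estimated prevalence: {est:.3f} (true {true_prev})")
```

Respondents with small p_i reveal less (their answers are noisier) at the cost of contributing more variance to the estimate, which is the disclosure/accuracy trade-off the abstract describes.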
Einstein's Approach to Statistical Mechanics: The 1902-04 Papers
NASA Astrophysics Data System (ADS)
Peliti, Luca; Rechtman, Raúl
2016-09-01
We summarize the papers published by Einstein in the Annalen der Physik in the years 1902-1904 on the derivation of the properties of thermal equilibrium on the basis of the mechanical equations of motion and of the calculus of probabilities. We point out the line of thought that led Einstein to an especially economical foundation of the discipline, and to focus on fluctuations of the energy as a possible tool for establishing the validity of this foundation. We also sketch a comparison of Einstein's approach with that of Gibbs, suggesting that although they obtained similar results, they had different motivations and interpreted them in very different ways.
Demarcating Advanced Learning Approaches from Methodological and Technological Perspectives
ERIC Educational Resources Information Center
Horvath, Imre; Peck, David; Verlinden, Jouke
2009-01-01
In the field of design and engineering education, the fast and expansive evolution of information and communication technologies is steadily converting traditional learning approaches into more advanced ones. Facilitated by Broadband (high bandwidth) personal computers, distance learning has developed into web-hosted electronic learning. The…
NASA Astrophysics Data System (ADS)
Vrac, M. R.; Hayhoe, K.; Stein, M.
2005-12-01
Downscaling methods try to derive local-scale values or characteristics from large-scale information such as AOGCM outputs. These methods can be useful for addressing climate change from a local point of view, by understanding how this change will interact with existing local environmental features. Regional climate assessments require continuous time series for multiple scenarios and AOGCM drivers, a computational task that is currently beyond the reach of most dynamical downscaling models. Here, advanced statistical clustering methods are applied to define original atmospheric patterns, which serve as the basis of a nonhomogeneous stochastic weather-typing approach. This method provides accurate and rapid simulations of local-scale precipitation features for 37 rain gauges in Illinois at low computational cost. Two different kinds of atmospheric states are defined: "circulation" patterns, developed by a model-based method applied to large-scale NCEP reanalysis data, and "precipitation" patterns, obtained through a hierarchical ascending clustering method applied directly to the observed rainfall amounts in Illinois with an original metric. By modelling the transition probabilities from one pattern to another with a nonhomogeneous Markov model, i.e., one influenced by large-scale atmospheric variables such as geopotential heights, humidity, and dew point temperature depression, we see that the precipitation states allow us to model conditional distributions of precipitation given the current weather state, and then to simulate local precipitation intensities, more accurately than with the traditional approach based on upper-air circulation patterns alone.
Statistical physics approaches to quantifying sleep-stage transitions
NASA Astrophysics Data System (ADS)
Lo, Chung-Chuan
Sleep can be viewed as a sequence of transitions in a very complex neuronal system. Traditionally, studies of the dynamics of sleep control have focused on the circadian rhythm of sleep-wake transitions or on the ultradian rhythm of the sleep cycle. However, very little is known about the mechanisms responsible for the time structure or even the statistics of the rapid sleep-stage transitions that appear without periodicity. I study the time dynamics of sleep-wake transitions for different species, including humans, rats, and mice, and find that the wake and sleep episodes exhibit completely different behaviors: the durations of wake episodes are characterized by a scale-free power-law distribution, while the durations of sleep episodes have an exponential distribution with a characteristic time scale. The functional forms of the distributions of the sleep and wake durations hold for human subjects of different ages and for subjects with sleep apnea. They also hold for all the species I investigate. Surprisingly, all species have the same power-law exponent for the distribution of wake durations, but the exponential characteristic time of the distribution of sleep durations changes across species. I develop a stochastic model which accurately reproduces our empirical findings. The model suggests that the difference between the dynamics of the sleep and wake states arises from the constraints on the number of microstates in the sleep-wake system. I develop a measure of asymmetry in sleep-stage transitions using a transition probability matrix. I find that both normal and sleep apnea subjects are characterized by two types of asymmetric sleep-stage transition paths, and that the sleep apnea group exhibits less asymmetry in the sleep-stage transitions.
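The two distributional forms reported above can be checked on synthetic durations: exponential sleep episodes with characteristic time tau, and power-law wake episodes sampled by inverse-CDF. The maximum-likelihood (Hill) estimator recovers the power-law exponent. Parameter values are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

# Exponential sleep durations vs power-law wake durations, with MLE fits.
# tau, alpha, and xmin are illustrative assumptions.
rng = np.random.default_rng(6)

n = 50_000
tau = 8.0                                   # sleep characteristic time (assumed)
sleep = rng.exponential(tau, n)

alpha, xmin = 2.2, 1.0                      # wake power-law exponent (assumed)
# inverse-CDF sampling for pdf ~ x^(-alpha), x >= xmin
wake = xmin * (1 - rng.random(n)) ** (-1 / (alpha - 1))

tau_hat = sleep.mean()                      # MLE of the exponential scale
alpha_hat = 1 + n / np.sum(np.log(wake / xmin))   # Hill MLE of the exponent
print(f"tau_hat={tau_hat:.2f}  alpha_hat={alpha_hat:.3f}")
```

In real hypnogram data the same two fits, applied per species or per subject group, would expose the pattern the abstract reports: a shared wake exponent but species-dependent sleep time scales.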
Evaluation of current statistical approaches for predictive geomorphological mapping
NASA Astrophysics Data System (ADS)
Miska, Luoto; Jan, Hjort
2005-04-01
Predictive models are increasingly used in geomorphology, but systematic evaluations of novel statistical techniques are still limited. The aim of this study was to compare the accuracy of generalized linear models (GLM), generalized additive models (GAM), classification tree analysis (CTA), artificial neural networks (ANN) and multivariate adaptive regression splines (MARS) in predictive geomorphological modelling. Five different distribution models for both non-sorted and sorted patterned ground were constructed on the basis of four terrain parameters and four soil variables. To evaluate the models, the original data set of 9997 squares of 1 ha in size was randomly divided into a model training set (70%, n = 6998) and a model evaluation set (30%, n = 2999). In general, active sorted patterned ground is clearly defined in upper fell areas with high slope angle and till soils. Active non-sorted patterned ground is more common in valleys with higher soil moisture and fine-scale concave topography. The predictive performance of each model was evaluated using the area under the receiver operating characteristic curve (AUC) and the Kappa value. The relatively high discrimination capacity of all models, AUC = 0.85-0.88 and Kappa = 0.49-0.56, implies that the models' predictions provide an acceptable index of sorted and non-sorted patterned ground occurrence. The best performance on the model calibration data for both data sets was achieved by CTA. However, when predictive mapping ability was explored through the evaluation data set, the accuracy of CTA decreased markedly compared to the other modelling techniques. On the model evaluation data, MARS performed marginally best. Our results show that digital elevation model and soil data can be used to predict relatively robustly the activity of patterned ground at a fine scale in a subarctic landscape. This indicates that predictive geomorphological modelling has the advantage of providing relevant and useful information on earth surface
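The two evaluation measures used above, AUC and Cohen's Kappa, are easy to compute from scratch. The sketch below does so on synthetic evaluation-set predictions (the scores and class balance are illustrative assumptions), using the rank-sum identity for AUC.

```python
import numpy as np

# AUC via the Mann-Whitney rank-sum identity, and Cohen's kappa at a 0.5
# threshold, on synthetic held-out predictions. Data are illustrative.
rng = np.random.default_rng(7)

n = 3000                                    # size of an evaluation set
y = rng.random(n) < 0.4                     # 1 = patterned ground present
score = 0.35 * y + rng.normal(0.4, 0.2, n)  # informative but noisy scores

def auc(y, s):
    # AUC = P(score of a random positive > score of a random negative)
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n1 = y.sum()
    n0 = len(s) - n1
    return (ranks[y].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def kappa(y, yhat):
    po = np.mean(y == yhat)                        # observed agreement
    pe = (np.mean(y) * np.mean(yhat)               # chance agreement
          + np.mean(~y) * np.mean(~yhat))
    return (po - pe) / (1 - pe)

a = auc(y, score)
k = kappa(y, score > 0.5)
print(f"AUC={a:.2f}  Kappa={k:.2f}")
```

Computing both on a held-out 30% split, as the study does, is what exposes overfitting of the kind reported for CTA.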
Jensen-Feynman approach to the statistics of interacting electrons.
Pain, Jean-Christophe; Gilleron, Franck; Faussurier, Gérald
2009-08-01
Faussurier [Phys. Rev. E 65, 016403 (2001)] proposed to use a variational principle relying on the Jensen-Feynman (or Gibbs-Bogoliubov) inequality in order to optimize the accounting for two-particle interactions in the calculation of canonical partition functions. It consists of a decomposition into a reference electron system and a first-order correction. The procedure appears to be very efficient for evaluating the free energy and the orbital populations. In this work, we present numerical applications of the method and propose to extend it using a reference energy which includes the interaction between two electrons inside a given orbital. This is possible thanks to our efficient recursion relation for the calculation of partition functions. We also show that a linear reference energy is usually sufficient to achieve good precision, and that the most promising way to improve the approach of Faussurier is to apply Jensen's inequality to a more convenient convex function.
Proteins top-down: a statistical mechanics approach
NASA Astrophysics Data System (ADS)
Hansen, Alex; Jensen, Mogens H.; Sneppen, Kim; Zocchi, Giovanni
2000-12-01
Biopolymers have many fascinating properties coming from a competition between universal features, well known in the physics of synthetic polymers, and specific elements which are crucial for the biological function of these molecules. Proteins are an example of this richness: proteins are heteropolymers consisting of hydrophobic and hydrophilic segments, and carry charges of both signs along the backbone. Simple models of such heteropolymers have been studied in connection with the folding and evolution of proteins. However, these models can only give a limited understanding of real proteins, and elements specific to proteins must be included. Our approach to this problem has been to construct a model of the specific self-interactions of proteins by defining a unique folding pathway. This model reproduces the thermodynamic properties of proteins.
Advanced Safeguards Approaches for New TRU Fuel Fabrication Facilities
Durst, Philip C.; Ehinger, Michael H.; Boyer, Brian; Therios, Ike; Bean, Robert; Dougan, A.; Tolk, K.
2007-12-15
This second report in a series of three reviews possible safeguards approaches for the new transuranic (TRU) fuel fabrication processes to be deployed at AFCF – specifically, the ceramic TRU (MOX) fuel fabrication line and the metallic (pyroprocessing) line. The most common TRU fuel has been fuel composed of mixed plutonium and uranium dioxide, referred to as “MOX”. However, under the Advanced Fuel Cycle projects custom-made fuels with higher contents of neptunium, americium, and curium may also be produced to evaluate if these “minor actinides” can be effectively burned and transmuted through irradiation in the ABR. A third and final report in this series will evaluate and review the advanced safeguards approach options for the ABR. In reviewing and developing the advanced safeguards approach for the new TRU fuel fabrication processes envisioned for AFCF, the existing international (IAEA) safeguards approach at the Plutonium Fuel Production Facility (PFPF) and the conceptual approach planned for the new J-MOX facility in Japan have been considered as a starting point of reference. The pyro-metallurgical reprocessing and fuel fabrication process at EBR-II near Idaho Falls also provided insight for safeguarding the additional metallic pyroprocessing fuel fabrication line planned for AFCF.
Predicting major element mineral/melt equilibria - A statistical approach
NASA Technical Reports Server (NTRS)
Hostetler, C. J.; Drake, M. J.
1980-01-01
Empirical equations have been developed for calculating the mole fractions of NaO0.5, MgO, AlO1.5, SiO2, KO0.5, CaO, TiO2, and FeO in a solid phase of initially unknown identity given only the composition of the coexisting silicate melt. The approach involves a linear multivariate regression analysis in which solid composition is expressed as a Taylor series expansion of the liquid compositions. An internally consistent precision of approximately 0.94 is obtained, that is, the nature of the liquidus phase in the input data set can be correctly predicted for approximately 94% of the entries. The composition of the liquidus phase may be calculated to better than 5 mol % absolute. An important feature of this 'generalized solid' model is its reversibility; that is, the dependent and independent variables in the linear multivariate regression may be inverted to permit prediction of the composition of a silicate liquid produced by equilibrium partial melting of a polymineralic source assemblage.
Biorefinery approach for coconut oil valorisation: a statistical study.
Bouaid, Abderrahim; Martínez, Mercedes; Aracil, José
2010-06-01
The biorefinery approach, consisting of transesterification using methanol with potassium hydroxide as catalyst, has been used to assess coconut oil valorisation. Due to the fatty acid composition of coconut oil, low (LMWME) and high (HMWME) molecular weight fatty acid methyl esters were obtained. Methyl laurate (78.30 wt.%) is the major component of the low molecular weight fraction. The influence of variables such as temperature and catalyst concentration on the production of both fractions has been studied and optimized by means of factorial design and response surface methodology (RSM). Two separate optimum conditions were found: a catalyst concentration of 0.9% and 1% and an operating temperature of 42.5 degrees C and 57 degrees C for LMWME and HMWME, respectively, giving conversions of 77.54% and 25.41%. The valuable components of LMWME may be recovered for sale as biolubricants or biosolvents; the remaining fraction could be used as biodiesel, matching the corresponding European Standard. PMID:20129777
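The factorial-design/RSM workflow above amounts to fitting a second-order model of conversion in temperature and catalyst concentration and locating its stationary point. The sketch below does this on synthetic data whose optimum is placed near the reported LMWME values; the data, curvatures, and noise level are illustrative assumptions.

```python
import numpy as np

# Response-surface sketch: fit a full quadratic in temperature T and
# catalyst concentration C by least squares, then solve grad = 0 for the
# stationary point. Synthetic data with an assumed optimum near (42.5, 0.9).
rng = np.random.default_rng(8)

n = 200
T = rng.uniform(30, 70, n)        # temperature, deg C
C = rng.uniform(0.5, 1.5, n)      # catalyst concentration, wt%
y = 77 - 0.02 * (T - 42.5) ** 2 - 30 * (C - 0.9) ** 2 + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), T, C, T ** 2, C ** 2, T * C])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point: [2*b3, b5; b5, 2*b4] @ [T, C]^T = [-b1, -b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
g = -np.array([b[1], b[2]])
T_opt, C_opt = np.linalg.solve(H, g)
print(f"fitted optimum: T={T_opt:.1f} degrees C, catalyst={C_opt:.2f} wt%")
```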
Improved Test Planning and Analysis Through the Use of Advanced Statistical Methods
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Maxwell, Katherine A.; Glass, David E.; Vaughn, Wallace L.; Barger, Weston; Cook, Mylan
2016-01-01
The goal of this work is, through computational simulations, to provide statistically-based evidence to convince the testing community that a distributed testing approach is superior to a clustered testing approach for most situations. For clustered testing, numerous, repeated test points are acquired at a limited number of test conditions. For distributed testing, only one or a few test points are requested at many different conditions. The statistical techniques of Analysis of Variance (ANOVA), Design of Experiments (DOE) and Response Surface Methods (RSM) are applied to enable distributed test planning, data analysis and test augmentation. The D-Optimal class of DOE is used to plan an optimally efficient single- and multi-factor test. The resulting simulated test data are analyzed via ANOVA and a parametric model is constructed using RSM. Finally, ANOVA can be used to plan a second round of testing to augment the existing data set with new data points. The use of these techniques is demonstrated through several illustrative examples. To date, many thousands of comparisons have been performed and the results strongly support the conclusion that the distributed testing approach outperforms the clustered testing approach.
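The contrast between clustered and distributed testing can be made concrete with a toy D-optimal selection: from a grid of candidate test conditions, choose runs that maximize det(X'X) for an assumed quadratic model. The greedy pass below is a simplification for illustration (real D-optimal software uses exchange algorithms); the candidate grid and run count are assumptions.

```python
import numpy as np

# Greedy D-optimal point selection for a one-factor quadratic model on
# coded levels in [-1, 1]. The design spreads runs across the range
# (distributed testing) rather than clustering them.
candidates = np.linspace(-1, 1, 21)                   # coded factor levels
X_all = np.column_stack([np.ones_like(candidates),
                         candidates, candidates ** 2])  # quadratic model matrix

chosen = [0, 10, 20]                                  # seed runs: low, mid, high
for _ in range(6):                                    # add 6 more runs greedily
    best, best_det = None, -np.inf
    for i in range(len(candidates)):
        Xt = X_all[chosen + [i]]
        d = np.linalg.det(Xt.T @ Xt)                  # D-criterion
        if d > best_det:
            best, best_det = i, d
    chosen.append(best)

levels = np.sort(candidates[chosen])
print("selected factor levels:", levels)
```

For this model the optimal support is the two extremes plus the center, so the greedy pass keeps returning to those three conditions instead of piling replicates onto one.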
ERIC Educational Resources Information Center
Perrett, Jamis J.
2012-01-01
This article demonstrates how textbooks differ in their description of the term "experimental unit". Advanced Placement Statistics teachers and students are often limited in their statistical knowledge by the information presented in their classroom textbook. Definitions and descriptions differ among textbooks as well as among different editions…
Whole-genome CNV analysis: advances in computational approaches
Pirooznia, Mehdi; Goes, Fernando S.; Zandi, Peter P.
2015-01-01
Accumulating evidence indicates that DNA copy number variation (CNV) is likely to make a significant contribution to human diversity and also play an important role in disease susceptibility. Recent advances in genome sequencing technologies have enabled the characterization of a variety of genomic features, including CNVs. This has led to the development of several bioinformatics approaches to detect CNVs from next-generation sequencing data. Here, we review recent advances in CNV detection from whole genome sequencing. We discuss the informatics approaches and current computational tools that have been developed as well as their strengths and limitations. This review will assist researchers and analysts in choosing the most suitable tools for CNV analysis as well as provide suggestions for new directions in future development. PMID:25918519
Sikirzhytskaya, Aliaksandra; Sikirzhytski, Vitali; Lednev, Igor K
2014-01-01
Body fluids are a common and important type of forensic evidence. In particular, the identification of menstrual blood stains is often a key step during the investigation of rape cases. Here, we report on the application of near-infrared Raman microspectroscopy for differentiating menstrual blood from peripheral blood. We observed that the menstrual and peripheral blood samples have similar but distinct Raman spectra. Advanced statistical analysis of the multiple Raman spectra that were automatically (Raman mapping) acquired from the 40 dried blood stains (20 donors for each group) allowed us to build a classification model with maximum (100%) sensitivity and specificity. We also demonstrated that, despite certain common constituents, menstrual blood can be readily distinguished from vaginal fluid. All of the classification models were verified using cross-validation methods. The proposed method overcomes the problems associated with currently used biochemical methods, which are destructive, time-consuming and expensive.
Papa, Lesther A; Litson, Kaylee; Lockhart, Ginger; Chassin, Laurie; Geiser, Christian
2015-01-01
Testing mediation models is critical for identifying potential variables that need to be targeted to effectively change one or more outcome variables. In addition, it is now common practice for clinicians to use multiple informant (MI) data in studies of statistical mediation. By coupling the use of MI data with statistical mediation analysis, clinical researchers can combine the benefits of both techniques. Integrating the information from MIs into a statistical mediation model creates various methodological and practical challenges. The authors review prior methodological approaches to MI mediation analysis in clinical research and propose a new latent variable approach that overcomes some limitations of prior approaches. An application of the new approach to mother, father, and child reports of impulsivity, frustration tolerance, and externalizing problems (N = 454) is presented. The results showed that frustration tolerance mediated the relationship between impulsivity and externalizing problems. The new approach allows for a more comprehensive and effective use of MI data when testing mediation models. PMID:26617536
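The core single-mediator calculation underlying any such model is the indirect effect a*b: path a from X to M, path b from M to Y controlling for X, with a bootstrap confidence interval. The sketch below uses synthetic single-informant data (it does not implement the authors' latent-variable multiple-informant model); variable names echo the application above, and the effect sizes are assumptions.

```python
import numpy as np

# Single-mediator model: a = slope of M on X; b = coefficient of M in the
# regression of Y on X and M; indirect effect = a*b, with a percentile
# bootstrap CI. Data are synthetic.
rng = np.random.default_rng(9)

n = 454                                      # sample size used in the study above
x = rng.normal(0, 1, n)                      # impulsivity
m = 0.5 * x + rng.normal(0, 1, n)            # frustration tolerance (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)  # externalizing problems

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]               # path a: X -> M
    X2 = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(X2, y, rcond=None)[0][2]   # path b: M -> Y given X
    return a * b

est = indirect(x, m, y)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect(x[idx], m[idx], y[idx]))
ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {est:.3f}, 95% CI [{ci_lo:.3f}, {ci_hi:.3f}]")
```

A CI excluding zero is the usual evidence for mediation; the MI latent-variable approach extends this by pooling mother, father, and child reports into the latent X, M, and Y.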
Virkler, Kelly; Lednev, Igor K
2009-09-15
Forensic analysis has become one of the fastest-growing areas of analytical chemistry in recent years. The ability to determine the species of origin of a body fluid sample is a crucial part of a forensic investigation. We introduce here a new technique which utilizes a modern analytical method based on the combination of Raman spectroscopy and advanced statistics to analyze the composition of blood traces from different species. Near-infrared (NIR) Raman spectroscopy was used to analyze multiple dry samples of human, canine, and feline blood for the ultimate application to forensic species identification. All of the spectra were combined into a single data matrix, and the number of principal components that described the system was determined using multiple statistical methods such as significant factor analysis (SFA), principal component analysis (PCA), and several cross-validation methods. Of the six principal components that were determined to be present, the first three, which contributed over 90% to the spectral data of the system, were used to form a three-dimensional scores plot that clearly showed significant separation between the three species groups. Ellipsoids representing a 99% confidence interval surrounding each species group showed no overlap. This technique using Raman spectroscopy is nondestructive and quick and can potentially be performed at the scene of a crime.
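The PCA scores-plot idea above can be sketched end to end on synthetic "spectra": mean-center the spectral matrix, take the SVD, and inspect the leading scores for group separation. The simulated peaks, group sizes, and noise level are illustrative assumptions, not Raman data.

```python
import numpy as np

# PCA via SVD on synthetic two-group spectra: scores = U * S after
# mean-centering; the leading component captures the between-group contrast.
rng = np.random.default_rng(10)

wav = np.linspace(0, 1, 300)                     # spectral axis (arbitrary units)
peak1 = np.exp(-((wav - 0.3) ** 2) / 0.002)      # band enriched in group A
peak2 = np.exp(-((wav - 0.7) ** 2) / 0.002)      # band enriched in group B

def simulate(n, w1, w2):
    return w1 * peak1 + w2 * peak2 + rng.normal(0, 0.05, (n, wav.size))

A = simulate(20, 1.0, 0.3)                       # 20 samples per group
B = simulate(20, 0.3, 1.0)
Xc = np.vstack([A, B])
Xc = Xc - Xc.mean(axis=0)                        # mean-center before PCA

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * S                                   # principal-component scores
explained = S ** 2 / np.sum(S ** 2)

sepA, sepB = scores[:20, 0], scores[20:, 0]      # group scores on PC1
print(f"PC1 explains {explained[0]:.0%}; group means on PC1: "
      f"{sepA.mean():.2f} vs {sepB.mean():.2f}")
```

With real spectra, plotting the first three score columns gives the three-dimensional scores plot described in the abstract, and the confidence ellipsoids are built from the within-group score covariances.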
Statistical methods and neural network approaches for classification of data from multiple sources
NASA Technical Reports Server (NTRS)
Benediktsson, Jon Atli; Swain, Philip H.
1990-01-01
Statistical methods for classification of data from multiple data sources are investigated and compared to neural network models. One problem with using conventional multivariate statistical approaches for classification of multisource data is that, in general, a multivariate distribution cannot be assumed for the classes in the data sources. Another common problem with statistical classification methods is that the data sources are not equally reliable. This means that the data sources need to be weighted according to their reliability, but most statistical classification methods do not have a mechanism for this. This research focuses on statistical methods which can overcome these problems: a method of statistical multisource analysis and consensus theory. Reliability measures for weighting the data sources in these methods are suggested and investigated. Secondly, this research focuses on neural network models. The neural networks are distribution-free, since no prior knowledge of the statistical distribution of the data is needed. This is an obvious advantage over most statistical classification methods. The neural networks also automatically take care of how much weight each data source should have. On the other hand, their training process is iterative and can take a very long time. Methods to speed up the training procedure are introduced and investigated. Experimental results of classification using both neural network models and statistical methods are given, and the approaches are compared based on these results.
Bhhatarai, Barun; Garg, Rajni; Gramatica, Paola
2010-07-12
Two parallel approaches to quantitative structure-activity relationships (QSAR) are predominant in the literature, one guided by mechanistic methods (including read-across) and another by the use of statistical methods. To bridge the gap between these two approaches and to verify their main differences, a comparative study of mechanistically relevant and statistically relevant QSAR models was performed on a case study of 158 cycloalkyl-pyranones, biologically active as inhibitors (Ki) of HIV protease. Firstly, Multiple Linear Regression (MLR) based models were developed starting from a limited set of molecular descriptors widely proven to have mechanistic interpretation. Then robust and predictive MLR models were developed on the same set using two different statistical approaches unbiased with respect to the input descriptors. Development of models based on the Statistical I method was guided by stepwise addition of descriptors, while Genetic Algorithm based selection of descriptors was used for Statistical II. Internal validation, the standard error of the estimate, and Fisher's significance test were performed for both statistical models. In addition, external validation was performed for the Statistical II model, and the Applicability Domain was verified as normally practiced in this approach. The relationships between the activity and the important descriptors selected in all the models were analyzed and compared. It is concluded that, despite the different type and number of input descriptors, and the applied descriptor selection tools or the algorithms used for developing the final model, the mechanistic and statistical approaches are comparable to each other in terms of quality and also in mechanistic interpretability of the modelling descriptors. Agreement can be observed between these two approaches, and a better result could be a consensus prediction from both models.
Dual-band, infrared buried mine detection using a statistical pattern recognition approach
Buhl, M.R.; Hernandez, J.E.; Clark, G.A.; Sengupta, S.K.
1993-08-01
The main objective of this work was to detect surrogate land mines, which were buried in clay and sand, using dual-band, infrared images. A statistical pattern recognition approach was used to achieve this objective. This approach is discussed and results of applying it to real images are given.
Spectral-Statistical Approach for Revealing Latent Regular Structures in DNA Sequence.
Chaley, Maria; Kutyrkin, Vladimir
2016-01-01
Methods of the spectral-statistical approach (2S-approach) for revealing latent periodicity in DNA sequences are described. Results are presented from the analysis of data in the HeteroGenome database, which collects sequences similar to approximate tandem repeats in the genomes of model organisms. As a further development of the spectral-statistical approach, techniques for recognizing latent profile periodicity are considered. These techniques are based on an extension of the notion of approximate tandem repeat. Examples are given of correlations between latent profile periodicity revealed in CDSs and structural-functional properties of the corresponding proteins.
On the Geometry of the Berry-Robbins Approach to Spin-Statistics
NASA Astrophysics Data System (ADS)
Papadopoulos, Nikolaos; Reyes-Lega, Andrés F.
2010-07-01
Within a geometric and algebraic framework, the structures related to the spin-statistics connection are discussed. A comparison with the Berry-Robbins approach is made. The underlying geometric structure constitutes additional support for this approach. In our work, a geometric approach to quantum indistinguishability is introduced which allows the treatment of single-valuedness of wave functions in a global, model-independent way.
NASA Astrophysics Data System (ADS)
Stefani, Jerry A.; Poarch, Scott; Saxena, Sharad; Mozumder, P. K.
1994-09-01
An advanced multivariable off-line process control system, which combines traditional Statistical Process Control (SPC) with feedback control, has been applied to the CVD tungsten process on an Applied Materials Centura reactor. The goal of the model-based controller is to compensate for shifts in the process and maintain the wafer state responses on target. In the present application the controller employs measurements made on test wafers by off-line metrology tools to track the process behavior. This is accomplished by using model-based SPC, which compares the measurements with predictions obtained from empirically derived process models. For CVD tungsten, a physically based modeling approach was employed, based on the kinetically limited H2 reduction of WF6. On detecting a statistically significant shift in the process, the controller calculates adjustments to the settings to bring the process responses back on target. To achieve this, a few additional test wafers are processed at settings slightly different from the nominal. This local experiment allows the models to be updated to reflect current process performance. The model updates are expressed as multiplicative or additive changes in the process inputs and a change in the model constant. This approach to model updating not only tracks the present process/equipment state but also provides some diagnostic capability regarding the cause of the process shift. The updated models are used by an optimizer to compute new settings to bring the responses back to target. The optimizer is capable of incrementally entering controllables into the strategy, reflecting the degree to which the engineer desires to manipulate each setting. The capability of the controller to compensate for shifts in the CVD tungsten process has been demonstrated: targets for film bulk resistivity and deposition rate were maintained while satisfying constraints on film stress and WF6 conversion efficiency.
Fatigue experience in advanced cancer: a phenomenological approach.
Potter, Joan
2004-01-01
This study describes the experience of fatigue in patients with advanced cancer. A phenomenological approach was adopted to allow a fuller expression of the phenomenon of fatigue in the sample of six patients. Five major themes were identified. These were physical, psychological, social and spiritual consequences of fatigue, and helpful and unhelpful coping strategies. The themes demonstrate the complexity of fatigue, which had an all-encompassing effect on patients' lives. The themes were interconnected and cannot be viewed independently. For these patients with advanced cancer the meaning of fatigue was intertwined with the process of adjusting to living with a terminal illness and ultimately death. It was impossible for them to separate the two. Coping strategies that would normally be of use to fatigued individuals were shown to have little or no benefit. Sensitive communication about fatigue and its meaning to the patient may assist adjustment and generate hope.
Unal, Cetin; Pasamehmetoglu, Kemal; Carmack, Jon
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model nuclear fuel systems is critical: in order to understand specific aspects of the nuclear fuel, fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding of, and predictive capability for simulating, the phase and microstructural behavior of the nuclear fuel system materials and matrices. The purpose of this paper is to identify a modeling and simulation approach that delivers predictive tools for advanced fuels development. Coordination between experimental nuclear fuel design and development experts and computational fuel modeling and simulation experts is a critical aspect of the approach; it naturally leads to an integrated, goal-oriented, science-based R&D program and strengthens both the experimental and computational efforts. The Advanced Fuels Campaign (AFC) and the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Fuels Integrated Performance and Safety Code (IPSC) projects are working together to determine experimental data and modeling needs. The primary objective of the NEAMS Fuels IPSC project is to deliver a coupled, three-dimensional, predictive computational platform for modeling the fabrication and both normal and abnormal operation of nuclear fuel pins and assemblies, applicable to both existing and future reactor fuel designs. The science-based program is pursuing the development of an integrated multi-scale, multi-physics modeling and simulation platform for nuclear fuels. This overview paper discusses the vision, goals, and approaches for developing and implementing this new capability.
NASA Astrophysics Data System (ADS)
Andronov, I. L.; Chinarova, L. L.; Kudashkina, L. S.; Marsakova, V. I.; Tkachenko, M. G.
2016-06-01
We have elaborated a set of new algorithms and programs for advanced time series analysis of (generally) multi-component, multi-channel observations with irregularly spaced times of observation, which is a common case for large photometric surveys. Our earlier methods for periodogram, scalegram, wavelet, and autocorrelation analysis, as well as for "running" or "sub-interval" local approximations, were reviewed in 2003ASPC..292..391A. For approximating the phase light curves of nearly periodic pulsating stars, we use a Trigonometric Polynomial (TP) fit of statistically optimal degree with initial period improvement using differential corrections (1994OAP.....7...49A). For determining the parameters of "characteristic points" (minima, maxima, crossings of some constant value, etc.) we use a set of methods reviewed in 2005ASPC..335...37A. Results of the analysis of the catalogs compiled using these programs are presented in 2014AASP....4....3A. For more complicated signals, we use "phenomenological approximations" with "special shapes" based on functions defined on sub-intervals rather than on the complete interval. For example, for Algol-type stars we developed the NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A, 2015JASS...32..127A), which was compared to the common methods of Trigonometric Polynomial fit (TP) and local Algebraic Polynomial (A) fit of fixed or (alternately) statistically optimal degree. The method allows determination of the minimal set of parameters required for the "General Catalogue of Variable Stars", as well as an extended set of phenomenological and astrophysical parameters which may be used for classification. In total, more than 1900 variable stars have been studied by our group using these methods in the framework of the "Inter-Longitude Astronomy" campaign (2010OAP....23....8A) and the "Ukrainian Virtual Observatory" project (2012KPCB...28...85V).
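For a fixed trial period, a trigonometric polynomial fit of the kind used for phase light curves reduces to linear least squares on cosine and sine basis columns evaluated at the irregular observation times. The sketch below is our own minimal version of that step, not the authors' code; the degree and period here are assumed inputs rather than the statistically optimized values the authors determine.

```python
import numpy as np

def trig_poly_fit(t, y, period, degree):
    """Least-squares trigonometric polynomial fit,
    y(t) ~ a0 + sum_k [a_k cos(2*pi*k*t/P) + b_k sin(2*pi*k*t/P)],
    for irregularly spaced times t."""
    t = np.asarray(t, float)
    y = np.asarray(y, float)
    phase = 2 * np.pi * t / period
    cols = [np.ones_like(t)]
    for k in range(1, degree + 1):
        cols += [np.cos(k * phase), np.sin(k * phase)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef  # coefficients and fitted values

# Noise-free check: recover a pure sinusoid of known period
# from irregular sampling.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 10, 200))
y = 1.0 + 0.5 * np.sin(2 * np.pi * t / 2.5)
coef, fit = trig_poly_fit(t, y, period=2.5, degree=2)
```

The coefficient vector is ordered [a0, a1, b1, a2, b2], so the test sinusoid appears as b1 = 0.5.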
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-24
... From the Federal Register Online via the Government Publishing Office DEPARTMENT OF HEALTH AND HUMAN SERVICES Workshop: Advancing Research on Mixtures; New Perspectives and Approaches for Predicting... ``Advancing Research on Mixtures: New Perspectives and Approaches for Predicting Adverse Human Health...
"I am Not a Statistic": Identities of African American Males in Advanced Science Courses
NASA Astrophysics Data System (ADS)
Johnson, Diane Wynn
The United States Bureau of Labor Statistics (2010) expects new industries to generate approximately 2.7 million jobs in science and technology by the year 2018, and there is concern as to whether there will be enough trained individuals to fill these positions. A tremendous resource remains untapped: African American students, especially African American males (National Science Foundation, 2009). Historically, African American males have been omitted from the so-called science pipeline. Fewer African American males pursue a science discipline due, in part, to limiting factors they experience in school and at home (Ogbu, 2004). This is a case study of African American males who are enrolled in advanced science courses at a predominantly African American (84%) urban high school. Guided by expectancy-value theory (EVT) of achievement-related results (Eccles, 2009; Eccles et al., 1983), twelve African American male students in two advanced science courses were observed in their science classrooms weekly, participated in an in-depth interview, developed a presentation to share with students enrolled in a tenth grade science course, responded to an open-ended identity questionnaire, and were surveyed about their perceptions of school. Additionally, the students' teachers were interviewed, as were seven of the students' parents. The interview data analyses highlighted the important role of supportive parents (key socializers) who had high expectations for their sons and who pushed them academically. The students clearly attributed their enrollment in advanced science courses to their high regard for their science teachers, which included positive relationships, hands-on learning in class, and an inviting and encouraging learning environment. Additionally, other family members and coaches played important roles in these young men's lives. Students' PowerPoint(c) presentations to younger high school students on why they should take advanced science courses highlighted these
Classification of human colonic tissues using FTIR spectra and advanced statistical techniques
NASA Astrophysics Data System (ADS)
Zwielly, A.; Argov, S.; Salman, A.; Bogomolny, E.; Mordechai, S.
2010-04-01
One of the major public health hazards is colon cancer, and there is a great need for new methods of early detection: if colon cancer is detected and treated early, a cure rate of more than 90% can be achieved. In this study we used FTIR microspectroscopy (MSP), which has shown good potential over the last 20 years in medical diagnostics and the early detection of abnormal tissues. A large database of FTIR microscopic spectra was acquired from 230 human colonic biopsies. Five subgroups were included in the database: normal and cancerous tissues, as well as three stages of benign colonic polyps, namely mild, moderate, and severe polyps, which are precursors of carcinoma. We applied advanced mathematical and statistical techniques, including principal component analysis (PCA) and linear discriminant analysis (LDA), to the human colonic FTIR spectra in order to differentiate among these subgroups. Good classification accuracy between the normal, polyp, and cancer groups was achieved, with an approximately 85% success rate. Our results show that there is great potential for developing FTIR microspectroscopy as a simple, reagent-free, viable tool for early detection of colon cancer, in particular the early stages of premalignancy among benign colonic polyps.
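A PCA-then-LDA pipeline of the kind described above can be sketched with plain linear algebra: project the mean-centered spectra onto the leading principal components, then separate classes with a Fisher discriminant in the reduced space. This is an illustrative two-class sketch on synthetic data, not the authors' five-group analysis; all names, dimensions, and parameters are ours.

```python
import numpy as np

def pca_lda_classify(X, y, n_pc=5):
    """Project samples onto leading principal components, then apply a
    two-class Fisher linear discriminant in the reduced space.
    Returns predicted labels (resubstitution, for illustration only)."""
    Xc = X - X.mean(axis=0)
    # PCA via SVD: rows of Vt are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:n_pc].T
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0], rowvar=False) + np.cov(Z[y == 1], rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)          # Fisher direction
    scores = Z @ w
    thresh = 0.5 * (scores[y == 0].mean() + scores[y == 1].mean())
    return (scores > thresh).astype(int)

# Synthetic "spectra": two classes shifted along every channel.
rng = np.random.default_rng(1)
X = rng.normal(size=(80, 40))
y = np.repeat([0, 1], 40)
X[y == 1] += 1.5
pred = pca_lda_classify(X, y, n_pc=5)
accuracy = (pred == y).mean()
```

Reducing to a few principal components before LDA keeps the within-class scatter matrix well conditioned, which is the usual reason this pairing is used on high-dimensional spectra.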
An alternative approach to confidence interval estimation for the win ratio statistic.
Luo, Xiaodong; Tian, Hong; Mohanty, Surya; Tsai, Wei Yann
2015-03-01
Pocock et al. (2012, European Heart Journal 33, 176-182) proposed a win ratio approach to analyzing composite endpoints comprised of outcomes with different clinical priorities. In this article, we establish a statistical framework for this approach. We derive the null hypothesis and propose a closed-form variance estimator for the win ratio statistic in the all-pairwise-matching situation. Our simulation study shows that the proposed variance estimator performs well regardless of the magnitude of the treatment effect and the type of joint distribution of the outcomes.
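In the all-pairwise setting, the win ratio itself is simple to compute: every treatment-control pair is compared on the highest-priority outcome first, falling back to the next outcome on ties. The sketch below is our own minimal illustration with hypothetical event times; later event times are treated as favourable, `inf` marks an unobserved event, and pairs tied on every outcome count as neither win nor loss.

```python
from itertools import product

def win_ratio(treatment, control):
    """All-pairwise win ratio for a composite endpoint.

    Each subject is a tuple of event times in priority order, e.g.
    (death_time, hospitalization_time); float('inf') means the event
    was never observed. Larger (later) times are favourable.
    """
    wins = losses = 0
    for t, c in product(treatment, control):
        for i in range(len(t)):        # outcomes in priority order
            if t[i] > c[i]:
                wins += 1
                break
            if t[i] < c[i]:
                losses += 1
                break
    return wins / losses

inf = float("inf")
treated = [(inf, 8.0), (6.0, inf)]     # hypothetical (death, hosp.) times
control = [(4.0, 2.0), (inf, 5.0)]
wr = win_ratio(treated, control)       # 3 wins, 1 loss
```

The closed-form variance estimator derived in the article is not reproduced here; this only shows the point estimate being tested.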
A Novel Statistical Approach for Brain MR Images Segmentation Based on Relaxation Times
Ferraioli, Giampaolo; Pascazio, Vito
2015-01-01
Brain tissue segmentation in Magnetic Resonance Imaging is useful for a wide range of applications. Classical approaches exploit the gray levels image and implement criteria for differentiating regions. Within this paper a novel approach for brain tissue joint segmentation and classification is presented. Starting from the estimation of proton density and relaxation times, we propose a novel method for identifying the optimal decision regions. The approach exploits the statistical distribution of the involved signals in the complex domain. The technique, compared to classical threshold based ones, is able to globally improve the classification rate. The effectiveness of the approach is evaluated on both simulated and real datasets. PMID:26798631
Statistical approach of parton distributions: a closer look at the high-x region
Soffer, Jacques
2011-09-21
We recall the physical features of the parton distributions in the quantum statistical approach of the nucleon, which allows one to describe simultaneously unpolarized and polarized Deep Inelastic Scattering data. Some predictions from a next-to-leading order QCD analysis are compared to recent experimental results, and we stress the importance of certain tests in the high-x region to confirm the validity of this approach.
ERIC Educational Resources Information Center
McCarthy, Christopher J.; Lambert, Richard G.; Crowe, Elizabeth W.; McCarthy, Colleen J.
2010-01-01
This study examined the relationship of teachers' perceptions of coping resources and demands to job satisfaction factors. Participants were 158 Advanced Placement Statistics high school teachers who completed measures of personal resources for stress prevention, classroom demands and resources, job satisfaction, and intention to leave the field…
ERIC Educational Resources Information Center
Averitt, Sallie D.
This instructor guide, which was developed for use in a manufacturing firm's advanced technical preparation program, contains the materials required to present a learning module that is designed to prepare trainees for the program's statistical process control module by improving their basic math skills in working with line graphs and teaching…
How large is the gluon polarization in the statistical parton distributions approach?
Soffer, Jacques; Bourrely, Claude; Buccella, Franco
2015-04-10
We review the theoretical foundations of the quantum statistical approach to parton distributions and we show that by using some recent experimental results from Deep Inelastic Scattering, we are able to improve the description of the data by means of a new determination of the parton distributions. We will see that a large gluon polarization emerges, giving a significant contribution to the proton spin.
A Statistical Filtering Approach for Gravity Recovery and Climate Experiment (GRACE) Gravity Data
NASA Technical Reports Server (NTRS)
Davis, J. L.; Tamisiea, M. E.; Elosegui, P.; Mitrovica, J. X.; Hill, E. M.
2008-01-01
We describe and analyze a statistical filtering approach for GRACE data that uses a parametrized model for the temporal evolution of the GRACE coefficients. After least-squares adjustment, a statistical test is performed to assess the significance of the estimated parameters. If the test is passed, the parameters are used by the filter in the reconstruction of the field; otherwise they are rejected. The test is performed, and the filter is formed, separately for the annual components of the model and the trend. This new approach is distinct from Gaussian smoothing since it uses the data themselves to test for specific components of the time-varying gravity field. The statistical filter appears inherently to remove most of the "stripes" present in the GRACE fields, although destriping the fields prior to filtering seems to help the trend recovery. We demonstrate that the statistical filter produces reasonable maps for the annual components and trend. We furthermore assess the statistical filter for the annual components using ground-based GPS data in South America, by assuming that the annual component of the gravity signal is associated only with groundwater storage. The un-destriped, statistically filtered field has a χ² value relative to the GPS data consistent with the best result from smoothing. In the space domain, the statistical filters are qualitatively similar to Gaussian smoothing. Unlike Gaussian smoothing, however, the statistical filter has significant sidelobes, including large negative sidelobes on the north-south axis, potentially revealing information on the errors, and the correlations among the errors, for the GRACE coefficients.
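The core of such a filter is an ordinary least-squares fit of bias, trend, and annual sinusoid to each coefficient's time series, followed by a significance screen on the estimated parameters. The sketch below is a simplified single-coefficient illustration under an assumed white-noise error model, not the GRACE processing itself; the 2-sigma threshold and all numbers are our choices.

```python
import numpy as np

def fit_and_screen(t, g, sigma, z=2.0):
    """Fit bias + trend + annual sinusoid to one coefficient's time
    series and zero out parameters that fail a z-test.

    t     : times in years
    g     : coefficient values
    sigma : assumed white-noise standard deviation of each value
    """
    w = 2 * np.pi                       # annual angular frequency (rad/yr)
    A = np.column_stack([np.ones_like(t), t, np.cos(w * t), np.sin(w * t)])
    N = np.linalg.inv(A.T @ A)          # normal-matrix inverse
    p = N @ A.T @ g                     # least-squares parameters
    se = sigma * np.sqrt(np.diag(N))    # formal standard errors
    return np.where(np.abs(p) > z * se, p, 0.0)

# Synthetic coefficient: trend + annual cosine + noise, 6 yr monthly.
rng = np.random.default_rng(2)
t = np.arange(0, 6, 1 / 12)
g = 0.3 * t + 2.0 * np.cos(2 * np.pi * t) + rng.normal(0, 0.1, t.size)
p = fit_and_screen(t, g, sigma=0.1)     # trend and cosine survive the test
```

In the actual filter, the screen is applied separately to the annual pair and the trend, and the surviving coefficients are carried into the field reconstruction.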
NASA Astrophysics Data System (ADS)
Plotnikov, M. Yu.; Shkarupa, E. V.
2015-11-01
Presently, the direct simulation Monte Carlo (DSMC) method is widely used for solving rarefied gas dynamics problems. As applied to steady-state problems, a feature of this method is the use of dependent sample values of random variables in the calculation of macroparameters of gas flows. A new combined approach to estimating the statistical error of the method is proposed that requires practically no additional computation and is applicable for any degree of probabilistic dependence of the sample values. Features of the proposed approach are analyzed theoretically and numerically. The approach is tested on the classical Fourier problem and the problem of supersonic flow of rarefied gas through a permeable obstacle.
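For context, one standard way to estimate the error of a mean computed from dependent samples (not the specific combined approach proposed in this work) is the batch-means method: averaging over long batches decorrelates the values before the usual standard-error formula is applied. A minimal sketch on a synthetic correlated sequence:

```python
import numpy as np

def batch_means_error(samples, n_batches=20):
    """Batch-means estimate of the standard error of the mean for a
    (possibly correlated) stationary sample sequence."""
    m = len(samples) // n_batches
    batches = np.asarray(samples[:m * n_batches]).reshape(n_batches, m)
    bmeans = batches.mean(axis=1)
    return bmeans.std(ddof=1) / np.sqrt(n_batches)

# AR(1) sequence: positively correlated samples, loosely mimicking
# successive steady-state DSMC samples of a macroparameter.
rng = np.random.default_rng(6)
x = np.empty(20000)
x[0] = 0.0
for i in range(1, x.size):
    x[i] = 0.9 * x[i - 1] + rng.normal()

naive = x.std(ddof=1) / np.sqrt(x.size)   # assumes independent samples
batched = batch_means_error(x)            # accounts for the correlation
```

The naive formula badly underestimates the error here, which is exactly the problem that motivates correlation-aware estimators for steady-state DSMC.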
Reliability Demonstration Approach for Advanced Stirling Radioisotope Generator
NASA Technical Reports Server (NTRS)
Ha, CHuong; Zampino, Edward; Penswick, Barry; Spronz, Michael
2010-01-01
Developed for future space missions as a high-efficiency power system, the Advanced Stirling Radioisotope Generator (ASRG) has a design life requirement of 14 yr in space following a potential storage of 3 yr after fueling. In general, the demonstration of long-life dynamic systems remains difficult, in part due to the perception that the wearout of moving parts cannot be minimized and that the associated failures are unpredictable. This paper shows that a combination of systematic analytical methods, extensive experience gained from technology development, and well-planned tests can be used to ensure a high level of reliability for the ASRG. With this approach, all potential risks from each life phase of the system are evaluated and their mitigation adequately addressed. This paper also provides a summary of important test results obtained to date for the ASRG and the planned effort for system-level extended operation.
A Novel Approach to Material Development for Advanced Reactor Systems
Was, G.S.; Atzmon, M.; Wang, L.
1999-12-22
OAK B188 A Novel Approach to Material Development for Advanced Reactor Systems. Year one of this project had three major goals. First, to specify, order, and install a new high current ion source for more rapid and stable proton irradiation. Second, to assess the use of low temperature irradiation and chromium pre-enrichment in an effort to isolate a radiation damage microstructure in stainless steels without the effects of RIS. Third, to prepare for the irradiation of reactor pressure vessel steel and Zircaloy. In year 1 quarter 1, the project goal was to order the high current ion source and to procure and prepare samples of stainless steel for low temperature proton irradiation.
A Novel Approach to Material Development for Advanced Reactor Systems
Was, G.S.; Atzmon, M.; Wang, L.
2000-06-27
OAK B188 A Novel Approach to Material Development for Advanced Reactor Systems. Year one of this project had three major goals. First, to specify, order and install a new high current ion source for more rapid and stable proton irradiation. Second, to assess the use of low temperature irradiation and chromium pre-enrichment in an effort to isolate a radiation damage microstructure in stainless steel without the effects of RIS. Third, to initiate irradiation of reactor pressure vessel steel and Zircaloy. In year 1 quarter 3, the project goal was to complete irradiation of model alloys of RPV steels for a range of doses and begin sample characterization. We also planned to prepare samples for microstructure isolation in stainless steels, and to identify sources of Zircaloy for irradiation and characterization.
Comparing geological and statistical approaches for element selection in sediment tracing research
NASA Astrophysics Data System (ADS)
Laceby, J. Patrick; McMahon, Joe; Evrard, Olivier; Olley, Jon
2015-04-01
Elevated suspended sediment loads reduce reservoir capacity and significantly increase the cost of operating water treatment infrastructure, making the management of sediment supply to reservoirs of increasing importance. Sediment fingerprinting techniques can be used to determine the relative contributions of different sediment sources accumulating in reservoirs. The objective of this research is to compare geological and statistical approaches to element selection for sediment fingerprinting modelling. Time-integrated samplers (n=45) were used to obtain source samples from four major subcatchments flowing into the Baroon Pocket Dam in South East Queensland, Australia. The geochemistry of the potential sources was compared to the geochemistry of sediment cores (n=12) sampled in the reservoir. The geological approach selected elements for modelling that provided expected, observed, and statistical discrimination between sediment sources. Two statistical approaches selected elements for modelling with the Kruskal-Wallis H-test and Discriminant Function Analysis (DFA). In particular, two different significance levels (0.05 and 0.35) for the DFA were included to investigate the importance of element selection on modelling results. A distribution model determined the relative contributions of different sources to sediment sampled in the Baroon Pocket Dam. Elemental discrimination was expected between one subcatchment (Obi Obi Creek) and the remaining subcatchments (Lexys, Falls and Bridge Creek). Six major elements were expected to provide discrimination. Of these six, only Fe2O3 and SiO2 provided expected, observed, and statistical discrimination. Modelling results with this geological approach indicated 36% (+/- 9%) of sediment sampled in the reservoir cores was from mafic-derived sources and 64% (+/- 9%) from felsic-derived sources. The geological and the first statistical approach (DFA0.05) differed by only 1% (σ 5%) for 5 out of 6 model groupings with only
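The Kruskal-Wallis screening step can be illustrated directly: for each element, rank the pooled source samples, form the H statistic, and retain the element only if H exceeds the chi-square critical value for k-1 degrees of freedom. The sketch below uses synthetic data, omits the tie correction, and hard-codes the df=3, alpha=0.05 critical value (7.815); it is our illustration, not the study's code.

```python
import numpy as np

def kruskal_h(groups):
    """Kruskal-Wallis H statistic (no tie correction) for k groups."""
    pooled = np.concatenate(groups)
    order = pooled.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, pooled.size + 1)
    n = pooled.size
    h, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        h += len(g) * (r.mean() - (n + 1) / 2) ** 2
        start += len(g)
    return 12.0 / (n * (n + 1)) * h

# Synthetic element concentrations from four source subcatchments:
# one element discriminates (shifted means), one does not.
rng = np.random.default_rng(3)
discriminant = [rng.normal(mu, 1, 30) for mu in (0, 1, 2, 3)]
uninformative = [rng.normal(0, 1, 30) for _ in range(4)]

CRIT_CHI2_DF3 = 7.815                      # chi-square, df=3, alpha=0.05
keep_a = kruskal_h(discriminant) > CRIT_CHI2_DF3
```

Elements passing this screen would then go forward to the discriminant-function stage and the mixing model.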
NASA Astrophysics Data System (ADS)
Murari, A.; Gelfusa, M.; Peluso, E.; Gaudio, P.; Mazon, D.; Hawkes, N.; Point, G.; Alper, B.; Eich, T.
2014-12-01
In a Tokamak, the configuration of the magnetic fields remains the key element in improving performance and maximising the scientific exploitation of the device. On the other hand, the quality of the reconstructed fields depends crucially on the measurements available. Traditionally, in the least-squares minimisation phase of the algorithms used to obtain the magnetic field topology, all the diagnostics are given the same weights, apart from a corrective factor taking into account the error bars. This assumption unduly penalises complex diagnostics, such as polarimetry, which have a limited number of highly significant measurements. A completely new method for choosing the weights given to the internal measurements of the magnetic fields, for improved equilibrium reconstructions, is presented in this paper. The approach is based on various statistical indicators applied to the residuals, the differences between the actual measurements and their estimates from the reconstructed equilibrium. The potential of the method is exemplified using measurements of the Faraday rotation derived from the JET polarimeter. The results indicate quite clearly that the weights have to be determined carefully, since an inappropriate choice can have significant repercussions on the quality of the magnetic reconstruction both in the edge and in the core. These results confirm the limitations of the assumption that all the diagnostics must be given the same weight, irrespective of the number of measurements they provide and the region of the plasma they probe.
Lung volume reduction for advanced emphysema: surgical and bronchoscopic approaches.
Tidwell, Sherry L; Westfall, Elizabeth; Dransfield, Mark T
2012-01-01
Chronic obstructive pulmonary disease is the third leading cause of death in the United States, affecting more than 24 million people. Inhaled bronchodilators are the mainstay of therapy; they improve symptoms and quality of life and reduce exacerbations. These, along with smoking cessation and long-term oxygen therapy for hypoxemic patients, are the only medical treatments definitively demonstrated to reduce mortality. Surgical approaches include lung transplantation and lung volume reduction; the latter has been shown to improve exercise tolerance, quality of life, and survival in highly selected patients with advanced emphysema. Lung volume reduction surgery yields clinical benefits, but the procedure is associated with a short-term risk of mortality and a more significant risk of cardiac and pulmonary perioperative complications. Interest has been growing in the use of noninvasive, bronchoscopic methods to address the pathological hyperinflation that drives the dyspnea and exercise intolerance characteristic of emphysema. In this review, the mechanism by which lung volume reduction improves pulmonary function is outlined, along with the risks and benefits of the traditional surgical approach. In addition, the emerging bronchoscopic techniques for lung volume reduction are introduced and recent clinical trials examining their efficacy are summarized. PMID:22189668
NASA Astrophysics Data System (ADS)
Ruggles, Adam J.
2015-11-01
This paper presents improved statistical insight regarding the self-similar scalar mixing process of atmospheric hydrogen jets and the downstream region of under-expanded hydrogen jets. Quantitative planar laser Rayleigh scattering imaging is used to probe both jets. The self-similarity of statistical moments up to the sixth order (beyond the second order established in the literature) is documented in both cases. This is achieved using a novel self-similar normalization method that facilitated a degree of statistical convergence typically limited to continuous, point-based measurements, demonstrating that image-based measurements of a limited number of samples can be used for self-similar scalar mixing studies. Both jets exhibit the same radial trends of these moments, demonstrating that advanced atmospheric self-similarity can be applied in the analysis of under-expanded jets. Self-similar histograms away from the centerline are shown to be the combination of two distributions. The first is attributed to turbulent mixing. The second, a symmetric Poisson-type distribution centered on zero mass fraction, progressively becomes the dominant, and eventually sole, distribution at the edge of the jet. This distribution is attributed to shot-noise-affected pure air measurements, rather than a diffusive superlayer at the jet boundary; this conclusion is reached after a rigorous measurement uncertainty analysis and inspection of pure air data collected with each hydrogen data set. A threshold based on the measurement noise analysis is used to separate the turbulent and pure air data, and thus estimate intermittency. Beta distributions (four parameters) are used to accurately represent the turbulent distribution moments. This combination of measured intermittency and four-parameter beta distributions constitutes a new, simple approach to model scalar mixing. Comparisons between global moments from the data and moments calculated using the proposed model show excellent
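Fitting a beta distribution to measured moments is straightforward by the method of moments. The sketch below uses the simpler two-parameter beta on [0, 1] rather than the four-parameter (shifted and scaled) form used in the paper, purely to illustrate the idea; the example mean and variance are invented.

```python
def beta_mom(m, v):
    """Method-of-moments fit of a two-parameter beta distribution on
    [0, 1] to a sample mean m and variance v (requires v < m * (1 - m)).
    Returns the shape parameters (alpha, beta)."""
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Fit to a hypothetical measured mean mass fraction and variance,
# then verify the fitted beta reproduces the target moments.
a, b = beta_mom(0.3, 0.01)
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1))
```

The four-parameter version adds the support endpoints as free parameters, which requires the third and fourth moments (or the measured support) to pin down.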
A Statistical Approach for Testing Cross-Phenotype Effects of Rare Variants
Broadaway, K. Alaine; Cutler, David J.; Duncan, Richard; Moore, Jacob L.; Ware, Erin B.; Jhun, Min A.; Bielak, Lawrence F.; Zhao, Wei; Smith, Jennifer A.; Peyser, Patricia A.; Kardia, Sharon L.R.; Ghosh, Debashis; Epstein, Michael P.
2016-01-01
Increasing empirical evidence suggests that many genetic variants influence multiple distinct phenotypes. When cross-phenotype effects exist, multivariate association methods that consider pleiotropy are often more powerful than univariate methods that model each phenotype separately. Although several statistical approaches exist for testing cross-phenotype effects for common variants, there is a lack of similar tests for gene-based analysis of rare variants. In order to fill this important gap, we introduce a statistical method for cross-phenotype analysis of rare variants using a nonparametric distance-covariance approach that compares similarity in multivariate phenotypes to similarity in rare-variant genotypes across a gene. The approach can accommodate both binary and continuous phenotypes and further can adjust for covariates. Our approach yields a closed-form test whose significance can be evaluated analytically, thereby improving computational efficiency and permitting application on a genome-wide scale. We use simulated data to demonstrate that our method, which we refer to as the Gene Association with Multiple Traits (GAMuT) test, provides increased power over competing approaches. We also illustrate our approach using exome-chip data from the Genetic Epidemiology Network of Arteriopathy. PMID:26942286
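The core idea of comparing multivariate phenotype similarity to rare-variant genotype similarity can be sketched as a kernel-alignment statistic with a permutation null. Everything below (data, linear kernels, permutation scheme) is an illustrative assumption in the spirit of the method, not the published GAMuT implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# Simulated rare-variant genotypes (8 variants, MAF 0.05) and two phenotypes
# driven by the rare-allele burden; toy data only.
G = rng.binomial(2, 0.05, size=(n, 8)).astype(float)
burden = G.sum(axis=1)
Y = np.column_stack([2.0 * burden + rng.normal(size=n),
                     1.0 * burden + rng.normal(size=n)])

def centered_kernel(X):
    """Centered linear similarity kernel H (X X') H."""
    H = np.eye(len(X)) - 1.0 / len(X)   # centering matrix I - J/n
    return H @ (X @ X.T) @ H

Kp, Kg = centered_kernel(Y), centered_kernel(G)
stat = np.trace(Kp @ Kg) / n ** 2       # distance-covariance-style statistic

# Permutation null: shuffle subject labels in the phenotype kernel
null = np.array([np.trace(Kp[np.ix_(p, p)] @ Kg) / n ** 2
                 for p in (rng.permutation(n) for _ in range(200))])
pval = (1 + (null >= stat).sum()) / 201
```

The permutation p-value is shown for illustration; the appeal of the closed-form test in the abstract is precisely that this resampling step can be replaced by an analytic evaluation.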
Pulsipher, B.A.; Kuhn, W.L.
1987-02-01
Current planning for the liquid high-level nuclear wastes existing in the US includes processing in a liquid-fed ceramic melter to incorporate them into a high-quality glass, followed by placement in a deep geologic repository. The nuclear waste vitrification process requires assurance of a quality product with little or no final inspection. Statistical process control (SPC) is a quantitative approach to one quality-assurance aspect of vitrified nuclear waste. This method for monitoring and controlling a process in the presence of uncertainties provides a statistical basis for decisions concerning product-quality improvement. Statistical process control is shown to be a feasible and beneficial tool to help waste glass producers demonstrate that the vitrification process can be controlled sufficiently to produce an acceptable product. This quantitative aspect of quality assurance could be an effective means of establishing confidence in claims of a quality product. 2 refs., 4 figs.
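A minimal SPC illustration: a Shewhart chart flags observations outside 3-sigma control limits estimated from in-control history. The quality variable and all values below are hypothetical, not plant data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated in-process measurements of a glass-quality variable
# (e.g. waste loading, wt%); purely illustrative values.
baseline = rng.normal(25.0, 0.5, size=50)       # in-control history
new_obs = np.array([25.2, 24.9, 25.4, 27.5])    # latest melter-feed batches

center = baseline.mean()
sigma = baseline.std(ddof=1)
ucl, lcl = center + 3.0 * sigma, center - 3.0 * sigma  # Shewhart 3-sigma limits

# Indices of new observations falling outside the control limits
out_of_control = [i for i, x in enumerate(new_obs) if not (lcl <= x <= ucl)]
```

Only the last batch falls outside the limits and would trigger a process-control investigation; the first three are consistent with common-cause variation.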
ERIC Educational Resources Information Center
Heaviside, Sheila; And Others
The "Survey of Advanced Telecommunications in U.S. Public Elementary and Secondary Schools, Fall 1996" collected information from 911 regular United States public elementary and secondary schools regarding the availability and use of advanced telecommunications, and in particular, access to the Internet, plans to obtain Internet access, use of…
Burn, K.W.
1995-01-01
The Direct Statistical Approach (DSA) to surface splitting and Russian roulette (RR) is one of the current routes toward automation in Monte Carlo and is currently applied to fixed-source particle transport problems. A general volumetric particle bifurcation capability has been inserted into the DSA surface-parameter and cell models. The resulting extended DSA describes the second-moment and time functions in terms of phase-space surface splitting/Russian roulette parameters (surface-parameter model) or phase-space cell importances (cell model) in the presence of volumetric particle bifurcations, including both natural events [such as (n,xn) or gamma production from neutron collisions] and artificial events (such as DXTRAN). At the same time, other limitations in the DSA models (concerning tally scores direct from the source and tracks surviving an event at which a tally score occurs) are removed. Given the second-moment and time functions, the foregoing surface or cell parameters may then be optimized.
Zhang, Han; Ni, Weiping; Yan, Weidong; Bian, Hui; Wu, Junzheng
2014-01-01
A novel fast SAR image change detection method is presented in this paper. Based on a Bayesian approach, the prior information that speckle follows the Nakagami distribution is incorporated into the difference image (DI) generation process. The new DI performs much better than the familiar log-ratio (LR) DI as well as the cumulant-based Kullback-Leibler divergence (CKLD) DI. The statistical region merging (SRM) approach is introduced to the change detection context for the first time, and a new clustering procedure with the region variance as the statistical inference variable tailors it to SAR image change detection, with only two classes in the final map: the unchanged and changed classes. The most prominent advantages of the proposed modified SRM (MSRM) method are its robustness to noise corruption and its fast implementation. Experimental results show that the proposed method is superior in both change detection accuracy and computational efficiency.
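The log-ratio DI used above as the baseline is straightforward to compute. The sketch below uses simulated 4-look gamma speckle (an illustrative assumption, not the paper's Bayesian/Nakagami DI or its data) and shows the changed region standing out from the speckle background.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two co-registered SAR intensity images with multiplicative 4-look speckle;
# a 20x20 block brightens between acquisitions. Toy data only.
truth = np.ones((64, 64))
truth[20:40, 20:40] = 4.0
speckle = lambda: rng.gamma(shape=4.0, scale=0.25, size=(64, 64))
img1, img2 = speckle(), truth * speckle()

di = np.abs(np.log(img2 / img1))    # log-ratio (LR) difference image

changed = di[20:40, 20:40].mean()   # mean DI inside the changed block
mask = np.ones_like(di, dtype=bool)
mask[20:40, 20:40] = False
unchanged = di[mask].mean()         # mean DI over unchanged pixels
```

The ratio (rather than difference) form makes the DI robust to the multiplicative nature of speckle, which is why LR is the conventional baseline the proposed Bayesian DI improves on.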
Sound source measurement by using a passive sound insulation and a statistical approach
NASA Astrophysics Data System (ADS)
Dragonetti, Raffaele; Di Filippo, Sabato; Mercogliano, Francesco; Romano, Rosario A.
2015-10-01
This paper describes a measurement technique developed by the authors that allows acoustic measurements to be carried out in noisy environments by reducing background-noise effects. The proposed method integrates a traditional passive noise insulation system with a statistical approach, the latter applied to the signals picked up by the usual sensors (microphones and accelerometers) equipping the passive sound insulation system. The statistical approach improves, at low frequencies, on the sound insulation provided by the passive system alone. The measurement technique has been validated by means of numerical simulations and by measurements carried out in a real noisy environment. For the case studies reported here, an average improvement of about 10 dB has been obtained in the frequency range up to about 250 Hz. Considerations on the lowest sound pressure level that can be measured with the proposed method, and on the measurement error related to its application, are reported as well.
Time series expression analyses using RNA-seq: a statistical approach.
Oh, Sunghee; Song, Seongho; Grabowski, Gregory; Zhao, Hongyu; Noonan, James P
2013-01-01
RNA-seq is becoming the de facto standard approach for transcriptome analysis, with ever-decreasing cost. It has considerable advantages over conventional technologies (microarrays) because it allows for direct identification and quantification of transcripts. Many time series RNA-seq datasets have been collected to study the dynamic regulation of transcripts. However, statistically rigorous and computationally efficient methods are needed to explore the time-dependent changes of gene expression in biological systems. These methods should explicitly account for the dependencies of expression patterns across time points. Here, we discuss several methods that can be applied to model time-course RNA-seq data, including the statistical evolutionary trajectory index (SETI), autoregressive time-lagged regression (AR(1)), and hidden Markov model (HMM) approaches. We use three real datasets and simulation studies to demonstrate the utility of these dynamic methods in temporal analysis. PMID:23586021
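The AR(1) time-lagged regression idea can be sketched on simulated log-expression values: expression at time t is regressed on expression at t-1, which makes the dependence across time points explicit. This is a minimal least-squares illustration with assumed parameters, not the cited implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated log-expression for one gene over T time points, AR(1) dynamics:
# y[t] = a + b*y[t-1] + noise. Parameters are illustrative assumptions.
T, a_true, b_true = 40, 1.0, 0.6
y = np.empty(T)
y[0] = a_true / (1.0 - b_true)          # start at the stationary mean
for t in range(1, T):
    y[t] = a_true + b_true * y[t - 1] + rng.normal(0.0, 0.3)

# Least-squares fit of the time-lagged regression y[t] = a + b * y[t-1]
X = np.column_stack([np.ones(T - 1), y[:-1]])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y[1:], rcond=None)
```

A significantly nonzero lag coefficient b indicates temporal dependence that methods ignoring the time ordering (e.g. treating time points as independent groups) would miss.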
Ganju, Jitendra; Yu, Xinxin; Ma, Guoguang Julie
2013-01-01
Formal inference in randomized clinical trials is based on controlling the type I error rate associated with a single pre-specified statistic. The deficiency of using just one method of analysis is that it depends on assumptions that may not be met. For robust inference, we propose pre-specifying multiple test statistics and relying on the minimum p-value for testing the null hypothesis of no treatment effect. The null hypothesis associated with the various test statistics is that the treatment groups are indistinguishable. The critical value for hypothesis testing comes from permutation distributions. Rejecting the null hypothesis when the smallest p-value is less than the critical value controls the type I error rate at its designated value. Even if one of the candidate test statistics has low power, the adverse effect on the power of the minimum p-value statistic is modest. Its use is illustrated with examples. We conclude that it is better to rely on the minimum p-value than on a single statistic, particularly when that single statistic is the logrank test, because of the cost and complexity of many survival trials.
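A minimal sketch of the minimum p-value procedure, assuming two candidate statistics (absolute difference in means and in medians) and a single set of permutations used both for the per-statistic p-values and for the null distribution of their minimum. The data and the choice of statistics are illustrative assumptions, not the paper's examples.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(0.0, 1.0, size=30)   # control arm (simulated)
y = rng.normal(1.2, 1.0, size=30)   # treated arm, simulated shift

def two_stats(a, b):
    # two pre-specified candidate test statistics
    return np.array([abs(a.mean() - b.mean()),
                     abs(np.median(a) - np.median(b))])

obs = two_stats(x, y)
pooled = np.concatenate([x, y])
n_perm = 400
S = np.array([two_stats(*np.split(rng.permutation(pooled), [x.size]))
              for _ in range(n_perm)])                  # (n_perm, 2)

# observed minimum of the per-statistic permutation p-values
p_obs = ((1 + (S >= obs).sum(axis=0)) / (n_perm + 1)).min()

# null distribution of the min-p statistic from the same permutations
ranks = (S[:, None, :] >= S[None, :, :]).sum(axis=0)    # (n_perm, 2)
p_min_null = ((1 + ranks) / (n_perm + 1)).min(axis=1)

# overall p-value: how often a permuted min-p beats the observed one
p_overall = (1 + (p_min_null <= p_obs).sum()) / (n_perm + 1)
```

Calibrating the observed minimum against the permutation distribution of the minimum is what keeps the type I error rate at its nominal level despite testing two statistics.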
A Statistical-Physics Approach to Language Acquisition and Language Change
NASA Astrophysics Data System (ADS)
Cassandro, Marzio; Collet, Pierre; Galves, Antonio; Galves, Charlotte
1999-02-01
The aim of this paper is to explain why Statistical Physics can help in understanding two related linguistic questions. The first question is how to model first language acquisition by a child. The second question is how language change proceeds in time. Our approach is based on a Gibbsian model for the interface between syntax and prosody. We also present a simulated annealing model of language acquisition, which extends the Triggering Learning Algorithm recently introduced in the linguistic literature.
Carboni, Michele; Gianneo, Andrea; Giglio, Marco
2015-07-01
This research investigates a Lamb-wave based structural health monitoring approach matching an out-of-phase actuation of a pair of piezoceramic transducers at low frequency. The target is a typical quasi-isotropic carbon fibre reinforced polymer aeronautical laminate subjected to artificial, via Teflon patches, and natural, via suitable low velocity drop weight impact tests, delaminations. The performance and main influencing factors of such an approach are studied through a Design of Experiment statistical method, considering both Pulse Echo and Pitch Catch configurations of PZT sensors. Results show that some factors and their interactions can effectively influence the detection of a delamination-like damage.
NASA Astrophysics Data System (ADS)
Gnutzmann, Sven; Seif, Burkhard
2004-05-01
A semiclassical approach to the universal ergodic spectral statistics in quantum star graphs is presented for all known ten symmetry classes of quantum systems. The approach is based on periodic orbit theory, the exact semiclassical trace formula for star graphs, and on diagrammatic techniques. The appropriate spectral form factors are calculated up to one order beyond the diagonal and self-dual approximations. The results are in accordance with the corresponding random-matrix theories which supports a properly generalized Bohigas-Giannoni-Schmit conjecture.
A Challenging Surgical Approach to Locally Advanced Primary Urethral Carcinoma
Lucarelli, Giuseppe; Spilotros, Marco; Vavallo, Antonio; Palazzo, Silvano; Miacola, Carlos; Forte, Saverio; Matera, Matteo; Campagna, Marcello; Colamonico, Ottavio; Schiralli, Francesco; Sebastiani, Francesco; Di Cosmo, Federica; Bettocchi, Carlo; Di Lorenzo, Giuseppe; Buonerba, Carlo; Vincenti, Leonardo; Ludovico, Giuseppe; Ditonno, Pasquale; Battaglia, Michele
2016-01-01
Abstract Primary urethral carcinoma (PUC) is a rare and aggressive cancer, often underdetected and consequently unsatisfactorily treated. We report a case of advanced PUC, surgically treated with combined approaches. A 47-year-old man underwent transurethral resection of a urethral lesion with histological evidence of a poorly differentiated squamous cancer of the bulbomembranous urethra. Computed tomography (CT) and bone scans excluded metastatic spread of the disease but showed involvement of both corpora cavernosa (cT3N0M0). A radical surgical approach was advised, but the patient refused this and opted for chemotherapy. After 17 months the patient was referred to our department due to the evidence of a fistula in the scrotal area. CT scan showed bilateral metastatic disease in the inguinal, external iliac, and obturator lymph nodes as well as the involvement of both corpora cavernosa. Additionally, a fistula originating from the right corpus cavernosum extended to the scrotal skin. At this stage, the patient accepted the surgical treatment, consisting of different phases. Phase I: Radical extraperitoneal cystoprostatectomy with iliac-obturator lymph nodes dissection. Phase II: Creation of a urinary diversion through a Bricker ileal conduit. Phase III: Repositioning of the patient in lithotomic position for an overturned Y skin incision, total penectomy, fistula excision, and “en bloc” removal of surgical specimens including the bladder, through the perineal breach. Phase IV: Right inguinal lymphadenectomy. The procedure lasted 9-and-a-half hours, was complication-free, and intraoperative blood loss was 600 mL. The patient was discharged 8 days after surgery. Pathological examination documented a T4N2M0 tumor. The clinical situation was stable during the first 3 months postoperatively but then metastatic spread occurred, not responsive to adjuvant chemotherapy, which led to the patient's death 6 months after surgery. Patients with advanced stage tumors of
Maruvada, Padma; Srivastava, Sudhir
2006-06-01
Cancer remains the second leading cause of death in the United States, in spite of tremendous advances made in therapeutic and diagnostic strategies. Successful cancer treatment depends on improved methods to detect cancers at early stages when they can be treated more effectively. Biomarkers for early detection of cancer enable screening of asymptomatic populations and thus play a critical role in cancer diagnosis. However, the approaches for validating biomarkers have yet to be addressed clearly. In an effort to delineate the ambiguities related to biomarker validation and related statistical considerations, the National Cancer Institute, in collaboration with the Food and Drug Administration, conducted a workshop in July 2004 entitled "Research Strategies, Study Designs, and Statistical Approaches to Biomarker Validation for Cancer Diagnosis and Detection." The main objective of this workshop was to review basic considerations underpinning the study designs, statistical methodologies, and novel approaches necessary to rapidly advance the clinical application of cancer biomarkers. The current commentary describes various aspects of statistical considerations and study designs for cancer biomarker validation discussed in this workshop.
NASA Astrophysics Data System (ADS)
Tsutsumi, Morito; Seya, Hajime
2009-12-01
This study discusses the theoretical foundation of applying spatial hedonic approaches—the hedonic approach employing spatial econometrics and/or spatial statistics—to benefits evaluation. The study highlights the limitations of the spatial econometrics approach, since it uses a spatial weight matrix that is not employed by the spatial statistics approach. Further, the study presents empirical analyses applying the Spatial Autoregressive Error Model (SAEM), which is based on the spatial econometrics approach, and the Spatial Process Model (SPM), which is based on the spatial statistics approach. SPMs are estimated under both isotropy and anisotropy and applied to different mesh sizes. The empirical analysis reveals that the estimated benefits are quite different, especially between isotropic and anisotropic SPM and between isotropic SPM and SAEM; the estimated benefits are similar for SAEM and anisotropic SPM. The study demonstrates that the mesh size does not affect the estimated amount of benefits. Finally, the study provides a confidence interval for the estimated benefits and raises an issue with regard to benefit evaluation.
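The spatial weight matrix W that the spatial econometrics approach depends on can be constructed in many ways; a common illustrative choice, assumed here (not taken from the study), is a row-standardized inverse-distance matrix built from point coordinates, after which the SAEM error structure u = lambda*W*u + eps follows by a linear solve.

```python
import numpy as np

rng = np.random.default_rng(6)
coords = rng.uniform(0.0, 10.0, size=(25, 2))   # hypothetical parcel locations

# inverse-distance spatial weight matrix, zero diagonal, row-standardized
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
with np.errstate(divide="ignore"):
    W = 1.0 / d
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)

# SAEM error structure: u = lam * W @ u + eps  =>  u = (I - lam*W)^(-1) eps
lam = 0.5   # assumed spatial autocorrelation parameter
u = np.linalg.solve(np.eye(25) - lam * W, rng.normal(size=25))
```

That the results hinge on this (largely arbitrary) choice of W is exactly the limitation of the spatial econometrics approach the study highlights; the spatial statistics approach instead models correlation as a continuous function of distance.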
Rumer, Leonid; Domingo, Cristina; Donoso Mantke, Oliver; Dobrydneva, Yuliya; Greiner, Matthias; Niedrig, Matthias
2016-10-01
Management of viral diagnostic quality is based on external quality assurance (EQA), in which laboratories involved in diagnostics of a targeted virus are offered a panel of blinded samples to analyze. The utility of EQAs is compromised by the absence of an approach to EQA design that defines acceptance criteria and the associated statistical analysis up front, ensuring fair and consistent interpretation. We offer a rigorous, statistically based approach to EQA planning. Instead of a conventional performance characteristic (the score), calculated as the sum of the points for correctly identified samples in a blinded test panel, the Youden index is used as the performance measure. Unlike the score, the Youden index requires estimates of sensitivity and specificity and incorporates the relationship between these performance parameters. Based on the assumption that the coordinator is a reputable expert in viral diagnostics, the performance of the coordinator's laboratory is defined as the proficiency standard for performance evaluation. The immediate goal of EQA is defined as obtaining a statistically reliable assessment, for every laboratory, of whether its performance meets the proficiency standard, while the overall goal is to match every laboratory to its specific performance level. The dependence of a test panel's informational capacity on its size and content is analyzed quantitatively, and the optimal design and informational capacities of both idealized panels (whose size is not restricted by financial factors) and currently feasible panels are considered. Our approach provides the basis both for rational design of currently feasible EQA test panels and for an increased panel size. PMID:27092652
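The Youden index used as the performance measure is simply sensitivity + specificity - 1, so it rewards a laboratory only for performing well on both positive and negative blinded samples. A minimal computation from hypothetical panel counts (the counts are illustrative assumptions):

```python
def youden_index(tp, fn, tn, fp):
    """Youden index J = sensitivity + specificity - 1, in [-1, 1]."""
    sensitivity = tp / (tp + fn)   # fraction of positives correctly called
    specificity = tn / (tn + fp)   # fraction of negatives correctly called
    return sensitivity + specificity - 1.0

# Hypothetical panel: 12 positive and 8 negative blinded samples.
j_lab = youden_index(tp=11, fn=1, tn=7, fp=1)          # participating lab
j_coordinator = youden_index(tp=12, fn=0, tn=8, fp=0)  # proficiency standard
```

Note what the score misses: a laboratory that calls every sample positive earns points for all 12 positives but has J = 0, since its specificity is zero.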
A Statistical Approach for the Concurrent Coupling of Molecular Dynamics and Finite Element Methods
NASA Technical Reports Server (NTRS)
Saether, E.; Yamakov, V.; Glaessgen, E.
2007-01-01
Molecular dynamics (MD) methods are opening new opportunities for simulating the fundamental processes of material behavior at the atomistic level. However, increasing the size of the MD domain quickly presents intractable computational demands. A robust approach to surmount this computational limitation has been to unite continuum modeling procedures such as the finite element method (FEM) with MD analyses thereby reducing the region of atomic scale refinement. The challenging problem is to seamlessly connect the two inherently different simulation techniques at their interface. In the present work, a new approach to MD-FEM coupling is developed based on a restatement of the typical boundary value problem used to define a coupled domain. The method uses statistical averaging of the atomistic MD domain to provide displacement interface boundary conditions to the surrounding continuum FEM region, which, in return, generates interface reaction forces applied as piecewise constant traction boundary conditions to the MD domain. The two systems are computationally disconnected and communicate only through a continuous update of their boundary conditions. With the use of statistical averages of the atomistic quantities to couple the two computational schemes, the developed approach is referred to as an embedded statistical coupling method (ESCM) as opposed to a direct coupling method where interface atoms and FEM nodes are individually related. The methodology is inherently applicable to three-dimensional domains, avoids discretization of the continuum model down to atomic scales, and permits arbitrary temperatures to be applied.
Bridging the gap between GLUE and formal statistical approaches: approximate Bayesian computation
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2013-12-01
In recent years, a strong debate has emerged in the hydrologic literature regarding how to properly treat nontraditional error residual distributions and quantify parameter and predictive uncertainty. In particular, there is strong disagreement about whether such an uncertainty framework should have its roots in a proper statistical (Bayesian) context using Markov chain Monte Carlo (MCMC) simulation techniques, or whether it should be based on a quite different philosophy and implement informal likelihood functions and simplistic search methods to summarize parameter and predictive distributions. This paper is a follow-up to our previous work published in Vrugt and Sadegh (2013) and demonstrates that approximate Bayesian computation (ABC) bridges the gap between formal and informal statistical model-data fitting approaches. The ABC methodology has recently emerged in the fields of biology and population genetics and relaxes the need for an explicit likelihood function in favor of one or multiple summary statistics that measure the distance of each model simulation to the data. This paper further studies the theoretical and numerical equivalence of formal Bayesian approaches and informal ones, in particular generalized likelihood uncertainty estimation (GLUE), using discharge and forcing data from different watersheds in the United States. We demonstrate that the limits of acceptability approach of GLUE is a special variant of ABC if each discharge observation of the calibration data set is used as a summary diagnostic.
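ABC rejection sampling, the mechanism that connects GLUE's limits of acceptability to formal inference, can be sketched on a toy model: draw parameters from the prior, simulate, and keep only draws whose summary-statistic distance to the observations falls within a tolerance. The exponential recession model, prior, and tolerance below are illustrative assumptions, not the paper's watershed setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "observed" discharge recession Q(t) = Q0 * exp(-k t) plus noise
t = np.arange(20.0)
k_true = 0.3
q_obs = 10.0 * np.exp(-k_true * t) + rng.normal(0.0, 0.1, t.size)

def simulate(k):
    return 10.0 * np.exp(-k * t)

def distance(q_sim):
    return np.sqrt(np.mean((q_sim - q_obs) ** 2))   # RMSE summary distance

# ABC rejection: keep prior draws whose simulation lies within epsilon,
# in the spirit of GLUE's limits of acceptability
epsilon = 0.15
prior = rng.uniform(0.01, 1.0, size=5000)
accepted = np.array([k for k in prior if distance(simulate(k)) <= epsilon])
posterior_mean = accepted.mean()
```

Shrinking epsilon toward the measurement-noise floor tightens the accepted set around the true parameter, which is how ABC recovers a formal posterior in the limit while retaining GLUE's acceptability-threshold flavor.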
Organic and inorganic nitrogen dynamics in soil - advanced Ntrace approach
NASA Astrophysics Data System (ADS)
Andresen, Louise C.; Björsne, Anna-Karin; Bodé, Samuel; Klemedtsson, Leif; Boeckx, Pascal; Rütting, Tobias
2016-04-01
Depolymerization of soil organic nitrogen (SON) into monomers (e.g. amino acids) is currently thought to be the rate limiting step for the terrestrial nitrogen (N) cycle. The production of free amino acids (AA) is followed by AA mineralization to ammonium, which is an important fraction of the total N mineralization. Accurate assessment of depolymerization and AA mineralization rates is important for a better understanding of the rate limiting steps. Recent developments in 15N pool dilution techniques, based on 15N labelling of AAs, allow quantifying gross rates of SON depolymerization and AA mineralization (Wanek et al., 2010; Andresen et al., 2015) in addition to gross N mineralization. However, it is well known that the 15N pool dilution approach has limitations; in particular, gross rates of consumption processes (e.g. AA mineralization) are overestimated. This has consequences for evaluating the rate limiting step of the N cycle, as well as for estimating the nitrogen use efficiency (NUE). Here we present a novel 15N tracing approach, which combines 15N-AA labelling with an advanced version of the 15N tracing model Ntrace (Müller et al., 2007) explicitly accounting for AA turnover in soil. This approach (1) provides a more robust quantification of gross depolymerization and AA mineralization and (2) suggests a more realistic estimate for the microbial NUE of amino acids. Advantages of the new 15N tracing approach will be discussed and further improvements will be identified. References: Andresen, L.C., Bodé, S., Tietema, A., Boeckx, P., and Rütting, T.: Amino acid and N mineralization dynamics in heathland soil after long-term warming and repetitive drought, SOIL, 1, 341-349, 2015. Müller, C., Rütting, T., Kattge, J., Laughlin, R. J., and Stevens, R. J.: Estimation of parameters in complex 15N tracing models via Monte Carlo sampling, Soil Biology & Biochemistry, 39, 715-726, 2007. Wanek, W., Mooshammer, M., Blöchl, A., Hanreich, A., and Richter
Statistical Analysis of RTS Noise and Low Frequency Noise in 1M MOSFETs Using an Advanced TEG
NASA Astrophysics Data System (ADS)
Abe, K.; Sugawa, S.; Watabe, S.; Miyamoto, N.; Teramoto, A.; Toita, M.; Kamata, Y.; Shibusawa, K.; Ohmi, T.
2007-07-01
In this paper, we developed an advanced Test Element Group (TEG) that can measure Random Telegraph Signal (RTS) noise in over 10^6 nMOSFETs of various gate sizes with high accuracy in a very short time. We measured and analyzed these noises statistically and, as a result, confirmed that the appearance probability and the intensity of RTS in the TEG depend on gate size.
Wei, Julong; Xu, Shizhong
2016-02-01
Most standard QTL mapping procedures apply to populations derived from the cross of two parents. QTL detected from such biparental populations are rarely relevant to breeding programs because of the narrow genetic basis: only two alleles are involved per locus. To improve the generality and applicability of mapping results, QTL should be detected using populations initiated from multiple parents, such as the multiparent advanced generation intercross (MAGIC) populations. The greatest challenges of QTL mapping in MAGIC populations come from multiple founder alleles and control of the genetic background information. We developed a random-model methodology by treating the founder effects of each locus as random effects following a normal distribution with a locus-specific variance. We also fit a polygenic effect to the model to control the genetic background. To improve the statistical power for a scanned marker, we release the marker effect absorbed by the polygene back to the model. In contrast to the fixed-model approach, we estimate and test the variance of each locus and scan the entire genome one locus at a time using likelihood-ratio test statistics. Simulation studies showed that this method can increase statistical power and reduce type I error compared with composite interval mapping (CIM) and multiparent whole-genome average interval mapping (MPWGAIM). We demonstrated the method using a public Arabidopsis thaliana MAGIC population and a mouse MAGIC population.
Oxidative Stress in Aging: Advances in Proteomic Approaches
Ortuño-Sahagún, Daniel; Pallàs, Mercè; Rojas-Mayorquín, Argelia E.
2014-01-01
Aging is a gradual, complex process in which cells, tissues, organs, and the whole organism itself deteriorate in a progressive and irreversible manner that, in the majority of cases, implies pathological conditions that affect the individual's Quality of Life (QOL). Although extensive research efforts have been made in recent years, efforts to anticipate aging and to develop prophylactic or treatment strategies continue to face major limitations. In this review, the focus is essentially on the compilation of the advances generated by cellular expression profile analysis through proteomics studies (two-dimensional [2D] electrophoresis and mass spectrometry [MS]), which are currently used as an integral approach to study the aging process. Additionally, the relevance of the oxidative stress factors is discussed. Emphasis is placed on postmitotic tissues, such as neuronal, muscular, and red blood cells, which appear to be those most frequently studied with respect to aging. Additionally, models for the study of aging are discussed in a number of organisms, such as Caenorhabditis elegans, senescence-accelerated probe-8 mice (SAMP8), naked mole-rat (Heterocephalus glaber), and the beagle canine. Proteomic studies in specific tissues and organisms have revealed the extensive involvement of reactive oxygen species (ROS) and oxidative stress in aging. PMID:24688629
O'Toole, Alice J; Jiang, Fang; Abdi, Hervé; Pénard, Nils; Dunlop, Joseph P; Parent, Marc A
2007-11-01
The goal of pattern-based classification of functional neuroimaging data is to link individual brain activation patterns to the experimental conditions experienced during the scans. These "brain-reading" analyses advance functional neuroimaging on three fronts. From a technical standpoint, pattern-based classifiers overcome fatal flaws in the status quo inferential and exploratory multivariate approaches by combining pattern-based analyses with a direct link to experimental variables. In theoretical terms, the results that emerge from pattern-based classifiers can offer insight into the nature of neural representations. This shifts the emphasis in functional neuroimaging studies away from localizing brain activity toward understanding how patterns of brain activity encode information. From a practical point of view, pattern-based classifiers are already well established and understood in many areas of cognitive science. These tools are familiar to many researchers and provide a quantitatively sound and qualitatively satisfying answer to most questions addressed in functional neuroimaging studies. Here, we examine the theoretical, statistical, and practical underpinnings of pattern-based classification approaches to functional neuroimaging analyses. Pattern-based classification analyses are well positioned to become the standard approach to analyzing functional neuroimaging data.
Harrigan, George G; Harrison, Jay M
2012-01-01
New transgenic (GM) crops are subjected to extensive safety assessments that include compositional comparisons with conventional counterparts as a cornerstone of the process. The influence of germplasm, location, environment, and agronomic treatments on compositional variability is, however, often obscured in these pair-wise comparisons. Furthermore, classical statistical significance testing can often provide an incomplete and over-simplified summary of highly responsive variables such as crop composition. In order to more clearly describe the influence of the numerous sources of compositional variation we present an introduction to two alternative but complementary approaches to data analysis and interpretation. These include i) exploratory data analysis (EDA) with its emphasis on visualization and graphics-based approaches and ii) Bayesian statistical methodology that provides easily interpretable and meaningful evaluations of data in terms of probability distributions. The EDA case-studies include analyses of herbicide-tolerant GM soybean and insect-protected GM maize and soybean. Bayesian approaches are presented in an analysis of herbicide-tolerant GM soybean. Advantages of these approaches over classical frequentist significance testing include the more direct interpretation of results in terms of probabilities pertaining to quantities of interest and no confusion over the application of corrections for multiple comparisons. It is concluded that a standardized framework for these methodologies could provide specific advantages through enhanced clarity of presentation and interpretation in comparative assessments of crop composition.
A combinatorial approach to the discovery of advanced materials
NASA Astrophysics Data System (ADS)
Sun, Xiao-Dong
This thesis discusses the application of combinatorial methods to the search for advanced materials. The goal of this research is to develop a "parallel" or "fast sequential" methodology for both the synthesis and characterization of materials with novel electronic, magnetic and optical properties. Our hope is to dramatically accelerate the rate at which materials are generated and studied. We have developed two major combinatorial methodologies to this end. One involves generating thin film materials libraries using a combination of various thin film deposition and masking strategies with multi-layer thin film precursors. The second approach is to generate powder materials libraries with solution precursors delivered with a multi-nozzle inkjet system. The first step in this multistep combinatorial process involves the design and synthesis of high density libraries of diverse materials aimed at exploring a large segment of the compositional space of interest based on our understanding of the physical and structural properties of a particular class of materials. Rapid, sensitive measurements of one or more relevant physical properties of each library member result in the identification of a family of "lead" compositions with a desired property. These compositions are then optimized by continuously varying the stoichiometries of a more focused set of precursors. Materials with the optimal composition are then synthesized in quantities sufficient for detailed characterization of their structural and physical properties. Finally, the information obtained from this process should enhance our predictive ability in subsequent experiments. Combinatorial methods have been successfully used in the synthesis and discovery of materials with novel properties. For example, a class of cobaltite-based giant magnetoresistance (GMR) ceramics was discovered; application of this method to luminescence materials has resulted in the discovery of a few highly efficient tricolor
Robust statistical approaches to assess the degree of agreement of clinical data
NASA Astrophysics Data System (ADS)
Grilo, Luís M.; Grilo, Helena L.
2016-06-01
To analyze the blood of patients who took vitamin B12 for a period of time, two different measurement methods were used (one is the established method, with more human intervention, and the other relies essentially on machines). Given the non-normality of the differences between the two measurement methods, the limits of agreement are also estimated using a non-parametric approach to assess the degree of agreement of the clinical data. The bootstrap resampling method is applied in order to obtain robust confidence intervals for the mean and median of the differences. The approaches used are easy to apply with user-friendly software, and their outputs are easy to interpret. In this case study the results obtained with parametric and non-parametric approaches lead us to different statistical conclusions, but the decision whether agreement is acceptable or not is always a clinical judgment.
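A percentile-bootstrap sketch of the kind of interval the authors compute is shown below; the skewed synthetic differences and the 95% levels are assumptions for illustration, not the clinical data:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_ci(x, stat=np.mean, n_boot=10_000, alpha=0.05, rng=rng):
    """Percentile bootstrap confidence interval for a statistic of x."""
    x = np.asarray(x, dtype=float)
    idx = rng.integers(0, x.size, size=(n_boot, x.size))   # resample with replacement
    stats = stat(x[idx], axis=1)
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical paired differences between the two B12 assays (pmol/L), skewed
diffs = rng.normal(3.0, 12.0, size=60) + rng.exponential(4.0, size=60)

ci_mean = bootstrap_ci(diffs, np.mean)
ci_median = bootstrap_ci(diffs, np.median)
print("95% bootstrap CI, mean:  ", ci_mean)
print("95% bootstrap CI, median:", ci_median)

# Non-parametric limits of agreement: central 95% of the observed differences
loa = np.percentile(diffs, [2.5, 97.5])
print("non-parametric limits of agreement:", loa)
```

Because the bootstrap makes no normality assumption, the mean and median intervals can disagree noticeably when the differences are skewed, which mirrors the paper's observation that parametric and non-parametric analyses may lead to different statistical conclusions.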
NASA Astrophysics Data System (ADS)
Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis
2016-09-01
Polarimetric radar-based hydrometeor classification is the procedure of identifying different types of hydrometeors by exploiting polarimetric radar observations. The main drawback of the existing supervised classification methods, mostly based on fuzzy logic, is a significant dependency on a presumed electromagnetic behaviour of different hydrometeor types. Namely, the results of the classification largely rely upon the quality of scattering simulations. The unsupervised approach, in turn, lacks the constraints related to hydrometeor microphysics. The idea of the proposed method is to compensate for these drawbacks by combining the two approaches in a way that microphysical hypotheses can, to a degree, adjust the content of the classes obtained statistically from the observations. This is done by means of an iterative approach, performed offline, which, in a statistical framework, examines clustered representative polarimetric observations by comparing them to the presumed polarimetric properties of each hydrometeor class. Aside from the comparison, a routine alters the content of clusters by encouraging further statistical clustering in case of non-identification. By merging all identified clusters, the multi-dimensional polarimetric signatures of various hydrometeor types are obtained for each of the studied representative datasets, i.e. for each radar system of interest. These are depicted by sets of centroids which are then employed in operational labelling of different hydrometeors. The method has been applied to three C-band datasets, each acquired by a different operational radar from the MeteoSwiss Rad4Alp network, as well as to two X-band datasets acquired by two research mobile radars. The results are discussed through a comparative analysis which includes a corresponding supervised and unsupervised approach, emphasising the operational potential of the proposed method.
Multi-level approach for statistical appearance models with probabilistic correspondences
NASA Astrophysics Data System (ADS)
Krüger, Julia; Ehrhardt, Jan; Handels, Heinz
2016-03-01
Statistical shape and appearance models are often based on the accurate identification of one-to-one correspondences in a training data set. At the same time, the determination of these corresponding landmarks is the most challenging part of such methods. Hufnagel et al.1 developed an alternative method using correspondence probabilities for a statistical shape model. In Krüger et al.2, 3 we propose the use of probabilistic correspondences for statistical appearance models by incorporating appearance information into the framework. We employ a point-based representation of image data combining position and appearance information. The model is optimized and adapted by a maximum a-posteriori (MAP) approach deriving a single global optimization criterion with respect to model parameters and observation dependent parameters that directly affects shape and appearance information of the considered structures. Because initially unknown correspondence probabilities are used and a higher number of degrees of freedom is introduced to the model, a regularization of the model generation process is advantageous. For this purpose we extend the derived global criterion by a regularization term which penalizes implausible topological changes. Furthermore, we propose a multi-level approach for the optimization, to increase the robustness of the model generation process.
Channel network identification from high-resolution DTM: a statistical approach
NASA Astrophysics Data System (ADS)
Sofia, G.; Tarolli, P.; Cazorzi, F.; Dalla Fontana, G.
2010-12-01
A statistical approach to LiDAR-derived topographic attributes for the automatic extraction of channel networks is presented in this paper. The basis of this approach is to use statistical descriptors to identify channels where terrain geometry denotes significant convergences. Two case study areas of different morphology and degree of organization are used with their 1 m LiDAR Digital Terrain Models (DTMs). Topographic attribute maps (curvature and openness) for different window sizes are derived from the DTMs in order to detect surface convergences. For the choice of the optimum kernel size, a statistical analysis of the value distributions of these maps is carried out. For the network extraction, we propose a three-step method based on (a) the normalization and overlapping of openness and minimum curvature in order to highlight the more likely surface convergences, (b) the weighting of the upslope area according to such normalized maps in order to identify drainage flow paths and flow accumulation consistent with terrain geometry, and (c) the z-score normalization of the weighted upslope area and the use of z-score values as a non-subjective threshold for channel network identification. As a final step, for optimal definition and representation of the whole network, a noise-filtering and connection procedure is applied. The advantage of the proposed methodology, and the efficient and accurate localization of extracted features, are demonstrated using LiDAR data of two different areas and comparing both extractions with field-surveyed networks.
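The three extraction steps can be sketched as follows; the random attribute maps and the z = 1 cut-off are illustrative stand-ins for the LiDAR derivatives and for the threshold selection described above:

```python
import numpy as np

rng = np.random.default_rng(1)

def minmax(a):
    """Normalize a map to [0, 1]."""
    return (a - a.min()) / (a.max() - a.min())

# Synthetic 200 x 200 stand-ins for the 1 m DTM attribute maps
shape = (200, 200)
openness = rng.normal(size=shape)                            # openness map
curvature = rng.normal(size=shape)                           # minimum curvature
upslope = rng.lognormal(mean=3.0, sigma=1.5, size=shape)     # upslope area (m^2)

# (a) normalize and overlap the two convergence indicators
convergence = minmax(openness) * minmax(curvature)

# (b) weight the upslope area by the convergence map
weighted = upslope * convergence

# (c) z-score the weighted upslope area; cells above the cut-off are channel cells
z = (weighted - weighted.mean()) / weighted.std()
channels = z > 1.0
print(f"{channels.mean():.1%} of cells flagged as channel network")
```

The z-score in step (c) is what makes the threshold non-subjective: it is expressed in standard deviations of the weighted map itself rather than in map-specific units.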
Griffith, Lauren E.; van den Heuvel, Edwin; Fortier, Isabel; Sohel, Nazmul; Hofer, Scott M.; Payette, Hélène; Wolfson, Christina; Belleville, Sylvie; Kenny, Meghan; Doiron, Dany; Raina, Parminder
2015-01-01
Objectives To identify statistical methods for harmonization that could be used in the context of summary data and individual participant data meta-analysis of cognitive measures. Study Design and Setting Environmental scan methods were used to conduct two reviews to identify: 1) studies that quantitatively combined data on cognition, and 2) general literature on statistical methods for data harmonization. Search results were rapidly screened to identify articles of relevance. Results All 33 meta-analyses combining cognition measures either restricted their analyses to a subset of studies using a common measure or combined standardized effect sizes across studies; none reported their harmonization steps prior to producing summary effects. In the second scan, three general classes of statistical harmonization models were identified: 1) standardization methods, 2) latent variable models, and 3) multiple imputation models; few publications compared methods. Conclusions Although it is an implicit part of conducting a meta-analysis or pooled analysis, the methods used to assess inferential equivalence of complex constructs are rarely reported or discussed. Progress in this area will be supported by guidelines for the conduct and reporting of data harmonization and integration and by evaluating and developing statistical approaches to harmonization. PMID:25497980
A hybrid electromagnetic-statistical approach for characterizing MMW scattering by terrain
NASA Astrophysics Data System (ADS)
Ulaby, Fawwaz T.; Siqueira, Paul; Sarabandi, Kamal
1993-11-01
The performance of millimeter wave (MMW) radar systems in target detection, navigation, and other applications depends in part on the scattering characteristics of the terrain background. Two different approaches have been pursued in the literature for characterizing MMW scattering by terrain. The first approach relies on the development of electromagnetic scattering models that relate the backscattering coefficient sigma of a given terrain type (such as bare ground surfaces, snow cover, and vegetation) to the physical properties of the terrain target, and then verifying model predictions through experimental observations conducted under semicontrolled field conditions. The second approach is entirely empirical in nature; it relies on the acquisition of extensive radar data from which statistical distributions are generated. This paper provides an overview of how the hybrid approach can be used to simulate the statistical properties of terrain backscatter at millimeter wavelengths for several types of terrain, including bare soil surfaces, vegetation, and snow cover. The hybrid approach incorporates scintillation effects associated with coherent sensors together with information about the mix of terrain categories present in the scene. Two types of input data (or a merged set of both) can be used as input to the clutter simulation package: measured data available in a University of Michigan data base or data generated by electromagnetic models. The data base is available as a handbook that contains MMW scattering observations reported in the literature for certain terrain types and conditions. Alternatively, a set of electromagnetic models can be used for calculating the backscattering coefficient sigma of the specified terrain type. These models, which are semiempirical in form, are based on highly complicated theoretical models that had been tested against experimental observations. With this approach, it is possible to generate a probability density
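The hybrid idea (a terrain-class mix plus coherent-sensor scintillation) can be sketched as below. The mean backscatter values are invented for illustration, not taken from the Michigan handbook, and single-look scintillation is modeled as the usual exponential power fluctuation of Rayleigh fading:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical mean backscattering coefficients (dB) per terrain class
mean_sigma_db = {"bare soil": -8.0, "vegetation": -5.0, "snow": -2.0}

def simulate_clutter(terrain_mix, n_pixels, rng=rng):
    """Draw per-pixel backscatter for a scene: pick a terrain class per pixel
    from the scene mix, then apply single-look speckle (exponential power)."""
    classes = list(terrain_mix)
    probs = np.array([terrain_mix[c] for c in classes], dtype=float)
    labels = rng.choice(len(classes), size=n_pixels, p=probs / probs.sum())
    mean_lin = np.array([10 ** (mean_sigma_db[c] / 10) for c in classes])
    power = rng.exponential(mean_lin[labels])   # scintillation about the mean
    return 10 * np.log10(power)                 # back to dB

scene = simulate_clutter({"bare soil": 0.5, "vegetation": 0.3, "snow": 0.2}, 10_000)
print(f"scene mean = {scene.mean():.1f} dB, std = {scene.std():.1f} dB")
```

The resulting distribution mixes between-class variation with speckle, which is what distinguishes the hybrid simulation from a single fitted empirical distribution.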
Design of Complex Systems in the presence of Large Uncertainties: a statistical approach
Koutsourelakis, P
2007-07-31
The design or optimization of engineering systems is generally based on several assumptions related to the loading conditions, physical or mechanical properties, environmental effects, initial or boundary conditions, etc. The effect of those assumptions on the optimum design or the design finally adopted is generally unknown, particularly in large, complex systems. A rational recourse would be to cast the problem in a probabilistic framework which accounts for the various uncertainties and also allows one to quantify their effect on the response/behavior/performance of the system. In such a framework the performance function(s) of interest are also random, and optimization of the system with respect to the design variables has to be reformulated with respect to statistical properties of these objective functions (e.g. probability of exceeding certain thresholds). Analysis tools are usually restricted to elaborate legacy codes which have been developed over a long period of time and are generally well-tested (e.g. Finite Elements). These do not, however, include any stochastic components, and their alteration is impossible or ill-advised. Furthermore, as the number of uncertainties and design variables grows, the problem quickly becomes computationally intractable. The present paper advocates the use of statistical learning in order to perform these tasks for any system of arbitrary complexity as long as a deterministic solver is available. The proposed computational framework consists of two components. Firstly, advanced sampling techniques are employed in order to efficiently explore the dependence of the performance with respect to the uncertain and design variables. The proposed algorithm is directly parallelizable and attempts to maximize the amount of information extracted with the least possible number of calls to the deterministic solver. The output of this process is utilized by statistical classification procedures in order to derive the dependence of the performance
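The two-component framework (exploration by sampling, then statistical learning on the solver outputs) can be sketched with a toy solver standing in for the legacy code; the capacity model, the load distribution, and the plain logistic-regression surrogate are all assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def solver(design, load):
    """Stand-in for an expensive deterministic legacy code (e.g. an FE model):
    the system fails when the load exceeds a design-dependent capacity."""
    capacity = 5.0 * design - 0.5 * design ** 2
    return load > capacity

# 1. Exploration: sample design variables and uncertain loads jointly
design = rng.uniform(0.5, 4.5, size=400)
load = rng.normal(6.0, 2.0, size=400)
y = solver(design, load).astype(float)          # 400 solver calls

# 2. Learning: fit a logistic-regression surrogate of the failure indicator
X = np.column_stack([np.ones_like(design), design, load])
w = np.zeros(3)
for _ in range(2000):                            # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.05 * X.T @ (p - y) / len(y)

def failure_probability(d, n=20_000):
    """Estimate P(failure) for a candidate design from the cheap surrogate."""
    loads = rng.normal(6.0, 2.0, size=n)
    Xq = np.column_stack([np.ones(n), np.full(n, d), loads])
    return float(np.mean(1.0 / (1.0 + np.exp(-Xq @ w))))

print(f"P(fail | d=1.0) ~ {failure_probability(1.0):.2f}")
print(f"P(fail | d=3.0) ~ {failure_probability(3.0):.2f}")
```

Once trained, the surrogate answers probability-of-failure queries for any candidate design without further calls to the deterministic solver, which is the computational payoff the paper targets.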
A statistical wisp model and pseudophysical approaches for interactive hairstyle generation.
Choe, Byoungwon; Ko, Hyeong-Seok
2005-01-01
This paper presents an interactive technique that produces static hairstyles by generating individual hair strands of the desired shape and color, subject to the presence of gravity and collisions. A variety of hairstyles can be generated by adjusting the wisp parameters, while the deformation is solved efficiently, accounting for the effects of gravity and collisions. Wisps are generated employing statistical approaches. As for hair deformation, we propose a method which is based on physical simulation concepts, but is simplified to efficiently solve the static shape of hair. On top of the statistical wisp model and the deformation solver, a constraint-based styler is proposed to model artificial features that oppose the natural flow of hair under gravity and hair elasticity, such as a hairpin. Our technique spans a wider range of human hairstyles than previously proposed methods and the styles generated by this technique are fairly realistic.
NASA Astrophysics Data System (ADS)
Appelhans, Tim; Mwangomo, Ephraim; Otte, Insa; Detsch, Florian; Nauss, Thomas; Hemp, Andreas; Ndyamkama, Jimmy
2015-04-01
This study introduces the set-up and characteristics of a meteorological station network on the southern slopes of Mt. Kilimanjaro, Tanzania. The set-up follows a hierarchical approach covering an elevational as well as a land-use disturbance gradient. The network consists of 52 basic stations measuring ambient air temperature and above-ground air humidity and 11 precipitation measurement sites. We provide in-depth descriptions of various machine learning and classical geo-statistical methods used to fill observation gaps and extend the spatial coverage of the network to a total of 60 research sites. Performance statistics for these methods indicate that the presented data sets provide reliable measurements of the meteorological reality at Mt. Kilimanjaro. These data provide an excellent basis for ecological studies and are also of great value for regional atmospheric numerical modelling studies, for which such comprehensive in-situ validation observations are rare, especially in tropical regions of complex terrain.
ERIC Educational Resources Information Center
McLoughlin, M. Padraig M. M.
2008-01-01
The author of this paper submits the thesis that learning requires doing; only through inquiry is learning achieved, and hence this paper proposes a programme of use of a modified Moore method in a Probability and Mathematical Statistics (PAMS) course sequence to teach students PAMS. Furthermore, the author of this paper opines that set theory…
Mougabure-Cueto, G; Sfara, V
2016-04-25
Dose-response relations can be obtained from systems at any structural level of biological matter, from the molecular to the organismic level. There are two types of approaches for analyzing dose-response curves: a deterministic approach, based on the law of mass action, and a statistical approach, based on the assumed probability distribution of phenotypic characters. Models based on the law of mass action have been proposed to analyze dose-response relations across the entire range of biological systems. The purpose of this paper is to discuss the principles that determine the dose-response relations. Dose-response curves of simple systems are the result of chemical interactions between reacting molecules, and therefore are supported by the law of mass action. In consequence, the shape of these curves is perfectly sustained by physicochemical features. However, dose-response curves of bioassays with quantal response are not explained by the simple collision of molecules but by phenotypic variations among individuals and can be interpreted as individual tolerances. The expression of tolerance is the result of many genetic and environmental factors and thus can be considered a random variable. In consequence, the shape of its associated dose-response curve has no physicochemical bearing; instead, it originates from random biological variation. Due to the randomness of tolerance there is no reason to use deterministic equations for its analysis; on the contrary, statistical models are the appropriate tools for analyzing these dose-response relations.
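A minimal sketch of the statistical view: quantal responses arise from a tolerance distribution, so a logit model on log-dose (not a mass-action equation) is fitted to the bioassay counts. The doses, group sizes, and tolerance parameters below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical insecticide bioassay: dose (mg/L), exposed, responding
doses = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
n = np.full(5, 50)
# If log10-tolerances follow Logistic(mu=log10(2), s=0.15), responses are quantal
true_p = 1 / (1 + np.exp(-(np.log10(doses) - np.log10(2.0)) / 0.15))
k = rng.binomial(n, true_p)

# Fit the logit tolerance model by gradient ascent on the binomial likelihood
x = np.log10(doses)
a, b = 0.0, 1.0
for _ in range(5000):
    p = 1 / (1 + np.exp(-(a + b * x)))
    a += 0.01 * np.sum(k - n * p)
    b += 0.01 * np.sum((k - n * p) * x)

ld50 = 10 ** (-a / b)   # dose at which half the population responds
print(f"estimated LD50 ~ {ld50:.2f} mg/L")
```

The fitted intercept and slope describe the tolerance distribution of the population, not any physicochemical rate constant, which is precisely the paper's point about quantal bioassays.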
Statistical Analysis of fMRI Time-Series: A Critical Review of the GLM Approach.
Monti, Martin M
2011-01-01
Functional magnetic resonance imaging (fMRI) is one of the most widely used tools to study the neural underpinnings of human cognition. Standard analysis of fMRI data relies on a general linear model (GLM) approach to separate stimulus induced signals from noise. Crucially, this approach relies on a number of assumptions about the data which, for inferences to be valid, must be met. The current paper reviews the GLM approach to analysis of fMRI time-series, focusing in particular on the degree to which such data abides by the assumptions of the GLM framework, and on the methods that have been developed to correct for any violation of those assumptions. Rather than biasing estimates of effect size, the major consequence of non-conformity to the assumptions is to introduce bias into estimates of the variance, thus affecting test statistics, power, and false positive rates. Furthermore, this bias can have pervasive effects on both individual subject and group-level statistics, potentially yielding qualitatively different results across replications, especially after the thresholding procedures commonly used for inference-making.
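The GLM estimation the review describes can be sketched on a toy single-voxel time series. This minimal example deliberately omits HRF convolution and serial correlation, the very assumption violations the paper scrutinizes, and all numbers are synthetic:

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy single-voxel time series: 100 TRs, alternating 10-TR rest/task blocks
n_tr = 100
task = (np.arange(n_tr) // 10) % 2          # boxcar regressor: 0 = rest, 1 = task
y = 2.0 * task + rng.normal(0, 1.5, n_tr)   # true effect of 2.0 plus white noise

# GLM: y = X @ beta + eps, estimated by ordinary least squares
X = np.column_stack([np.ones(n_tr), task])
beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
dof = n_tr - X.shape[1]
sigma2 = res[0] / dof                        # residual variance estimate

# t-statistic for the task regressor via the contrast c = [0, 1]
c = np.array([0.0, 1.0])
t = (c @ beta) / np.sqrt(sigma2 * c @ np.linalg.inv(X.T @ X) @ c)
print(f"task beta = {beta[1]:.2f}, t({dof}) = {t:.2f}")
```

If the noise were autocorrelated rather than white, sigma2 and hence t would be biased even though beta remained unbiased, which is the review's central caution about variance estimates, power, and false positive rates.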
Jacquin, Hugo; Shakhnovich, Eugene; Cocco, Simona; Monasson, Rémi
2016-01-01
Inverse statistical approaches to determine protein structure and function from Multiple Sequence Alignments (MSA) are emerging as powerful tools in computational biology. However, the underlying assumptions of the relationship between the inferred effective Potts Hamiltonian and real protein structure and energetics remain untested so far. Here we use a lattice protein (LP) model to benchmark those inverse statistical approaches. We build MSAs of highly stable sequences in target LP structures, and infer the effective pairwise Potts Hamiltonians from those MSAs. We find that inferred Potts Hamiltonians reproduce many important aspects of ‘true’ LP structures and energetics. Careful analysis reveals that effective pairwise couplings in inferred Potts Hamiltonians depend not only on the energetics of the native structure but also on competing folds; in particular, the coupling values reflect both positive design (stabilization of native conformation) and negative design (destabilization of competing folds). In addition to providing detailed structural information, the inferred Potts models used as protein Hamiltonian for design of new sequences are able to generate with high probability completely new sequences with the desired folds, which is not possible using independent-site models. Those are remarkable results as the effective LP Hamiltonians used to generate MSA are not simple pairwise models due to the competition between the folds. Our findings elucidate the reasons for the success of inverse approaches to the modelling of proteins from sequence data, and their limitations. PMID:27177270
Systems Thinking: An Approach for Advancing Workplace Information Literacy
ERIC Educational Resources Information Center
Somerville, Mary M.; Howard, Zaana
2008-01-01
As the importance of information literacy has gained increased recognition, so too have academic library professionals intensified their efforts to champion, activate, and advance these capabilities in others. To date, however, little attention has focused on advancing these essential competencies amongst practitioner advocates. This paper helps…
Evolving Approaches to Patients with Advanced Differentiated Thyroid Cancer
Sherman, Steven I.
2013-01-01
Advanced differentiated thyroid cancer (DTC), defined by clinical characteristics including gross extrathyroidal invasion, distant metastases, radioiodine (RAI) resistance, and avidity for 18-fluorodeoxyglucose (positron emission tomography-positive), is found in approximately 10–20% of patients with DTC. Standard therapy (surgery, RAI, TSH suppression with levothyroxine) is ineffective for many of these patients, as is standard chemotherapy. Our understanding of the molecular mechanisms leading to DTC and the transformation to advanced DTC has rapidly evolved over the past 15–20 years. Newer targeted therapy, specifically inhibitors of intracellular kinase signaling pathways, and cooperative multicenter clinical trials have dramatically changed the therapeutic landscape for patients with advanced DTC. In this review focusing on morbidities, molecules, and medicinals, we present a patient with advanced DTC, explore the genetics and molecular biology of advanced DTC, and review evolving therapies for these patients including multikinase inhibitors, selective kinase inhibitors, and combination therapies. PMID:23575762
ERIC Educational Resources Information Center
Touchton, Michael
2015-01-01
I administer a quasi-experiment using undergraduate political science majors in statistics classes to evaluate whether "flipping the classroom" (the treatment) alters students' applied problem-solving performance and satisfaction relative to students in a traditional classroom environment (the control). I also assess whether general…
A Statistical Dynamic Approach to Structural Evolution of Complex Capital Market Systems
NASA Astrophysics Data System (ADS)
Shao, Xiao; Chai, Li H.
As an important part of modern financial systems, the capital market plays a crucial role in social resource allocation and economic exchange. Moving beyond traditional models and theories based on neoclassical economics, and treating capital markets as typical complex open systems, this paper attempts to develop a new approach that overcomes some shortcomings of the available research. By defining a generalized entropy of capital market systems, a theoretical model and a nonlinear dynamic equation for the operation of capital markets are proposed from a statistical dynamics perspective. The US security market from 1995 to 2001 is then simulated and analyzed as a typical case. Some instructive results are discussed and summarized.
Aarabi, Mohammad Hadi; Kamalian, Aida; Mohajer, Bahram; Shandiz, Mahdi Shirin; Eqlimi, Ehsan; Shojaei, Ahmad; Safabakhsh, Hamidreza
2015-08-01
Parkinson's disease (PD) is a progressive neurodegenerative disorder assumed to involve different areas of the CNS and PNS. Thus, Diffusion Tensor Imaging (DTI) is used to examine the areas engaged in PD neurodegeneration. In the present study, we computed average tract length and fiber volume as measures of white matter integrity and adopted Network Based Statistics (NBS) to conduct group analyses between age- and gender-matched PD patient and healthy control connectivity matrices. NBS is a powerful statistical tool that utilizes the presence of every link in connectivity matrices and controls family-wise error rates (in the weak sense). The major regions with significantly reduced interconnecting fiber volume or average tract length were the cingulum, temporal lobe, frontal lobe, parahippocampus, hippocampus, olfactory lobe, and occipital lobe. PMID:26737248
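The NBS idea can be sketched in simplified form: compute edge-wise statistics, threshold them into a supra-threshold graph, and compare the largest connected component against a permutation null. The connectivity matrices, the threshold of 3, and the weakened subnetwork below are synthetic assumptions, and the edge statistic is a simple |difference|/SE rather than a full t-test:

```python
import numpy as np

rng = np.random.default_rng(2)

def largest_component_edges(adj):
    """Edge count of the largest connected component of a binary graph."""
    n = adj.shape[0]
    seen = np.zeros(n, bool)
    best = 0
    for s in range(n):
        if seen[s]:
            continue
        stack, comp = [s], [s]
        seen[s] = True
        while stack:                     # depth-first search from s
            u = stack.pop()
            for v in np.flatnonzero(adj[u]):
                if not seen[v]:
                    seen[v] = True
                    stack.append(v)
                    comp.append(v)
        best = max(best, int(adj[np.ix_(comp, comp)].sum()) // 2)
    return best

def edge_stats(a, b):
    """Edge-wise two-sample statistic: |mean difference| / standard error."""
    d = a.mean(0) - b.mean(0)
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return np.abs(d) / np.maximum(se, 1e-12)

def max_component(a, b, thresh):
    adj = edge_stats(a, b) > thresh      # supra-threshold graph
    np.fill_diagonal(adj, False)
    return largest_component_edges(adj)

def nbs(group_a, group_b, thresh=3.0, n_perm=200, rng=rng):
    """Simplified NBS: permutation null for the largest component size."""
    observed = max_component(group_a, group_b, thresh)
    pooled = np.concatenate([group_a, group_b])
    n_a = len(group_a)
    null = np.empty(n_perm)
    for i in range(n_perm):
        idx = rng.permutation(len(pooled))
        null[i] = max_component(pooled[idx[:n_a]], pooled[idx[n_a:]], thresh)
    return observed, float(np.mean(null >= observed))

# Hypothetical fiber-volume connectivity matrices: 20 patients, 20 controls
n_nodes = 15
base = rng.normal(1.0, 0.2, size=(40, n_nodes, n_nodes))
base = (base + base.transpose(0, 2, 1)) / 2      # symmetrize
patients, controls = base[:20].copy(), base[20:]
patients[:, 2:6, 2:6] -= 0.4                      # a weakened subnetwork

comp_size, p_value = nbs(patients, controls)
print(f"largest supra-threshold component: {comp_size} edges, p = {p_value:.3f}")
```

Testing the size of the whole connected component, rather than each edge separately, is what gives NBS its family-wise error control in the weak sense while retaining power for distributed effects.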
Evolution of cometary perihelion distances in Oort cloud - Another statistical approach
NASA Astrophysics Data System (ADS)
Lago, B.; Cazenave, A.
1983-01-01
A statistical approach is used to study the evolution of the Oort cloud's perihelion distance distribution over the age of the solar system, under gravitational perturbations of random passing stars which are accounted for by means of an empirical relation between cometary perihelion distance and the closest approach comet-star distance. Perihelion distances initially smaller and greater than 2500 AU are considered, together with distant star-comet encounters. Estimates are given of the number of new comets entering into the planetary region, the number of comets escaping the sun's sphere of influence or lost by hyperbolic ejection, and the percentage of comet loss over the age of the solar system. Current and original cloud populations are deduced from these quantities, together with the corresponding cloud mass, for two formation scenarios.
A statistical approach to describe highly excited heavy and superheavy nuclei
NASA Astrophysics Data System (ADS)
Chen, Peng-Hui; Feng, Zhao-Qing; Li, Jun-Qing; Zhang, Hong-Fei
2016-09-01
A statistical approach based on the Weisskopf evaporation theory has been developed to describe the de-excitation process of highly excited heavy and superheavy nuclei, in particular for the proton-rich nuclei. The excited nucleus is cooled by evaporating γ-rays, light particles (neutrons, protons, α etc) in competition with binary fission, in which the structure effects (shell correction, fission barrier, particle separation energy) contribute to the processes. The formation of residual nuclei is evaluated via sequential emission of possible particles above the separation energies. The available data of fusion-evaporation excitation functions in the 28Si+198Pt reaction can be reproduced nicely within the approach. Supported by Major State Basic Research Development Program in China (2015CB856903), National Natural Science Foundation of China Projects (11175218, U1332207, 11475050, 11175074), and Youth Innovation Promotion Association of Chinese Academy of Sciences
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
Statistical approaches to forecast gamma dose rates by using measurements from the atmosphere.
Jeong, Hyo-Joon; Hwang, Won-Tae; Kim, Eun-Han; Han, Moon-Hee
2008-01-01
In this paper, we report the results of an inter-comparison of several statistical techniques for estimating gamma dose rates: an exponential moving average model, a seasonal exponential smoothing model, and an artificial neural networks model. Seven years of gamma dose rate data measured in Daejeon City, Korea, were divided into two parts, one to develop the models and one to validate the predictions generated by the techniques mentioned above. The artificial neural networks model showed the best forecasting capability among the three statistical models; its superiority over the other models is likely due to its capacity for non-linear approximation. However, for filling in gamma dose rates when an environmental monitoring system has missing data, the moving average model and the seasonal exponential smoothing model may be preferable, because they are faster and easier to apply than the artificial neural networks model. These kinds of statistical approaches should be helpful for real-time monitoring of radioactive emissions and for environmental quality assessment.
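The seasonal exponential smoothing model named above can be sketched in a few lines. This is a minimal additive-seasonal smoother (Holt-Winters style, no trend term) under illustrative parameter values; the paper's actual model configuration and smoothing constants are not given here.

```python
def seasonal_smooth(series, period, alpha=0.3, gamma=0.2):
    """Additive seasonal exponential smoothing (no trend term).

    Returns the one-step-ahead forecast for each point in `series`.
    alpha smooths the level; gamma smooths the seasonal indices.
    """
    # Initialise the level with the first full period's mean and the
    # seasonal indices with deviations from that mean.
    level = sum(series[:period]) / period
    season = [x - level for x in series[:period]]
    forecasts = []
    for t, x in enumerate(series):
        s = season[t % period]
        forecasts.append(level + s)  # forecast made before observing x
        new_level = alpha * (x - s) + (1 - alpha) * level
        season[t % period] = gamma * (x - new_level) + (1 - gamma) * s
        level = new_level
    return forecasts
```

On a perfectly periodic dose-rate series the one-step forecasts reproduce the series exactly, which makes the smoother easy to sanity-check before applying it to noisy monitoring data.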
Avalappampatty Sivasamy, Aneetha; Sundan, Bose
2015-01-01
The ever-expanding communication requirements of today's world demand extensive and efficient network systems, with equally efficient and reliable security features integrated for safe, confident, and secure communication and data transfer. Providing effective security protocols for any network environment therefore assumes paramount importance, and attempts are continuously being made to design more efficient and dynamic network intrusion detection models. In this work, an approach based on Hotelling's T2 method, a multivariate statistical analysis technique, has been employed for intrusion detection, especially in network environments. Components such as preprocessing, multivariate statistical analysis, and attack detection have been incorporated in developing the multivariate Hotelling's T2 statistical model, and the necessary profiles have been generated based on the T2 distance metric. With a threshold range obtained using the central limit theorem, observed traffic profiles have been classified as either normal or attack types. The performance of the model, evaluated through validation and testing using the KDD Cup'99 dataset, has shown very high detection rates for all classes with low false alarm rates. The accuracy of the model presented in this work has been found to be much better than that of existing models. PMID:26357668
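The core of the approach above is the T2 distance of an observed traffic profile from the baseline of normal traffic. Below is a minimal two-feature sketch of that statistic; the paper's actual feature set, preprocessing, and threshold procedure are not given here, so the data and names are illustrative.

```python
def hotelling_t2(obs, data):
    """Hotelling's T^2 distance of a 2-D observation from a baseline sample.

    `data` is a list of (x, y) feature pairs of normal traffic;
    `obs` is the profile to score.  Larger T^2 => more anomalous.
    """
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    # Sample covariance matrix entries.
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    det = sxx * syy - sxy * sxy
    dx, dy = obs[0] - mx, obs[1] - my
    # T^2 = d' S^{-1} d, with the 2x2 matrix inverse written out explicitly.
    return (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
```

A profile far from the baseline mean scores a much larger T2 than one near the center, which is what the thresholding step then exploits to flag attacks.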
A statistical approach to evaluate flood risk at the regional level: an application to Italy
NASA Astrophysics Data System (ADS)
Rossi, Mauro; Marchesini, Ivan; Salvati, Paola; Donnini, Marco; Guzzetti, Fausto; Sterlacchini, Simone; Zazzeri, Marco; Bonazzi, Alessandro; Carlesi, Andrea
2016-04-01
Floods are frequent and widespread in Italy, causing multiple fatalities and extensive damage to public and private structures every year. A pre-requisite for the development of mitigation schemes, including financial instruments such as insurance, is the ability to quantify their costs, starting from an estimation of the underlying flood hazard. However, comprehensive and coherent information on flood-prone areas, and estimates of the frequency and intensity of flood events, are often not available at scales appropriate for risk pooling and diversification. In Italy, River Basin Hydrogeological Plans (PAI), prepared by basin administrations, are the basic descriptive, regulatory, technical and operational tools for environmental planning in flood-prone areas. Nevertheless, such plans do not cover the entire Italian territory, having significant gaps along the minor hydrographic network and in ungauged basins. Several process-based modelling approaches have been used by different basin administrations for flood hazard assessment, resulting in an inhomogeneous hazard zonation of the territory. As a result, flood hazard assessments and expected damage estimations across the different Italian basin administrations are not always coherent. To overcome these limitations, we propose a simplified multivariate statistical approach for regional flood hazard zonation coupled with a flood impact model. This modelling approach has been applied in different Italian basin administrations, allowing a preliminary but coherent and comparable estimation of the flood hazard and the relative impact. Model performance is evaluated by comparing the predicted flood-prone areas with the corresponding PAI zonation. The proposed approach will provide standardized information (following the EU Floods Directive specifications) on flood risk at a regional level, which can in turn be more readily applied to assess flood economic impacts. Furthermore, in the assumption of an appropriate
Canadian Educational Approaches for the Advancement of Pharmacy Practice
Louizos, Christopher; Austin, Zubin
2014-01-01
Canadian faculties (schools) of pharmacy are actively engaged in the advancement and restructuring of their programs in response to the shift toward pharmacists assuming an advanced practitioner role. Unfortunately, there is a paucity of evidence outlining optimal strategies for accomplishing this task. This review explores several educational changes proposed in the literature to aid in the advancement of pharmacy education, such as program admission requirements, critical-thinking assessment and teaching methods, improvement of course content delivery, the value of interprofessional education, advancement of practical experiential education, and mentorship strategies. Collectively, the implementation of these improvements to pharmacy education will be crucial in determining the direction the profession takes. PMID:25258448
Keller, Jacob; Keller, Jacob Pearson; Homma, Kazuaki; Dallos, Peter
2013-01-01
Especially in the last decade or so, there have been dramatic advances in fluorescence-based imaging methods designed to measure a multitude of functions in living cells. Despite this, many of the methods used to analyze the resulting images are limited. Perhaps the most common mode of analysis is the choice of regions of interest (ROIs), followed by quantification of the signal contained therein in comparison with another "control" ROI. While this method has several advantages, such as flexibility and capitalization on the power of human visual recognition capabilities, it has the drawbacks of potential subjectivity and lack of precisely defined criteria for ROI selection. This can lead to analyses which are less precise or accurate than the data might allow for, and generally a regrettable loss of information. Herein, we explore the possibility of abandoning the use of conventional ROIs, and instead propose treating individual pixels as ROIs, such that all information can be extracted systematically with the various statistical cutoffs we discuss. As a test case for this approach, we monitored intracellular pH in cells transfected with the chloride/bicarbonate transporter slc26a3 using the ratiometric dye SNARF-5F under various conditions. We performed a parallel analysis using two different levels of stringency in conventional ROI analysis as well as the pixels-as-ROIs (PAR) approach, and found that pH differences between control and transfected cells were accentuated by ~50-100% by using the PAR approach. We therefore consider this approach worthy of adoption, especially in cases in which higher accuracy and precision are required.
Connectometry: A statistical approach harnessing the analytical potential of the local connectome.
Yeh, Fang-Cheng; Badre, David; Verstynen, Timothy
2016-01-15
Here we introduce the concept of the local connectome: the degree of connectivity between adjacent voxels within a white matter fascicle, defined by the density of the diffusing spins. While most human structural connectomic analyses can be summarized as finding global connectivity patterns at either end of anatomical pathways, the analysis of local connectomes, termed connectometry, tracks the local connectivity patterns along the fiber pathways themselves in order to identify the subcomponents of the pathways that express significant associations with a study variable. This bottom-up analytical approach is made possible by reconstructing diffusion MRI data into a common stereotaxic space that allows local connectomes to be associated across subjects. The substantial associations can then be tracked along the white matter pathways, and statistical inference is obtained using permutation tests on the length of coherent associations, corrected for multiple comparisons. Using two separate samples with different acquisition parameters, we show that connectometry can capture variability within core white matter pathways in a statistically efficient manner and extract meaningful variability from white matter pathways; it complements graph-theoretic connectomic measures and is more sensitive than region-of-interest approaches.
A statistics-based approach to binary image registration with uncertainty analysis.
Simonson, Katherine M; Drescher, Steven M; Tanner, Franklin R
2007-01-01
A new technique is described for the registration of edge-detected images. While an extensive literature exists on the problem of image registration, few of the current approaches include a well-defined measure of the statistical confidence associated with the solution. Such a measure is essential for many autonomous applications, where registration solutions that are dubious (involving poorly focused images or terrain that is obscured by clouds) must be distinguished from those that are reliable (based on clear images of highly structured scenes). The technique developed herein utilizes straightforward edge pixel matching to determine the "best" among a class of candidate translations. A well-established statistical procedure, the McNemar test, is then applied to identify which other candidate solutions are not significantly worse than the best solution. This allows for the construction of confidence regions in the space of the registration parameters. The approach is validated through a simulation study and examples are provided of its application in numerous challenging scenarios. While the algorithm is limited to solving for two-dimensional translations, its use in validating solutions to higher-order (rigid body, affine) transformation problems is demonstrated.
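The McNemar test used above compares two candidate translations through their discordant edge-pixel matches. Below is a minimal sketch of the continuity-corrected normal-approximation form of the test; the counts and names are illustrative, not taken from the paper.

```python
from math import erf, sqrt

def mcnemar_p(b, c):
    """Two-sided McNemar test p-value (normal approximation with
    continuity correction).

    b = edge pixels matched by candidate translation A only,
    c = edge pixels matched by candidate translation B only;
    pixels matched (or missed) by both candidates drop out.
    """
    if b + c == 0:
        return 1.0
    z = max((abs(b - c) - 1) / sqrt(b + c), 0.0)
    # Two-sided p-value from the standard normal survival function.
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))
```

A candidate whose discordant count is clearly dominated by the best solution yields a small p-value and is excluded from the confidence region; candidates with balanced discordant counts are "not significantly worse" and are retained.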
Multivariate statistical and GIS-based approach to identify heavy metal sources in soils.
Facchinelli, A; Sacchi, E; Mallen, L
2001-01-01
Knowledge of the regional variability, the background values, and the anthropic vs. natural origin of potentially harmful elements in soils is of critical importance for assessing human impact and for fixing guide values and quality standards. The present study was undertaken as a preliminary survey of soil contamination on a regional scale in Piemonte (NW Italy). The aims of the study were: (1) to determine average regional concentrations of some heavy metals (Cr, Co, Ni, Cu, Zn, Pb); (2) to find out their large-scale variability; (3) to define their natural or artificial origin; and (4) to identify possible non-point sources of contamination. Multivariate statistical approaches (Principal Component Analysis and Cluster Analysis) were adopted for data treatment, allowing the identification of three main factors controlling heavy metal variability in cultivated soils. Geostatistical methods were used to construct regional distribution maps, which were compared with the geographical, geologic and land use regional database using GIS software. This approach, which makes spatial relationships evident, proved very useful for confirming and refining the geochemical interpretations of the statistical output. Cr, Co and Ni were associated with and controlled by parent rocks, whereas Cu together with Zn, and Pb alone, were controlled by anthropic activities. The study indicates that background values and realistic mandatory guidelines are impossible to fix without extensive data collection and a correct geochemical interpretation of the data. PMID:11584630
NASA Astrophysics Data System (ADS)
Baran, Sándor; Möller, Annette
2016-06-01
Forecast ensembles are typically employed to account for prediction uncertainties in numerical weather prediction models. However, ensembles often exhibit biases and dispersion errors, so they require statistical post-processing to improve their predictive performance. Two popular univariate post-processing models are Bayesian model averaging (BMA) and ensemble model output statistics (EMOS). In the last few years, increased interest has emerged in developing multivariate post-processing models that incorporate dependencies between weather quantities, such as a bivariate distribution for wind vectors or an even more general setting allowing any types of weather variables to be combined. In line with a recently proposed approach to modelling temperature and wind speed jointly by a bivariate BMA model, this paper introduces an EMOS model for these weather quantities based on a bivariate truncated normal distribution. The bivariate EMOS model is applied to temperature and wind speed forecasts of the 8-member University of Washington mesoscale ensemble and the 11-member ALADIN-HUNEPS ensemble of the Hungarian Meteorological Service, and its predictive performance is compared to that of the bivariate BMA model and a multivariate Gaussian copula approach that post-processes the margins with univariate EMOS. While the predictive skills of the compared methods are similar, the bivariate EMOS model requires considerably lower computation times than the bivariate BMA method.
NASA Astrophysics Data System (ADS)
Donner, Reik; Passow, Christian
2016-04-01
The appropriate statistical evaluation of recent changes in the occurrence of hydro-meteorological extreme events is of key importance for identifying trends in the behavior of climate extremes and associated impacts on ecosystems or technological infrastructures, as well as for validating the capability of the models used for future climate scenarios to correctly represent such trends in past decades. In this context, most recent studies have utilized conceptual approaches from extreme value theory based on parametric descriptions of the probability distribution functions of extremes. However, the application of such methods faces a few fundamental challenges: (1) the most widely used approaches, generalized extreme value (GEV) and generalized Pareto (GP) distributions, rest on assumptions whose validity can often hardly be proven. (2) Because of the differentiation between extreme and normal values (peaks-over-threshold, block maxima), much information on the distribution of the variable of interest is not used at all by such methods, implying that the sample size of values effectively used for estimating the parameters of the GEV or GP distributions is severely limited for typical lengths of observational series. (3) The problem of parameter estimation is further complicated by the variety of possible statistical models mapping different aspects of the temporal changes of extremes, such as seasonality or possibly non-linear trends; reliably identifying the most appropriate model is a challenging task for the lengths of typical observational series. As an alternative to approaches based on extreme value theory, there have been a few attempts to transfer quantile regression approaches to statistically describing the time-dependence of climate extremes. In this context, a value exceeding a certain instantaneous percentile of the time-dependent probability distribution function of the data under study is considered to be an extreme event. In
NASA Astrophysics Data System (ADS)
Broothaerts, Nils; Verstraeten, Gert
2016-04-01
Reconstructing and quantifying human impact is an important step toward understanding human-environment interactions in the past. To fully understand the role of human impact in altering the environment during the Holocene, detailed reconstructions of vegetation changes and quantitative measures of human impact on the landscape are needed. Statistical analysis of pollen data has recently been used to characterize vegetation changes and to extract semi-quantitative data on human impact. In this study, multivariate statistical analysis (cluster analysis and non-metric multidimensional scaling (NMDS)) of pollen data was used to reconstruct human-induced land use changes in two contrasting environments: central Belgium and SW Turkey. For each region, pollen data from different study sites were integrated. The data from central Belgium show gradually increasing human impact from the Bronze Age onwards (ca. 3900 cal a BP), except for a temporary halt between 1900-1600 cal a BP, coupled with the Migration Period in Europe. Statistical analysis of pollen data from SW Turkey provides new integrated information on changing human impact through time in the Sagalassos territory, and shows that human impact was most intense during the Hellenistic and Roman Periods (ca. 2200-1750 cal a BP) and decreased and changed in nature afterwards. In addition, regional vegetation estimates using the REVEALS model were made for each study site and compared with the outcome of the statistical analysis of the pollen data. This comparison shows that in some cases the statistical approach can be a more easily applicable alternative to the REVEALS model. Overall, the presented examples from two contrasting environments show that cluster analysis and NMDS are useful tools for providing semi-quantitative insights into the temporal and spatial vegetation changes related to increasing human impact. Moreover, the technique can be used to compare and integrate pollen datasets from different study sites within
An integrated statistical and hydraulic modelling approach for collective flood risk assessment
NASA Astrophysics Data System (ADS)
Lamb, Rob; Keef, Caroline; Tawn, Jonathan A.; Hankin, Barry; Dunning, Paul
2010-05-01
This paper presents a methodology for assessing collective flood risk based on a combination of two innovative models. The first is a multivariate statistical model for extremes of river flow or sea level, based on the conditional exceedance approach of Heffernan and Tawn (2004) and Keef et al. (2009). This model is analogous to a generalised form of copula function in that it separates the joint distribution of a variable into its marginal characteristics and its dependence structure. The dependence structure is flexible in its description of the joint extremes, which has advantages for representing spatial dependence in data such as river flows. The second part of the methodology is a two-dimensional (2D) hydraulic floodplain inundation model that is applied using parallel processing technology to provide high-resolution gridded flood depth data over large regions (Lamb et al., 2009). These depth grids can then be combined with a model for economic losses. We present an overview of the methodology and demonstrate through simulation studies how it can be applied to estimate the distribution function of the spatially aggregated economic losses from flooding over regions up to the scale of England and Wales, or greater. The results are also placed in the context of hydrological assessment of the probability and severity of notable historical flood events experienced in the British Isles. Heffernan, J. E. and Tawn, J. A. (2004) A conditional approach for multivariate extreme values (with discussion). J. R. Statist. Soc. B, 66, 497-546. Keef, C., Tawn, J., and Svensson, C. (2009) Spatial risk assessment for extreme river flows. Applied Statistics, 58(5), 601-618. Lamb, R., Crossley, A., and Waller, S. (2009) A fast 2D floodplain inundation model. Proceedings of the Institution of Civil Engineers: Water Management, 162, doi: 10.1680/wama.2009.162.1.1
A statistical approach for segregating cognitive task stages from multivariate fMRI BOLD time series
Demanuele, Charmaine; Bähner, Florian; Plichta, Michael M.; Kirsch, Peter; Tost, Heike; Meyer-Lindenberg, Andreas; Durstewitz, Daniel
2015-01-01
Multivariate pattern analysis can reveal new information from neuroimaging data to illuminate human cognition and its disturbances. Here, we develop a methodological approach, based on multivariate statistical/machine learning and time series analysis, to discern cognitive processing stages from functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) time series. We apply this method to data recorded from a group of healthy adults whilst performing a virtual reality version of the delayed win-shift radial arm maze (RAM) task. This task has been frequently used to study working memory and decision making in rodents. Using linear classifiers and multivariate test statistics in conjunction with time series bootstraps, we show that different cognitive stages of the task, as defined by the experimenter, namely, the encoding/retrieval, choice, reward and delay stages, can be statistically discriminated from the BOLD time series in brain areas relevant for decision making and working memory. Discrimination of these task stages was significantly reduced during poor behavioral performance in dorsolateral prefrontal cortex (DLPFC), but not in the primary visual cortex (V1). Experimenter-defined dissection of time series into class labels based on task structure was confirmed by an unsupervised, bottom-up approach based on Hidden Markov Models. Furthermore, we show that different groupings of recorded time points into cognitive event classes can be used to test hypotheses about the specific cognitive role of a given brain region during task execution. We found that whilst the DLPFC strongly differentiated between task stages associated with different memory loads, but not between different visual-spatial aspects, the reverse was true for V1. Our methodology illustrates how different aspects of cognitive information processing during one and the same task can be separated and attributed to specific brain regions based on information contained in
Ketamine induces anxiolytic effects in adult zebrafish: A multivariate statistics approach.
De Campos, Eduardo Geraldo; Bruni, Aline Thais; De Martinis, Bruno Spinosa
2015-10-01
Inappropriate use of ketamine has been associated with serious consequences for human health. The anesthetic properties of ketamine are well known, but its side effects, including effects on anxiety, are poorly described. In this context, animal models are a safe way to conduct neurobehavioral research, and zebrafish (Danio rerio) is an interesting model with several advantages. The validation and interpretation of behavioral assay results require a suitable statistical approach, and the use of multivariate statistical methods has been little explored, especially in zebrafish behavioral models. Here, we investigated the anxiolytic effects induced by ketamine in adult zebrafish using the Light-Dark Test, proposing multivariate statistical methods (PCA, HCA and SIMCA) to analyze the results, and compared this treatment of the data with that carried out by analysis of variance (ANOVA). Ketamine produced significant, exposure-concentration-dependent anxiolytic effects, increasing time in the white area and the number of crossings and decreasing latency to first access to the white area. Average entry duration showed a slight decrease from control to treatment groups, with a concentration-dependent increase among the exposed groups. PCA results indicated that two principal components represent 88.74% of all the system information. HCA and PCA results showed a higher similarity among the control group and the treatment groups exposed to lower concentrations of ketamine, and among the treatment groups exposed to concentrations of 40 and 60 mg L(-1). In the SIMCA results, interclass distances increased with exposure concentration, and the misclassification and interclass residue results also support these findings. These findings confirm the anxiolytic potential of ketamine and the sensitivity of zebrafish to this drug. In summary, our study confirms that zebrafish and multivariate statistical data validation are an appropriate and viable behavioral model
Risk management for moisture related effects in dry manufacturing processes: a statistical approach.
Quiroz, Jorge; Strong, John; Zhang, Lanju
2016-03-01
A risk- and science-based approach to controlling quality in pharmaceutical manufacturing includes a full understanding of how product attributes and process parameters relate to product performance, through a proactive approach in formulation and process development. For dry manufacturing, where moisture content is not directly manipulated within the process, variability in the moisture of the incoming raw materials can impact both processability and drug product quality attributes. A statistical approach is developed that uses historical lots of the individual raw materials as the basis for calculating tolerance intervals on drug product moisture content, so that risks associated with excursions in moisture content can be mitigated. The proposed method is model-independent: it uses available data to estimate parameters describing the population of blend moisture content values and does not require knowledge of the individual blend moisture content values. A further advantage of the proposed tolerance intervals is that they do not require the use of tabulated tolerance factors, which facilitates implementation in any spreadsheet program, such as Microsoft Excel. A computational example is used to demonstrate the proposed method.
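The paper's exact tolerance-interval construction is not reproduced in the abstract, so as a hedged illustration of the general idea, here is the classical distribution-free (order-statistic) tolerance interval, which likewise needs no tabulated factors and no distributional model of the moisture data. The function names are illustrative.

```python
def min_max_tolerance_confidence(n, p):
    """Confidence that the interval [min, max] of n historical lots
    covers at least a proportion p of the moisture-content population.

    Standard distribution-free order-statistic result:
    confidence = 1 - n*p^(n-1) + (n-1)*p^n.
    """
    return 1 - n * p ** (n - 1) + (n - 1) * p ** n

def lots_needed(p, confidence):
    """Smallest number of lots for which [min, max] is a valid
    (p, confidence) two-sided tolerance interval."""
    n = 2
    while min_max_tolerance_confidence(n, p) < confidence:
        n += 1
    return n
```

For example, the well-known 95%/95% two-sided case requires 93 lots, which shows why such data requirements matter when only limited raw-material history is available.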
Comparing emerging and mature markets during times of crises: A non-extensive statistical approach
NASA Astrophysics Data System (ADS)
Namaki, A.; Koohi Lai, Z.; Jafari, G. R.; Raei, R.; Tehrani, R.
2013-07-01
One of the important issues in finance and economics, for both scholars and practitioners, is describing the behavior of markets, especially during times of crises. In this paper, we analyze the behavior of some mature and emerging markets within a Tsallis entropy framework, a non-extensive statistical approach based on non-linear dynamics. During the past decade, this technique has been successfully applied to a considerable number of complex systems, including stock markets, to describe the non-Gaussian behavior of these systems. In this approach there is a parameter q, a measure of deviation from Gaussianity, which has proved to be a good index for detecting crises. We investigate the behavior of this parameter at different time scales for the market indices. The pattern of q differs for mature markets as compared with emerging markets. The findings show the robustness of the stated approach for following market conditions over time: in times of crises, q is much greater than at other times. In addition, the response of emerging markets to global events is delayed compared with that of mature markets, and tends to a Gaussian profile as the scale increases. This approach could be very useful in risk and portfolio management for detecting crises by following the parameter q at different time scales.
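The quantity underlying the non-extensive framework above is the Tsallis entropy, which interpolates away from the Shannon entropy as q departs from 1. This is a minimal sketch of that definition for a discrete distribution; the paper's actual procedure for estimating q from return distributions is not reproduced here.

```python
from math import log

def tsallis_entropy(probs, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1).

    As q -> 1 this reduces to the Shannon entropy; q far from 1
    weights the probabilities non-extensively, which is what makes
    q a useful index of deviation from Gaussian behavior.
    """
    if abs(q - 1.0) < 1e-12:
        # Shannon limit, taken directly to avoid the 0/0 form.
        return -sum(p * log(p) for p in probs if p > 0)
    return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)
```

For a uniform distribution over four outcomes, S_1 = ln 4 and S_2 = 0.75, which is a quick consistency check on the two branches.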
New advances in methodology for statistical tests useful in geostatistical studies
Borgman, L.E.
1988-05-01
Methodology for statistical tests of hypotheses pertaining to various aspects of geostatistical investigations has been slow to develop. The correlated nature of the data precludes most classical tests and makes the design of new tests difficult. Recent studies have led to modifications of the classical t test that allow for this intercorrelation. In addition, results for certain nonparametric tests have been obtained. Together, these studies provide a variety of new tools for the geostatistician in deciding questions of significant differences and magnitudes.
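One illustrative correction of the kind the abstract alludes to is a t test using an effective sample size that discounts lag-1 autocorrelation. This is a textbook AR(1)-style adjustment, not necessarily the specific modification developed in the paper:

```python
import numpy as np
from scipy import stats

def autocorr_adjusted_ttest(x, mu0=0.0):
    """One-sample t test with a lag-1 autocorrelation correction.

    Uses the effective sample size n_eff = n(1-r)/(1+r) appropriate for
    an AR(1) process; correlated observations carry less information
    than independent ones, so the classical test is anti-conservative.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    d = x - x.mean()
    r = np.sum(d[:-1] * d[1:]) / np.sum(d * d)   # lag-1 autocorrelation
    r = max(r, 0.0)
    n_eff = n * (1 - r) / (1 + r)
    t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n_eff))
    p = 2 * stats.t.sf(abs(t), df=n_eff - 1)
    return t, p, n_eff

# AR(1) series with true mean 0 and autocorrelation 0.6
rng = np.random.default_rng(2)
x = np.zeros(500)
for i in range(1, 500):
    x[i] = 0.6 * x[i - 1] + rng.normal()
t, p, n_eff = autocorr_adjusted_ttest(x)
```

For rho near 0.6 the 500 correlated observations behave like roughly 125 independent ones, which is why ignoring the correlation inflates false-positive rates.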
Advances in statistical methods to map quantitative trait loci in outbred populations.
Hoeschele, I; Uimari, P; Grignola, F E; Zhang, Q; Gage, K M
1997-11-01
Statistical methods to map quantitative trait loci (QTL) in outbred populations are reviewed, extensions and applications to human and plant genetic data are indicated, and areas for further research are identified. Simple and computationally inexpensive methods include (multiple) linear regression of phenotype on marker genotypes and regression of squared phenotypic differences among relative pairs on estimated proportions of identity-by-descent at a locus. These methods are less suited for genetic parameter estimation in outbred populations but allow the determination of test statistic distributions via simulation or data permutation; however, further inferences including confidence intervals of QTL location require the use of Monte Carlo or bootstrap sampling techniques. A method which is intermediate in computational requirements is residual maximum likelihood (REML) with a covariance matrix of random QTL effects conditional on information from multiple linked markers. Testing for the number of QTLs on a chromosome is difficult in a classical framework. The computationally most demanding methods are maximum likelihood and Bayesian analysis, which take account of the distribution of multilocus marker-QTL genotypes on a pedigree and permit investigators to fit different models of variation at the QTL. The Bayesian analysis includes the number of QTLs on a chromosome as an unknown.
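The "computationally inexpensive" end of the spectrum described above, regression of phenotype on marker genotype with a permutation-based null distribution, can be sketched on synthetic data. The genotype coding and effect size below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
marker = rng.integers(0, 3, size=n)                  # genotype coded 0/1/2
phenotype = 0.5 * marker + rng.normal(0, 1, size=n)  # QTL explains some variance

def f_statistic(y, g):
    # One-way F statistic for phenotype against marker genotype class.
    groups = [y[g == k] for k in np.unique(g)]
    grand = y.mean()
    ss_between = sum(len(v) * (v.mean() - grand) ** 2 for v in groups)
    ss_within = sum(((v - v.mean()) ** 2).sum() for v in groups)
    df_b, df_w = len(groups) - 1, len(y) - len(groups)
    return (ss_between / df_b) / (ss_within / df_w)

obs = f_statistic(phenotype, marker)
# Permutation test: shuffle phenotypes to build the null distribution of
# the test statistic, as the abstract notes for regression-based mapping.
null = np.array([f_statistic(rng.permutation(phenotype), marker)
                 for _ in range(1000)])
p_value = (1 + (null >= obs).sum()) / (1 + len(null))
```

The permutation step is what makes the approach attractive when distributional assumptions are dubious; as the abstract points out, confidence intervals for QTL location would additionally require bootstrap or Monte Carlo resampling.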
NASA Astrophysics Data System (ADS)
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often differ distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the questions of which of the available kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterizations and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based on (a) a forward-modeling scheme that computes synthetic body waves for a given kinematic rupture model, and (b) the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters of synthetic earthquakes (i.e., earthquakes for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identifying this cross-over is important because it reveals the resolution power of the studied data set (i.e., teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that the data actually support. In addition, the Bayesian approach maps complete posterior probability density functions of the desired kinematic source parameters, enabling a rigorous assessment of the uncertainties in earthquake source inversions.
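The Bayesian sample-selection machinery can be illustrated with a toy Metropolis-Hastings inversion of a single "rupture" parameter from a synthetic waveform. The real study uses the QUESO library and a full kinematic forward model; the sine-wave forward model and uniform prior below are purely stand-ins:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy forward model: a "waveform" whose amplitude is the unknown
# parameter (stand-in for a full kinematic rupture simulation).
t = np.linspace(0, 1, 100)
def forward(a):
    return a * np.sin(2 * np.pi * 3 * t)

true_a, sigma = 2.0, 0.3
data = forward(true_a) + rng.normal(0, sigma, t.size)

def log_post(a):
    if not 0 < a < 10:              # uniform prior on (0, 10)
        return -np.inf
    r = data - forward(a)
    return -0.5 * np.sum(r**2) / sigma**2

# Metropolis-Hastings sampling of the posterior
samples, a = [], 5.0
lp = log_post(a)
for _ in range(5000):
    prop = a + rng.normal(0, 0.2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        a, lp = prop, lp_prop
    samples.append(a)
posterior = np.array(samples[1000:])   # discard burn-in
```

The point of the abstract's cross-over analysis is visible even in this toy: the posterior width, not just the best-fit value, tells you what resolution the data support.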
Statistical approaches to account for false-positive errors in environmental DNA samples.
Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid
2016-05-01
Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies.
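A sketch of the kind of false-positive occupancy model the abstract advocates: each site is occupied with probability psi, detections arise at rate p11 at occupied sites and at false-positive rate p10 at unoccupied ones, and all three parameters are estimated by maximum likelihood. The data are simulated, and the authors' actual model and code may differ:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

rng = np.random.default_rng(5)
S, K = 500, 6                       # sites, PCR replicates per site
psi, p11, p10 = 0.6, 0.7, 0.05      # occupancy, detection, false-positive rate
occupied = rng.random(S) < psi
y = rng.binomial(K, np.where(occupied, p11, p10))   # detections per site

def nll(theta):
    # Mixture likelihood per site: occupied (rate p11) vs unoccupied (p10);
    # parameters optimised on the logit scale so they stay in (0, 1).
    ps, d, f = 1.0 / (1.0 + np.exp(-np.asarray(theta)))
    lik = ps * binom.pmf(y, K, d) + (1 - ps) * binom.pmf(y, K, f)
    return -np.sum(np.log(lik + 1e-300))

res = minimize(nll, x0=[0.0, 1.0, -2.0], method="Nelder-Mead")
psi_hat, p11_hat, p10_hat = 1.0 / (1.0 + np.exp(-res.x))
```

Because the likelihood explicitly carries p10, the occupancy estimate psi_hat is not inflated by sporadic false-positive PCRs, which is precisely the bias the abstract warns about when single detections are handled ad hoc.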
NASA Astrophysics Data System (ADS)
Fernández-González, Daniel; Martín-Duarte, Ramón; Ruiz-Bustinza, Íñigo; Mochón, Javier; González-Gasca, Carmen; Verdeja, Luis Felipe
2016-08-01
Blast furnace operators expect sinter with homogeneous and regular properties (chemical and mechanical), necessary to ensure regular blast furnace operation. Blends for sintering also include several iron by-products and other wastes obtained in different processes inside the steelworks. Because of their source, the availability of such materials is not always consistent, but their total production should be consumed in the sintering process, both to save money and to recycle wastes. The main aim of this paper is to obtain the least expensive iron ore blend for the sintering process that still provides chemical and mechanical features suitable for homogeneous and regular operation of the blast furnace. Statistical tools were applied systematically to historical data, including linear and partial correlations and fuzzy clustering based on the Sugeno fuzzy inference system, to establish relationships among the available variables.
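The fuzzy-clustering step can be illustrated with plain fuzzy c-means on invented two-variable blend data. The paper uses clustering based on the Sugeno fuzzy inference system; c-means is a simpler stand-in, and the "basicity / Fe content" variables below are hypothetical:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=100, seed=0):
    """Plain fuzzy c-means: soft memberships instead of hard labels."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # fuzzy memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True) # standard FCM update
    return centers, U

# Synthetic sinter-blend data: two operating regimes of (basicity, Fe %)
rng = np.random.default_rng(6)
X = np.vstack([rng.normal([1.8, 57.0], 0.1, (100, 2)),
               rng.normal([2.2, 59.0], 0.1, (100, 2))])
centers, U = fuzzy_c_means(X)
```

The soft memberships in U are what make fuzzy clustering attractive for blend data, where a given historical lot rarely belongs cleanly to a single operating regime.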
Recent treatment advances and novel therapeutic approaches in epilepsy
Serrano, Enrique
2015-01-01
The purpose of this article is to review recent advances in the treatment of epilepsy. It includes five antiepileptic drugs that have been recently added to the pharmacologic armamentarium and surgical techniques that have been developed in the last few years. Finally, we review ongoing research that may have a potential role in future treatments of epilepsy. PMID:26097734
Measuring Alumna Career Advancement: An Approach Based on Educational Expectations.
ERIC Educational Resources Information Center
Ben-Ur, Tamar; Rogers, Glen
Alverno College (Wisconsin), a women's liberal arts college, has developed an Alumni Career Level Classification (AACLC) scheme to measure alumna career advancement and demonstrate institutional accountability. This validation study was part of a larger longitudinal study of two entire cohorts of students entering the college in 1976 and 1977, of…
Optogenetic and Chemogenetic Approaches To Advance Monitoring Molecules.
McElligott, Zoé
2015-07-15
Fast-scan cyclic voltammetry (FSCV) is a high-resolution technique used to investigate neurotransmission in vitro, ex vivo, and in vivo. In this Viewpoint, I discuss how optogenetic and chemogenetic methods, when combined with FSCV, can impact and advance our understanding of neurotransmission and enable more detailed investigation of the roles of neurotransmitter systems in normal and disease states.
NASA Technical Reports Server (NTRS)
Kramer, Lynda J.; Busquets, Anthony M.
2000-01-01
A simulation experiment was performed to assess situation awareness (SA) and workload of pilots while monitoring simulated autoland operations in Instrument Meteorological Conditions with three advanced display concepts: two enhanced electronic flight information system (EFIS)-type display concepts and one totally synthetic, integrated pictorial display concept. Each concept incorporated sensor-derived wireframe runway and iconic depictions of sensor-detected traffic in different locations on the display media. Various scenarios, involving conflicting traffic situation assessments, main display failures, and navigation/autopilot system errors, were used to assess the pilots' SA and workload during autoland approaches with the display concepts. From the results, for each scenario, the integrated pictorial display concept provided the pilots with statistically equivalent or substantially improved SA over the other display concepts. In addition to increased SA, subjective rankings indicated that the pictorial concept offered reductions in overall pilot workload (in both mean ranking and spread) over the two enhanced EFIS-type display concepts. Out of the display concepts flown, the pilots ranked the pictorial concept as the display that was easiest to use to maintain situational awareness, to monitor an autoland approach, to interpret information from the runway and obstacle detecting sensor systems, and to make the decision to go around.
McManamay, Ryan A
2014-01-01
Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but sufficient in classifying responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent on hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and that predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes, are all sources of increased error when predicting hydrologic responses to dams. Statistical models such as the ones presented herein show promise in their ability to model the effects of dam regulation at
NASA Astrophysics Data System (ADS)
Mfumu Kihumba, Antoine; Vanclooster, Marnik
2013-04-01
Drinking water in Kinshasa, the capital of the Democratic Republic of Congo, is provided by extracting groundwater from the local aquifer, particularly in peripheral areas. The exploited groundwater body is mainly unconfined and located within a continuous detrital aquifer, primarily composed of sedimentary formations. However, the aquifer is subjected to an increasing threat of anthropogenic pollution pressure. Understanding the detailed origin of this pollution pressure is important for sustainable drinking water management in Kinshasa. The present study aims to explain the observed nitrate pollution problem, nitrate being considered a good tracer for other pollution threats. The analysis is made in terms of readily available physical attributes, using a statistical modelling approach. The nitrate data were taken from a historical groundwater quality assessment study and re-analysed. The physical attributes are related to the topography, land use, geology and hydrogeology of the region. Prior to the statistical modelling, intrinsic and specific vulnerability to nitrate pollution was assessed. This vulnerability assessment showed that the alluvium area in the northern part of the region, which consists of urban land use with poor sanitation, is the most vulnerable. Re-analysis of the nitrate pollution data demonstrated that the spatial variability of nitrate concentrations in the groundwater body is high, and coherent with the fragmented land use of the region and with the intrinsic and specific vulnerability maps. The statistical modelling used multiple regression and regression tree analysis. The results demonstrated the significant impact of land use variables on the Kinshasa groundwater nitrate pollution and the need for a detailed delineation of groundwater capture zones around the monitoring stations. Key words: Groundwater, Isotopic, Kinshasa, Modelling, Pollution, Physico-chemical.
Kumar, Ramya; Lahann, Joerg
2016-07-01
The performance of polymer interfaces in biology is governed by a wide spectrum of interfacial properties. With the ultimate goal of identifying design parameters for stem cell culture coatings, we developed a statistical model that describes the dependence of brush properties on surface-initiated polymerization (SIP) parameters. Employing a design of experiments (DOE) approach, we identified operating boundaries within which four gel architecture regimes can be realized, including a new regime of associated brushes in thin films. Our statistical model can accurately predict the brush thickness and the degree of intermolecular association of poly[{2-(methacryloyloxy) ethyl} dimethyl-(3-sulfopropyl) ammonium hydroxide] (PMEDSAH), a previously reported synthetic substrate for feeder-free and xeno-free culture of human embryonic stem cells. DOE-based multifunctional predictions offer a powerful quantitative framework for designing polymer interfaces. For example, model predictions can be used to decrease the critical thickness at which the wettability transition occurs by simply increasing the catalyst quantity from 1 to 3 mol %.
Drought episodes over Greece as simulated by dynamical and statistical downscaling approaches
NASA Astrophysics Data System (ADS)
Anagnostopoulou, Christina
2016-04-01
Drought over the Greek region is characterized by a strong seasonal cycle and large spatial variability. Dry spells longer than 10 consecutive days largely determine the duration and intensity of Greek drought. An increasing trend in the frequency of drought episodes has been observed, especially during the last 20 years of the 20th century. Moreover, the most recent regional climate models (RCMs) show discrepancies relative to observed precipitation, although they are able to reproduce the main patterns of atmospheric circulation. In this study, both a statistical and a dynamical downscaling approach are used to quantify drought episodes over Greece by simulating the Standardized Precipitation Index (SPI) at different time steps (3, 6, and 12 months). A statistical downscaling technique based on an artificial neural network (ANN) is employed to estimate the SPI over Greece, and the same drought index is also estimated from RCM precipitation for the period 1961-1990. Overall, the drought characteristics (intensity, duration, and spatial extent) were well reproduced by the regional climate models for the long-term drought index (SPI12), while the ANN simulations were better for the short-term drought index (SPI3).
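The SPI itself is computable from a precipitation series by fitting a gamma distribution to rolling sums and mapping the quantiles onto the standard normal. A minimal sketch on synthetic monthly data follows; operational SPI additionally handles zero-precipitation months with a mixed distribution, which is omitted here:

```python
import numpy as np
from scipy import stats

def spi(precip, scale=3):
    """Standardized Precipitation Index at the given monthly scale.

    Fits a gamma distribution to `scale`-month rolling precipitation
    sums and transforms each sum's quantile to a standard-normal score,
    so SPI < -1 flags moderate drought, SPI < -2 extreme drought.
    """
    sums = np.convolve(precip, np.ones(scale), mode="valid")
    a, loc, b = stats.gamma.fit(sums, floc=0)
    cdf = stats.gamma.cdf(sums, a, loc=loc, scale=b)
    return stats.norm.ppf(np.clip(cdf, 1e-6, 1 - 1e-6))

# 30 years of synthetic monthly precipitation (mm), gamma-distributed
rng = np.random.default_rng(7)
monthly_precip = rng.gamma(2.0, 30.0, size=360)
spi3 = spi(monthly_precip, scale=3)
```

The same function with scale=12 yields the SPI12 used in the study for long-term drought; only the length of the rolling window changes.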
Chemical entity recognition in patents by combining dictionary-based and statistical approaches.
Akhondi, Saber A; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F H; Hettne, Kristina M; van Mulligen, Erik M; Kors, Jan A
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small. Database URL: http://biosemantics.org/chemdner-patents. PMID:27141091
Chemical entity recognition in patents by combining dictionary-based and statistical approaches.
Akhondi, Saber A; Pons, Ewoud; Afzal, Zubair; van Haagen, Herman; Becker, Benedikt F H; Hettne, Kristina M; van Mulligen, Erik M; Kors, Jan A
2016-01-01
We describe the development of a chemical entity recognition system and its application in the CHEMDNER-patent track of BioCreative 2015. This community challenge includes a Chemical Entity Mention in Patents (CEMP) recognition task and a Chemical Passage Detection (CPD) classification task. We addressed both tasks by an ensemble system that combines a dictionary-based approach with a statistical one. For this purpose the performance of several lexical resources was assessed using Peregrine, our open-source indexing engine. We combined our dictionary-based results on the patent corpus with the results of tmChem, a chemical recognizer using a conditional random field classifier. To improve the performance of tmChem, we utilized three additional features, viz. part-of-speech tags, lemmas and word-vector clusters. When evaluated on the training data, our final system obtained an F-score of 85.21% for the CEMP task, and an accuracy of 91.53% for the CPD task. On the test set, the best system ranked sixth among 21 teams for CEMP with an F-score of 86.82%, and second among nine teams for CPD with an accuracy of 94.23%. The differences in performance between the best ensemble system and the statistical system separately were small.Database URL: http://biosemantics.org/chemdner-patents.
Generalized Deam-Edwards approach to the statistical mechanics of randomly crosslinked systems
NASA Astrophysics Data System (ADS)
Xing, Xiangjun; Lu, Bing-Sui; Ye, Fangfu; Goldbart, Paul M.
2013-08-01
We address the statistical mechanics of randomly and permanently crosslinked networks. We develop a theoretical framework (vulcanization theory) that can be used to systematically analyze the correlation between the statistical properties of random networks and their histories of formation. Generalizing the original idea of Deam and Edwards, we consider an instantaneous crosslinking process in which all crosslinkers (modeled as Gaussian springs) are introduced randomly at once in an equilibrium liquid state, referred to as the preparation state. The probability that two functional sites are crosslinked by a spring decreases exponentially with the square of their distance. After formally averaging over network connectivity, we obtain an effective theory with all degrees of freedom replicated 1 + n times. Two thermodynamic ensembles naturally appear in this theory: the preparation ensemble, which describes the thermodynamic fluctuations in the state of preparation, and the measurement ensemble, which describes those in the state of measurement. We classify various correlation functions and discuss their physical significance. In particular, the memory correlation functions characterize how the properties of networks depend on their method of preparation, and are the hallmark properties of all randomly crosslinked materials. We clarify the essential difference between our approach and that of Deam and Edwards, and discuss the saddle-point order parameter and its physical significance. Finally, we discuss the connection between the saddle-point approximation of vulcanization theory and the classical theory of rubber elasticity, as well as the neo-classical theory of nematic elastomers.
A risk-based approach to management of leachables utilizing statistical analysis of extractables.
Stults, Cheryl L M; Mikl, Jaromir; Whelehan, Oliver; Morrical, Bradley; Duffield, William; Nagao, Lee M
2015-04-01
To incorporate quality by design concepts into the management of leachables, an emphasis is often put on understanding the extractable profile for the materials of construction for manufacturing disposables, container-closure, or delivery systems. Component manufacturing processes may also impact the extractable profile. An approach was developed to (1) identify critical components that may be sources of leachables, (2) enable an understanding of manufacturing process factors that affect extractable profiles, (3) determine if quantitative models can be developed that predict the effect of those key factors, and (4) evaluate the practical impact of the key factors on the product. A risk evaluation for an inhalation product identified injection molding as a key process. Designed experiments were performed to evaluate the impact of molding process parameters on the extractable profile from an ABS inhaler component. Statistical analysis of the resulting GC chromatographic profiles identified processing factors that were correlated with peak levels in the extractable profiles. The combination of statistically significant molding process parameters was different for different types of extractable compounds. ANOVA models were used to obtain optimal process settings and predict extractable levels for a selected number of compounds. The proposed paradigm may be applied to evaluate the impact of material composition and processing parameters on extractable profiles and utilized to manage product leachables early in the development process and throughout the product lifecycle. PMID:25294001
Wesonga, Ronald; Nabugoomu, Fabian
2016-01-01
The study derives a framework for assessing airport efficiency by evaluating optimal arrival and departure delay thresholds. Assumptions of airport efficiency measurements, though based on minimum numeric values such as 15 min of turnaround time, cannot be extrapolated to determine the proportion of delay-days at an airport. This study explored the concept of a delay threshold to determine the proportion of delay-days, as an expansion of the theory of delay and our previous work. A data-driven approach using statistical modelling was applied to a limited set of determinants of daily delay at an airport. To test the efficacy of the threshold levels, operational data for Entebbe International Airport were used as a case study. Findings show differences in the proportions of delay at departure (μ = 0.499; 95 % CI = 0.023) and arrival (μ = 0.363; 95 % CI = 0.022). A multivariate logistic model confirmed an optimal daily departure and arrival delay threshold of 60 % for the airport, given the four probable thresholds {50, 60, 70, 80}. The choice of threshold value was based on the number of significant determinants, goodness-of-fit statistics based on the Wald test, and the area under the receiver operating characteristic curves. These findings propose a modelling framework to generate information relevant to Air Traffic Management for planning and measuring airport operational efficiency. PMID:27441145
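The multivariate logistic model for classifying delay-days can be sketched as a Bernoulli likelihood maximized over coefficients. The two "determinants" below (traffic volume, bad-weather hours) are hypothetical placeholders, not the study's actual variables:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(8)
n = 2000
# Hypothetical standardized daily determinants of delay
traffic = rng.normal(0, 1, n)
weather = rng.normal(0, 1, n)
X = np.column_stack([np.ones(n), traffic, weather])
logit = -0.5 + 1.2 * traffic + 0.8 * weather
delay_day = rng.random(n) < 1 / (1 + np.exp(-logit))  # day exceeds threshold?

def nll(beta):
    eta = X @ beta
    # Numerically stable Bernoulli negative log-likelihood
    return np.sum(np.logaddexp(0, eta) - delay_day * eta)

beta_hat = minimize(nll, np.zeros(3), method="BFGS").x
```

Refitting this model with the binary outcome defined at each candidate threshold (50, 60, 70, 80 %) and comparing significant determinants, Wald statistics and ROC areas mirrors the selection procedure the abstract describes.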
Bajaj, Ishwar B; Saudagar, Parag S; Singhal, Rekha S; Pandey, Ashok
2006-09-01
Gellan gum, a high-molecular-weight anionic linear polysaccharide produced by pure-culture fermentation of Sphingomonas paucimobilis ATCC 31461, has elicited industrial interest in recent years as a high-viscosity biogum, a suspending agent, a gelling agent, and an agar substitute in microbial media. In this paper we report the optimization of gellan gum production using a statistical approach. In the first step, the one-factor-at-a-time method was used to investigate the effect of medium constituents such as carbon and nitrogen sources; subsequently, an intuitive analysis based on statistical calculations was carried out using the L16 orthogonal array method. The design for the L16 orthogonal array was developed and analyzed using MINITAB 13.30 software. All fermentation runs were carried out at 30 ± 2 °C on a rotary orbital shaker at 180 rpm for 48 h. In the second step, the effects of amino acids and gellan precursors such as uridine-5'-diphosphate (UDP) and adenosine-5'-diphosphate (ADP) on the fermentative production of gellan gum were studied. Medium containing 4% soluble starch, 0.025% yeast extract, 1.0 mM ADP and 0.05% tryptophan gave a maximum yield of 43.6 g l(-1) starch-free gellan gum, significantly higher than values reported in the literature.
NASA Astrophysics Data System (ADS)
Combes, Frédéric; Trescher, Maximilian; Piéchon, Frédéric; Fuchs, Jean-Noël
2016-10-01
We develop a theory for the analytic computation of the free energy of band insulators in the presence of a uniform and constant electric field. The two key ingredients are a perturbation-like expression of the Wannier-Stark energy spectrum of electrons and a modified statistical mechanics approach involving a local chemical potential in order to deal with the unbounded spectrum and impose the physically relevant electronic filling. At first order in the field, we recover the result of King-Smith, Vanderbilt, and Resta for the electric polarization in terms of a Zak phase—albeit at finite temperature—and, at second order, deduce a general formula for the electric susceptibility, or equivalently for the dielectric constant. Advantages of our method are the validity of the formalism both at zero and finite temperature and the easy computation of higher order derivatives of the free energy. We verify our findings on two different one-dimensional tight-binding models.
Three-dimensional building detection and modeling using a statistical approach.
Cord, M; Declercq, D
2001-01-01
In this paper, we address the problem of building reconstruction in high-resolution stereoscopic aerial imagery. We present a hierarchical strategy to detect and model buildings in urban sites, based on a global focusing process followed by local modeling. In the first step, we extract the building regions by fully exploiting the depth information obtained with a new adaptive correlation stereo matching. In the modeling step, we propose a statistical approach that is competitive with sequential methods using segmentation and modeling. This parametric method is based on a multiplane model of the data, interpreted as a mixture model. From a Bayesian point of view, the so-called augmentation of the model with indicator variables allows the use of stochastic algorithms to achieve both model parameter estimation and plane segmentation. We then report a Monte Carlo study of the performance of the stochastic algorithm on synthetic data, before displaying results on real data.
Rapp, J.B.
1991-01-01
Q-mode factor analysis was used to quantitate the distribution of the major aliphatic hydrocarbon (n-alkanes, pristane, phytane) systems in sediments from a variety of marine environments. The compositions of the pure end members of the systems were obtained from factor scores and the distribution of the systems within each sample was obtained from factor loadings. All the data, from the diverse environments sampled (estuarine (San Francisco Bay), fresh-water (San Francisco Peninsula), polar-marine (Antarctica) and geothermal-marine (Gorda Ridge) sediments), were reduced to three major systems: a terrestrial system (mostly high molecular weight aliphatics with odd-numbered-carbon predominance), a mature system (mostly low molecular weight aliphatics without predominance) and a system containing mostly high molecular weight aliphatics with even-numbered-carbon predominance. With this statistical approach, it is possible to assign the percentage contribution from various sources to the observed distribution of aliphatic hydrocarbons in each sediment sample. © 1991.
NASA Astrophysics Data System (ADS)
Haas, R.; Pinto, J. G.
2012-12-01
The occurrence of mid-latitude windstorms is related to strong socio-economic effects. For detailed and reliable regional impact studies, large datasets of high-resolution wind fields are required. In this study, a statistical downscaling approach in combination with dynamical downscaling is introduced to derive storm related gust speeds on a high-resolution grid over Europe. Multiple linear regression models are trained using reanalysis data and wind gusts from regional climate model simulations for a sample of 100 top ranking windstorm events. The method is computationally inexpensive and reproduces individual windstorm footprints adequately. Compared to observations, the results for Germany are at least as good as pure dynamical downscaling. This new tool can be easily applied to large ensembles of general circulation model simulations and thus contribute to a better understanding of the regional impact of windstorms based on decadal and climate change projections.
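A regression-based downscaling step like the one described can be sketched as an ordinary least squares fit of local gust speed on large-scale predictors. Everything below (the choice of predictors and all data values) is invented for illustration, not taken from the study:

```python
# Illustrative sketch of multiple linear regression for statistical
# downscaling: map large-scale predictors (here a made-up pressure-gradient
# proxy and 850-hPa wind speed) to a local gust speed.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_mlr(X, y):
    """Ordinary least squares via the normal equations X'X beta = X'y."""
    X = [[1.0] + row for row in X]          # prepend intercept column
    n, p = len(X), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    return solve(XtX, Xty)

# predictors: [pressure-gradient proxy, 850-hPa wind speed]; target: gust (m/s)
X = [[1.0, 10.0], [2.0, 14.0], [3.0, 18.0], [4.0, 25.0], [5.0, 29.0]]
y = [13.0, 19.0, 25.0, 34.0, 40.0]
beta = fit_mlr(X, y)                        # [intercept, slope1, slope2]
```

In the study's setting such a model would be trained per grid point on the 100 windstorm events and then applied to new large-scale fields.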
Strategists and Non-Strategists in Austrian Enterprises—Statistical Approaches
NASA Astrophysics Data System (ADS)
Duller, Christine
2011-09-01
The purpose of this work is to determine with a modern statistical approach which variables can indicate whether an arbitrary enterprise uses strategic management as basic business concept. "Strategic management is an ongoing process that evaluates and controls the business and the industries in which the company is involved; assesses its competitors and sets goals and strategies to meet all existing and potential competitors; and then reassesses each strategy annually or quarterly (i.e. regularly) to determine how it has been implemented and whether it has succeeded or needs replacement by a new strategy to meet changed circumstances, new technology, new competitors, a new economic environment or a new social, financial or political environment." [12] In Austria 70% to 80% of all enterprises can be classified as family firms. In literature the empirically untested hypothesis can be found that family firms tend to have less formalised management accounting systems than non-family enterprises. But it is unknown whether the use of strategic management accounting systems is influenced more by the fact of structure (family or non-family enterprise) or by the effect of size (number of employees). Therefore, the goal is to split up enterprises into two subgroups, namely strategists and non-strategists and to get information on the variables of influence (size, structure, branches, etc.). Two statistical approaches are used: On the one hand a classical cluster analysis is implemented to design two subgroups and on the other hand a latent class model is built up for this problem. After a description of the theoretical background first results of both strategies are compared.
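The first of the two approaches, a classical cluster analysis splitting enterprises into two subgroups, can be sketched with a minimal two-cluster k-means. The firm-level feature values below are invented for illustration:

```python
# Minimal pure-Python two-cluster k-means, illustrating the idea of splitting
# enterprises into "strategists" vs "non-strategists" from firm-level
# variables. All feature values are made up.

def kmeans2(points, iters=20):
    """Two-cluster k-means with deterministic initial centroids (the
    lexicographic extremes). Assumes neither cluster empties, which holds
    for this toy data."""
    c = [min(points), max(points)]
    groups = ([], [])
    for _ in range(iters):
        groups = ([], [])
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, ci)) for ci in c]
            groups[d.index(min(d))].append(p)
        c = [tuple(sum(v) / len(g) for v in zip(*g)) for g in groups]
    return c, groups

# (employees in hundreds, strategic-planning score 0-10)
firms = [(0.2, 1.0), (0.3, 2.0), (0.4, 1.5),     # small, informal
         (3.0, 8.0), (3.5, 9.0), (2.8, 7.5)]     # large, formalized
centroids, clusters = kmeans2(firms)
```

Comparing such a partition with a latent class model on the same variables is exactly the kind of cross-check the abstract describes.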
Wall, Melanie M.; Larson, Nicole I.; Forsyth, Ann; Van Riper, David C.; Graham, Dan J.; Story, Mary T.; Neumark-Sztainer, Dianne R.
2012-01-01
Background Few studies have addressed the potential influence of neighborhood characteristics on adolescent obesity risk and findings have been inconsistent. Purpose Identify patterns among neighborhood food, physical activity, street/transportation, and socioeconomic characteristics and examine their associations with adolescent weight status using three statistical approaches. Methods Anthropometric measures were taken on 2,682 adolescents (53% female, mean age=14.5) from 20 Minneapolis/St. Paul, Minnesota schools in 2009–2010. Neighborhood environmental variables were measured using Geographic Information Systems data and by survey. Gender-stratified regressions related BMI z-scores and obesity to 1) separate neighborhood variables 2) composites formed using factor analysis and 3) clusters identified using spatial latent class analysis in 2012. Results Regressions on separate neighborhood variables found low percentage of parks/recreation and low perceived safety were associated with higher BMI z-scores in boys and girls. Factor analysis found five factors: away-from-home food and recreation accessibility, community disadvantage, green space, retail/transit density, and supermarket accessibility. The first two factors were associated with BMI z-score in girls but not in boys. Spatial latent class analysis identified six clusters with complex combinations of both positive and negative environmental influences. In boys, the cluster with highest obesity (29.8%) included low socioeconomics, parks/recreation, and safety; high restaurant and convenience store density; and nearby access to gyms, supermarkets, and many transit stops. Conclusions The mix of neighborhood-level barriers and facilitators of weight-related health behaviors leads to difficulties disentangling their associations with adolescent obesity; however, statistical approaches including factor and latent class analysis may provide useful means for addressing this complexity. PMID:22516505
Statistical downscaling of rainfall: a non-stationary and multi-resolution approach
NASA Astrophysics Data System (ADS)
Rashid, Md. Mamunur; Beecham, Simon; Chowdhury, Rezaul Kabir
2016-05-01
A novel downscaling technique is proposed in this study whereby the original rainfall and reanalysis variables are first decomposed by wavelet transforms and rainfall is modelled using the semi-parametric additive model formulation of Generalized Additive Model in Location, Scale and Shape (GAMLSS). The flexibility of the GAMLSS model makes it feasible as a framework for non-stationary modelling. Decomposition of a rainfall series into different components is useful to separate the scale-dependent properties of the rainfall as this varies both temporally and spatially. The study was conducted at the Onkaparinga river catchment in South Australia. The model was calibrated over the period 1960 to 1990 and validated over the period 1991 to 2010. The model reproduced the monthly variability and statistics of the observed rainfall well with Nash-Sutcliffe efficiency (NSE) values of 0.66 and 0.65 for the calibration and validation periods, respectively. It also reproduced the seasonal rainfall well over the calibration (NSE = 0.37) and validation (NSE = 0.69) periods for all seasons. The proposed model was better than the traditional modelling approach (application of GAMLSS to the original rainfall series without decomposition) at reproducing the time-frequency properties of the observed rainfall, and yet it still preserved the statistics produced by the traditional modelling approach. When downscaling models were developed with general circulation model (GCM) historical output datasets, the proposed wavelet-based downscaling model outperformed the traditional downscaling model in terms of reproducing monthly rainfall for both the calibration and validation periods.
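The Nash-Sutcliffe efficiency used above as the skill score is straightforward to compute; the sketch below uses made-up monthly rainfall values, not the study's data:

```python
# Nash-Sutcliffe efficiency (NSE): 1 is a perfect match; 0 means the model
# is no better a predictor than the mean of the observations.

def nse(obs, sim):
    mean_obs = sum(obs) / len(obs)
    num = sum((o - s) ** 2 for o, s in zip(obs, sim))
    den = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - num / den

observed  = [20.0, 35.0, 50.0, 80.0, 60.0, 30.0]   # invented rainfall (mm)
simulated = [22.0, 33.0, 55.0, 75.0, 58.0, 34.0]
skill = nse(observed, simulated)   # close to 1 for this close fit
```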
NASA Astrophysics Data System (ADS)
Donner, Reik; Passow, Christian
2016-04-01
The appropriate statistical evaluation of recent changes in the occurrence of hydro-meteorological extreme events is of key importance for identifying trends in the behavior of climate extremes and associated impacts on ecosystems or technological infrastructures, as well as for validating the capability of models used for future climate scenarios to correctly represent such trends in the past decades. In this context, most recent studies have utilized conceptual approaches from extreme value theory based on parametric descriptions of the probability distribution functions of extremes. However, the application of such methods is faced with a few fundamental challenges: (1) The application of the most widely used approaches of generalized extreme value (GEV) or generalized Pareto (GP) distributions is based on assumptions the validity of which can often be hardly proven. (2) Due to the differentiation between extreme and normal values (peaks-over-threshold, block maxima), much information on the distribution of the variable of interest is not used at all by such methods, implying that the sample size of values effectively used for estimating the parameters of the GEV or GP distributions is largely limited for typical lengths of observational series. (3) The problem of parameter estimation is further compounded by the variety of possible statistical models mapping different aspects of temporal changes of extremes like seasonality or possibly non-linear trends. Reliably identifying the most appropriate model is a challenging task for the lengths of typical observational series. As an alternative to approaches based on extreme value theory, there have been a few attempts to transfer quantile regression approaches to statistically describing the time-dependence of climate extremes. In this context, a value exceeding a certain instantaneous percentile of the time-dependent probability distribution function of the data under study is considered to be an extreme event. In
Augustine, Swinburne A J; Simmons, Kaneatra J; Eason, Tarsha N; Griffin, Shannon M; Curioso, Clarissa L; Wymer, Larry J; Fout, G Shay; Grimm, Ann C; Oshima, Kevin H; Dufour, Al
2015-10-01
There are numerous pathogens that can be transmitted through water. Identifying and understanding the routes and magnitude of exposure or infection to these microbial contaminants are critical to assessing and mitigating risk. Conventional approaches of studying immunological responses to exposure or infection such as Enzyme-Linked Immunosorbent Assays (ELISAs) and other monoplex antibody-based immunoassays can be very costly, laborious, and consume large quantities of patient sample. A major limitation of these approaches is that they can only be used to measure one analyte at a time. Multiplex immunoassays provide the ability to study multiple pathogens simultaneously in microliter volumes of samples. However, there are several challenges that must be addressed when developing these multiplex immunoassays such as selection of specific antigens and antibodies, cross-reactivity, calibration, protein-reagent interferences, and the need for rigorous optimization of protein concentrations. In this study, a Design of Experiments (DOE) approach was used to optimize reagent concentrations for coupling selected antigens to Luminex™ xMAP microspheres for use in an indirect capture, multiplex immunoassay to detect human exposure or infection from pathogens that are potentially transmitted through water. Results from Helicobacter pylori, Campylobacter jejuni, Escherichia coli O157:H7, and Salmonella typhimurium singleplexes were used to determine the mean concentrations that would be applied to the multiplex assay. Cut-offs to differentiate between exposed and non-exposed individuals were determined using finite mixed modeling (FMM). The statistical approaches developed facilitated the detection of Immunoglobulin G (IgG) antibodies to H. pylori, C. jejuni, Toxoplasma gondii, hepatitis A virus, rotavirus and noroviruses (VA387 and Norwalk strains) in fifty-four diagnostically characterized plasma samples. Of the characterized samples, the detection rate was 87.5% for H
A three-dimensional statistical approach to improved image quality for multislice helical CT
Thibault, Jean-Baptiste; Sauer, Ken D.; Bouman, Charles A.; Hsieh, Jiang
2007-11-15
Multislice helical computed tomography scanning offers the advantages of faster acquisition and wide organ coverage for routine clinical diagnostic purposes. However, image reconstruction is faced with the challenges of three-dimensional cone-beam geometry, data completeness issues, and low dosage. Of all available reconstruction methods, statistical iterative reconstruction (IR) techniques appear particularly promising since they provide the flexibility of accurate physical noise modeling and geometric system description. In this paper, we present the application of Bayesian iterative algorithms to real 3D multislice helical data to demonstrate significant image quality improvement over conventional techniques. We also introduce a novel prior distribution designed to provide flexibility in its parameters to fine-tune image quality. Specifically, enhanced image resolution and lower noise have been achieved, concurrently with the reduction of helical cone-beam artifacts, as demonstrated by phantom studies. Clinical results also illustrate the capabilities of the algorithm on real patient data. Although computational load remains a significant challenge for practical development, superior image quality combined with advancements in computing technology make IR techniques a legitimate candidate for future clinical applications.
Abut, Fatih; Akay, Mehmet Fatih
2015-01-01
Maximal oxygen uptake (VO2max) indicates how many milliliters of oxygen the body can consume in a state of intense exercise per minute. VO2max plays an important role in both sport and medical sciences for different purposes, such as indicating the endurance capacity of athletes or serving as a metric in estimating the disease risk of a person. In general, the direct measurement of VO2max provides the most accurate assessment of aerobic power. However, despite a high level of accuracy, practical limitations associated with the direct measurement of VO2max, such as the requirement of expensive and sophisticated laboratory equipment or trained staff, have led to the development of various regression models for predicting VO2max. Consequently, a lot of studies have been conducted in the last years to predict VO2max of various target audiences, ranging from soccer athletes, nonexpert swimmers, cross-country skiers to healthy-fit adults, teenagers, and children. Numerous prediction models have been developed using different sets of predictor variables and a variety of machine learning and statistical methods, including support vector machine, multilayer perceptron, general regression neural network, and multiple linear regression. The purpose of this study is to give a detailed overview about the data-driven modeling studies for the prediction of VO2max conducted in recent years and to compare the performance of various VO2max prediction models reported in related literature in terms of two well-known metrics, namely, multiple correlation coefficient (R) and standard error of estimate. The survey results reveal that with respect to regression methods used to develop prediction models, support vector machine, in general, shows better performance than other methods, whereas multiple linear regression exhibits the worst performance. PMID:26346869
NASA Astrophysics Data System (ADS)
Herschtal, A.; Foroudi, F.; Greer, P. B.; Eade, T. N.; Hindson, B. R.; Kron, T.
2012-05-01
Early approaches to characterizing errors in target displacement during a fractionated course of radiotherapy assumed that the underlying fraction-to-fraction variability in target displacement, known as the ‘treatment error’ or ‘random error’, could be regarded as constant across patients. More recent approaches have modelled target displacement allowing for differences in random error between patients. However, until recently it has not been feasible to compare the goodness of fit of alternate models of random error rigorously. This is because the large volumes of real patient data necessary to distinguish between alternative models have only very recently become available. This work uses real-world displacement data collected from 365 patients undergoing radical radiotherapy for prostate cancer to compare five candidate models for target displacement. The simplest model assumes constant random errors across patients, while other models allow for random errors that vary according to one of several candidate distributions. Bayesian statistics and Markov Chain Monte Carlo simulation of the model parameters are used to compare model goodness of fit. We conclude that modelling the random error as inverse gamma distributed provides a clearly superior fit over all alternatives considered. This finding can facilitate more accurate margin recipes and correction strategies.
Blind image quality assessment: a natural scene statistics approach in the DCT domain.
Saad, Michele A; Bovik, Alan C; Charrier, Christophe
2012-08-01
We develop an efficient, general-purpose, blind/no-reference image quality assessment (NR-IQA) algorithm using a natural scene statistics (NSS) model of discrete cosine transform (DCT) coefficients. The algorithm is computationally appealing, given the availability of platforms optimized for DCT computation. The approach relies on a simple Bayesian inference model to predict image quality scores from features based on an NSS model of the image DCT coefficients; the estimated parameters of the model are used to form features that are indicative of perceptual quality. The resulting algorithm, which we name BLIINDS-II, requires minimal training and adopts a simple probabilistic model for score prediction. Given the features extracted from a test image, the quality score that maximizes the probability of the empirically determined inference model is chosen as the predicted quality score of that image. When tested on the LIVE IQA database, BLIINDS-II is shown to correlate highly with human judgments of quality, at a level that is competitive with the popular SSIM index.
Feron, Gilles; Ayed, Charfedinne; Qannari, El Mostafa; Courcoux, Philippe; Laboure, Hélène; Guichard, Elisabeth
2014-01-01
For human beings, the mouth is the first organ to perceive food and the different signalling events associated with food breakdown. These events are very complex and, as such, their description necessitates combining different data sets. This study proposed an integrated approach to understand the relative contribution of the main food oral processing events involved in aroma release during cheese consumption. In vivo aroma release was monitored on forty-eight subjects who were asked to eat four different model cheeses varying in fat content and firmness and flavoured with ethyl propanoate and nonan-2-one. A multiblock partial least square regression was performed to explain aroma release from the different physiological data sets (masticatory behaviour, bolus rheology, saliva composition and flux, mouth coating and bolus moistening). This statistical approach made it possible to show that aroma release was mostly explained by masticatory behaviour whatever the cheese and the aroma, with a specific influence of mean amplitude on aroma release after swallowing. Aroma release from the firmer cheeses was explained mainly by bolus rheology. The persistence of hydrophobic compounds in the breath was mainly explained by bolus spreadability, in close relation with bolus moistening. Resting saliva poorly contributed to the analysis, whereas the composition of stimulated saliva was negatively correlated with aroma release, mostly for soft cheeses, when significant. PMID:24691625
NASA Astrophysics Data System (ADS)
Zakaria, Chahnez; Curé, Olivier; Salzano, Gabriella; Smaïli, Kamel
In Computer Supported Cooperative Work (CSCW), it is crucial for project leaders to detect conflicting situations as early as possible. Generally, this task is performed manually by studying a set of documents exchanged between team members. In this paper, we propose a full-fledged automatic solution that identifies documents, subjects and actors involved in relational conflicts. Our approach detects conflicts in emails, probably the most popular type of documents in CSCW, but the methods used can handle other text-based documents. These methods rely on the combination of statistical and ontological operations. The proposed solution is decomposed in several steps: (i) we enrich a simple negative emotion ontology with terms occurring in the corpus of emails, (ii) we categorize each conflicting email according to the concepts of this ontology and (iii) we identify emails, subjects and team members involved in conflicting emails using possibilistic description logic and a set of proposed measures. Each of these steps is evaluated and validated on concrete examples. Moreover, this approach's framework is generic and can be easily adapted to domains other than conflicts, e.g. security issues, and extended with operations making use of our proposed set of measures.
A protocol for classifying ecologically relevant marine zones, a statistical approach
NASA Astrophysics Data System (ADS)
Verfaillie, Els; Degraer, Steven; Schelfaut, Kristien; Willems, Wouter; Van Lancker, Vera
2009-06-01
Mapping ecologically relevant zones in the marine environment has become increasingly important. Biological data are however often scarce and alternatives are being sought in optimal classifications of abiotic variables. The concept of 'marine landscapes' is based on a hierarchical classification of geological, hydrographic and other physical data. This approach is however subject to many assumptions and subjective decisions. An objective protocol for zonation is being proposed here where abiotic variables are subjected to a statistical approach, using principal components analysis (PCA) and a cluster analysis. The optimal number of clusters (or zones) is being defined using the Calinski-Harabasz criterion. The methodology has been applied on datasets of the Belgian part of the North Sea (BPNS), a shallow sandy shelf environment with a sandbank-swale topography. The BPNS was classified into 8 zones that represent well the natural variability of the seafloor. The internal cluster consistency was validated with a split-run procedure, with more than 99% correspondence between the validation and the original dataset. The ecological relevance of 6 out of the 8 zones was demonstrated, using indicator species analysis. The proposed protocol, as exemplified for the BPNS, can easily be applied to other areas and provides a strong knowledge basis for environmental protection and management of the marine environment. A SWOT-analysis, showing the strengths, weaknesses, opportunities and threats of the protocol was performed.
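The Calinski-Harabasz criterion used above to choose the number of zones can be sketched on one-dimensional toy data; the values and partitions below are invented, not the BPNS dataset:

```python
# Calinski-Harabasz (CH) index for a given partition: the ratio of
# between-cluster to within-cluster dispersion, each scaled by its degrees
# of freedom. Higher CH indicates a better-separated clustering.

def ch_index(values, labels):
    n = len(values)
    k = len(set(labels))
    grand = sum(values) / n
    groups = {}
    for v, l in zip(values, labels):
        groups.setdefault(l, []).append(v)
    B = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups.values())
    W = sum(sum((v - sum(g) / len(g)) ** 2 for v in g)
            for g in groups.values())
    return (B / (k - 1)) / (W / (n - k))

vals = [1.0, 1.2, 0.8, 5.0, 5.3, 4.7]       # two well-separated "zones"
good = ch_index(vals, [0, 0, 0, 1, 1, 1])   # matches the true structure
bad  = ch_index(vals, [0, 1, 0, 1, 0, 1])   # ignores it
```

In practice the index is computed for each candidate number of clusters after the PCA step, and the count that maximizes it is retained.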
Meng, Liang; Kramer, Mark A.; Middleton, Steven J.; Whittington, Miles A.; Eden, Uri T.
2014-01-01
A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach – linking statistical, computational, and experimental neuroscience – provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data. PMID:24465520
NASA Technical Reports Server (NTRS)
Yeh, Leehwa
1993-01-01
The phase-space-picture approach to quantum non-equilibrium statistical mechanics via the characteristic function of infinite-mode squeezed coherent states is introduced. We use quantum Brownian motion as an example to show how this approach provides an interesting geometrical interpretation of quantum non-equilibrium phenomena.
Arciuli, Joanne; Torkildsen, Janne von Koss
2012-01-01
Mastery of language can be a struggle for some children. Amongst those that succeed in achieving this feat there is variability in proficiency. Cognitive scientists remain intrigued by this variation. A now substantial body of research suggests that language acquisition is underpinned by a child’s capacity for statistical learning (SL). Moreover, a growing body of research has demonstrated that variability in SL is associated with variability in language proficiency. Yet, there is a striking lack of longitudinal data. To date, there has been no comprehensive investigation of whether a capacity for SL in young children is, in fact, associated with language proficiency in subsequent years. Here we review key studies that have led to the need for this longitudinal research. Advancing the language acquisition debate via longitudinal research has the potential to transform our understanding of typical development as well as disorders such as autism, specific language impairment, and dyslexia. PMID:22969746
Comparison of statistical approaches to evaluate factors associated with metabolic syndrome.
Fekedulegn, Desta; Andrew, Michael; Violanti, John; Hartley, Tara; Charles, Luenda; Burchfiel, Cecil
2010-05-01
In statistical analyses, metabolic syndrome as a dependent variable is often utilized in a binary form (presence/absence), where the logistic regression model is used to estimate the odds ratio as the measure of association between health-related factors and metabolic syndrome. Since metabolic syndrome is a common outcome, the interpretation of the odds ratio as an approximation to the prevalence or risk ratio is questionable, as it may overestimate its intended target. In addition, dichotomizing a variable that could potentially be treated as discrete may lead to reduced statistical power. In this paper, the authors treat metabolic syndrome as a discrete outcome by defining it as the count of syndrome components. The goal of this study is to evaluate the usefulness of alternative generalized linear models for the analysis of metabolic syndrome as a count outcome and to compare the results with models that utilize the binary form. Empirical data were used to examine the association between depression and metabolic syndrome. Measures of association were calculated using two approaches: models that treat metabolic syndrome as a binary outcome (logistic, log-binomial, Poisson, and modified Poisson regression) and models that utilize metabolic syndrome as discrete/count data (Poisson and negative binomial regression). The method that treats metabolic syndrome as a count outcome (the Poisson/negative binomial regression model) appears more sensitive in that it is better able to detect associations, and hence can serve as an alternative for analyzing metabolic syndrome as a count dependent variable while providing an interpretable measure of association. PMID:20546380
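The contrast between the binary and count treatments of the outcome can be sketched with simulated data. This is only an illustration with made-up numbers; it uses the fact that, for a Poisson regression with a single binary covariate, the maximum-likelihood rate ratio reduces to the ratio of group mean counts.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical data: count of syndrome components by depression status
depressed = rng.integers(0, 2, 500)
components = rng.poisson(lam=np.where(depressed == 1, 2.0, 1.5))

# Poisson regression with one binary covariate has a closed-form MLE:
# the rate ratio equals the ratio of mean counts in the two groups
rate_ratio = components[depressed == 1].mean() / components[depressed == 0].mean()

# dichotomized analysis (>= 3 components = syndrome present) discards
# the graded information carried by the count
syndrome = (components >= 3).astype(int)
p1, p0 = syndrome[depressed == 1].mean(), syndrome[depressed == 0].mean()
odds_ratio = (p1 / (1 - p1)) / (p0 / (1 - p0))

print(rate_ratio, odds_ratio)
```

The rate ratio is directly interpretable as "depressed participants have X times as many syndrome components on average", whereas the odds ratio exaggerates the risk ratio when the dichotomized outcome is common.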
NASA Astrophysics Data System (ADS)
Boeckli, L.; Brenning, A.; Gruber, S.; Noetzli, J.
2012-01-01
Estimates of permafrost distribution in mountain regions are important for the assessment of climate change effects on natural and human systems. In order to make permafrost analyses and the establishment of guidelines for, e.g., construction or hazard assessment comparable and compatible between regions, one consistent and traceable model for the entire Alpine domain is required. For the calibration of statistical models, the scarcity of suitable and reliable information about the presence or absence of permafrost makes the use of large areas attractive due to the larger database available. We present a strategy and method for modelling the permafrost distribution of entire mountain regions and provide the results of statistical analyses and model calibration for the European Alps. Starting from an integrated model framework, two statistical sub-models are developed, one for debris-covered areas (debris model) and one for steep bedrock (rock model). They are calibrated using rock glacier inventories and rock surface temperatures. To support the later generalization to surface characteristics other than those available for calibration, so-called offset terms have been introduced into the model, which allow this to be done in a transparent and traceable manner. For the debris model a generalized linear mixed-effect model (GLMM) is used to predict the probability of a rock glacier being intact as opposed to relict. It is based on the explanatory variables mean annual air temperature (MAAT), potential incoming solar radiation (PISR) and the mean annual sum of precipitation (PRECIP), and achieves excellent discrimination (area under the receiver-operating characteristic curve, AUROC = 0.91). Surprisingly, the probability of a rock glacier being intact is positively associated with increasing PRECIP for given MAAT and PISR conditions. The rock model is based on a linear regression and was calibrated with mean annual rock surface temperatures (MARST). The explanatory variables are MAAT
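The fixed-effects part of such a debris model can be sketched as a logistic prediction from the three explanatory variables. The coefficients below are invented for illustration only (they merely respect the signs implied by the abstract: colder MAAT and lower PISR favour intact rock glaciers, and PRECIP enters positively); the paper's calibrated GLMM values are not reproduced here.

```python
import math

def p_intact(maat, pisr, precip, coef=(-0.5, -1.2, -0.003, 0.002)):
    """Sketch of the fixed-effects skeleton of a debris-model-style GLMM:
    probability that a rock glacier is intact given MAAT (deg C),
    PISR (W m^-2) and PRECIP (mm per year), via a logistic link.
    Coefficients are hypothetical placeholders."""
    b0, b_maat, b_pisr, b_precip = coef
    eta = b0 + b_maat * maat + b_pisr * pisr + b_precip * precip
    return 1.0 / (1.0 + math.exp(-eta))
```

A full GLMM would add a random intercept per inventory region on top of this linear predictor; only the deterministic part is shown.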
Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation
Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.
2000-01-01
In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
Advanced numerical methods and software approaches for semiconductor device simulation
Carey, Graham F.; Pardhanani, A. L.; Bova, Steven W.
2000-03-23
In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. They have included numerical examples from the recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and they emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as those in the SANDIA National Laboratory framework SIERRA.
NASA Astrophysics Data System (ADS)
Sadyś, Magdalena; Skjøth, Carsten Ambelas; Kennedy, Roy
2016-04-01
High concentration levels of Ganoderma spp. spores were observed in Worcester, UK, during 2006-2010. These basidiospores are known to cause sensitization due to their allergen content and their small dimensions, which enable them to penetrate the lower part of the human respiratory tract. Establishing a link between symptoms of sensitization to Ganoderma spp. and other basidiospores is challenging due to the lack of information regarding spore concentration in the air. Hence, aerobiological monitoring should be conducted and, if possible, extended with the construction of forecast models. The daily mean concentration of allergenic Ganoderma spp. spores in the atmosphere of Worcester was measured using a 7-day volumetric spore sampler over five consecutive years. The relationships between the presence of spores in the air and weather parameters were examined. Forecast models were constructed for Ganoderma spp. spores using advanced statistical techniques, i.e. multivariate regression trees and artificial neural networks. Dew point temperature, along with maximum temperature, was the most important factor influencing the presence of spores in the air of Worcester. Based on these two major factors and several others of lesser importance, thresholds for certain levels of fungal spore concentration, i.e. low (0-49 s m⁻³), moderate (50-99 s m⁻³), high (100-149 s m⁻³) and very high (≥150 s m⁻³), could be designated. Despite some deviation in the results obtained by the artificial neural networks, the authors achieved a forecasting model that was accurate (the correlation between observed and predicted values varied from r_s = 0.57 to r_s = 0.68).
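The concentration bands reported above can be written as a small classifier. The band edges come from the abstract; the function name is ours.

```python
def spore_level(conc):
    """Classify a daily mean Ganoderma spore concentration
    (spores per cubic metre) into the study's threshold bands."""
    if conc < 0:
        raise ValueError("concentration cannot be negative")
    if conc <= 49:
        return "low"
    if conc <= 99:
        return "moderate"
    if conc <= 149:
        return "high"
    return "very high"
```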
Using a Statistical Approach to Anticipate Leaf Wetness Duration Under Climate Change in France
NASA Astrophysics Data System (ADS)
Huard, F.; Imig, A. F.; Perrin, P.
2014-12-01
Leaf wetness plays a major role in the development of fungal plant diseases. Leaf wetness duration (LWD) above a threshold value is determinant for infection and can be seen as a good indicator of the impact of climate on infection occurrence and risk. As LWD is not widely measured, several methods, based on physical and empirical approaches, have been developed to estimate it from weather data. Many statistical LWD models exist, but the lack of a measurement standard requires reassessments. A new empirical LWD model, called MEDHI (Modèle d'Estimation de la Durée d'Humectation à l'Inra), was developed for the French configuration of wetness sensors (angle: 90°, height: 50 cm). This deployment differs from what is usually recommended by manufacturers or by authors in other countries (angles from 10 to 60°, heights from 10 to 150 cm…). MEDHI is a decision support system based on hourly climatic conditions at time steps n and n-1, taking into account relative humidity, rainfall and previously simulated LWD. Air temperature, relative humidity, wind speed, rain and LWD data from several sensors in 2 configurations were measured during 6 months in Toulouse and Avignon (south-west and south-east France) to calibrate MEDHI. A comparison of the empirical models NHRH (RH threshold), DPD (dew point depression), CART (classification and regression tree analysis dependent on RH, wind speed and dew point depression) and MEDHI, using meteorological and LWD measurements obtained during 5 months in Toulouse, showed that this new model was definitely better adapted to French conditions. In the context of climate change, MEDHI was used for mapping the evolution of leaf wetness duration in France from 1950 to 2100 with the French regional climate model ALADIN under different Representative Concentration Pathways (RCPs) and using a QM (Quantile-Mapping) statistical downscaling method. Results give information on the spatial distribution of infection risks
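A decision rule of the kind MEDHI describes (hourly wetness from relative humidity, rainfall and the previous hour's simulated state) can be sketched as follows. The thresholds here are illustrative placeholders, not the model's published values.

```python
def leaf_wet(rh, rain, prev_wet, rh_on=90.0, rh_hold=85.0):
    """One hourly step of a MEDHI-style decision rule: the leaf is wet
    if it rains or RH exceeds an onset threshold; an already-wet leaf
    stays wet at slightly lower humidity (hysteresis). Thresholds are
    hypothetical."""
    if rain > 0.0 or rh >= rh_on:
        return True
    return prev_wet and rh >= rh_hold

def lwd_hours(hourly):
    """Accumulate leaf wetness duration over (rh, rain) pairs."""
    wet, total = False, 0
    for rh, rain in hourly:
        wet = leaf_wet(rh, rain, wet)
        total += wet
    return total
```

For example, the sequence 95% RH, 87% RH, 80% RH, a rainy hour, then 60% RH yields three wet hours under these placeholder thresholds.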
Recent advances in bioprinting techniques: approaches, applications and future prospects.
Li, Jipeng; Chen, Mingjiao; Fan, Xianqun; Zhou, Huifang
2016-01-01
Bioprinting technology shows potential in tissue engineering for the fabrication of scaffolds, cells, tissues and organs reproducibly and with high accuracy. Bioprinting technologies are mainly divided into three categories, inkjet-based bioprinting, pressure-assisted bioprinting and laser-assisted bioprinting, based on their underlying printing principles. These various printing technologies have their advantages and limitations. Bioprinting utilizes biomaterials, cells or cell factors as a "bioink" to fabricate prospective tissue structures. Biomaterial parameters such as biocompatibility, cell viability and the cellular microenvironment strongly influence the printed product. Various printing technologies have been investigated, and great progress has been made in printing various types of tissue, including vasculature, heart, bone, cartilage, skin and liver. This review introduces basic principles and key aspects of some frequently used printing technologies. We focus on recent advances in three-dimensional printing applications, current challenges and future directions. PMID:27645770
A Trait-Based Approach to Advance Coral Reef Science.
Madin, Joshua S; Hoogenboom, Mia O; Connolly, Sean R; Darling, Emily S; Falster, Daniel S; Huang, Danwei; Keith, Sally A; Mizerek, Toni; Pandolfi, John M; Putnam, Hollie M; Baird, Andrew H
2016-06-01
Coral reefs are biologically diverse and ecologically complex ecosystems constructed by stony corals. Despite decades of research, basic coral population biology and community ecology questions remain. Quantifying trait variation among species can help resolve these questions, but progress has been hampered by a paucity of trait data for the many, often rare, species and by a reliance on nonquantitative approaches. Therefore, we propose filling data gaps by prioritizing traits that are easy to measure, estimating key traits for species with missing data, and identifying 'supertraits' that capture a large amount of variation for a range of biological and ecological processes. Such an approach can accelerate our understanding of coral ecology and our ability to protect critically threatened global ecosystems. PMID:26969335
A Computationally Based Approach to Homogenizing Advanced Alloys
Jablonski, P D; Cowen, C J
2011-02-27
We have developed a computationally based approach to optimizing the homogenization heat treatment of complex alloys. The Scheil module within the Thermo-Calc software is used to predict the as-cast segregation present within alloys, and DICTRA (Diffusion Controlled TRAnsformations) is used to model the homogenization kinetics as a function of time, temperature and microstructural scale. We discuss this approach as it is applied both to Ni-based superalloys and to the (computationally) more complex case of alloys that solidify with more than one matrix phase as a result of segregation, as is typically observed in martensitic steels. With these alloys it is doubly important to homogenize them correctly, especially at the laboratory scale, since they are austenitic at high temperature and thus constituent elements will diffuse slowly. The computationally designed heat treatment and the subsequent verification on real castings are presented.
A Statistical Approach Reveals Designs for the Most Robust Stochastic Gene Oscillators
2016-01-01
The engineering of transcriptional networks presents many challenges due to the inherent uncertainty in the system structure, changing cellular context, and stochasticity in the governing dynamics. One approach to address these problems is to design and build systems that can function across a range of conditions; that is, they are robust to uncertainty in their constituent components. Here we examine the parametric robustness landscape of transcriptional oscillators, which underlie many important processes such as circadian rhythms and the cell cycle, and also serve as a model for the engineering of complex and emergent phenomena. The central questions that we address are: Can we build genetic oscillators that are more robust than those already constructed? Can we make genetic oscillators arbitrarily robust? These questions are technically challenging due to the large model and parameter spaces that must be efficiently explored. Here we use a measure of robustness that coincides with the Bayesian model evidence, combined with an efficient Monte Carlo method to traverse model space and concentrate on regions of high robustness, which enables the accurate evaluation of the relative robustness of gene network models governed by stochastic dynamics. We report the most robust two- and three-gene oscillator systems, and examine how the number of interactions, the presence of autoregulation, and the degradation of mRNA and protein affect the frequency, amplitude, and robustness of transcriptional oscillators. We also find that there is a limit to parametric robustness, beyond which there is nothing to be gained by adding additional feedback. Importantly, we provide predictions on new oscillator systems that can be constructed to verify the theory and advance design and modeling approaches to systems and synthetic biology. PMID:26835539
Extracting sparse signals from high-dimensional data: A statistical mechanics approach
NASA Astrophysics Data System (ADS)
Ramezanali, Mohammad
Sparse reconstruction algorithms aim to retrieve high-dimensional sparse signals from a limited number of measurements under suitable conditions. As the number of variables goes to infinity, these algorithms exhibit sharp phase transition boundaries where sparse retrieval breaks down. Several sparse reconstruction algorithms are formulated as optimization problems. A few of the most prominent among these have been analyzed in the literature by statistical mechanical methods. The function to be optimized plays the role of energy. The treatment involves finite-temperature replica mean-field theory followed by the zero-temperature limit. Although this approach has been successful in reproducing the algorithmic phase transition boundaries, the replica trick and the non-trivial zero-temperature limit obscure the underlying reasons for the failure of the algorithms. In this thesis, we employ the "cavity method" to give an alternative derivation of the phase transition boundaries, working directly in the zero-temperature limit. This approach provides insight into the origin of the different terms in the mean-field self-consistency equations. The cavity method naturally generates a local susceptibility, which leads to an identity that clearly indicates the existence of two phases. The identity also gives us a novel route to the known parametric expressions for the phase boundary of the Basis Pursuit algorithm and to new ones for the Elastic Net. These transitions being continuous (second order), we explore the scaling laws and critical exponents that are uniquely determined by the nature of the distribution of the density of the nonzero components of the sparse signal. Not only is the phase boundary of the Elastic Net different from that of the Basis Pursuit, but we also show that the critical behaviors of the two algorithms belong to different universality classes.
Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs
Wood, Andrew W; Leung, Lai R; Sridhar, V; Lettenmaier, D P
2004-01-01
Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) lead to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD-method (using PCM or RCM) was able to
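The bias-correction step at the heart of the BCSD method can be sketched as empirical quantile mapping: each model value is replaced by the observed value at the same empirical quantile of the retrospective climatology. This is a minimal sketch; the actual method operates per grid cell and calendar month on precipitation and temperature.

```python
import numpy as np

def quantile_map(model_values, model_clim, obs_clim):
    """Bias-correct model output by mapping each value through the
    model climatology's empirical CDF and the observed climatology's
    inverse CDF (quantile function)."""
    model_sorted = np.sort(np.asarray(model_clim))
    q = np.searchsorted(model_sorted, model_values, side="right") / len(model_sorted)
    return np.quantile(np.sort(obs_clim), np.clip(q, 0.0, 1.0))

# toy check: the model runs 2 degrees too cold with the same variability,
# so the correction should shift a model value of 5.0 to about 7.0
model_clim = np.linspace(0.0, 10.0, 101)
obs_clim = model_clim + 2.0
corrected = quantile_map(np.array([5.0]), model_clim, obs_clim)
```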
Tapiovaara, M J; Wagner, R F
1993-01-01
A method of measuring the image quality of medical imaging equipment is considered within the framework of statistical decision theory. In this approach, images are regarded as random vectors and image quality is defined in the context of the image information available for performing a specified detection or discrimination task. The approach provides a means of measuring image quality, as related to the detection of an image detail of interest, without reference to the actual physical mechanisms involved in image formation and without separate measurements of signal transfer characteristics or image noise. The measurement does not, however, consider deterministic errors in the image; these need a separate evaluation for imaging modalities where they are of concern. The detectability of an image detail can be expressed in terms of the ideal observer's signal-to-noise ratio (SNR) at the decision level. Often a good approximation to this SNR can be obtained by employing sub-optimal observers, whose performance also correlates well with that of human observers. In this paper the measurement of SNR is based on implementing algorithmic realizations of specified observers and analysing their responses while they actually perform a specified detection task of interest. Three observers are considered: the ideal prewhitening matched filter, the non-prewhitening matched filter, and the DC-suppressing non-prewhitening matched filter. The construction of the ideal observer requires an impractical amount of data and computing, except for the most simple imaging situations. Therefore, the utilization of sub-optimal observers is advised and their performance in detecting a specified signal is discussed. Measurement of noise and SNR has been extended to include temporally varying images and dynamic imaging systems. PMID:8426870
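The non-prewhitening matched-filter observer described above can be sketched on synthetic images. For white noise, where it coincides with the ideal observer, its decision-level SNR has the known value sqrt(sum of squared signal) divided by the noise standard deviation, which the Monte Carlo estimate below should approach.

```python
import numpy as np

rng = np.random.default_rng(1)

signal = np.zeros((8, 8))
signal[3:5, 3:5] = 1.0          # a small square detail to be detected
sigma = 0.5                     # white-noise standard deviation

def decision_variables(n, present):
    """Matched-filter responses to n noisy images with/without the signal."""
    out = np.empty(n)
    for i in range(n):
        img = rng.normal(0.0, sigma, signal.shape) + (signal if present else 0.0)
        out[i] = (img * signal).sum()   # template = expected signal
    return out

r_present = decision_variables(4000, True)
r_absent = decision_variables(4000, False)
snr = (r_present.mean() - r_absent.mean()) / np.sqrt(
    0.5 * (r_present.var() + r_absent.var()))
# theoretical value here: sqrt((signal**2).sum()) / sigma = 2 / 0.5 = 4
```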
Stellacci, A M; Castrignanò, A; Troccoli, A; Basso, B; Buttafuoco, G
2016-03-01
Hyperspectral data can provide prediction of physical and chemical vegetation properties, but data handling, analysis, and interpretation still limit their use. In this study, different methods for selecting variables were compared for the analysis of on-the-ground hyperspectral signatures of wheat grown under a wide range of nitrogen supplies. Spectral signatures were recorded at the end of stem elongation, booting, and heading stages in 100 georeferenced locations, using a 512-channel portable spectroradiometer operating in the 325-1075-nm range. The following procedures were compared: (i) a heuristic combined approach including lambda-lambda R(2) (LL R(2)) model, principal component analysis (PCA), and stepwise discriminant analysis (SDA); (ii) variable importance for projection (VIP) statistics derived from partial least square (PLS) regression (PLS-VIP); and (iii) multiple linear regression (MLR) analysis through maximum R-square improvement (MAXR) and stepwise algorithms. The discriminating capability of selected wavelengths was evaluated by canonical discriminant analysis. Leaf-nitrogen concentration was quantified on samples collected at the same locations and dates and used as response variable in regressive methods. The different methods resulted in differences in the number and position of the selected wavebands. Bands extracted through regressive methods were mostly related to response variable, as shown by the importance of the visible region for PLS and stepwise. Band selection techniques can be extremely useful not only to improve the power of predictive models but also for data interpretation or sensor design. PMID:26922749
Reinhold, Klaus; Schielzeth, Holger
2015-01-01
Animals are faced with many choices and a very important one is the choice of a mating partner. Inter-individual differences in mating preferences have been studied for some time, but most studies focus on the location of the peak preference rather than on other aspects of preference functions. In this review, we discuss the role of variation in choosiness in inter-sexual selection. We define individual-level choosiness as the change in mating propensity in response to different stimulus signals. We illustrate general issues in estimating aspects of preference functions and discuss experimental setups for quantifying variation in choosiness with a focus on choices based on acoustic signals in insects. One important consideration is whether preferences are measured sequentially one stimulus at a time or in competitive multiple-choice setups; the suitability of these alternatives depends on the ecology of the study species. Furthermore, we discuss the usefulness of behavioural proxies for determining preference functions, which can be misleading if the proxies are not linearly related to mating propensity. Finally, we address statistical approaches, including the use of function-valued trait analysis, for studying choosiness. Most of the conclusions can be generalized beyond acoustic signals in insects and to choices in non-sexual contexts.
NASA Astrophysics Data System (ADS)
Seetha, D.; Velraj, G.
2015-10-01
The characterization of ancient materials can bring back more evidence of ancient people's life styles. In this study, archaeological pottery shards recently excavated from Kodumanal, Erode District in Tamilnadu, South India, were investigated. The experimental results enlighten us as to the elemental and mineral composition of the pottery shards. The FT-IR technique indicates the mineralogy and shows that the firing temperature of the samples was less than 800 °C, in an oxidizing/reducing atmosphere; XRD was used as a complementary technique for the mineralogy. A thorough scientific study combining SEM-EDS with a statistical approach to find the provenance of the selected pot shards had not previously been performed. EDS and XRF results revealed that the investigated samples contain the elements O, Si, Al, Fe, Mn, Mg, Ca, Ti, K and Na in different compositions. For establishing the provenance (same or different origin) of the pottery samples, the Al and Si concentration ratio as well as hierarchical cluster analysis (HCA) was used, and the results were correlated.
Four-level atom dynamics and emission statistics using a quantum jump approach
NASA Astrophysics Data System (ADS)
Sandhya, S. N.
2007-01-01
Four-level atom dynamics is studied in a ladder system in the nine-parameter space consisting of driving field strengths, detunings and decay constants, {Ω1,Ω2,Ω3,Δ1,Δ2,Δ3,Γ2,Γ3,Γ4}. One can selectively excite or induce two-level behavior between particular levels of one's choice by appropriately tuning the driving field strengths at three-photon resonance. The dynamics may be classified into two main regions of interest: (i) small Ω2, coupling the ∣2⟩-∣3⟩ transition, and (ii) large Ω2. In case (i) one sees two-level behavior consisting of adjacent levels, and in a particular region of the parameter space there is an intermittent shelving of the electrons in one of the two subsystems. In case (ii) the levels consist of the ground state and the uppermost level. Emission statistics is studied using the delay-function approach in both cases. In case (i), the behavior of the second-order correlation function g2(t) is similar to that of two-level emission for low Ω1, coupling the ∣1⟩-∣2⟩ transition, and the correlation increases with Ω1 for smaller time delays. In case (ii), when in addition Ω3, coupling the ∣3⟩-∣4⟩ transition, is kept low, g2(t) shows a super-Poissonian distribution, which may be attributed to three-photon processes.
Sorzano, C O S; Vargas, J; de la Rosa-Trevín, J M; Otón, J; Álvarez-Cabrera, A L; Abrishami, V; Sesmero, E; Marabini, R; Carazo, J M
2015-03-01
Cryo-electron microscopy is a powerful structural biology technique, allowing the elucidation of the three-dimensional structure of biological macromolecules. In particular, the structural study of purified macromolecules -often referred to as Single Particle Analysis (SPA)- is normally performed through an iterative process that needs a first estimate of the three-dimensional structure, which is progressively refined using experimental data. The local-optimisation nature of this refinement is well known, so the initial choice of this first structure may substantially change the final result. Computational algorithms aiming to provide this first structure already exist. However, the question is far from settled, and more robust algorithms are still needed so that the refinement process can be performed with sufficient guarantees. In this article we present a new algorithm that addresses the initial volume problem in SPA by setting it in a Weighted Least Squares framework and calculating the weights through a statistical approach based on the cumulative density function of different image similarity measures. We show that the new algorithm is significantly more robust than other state-of-the-art algorithms currently in use in the field. The algorithm is available as part of the software suites Xmipp (http://xmipp.cnb.csic.es) and Scipion (http://scipion.cnb.csic.es) under the name "Significant".
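The weighting idea can be sketched generically: turn each image-similarity score into its empirical-CDF value and use that as a weight in a least-squares fit, so dissimilar (likely misassigned) images contribute little. This is a toy reduction of the approach, not the "Significant" algorithm itself; the data in the test are invented.

```python
import numpy as np

def empirical_cdf_weights(scores):
    """Map each similarity score to its empirical CDF value in (0, 1],
    so the most similar candidates receive weights near 1."""
    ranks = np.argsort(np.argsort(scores))      # rank of each score, 0..n-1
    return (ranks + 1) / len(scores)

def weighted_least_squares(X, y, w):
    """Solve argmin_b sum_i w_i (y_i - X_i . b)^2 via the normal equations."""
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
```

With a near-zero weight on an outlying observation, the fit is driven by the trusted data, which is the role the CDF-derived weights play in the reconstruction.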
Heggeseth, Brianna; Harley, Kim; Warner, Marcella; Jewell, Nicholas; Eskenazi, Brenda
2015-01-01
It has been hypothesized that environmental exposures at key development periods such as in utero play a role in childhood growth and obesity. To investigate whether in utero exposure to endocrine-disrupting chemicals, dichlorodiphenyltrichloroethane (DDT) and its metabolite, dichlorodiphenyldichloroethane (DDE), is associated with childhood physical growth, we took a novel statistical approach to analyze data from the CHAMACOS cohort study. To model heterogeneity in the growth patterns, we used a finite mixture model in combination with a data transformation to characterize body mass index (BMI) with four groups and estimated the association between exposure and group membership. In boys, higher maternal concentrations of DDT and DDE during pregnancy are associated with a BMI growth pattern that is stable until about age five followed by increased growth through age nine. In contrast, higher maternal DDT exposure during pregnancy is associated with a flat, relatively stable growth pattern in girls. This study suggests that in utero exposure to DDT and DDE may be associated with childhood BMI growth patterns, not just BMI level, and both the magnitude of exposure and sex may impact the relationship.
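The grouping step can be illustrated with a minimal one-dimensional Gaussian mixture fitted by EM. This is a toy stand-in for the finite mixture model described above: the real analysis models whole BMI trajectories (multivariate, after a data transformation), and the data below are synthetic.

```python
import numpy as np

def gaussian_mixture_em(x, k=2, iters=200):
    """Minimal EM for a one-dimensional Gaussian mixture.
    Returns (mixing weights, means, standard deviations)."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread-out initial means
    sigma = np.full(k, x.std())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        dens = pi / (sigma * np.sqrt(2 * np.pi)) * np.exp(
            -0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and spreads
        n_k = resp.sum(axis=0)
        pi = n_k / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / n_k
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / n_k)
    return pi, mu, sigma
```

Group membership for each subject then follows from the responsibilities, which is where covariates such as exposure can be related to the groups.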
Numerical study of chiral plasma instability within the classical statistical field theory approach
NASA Astrophysics Data System (ADS)
Buividovich, P. V.; Ulybyshev, M. V.
2016-07-01
We report on a numerical study of real-time dynamics of electromagnetically interacting chirally imbalanced lattice Dirac fermions within the classical statistical field theory approach. Namely, we perform exact simulations of the real-time quantum evolution of fermionic fields coupled to classical electromagnetic fields, which are in turn coupled to the vacuum expectation value of the fermionic electric current. We use the Wilson-Dirac Hamiltonian for fermions and a noncompact action for the gauge field. In general, we observe that the backreaction of fermions on the electromagnetic field prevents the system from acquiring chirality imbalance. In the case of chirality pumping in parallel electric and magnetic fields, the electric field is screened by the produced on-shell fermions and the accumulation of chirality is hence stopped. In the case of evolution with initially present chirality imbalance, axial charge tends to transform to helicity of the electromagnetic field. By performing simulations on large lattices we show that in most cases this decay process is accompanied by the inverse cascade phenomenon, which transfers energy from short-wavelength to long-wavelength electromagnetic fields. In some simulations, however, we observe a very clear signature of inverse cascade for the helical magnetic fields that is not accompanied by the axial charge decay. This suggests that the relation between the inverse cascade and axial charge decay is not as straightforward as predicted by the simplest form of anomalous Maxwell equations.
Statistical Approaches to Detecting and Analyzing Tandem Repeats in Genomic Sequences
Anisimova, Maria; Pečerska, Julija; Schaper, Elke
2015-01-01
Tandem repeats (TRs) are frequently observed in genomes across all domains of life. Evidence suggests that some TRs are crucial for proteins with fundamental biological functions and can be associated with virulence, resistance, and infectious/neurodegenerative diseases. Genome-scale systematic studies of TRs have the potential to unveil core mechanisms governing TR evolution and TR roles in shaping genomes. However, TR-related studies are often non-trivial due to heterogeneous and sometimes fast evolving TR regions. In this review, we discuss these intricacies and their consequences. We present our recent contributions to computational and statistical approaches for TR significance testing, sequence profile-based TR annotation, TR-aware sequence alignment, phylogenetic analyses of TR unit number and order, and TR benchmarks. Importantly, all these methods explicitly rely on the evolutionary definition of a tandem repeat as a sequence of adjacent repeat units stemming from a common ancestor. The discussed work has a focus on protein TRs, yet is generally applicable to nucleic acid TRs, sharing similar features. PMID:25853125
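As a minimal illustration of the definitional point (adjacent repeat units stemming from a common ancestor), perfect tandem repeats can be found by a brute-force scan. Real TR annotation tools use sequence profiles and significance tests to handle the degenerate, imperfect repeats discussed above; this sketch reports exact copies only, and its parameters are arbitrary.

```python
def find_tandem_repeats(seq, unit_min=2, unit_max=6, min_copies=3):
    """Naive scan for perfect tandem repeats: at each start position,
    try unit lengths and count how many adjacent exact copies follow.
    Returns (start, unit, copy_count) tuples, including overlapping hits."""
    hits = []
    n = len(seq)
    for start in range(n):
        for u in range(unit_min, unit_max + 1):
            unit = seq[start:start + u]
            if len(unit) < u:                   # ran off the end of the sequence
                break
            copies = 1
            while seq[start + copies * u:start + (copies + 1) * u] == unit:
                copies += 1
            if copies >= min_copies:
                hits.append((start, unit, copies))
    return hits
```

For example, a CAG-expansion-like stretch is reported as a unit of length 3 with its copy number, which is the quantity phylogenetic analyses of TR unit number operate on.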
Seetha, D; Velraj, G
2015-01-01
Characterization of ancient materials can recover evidence of the lifestyles of ancient peoples. In this study, archaeological pottery shards recently excavated from Kodumanal, Erode District, Tamilnadu, South India, were investigated. The experimental results reveal the elemental and mineral composition of the pottery shards. FT-IR analysis indicates the mineralogy and shows that the firing temperature of the samples was below 800 °C in an oxidizing/reducing atmosphere; XRD was used as a complementary technique for the mineralogy. A thorough SEM-EDS study supported by a statistical approach to establish the provenance of the selected pot shards had not previously been performed. EDS and XRF results revealed that the investigated samples contain the elements O, Si, Al, Fe, Mn, Mg, Ca, Ti, K and Na in varying proportions. To establish the provenance (same or different origin) of the pottery samples, the Al/Si concentration ratio and hierarchical cluster analysis (HCA) were used, and the results were correlated.
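The provenance grouping step can be sketched with SciPy's hierarchical clustering on elemental compositions. The oxide percentages, the number of shards, and the two-cluster cut below are invented for illustration; they are not the study's measurements.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = shards, columns = wt% of (SiO2, Al2O3, Fe2O3, CaO, K2O) -- invented
compositions = np.array([
    [58.1, 17.2, 7.9, 4.1, 2.3],   # shard A
    [57.6, 17.8, 8.2, 3.9, 2.1],   # shard B, similar to A
    [49.0, 22.5, 5.1, 8.7, 1.0],   # shard C, a different source?
    [48.3, 23.1, 4.8, 9.2, 1.2],   # shard D, similar to C
])

# Ward linkage on Euclidean distances between composition vectors
Z = linkage(compositions, method="ward")
groups = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into 2 clusters
al_si_ratio = compositions[:, 1] / compositions[:, 0]   # Al/Si proxy per shard
```

Shards falling in the same cluster, with consistent Al/Si ratios, would be interpreted as sharing a provenance.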
Lipid binding protein response to a bile acid library: a combined NMR and statistical approach.
Tomaselli, Simona; Pagano, Katiuscia; Boulton, Stephen; Zanzoni, Serena; Melacini, Giuseppe; Molinari, Henriette; Ragona, Laura
2015-11-01
Primary bile acids, differing in hydroxylation pattern, are synthesized from cholesterol in the liver and, once formed, can undergo extensive enzyme-catalysed glycine/taurine conjugation, giving rise to a complex mixture, the bile acid pool. Composition and concentration of the bile acid pool may be altered in diseases, posing a general question on the response of the carrier (bile acid binding protein) to the binding of ligands with different hydrophobic and steric profiles. A collection of NMR experiments (H/D exchange, HET-SOFAST, ePHOGSY NOESY/ROESY and 15N relaxation measurements) was thus performed on apo and five different holo proteins, to monitor the binding pocket accessibility and dynamics. The ensemble of obtained data could be rationalized by a statistical approach, based on chemical shift covariance analysis, in terms of residue-specific correlations and collective protein response to ligand binding. The results indicate that the same residues are influenced by diverse chemical stresses: ligand binding always induces silencing of motions at the protein portal with a concomitant conformational rearrangement of a network of residues, located at the protein anti-portal region. This network of amino acids, which do not belong to the binding site, forms a contiguous surface, sensing the presence of the bound lipids, with a signalling role in switching protein-membrane interactions on and off.
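The covariance idea can be sketched numerically: residues whose chemical-shift perturbations co-vary across the ligand series are grouped into a coupled network via their pairwise correlations. The shift matrix below is invented for illustration (rows = ligands, columns = residues) and the 0.95 cutoff is an arbitrary choice, not the paper's analysis.

```python
import numpy as np

# Invented chemical-shift perturbations (ppm): rows = 5 bile acid ligands,
# columns = 4 residues.  Residues 1-2 and residues 3-4 co-vary pairwise.
shifts = np.array([
    [0.02, 0.03, 0.30, 0.29],
    [0.05, 0.06, 0.10, 0.11],
    [0.01, 0.02, 0.22, 0.21],
    [0.04, 0.05, 0.05, 0.06],
    [0.03, 0.04, 0.18, 0.17],
])

corr = np.corrcoef(shifts, rowvar=False)   # residue-residue correlation matrix
network = np.abs(corr) > 0.95              # strongly coupled residue pairs
```

Contiguous groups of residues flagged in `network` would correspond to the collectively responding surfaces described above.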
Advanced Modular Power Approach to Affordable, Supportable Space Systems
NASA Technical Reports Server (NTRS)
Oeftering, Richard C.; Kimnach, Greg L.; Fincannon, James; Mckissock,, Barbara I.; Loyselle, Patricia L.; Wong, Edmond
2013-01-01
Recent studies of missions to the Moon, Mars and Near Earth Asteroids (NEA) indicate that these missions often involve several distinct, separately launched vehicles that must ultimately be integrated together in flight and operate as one unit. Therefore, it is important to see these vehicles as elements of a larger segmented spacecraft rather than separate spacecraft flying in formation. The evolution of large multi-vehicle exploration architectures creates the need (and opportunity) to establish a global power architecture that is common across all vehicles. The Advanced Exploration Systems (AES) Modular Power System (AMPS) project managed by NASA Glenn Research Center (GRC) is aimed at establishing a modular power system architecture that will enable power systems to be built from a common set of modular building blocks. The project is developing, demonstrating and evaluating key modular power technologies that are expected to minimize non-recurring development costs and reduce recurring integration costs, as well as mission operational and support costs. Further, modular power is expected to enhance mission flexibility, vehicle reliability, scalability and overall mission supportability. The AMPS project not only supports multi-vehicle architectures but should enable multi-mission capability as well. The AMPS technology development includes near-term demonstrations with developmental prototype vehicles and field demonstrations. These operational demonstrations not only serve as a means of evaluating modular technology but also provide feedback to developers to assure progress toward a truly flexible and operationally supportable modular power architecture.
Advances in a distributed approach for ocean model data interoperability
Signell, Richard P.; Snowden, Derrick P.
2014-01-01
An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
Advancing Partnerships Towards an Integrated Approach to Oil Spill Response
NASA Astrophysics Data System (ADS)
Green, D. S.; Stough, T.; Gallegos, S. C.; Leifer, I.; Murray, J. J.; Streett, D.
2015-12-01
Oil spills can cause enormous ecological and economic devastation, necessitating application of the best science and technology available. Remote sensing is playing a growing, critical role in the detection and monitoring of oil spills, as well as facilitating validation of remote sensing oil spill products. The FOSTERRS (Federal Oil Science Team for Emergency Response Remote Sensing) interagency working group seeks to ensure that during an oil spill, remote sensing assets (satellite/aircraft/instruments) and analysis techniques are quickly, effectively, appropriately, and seamlessly available to oil spill responders. Yet significant challenges remain for addressing oils spanning a vast range of chemical properties that may be spilled anywhere from the Tropics to the Arctic, with algorithms and scientific understanding needing advances to keep up with technology. Thus, FOSTERRS promotes enabling scientific discovery to ensure robust utilization of available technology, as well as identifying technologies moving up the TRL (Technology Readiness Level) scale. A recent FOSTERRS-facilitated support activity involved deployment of AVIRIS-NG (Airborne Visual Infrared Imaging Spectrometer - Next Generation) during the Santa Barbara oil spill to validate the potential of airborne hyperspectral imaging to map beach tar coverage in real time, supported by surface validation data. Many developing airborne technologies have the potential to transition to space-based platforms, providing global readiness.
Advances in Assays and Analytical Approaches for Botulinum Toxin Detection
Grate, Jay W.; Ozanich, Richard M.; Warner, Marvin G.; Bruckner-Lea, Cindy J.; Marks, James D.
2010-08-04
Methods to detect botulinum toxin, the most poisonous substance known, are reviewed. Current assays are being developed with two main objectives in mind: 1) to obtain sufficiently low detection limits to replace the mouse bioassay with an in vitro assay, and 2) to develop rapid assays for screening purposes that are as sensitive as possible while requiring an hour or less to process the sample and obtain the result. This review emphasizes the diverse analytical approaches and devices that have been developed over the last decade, while also briefly reviewing representative older immunoassays to provide background and context.
NASA Technical Reports Server (NTRS)
Burns, R. G.
1972-01-01
Criticism of a statistical approach used by Dasgupta (1972) in analyzing Snyder's (1959) chemical data for minerals from the Duluth Complex in Minnesota. Apart from obvious mathematical objections to citing correlation coefficients to four significant figures from Snyder's relatively inaccurate analytical data, several more fundamental criticisms are leveled at the statistical approach of Dasgupta. These relate to compositional zoning and disequilibrium in the minerals, inhomogeneities of the samples caused by inclusions and exsolved phases, measured site population data for the major cations in olivines and pyroxenes, and the importance of coupled substitutions in the crystal structures. It is concluded that the crystal field predictions of relative enrichments of Ni(2+) and Co(2+) ions in olivine and pyroxene structures have not been disproved by Dasgupta's statistical approach.
NASA Astrophysics Data System (ADS)
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid-scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis of the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
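The core of the Tensor-Train representation can be sketched with the standard TT-SVD construction: sweep over the modes, reshaping and applying truncated SVDs, so a d-dimensional array is stored as a chain of small 3-D cores. This is a generic textbook sketch, not the project's code, and the tolerance is arbitrary.

```python
import numpy as np

def tt_decompose(tensor, tol=1e-10):
    """Tensor-Train (TT) decomposition via sequential truncated SVDs.
    Returns cores G_k of shape (r_{k-1}, n_k, r_k) with r_0 = r_d = 1."""
    shape = tensor.shape
    cores, r_prev = [], 1
    C = tensor.reshape(1, -1)
    for n_k in shape[:-1]:
        C = C.reshape(r_prev * n_k, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, int((s > tol * s[0]).sum()))   # drop tiny singular values
        cores.append(U[:, :r].reshape(r_prev, n_k, r))
        C = s[:r, None] * Vt[:r]                  # carry the rest to the right
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])
```

Self-similar or low-rank structure shows up directly as small TT ranks, which is what makes the format attractive as a compact storage scheme.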
Waves and Wine: Advanced approaches for characterizing and exploiting micro-terroir
NASA Astrophysics Data System (ADS)
Hubbard, S. S.; Grote, K. R.; Freese, P.; Peterson, J. E.; Rubin, Y.
2012-12-01
This effort uses a combination of advanced characterization techniques (including airborne imagery, microclimate, and surface geophysical data) with statistical approaches to identify vineyard zones that have fairly uniform soil, vegetation, and micrometeorological parameters. The obtained information is used in simple water balance models that can be used to design block-specific irrigation parameters. This effort has illustrated how straightforward numerical techniques and commercially available characterization approaches can be used to optimize block layout and guide precision irrigation strategies, leading to optimized and uniform vegetation and winegrape characteristics within vineyard blocks. Recognition and incorporation of information on small-scale variabilities into vineyard development and management practices could lead to winegrapes that better reflect the micro-terroir of the area. Advanced approaches, such as those described here, are expected to become increasingly important as available land and water resources continue to decrease, as spatially extensive datasets become less costly to collect and interpret, and as public demand for high-quality wine produced in an environmentally friendly manner continues to increase.
Advances in Landslide Hazard Forecasting: Evaluation of Global and Regional Modeling Approach
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia B.; Adler, Robert; Hone, Yang; Kumar, Sujay; Peters-Lidard, Christa; Lerner-Lam, Arthur
2010-01-01
A prototype global satellite-based landslide hazard algorithm has been developed to identify areas that exhibit a high potential for landslide activity by combining a calculation of landslide susceptibility with satellite-derived rainfall estimates. A recent evaluation of this algorithm framework found that while this tool represents an important first step in larger-scale landslide forecasting efforts, it requires several modifications before it can be fully realized as an operational tool. The evaluation finds that the landslide forecasting may be more feasible at a regional scale. This study draws upon a prior work's recommendations to develop a new approach for considering landslide susceptibility and forecasting at the regional scale. This case study uses a database of landslides triggered by Hurricane Mitch in 1998 over four countries in Central America: Guatemala, Honduras, El Salvador and Nicaragua. A regional susceptibility map is calculated from satellite and surface datasets using a statistical methodology. The susceptibility map is tested with a regional rainfall intensity-duration triggering relationship and results are compared to global algorithm framework for the Hurricane Mitch event. The statistical results suggest that this regional investigation provides one plausible way to approach some of the data and resolution issues identified in the global assessment, providing more realistic landslide forecasts for this case study. Evaluation of landslide hazards for this extreme event helps to identify several potential improvements of the algorithm framework, but also highlights several remaining challenges for the algorithm assessment, transferability and performance accuracy. Evaluation challenges include representation errors from comparing susceptibility maps of different spatial resolutions, biases in event-based landslide inventory data, and limited nonlandslide event data for more comprehensive evaluation. Additional factors that may improve
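Rainfall intensity-duration triggering relationships of the kind tested above are commonly expressed as a power law, I = α·D^(−β). The sketch below combines such a threshold with a static susceptibility class to issue a simple nowcast category; the coefficients and category scheme are placeholders, not the study's calibrated values.

```python
def exceeds_threshold(intensity_mm_hr, duration_hr, alpha=15.0, beta=0.6):
    """True if a rainfall event lies above the power-law triggering line
    I = alpha * D**(-beta).  alpha/beta here are illustrative only."""
    return intensity_mm_hr > alpha * duration_hr ** (-beta)

def hazard_level(susceptibility, intensity_mm_hr, duration_hr):
    """Combine a static susceptibility class (0 = low, 1 = moderate,
    2 = high) with the rainfall trigger to issue a nowcast category."""
    if not exceeds_threshold(intensity_mm_hr, duration_hr):
        return "none"
    return ["none", "watch", "warning"][susceptibility]
```

Long-duration, low-intensity storms can still trigger because the threshold intensity decays with duration, which is the behavior the intensity-duration formulation is meant to capture.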
Fracture and electric current in the crust: a q-statistical approach
NASA Astrophysics Data System (ADS)
Cartwright-Taylor, A. L.; Vallianatos, F.; Sammonds, P. R.
2013-12-01
We have conducted room-temperature, triaxial compression experiments on samples of Carrara marble, recording concurrently acoustic and electric current signals emitted during deformation as well as mechanical loading information and ultrasonic wave velocities. Our results reveal that, in a non-piezoelectric rock under simulated crustal conditions, a measurable and increasing electric current (nA) is generated within the stressed sample in the region beyond (quasi-)linear elastic deformation; i.e. in the region of permanent deformation beyond the yield point of the material and in the presence of microcracking. This has implications for the earthquake preparation process. Our results extend to shallow crustal conditions previous observations of electric current signals in quartz-free rocks undergoing uniaxial deformation, supporting the idea of a universal electrification mechanism related to deformation; a number of which have been proposed. Confining pressure conditions of our slow strain rate experiments range from the purely brittle regime to the semi-brittle transition where cataclastic flow is the dominant deformation mechanism. Electric current evolution under these two confining pressures shows some markedly different features, implying the existence of a current-producing mechanism during both microfracture and frictional sliding, possibly related to crack localisation. In order to analyse these 'pressure-stimulated' electric currents, we adopt an entropy-based non-extensive statistical physics approach that is particularly suited to the study of fracture-related phenomena. In the presence of a long timescale (hours) external driving force (i.e. loading), the measured electric current exhibits transient, nonstationary behaviour with strong fluctuations over short timescales (seconds); calmer periods punctuated by bursts of strong activity. We find that the probability distribution of normalised electric current fluctuations over short time intervals (0.5s
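In the non-extensive (Tsallis) framework used for such analyses, the exponential is replaced by the q-exponential, and fluctuation distributions are typically fitted with q-Gaussians. A minimal sketch of these functions follows; the parameter values in the test are arbitrary, not fits to the experiments described above.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q) x]_+^{1/(1-q)},
    recovering exp(x) in the limit q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * np.asarray(x, dtype=float)
    return np.where(base > 0, base, 0.0) ** (1.0 / (1.0 - q))

def q_gaussian_unnorm(x, q, beta=1.0):
    """Unnormalized q-Gaussian e_q(-beta x^2): for q > 1 it has the
    heavy power-law tails used to describe fluctuation statistics."""
    return q_exponential(-beta * np.asarray(x, dtype=float) ** 2, q)
```

For q > 1 the tails decay polynomially rather than exponentially, which is why a q-Gaussian can describe the bursty, strongly fluctuating electric current records far better than an ordinary Gaussian.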
A statistical modeling approach to computer-aided quantification of dental biofilm.
Mansoor, Awais; Patsekin, Valery; Scherl, Dale; Robinson, J Paul; Rajwa, Bartlomiej
2015-01-01
Biofilm is a formation of microbial material on tooth substrata. Several methods to quantify dental biofilm coverage have recently been reported in the literature, but at best they provide a semiautomated approach to quantification with significant input from a human grader that comes with the grader's bias of what is foreground, background, biofilm, and tooth. Additionally, human assessment indices limit the resolution of the quantification scale; most commercial scales use five levels of quantification for biofilm coverage (0%, 25%, 50%, 75%, and 100%). On the other hand, current state-of-the-art techniques in automatic plaque quantification fail to make their way into practical applications owing to their inability to incorporate human input to handle misclassifications. This paper proposes a new interactive method for biofilm quantification in quantitative light-induced fluorescence (QLF) images of canine teeth that is independent of the perceptual bias of the grader. The method partitions a QLF image into segments of uniform texture and intensity called superpixels; every superpixel is statistically modeled as a realization of a single 2-D Gaussian Markov random field (GMRF) whose parameters are estimated; the superpixel is then assigned to one of three classes (background, biofilm, tooth substratum) based on the training set of data. The quantification results show a high degree of consistency and precision. At the same time, the proposed method gives pathologists full control to postprocess the automatic quantification by flipping misclassified superpixels to a different state (background, tooth, biofilm) with a single click, providing greater usability than simply marking the boundaries of biofilm and tooth as done by current state-of-the-art methods.
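A stripped-down version of the classification step can be sketched as follows: summarize each superpixel by a per-class intensity Gaussian learned from labelled training segments, then assign new segments to the class with the highest log-likelihood. The full method models 2-D spatial texture with a GMRF, which this sketch omits, and the intensity values are invented.

```python
import numpy as np

def fit_classes(train_values, train_labels):
    """Estimate a (mean, variance) Gaussian per class from labelled
    training intensities; variance is floored to stay positive."""
    params = {}
    for lbl in set(train_labels):
        v = np.array([x for x, l in zip(train_values, train_labels) if l == lbl],
                     dtype=float)
        params[lbl] = (v.mean(), max(v.var(), 1e-6))
    return params

def classify_segment(pixels, params):
    """Assign a superpixel (array of pixel intensities) to the class
    with the highest Gaussian log-likelihood."""
    pixels = np.asarray(pixels, dtype=float)
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (pixels - mu) ** 2 / var)
    return max(params, key=lambda lbl: loglik(*params[lbl]))
```

The interactive correction described above amounts to overriding such an assignment for a single superpixel, which is cheap because the model is per-segment.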
Ly, Cheng; Tranchina, Daniel
2009-02-01
In the probability density function (PDF) approach to neural network modeling, a common simplifying assumption is that the arrival times of elementary postsynaptic events are governed by a Poisson process. This assumption ignores temporal correlations in the input that sometimes have important physiological consequences. We extend PDF methods to models with synaptic event times governed by any modulated renewal process. We focus on the integrate-and-fire neuron with instantaneous synaptic kinetics and a random elementary excitatory postsynaptic potential (EPSP), A. Between presynaptic events, the membrane voltage, v, decays exponentially toward rest, while s, the time since the last synaptic input event, evolves with unit velocity. When a synaptic event arrives, v jumps by A, and s is reset to zero. If v crosses the threshold voltage, an action potential occurs, and v is reset to v_reset. The probability per unit time of a synaptic event at time t, given the elapsed time s since the last event, h(s, t), depends on specifics of the renewal process. We study how regularity of the train of synaptic input events affects output spike rate, PDF and coefficient of variation (CV) of the interspike interval, and the autocorrelation function of the output spike train. In the limit of a deterministic, clocklike train of input events, the PDF of the interspike interval converges to a sum of delta functions, with coefficients determined by the PDF for A. The limiting autocorrelation function of the output spike train is a sum of delta functions whose coefficients fall under a damped oscillatory envelope. When the EPSP CV, σ_A/μ_A, is equal to 0.45, a CV for the intersynaptic event interval, σ_T/μ_T = 0.35, is functionally equivalent to a deterministic periodic train of synaptic input events (CV = 0) with respect to spike statistics. We discuss the relevance to neural network simulations. PMID:19431264
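The model in this paragraph can also be simulated directly. The sketch below drives a leaky integrate-and-fire neuron with a gamma renewal input train, whose shape parameter sets the input-interval CV, and collects output interspike intervals; for simplicity the EPSP size is fixed rather than random, and all parameter values are illustrative, not the paper's.

```python
import numpy as np

def simulate_if_neuron(n_events=50000, rate=1.0, cv_in=0.35,
                       epsp=0.2, tau=20.0, v_th=1.0, v_reset=0.0, seed=1):
    """Leaky integrate-and-fire neuron driven by a gamma renewal train of
    excitatory events (interval CV = cv_in, mean interval = 1/rate).
    Returns the array of output interspike intervals."""
    rng = np.random.default_rng(seed)
    shape = 1.0 / cv_in ** 2                  # gamma shape fixes the CV
    scale = 1.0 / (rate * shape)              # keeps the mean interval 1/rate
    intervals = rng.gamma(shape, scale, n_events)
    t, v, last_spike, isis = 0.0, 0.0, None, []
    for dt in intervals:
        v *= np.exp(-dt / tau)                # exponential decay toward rest
        t += dt
        v += epsp                             # instantaneous EPSP jump
        if v >= v_th:                         # threshold crossing -> spike
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike = t
            v = v_reset
    return np.array(isis)
```

The output-interval CV from such runs is what the PDF method predicts analytically, so a simulation of this kind serves as a check on the theory.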
Ström, Peter; Støer, Nathalie; Borthwick, Nicola; Dong, Tao; Hanke, Tomáš; Reilly, Marie
2016-08-01
To investigate in detail the effect of infection or vaccination on the human immune system, ELISpot assays are used to simultaneously test the immune response to a large number of peptides of interest. Scientists commonly use "peptide pools", where, instead of an individual peptide, a test well contains a group of peptides. Since the response from a well may be due to any or many of the peptides in the pool, pooled assays usually need to be followed by confirmatory assays of a number of individual peptides. We present a statistical method that enables estimation of individual peptide responses from pool responses using the Expectation Maximization (EM) algorithm for "incomplete data". We demonstrate the accuracy and precision of these estimates in simulation studies of ELISpot plates with 90 pools of 6 or 7 peptides arranged in three dimensions and three Mock wells for the estimation of background. In analysis of real pooled data from 6 subjects in an HIV-1 vaccine trial, where 199 peptides were arranged in 80 pools of size 9 or 10, our estimates were in very good agreement with the results from individual-peptide confirmatory assays. Compared to the classical approach, we could identify almost all the same peptides with high or moderate response, with less than half the number of confirmatory tests. Our method facilitates efficient use of the information available in pooled ELISpot data to avoid or reduce the need for confirmatory testing. We provide an easy-to-use free online application for implementing the method, where on uploading two spreadsheets with the pool design and pool responses, the user obtains the estimates of the individual peptide responses. PMID:27196788
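The pooled design can be illustrated with a linear unmixing sketch: each peptide is assigned to one pool per dimension, and nonnegative least squares recovers a sparse response vector from noiseless pool totals. The paper's method is an EM algorithm that additionally handles count noise and background; this toy layout (12 peptides, 9 pools, two responders) and the response values are invented.

```python
import numpy as np
from scipy.optimize import nnls

n_peptides, n_pools = 12, 9
true_resp = np.zeros(n_peptides)
true_resp[[2, 5]] = 50.0                      # two responding peptides (invented)

# 3-D design: peptide j joins one pool in each of three dimensions
design = np.zeros((n_pools, n_peptides))
for j in range(n_peptides):
    design[j % 3, j] = 1.0                    # dimension 1: pools 0-2
    design[3 + (j // 3) % 3, j] = 1.0         # dimension 2: pools 3-5
    design[6 + j // 9, j] = 1.0               # dimension 3: pools 6-8

pool_counts = design @ true_resp              # idealized noiseless well totals
estimate, residual = nnls(design, pool_counts)
```

Because pools with zero counts force all their member peptides to zero, the two responders are pinned down exactly here; real data with noise and ambiguous responder configurations is what motivates the EM treatment and residual confirmatory testing.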
NASA Astrophysics Data System (ADS)
Vicsek, Tamas
1997-03-01
It is demonstrated that a wide range of experimental results on biological motion can be successfully interpreted in terms of statistical physics motivated models taking into account the relevant microscopic details of motor proteins and allowing analytic solutions. Two important examples are considered, i) the motion of a single kinesin molecule along microtubules inside individual cells and ii) muscle contraction which is a macroscopic phenomenon due to the collective action of a large number of myosin heads along actin filaments. i) Recently individual two-headed kinesin molecules have been studied in in vitro motility assays revealing a number of their peculiar transport properties. Here we propose a simple and robust model for the kinesin stepping process with elastically coupled Brownian heads showing all of these properties. The analytic treatment of our model results in a very good fit to the experimental data and practically has no free parameters. ii) Myosin is an ATPase enzyme that converts the chemical energy stored in ATP molecules into mechanical work. During muscle contraction, the myosin cross-bridges attach to the actin filaments and exert force on them yielding a relative sliding of the actin and myosin filaments. In this paper we present a simple mechanochemical model for the cross-bridge interaction involving the relevant kinetic data and providing simple analytic solutions for the mechanical properties of muscle contraction, such as the force-velocity relationship or the relative number of the attached cross-bridges. So far the only analytic formula which could be fitted to the measured force-velocity curves has been the well known Hill equation containing parameters lacking clear microscopic origin. The main advantages of our new approach are that it explicitly connects the mechanical data with the kinetic data and the concentration of the ATP and ATPase products and as such it leads to new analytic solutions which agree extremely well with a
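For reference, the Hill force-velocity relation mentioned above has the hyperbolic form (F + a)(v + b) = (F0 + a)b, so v = b(F0 − F)/(F + a), where a and b are the phenomenological fit constants whose lack of microscopic meaning the mechanochemical model addresses. The parameter values below are arbitrary, in units of the isometric force F0.

```python
def hill_velocity(force, f0=1.0, a=0.25, b=0.3):
    """Shortening velocity at a given load from Hill's relation
    v = b * (F0 - F) / (F + a), in normalized units (F0 = 1)."""
    return b * (f0 - force) / (force + a)
```

The curve interpolates between the maximum unloaded velocity b·F0/a at F = 0 and zero velocity at the isometric force F = F0, decreasing monotonically in between.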
Recent Advances in Treatment Approaches of Mucopolysaccharidosis VI.
Giugliani, Roberto; Carvalho, Clarissa Gutiérrez; Herber, Silvani; de Camargo Pinto, Louise Lapagesse
2011-06-01
Mucopolysaccharidosis VI is caused by accumulation of the glycosaminoglycan dermatan sulfate in all tissues due to decreased activity of the enzyme arylsulfatase B. Patients exhibit multisystemic signs and symptoms in a chronic and progressive manner, especially with changes in the skeleton, cardiopulmonary system, cornea, skin, liver, spleen and meninges. Patients usually have normal intelligence. In the past, treatment of mucopolysaccharidoses was limited to palliative medical care. The outcome for affected patients improved with the introduction of new technologies such as hematopoietic stem cell transplantation, relegated to specific situations after enzyme replacement therapy (ERT) became available. The specific ERT for MPS VI, galsulfase (Naglazyme®, Biomarin Pharmaceutical), was approved in 2005 by the FDA and in 2006 by the EMEA, and three clinical studies including 56 patients have evaluated its efficacy and safety. Long-term follow-up data on patients treated for up to 5 years showed that ERT is well tolerated and associated with sustained improvements in the patients' clinical condition. Intrathecal ERT may be considered in situations of high neurosurgical risk but is still experimental in humans, as is intra-articular ERT. It is possible that the full impact of this therapy will only be demonstrated when patients are identified and treated soon after birth, as it was shown that early introduction of ERT produced immune tolerance and improved enzyme effectiveness in the cat model. New insights into the pathophysiology of MPS disorders are leading to alternative therapeutic approaches, such as gene therapy, inflammatory response modulators and substrate reduction therapy. PMID:21506914
NASA Astrophysics Data System (ADS)
Crucifix, Michel; Wilkinson, Richard; Carson, Jake; Preston, Simon; Alemeida, Carlos; Rougier, Jonathan
2013-04-01
The existence of an action of astronomical forcing on the Pleistocene climate is almost undisputed. However, quantifying this action is not straightforward. In particular, the phenomenon of deglaciation is generally interpreted as a manifestation of instability, which is typical of non-linear systems. As a consequence, explaining the Pleistocene climate record as the addition of an astronomical contribution and noise, as is often done using harmonic-analysis tools, is potentially deceptive. Rather, we advocate a methodology in which non-linear stochastic dynamical systems are calibrated on the Pleistocene climate record. The exercise, though, requires careful statistical reasoning and state-of-the-art techniques. In fact, the problem has been judged to be mathematically 'intractable and unsolved' and some pragmatism is justified. In order to illustrate the methodology we consider one dynamical system that potentially captures four dynamical features of the Pleistocene climate: the existence of a saddle-node bifurcation in at least one of its slow components, a time-scale separation between a slow and a fast component, the action of astronomical forcing, and the existence of a stochastic contribution to the system dynamics. This model is obviously not the only possible representation of Pleistocene dynamics, but it encapsulates our theoretical and empirical knowledge well enough, in a very simple form, to constitute a valid starting point. The purpose of this poster is to outline the practical challenges in calibrating such a model on paleoclimate observations. Just as in time series analysis, there is no single, universal test or criterion that would demonstrate the validity of an approach. Several methods exist to calibrate the model, and judgement develops through confrontation of the results of the different methods. In particular, we consider here Kalman filter variants, particle Markov chain Monte Carlo, and two other variants of Sequential Monte
NASA Astrophysics Data System (ADS)
Otero, Noelia; Butler, Tim; Sillmann, Jana
2015-04-01
Air pollution has become a serious problem in many industrialized and densely populated urban areas due to its negative effects on human health and its damage to agricultural crops and ecosystems. The concentration of air pollutants is the result of several factors, including emission sources, the lifetime and spatial distribution of the pollutants, atmospheric properties and interactions, wind speed and direction, and topographic features. Episodes of air pollution are often associated with stationary or slowly migrating anticyclonic (high-pressure) systems that reduce advection, diffusion, and deposition of atmospheric pollutants. Certain weather conditions facilitate the accumulation of pollutants; for example, light winds contribute to an increase in stagnation episodes that degrade air quality. The atmospheric circulation therefore plays an important role in air quality, which is affected by both synoptic- and local-scale processes. This study assesses the influence of the large-scale circulation, along with meteorological conditions, on tropospheric ozone in Europe. The frequency of weather types (WTs) is examined under a novel approach based on an automated version of the Lamb Weather Types catalog (Jenkinson and Collison, 1977). Here, we present an implementation of this classification point-by-point over the European domain. Moreover, the analysis uses a new grid-averaged climatology (1°x1°) of daily surface ozone concentrations, built from observations at individual sites, that matches the resolution of global models (Schnell et al., 2014). Daily WT frequencies and meteorological conditions are combined in a multiple-regression approach to investigate their influence on ozone concentrations. Different subsets of predictors are examined within multiple linear regression models (MLRs) for each grid cell in order to identify the best regression model. Several statistical metrics are applied for estimating the robustness of the
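The per-grid-cell multiple linear regression described above can be sketched with an ordinary least-squares fit; the predictor names and data below are synthetic stand-ins, not the study's actual inputs.

```python
import numpy as np

# Generic multiple linear regression of daily ozone on a set of predictors,
# as would be fitted per grid cell. "temperature" and "wt_frequency" are
# hypothetical predictors; the data are synthetic.
rng = np.random.default_rng(0)
n_days = 365
temperature = rng.normal(20, 5, n_days)   # hypothetical meteorological predictor
wt_frequency = rng.random(n_days)         # hypothetical weather-type indicator
ozone = 30 + 1.5 * temperature + 10 * wt_frequency + rng.normal(0, 2, n_days)

# Design matrix with an intercept column; solve by least squares
X = np.column_stack([np.ones(n_days), temperature, wt_frequency])
coef, _, _, _ = np.linalg.lstsq(X, ozone, rcond=None)

# R^2 as a simple goodness-of-fit metric for comparing predictor subsets
ss_res = np.sum((ozone - X @ coef) ** 2)
ss_tot = np.sum((ozone - ozone.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(coef.round(2), round(r2, 3))
```

In the study, candidate models built from different predictor subsets would be compared by such fit metrics cell by cell.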
Yang, Jinzhong; Woodward, Wendy A.; Reed, Valerie K.; Strom, Eric A.; Perkins, George H.; Tereffe, Welela; Buchholz, Thomas A.; Zhang, Lifei; Balter, Peter; Court, Laurence E.; Li, X. Allen; Dong, Lei
2014-05-01
Purpose: To develop a new approach for interobserver variability analysis. Methods and Materials: Eight radiation oncologists specializing in breast cancer radiation therapy delineated a patient's left breast “from scratch” and from a template generated using deformable image registration. Three of the radiation oncologists had previously received training in the Radiation Therapy Oncology Group (RTOG) breast cancer atlas consensus contouring. The simultaneous truth and performance level estimation algorithm was applied to the 8 contours delineated “from scratch” to produce a group consensus contour. Individual Jaccard scores were fitted to a beta distribution model. We also applied this analysis to 2 additional patients, who were contoured by 9 breast radiation oncologists from 8 institutions. Results: The beta distribution model had a mean of 86.2%, a standard deviation (SD) of 5.9%, a skewness of −0.7, and an excess kurtosis of 0.55, indicating broad interobserver variability. The 3 RTOG-trained physicians had higher-than-average agreement scores, indicating that their contours were close to the group consensus contour. One physician had high sensitivity but lower specificity than the others, which implies that this physician tended to contour structures larger than the others did. Two other physicians had low sensitivity but specificity similar to the others, which implies that they tended to contour structures smaller than the others did. With this information, they could adjust their contouring practice to be more consistent with the others if desired. When contouring from the template, the beta distribution model had a mean of 92.3%, an SD of 3.4%, a skewness of −0.79, and an excess kurtosis of 0.83, indicating much better consistency among individual contours. Similar results were obtained for the analysis of the 2 additional patients. Conclusions: The proposed statistical approach was able to measure interobserver variability quantitatively and to
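The two statistical ingredients named above, Jaccard agreement scores and a beta-distribution fit, can be sketched as follows. The method-of-moments estimator shown here is a simple stand-in (the abstract does not specify the fitting procedure used), and the toy masks and scores are hypothetical.

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard agreement |A intersect B| / |A union B| between two binary masks."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def beta_moments_fit(scores):
    """Method-of-moments estimates of Beta(alpha, beta) parameters
    from the sample mean and variance of agreement scores."""
    m, v = np.mean(scores), np.var(scores)
    common = m * (1 - m) / v - 1
    return m * common, (1 - m) * common

# Toy 1-D "contours": the two masks agree on 2 of the 4 marked cells
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]

# Hypothetical agreement scores from several observers
scores = [0.80, 0.85, 0.90, 0.82, 0.88]
alpha_hat, beta_hat = beta_moments_fit(scores)
print(round(jaccard(a, b), 2), round(alpha_hat / (alpha_hat + beta_hat), 2))
```

The fitted beta mean alpha/(alpha+beta) reproduces the sample mean by construction, which is what makes the moments fit a convenient first summary of a score distribution.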
Statistical downscaling of daily precipitation: A two-step probabilistic approach
NASA Astrophysics Data System (ADS)
Haas, R.; Born, K.
2010-09-01
Downscaling of climate data is an important issue in order to obtain the high-resolution data desired for most applications in meteorology and hydrology and to gain a better understanding of local climate variability. Statistical downscaling transforms data from the large scale to the local scale by relating point climate observations, climate model outputs, and high-resolution surface data. In this study, a probabilistic downscaling approach is applied to precipitation data from the subtropical mountain environment of the High Atlas in Morocco. The observations were collected within the GLOWA project IMPETUS West Africa. The considered area is characterized by strong NW-SE gradients in both altitude and precipitation. The method consists of two steps. In the first step, in order to interpolate between observational sites, Multiple Linear Regression (MLR) is applied to observed data, taking local topographic information into account. The dependent variable (predictand) is estimated using different explanatory variables (predictors): height, latitude, longitude, slope, aspect, or gradients of height in the zonal and meridional directions. For a predictand like temperature, which approximately follows a normal distribution, this method is appropriate. Developing transfer functions for precipitation is more challenging, because the empirical distribution is heavily skewed due to many days with marginal or zero amounts and a few extreme events. Because applying MLR to observed values yields partly negative rainfall amounts, a probabilistic approach is used instead: MLR is applied to the parameters of a theoretical distribution (e.g. Weibull) fitted to the empirical distributions of precipitation amounts. In the second step, a transfer function between distributions of large-scale predictors, e.g. climate model or reanalysis data, and of local observations is derived. This is achieved by an equal-probability mapping between cumulative distribution functions (CDFs) of large
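The equal-probability mapping of the second step can be sketched with empirical CDFs; note that the study fits a theoretical distribution (e.g. Weibull) rather than the purely empirical quantiles used in this illustrative numpy version, and the gamma-distributed "climatologies" below are synthetic.

```python
import numpy as np

def quantile_map(values, model_ref, obs_ref):
    """Equal-probability mapping: transform model values so that their
    distribution matches the observed one. Empirical-quantile version;
    the study instead maps between fitted theoretical (Weibull) CDFs."""
    probs = np.linspace(0, 1, 101)
    model_q = np.quantile(model_ref, probs)  # model CDF expressed as quantiles
    obs_q = np.quantile(obs_ref, probs)      # observed CDF as quantiles
    # locate each value's probability in the model CDF, then read off
    # the observed quantile at the same probability
    p = np.interp(values, model_q, probs)
    return np.interp(p, probs, obs_q)

rng = np.random.default_rng(1)
model_ref = rng.gamma(2.0, 2.0, 5000)   # synthetic, too-dry "model" climatology
obs_ref = rng.gamma(2.0, 3.0, 5000)     # synthetic, wetter "observed" climatology
corrected = quantile_map(model_ref, model_ref, obs_ref)
print(round(float(np.median(corrected)), 1), round(float(np.median(obs_ref)), 1))
```

After the mapping, the corrected values share the observed distribution (here, matching medians), which is exactly the property the equal-probability step exploits.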
A group-theoretic approach to constructions of non-relativistic spin-statistics
NASA Astrophysics Data System (ADS)
Harrison, J. M.; Robbins, J. M.
2000-11-01
We give a group-theoretical generalization of Berry and Robbins' treatment of identical particles with spin. The original construction, which leads to the correct spin-statistics relation, is seen to arise from particular irreducible representations—the totally symmetric representations—of the group SU(4). Here we calculate the exchange signs and corresponding statistics for all irreducible representations of SU(4).
A Constructivist Approach in a Blended E-Learning Environment for Statistics
ERIC Educational Resources Information Center
Poelmans, Stephan; Wessa, Patrick
2015-01-01
In this study, we report on the students' evaluation of a self-constructed constructivist e-learning environment for statistics, the compendium platform (CP). The system was built to endorse deeper learning with the incorporation of statistical reproducibility and peer review practices. The deployment of the CP, with interactive workshops and…
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, for two common statistically based environmental sampling approaches, the report includes the methods and formulas for calculating:
• the number of samples required to achieve a specified confidence in characterization and clearance decisions
• the confidence in making characterization and clearance decisions for a specified number of samples
In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase, when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
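As an illustration of the kind of calculation involved (a standard acceptance-sampling formula, not necessarily the report's exact method): with FNR = 0, if n randomly placed samples are all negative, the confidence C that at most a fraction p of the decision area is contaminated satisfies C = 1 − (1 − p)^n, which can be inverted for the required number of samples.

```python
import math

# Illustrative acceptance-sampling calculation (not the report's exact
# formulas): with a zero false-negative rate, n all-negative random
# samples give confidence C = 1 - (1 - p)**n that at most a fraction p
# of the decision area is contaminated.

def samples_needed(confidence, p_contaminated):
    """Smallest n such that 1 - (1 - p)**n >= confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_contaminated))

def achieved_confidence(n, p_contaminated):
    """Confidence obtained from n all-negative random samples."""
    return 1 - (1 - p_contaminated) ** n

n = samples_needed(0.95, 0.01)   # 95% confidence, 1% contamination threshold
print(n, round(achieved_confidence(n, 0.01), 3))
```

This recovers the familiar "rule of 299": 299 all-negative random samples give 95% confidence that less than 1% of the area is contaminated, assuming a perfect (FNR = 0) assay.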
NASA Astrophysics Data System (ADS)
Salman, Ahmad; Lapidot, Itshak; Pomerantz, Ami; Tsror, Leah; Shufan, Elad; Moreh, Raymond; Mordechai, Shaul; Huleihel, Mahmoud
2012-01-01
The early diagnosis of phytopathogens is of great importance; it could save large economic losses due to crops damaged by fungal diseases, and prevent unnecessary soil fumigation or use of fungicides and bactericides, thereby avoiding considerable environmental pollution. In this study, 18 isolates of three different fungal genera were investigated: six isolates of Colletotrichum coccodes, six isolates of Verticillium dahliae and six isolates of Fusarium oxysporum. Our main goal was to differentiate these fungal samples at the level of isolates, based on their infrared absorption spectra obtained using the Fourier transform infrared-attenuated total reflection (FTIR-ATR) sampling technique. Advanced statistical and mathematical methods, namely principal component analysis (PCA), linear discriminant analysis (LDA), and k-means clustering, were applied to the spectra after preprocessing. Our results showed significant spectral differences between the fungal genera examined. The use of k-means enabled classification between the genera with 94.5% accuracy, whereas the use of PCA [3 principal components (PCs)] and LDA achieved a 99.7% success rate. At the level of isolates, however, the best differentiation results were obtained using PCA (9 PCs) and LDA in the lower-wavenumber region (800-1775 cm-1), with identification success rates of 87%, 85.5%, and 94.5% for Colletotrichum, Fusarium, and Verticillium strains, respectively.
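A compact sketch of the spectra-classification pipeline: PCA via SVD for dimensionality reduction, followed here by nearest-centroid classification as a simplified stand-in for the LDA step. The "spectra" are synthetic stand-ins, not FTIR-ATR data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in "spectra": 3 classes (genera), 20 samples each,
# 200 wavenumber channels, each class with its own mean spectrum.
n_per, n_chan = 20, 200
means = rng.normal(0, 1, (3, n_chan))
X = np.vstack([means[k] + 0.3 * rng.normal(size=(n_per, n_chan)) for k in range(3)])
y = np.repeat(np.arange(3), n_per)

# PCA via SVD on mean-centered data; keep 3 principal components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:3].T

# Nearest-centroid classification in PC space -- a simplified stand-in
# for the LDA step used in the study.
centroids = np.array([scores[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((scores[:, None, :] - centroids) ** 2).sum(axis=2), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

With well-separated class means, a handful of PCs capture the between-class structure, which is why a few components sufficed for genus-level separation in the study.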
NASA Astrophysics Data System (ADS)
Mazzitello, Karina I.; Candia, Julián
2012-12-01
In every country, public and private agencies allocate extensive funding to collect large-scale statistical data, which in turn are studied and analyzed in order to determine local, regional, national, and international policies regarding all aspects relevant to the welfare of society. One important aspect of that process is the visualization of statistical data with embedded geographical information, which most often relies on archaic methods such as maps colored according to graded scales. In this work, we apply nonstandard visualization techniques based on physical principles. We illustrate the method with recent statistics on homicide rates in Brazil and their correlation to other publicly available data. This physics-based approach provides a novel tool that can be used by interdisciplinary teams investigating statistics and model projections in a variety of fields such as economics and gross domestic product research, public health and epidemiology, sociodemographics, political science, business and marketing, and many others.
ERIC Educational Resources Information Center
Remsburg, Alysa J.; Harris, Michelle A.; Batzli, Janet M.
2014-01-01
How can science instructors prepare students for the statistics needed in authentic inquiry labs? We designed and assessed four instructional modules with the goals of increasing student confidence, appreciation, and performance in both experimental design and data analysis. Using extensions from a just-in-time teaching approach, we introduced…
Using statistical equivalence-testing logic and mixed-model theory, an approach has been developed that extends the work of Stork et al. (JABES, 2008) to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals in different ratios ...
NASA Astrophysics Data System (ADS)
Stein, Thorwald; Hogan, Robin; Hanley, Kirsty; Clark, Peter; Halliwell, Carol; Lean, Humphrey; Nicol, John; Plant, Robert
2016-04-01
National weather services increasingly use convection-permitting simulations to assist in their operational forecasts. The skill in forecasting rainfall from convection is much improved in such simulations compared with global models that rely on parameterisation schemes, but it is less obvious if and how increased model resolution or more advanced mixing and microphysics schemes improve the physical representation of convective storms. Here, we present a novel statistical approach using high-resolution radar data to evaluate the morphology, dynamics, and evolution of convective storms over southern England. In the DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) we have used an innovative track-and-scan approach to target individual storms with the Chilbolton radar, which measures cloud and precipitation at scales of less than 300 m out to 100 km. These radar observations provide three-dimensional storm volumes and estimates of updraft core strength and size at scales adequate to test high-resolution models. For two days of interest, we have run the Met Office forecast model at its operational configuration (1.5 km grid length) and at grid lengths of 500 m, 200 m, and 100 m. Radar reflectivity and Doppler winds were simulated from the model cloud and wind output for a like-with-like comparison against the radar observations. Our results show that although the 1.5 km simulation produces domain-averaged rainfall similar to the other simulations, the majority of its rainfall is produced by storms that are a factor of 1.5-2 larger than observed, as well as longer lived, while the updrafts of these storms are an order of magnitude stronger than estimated from observations. We generally find improvements as model resolution increases, although our results depend strongly on the mixing-length parameter in the model turbulence scheme. Our findings highlight the promising role of high-resolution radar data and observational strategies targeting individual storms
Chen, W M; Deng, H W
2001-07-01
The transmission disequilibrium test (TDT) is a nuclear-family-based analysis that can test for linkage in the presence of association. It has gained extensive attention in theoretical investigation and in practical application; in both cases, the accuracy and generality of the power computation of the TDT are crucial. Despite extensive investigation, previous approaches for computing the statistical power of the TDT are neither accurate nor general. In this paper, we develop a general and highly accurate approach to analytically compute the power of the TDT. We compare the results from our approach with those from several other recent papers, all against the results obtained from computer simulations. We show that the results computed from our approach are more accurate than, or at least as accurate as, those from other approaches. More importantly, our approach can handle various situations, which include (1) families that consist of one or more children and that have any configuration of affected and nonaffected sibs; (2) families ascertained through the affection status of the parent(s); (3) any mixed sample with different types of families in (1) and (2); (4) a marker locus that is not a disease susceptibility locus; and (5) the existence of allelic heterogeneity. We implement this approach in a user-friendly computer program, the TDT Power Calculator, and demonstrate its applications. The approach and the program developed here should be significant for theoreticians seeking to accurately investigate the statistical power of the TDT in various situations, and for empirical geneticists planning efficient studies using the TDT.
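The TDT statistic itself is simple to compute: for b transmissions of the candidate allele from heterozygous parents and c non-transmissions, T = (b − c)² / (b + c), referred to a chi-square distribution with 1 degree of freedom. A stdlib sketch with illustrative counts (the paper's actual contribution, the power computation, is not reproduced here):

```python
import math

def tdt_statistic(b, c):
    """TDT (McNemar-type) statistic from counts of heterozygous parents
    transmitting (b) vs. not transmitting (c) the candidate allele."""
    return (b - c) ** 2 / (b + c)

def chi2_sf_1df(x):
    """Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2))."""
    return math.erfc(math.sqrt(x / 2))

# Illustrative counts (not from the paper): 30 transmissions vs. 10
t = tdt_statistic(30, 10)
p = chi2_sf_1df(t)
print(t, round(p, 4))
```

The closed-form 1-df survival function via `erfc` avoids any external dependency, which is convenient for quick sample-size explorations.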
ERIC Educational Resources Information Center
Petocz, Peter; Sowey, Eric
2012-01-01
The term "data snooping" refers to the practice of choosing which statistical analyses to apply to a set of data after having first looked at those data. Data snooping contradicts a fundamental precept of applied statistics, that the scheme of analysis is to be planned in advance. In this column, the authors shall elucidate the statistical…
Teaching Statistics Using Classic Psychology Research: An Activities-Based Approach
ERIC Educational Resources Information Center
Holmes, Karen Y.; Dodd, Brett A.
2012-01-01
In this article, we discuss a collection of active learning activities derived from classic psychology studies that illustrate the appropriate use of descriptive and inferential statistics. (Contains 2 tables.)
Assessment of rockfall susceptibility by integrating statistical and physically-based approaches
NASA Astrophysics Data System (ADS)
Frattini, Paolo; Crosta, Giovanni; Carrara, Alberto; Agliardi, Federico
In Val di Fassa (Dolomites, Eastern Italian Alps) rockfalls constitute the most significant gravity-induced natural hazard, threatening both the valley's few inhabitants and the thousands of tourists who populate the area in summer and winter. To assess rockfall susceptibility, we developed an integrated statistical and physically-based approach that aimed to predict both the susceptibility to onset and the probability that rockfalls will attain specific reaches. Through field checks and multi-temporal aerial photo-interpretation, we prepared a detailed inventory of both rockfall source areas and associated scree-slope deposits. Using an innovative technique based on GIS tools and a 3D rockfall simulation code, grid cells pertaining to the rockfall source-area polygons were classified as active or inactive, based on the state of activity of the associated scree-slope deposits. The simulation code allows one to link each source grid cell with scree deposit polygons by calculating the trajectory of each simulated launch of blocks. By means of discriminant analysis, we then identified the mix of environmental variables that best identifies grid cells with low or high susceptibility to rockfalls. Among these variables, structural setting, land use, and morphology were the most important factors in the initiation of rockfalls. We developed 3D simulation models of the runout distance, intensity and frequency of rockfalls, whose source grid cells corresponded either to the geomorphologically-defined source polygons (geomorphological scenario) or to study-area grid cells with slope angle greater than an empirically-defined value of 37° (empirical scenario). For each scenario, we assigned to the source grid cells either a fixed or a variable onset susceptibility; the latter was derived from the discriminant model's group (active/inactive) membership probabilities. Comparison of these four models indicates that the geomorphological scenario with
NASA Astrophysics Data System (ADS)
Shech, Elay
2015-09-01
This paper looks at the nature of idealizations and representational structures appealed to in the context of the fractional quantum Hall effect, specifically, with respect to the emergence of anyons and fractional statistics. Drawing on an analogy with the Aharonov-Bohm effect, it is suggested that the standard approach to the effects—(what we may call) the topological approach to fractional statistics—relies essentially on problematic idealizations that need to be revised in order for the theory to be explanatory. An alternative geometric approach is outlined and endorsed. Roles for idealizations in science, as well as consequences for the debate revolving around so-called essential idealizations, are discussed.
Transfer Kinetics at the Aqueous/Non-Aqueous Phase Liquid Interface. A Statistical Mechanic Approach
NASA Astrophysics Data System (ADS)
Doss, S. K.; Ezzedine, S.; Ezzedine, S.; Ziagos, J. P.; Hoffman, F.; Gelinas, R. J.
2001-05-01
Many modeling efforts in the literature use a first-order, linear-driving-force model to represent the chemical dissolution process at the non-aqueous/aqueous phase liquid (NAPL/APL) interface. In other words, NAPL to APL phase flux is assumed to be equal to the difference between the solubility limit and the "bulk aqueous solution" concentrations times a mass transfer coefficient. Under such assumptions, a few questions are raised: where, in relation to a region of pure NAPL, does the "bulk aqueous solution" regime begin and how does it behave? The answers are assumed to be associated with an arbitrary, predetermined boundary layer, which separates the NAPL from the surrounding solution. The mass transfer rate is considered to be, primarily, limited by diffusion of the component through the boundary layer. In fact, compositional models of interphase mass transfer usually assume that a local equilibrium is reached between phases. Representing mass flux as a rate-limiting process is equivalent to assuming diffusion through a stationary boundary layer with an instantaneous local equilibrium and linear concentration profile. Some environmental researchers have enjoyed success explaining their data using chemical engineering-based correlations. Correlations are strongly dependent on the experimental conditions employed. A universally applicable theory for NAPL dissolution in natural systems does not exist. These correlations are usually expressed in terms of the modified Sherwood number as a function of Reynolds, Peclet, and Schmidt numbers. The Sherwood number may be interpreted as the ratio between the grain size and the thickness of the Nernst stagnant film. In the present study, we show that transfer kinetics at the NAPL/APL interface under equilibrium conditions disagree with approaches based on the Nernst stagnant film concept. It is unclear whether local equilibrium assumptions used in current models are suitable for all situations. A statistical mechanic
Fieira, Eva; Delgado, Maria; Mendez, Lucía; Fernandez, Ricardo; de la Torre, Mercedes
2014-01-01
Objectives: Conventional video-assisted thoracoscopic (VATS) lobectomy for advanced lung cancer is a feasible and safe surgery in experienced centers. The aim of this study is to assess the feasibility of the uniportal VATS approach in the treatment of advanced non-small cell lung cancer (NSCLC) and to compare the perioperative outcomes and survival with those in early-stage tumors operated through the uniportal approach. Methods: From June 2010 to December 2012, we performed 163 uniportal VATS major pulmonary resections. Only NSCLC cases were included in this study (130 cases). Patients were divided into two groups: (A) early stage and (B) advanced cases (>5 cm, T3 or T4, or tumors requiring neoadjuvant treatment). A descriptive and retrospective study was performed, comparing perioperative outcomes and survival obtained in both groups. A survival analysis was performed with Kaplan-Meier curves, and the log-rank test was used to compare survival between patients with early and advanced stages. Results: A total of 130 cases were included in the study: 87 (A) vs. 43 (B) patients (conversion rate 1.1 vs. 6.5%, P=0.119). Mean age was 64.9 years and 73.8% were men. The patient demographic data were similar in both groups. Upper lobectomies (A, 52 vs. B, 21 patients) and anatomic segmentectomies (A, 4 vs. B, 0) were more frequent in group A, while pneumonectomy was more frequent in B (A, 1 vs. B, 6 patients). Surgical time was longer (144.9±41.3 vs. 183.2±48.9, P<0.001) and the median number of lymph nodes (14 vs. 16, P=0.004) statistically higher in advanced cases. The median number of nodal stations (5 vs. 5, P=0.165), days of chest tube (2 vs. 2, P=0.098), HOS (3 vs. 3, P=0.072), and rate of complications (17.2% vs. 14%, P=0.075) were similar in both groups. One patient died on the 58th postoperative day. The 30-month survival rate was 90% for the early stage group and 74% for advanced cases. Conclusions: Uniportal VATS lobectomy for advanced cases of NSCLC is a safe and
Scheffer, Hester J.; Melenhorst, Marleen C. A. M.; Vogel, Jantien A.; Tilborg, Aukje A. J. M. van; Nielsen, Karin; Kazemier, Geert; Meijerink, Martijn R.
2015-06-15
Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to the proximity of the tumor to collateral vessels and the duodenum, a dorsal approach was chosen. Under CT guidance, six electrodes were advanced into the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case shows that when ventral electrode placement for pancreatic IRE is not feasible, a dorsal approach may be considered as an alternative.
Statistics of beam-driven waves in plasmas with ambient fluctuations: Reduced-parameter approach
Tyshetskiy, Yu.; Cairns, I. H.; Robinson, P. A.
2008-09-15
A reduced-parameter (RP) model of quasilinear wave-plasma interactions is used to analyze statistical properties of beam-driven waves in plasmas with ambient density fluctuations. The probability distribution of wave energies in such a system is shown to have a relatively narrow peak just above the thermal wave level, and a power-law tail at high energies, the latter becoming progressively more evident for increasing characteristic amplitude of the ambient fluctuations. To better understand the physics behind these statistical features of the waves, a simplified model of stochastically driven thermal waves is developed on the basis of the RP model. An approximate analytic solution for the stationary statistical distribution of wave energies W is constructed, showing a good agreement with that of the original RP model. The 'peak' and 'tail' features of the wave energy distribution are shown to be a result of contributions of two groups of wave clumps: those subject to either very slow or very fast random variations of total wave growth rate (due to fluctuations of ambient plasma density), respectively. In the case of significant ambient plasma fluctuations, the overall wave energy distribution is shown to have a clear power-law tail at high energies, P(W) ∝ W^(-α), with nontrivial exponent 1 < α < 2, while for weak fluctuations it is close to the lognormal distribution predicted by pure stochastic growth theory. The model's wave statistics resemble the statistics of plasma waves observed by the Ulysses spacecraft in some interplanetary type III burst sources. This resemblance is discussed qualitatively, and it is suggested that the stochastically driven thermal waves might be a candidate for explaining the power-law tails in the observed wave statistics without invoking mechanisms such as self-organized criticality or nonlinear wave collapse.
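One standard way to quantify a power-law tail like the one discussed above is the Hill / maximum-likelihood estimator for the tail exponent of samples above a threshold. The sketch below is illustrative only (it is not the RP model): synthetic "wave energies" are drawn from a pure Pareto law and the survival-function exponent is recovered.

```python
import math
import random

def tail_exponent(samples, w_min):
    """Hill/MLE estimate of alpha for a survival law P(W > w) ~ (w/w_min)^-alpha."""
    tail = [w for w in samples if w >= w_min]
    return len(tail) / sum(math.log(w / w_min) for w in tail)

random.seed(1)
# Synthetic energies from a Pareto distribution with alpha = 1.5, w_min = 1
# (chosen inside the nontrivial range 1 < alpha < 2 mentioned in the abstract).
energies = [random.paretovariate(1.5) for _ in range(20000)]
alpha_hat = tail_exponent(energies, 1.0)  # should recover roughly 1.5
```

On real wave data the threshold `w_min` must itself be chosen carefully, since only the high-energy tail follows the power law while the peak near the thermal level does not.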
NASA Technical Reports Server (NTRS)
Keegan, W. B.
1974-01-01
In order to produce cost-effective environmental test programs, the test specifications must be realistic and, to be useful, they must be available early in the life of a program. This paper describes a method for achieving such specifications for subsystems by utilizing the results of a statistical analysis of data acquired at subsystem mounting locations during system-level environmental tests. The paper describes the details of this statistical analysis. The resultant recommended levels are a function of the subsystems' mounting location in the spacecraft. Methods of determining this mounting 'zone' are described. Recommendations are then made as to which of the various problem areas encountered should be pursued further.
ERIC Educational Resources Information Center
Keefe, Francis J.; And Others
1992-01-01
Reviews and highlights recent research advances and future research directions concerned with behavioral and cognitive-behavioral approaches to chronic pain. Reviews assessment research on studies of social context of pain, relationship of chronic pain to depression, cognitive variables affecting pain, and comprehensive assessment measures.…
Exploring Advanced Piano Students' Approaches to Sight-Reading
ERIC Educational Resources Information Center
Zhukov, Katie
2014-01-01
The ability to read music fluently is fundamental for undergraduate music study yet the training of sight-reading is often neglected. This study compares approaches to sight-reading and accompanying by students with extensive sight-reading experience to those with limited experience, and evaluates the importance of this skill to advanced pianists…
Evaluating New Approaches to Teaching of Sight-Reading Skills to Advanced Pianists
ERIC Educational Resources Information Center
Zhukov, Katie
2014-01-01
This paper evaluates three teaching approaches to improving sight-reading skills against a control in a large-scale study of advanced pianists. One hundred pianists in four equal groups participated in newly developed training programmes (accompanying, rhythm, musical style and control), with pre- and post-sight-reading tests analysed using…
Kruscha, Alexandra; Lindner, Benjamin
2016-08-01
We consider a homogeneous population of stochastic neurons that are driven by weak common noise (stimulus). To capture and analyze the joint firing events within the population, we introduce the partial synchronous output of the population. This is a time series defined by the events that at least a fixed fraction γ∈[0,1] of the population fires simultaneously within a small time interval. For this partial synchronous output we develop two analytical approaches to the correlation statistics. In the Gaussian approach we represent the synchronous output as a nonlinear transformation of the summed population activity and approximate the latter by a Gaussian process. In the combinatorial approach the synchronous output is represented by products of box-filtered spike trains of the single neurons. In both approaches we use linear-response theory to derive approximations for statistical measures that hold true for weak common noise. In particular, we calculate the mean value and power spectrum of the synchronous output and the cross-spectrum between synchronous output and common noise. We apply our results to the leaky integrate-and-fire neuron model and compare them to numerical simulations. The combinatorial approach is shown to provide a more accurate description of the statistics for small populations, whereas the Gaussian approximation yields compact formulas that work well for a sufficiently large population size. In particular, in the Gaussian approximation all statistical measures reveal a symmetry in the synchrony threshold γ around the mean value of the population activity. Our results may contribute to a better understanding of the role of coincidence detection in neural signal processing.
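The "partial synchronous output" defined above has a simple discrete analogue: given binned spike trains of N neurons, emit an event in a bin whenever at least a fraction γ of the population fires in that bin. A minimal sketch with made-up spike data (the paper works with continuous spike trains and a small time window):

```python
# Partial synchronous output of a population, discretized into time bins.

def partial_synchronous_output(spike_bins, gamma):
    """spike_bins: list of per-neuron 0/1 lists, one entry per time bin.
    Returns a 0/1 list marking bins where >= gamma * N neurons fired."""
    n = len(spike_bins)
    n_bins = len(spike_bins[0])
    out = []
    for b in range(n_bins):
        active = sum(train[b] for train in spike_bins)
        out.append(1 if active >= gamma * n else 0)
    return out

# Four neurons, five bins; with gamma = 0.5 a bin needs >= 2 active neurons.
trains = [[1, 0, 1, 0, 1],
          [1, 0, 0, 0, 1],
          [0, 1, 0, 0, 1],
          [0, 0, 1, 0, 0]]
sync = partial_synchronous_output(trains, 0.5)
```

With γ = 0 this reduces to the summed population activity being nonzero, and with γ = 1 it marks only bins in which the whole population fires, the two limits between which the abstract's synchrony threshold interpolates.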
Ball, Robert; Horne, Dale; Izurieta, Hector; Sutherland, Andrea; Walderhaug, Mark; Hsu, Henry
2011-05-01
The public health community faces increasing demands for improving vaccine safety while simultaneously increasing the number of vaccines available to prevent infectious diseases. The passage of the US Food and Drug Administration (FDA) Amendment Act of 2007 formalized the concept of life-cycle management of the risks and benefits of vaccines, from early clinical development through many years of use in large numbers of people. Harnessing scientific and technologic advances is necessary to improve vaccine-safety evaluation. The Office of Biostatistics and Epidemiology in the Center for Biologics Evaluation and Research is working to improve the FDA's ability to monitor vaccine safety by improving statistical, epidemiologic, and risk-assessment methods, gaining access to new sources of data, and exploring the use of genomics data. In this article we describe the current approaches, new resources, and future directions that the FDA is taking to improve the evaluation of vaccine safety.
Piloting a Blended Approach to Teaching Statistics in a College of Education: Lessons Learned
ERIC Educational Resources Information Center
Xu, Yonghong Jade; Meyer, Katrina A.; Morgan, Dianne
2008-01-01
This study investigated the performance of graduate students enrolled in introductory statistics courses. The course in Fall 2005 was delivered in a traditional face-to-face manner and the same course in Fall 2006 was blended by using an online commercial tutoring system (ALEKS) and making attendance of several face-to-face classes optional. There…
ERIC Educational Resources Information Center
König, Johannes
2015-01-01
The study aims at developing and exploring a novel video-based assessment that captures classroom management expertise (CME) of teachers and for which statistical results are provided. CME measurement is conceptualized by using four video clips that refer to typical classroom management situations in which teachers are heavily challenged…
ERIC Educational Resources Information Center
Mackenzie, Helen; Tolley, Harry; Croft, Tony; Grove, Michael; Lawson, Duncan
2016-01-01
This article explores the perspectives of three senior managers in higher education institutions in England regarding their mathematics and statistics support provision. It does so by means of a qualitative case study that draws upon the writing of Ronald Barnett about the identity of an "ecological" university, along with metaphors…
Integrating Real-Life Data Analysis in Teaching Descriptive Statistics: A Constructivist Approach
ERIC Educational Resources Information Center
Libman, Zipora
2010-01-01
This article looks at a process of integrating real-life data investigation in a course on descriptive statistics. Referring to constructivist perspectives, this article suggests a look at the potential of inculcating alternative teaching methods that encourage students to take a more active role in their own learning and participate in the…
The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical, and statistical perspectives (Pleil et al. 2014; Sobus et al. 2011...
ERIC Educational Resources Information Center
Vinay, Jean-Paul
1980-01-01
States that sound translation practice rests on a clear understanding of linguistic and cultural constraints combined with a punctilious search for equivalences. Describes various levels of constraint and proposes a statistical investigation of the degree of freedom left to translators, suggesting that most of the translated text results from…
A unifying approach for food webs, phylogeny, social networks, and statistics.
Chiu, Grace S; Westveld, Anton H
2011-09-20
A food web consists of nodes, each consisting of one or more species. The role of each node as predator or prey determines the trophic relations that weave the web. Much effort in trophic food web research is given to understand the connectivity structure, or the nature and degree of dependence among nodes. Social network analysis (SNA) techniques--quantitative methods commonly used in the social sciences to understand network relational structure--have been used for this purpose, although postanalysis effort or biological theory is still required to determine what natural factors contribute to the feeding behavior. Thus, a conventional SNA alone provides limited insight into trophic structure. Here we show that by using novel statistical modeling methodologies to express network links as the random response of within- and internode characteristics (predictors), we gain a much deeper understanding of food web structure and its contributing factors through a unified statistical SNA. We do so for eight empirical food webs: Phylogeny is shown to have nontrivial influence on trophic relations in many webs, and for each web trophic clustering based on feeding activity and on feeding preference can differ substantially. These and other conclusions about network features are purely empirical, based entirely on observed network attributes while accounting for biological information built directly into the model. Thus, statistical SNA techniques, through statistical inference for feeding activity and preference, provide an alternative perspective of trophic clustering to yield comprehensive insight into food web structure.
Bruni, Aline Thaís; Velho, Jesus Antonio; Ferreira, Arthur Serra Lopes; Tasso, Maria Júlia; Ferrari, Raíssa Santos; Yoshida, Ricardo Luís; Dias, Marcos Salvador; Leite, Vitor Barbanti Pereira
2014-08-01
This study uses statistical techniques to evaluate reports on suicide scenes; it utilizes 80 reports from different locations in Brazil, randomly collected from both federal and state jurisdictions. We aimed to assess a heterogeneous group of cases in order to obtain an overall perspective of the problem. We evaluated variables regarding the characteristics of the crime scene, such as the traces detected (blood, instruments, and clothes), and we addressed the methodology employed by the experts. A qualitative approach using basic statistics revealed a wide variation in how the issue was addressed in the documents. We examined a quantitative approach involving an empirical equation, and we used multivariate procedures to validate the quantitative methodology proposed for this empirical equation. The methodology successfully identified the main differences in the information presented in the reports, showing that there is no standardized method of analyzing evidence.
Geographic differences in approach to advanced gastric cancer: Is there a standard approach?
Kim, Richard; Tan, Ann; Choi, Minsig; El-Rayes, Bassel F
2013-11-01
Gastric cancer is one of the leading causes of cancer-related deaths worldwide. Regional differences in gastric cancer are evident between Asian and Western societies with respect to etiology, prevalence, clinicopathologic features, and treatment pattern of the disease. For patients with advanced gastric cancer (AGC), chemotherapy has been found to improve survival and quality of life compared to best supportive care alone. But in contrast to other tumors such as colon or pancreatic cancer, there are regional differences in outcome in gastric cancer. Various geographic/ethnic, biologic, and treatment-strategy factors may contribute to these differences. In the first-line setting, cisplatin- and fluoropyrimidine-based therapies remain the backbone of treatment for advanced gastric cancer in Asian and Western patients, although there is a preference for S1 in Asia and 5FU in the West. A third agent may be added in patients with good performance status. Recent trials from Asia and Europe demonstrate an advantage for second-line chemotherapy. Irinotecan and taxanes are the most commonly used agents. The introduction of trastuzumab into the frontline therapy of AGC has ushered in the age of targeted therapy and personalized medicine in this disease. In this article, we review the various first- and second-line chemotherapy regimens in AGC, taking into account regional differences including potential biomarkers.
Rodrigo, C.; Rodrigo, M.; Dunne, K.; Morgan, L.
1998-07-01
Typical wetland creations rely on sizable surface water input, provided by stream diversion or large surface water run-on, to enhance the success of establishing the wetland hydrology. However, not all landscapes provide sizable hydrological inputs from these sources. This paper presents a case history and the statistical approach adopted to model groundwater for a wetland created in a landscape position that does not allow for the use of surface water inputs.
A new approach to the statistical treatment of 2D-maps in proteomics using fuzzy logic.
Marengo, Emilio; Robotti, Elisa; Gianotti, Valentina; Righetti, Pier Giorgio
2003-01-01
A new approach to the statistical treatment of 2D-maps has been developed. This method is based on the use of fuzzy logic and makes it possible to account for the typically low reproducibility of 2D-maps. In this approach, the signal corresponding to the presence of proteins on the 2D-maps is replaced with probability functions centred on the signal itself. The standard deviation of the two-dimensional Gaussian probability function employed to blur the signal allows different uncertainties to be assigned to the two electrophoretic dimensions. The effects of changing the standard deviation and the digitisation resolution are investigated. PMID:12650579
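The fuzzification step described above can be sketched as follows: each detected spot is replaced by a bivariate Gaussian "probability of presence", with separate standard deviations for the two electrophoretic dimensions to reflect their different reproducibilities. The sigma values below are made up for illustration.

```python
import math

def fuzzy_spot(x, y, x0, y0, sigma_x, sigma_y):
    """Unnormalised membership in [0, 1]: 1 at the spot centre (x0, y0),
    decaying outwards with dimension-specific widths sigma_x, sigma_y."""
    return math.exp(-((x - x0) ** 2 / (2 * sigma_x ** 2)
                      + (y - y0) ** 2 / (2 * sigma_y ** 2)))

# A larger sigma along the less reproducible dimension widens the spot there.
centre = fuzzy_spot(10.0, 5.0, 10.0, 5.0, sigma_x=0.3, sigma_y=1.0)
off = fuzzy_spot(10.3, 5.0, 10.0, 5.0, sigma_x=0.3, sigma_y=1.0)
```

Two maps can then be compared through these smooth membership surfaces rather than through exact spot coordinates, which is how the fuzzy treatment absorbs run-to-run positional variability.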
Valverde-Asenjo, Inmaculada; García-Montero, Luis G; Quintana, Asunción; Velázquez, Javier
2009-03-01
Calcareous amendments are being used in Tuber melanosporum truffle plantations in attempts to eradicate Tuber brumale. However, there are no studies available which provide soil analysis and statistical data on this topic. We studied 77 soil samples to compare the values for carbonates, pH and total organic carbon in T. brumale truffières with the values for T. melanosporum truffières on contaminated farms and in natural areas. Statistical analyses indicate that the concentrations of active carbonate and total carbonate in the soil are significantly higher in T. brumale truffières than in T. melanosporum truffières, but that there are no significant differences in pH and total organic carbon. We conclude that liming would not suppress T. brumale ectomycorrhizas in contaminated T. melanosporum farms, and calcareous amendments do not therefore seem be a means of eradicating T. brumale in these farms.
Multivariate statistical approach to a data set of dioxin and furan contaminations in human milk
Lindstrom, G.U.M.; Sjostrom, M.; Swanson, S.E.; Furst, P.; Kruger, C.; Meemken, H.A.; Groebel, W.
1988-05-01
The levels of chlorinated dibenzodioxins, PCDDs, and dibenzofurans, PCDFs, in human milk have been of great concern after the discovery of the toxic 2,3,7,8-substituted isomers in milk of European origin. As knowledge of environmental contamination of human breast milk increases, questions will continue to be asked about possible risks from breast feeding. Before any recommendations can be made, there must be knowledge of contaminant levels in mothers' breast milk. Researchers have measured PCB and 17 different dioxins and furans in human breast milk samples. To date the data has only been analyzed by univariate and bivariate statistical methods. However to extract as much information as possible from this data set, multivariate statistical methods must be used. Here the authors present a multivariate analysis where the relationships between the polychlorinated compounds and the personalia of the mothers have been studied. For the data analysis partial least squares (PLS) modelling has been used.
Statistical Approach for Determining the Onsets/Durations of ENSO Cycle Extremes
NASA Technical Reports Server (NTRS)
Wilson, Robert M.
1999-01-01
During the interval from 1950 to mid-1998, some 16 El Niño and 10 La Niña events have been identified on the basis of sea surface temperature in the Niño 3.4 region, these 26 events representing the extremes of the quasi-periodic ENSO cycle. Statistical aspects of these events are examined. Surprisingly, the durations of El Niño and La Niña appear to be strongly bifurcated into shorter- and longer-duration classes, as do the recurrence periods of El Niño. Moreover, the duration of an El Niño appears to provide a statistically meaningful indication of when to expect the next onset of El Niño. Because the last El Niño had its onset in April 1997 and was of longer duration, onset of the next El Niño probably will not occur until after February 2000.
Harrison, Jay M; Breeze, Matthew L; Berman, Kristina H; Harrigan, George G
2013-03-01
Bayesian approaches to evaluation of crop composition data allow simpler interpretations than traditional statistical significance tests. An important advantage of Bayesian approaches is that they allow formal incorporation of previously generated data through prior distributions in the analysis steps. This manuscript describes key steps to ensure meaningful and transparent selection and application of informative prior distributions. These include (i) review of previous data in the scientific literature to form the prior distributions, (ii) proper statistical model specification and documentation, (iii) graphical analyses to evaluate the fit of the statistical model to new study data, and (iv) sensitivity analyses to evaluate the robustness of results to the choice of prior distribution. The validity of the prior distribution for any crop component is critical to acceptance of Bayesian approaches to compositional analyses and would be essential for studies conducted in a regulatory setting. Selection and validation of prior distributions for three soybean isoflavones (daidzein, genistein, and glycitein) and two oligosaccharides (raffinose and stachyose) are illustrated in a comparative assessment of data obtained on GM and non-GM soybean seed harvested from replicated field sites at multiple locations in the US during the 2009 growing season. PMID:23261475
NASA Astrophysics Data System (ADS)
Nelson, Dianna N.
Despite its spatial confines, the east Pacific basin is one of the most active basins in the world for tropical cyclone (TC) genesis. While the TCs that form in the basin have important implications for Central America and the southwestern U.S., relatively little research (compared to other tropical basins) has been done on eastern Pacific tropical cyclogenesis. The present study uses two statistical techniques, linear discriminant analysis (LDA) and a Bayesian probabilistic model, to identify those variables that are associated with the development of nascent vortices in the east Pacific, and uses them to predict tropical storm formation at lead times out to 48 hours. All nascent vortices that last for a minimum of 48 hours and form during the 2001-2009 "peak" hurricane seasons (July-September) are considered in the study. An initial set of 27 spatially averaged variables is considered as potential predictors for the statistical models. Results from both the LDA algorithm and the Bayesian probabilistic model show that a number of predictors improve the forecast skill of both models. These predictors include the 900 hPa relative vorticity, latitude of the vortex, 900 hPa deformation fields, 900-500 hPa relative humidity, 900 hPa zonal wind, and the 900-200 hPa equatorward vertical shear of the meridional wind. Using the aforementioned predictors as a basis, a composite-based conceptual model of the environmental elements that favor tropical storm formation is constructed to explain the physical mechanisms of the process. In conjunction with the statistical and composite-based models, Tropical Storm Gil (a 2007 storm that is accurately forecast by the statistical models) and Tropical Storm Enrique (a 2009 case that is poorly forecast) are used to highlight the strengths and weaknesses of each model.
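The LDA step in the study above separates developing from non-developing vortices in a predictor space. A minimal two-class Fisher discriminant in two dimensions is sketched below; the toy points are invented stand-ins for two of the study's 27 predictors.

```python
# Two-class Fisher linear discriminant in 2D (illustrative sketch).

def fisher_lda(class0, class1):
    """Each class: list of (x, y) points. Returns weight vector w and
    midpoint threshold c; classify p as class 1 if w . p > c."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def scatter(pts, m):
        sxx = sum((p[0] - m[0]) ** 2 for p in pts)
        syy = sum((p[1] - m[1]) ** 2 for p in pts)
        sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts)
        return sxx, sxy, syy

    m0, m1 = mean(class0), mean(class1)
    a0, a1 = scatter(class0, m0), scatter(class1, m1)
    sxx, sxy, syy = a0[0] + a1[0], a0[1] + a1[1], a0[2] + a1[2]  # pooled scatter
    det = sxx * syy - sxy * sxy
    dx, dy = m1[0] - m0[0], m1[1] - m0[1]
    w = ((syy * dx - sxy * dy) / det, (sxx * dy - sxy * dx) / det)  # S^-1 (m1 - m0)
    c = (w[0] * (m0[0] + m1[0]) + w[1] * (m0[1] + m1[1])) / 2
    return w, c

# Toy "non-developing" vs "developing" vortices in a 2-predictor space.
nondev = [(0.0, 0.0), (1.0, 0.2), (0.2, 1.0), (0.5, 0.5)]
dev = [(3.0, 3.0), (4.0, 3.2), (3.2, 4.0), (3.5, 3.5)]
w, c = fisher_lda(nondev, dev)
score = lambda p: w[0] * p[0] + w[1] * p[1]
```

With 27 candidate predictors, the same construction runs on the full covariance matrix, and predictor selection amounts to asking which variables improve the separation of the two classes.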
Tasaki, Hal
2016-04-29
Based on quantum statistical mechanics and microscopic quantum dynamics, we prove Planck's and Kelvin's principles for macroscopic systems in a general and realistic setting. We consider a hybrid quantum system that consists of the thermodynamic system, which is initially in thermal equilibrium, and the "apparatus" which operates on the former, and assume that the whole system evolves autonomously. This provides a satisfactory derivation of the second law for macroscopic systems.
Changes in Wave Climate from a Multi-model Global Statistical projection approach.
NASA Astrophysics Data System (ADS)
Camus, Paula; Menendez, Melisa; Perez, Jorge; Losada, Inigo
2016-04-01
Despite their outstanding relevance to coastal impacts related to climate change (i.e., inundation, global beach erosion), ensemble products of global wave climate projections for the new Representative Concentration Pathways (RCPs) described by the IPCC are rather limited. This work presents a global study of changes in wave climate under several scenarios in which a new statistical method is applied. The method is based on the statistical relationship between meteorological conditions over the geographical area of wave generation (predictor) and the resulting wave characteristics at a particular location (predictand). The atmospheric input variables used in the statistical method are sea level pressure anomalies and gradients over the spatial and time scales characterized by ESTELA maps (Perez et al. 2014). ESTELA provides a characterization of the area of wave influence of any particular ocean location worldwide, which includes contour lines of wave energy and isochrones of travel time in that area. Principal component analysis is then applied to the sea level pressure information of the ESTELA region in order to define a multi-regression statistical model based on several data-mining techniques. Once the multi-regression technique has been defined and validated against historical information from atmospheric reanalysis (predictor) and wave hindcast (predictand), the method is applied using more than 35 Global Climate Models from CMIP5 to estimate changes in several sea-state parameters (e.g., significant wave height, peak period) at seasonal and annual scales during the last decades of the 21st century. The uncertainty of the estimated wave climate changes in the ensemble is also provided and discussed.
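A drastically simplified sketch of the downscaling idea above: regress a wave parameter (e.g., monthly significant wave height) on a sea-level-pressure predictor index from the wave-generation area, then feed a climate-model index through the fitted relation. The real method uses principal components of SLP anomalies/gradients and a multi-regression model; all numbers here are invented.

```python
# One-predictor statistical downscaling sketch: fit on hindcast, apply to GCM.

def least_squares(x, y):
    """Ordinary least-squares slope and intercept for paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical SLP-gradient index vs observed significant wave height (m).
slp_index = [-1.0, -0.5, 0.0, 0.5, 1.0]
hs = [1.2, 1.6, 2.0, 2.4, 2.8]
slope, intercept = least_squares(slp_index, hs)

# A climate-model SLP index is then fed through the fitted relation.
hs_projected = intercept + slope * 0.8
```

In the full method the single index is replaced by several principal components of the ESTELA-region SLP fields, but the fit-on-reanalysis, apply-to-CMIP5 logic is the same.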
Fernández-Llamazares, Alvaro; Belmonte, Jordina; Delgado, Rosario; De Linares, Concepción
2014-04-01
Airborne pollen records are a suitable indicator for the study of climate change. The present work focuses on the role of annual pollen indices in the detection of bioclimatic trends through the analysis of the aerobiological spectra of 11 taxa of great biogeographical relevance in Catalonia over an 18-year period (1994-2011), by means of different parametric and non-parametric statistical methods. Among others, two non-parametric rank-based statistical tests were performed for detecting monotonic trends in the time series of the selected airborne pollen types, and we observed that they have similar power in detecting trends. Except for those cases in which the pollen data can be well modeled by a normal distribution, it is better to apply non-parametric statistical methods in aerobiological studies. Our results provide a reliable representation of the pollen trends in the region and suggest that greater pollen quantities have been released into the atmosphere in recent years, especially by Mediterranean taxa such as Pinus, Total Quercus and Evergreen Quercus, although the trends may differ geographically. Longer aerobiological monitoring periods are required to corroborate these results and to survey the increasing levels of certain pollen types that could have an impact on public health.
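One of the standard non-parametric rank-based trend tests used on annual series like these pollen indices is the Mann-Kendall test. A minimal version of its S statistic is sketched below; the annual counts are hypothetical, not the Catalonia data.

```python
# Mann-Kendall S statistic for a monotonic-trend test (sketch).

def mann_kendall_s(series):
    """S > 0 suggests an increasing monotonic trend, S < 0 a decreasing one."""
    s = 0
    n = len(series)
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of each pairwise difference
    return s

# Hypothetical annual pollen index with a mostly increasing tendency.
pollen = [210, 190, 250, 260, 240, 300, 320, 310, 370, 400]
s = mann_kendall_s(pollen)
```

Because S depends only on the signs of pairwise differences, the test needs no normality assumption, which is why the abstract recommends such methods when pollen data are not well modeled by a normal distribution. A full test would convert S to a z-score using its variance under the null hypothesis.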
A statistical approach to estimate O3 uptake of ponderosa pine in a mediterranean climate.
Grulke, N E; Preisler, H K; Fan, C C; Retzlaff, W A
2002-01-01
In highly polluted sites, stomatal behavior is sluggish with respect to light, vapor pressure deficit, and internal CO2 concentration (Ci), and is poorly described by existing models. Statistical models were developed to estimate stomatal conductance (gs) of 40-year-old ponderosa pine at three sites differing in pollutant exposure, for the purpose of calculating O3 uptake. gs was estimated using julian day, hour of day, pre-dawn xylem potential, and photosynthetic photon flux density (PPFD). The median difference between estimated and observed field gs did not exceed 10 mmol H2O m(-2) s(-1), and estimates fell within 95% confidence intervals. O3 uptake was calculated from hourly estimated gs, hourly O3 concentration, and a constant to correct for the difference in diffusivity between water vapor and O3. The simulation model TREGRO was also used to calculate the cumulative O3 uptake at all three sites. O3 uptake estimated by the statistical model was higher than that simulated by TREGRO because gas exchange rates were proportionally higher. O3 exposure and uptake were significantly correlated (r2>0.92), because O3 exposure and gs were highly correlated in both the statistical and simulation models. PMID:12152824
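The uptake calculation described above multiplies hourly estimated gs by hourly O3 concentration and a diffusivity-correction constant, then accumulates. A sketch follows; the value 0.613 is a commonly used O3-to-water-vapour diffusivity ratio, and it, together with all the numbers below, should be treated as illustrative rather than the study's values.

```python
# Cumulative O3 uptake from hourly conductance and concentration (sketch).

D_RATIO = 0.613  # assumed diffusivity of O3 relative to water vapour

def o3_uptake(gs_mmol, o3_ppb):
    """Sum of hourly products for paired hourly gs (mmol H2O m-2 s-1) and
    [O3] (ppb), scaled by the diffusivity ratio; units follow the inputs."""
    return sum(g * c * D_RATIO for g, c in zip(gs_mmol, o3_ppb))

gs_hourly = [120.0, 150.0, 90.0]  # hypothetical mid-day conductances
o3_hourly = [60.0, 80.0, 70.0]    # hypothetical O3 concentrations (ppb)
total = o3_uptake(gs_hourly, o3_hourly)
```

Since uptake is a product of gs and exposure, any model bias in gs propagates directly into uptake, which is why the statistical model's higher gas-exchange rates yielded higher uptake than TREGRO above.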
ATWS Analysis with an Advanced Boiling Curve Approach within COBRA 3-CP
Gensler, A.; Knoll, A.; Kuehnel, K.
2007-07-01
In 2005 the German Reactor Safety Commission issued specific requirements on core coolability demonstration for PWR ATWS (anticipated transients without scram). Thereupon AREVA NP performed detailed analyses for all German PWRs. For a German KONVOI plant the results of an ATWS licensing analysis are presented. The plant dynamic behavior is calculated with NLOOP, while the hot channel analysis is performed with the thermal hydraulic computer code COBRA 3-CP. The application of the fuel rod model included in COBRA 3-CP is essential for this type of analysis. Since DNB (departure from nucleate boiling) occurs, the advanced post-DNB model (advanced boiling curve approach) of COBRA 3-CP is used. The results are compared with those gained with the standard BEEST model. The analyzed ATWS case is the emergency power case 'loss of main heat sink with station service power supply unavailable'. Due to the decreasing coolant flow rate during the transient the core attains film boiling conditions. The results of the hot channel analysis strongly depend on the performance of the boiling curve model. The BEEST model is based on pool boiling conditions, whereas typical PWR conditions - even in most transients - are characterized by forced flow, for which the advanced boiling curve approach is particularly suitable. Compared with the BEEST model, the advanced boiling curve approach in COBRA 3-CP yields earlier rewetting, i.e. a shorter period in film boiling. Consequently, the fuel rod cladding temperatures, which increase significantly due to film boiling, drop back earlier and the high-temperature oxidation is significantly diminished. The Baker-Just correlation was used to calculate the value of equivalent cladding reacted (ECR), i.e. the reduction of cladding thickness due to corrosion throughout the transient. Based on the BEEST model the ECR value amounts to 0.4%, whereas the advanced boiling curve only leads to an ECR value of 0.2%. Both values provide large margins to the 17
Karim, Mohammad Ehsanul; Gustafson, Paul; Petkau, John; Tremlett, Helen
2016-08-15
In time-to-event analyses of observational studies of drug effectiveness, incorrect handling of the period between cohort entry and first treatment exposure during follow-up may result in immortal time bias. This bias can be eliminated by acknowledging a change in treatment exposure status with time-dependent analyses, such as fitting a time-dependent Cox model. The prescription time-distribution matching (PTDM) method has been proposed as a simpler approach for controlling immortal time bias. Using simulation studies and theoretical quantification of bias, we compared the performance of the PTDM approach with that of the time-dependent Cox model in the presence of immortal time. Both assessments revealed that the PTDM approach did not adequately address immortal time bias. Based on our simulation results, another recently proposed observational data analysis technique, the sequential Cox approach, was found to be more useful than the PTDM approach (Cox: bias = -0.002, mean squared error = 0.025; PTDM: bias = -1.411, mean squared error = 2.011). We applied these approaches to investigate the association of β-interferon treatment with delaying disability progression in a multiple sclerosis cohort in British Columbia, Canada (Long-Term Benefits and Adverse Effects of Beta-Interferon for Multiple Sclerosis (BeAMS) Study, 1995-2008). PMID:27455963
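The mechanism of immortal time bias — and why correctly classifying person-time removes it — can be illustrated with a minimal simulation (a sketch with assumed exponential event and prescription times, not the paper's PTDM or sequential Cox analyses): everyone who is ever treated must survive until first prescription, so a naive ever-treated comparison manufactures a survival advantage even when treatment does nothing.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

death = rng.exponential(10.0, n)     # event times; treatment truly has no effect
rx_time = rng.exponential(5.0, n)    # scheduled time of first prescription
treated_ever = rx_time < death       # only subjects surviving to rx_time get treated

# naive (immortal-time-biased): classify subjects by ever-treated status
naive_gap = death[treated_ever].mean() - death[~treated_ever].mean()

# time-dependent classification: person-time before rx_time counts as untreated
pt_untreated = np.minimum(death, rx_time).sum()
pt_treated = (death - rx_time)[treated_ever].sum()
rate_untreated = (~treated_ever).sum() / pt_untreated   # deaths while untreated
rate_treated = treated_ever.sum() / pt_treated          # deaths while treated
rate_ratio = rate_treated / rate_untreated              # ~1: bias removed
print(round(naive_gap, 1), round(rate_ratio, 2))
```

The naive comparison shows a spurious survival gap of roughly ten time units here, while the event-rate ratio from correctly classified person-time is close to one.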
Bean, Heather D; Pleil, Joachim D; Hill, Jane E
2015-02-01
The broad topic of biomarker research has an often-overlooked component: the documentation and interpretation of the surrounding chemical environment and other meta-data, especially from visualization, analytical and statistical perspectives. A second concern is how the environment interacts with human systems biology, what the variability is in "normal" subjects, and how such biological observations might be reconstructed to infer external stressors. In this article, we report on recent research presentations from a symposium at the 248th American Chemical Society meeting held in San Francisco, 10-14 August 2014, that focused on providing some insight into these important issues.
Firing statistics and correlations in spiking neurons: a level-crossing approach.
Badel, Laurent
2011-10-01
We present a time-dependent level-crossing theory for linear dynamical systems perturbed by colored Gaussian noise. We apply these results to approximate the firing statistics of conductance-based integrate-and-fire neurons receiving excitatory and inhibitory Poissonian inputs. Analytical expressions are obtained for three key quantities characterizing the neuronal response to time-varying inputs: the mean firing rate, the linear response to sinusoidally modulated inputs, and the pairwise spike correlation for neurons receiving correlated inputs. The theory yields tractable results that are shown to accurately match numerical simulations and provides useful tools for the analysis of interconnected neuronal populations.
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-01-01
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms. PMID:27258279
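As a sketch of the weighted-centroid idea — with an assumed trimming rule standing in for robustification; the paper's four estimators are more elaborate — the floor estimate can be formed as an RSS-weighted centroid of the heard APs' floor numbers, dropping the weakest measurements, which are the most outlier-prone:

```python
import numpy as np

def robust_weighted_centroid(ap_floors, rss_dbm, trim=1):
    """Floor estimate: RSS-weighted centroid of heard APs' floor numbers,
    trimming the `trim` weakest measurements (simple robustification)."""
    order = np.argsort(rss_dbm)          # weakest RSS first
    keep = order[trim:]
    w = 10.0 ** (rss_dbm[keep] / 10.0)   # dBm -> linear power weights
    z = np.sum(w * ap_floors[keep]) / np.sum(w)
    return int(round(z))

# hypothetical measurement: floors of heard APs and their RSS values
floors = np.array([2, 2, 2, 3, 0])
rss = np.array([-55.0, -60.0, -58.0, -70.0, -90.0])  # last AP is a weak outlier
est = robust_weighted_centroid(floors, rss)
print(est)  # -> 2
```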
Drug-excipient compatibility testing using a high-throughput approach and statistical design.
Wyttenbach, Nicole; Birringer, Christian; Alsenz, Jochem; Kuentz, Martin
2005-01-01
The aim of our research was to develop a miniaturized high-throughput drug-excipient compatibility test. Experiments were planned and evaluated using statistical experimental design. Binary mixtures of a drug, acetylsalicylic acid or fluoxetine hydrochloride, and of excipients commonly used in solid dosage forms were prepared at a ratio of approximately 1:100 in 96-well microtiter plates. Samples were exposed to different temperatures (40 degrees C/50 degrees C) and humidities (10%/75%) for different durations (1 week/4 weeks), and chemical drug degradation was analyzed using fast-gradient high pressure liquid chromatography (HPLC). Categorical statistical design was applied to identify the effects and interactions of time, temperature, humidity, and excipient on drug degradation. Acetylsalicylic acid was least stable in the presence of magnesium stearate, dibasic calcium phosphate, or sodium starch glycolate. Fluoxetine hydrochloride exhibited marked degradation only with lactose. Factor-interaction plots revealed that the relative humidity had the strongest effect on the drug-excipient blends tested. In conclusion, the developed technique enables fast drug-excipient compatibility testing and identification of interactions. Since only 0.1 mg of drug is needed per data point, fast rational preselection of pharmaceutical additives can be performed early in solid dosage form development.
Multivariate statistical approach to estimate mixing proportions for unknown end members
Valder, Joshua F.; Long, Andrew J.; Davis, Arden D.; Kenner, Scott J.
2012-01-01
A multivariate statistical method is presented, which includes principal components analysis (PCA) and an end-member mixing model to estimate unknown end-member hydrochemical compositions and the relative mixing proportions of those end members in mixed waters. PCA, together with the Hotelling T2 statistic and a conceptual model of groundwater flow and mixing, was used in selecting samples that best approximate end members, which then were used as initial values in optimization of the end-member mixing model. This method was tested on controlled datasets (i.e., true values of estimates were known a priori) and found effective in estimating these end members and mixing proportions. The controlled datasets included synthetically generated hydrochemical data, synthetically generated mixing proportions, and laboratory analyses of sample mixtures, which were used in an evaluation of the effectiveness of this method for potential use in actual hydrological settings. For three different scenarios tested, correlation coefficients (R2) for linear regression between the estimated and known values ranged from 0.968 to 0.993 for mixing proportions and from 0.839 to 0.998 for end-member compositions. The method also was applied to field data from a study of end-member mixing in groundwater as a field example and partial method validation.
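The mixing-model step can be sketched as a constrained least-squares problem: given candidate end-member compositions (hypothetical here, where the paper obtains them via PCA and Hotelling T2 screening), the mixing proportions are nonnegative and sum to one, which nonnegative least squares can enforce with a weighted sum-to-one row:

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical end-member compositions (rows: end members; columns: solutes)
E = np.array([[100.0, 10.0,  1.0],
              [ 10.0, 80.0,  5.0],
              [  5.0,  5.0, 50.0]])
true_f = np.array([0.5, 0.3, 0.2])
c = true_f @ E                       # a perfectly mixed water sample

# solve E.T @ f = c subject to f >= 0 and sum(f) = 1 (weighted augmented row)
w = 1e3
A = np.vstack([E.T, w * np.ones(3)])
b = np.concatenate([c, [w]])
f, residual = nnls(A, b)
print(np.round(f, 3))                # recovers approximately [0.5, 0.3, 0.2]
```

With noisy field data the residual would be nonzero and the recovered proportions approximate, which is why the paper validates against controlled datasets first.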
NASA Astrophysics Data System (ADS)
Reese, Erik D.; Kawahara, H.; Kitayama, T.; Sasaki, S.; Suto, Y.
2009-01-01
Motivated by cosmological hydrodynamic simulations, the intracluster medium (ICM) inhomogeneity of galaxy clusters is modeled statistically with a lognormal model for density inhomogeneity. Through mock observations of synthetic clusters, the relationship between density inhomogeneities and that of the X-ray surface brightness has been developed. This enables one to infer the statistical properties of the fluctuations of the underlying three-dimensional density distribution of real galaxy clusters from X-ray observations. We explore inhomogeneity in the intracluster medium by applying the above methodology to Chandra observations of a sample of nearby galaxy clusters. We also consider extensions of the model, including Poissonian effects, and compare this hybrid lognormal-Poisson model to the nearby cluster Chandra data. EDR gratefully acknowledges support from a JSPS (Japan Society for the Promotion of Science) Postdoctoral Fellowship for Foreign Researchers (award P07030). HK is supported by a Grant-in-Aid for JSPS Science Fellows. This work is also supported by Grants-in-Aid for Scientific Research of the Japanese Ministry of Education, Culture, Sports, Science and Technology (Nos. 20.10466, 19.07030, 16340053, 20340041, and 20540235) and by the JSPS Core-to-Core Program "International Research Network for Dark Energy".
Statistical approaches to nonstationary EEGs for the detection of slow vertex responses.
Fujikake, M; Ninomija, S P; Fujita, H
1989-06-01
A slow vertex response (SVR) is an electric auditory evoked response used for an objective hearing power test. One of the aims of an objective hearing power test is to find infants whose hearing is less than that of normal infants. Early medical treatment is important for infants with a loss of hearing so that their growth is not delayed. To measure SVRs, we generally use the averaged summation method of an electroencephalogram (EEG), because the signal-to-noise ratio (SVR to EEG, etc.) is very poor. To increase the reliability and stability of measured SVRs, and at the same time to lighten the burden of testing, it is necessary to devise an effective measurement method for SVRs. Two factors must be considered: (1) SVR waveforms change following the changes in EEGs caused by sleeping, and (2) EEGs are considered nonstationary data in prolonged measurement. In this paper, five statistical methods are used on two different models: a stationary model and a nonstationary model. Through the comparison of waves obtained by each method, we clarify the statistical characteristics of the original data (EEGs including SVRs) and consider the conditions that affect the measurement method of an SVR. PMID:2794816
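The averaged summation method works because the evoked response is identical across stimulus-locked trials while the background EEG averages toward zero, so residual noise shrinks like 1/sqrt(N). A minimal sketch with an assumed 2 uV response buried in 20 uV noise:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 0.5, 250)
svr = 2e-6 * np.sin(2 * np.pi * 4 * t)   # hypothetical 2 uV evoked response
noise_sd = 20e-6                         # ongoing EEG treated as noise

def averaged(n_trials):
    """Average n_trials stimulus-locked epochs of response plus EEG noise."""
    trials = svr + rng.normal(0.0, noise_sd, size=(n_trials, t.size))
    return trials.mean(axis=0)

# residual noise after averaging falls as 1/sqrt(N)
r16 = np.std(averaged(16) - svr)
r256 = np.std(averaged(256) - svr)
ratio = r16 / r256
print(round(ratio, 1))  # ~ sqrt(256/16) = 4
```

This 1/sqrt(N) gain assumes a stationary, trial-invariant response — exactly the assumption the paper questions for sleeping infants and prolonged recordings.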
Statistical approach to measure the efficacy of anthelmintic treatment on horse farms.
Vidyashankar, A N; Kaplan, R M; Chan, S
2007-12-01
Resistance to anthelmintics in gastrointestinal nematodes of livestock is a serious problem and appropriate methods are required to identify and quantify resistance. However, quantification and assessment of resistance depend on an accurate measure of treatment efficacy, and current methodologies fail to properly address the issue. The fecal egg count reduction test (FECRT) is the practical gold standard for measuring anthelmintic efficacy on farms, but these types of data are fraught with high variability that greatly impacts the accuracy of inference on efficacy. This paper develops a statistical model to measure, assess, and evaluate the efficacy of the anthelmintic treatment on horse farms as determined by FECRT. Novel robust bootstrap methods are developed to analyse the data and are compared to other suggested methods in the literature in terms of Type I error and power. The results demonstrate that the bootstrap methods have an optimal Type I error rate and high power to detect differences between the presumed and true efficacy without the need to know the true distribution of pre-treatment egg counts. Finally, data from multiple farms are studied and statistical models developed that take into account between-farm variability. Our analysis establishes that if inter-farm variability is not taken into account, misleading conclusions about resistance can be made.
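The quantity at the heart of the FECRT, and a paired percentile bootstrap for it, can be sketched as follows (egg counts are hypothetical; the paper's robust bootstrap and multi-farm hierarchical models go further):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical pre- and post-treatment fecal egg counts for ten horses
pre = np.array([420.0, 910.0, 150.0, 610.0, 380.0,
                1200.0, 530.0, 260.0, 740.0, 330.0])
post = np.array([30.0, 80.0, 10.0, 50.0, 20.0,
                 140.0, 60.0, 10.0, 70.0, 40.0])

def fecr(pre, post):
    """Fecal egg count reduction (%), based on group means."""
    return 100.0 * (1.0 - post.mean() / pre.mean())

point = fecr(pre, post)

# paired percentile bootstrap: resample horses with replacement
B = 2000
idx = rng.integers(0, len(pre), size=(B, len(pre)))
boot = np.array([fecr(pre[i], post[i]) for i in idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(round(point, 1), round(lo, 1), round(hi, 1))
```

Resampling horses as pairs keeps each animal's pre/post correlation intact; a presumed efficacy (say 99%) lying above the upper confidence limit would suggest reduced drug efficacy.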
NASA Astrophysics Data System (ADS)
Chandrasekaran, A.; Ravisankar, R.; Harikrishnan, N.; Satapathy, K. K.; Prasad, M. V. R.; Kanagasabapathy, K. V.
2015-02-01
Anthropogenic activities increase the accumulation of heavy metals in the soil environment. Soil pollution significantly reduces environmental quality and affects human health. In the present study, soil samples were collected at different locations of Yelagiri Hills, Tamilnadu, India for heavy metal analysis. The samples were analyzed for twelve selected heavy metals (Mg, Al, K, Ca, Ti, Fe, V, Cr, Mn, Co, Ni and Zn) using energy dispersive X-ray fluorescence (EDXRF) spectroscopy. Heavy metal concentrations in soil were investigated using the enrichment factor (EF), geo-accumulation index (Igeo), contamination factor (CF) and pollution load index (PLI) to determine metal accumulation, distribution and pollution status. Heavy metal toxicity risk was assessed using the soil quality guidelines (SQGs) given by target and intervention values of the Dutch soil standards. The concentrations of Ni, Co, Zn, Cr, Mn, Fe, Ti, K, Al and Mg were mainly controlled by natural sources. Multivariate statistical methods such as the correlation matrix, principal component analysis and cluster analysis were applied for the identification of heavy metal sources (anthropogenic/natural origin). Geo-statistical methods such as kriging identified hot spots of metal contamination in road areas, influenced mainly by the presence of natural rocks.
Process simulation and statistical approaches for validating waste form qualification models
Kuhn, W.L.; Toland, M.R.; Pulsipher, B.A.
1989-05-01
This report describes recent progress toward one of the principal objectives of the Nuclear Waste Treatment Program (NWTP) at the Pacific Northwest Laboratory (PNL): to establish relationships between vitrification process control and glass product quality. During testing of a vitrification system, it is important to show that departures affecting the product quality can be sufficiently detected through process measurements to prevent an unacceptable canister from being produced. Meeting this goal is a practical definition of a successful sampling, data analysis, and process control strategy. A simulation model has been developed and preliminarily tested by applying it to approximate operation of the West Valley Demonstration Project (WVDP) vitrification system at West Valley, New York. Multivariate statistical techniques have been identified and described that can be applied to analyze large sets of process measurements. Information on components, tanks, and time is then combined to create a single statistic through which all of the information can be used at once to determine whether the process has shifted away from a normal condition.
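Collapsing many correlated process measurements into a single out-of-control statistic is commonly done with a Hotelling T2-style distance; a minimal sketch on synthetic in-control data (the report's actual statistic and data structure are not specified here, so this is an assumed illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic in-control reference data: p correlated process measurements
p, n = 4, 500
A = rng.normal(size=(p, p))          # mixing matrix inducing correlation
ref = rng.normal(size=(n, p)) @ A.T

mu = ref.mean(axis=0)
S_inv = np.linalg.inv(np.cov(ref, rowvar=False))

def t2(x):
    """Hotelling T^2 distance of one observation from the in-control center."""
    d = x - mu
    return float(d @ S_inv @ d)

t2_center = t2(mu)                   # zero at the center
t2_shift = t2(mu + 5.0 * A[:, 0])    # a clearly shifted observation scores high
print(round(t2_center, 6), round(t2_shift, 1))
```

A single threshold on T2 then flags process shifts in any combination of measurements at once, rather than monitoring each measurement separately.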
Zaman, A.A.; McNally, T.W.; Fricke, A.L.
1998-01-01
Vapor-liquid equilibria and the boiling point elevation of slash pine kraft black liquors over a wide range of solids concentrations (up to 85% solids) have been studied. The liquors are from a statistically designed pulping experiment for pulping slash pine in a pilot-scale digester with four cooking variables of effective alkali, sulfidity, cooking time, and cooking temperature. It was found that the boiling point elevation of black liquors is pressure dependent, and this dependency is more significant at higher solids concentrations. The boiling point elevation data at different solids contents (at a fixed pressure) were correlated to the dissolved solids (S/(1 - S)) in black liquor. Due to the solubility limit of some of the salts in black liquor, a change in the slope of the boiling point elevation as a function of the dissolved solids was observed at a concentration of around 65% solids. An empirical method was developed to describe the boiling point elevation of each liquor as a function of pressure and solids mass fraction. The boiling point elevation of slash pine black liquors was correlated quantitatively to the pulping variables, using different statistical procedures. These predictive models can be applied to determine the boiling point rise (and boiling point) of slash pine black liquors at processing conditions from knowledge of the pulping variables. The results are presented, and their utility is discussed.
Solar granulation and statistical crystallography: A modeling approach using size-shape relations
NASA Technical Reports Server (NTRS)
Noever, D. A.
1994-01-01
The irregular polygonal pattern of solar granulation is analyzed for size-shape relations using statistical crystallography. In contrast to previous work which has assumed perfectly hexagonal patterns for granulation, more realistic accounting of cell (granule) shapes reveals a broader basis for quantitative analysis. Several features emerge as noteworthy: (1) a linear correlation between number of cell-sides and neighboring shapes (called Aboav-Weaire's law); (2) a linear correlation between both average cell area and perimeter and the number of cell-sides (called Lewis's law and a perimeter law, respectively) and (3) a linear correlation between cell area and squared perimeter (called convolution index). This statistical picture of granulation is consistent with a finding of no correlation in cell shapes beyond nearest neighbors. A comparative calculation between existing model predictions taken from luminosity data and the present analysis shows substantial agreements for cell-size distributions. A model for understanding grain lifetimes is proposed which links convective times to cell shape using crystallographic results.
F.R. Carrillo-Pedroza; A. Davalos Sanchez; M. Soria-Aguilar; E.T. Pecina Trevino
2009-07-15
The removal of pyritic sulfur from a Mexican sub-bituminous coal in nitric, sulfuric, and hydrochloric acid solutions was investigated. The effect of the type and concentration of acid, in the presence of hydrogen peroxide and ozone as oxidants, in a temperature range of 20-60 °C, was studied. The relevant factors in pyrite dissolution were determined by means of the statistical analysis of variance and optimized by the response surface method. Kinetic models were also evaluated, showing that the dissolution of pyritic sulfur follows the shrinking core model, with diffusion through the solid product of the reaction as the controlling stage. The results of the statistical analysis indicate that the use of ozone as an oxidant improves pyrite dissolution because, at 0.25 M HNO3 or H2SO4 at 20 °C and 0.33 g/h O3, the dissolution obtained is similar to that with 1 M H2O2 and 1 M HNO3 or H2SO4 at 40 °C.
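With diffusion through the product layer controlling, the shrinking core model predicts 1 - 3(1-X)^(2/3) + 2(1-X) = kt for conversion X at time t. A sketch with synthetic conversion data (the rate constant and conversions are illustrative, not the paper's values) shows how k is recovered by a through-the-origin fit:

```python
import numpy as np

def g_diffusion(x):
    """Shrinking-core integrated rate form, product-layer diffusion control:
    g(X) = 1 - 3(1-X)^(2/3) + 2(1-X) = k*t."""
    return 1.0 - 3.0 * (1.0 - x) ** (2.0 / 3.0) + 2.0 * (1.0 - x)

k_true = 0.002                                # illustrative rate constant (1/min)
x = np.array([0.10, 0.25, 0.40, 0.55, 0.70])  # conversions
t = g_diffusion(x) / k_true                   # exact times under the model

# least-squares slope through the origin of g(X) versus t
g = g_diffusion(x)
k_hat = (g * t).sum() / (t * t).sum()
print(k_hat)  # recovers k_true
```

In practice one would compare the linearity of g(X) versus t across the candidate rate forms (surface reaction vs diffusion control) before accepting the diffusion-controlled model.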
Yan, Ma; Alejandro, Gonzalez Della Valle; Hui, Zhang; Tu, X M
2010-03-15
Cronbach coefficient alpha (CCA) is a classic measure of item internal consistency of an instrument and is used in a wide range of behavioral, biomedical, psychosocial, and health-care-related research. Methods are available for making inference about one CCA or multiple CCAs from correlated outcomes. However, none of the existing approaches effectively address missing data. As longitudinal study designs become increasingly popular and complex in modern-day clinical studies, missing data have become a serious issue, and the lack of methods to systematically address this problem has hampered the progress of research in the aforementioned fields. In this paper, we develop a novel approach to tackle the complexities involved in addressing missing data (at the instrument level due to subject dropout) within a longitudinal data setting. The approach is illustrated with both clinical and simulated data.
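For complete data, the CCA itself is a one-line computation from item and total-score variances; a sketch with assumed toy data (the paper's contribution — handling dropout in longitudinal designs — is not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach alpha for an (n_subjects, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

base = np.arange(1.0, 11.0)
parallel = np.column_stack([base, base, base])     # perfectly consistent items
alpha_parallel = cronbach_alpha(parallel)          # -> 1.0
mixed = np.column_stack([base, base, base[::-1]])  # one reverse-coded item
alpha_mixed = cronbach_alpha(mixed)                # negative: items inconsistent
print(alpha_parallel, alpha_mixed)
```

Dropping subjects with any missing item (complete-case analysis) before this computation is exactly the wasteful practice the paper's method avoids.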
NASA Astrophysics Data System (ADS)
Tyagi, Payal; Marruzzo, Alessia; Pagnani, Andrea; Antenucci, Fabrizio; Leuzzi, Luca
2016-07-01
We implement a pseudolikelihood approach with l1 and l2 regularizations as well as the recently introduced pseudolikelihood with decimation procedure to the inverse problem in continuous spin models on arbitrary networks, with arbitrarily disordered couplings. Performances of the approaches are tested against data produced by Monte Carlo numerical simulations and compared also to previously studied fully connected mean-field-based inference techniques. The results clearly show that the best network reconstruction is obtained through the decimation scheme, which also allows us to make the inference down to lower temperature regimes. Possible applications to phasor models for light propagation in random media are proposed and discussed.
Gomez-Ramirez, Jaime; Sanz, Ricardo
2013-09-01
One of the most important scientific challenges today is the quantitative and predictive understanding of biological function. Classical mathematical and computational approaches have been enormously successful in modeling inert matter, but they may be inadequate to address inherent features of biological systems. We address the conceptual and methodological obstacles that lie in the inverse problem in biological systems modeling. We introduce a full Bayesian approach (FBA), a theoretical framework to study biological function, in which probability distributions are conditional on biophysical information that physically resides in the biological system that is studied by the scientist.
DEVELOPMENT OF AN ADVANCED APPROACH FOR NEXT-GENERATION INTEGRATED RESERVOIR CHARACTERIZATION
Scott R. Reeves
2005-04-01
Accurate, high-resolution, three-dimensional (3D) reservoir characterization can provide substantial benefits for effective oilfield management. By doing so, the predictive reliability of reservoir flow models, which are routinely used as the basis for investment decisions involving hundreds of millions of dollars and designed to recover millions of barrels of oil, can be significantly improved. Even a small improvement in incremental recovery for high-value assets can result in important contributions to bottom-line profitability. Today's standard practice for developing a 3D reservoir description is to use seismic inversion techniques. These techniques make use of geostatistics and other stochastic methods to solve the inverse problem, i.e., to iteratively construct a likely geologic model and then upscale and compare its acoustic response to that actually observed in the field. This method has several inherent flaws: (1) the resulting models are highly non-unique, with multiple equiprobable realizations produced; (2) the results define a distribution of possible outcomes, so the best they can do is quantify the uncertainty inherent in the modeling process; (3) each realization must be run through a flow simulator and history matched to assess its appropriateness; and therefore (4) the method is labor intensive and requires significant time to complete a field study, so it is applied to only a small percentage of oil and gas producing assets. A new approach to achieve this objective was first examined in a Department of Energy (DOE) study performed by Advanced Resources International (ARI) in 2000/2001. The goal of that study was to evaluate whether robust relationships between data at vastly different scales of measurement could be established using virtual intelligence (VI) methods. The proposed workflow required that three specific relationships be established through use of artificial neural networks (ANN's): core-to-log, log
A Statistical Ontology-Based Approach to Ranking for Multiword Search
ERIC Educational Resources Information Center
Kim, Jinwoo
2013-01-01
Keyword search is a prominent data retrieval method for the Web, largely because the simple and efficient nature of keyword processing allows a large amount of information to be searched with fast response. However, keyword search approaches do not formally capture the clear meaning of a keyword query and fail to address the semantic relationships…
Graph-based and statistical approaches for detecting spectrally variable target materials
NASA Astrophysics Data System (ADS)
Ziemann, Amanda K.; Theiler, James
2016-05-01
In discriminating target materials from background clutter in hyperspectral imagery, one must contend with variability in both. Most algorithms focus on the clutter variability, but for some materials there is considerable variability in the spectral signatures of the target. This is especially the case for solid target materials, whose signatures depend on morphological properties (particle size, packing density, etc.) that are rarely known a priori. In this paper, we investigate detection algorithms that explicitly take into account the diversity of signatures for a given target. In particular, we investigate variable target detectors when applied to new representations of the hyperspectral data: a manifold learning based approach, and a residual based approach. The graph theory and manifold learning based approach incorporates multiple spectral signatures of the target material of interest; this is built upon previous work that used a single target spectrum. In this approach, we first build an adaptive nearest neighbors (ANN) graph on the data and target spectra, and use a biased locally linear embedding (LLE) transformation to perform nonlinear dimensionality reduction. This biased transformation results in a lower-dimensional representation of the data that better separates the targets from the background. The residual approach uses an annulus based computation to represent each pixel after an estimate of the local background is removed, which suppresses local backgrounds and emphasizes the target-containing pixels. We will show detection results in the original spectral space, the dimensionality-reduced space, and the residual space, all using subspace detectors: ranked spectral angle mapper (rSAM), subspace adaptive matched filter (ssAMF), and subspace adaptive cosine/coherence estimator (ssACE). Results of this exploratory study will be shown on a ground-truthed hyperspectral image with variable target spectra and both full and mixed pixel targets.
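Of the subspace detectors listed, the spectral angle mapper is the simplest to illustrate: it scores a pixel by the angle between its spectrum and a target spectrum, making it insensitive to overall brightness. A minimal single-target sketch with made-up four-band spectra (the paper's rSAM/ssAMF/ssACE variants operate on target subspaces rather than a single spectrum):

```python
import numpy as np

def spectral_angle(pixel, target):
    """Spectral angle (radians) between a pixel spectrum and a target spectrum."""
    cos = pixel @ target / (np.linalg.norm(pixel) * np.linalg.norm(target))
    return np.arccos(np.clip(cos, -1.0, 1.0))

target = np.array([0.2, 0.5, 0.9, 0.4])
match = 3.0 * target                       # same shape, different brightness
other = np.array([0.9, 0.4, 0.1, 0.7])     # dissimilar spectral shape

ang_match = spectral_angle(match, target)  # ~0: brightness-invariant match
ang_other = spectral_angle(other, target)
print(round(ang_match, 6), round(ang_other, 3))
```

Extending this to a set of target signatures — scoring against the best-matching member or the spanned subspace — is the step that accommodates the target variability the paper addresses.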
NASA Astrophysics Data System (ADS)
Yang, Shuyu; King, Philip; Corona, Enrique; Wilson, Mark P.; Aydin, Kaan; Mitra, Sunanda; Soliz, Peter; Nutter, Brian S.; Kwon, Young H.
2003-05-01
Feature extraction is a critical preprocessing step, which influences the outcome of the entire process of developing significant metrics for medical image evaluation. The purpose of this paper is, first, to compare the effect of an optimized statistical feature extraction methodology with that of a well-designed combination of point operations for feature extraction at the preprocessing stage of retinal images, for developing useful diagnostic metrics for retinal diseases such as glaucoma and diabetic retinopathy. Segmentation of the extracted features allows us to investigate the effect of occlusion induced by these features on generating stereo disparity mapping and 3-D visualization of the optic cup/disc. Segmentation of blood vessels in the retina also has significant application in generating precise vessel diameter metrics for vascular diseases such as hypertension and diabetic retinopathy, for monitoring the progression of retinal diseases.
A copula approach on the dynamics of statistical dependencies in the US stock market
NASA Astrophysics Data System (ADS)
Münnix, Michael C.; Schäfer, Rudi
2011-11-01
We analyze the statistical dependence structure of the S&P 500 constituents in the 4-year period from 2007 to 2010 using intraday data from the New York Stock Exchange’s TAQ database. Instead of using a given parametric copula with a predetermined shape, we study the empirical pairwise copula directly. We find that the shape of this copula resembles the Gaussian copula to some degree, but exhibits a stronger tail dependence, for both correlated and anti-correlated extreme events. By comparing the tail dependence dynamically to the market’s average correlation level as a commonly used quantity we disclose the average level of error of the Gaussian copula, which is implied in the calculation of many correlation coefficients.
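The empirical-copula measurement described above can be sketched as follows: transform each margin to ranks (pseudo-observations) and estimate the lower-tail dependence as the joint probability of both variables falling below a quantile, divided by that quantile. The rank transform and the toy comonotone series below are illustrative, not the S&P 500 intraday data.

```python
# Sketch: empirical pairwise copula and a lower-tail dependence estimate.
# Data and quantile level are toy choices for illustration.

def ranks(xs):
    """Pseudo-observations u_i = rank(x_i) / (n + 1), all in (0, 1)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    u = [0.0] * len(xs)
    for r, i in enumerate(order, start=1):
        u[i] = r / (len(xs) + 1)
    return u

def lower_tail_dependence(x, y, q=0.1):
    """Estimate lambda_L(q) = P(U <= q, V <= q) / q from paired samples."""
    u, v = ranks(x), ranks(y)
    joint = sum(1 for ui, vi in zip(u, v) if ui <= q and vi <= q) / len(x)
    return joint / q

# Perfectly comonotone data: every low rank in x is also low in y, so the
# estimate is 1; a Gaussian copula would give 0 in the limit q -> 0.
x = list(range(100))
y = [2 * xi + 1 for xi in x]
print(lower_tail_dependence(x, y, q=0.1))   # -> 1.0
```

The same counting with (U <= q, V >= 1 - q) would capture the anti-correlated extremes the abstract also examines.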
Mohajeri, Leila; Aziz, Hamidi Abdul; Isa, Mohamed Hasnain; Zahed, Mohammad Ali
2010-02-01
This work studied the bioremediation of weathered crude oil (WCO) in coastal sediment samples using a central composite face-centered design (CCFD) under response surface methodology (RSM). Initial oil concentration, biomass, nitrogen and phosphorus concentrations were used as independent variables (factors) and oil removal as the dependent variable (response) in a 60-day trial. A statistically significant model for WCO removal was obtained. The coefficient of determination (R(2)=0.9732) and probability value (P<0.0001) demonstrated the significance of the regression model. Numerical optimization based on a desirability function was carried out for initial oil concentrations of 2, 16 and 30 g per kg sediment; 83.13, 78.06 and 69.92 per cent removal were observed, respectively, compared to 77.13, 74.17 and 69.87 per cent removal for the un-optimized conditions. PMID:19773160
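A response-surface fit of the kind used in RSM can be sketched with ordinary least squares on a second-order polynomial. For brevity this uses a single factor and invented removal data rather than the study's four-factor CCFD design; the optimum would then be read off the fitted surface.

```python
# Sketch: least-squares fit of y = b0 + b1*x + b2*x^2 via the normal
# equations, solved by Gauss-Jordan elimination. Toy data only.

def fit_quadratic(xs, ys):
    n = len(xs)
    X = [[1.0, x, x * x] for x in xs]
    # Normal equations: (X^T X) b = X^T y.
    xtx = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(3)]
           for i in range(3)]
    xty = [sum(X[k][i] * ys[k] for k in range(n)) for i in range(3)]
    for i in range(3):
        piv = xtx[i][i]
        xtx[i] = [v / piv for v in xtx[i]]
        xty[i] = xty[i] / piv
        for r in range(3):
            if r != i:
                f = xtx[r][i]
                xtx[r] = [a - f * b for a, b in zip(xtx[r], xtx[i])]
                xty[r] = xty[r] - f * xty[i]
    return xty

# Invented removal curve: removal = 50 + 4*x - 0.5*x^2 (per cent).
xs = [0, 1, 2, 3, 4, 5]
ys = [50 + 4 * x - 0.5 * x ** 2 for x in xs]
b0, b1, b2 = fit_quadratic(xs, ys)
print([round(c, 3) for c in (b0, b1, b2)])   # -> [50.0, 4.0, -0.5]
```

With the fitted coefficients in hand, the stationary point -b1/(2*b2) gives the factor level that maximizes the predicted response, which is the role the desirability-based optimization plays in the study.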
NASA Astrophysics Data System (ADS)
Noo, Frédéric; Wunderlich, Adam; Heuscher, Dominic; Schmitt, Katharina; Yu, Zhicong
2013-03-01
Task-based image quality assessment is a valuable methodology for development, optimization and evaluation of new image formation processes in x-ray computed tomography (CT), as well as in other imaging modalities. A simple way to perform such an assessment is through the use of two (or more) alternative forced choice (AFC) experiments. In this paper, we are interested in drawing statistical inference from outcomes of multiple AFC experiments that are obtained using multiple readers as well as multiple cases. We present a non-parametric covariance estimator for this problem. Then, we illustrate its usefulness with a practical example involving x-ray CT simulations. The task for this example is classification between presence or absence of one lesion with unknown location within a given object. This task is used for comparison of three standard image reconstruction algorithms in x-ray CT using four human observers.
NASA Astrophysics Data System (ADS)
Taylor, Bettina B.; Taylor, Marc H.; Dinter, Tilman; Bracher, Astrid
2013-06-01
Phycobiliproteins are a family of water-soluble pigment proteins that play an important role as accessory or antenna pigments and absorb in the green part of the light spectrum poorly used by chlorophyll a. The phycoerythrins (PEs) are one of four types of phycobiliproteins that are generally distinguished based on their absorption properties. As PEs are water soluble, they are generally not captured with conventional pigment analysis. Here we present a statistical model based on in situ measurements of three transatlantic cruises which allows us to derive relative PE concentration from standardized hyperspectral underwater radiance measurements (Lu). The model relies on Empirical Orthogonal Function (EOF) analysis of Lu spectra and, subsequently, a Generalized Linear Model with measured PE concentrations as the response variable and EOF loadings as predictor variables. The method is used to predict relative PE concentrations throughout the water column and to calculate integrated PE estimates based on those profiles.
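The two-stage pipeline (EOF decomposition of the spectra, then a regression of the response on the EOF scores) can be sketched as follows. A rank-one toy spectral matrix stands in for the hyperspectral Lu measurements, and a simple linear fit stands in for the Generalized Linear Model.

```python
# Sketch: leading EOF by power iteration, then regression of the response
# (here, toy PE concentrations) on the EOF scores. All numbers invented.

def leading_eof(spectra, iters=200):
    """First EOF (principal direction) of mean-centered rows."""
    n, p = len(spectra), len(spectra[0])
    mean = [sum(row[j] for row in spectra) / n for j in range(p)]
    X = [[row[j] - mean[j] for j in range(p)] for row in spectra]
    v = [1.0] * p
    for _ in range(iters):
        s = [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]   # X v
        w = [sum(X[i][j] * s[i] for i in range(n)) for j in range(p)]   # X^T X v
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(X[i][j] * v[j] for j in range(p)) for i in range(n)]
    return v, scores

def simple_regression(x, y):
    """Least-squares intercept and slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

spectra = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [3.0, 6.0, 9.0]]  # rank-one toy
pe = [0.1, 0.2, 0.3]
_, scores = leading_eof(spectra)
a, b = simple_regression(scores, pe)
pred = [a + b * s for s in scores]
print([round(p, 3) for p in pred])   # recovers pe exactly for this rank-one toy
```

In practice several EOF loadings would enter the model as predictors, and the GLM link would be chosen to suit the response distribution.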
Quantum-statistical T-matrix approach to line broadening of hydrogen in dense plasmas
Lorenzen, Sonja; Wierling, August; Roepke, Gerd; Reinholz, Heidi; Zammit, Mark C.; Fursa, Dmitry V.; Bray, Igor
2010-10-29
The electronic self-energy Σ^e is an important input in a quantum-statistical theory for spectral line profile calculations. It describes the influence of plasma electrons on bound-state properties. In dense plasmas, the effect of strong, i.e. close, electron-emitter collisions can be considered by three-particle T-matrix diagrams. These diagrams are approximated with the help of an effective two-particle T-matrix, which is obtained from convergent close-coupling calculations with Debye screening. A comparison with other theories is carried out for the 2p level of hydrogen at k_B T = 1 eV and n_e = 2·10^23 m^-3, and results are given for n_e = 1·10^25 m^-3.
Advances in Proteomics Data Analysis and Display Using an Accurate Mass and Time Tag Approach
Zimmer, Jennifer S.D.; Monroe, Matthew E.; Qian, Wei-Jun; Smith, Richard D.
2007-01-01
Proteomics has recently demonstrated utility in understanding cellular processes on the molecular level as a component of systems biology approaches and for identifying potential biomarkers of various disease states. The large amount of data generated by utilizing high efficiency (e.g., chromatographic) separations coupled to high mass accuracy mass spectrometry for high-throughput proteomics analyses presents challenges related to data processing, analysis, and display. This review focuses on recent advances in nanoLC-FTICR-MS-based proteomics approaches and the accompanying data processing tools that have been developed to display and interpret the large volumes of data being produced. PMID:16429408
Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.
Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H
2013-12-01
Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data was obtained from rain gauges from the U.S. Geological Survey and Haiti Regeneration Initiative, and remote sensing rainfall data from the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues.
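The lagged-association finding can be illustrated with a plain cross-correlation scan: correlate rainfall with cholera incidence at a range of lags and pick the strongest. The paper's actual methods were case-crossover analysis and distributed lag nonlinear models; the series below are synthetic, built so that cases echo rainfall 5 days later.

```python
# Sketch: find the lag at which rainfall best predicts later case counts.
# Toy series only; real incidence data are noisy and overdispersed.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(rain, cases, max_lag=10):
    """Lag (in days) at which rain[t] correlates most with cases[t + lag]."""
    scores = {}
    for lag in range(1, max_lag + 1):
        scores[lag] = pearson(rain[:len(rain) - lag], cases[lag:])
    return max(scores, key=scores.get)

rain = [(t * 7) % 13 for t in range(60)]          # toy rainfall series
cases = [0.0] * 5 + [10 + 3 * r for r in rain]    # cases echo rain 5 days later
print(best_lag(rain, cases[:60]))                 # -> 5 for this toy series
```

A distributed-lag model generalizes this by estimating a smooth effect over a window of lags instead of a single best one.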
Whole vertebral bone segmentation method with a statistical intensity-shape model based approach
NASA Astrophysics Data System (ADS)
Hanaoka, Shouhei; Fritscher, Karl; Schuler, Benedikt; Masutani, Yoshitaka; Hayashi, Naoto; Ohtomo, Kuni; Schubert, Rainer
2011-03-01
An automatic segmentation algorithm for the vertebrae in human body CT images is presented. In particular, we focused on constructing and utilizing 4 different statistical intensity-shape combined models for the cervical, upper thoracic, lower thoracic and lumbar vertebrae, respectively. For this purpose, two previously reported methods were combined: a deformable model-based initial segmentation method and a statistical shape-intensity model-based precise segmentation method. The former is used as a pre-processing step to detect the position and orientation of each vertebra, which determines the initial condition for the latter precise segmentation method. The precise segmentation method needs prior knowledge of both the intensities and the shapes of the objects. After principal component analysis (PCA) of such shape-intensity expressions obtained from training image sets, vertebrae were parametrically modeled as a linear combination of the principal component vectors. The segmentation of each target vertebra was performed by fitting this parametric model to the target image by maximum a posteriori estimation, combined with the geodesic active contour method. In an experiment using 10 cases, the initial segmentation was successful in 6 cases and only partially failed in 4 cases (2 in the cervical area and 2 in the lumbo-sacral area). In the precise segmentation, the mean error distances were 2.078, 1.416, 0.777 and 0.939 mm for the cervical, upper thoracic, lower thoracic and lumbar spine, respectively. In conclusion, our automatic segmentation algorithm for the vertebrae in human body CT images showed a fair performance for cervical, thoracic and lumbar vertebrae.
Chen, Yue; Cunningham, Gregory; Henderson, Michael
2016-09-21
This study aims to statistically estimate the errors in local magnetic field directions that are derived from electron directional distributions measured by Los Alamos National Laboratory geosynchronous (LANL GEO) satellites. First, by comparing derived and measured magnetic field directions along the GEO orbit to those calculated from three selected empirical global magnetic field models (including a static Olson and Pfitzer 1977 quiet magnetic field model, a simple dynamic Tsyganenko 1989 model, and a sophisticated dynamic Tsyganenko 2001 storm model), it is shown that the errors in both derived and modeled directions are at least comparable. Second, using a newly developed proxy method as well as comparing results from empirical models, we are able to provide for the first time circumstantial evidence showing that derived magnetic field directions should statistically match the real magnetic directions better, with averaged errors < ∼ 2°, than those from the three empirical models with averaged errors > ∼ 5°. In addition, our results suggest that the errors in derived magnetic field directions do not depend much on magnetospheric activity, in contrast to the empirical field models. Finally, as applications of the above conclusions, we show examples of electron pitch angle distributions observed by LANL GEO and also take the derived magnetic field directions as the real ones so as to test the performance of empirical field models along the GEO orbits, with results suggesting dependence on solar cycles as well as satellite locations. This study demonstrates the validity and value of the method that infers local magnetic field directions from particle spin-resolved distributions.
3D geometry analysis of the medial meniscus--a statistical shape modeling approach.
Vrancken, A C T; Crijns, S P M; Ploegmakers, M J M; O'Kane, C; van Tienen, T G; Janssen, D; Buma, P; Verdonschot, N
2014-10-01
The geometry-dependent functioning of the meniscus indicates that detailed knowledge of 3D meniscus geometry and its inter-subject variation is essential to design well-functioning, anatomically shaped meniscus replacements. Therefore, the aim of this study was to quantify 3D meniscus geometry and to determine whether variation in medial meniscus geometry is size- or shape-driven. We also performed a cluster analysis to identify distinct morphological groups of medial menisci and assessed whether meniscal geometry is gender-dependent. A statistical shape model was created, containing the meniscus geometries of 35 subjects (20 females, 15 males) that were obtained from MR images. A principal component analysis was performed to determine the most important modes of geometry variation, and the characteristic changes per principal component were evaluated. Each meniscus from the original dataset was then reconstructed as a linear combination of principal components. This allowed the comparison of male and female menisci, and a cluster analysis to determine distinct morphological meniscus groups. Of the variation in medial meniscus geometry, 53.8% was found to be due to primarily size-related differences and 29.6% due to shape differences. Shape changes were most prominent in the cross-sectional plane, rather than in the transverse plane. Significant differences between male and female menisci were only found for principal component 1, which predominantly reflected size differences. The cluster analysis resulted in four clusters, yet these clusters represented two statistically different meniscal shapes, as differences between clusters 1, 2 and 4 were only present for principal component 1. This study illustrates that differences in meniscal geometry cannot be explained by scaling only, but that different meniscal shapes can be distinguished. Functional analysis, e.g. through finite element modeling, is required to assess whether these distinct shapes actually influence meniscus function.
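The reconstruction step at the heart of a statistical shape model (a shape expressed as the mean plus a weighted sum of principal components) can be sketched as follows, with toy 2-D landmark vectors in place of 3-D meniscus geometry.

```python
# Sketch: shape = mean + sum_k w_k * pc_k, all as flat landmark vectors.
# Landmarks and modes are invented; a "size-like" mode mimics PC1 in the study.

def reconstruct(mean, components, weights):
    """Linear-combination reconstruction of a shape from its PCA modes."""
    shape = list(mean)
    for w, pc in zip(weights, components):
        shape = [s + w * c for s, c in zip(shape, pc)]
    return shape

mean = [0.0, 0.0, 1.0, 0.0, 1.0, 1.0]    # three (x, y) landmarks, flattened
pc1 = [0.1, 0.0, 0.1, 0.0, 0.1, 0.0]     # mode 1: uniform shift ("size-like")
pc2 = [0.0, 0.1, 0.0, -0.1, 0.0, 0.1]    # mode 2: "shape-like" deformation
shape = reconstruct(mean, [pc1, pc2], [2.0, -1.0])
print([round(v, 2) for v in shape])      # -> [0.2, -0.1, 1.2, 0.1, 1.2, 0.9]
```

Clustering and male/female comparisons then operate on the per-subject weight vectors rather than on the raw landmark coordinates.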
Ants in a Labyrinth: A Statistical Mechanics Approach to the Division of Labour
Richardson, Thomas Owen; Christensen, Kim; Franks, Nigel Rigby; Jensen, Henrik Jeldtoft; Sendova-Franks, Ana Blagovestova
2011-01-01
Division of labour (DoL) is a fundamental organisational principle in human societies, within virtual and robotic swarms and at all levels of biological organisation. DoL reaches a pinnacle in insect societies, where the most widely used model is based on variation in response thresholds among individuals, and the assumption that individuals and stimuli are well-mixed. Here, we present a spatially explicit model of DoL. Our model is inspired by Pierre de Gennes' 'Ant in a Labyrinth', which laid the foundations of an entirely new field in statistical mechanics. We demonstrate the emergence, even in a simplified one-dimensional model, of a spatial patterning of individuals and a right-skewed activity distribution, both of which are characteristics of division of labour in animal societies. We then show using a two-dimensional model that the work done by an individual within an activity bout is a sigmoidal function of its response threshold. Furthermore, there is an inverse relationship between the overall stimulus level and the skewness of the activity distribution. Therefore, the difference in the amount of work done by two individuals with different thresholds increases as the overall stimulus level decreases. Indeed, spatial fluctuations of task stimuli are minimised at these low stimulus levels. Hence, the more unequally labour is divided amongst individuals, the greater the ability of the colony to maintain homeostasis. Finally, we show that the non-random spatial distribution of individuals within biological and social systems could be caused by indirect (stigmergic) interactions, rather than direct agent-to-agent interactions. Our model links the principle of DoL with principles of statistical mechanics and provides testable hypotheses for future experiments. PMID:21541019
Mapping permeability in low-resolution micro-CT images: A multiscale statistical approach
NASA Astrophysics Data System (ADS)
Botha, Pieter W. S. K.; Sheppard, Adrian P.
2016-06-01
We investigate the possibility of predicting permeability in low-resolution X-ray microcomputed tomography (µCT). Lower-resolution whole core images give greater sample coverage and are therefore more representative of heterogeneous systems; however, the lower resolution causes connecting pore throats to be represented by intermediate gray scale values and limits information on pore system geometry, rendering such images inadequate for direct permeability simulation. We present an imaging and computation workflow aimed at predicting absolute permeability for sample volumes that are too large to allow direct computation. The workflow involves computing permeability from high-resolution µCT images, along with a series of rock characteristics (notably open pore fraction, pore size, and formation factor) from spatially registered low-resolution images. Multiple linear regression models correlating permeability to rock characteristics provide a means of predicting and mapping permeability variations in larger scale low-resolution images. Results show excellent agreement between permeability predictions made from 16 and 64 µm/voxel images of 25 mm diameter 80 mm tall core samples of heterogeneous sandstone for which 5 µm/voxel resolution is required to compute permeability directly. The statistical model used at the lowest resolution of 64 µm/voxel (similar to typical whole core image resolutions) includes open pore fraction and formation factor as predictor characteristics. Although binarized images at this resolution do not completely capture the pore system, we infer that these characteristics implicitly contain information about the critical fluid flow pathways. Three-dimensional permeability mapping in larger-scale lower resolution images by means of statistical predictions provides input data for subsequent permeability upscaling and the computation of effective permeability at the core scale.
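Once regression coefficients linking permeability to low-resolution characteristics have been fitted, mapping is a matter of evaluating the model on every subvolume. The coefficients, the log-linear form, and the inputs below are invented for illustration; the study's models used open pore fraction and formation factor as predictors.

```python
# Sketch: paint regression-based permeability predictions onto subvolumes.
# predict_log_perm, the coefficients, and the subvolume values are hypothetical.

import math

def predict_log_perm(open_pore_frac, formation_factor, coefs):
    """Toy log-linear model: log k = b0 + b1*phi_open + b2*ln(F)."""
    b0, b1, b2 = coefs
    return b0 + b1 * open_pore_frac + b2 * math.log(formation_factor)

coefs = (2.0, 10.0, -1.5)                 # hypothetical fitted coefficients
subvolumes = [(0.05, 40.0), (0.10, 20.0), (0.20, 8.0)]   # (phi_open, F) per voxel block
perm_map = [round(predict_log_perm(o, f, coefs), 3) for o, f in subvolumes]
print(perm_map)
```

The resulting 3-D map of predicted values is then the input to upscaling, i.e. computing an effective core-scale permeability from the per-subvolume predictions.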
Vickers, Andrew J
2005-01-01
Analysis of variance (ANOVA) is a statistical method that is widely used in the psychosomatic literature to analyze the results of randomized trials, yet ANOVA does not provide an estimate of the difference between groups, the key variable of interest in a randomized trial. Although the use of ANOVA is frequently justified on the grounds that a trial incorporates more than two groups, the hypothesis tested by ANOVA for these trials ("Are all groups equivalent?") is often scientifically uninteresting. Regression methods are not only applicable to trials with many groups, but can be designed to address specific questions arising from the study design. ANOVA is also frequently used for trials with repeated measures, but the consequent reporting of "group effects," "time effects," and "time-by-group interactions" is a distraction from statistics of clinical and scientific value. Given that ANOVA is easily misapplied in the analysis of randomized trials, alternative approaches such as regression methods should be considered in preference.
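The author's central point can be shown concretely: regressing the outcome on a 0/1 group indicator yields a slope that is exactly the difference in group means, the estimate a trial report actually needs. The toy trial data below are invented.

```python
# Sketch: with a binary group indicator, the least-squares slope equals
# mean(treated) - mean(control). Toy outcome values only.

def group_difference(control, treated):
    """Slope of outcome ~ group, with group coded 0 (control) / 1 (treated)."""
    y = control + treated
    x = [0] * len(control) + [1] * len(treated)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope

control = [12.0, 15.0, 11.0, 14.0]
treated = [18.0, 21.0, 17.0, 20.0]
print(group_difference(control, treated))   # -> 6.0, i.e. 19.0 - 13.0
```

A full analysis would also report a standard error and confidence interval for this slope; the point here is only that the regression coefficient is the between-group difference itself, which an ANOVA F-statistic never reports directly.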
Mean-field approach for a statistical mechanical model of proteins
NASA Astrophysics Data System (ADS)
Bruscolini, Pierpaolo; Cecconi, Fabio
2003-07-01
We study the thermodynamical properties of a topology-based model proposed by Galzitskaya and Finkelstein for the description of protein folding. We devise and test three different mean-field approaches for the model, that simplify the treatment without spoiling the description. The validity of the model and its mean-field approximations is checked by applying them to the β-hairpin fragment of the immunoglobulin-binding protein (GB1) and making a comparison with available experimental data and simulation results. Our results indicate that this model is a rather simple and reasonably good tool for interpreting folding experimental data, provided the parameters of the model are carefully chosen. The mean-field approaches substantially recover all the relevant exact results and represent reliable alternatives to the Monte Carlo simulations.
Stern, Adi; Doron-Faigenboim, Adi; Erez, Elana; Martz, Eric; Bacharach, Eran; Pupko, Tal
2007-07-01
Biologically significant sites in a protein may be identified by contrasting the rates of synonymous (K(s)) and non-synonymous (K(a)) substitutions. This enables the inference of site-specific positive Darwinian selection and purifying selection. We present here Selecton version 2.2 (http://selecton.bioinfo.tau.ac.il), a web server which automatically calculates the ratio between K(a) and K(s) (omega) at each site of the protein. This ratio is graphically displayed on each site using a color-coding scheme, indicating either positive selection, purifying selection or lack of selection. Selecton implements an assembly of different evolutionary models, which allow for statistical testing of the hypothesis that a protein has undergone positive selection. Specifically, the recently developed mechanistic-empirical model is introduced, which takes into account the physicochemical properties of amino acids. Advanced options were introduced to allow maximal fine tuning of the server to the user's specific needs, including calculation of statistical support of the omega values, an advanced graphic display of the protein's 3-dimensional structure, use of different genetic codes and inputting of a pre-built phylogenetic tree. Selecton version 2.2 is an effective, user-friendly and freely available web server which implements up-to-date methods for computing site-specific selection forces, and the visualization of these forces on the protein's sequence and structure.
NASA Astrophysics Data System (ADS)
Fang, N. Z.; Gao, S.
2015-12-01
Challenges in fully considering the complexity of spatially and temporally varied rainfall always exist in flood frequency analysis. Conventional approaches that simplify the complexity of spatiotemporal interactions generally undermine their impacts on flood risks. A previously developed stochastic storm generator called Dynamic Moving Storms (DMS) aims to address the highly dependent nature of the precipitation field: spatial variability, temporal variability, and movement of the storm. The authors utilize a multivariate statistical approach based on DMS to estimate the occurrence probability or frequency of extreme storm events. Fifteen years of radar rainfall data are used to generate a large number of synthetic storms as the basis for statistical assessment. Two parametric retrieval algorithms are developed to recognize rain cells and track storm motions, respectively. The resulting parameters are then used to establish probability density functions (PDFs), which are fitted to parametric distribution functions for further Monte Carlo simulations. Consequently, over 1,000,000 synthetic storms are generated based on twelve retrieved parameters for integrated risk assessment and ensemble forecasts. Furthermore, the PDFs of the parameters are used to calculate joint probabilities based on 2-dimensional Archimedean copula functions to determine the occurrence probabilities of extreme events. The approach is validated on the Upper Trinity River watershed and the generated results are compared with those from traditional rainfall frequency studies (i.e. Intensity-Duration-Frequency curves and Areal Reduction Factors).
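The Monte Carlo stage (sampling synthetic storms from fitted parameter distributions and counting extreme events) can be sketched as follows. The two toy distributions and the "extreme" thresholds stand in for the twelve DMS parameters and the copula-based joint probabilities of the study.

```python
# Sketch: estimate an occurrence probability by sampling synthetic storms.
# Distributions, parameter values and thresholds are invented.

import random

random.seed(42)   # reproducible draws

def synthetic_storm():
    """Draw one synthetic storm; both marginals are toy stand-ins."""
    intensity = random.lognormvariate(0.0, 0.5)   # hypothetical intensity
    duration = random.expovariate(1.0 / 3.0)      # hypothetical duration, mean 3 h
    return intensity, duration

n = 100_000
extreme = 0
for _ in range(n):
    intensity, duration = synthetic_storm()
    if intensity > 2.0 and duration > 6.0:        # toy definition of "extreme"
        extreme += 1
print(round(extreme / n, 4))
```

Sampling the marginals independently, as here, ignores inter-parameter dependence; that is precisely what the Archimedean copula supplies in the actual approach, by coupling the fitted marginal PDFs into a joint distribution.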
Pietrogrand, M C; Coll, P; Sternberg, R; Szopa, C; Navarro-Gonzalez, R; Vidal-Madjar, C; Dondi, F
2001-12-21
To study Titan, the largest moon of Saturn, laboratory simulation experiments have been performed to obtain analogues of Titan's aerosols (named tholins) using different energy sources. Tholins, which have been demonstrated to represent aerosols in Titan's haze layers, are a complex mixture resulting from the chemical evolution of several hydrocarbons and nitriles. Their chromatographic analysis yields complex chromatograms, which require the use of mathematical procedures to extract all the information they contain. Two different chemometric approaches (the Fourier analysis approach and the statistical model of peak overlapping) have been successfully applied to a pyrolysis-GC-MS chromatogram of a tholin sample. Fundamental information on the mixture's chemical composition (number of components, m) and on the separation system performance (separation efficiency, sigma) can be easily estimated: the excellent correspondence between the data calculated by the two independent procedures proves the reliability of the statistical approaches in characterizing a tholin chromatogram. Moreover, the plot of the autocorrelation function contains, in simplified form, all the information on the retention pattern: retention recursivities can easily be singled out and related to specific molecular structure variations. Therefore, the autocorrelation function (ACF) plot constitutes a simplified fingerprint of the pyrolysis products of tholins, which can be used as a powerful tool to characterize a tholin sample.
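The ACF fingerprint idea can be sketched directly: the autocorrelation of a digitized chromatogram peaks at the recurrence interval of the retention pattern. The signal below is synthetic, with peaks spaced every 8 samples; real chromatograms would carry noise and overlapping peaks.

```python
# Sketch: sample autocorrelation of a synthetic chromatogram; the lag with
# the largest ACF value recovers the retention recurrence interval.

def acf(signal, max_lag):
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal)
    out = []
    for lag in range(1, max_lag + 1):
        c = sum((signal[t] - mean) * (signal[t + lag] - mean)
                for t in range(n - lag))
        out.append(c / var)
    return out

chromatogram = [5.0 if t % 8 == 0 else 0.1 for t in range(80)]  # peaks every 8
values = acf(chromatogram, 16)
best = max(range(len(values)), key=lambda i: values[i]) + 1
print(best)   # -> 8, the recurrence interval
```

Recurring retention increments, such as a homologous series differing by one CH2 unit, show up as exactly this kind of periodic ACF structure.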
Prediction of free air space in initial composting mixtures by a statistical design approach.
Soares, Micaela A R; Quina, Margarida J; Quinta-Ferreira, Rosa
2013-10-15
Free air space (FAS) is a physical parameter that can play an important role in composting processes by maintaining favourable aerobic conditions. Aiming to predict the FAS of initial composting mixtures, specific material proportions ranging from 0 to 1 were tested for a case study comprising industrial potato peel, which is characterized by low air void volume and thus requires additional components for its composting. The characterization and prediction of FAS for initial mixtures involving potato peel, grass clippings and rice husks (set A) or sawdust (set B) was accomplished by means of an augmented simplex-centroid mixture design approach. The experimental data were fitted to second-order Scheffé polynomials. Synergistic or antagonistic effects of mixture proportions on the FAS response were identified from the response surface and trace plots. A good agreement was achieved between the model predictions and supplementary experimental data. In addition, theoretical and empirical approaches for estimating FAS available in the literature were compared with the predictions generated by the mixture design approach. This study demonstrated that the mixture design methodology can be a valuable tool to predict the initial FAS of composting mixtures, specifically in making adjustments to improve composting processes containing primarily potato peel.
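A second-order Scheffé polynomial for a three-component mixture has the form FAS = b1·x1 + b2·x2 + b3·x3 + b12·x1·x2 + b13·x1·x3 + b23·x2·x3 with x1 + x2 + x3 = 1 (no intercept or pure quadratic terms, because the mixture constraint absorbs them). The sketch below evaluates such a model with invented coefficients, not the fitted values from the study.

```python
# Sketch: evaluating a second-order Scheffé mixture polynomial.
# Coefficients are hypothetical; x are mixture proportions summing to 1.

def scheffe_fas(x, b_lin, b_int):
    x1, x2, x3 = x
    assert abs(x1 + x2 + x3 - 1.0) < 1e-9, "mixture proportions must sum to 1"
    b1, b2, b3 = b_lin
    b12, b13, b23 = b_int
    return (b1 * x1 + b2 * x2 + b3 * x3
            + b12 * x1 * x2 + b13 * x1 * x3 + b23 * x2 * x3)

b_lin = (30.0, 55.0, 60.0)   # toy pure-component FAS values (per cent)
b_int = (10.0, -8.0, 5.0)    # toy synergistic/antagonistic blending terms
print(scheffe_fas((1.0, 0.0, 0.0), b_lin, b_int))   # -> 30.0 (pure component 1)
print(scheffe_fas((1/3, 1/3, 1/3), b_lin, b_int))   # centroid blend
```

A pure component recovers its own linear coefficient, which is why Scheffé coefficients are read as component responses, and a positive (negative) interaction term indicates a synergistic (antagonistic) blending effect, exactly the effects the trace plots in the study reveal.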
A Bayesian Statistical Approach for Improving Scoring Functions for Protein-Ligand Interactions
NASA Astrophysics Data System (ADS)
Grinter, Sam Z.; Zou, Xiaoqin
2013-03-01
Even with large training sets, knowledge-based scoring functions face the inevitable problem of sparse data. In this work, we present a novel approach for handling the sparse data problem, which is based on estimating the inaccuracy caused by sparse count data in a potential of mean force (PMF). Our new scoring function, STScore, uses a consensus approach to combine a PMF with a simple force-field-based potential (FFP), where the relative weight given to the PMF and FFP is a function of their estimated inaccuracies. This weighting scheme implies that less weight will be given to the PMF for any pairs or distances that occur rarely in the training data, thus providing a natural way to deal with the sparse data problem. Simultaneously, by providing the FFP as a substitute, the method provides an improved approximation of the interactions between rare chemical groups, which tend to be excluded or reduced in influence by purely PMF-based approaches. Using several common test sets for protein-ligand interaction studies, we demonstrate that this sparse data method effectively combines the PMF and FFP, exceeding the performance of either potential alone, and is competitive with other commonly used sparse data methods.
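The consensus idea can be sketched as an inverse-inaccuracy blend: the weight on the PMF grows with its data support, so rarely observed pair distances lean on the FFP. The inaccuracy model below (shrinking with observation count) and the constant FFP inaccuracy are stand-ins for the paper's actual estimators.

```python
# Sketch: inaccuracy-weighted consensus of a knowledge-based potential (PMF)
# and a force-field potential (FFP). The inaccuracy estimates are toy models.

def consensus_score(pmf_value, ffp_value, n_observations):
    """Blend PMF and FFP; PMF weight grows with its training-data support."""
    pmf_inaccuracy = 1.0 / (1.0 + n_observations)   # hypothetical estimator
    ffp_inaccuracy = 0.5                            # assumed constant
    w_pmf = ffp_inaccuracy / (pmf_inaccuracy + ffp_inaccuracy)
    return w_pmf * pmf_value + (1.0 - w_pmf) * ffp_value

# Rarely observed pair: the score stays close to the FFP value (-1.0).
print(round(consensus_score(-3.0, -1.0, n_observations=0), 3))
# Well-sampled pair: the score approaches the PMF value (-3.0).
print(round(consensus_score(-3.0, -1.0, n_observations=999), 3))
```

The key design property is the smooth interpolation: no hard cutoff decides when counts are "enough," so the blend degrades gracefully as data become sparse.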
Yamazawa, Akira; Date, Yasuhiro; Ito, Keijiro; Kikuchi, Jun
2014-03-01
Microbial ecosystems are typified by diverse microbial interactions and competition. Consequently, the microbial networks and metabolic dynamics of bioprocesses catalyzed by these ecosystems are highly complex, and their visualization is regarded as essential to bioengineering technology and innovation. Here we describe a means of visualizing the variants in a microbial community and their metabolic profiles. The approach enables previously unidentified bacterial functions in the ecosystems to be elucidated. We investigated the anaerobic bioremediation of chlorinated ethene in a soil column experiment as a case study. Microbial community and dechlorination profiles in the ecosystem were evaluated by denaturing gradient gel electrophoresis (DGGE) fingerprinting and gas chromatography, respectively. Dechlorination profiles were obtained from changes in dechlorination by the microbial community, evaluated by data mining methods. Individual microbes were then associated with their dechlorination profiles by heterogeneous correlation analysis. Our correlation-based visualization approach enables deduction of the roles and functions of bacteria in the dechlorination of chlorinated ethenes. Because it estimates functions and relationships between unidentified microbes and metabolites in microbial ecosystems, this approach is proposed as a control-logic tool by which to understand complex microbial processes. PMID:24095212
Probability, Information and Statistical Physics
NASA Astrophysics Data System (ADS)
Kuzemsky, A. L.
2016-03-01
In this short survey review we discuss foundational issues of the probabilistic approach to information theory and statistical mechanics from a unified standpoint. Emphasis is on the interrelations between the theories. The basic aim is tutorial, i.e., to provide a basic introduction to the analysis and applications of probabilistic concepts in the description of various aspects of complexity and stochasticity. We consider probability as a foundational concept in statistical mechanics and review selected advances in the theoretical understanding of the interrelation of probability, information, and statistical description with regard to basic notions of the statistical mechanics of complex systems. The review also includes a synthesis of past and present research and a survey of methodology. The purpose of this terse overview is to discuss and partially describe those probabilistic methods and approaches that are used in statistical mechanics, with the aim of making these ideas easier to understand and to apply.
Anazaw, K; Ohmori, L H
2001-11-01
Many hydrochemical studies have attributed the chemical formation of shallow groundwater to water-rock interaction and to contamination by paleo-brine or human activities, whereas the initial formation of the precipitation source in the recharge region has not yet been established. The purpose of this research is to clarify the geochemical process of water formation from a water source unpolluted by seawater or human activity. Norikura volcano, located in the western part of central Japan, provided a suitable source for this purpose, and hence the chemical compositions of water samples from the summit and mountainside areas of Norikura volcano were determined. Most samples in the summit area showed very low electrical conductivity, lower than 12 microS/cm. On the basis of the chemical compositions, principal component analysis (PCA) and factor analysis (FA), two kinds of multivariate statistical techniques, were used to extract the geochemical factors affecting the hydrochemical process. As a result, three factors were extracted. The first factor showed high loadings on K+, Ca2+, SO4(2-) and SiO2, and was interpreted as the influence of chemical interaction between acidic precipitation and rocks. The second factor showed high loadings on Na+ and Cl-, and was assumed to reflect the influence of seawater salt. The third factor showed a loading on NO3-, and was interpreted as a biochemical effect of vegetation. The proportionate contributions of these factors to the evolution of water chemical composition were found to be 45%, 20%, and 10% for factors 1, 2 and 3, respectively. The same exploration at the mountainside of Norikura volcano revealed that the chemical variances of the non-geothermal water samples were highly influenced by water-rock interactions. Silicate dissolution showed a 45% contribution to all chemical variances, while the adsorption of Ca2+ and Mg2+ by precipitation or ion exchange showed 20
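The variance-partitioning step behind factor extraction can be illustrated on a toy two-variable case, where the eigenvalues of the covariance matrix have a closed form. This is a minimal sketch only; the study applies PCA/FA to many hydrochemical parameters at once, and the data below are hypothetical.

```python
import math

# Minimal two-variable PCA sketch (illustrative only): the eigenvalues of
# the 2x2 sample covariance matrix give the proportion of total variance
# carried by each principal component.

def pca_2d(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # closed-form eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + disc, tr / 2.0 - disc
    return l1 / (l1 + l2), l2 / (l1 + l2)  # variance fraction per component
```

Perfectly correlated variables load entirely on the first component; uncorrelated equal-variance variables split 50/50, which is the intuition behind reading factor contributions such as the 45%/20%/10% split above.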
Forecast of natural aquifer discharge using a data-driven, statistical approach.
Boggs, Kevin G; Van Kirk, Rob; Johnson, Gary S; Fairley, Jerry P
2014-01-01
In the Western United States, demand for water is often out of balance with limited water supplies. This has led to extensive water rights conflict and litigation. A tool that can reliably forecast natural aquifer discharge months ahead of peak water demand could help water practitioners and managers by providing advanced knowledge of potential water-right mitigation requirements. The timing and magnitude of natural aquifer discharge from the Eastern Snake Plain Aquifer (ESPA) in southern Idaho is accurately forecast 4 months ahead of the peak water demand, which occurs annually in July. An ARIMA time-series model with exogenous predictors (ARIMAX model) was used to develop the forecast. The ARIMAX model fit to a set of training data was assessed using Akaike's information criterion to select the optimal model that forecasts aquifer discharge, given the previous year's discharge and values of the predictor variables. Model performance was assessed by application of the model to a validation subset of data. The Nash-Sutcliffe efficiency for model predictions made on the validation set was 0.57. The predictor variables used in our forecast represent the major recharge and discharge components of the ESPA water budget, including variables that reflect overall water supply and important aspects of water administration and management. Coefficients of variation on the regression coefficients for streamflow and irrigation diversions were all much less than 0.5, indicating that these variables are strong predictors. The model with the highest AIC weight included streamflow, two irrigation diversion variables, and storage.
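The Akaike-criterion model screening described above can be sketched in miniature: fit candidate models, compute AIC from each residual sum of squares, and keep the candidate with the lowest value. This is not the authors' ARIMAX code; the data and comparison (intercept-only vs. one exogenous predictor) are hypothetical stand-ins for the paper's candidate set.

```python
import math

# Illustrative AIC-based model screening, in the spirit of the abstract's
# Akaike-criterion selection. The variables and data are hypothetical; a
# real ARIMAX fit would also include autoregressive and moving-average terms.

def aic(rss, n, k):
    """Akaike's information criterion from a residual sum of squares."""
    return n * math.log(rss / n) + 2 * k

def fit_ols(x, y):
    """Simple linear regression y = a + b*x; returns (a, b, rss)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    a = my - b * mx
    rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, rss
```

When the predictor explains the response well, the regression's AIC falls below the intercept-only model's despite the extra parameter, which is the trade-off AIC formalizes.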
NASA Astrophysics Data System (ADS)
Norton, Chase W.; Chu, Pao-Shin; Schroeder, Thomas A.
2011-09-01
A statistical model based on nonlinear artificial neural networks is used to downscale daily extreme precipitation events in Oahu, Hawaii, from general circulation model (GCM) outputs and projected into the future. From a suite of GCMs and their emission scenarios, two tests recommended by the Intergovernmental Panel on Climate Change are conducted and the ECHAM5 A2 is selected as the most appropriate one for downscaling precipitation extremes for Oahu. The skill of the neural network model is highest in drier, leeward regions where orographic uplifting has less influence on daily extreme precipitation. The trained model is used with the ECHAM5 forced by emissions from the A2 scenario to simulate future daily precipitation on Oahu. A BCa bootstrap resampling method is used to provide 95% confidence intervals of the storm frequency and intensity for all three data sets (actual observations, downscaled GCM output from the present-day climate, and downscaled GCM output for future climate). Results suggest a tendency for increased frequency of heavy rainfall events but a decrease in rainfall intensity during the next 30 years (2011-2040) for the southern shoreline of Oahu.
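The resampling idea behind the confidence intervals above can be sketched with a plain percentile bootstrap. The study uses the more refined BCa (bias-corrected and accelerated) variant; this simplified version, on made-up data, shows only the core mechanism of resampling with replacement and reading off quantiles.

```python
import random
import statistics

# Plain percentile-bootstrap confidence interval for a mean (the study's
# BCa variant additionally corrects for bias and skewness). Data are toy.

def bootstrap_ci(data, n_boot=2000, alpha=0.05, seed=42):
    rng = random.Random(seed)  # fixed seed for reproducibility
    means = sorted(
        statistics.mean(rng.choices(data, k=len(data))) for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```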
NASA Astrophysics Data System (ADS)
Platz, R.; Stapp, C.; Hanselka, H.
2011-08-01
Fatigue cracks in light-weight shell or panel structures may lead to major failures when used for sealing or load-carrying purposes. This paper describes investigations into the potential of piezoelectric actuator patches that are applied to the surface of an already cracked thin aluminum panel to actively reduce the propagation of fatigue cracks. With active reduction of fatigue crack propagation, uncertainties in the cracked structure's strength, which always remain present even when the structure is used under damage tolerance conditions, e.g. airplane fuselages, could be lowered. The main idea is to lower the cyclic stress intensity factor near the crack tip with actively induced mechanical compression forces using thin low voltage piezoelectric actuator patches applied to the panel's surface. With lowering of the cyclic stress intensity, the rate of crack propagation in an already cracked thin aluminum panel will be reduced significantly. First, this paper discusses the proper placement and alignment of thin piezoelectric actuator patches near the crack tip to induce the mechanical compression forces necessary for reduction of crack propagation by numerical simulations. Second, the potential for crack propagation reduction will be investigated statistically by an experimental sample test examining three cases: a cracked aluminum host structure (i) without, (ii) with but passive, and (iii) with activated piezoelectric actuator patches. It will be seen that activated piezoelectric actuator patches lead to a significant reduction in crack propagation.
A statistical approach to estimate the Lyapunov spectrum in disc brake squeal
NASA Astrophysics Data System (ADS)
Oberst, S.; Lai, J. C. S.
2015-01-01
The estimation of squeal propensity of a brake system from the prediction of unstable vibration modes using the linear complex eigenvalue analysis (CEA) in the frequency domain has its fair share of successes and failures. While the CEA is almost standard practice for the automotive industry, time domain methods and the estimation of Lyapunov spectra have not received much attention in brake squeal analyses. One reason is the challenge in estimating the true Lyapunov exponents and their discrimination against spurious ones in experimental data. A novel method based on the application of the Eckmann-Ruelle matrices is proposed here to estimate Lyapunov exponents by using noise in a statistical procedure. It is validated with respect to parameter variations and dimension estimates. By counting the number of non-overlapping confidence intervals for Lyapunov exponent distributions obtained by moving a window of increasing size over bootstrapped same-length estimates of an observation function, a dispersion measure's width is calculated and fed into a Bayesian beta-binomial model. Results obtained using this method for benchmark models of white and pink noise as well as the classical Hénon map indicate that true Lyapunov exponents can be isolated from spurious ones with high confidence. The method is then applied to accelerometer and microphone data obtained from brake squeal tests. Estimated Lyapunov exponents indicate that the pad's out-of-plane vibration behaves quasi-periodically on the brink to chaos while the microphone's squeal signal remains periodic.
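For readers unfamiliar with Lyapunov exponents, a textbook sanity check (not the paper's Eckmann-Ruelle noise procedure) is the logistic map at r = 4, whose largest Lyapunov exponent is known analytically to be ln 2: a positive value signals chaos, as probed in the squeal data above.

```python
import math

# Textbook sanity check: for the logistic map x -> r*x*(1 - x) at r = 4,
# the Lyapunov exponent equals ln 2, so the time average of log|f'(x)|
# along the orbit should converge to about 0.693. Not the paper's method.

def lyapunov_logistic(r=4.0, x0=0.2, n=100000, discard=1000):
    x, acc, count = x0, 0.0, 0
    for i in range(n):
        x = r * x * (1.0 - x)
        if i >= discard:  # skip the transient before averaging
            acc += math.log(abs(r * (1.0 - 2.0 * x)))
            count += 1
    return acc / count
```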
A hybrid finite element - statistical energy analysis approach to robust sound transmission modeling
NASA Astrophysics Data System (ADS)
Reynders, Edwin; Langley, Robin S.; Dijckmans, Arne; Vermeir, Gerrit
2014-09-01
When considering the sound transmission through a wall in between two rooms, in an important part of the audio frequency range, the local response of the rooms is highly sensitive to uncertainty in spatial variations in geometry, material properties and boundary conditions, which have a wave scattering effect, while the local response of the wall is rather insensitive to such uncertainty. For this mid-frequency range, a computationally efficient modeling strategy is adopted that accounts for this uncertainty. The partitioning wall is modeled deterministically, e.g. with finite elements. The rooms are modeled in a very efficient, nonparametric stochastic way, as in statistical energy analysis. All components are coupled by means of a rigorous power balance. This hybrid strategy is extended so that the mean and variance of the sound transmission loss can be computed as well as the transition frequency that loosely marks the boundary between low- and high-frequency behavior of a vibro-acoustic component. The method is first validated in a simulation study, and then applied for predicting the airborne sound insulation of a series of partition walls of increasing complexity: a thin plastic plate, a wall consisting of gypsum blocks, a thicker masonry wall and a double glazing. It is found that the uncertainty caused by random scattering is important except at very high frequencies, where the modal overlap of the rooms is very high. The results are compared with laboratory measurements, and both are found to agree within the prediction uncertainty in the considered frequency range.
Turi, Christina E; Finley, Jamie; Shipley, Paul R; Murch, Susan J; Brown, Paula N
2015-04-24
Metabolomics is the qualitative and quantitative analysis of all of the small molecules in a biological sample at a specific time and under a specific influence. Technologies for metabolomics analysis have developed rapidly as new analytical tools for chemical separations, mass spectrometry, and NMR spectroscopy have emerged. Plants have one of the largest metabolomes, and it is estimated that the average plant leaf can contain upward of 30 000 phytochemicals. In the past decade, over 1200 papers on plant metabolomics have been published. A standard metabolomics data set contains vast amounts of information and can either investigate or generate hypotheses. The key factors in using plant metabolomics data most effectively are the experimental design, authentic standard availability, extract standardization, and statistical analysis. Using cranberry (Vaccinium macrocarpon) as a model system, this review will discuss and demonstrate strategies and tools for analysis and interpretation of metabolomics data sets including eliminating false discoveries and determining significance, metabolite clustering, and logical algorithms for discovery of new metabolites and pathways. Together these metabolomics tools represent an entirely new pipeline for phytochemical discovery. PMID:25751407
Multivariate statistical approach for the assessment of groundwater quality in Ujjain City, India.
Vishwakarma, Vikas; Thakur, Lokendra Singh
2012-10-01
Groundwater quality assessment is an essential study which plays an important role in the rational development and utilization of groundwater. Groundwater quality greatly influences the health of local people. The variations of water quality are essentially the combination of both anthropogenic and natural contributions. In order to understand the underlying physical and chemical processes, this study analyzes 8 chemical and physical-chemical water quality parameters, viz. pH, turbidity, electrical conductivity, total dissolved solids, total alkalinity, total hardness, chloride and fluoride, recorded at 54 sampling stations during the summer season of 2011, using multivariate statistical techniques. Hierarchical clustering analysis (CA) is first applied to distinguish groundwater quality patterns among the stations, followed by the use of principal component analysis (PCA) and factor analysis (FA) to extract and recognize the major underlying factors contributing to the variations among the water quality measures. The first three components were chosen for interpretation of the data, and account for 72.502% of the total variance in the data set. The maximum number of variables, i.e. turbidity, EC, TDS and chloride, were characterized by the first component, while the second and third were characterized by total alkalinity, total hardness, fluoride and pH, respectively. This shows that the hydrochemical constituents of the groundwater are mainly controlled by EC, TDS, and fluoride. The findings of the cluster analysis are presented as dendrograms of the sampling stations (cases) as well as of the hydrochemical variables, which produced four major groupings, suggesting that groundwater monitoring can be consolidated.
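The station-grouping step behind the dendrograms can be illustrated with a minimal single-linkage agglomerative routine. This is a toy one-dimensional sketch only; the study clusters stations on eight standardized water-quality parameters, and the values below are hypothetical.

```python
# Minimal single-linkage agglomerative clustering on 1-D toy values
# (illustrative only): repeatedly merge the two clusters whose closest
# members are nearest, until the requested number of clusters remains.

def single_linkage(points, n_clusters):
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(abs(a - b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters
```

Recording the merge distances in order would yield the dendrogram heights that the abstract's four major groupings are read from.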
A statistical approach to optimization of alumina etching in a high density plasma
Li Xiao; Gupta, Subhadra; Highsmith, Alton; Paranjpe, Ajit; Rook, Katrina
2008-08-01
Inductively coupled plasma (ICP) reactive ion etching of Al2O3 with fluorine-based gas chemistry in a high density plasma reactor was carried out in an initial investigation aimed at data storage applications. A statistical design of experiments was implemented to optimize etch performance with respect to process variables such as ICP power, platen power, direct current (dc) bias, and pressure. Both soft photoresist masks and hard metal masks were investigated in terms of etch selectivity and surface properties. The inverse dependence of dc bias on the ratio of ICP to platen power was elucidated. Etch mechanisms in terms of physical and ion-enhanced chemical etching were discussed. The F-based chemistry greatly enhances the etch rate of alumina compared to purely physical processes such as ion milling. Etch rates as high as 150 nm/min were achieved using this process. A practical process window was developed for high etch rates, with reasonable selectivity to hard masks, with the desired profile, and with low substrate bias for minimal damage.
NASA Astrophysics Data System (ADS)
Um, Myoung-Jin; Kim, Hanbeen; Heo, Jun-Haeng
2016-08-01
A general circulation model (GCM) can be applied to project future climate factors, such as precipitation and atmospheric temperature, to study hydrological and environmental climate change. Although many improvements in GCMs have been proposed recently, projected climate data must still be corrected for biases before the model is applied to practical problems. In this study, a new hybrid process was proposed, and its ability to perform bias correction for the prediction of annual precipitation and annual daily maxima was tested. The hybrid process in this study was based on quantile mapping with the gamma and generalized extreme value (GEV) distributions and a spline technique to correct the bias of projected daily precipitation. The observed and projected daily precipitation values from the selected stations were analyzed using three bias correction methods, namely, linear scaling, quantile mapping, and hybrid methods. The performances of these methods were analyzed to find the optimal method for prediction of annual precipitation and annual daily maxima. The linear scaling method yielded the best results for estimating the annual average precipitation, while the hybrid method was optimal for predicting the variation in annual precipitation. The hybrid method described the statistical characteristics of the annual maximum series (AMS) similarly to the observed data. In addition, this method demonstrated the lowest root mean squared error (RMSE) and the highest coefficient of determination (R2) for predicting the quantiles of the AMS for the extreme value analysis of precipitation.
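The core of quantile mapping is to replace a modeled value with the observed value at the same quantile. The study maps through fitted gamma/GEV distributions with a spline; the simplified sketch below maps through empirical quantiles of pre-sorted toy samples instead, which conveys the mechanism without the distribution fitting.

```python
import bisect

# Empirical quantile-mapping sketch (the paper's hybrid method instead fits
# gamma/GEV distributions and a spline): a modeled value is assigned the
# observed value at the same empirical quantile.

def quantile_map(value, model_sorted, obs_sorted):
    """Map a modeled value onto the observed distribution at the same quantile."""
    n = len(model_sorted)
    q = min(bisect.bisect_left(model_sorted, value) / (n - 1), 1.0)
    pos = q * (len(obs_sorted) - 1)
    i, frac = int(pos), pos - int(pos)
    if i >= len(obs_sorted) - 1:
        return obs_sorted[-1]
    # linear interpolation between adjacent observed quantiles
    return obs_sorted[i] * (1.0 - frac) + obs_sorted[i + 1] * frac
```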
Solar Wind Turbulence and Intermittency at 0.72 AU - Statistical Approach
NASA Astrophysics Data System (ADS)
Teodorescu, E.; Echim, M.; Munteanu, C.; Zhang, T.; Barabash, S. V.; Budnik, E.; Fedorov, A.
2014-12-01
Through this analysis we characterize the turbulent magnetic fluctuations measured by the Venus Express magnetometer (VEX-MAG) in the solar wind during the last solar cycle minimum, at a distance of 0.72 AU from the Sun. We analyze data recorded between 2007 and 2009 with time resolutions of 1 Hz and 32 Hz. In correlation with plasma data from the ASPERA instrument (Analyser of Space Plasma and Energetic Atoms), we identify 550 time intervals, at 1 Hz resolution, when VEX is in the solar wind and which satisfy selection criteria defined on the basis of the amount and continuity of the data. We identify 118 time intervals that correspond to fast solar wind. We compute the power spectral densities (PSDs) for Bx, By, Bz, B, B2, B∥ and B⊥. We perform a statistical analysis of the spectral indices computed for each of the PSDs and evidence a dependence of the spectral index on the solar wind velocity and a slight difference in power content between the parallel and perpendicular components of the magnetic field. We also estimate the scale invariance of the fluctuations by computing the probability distribution functions (PDFs) for the Bx, By, Bz, B and B2 time series and discuss the implications for intermittent turbulence. Research supported by the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 313038/STORM, and by a grant of the Romanian Ministry of National Education, CNCS - UEFISCDI, project number PN-II-ID-PCE-2012-4-0418.
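A spectral index of the kind analyzed above is simply the least-squares slope of log(PSD) against log(frequency). The sketch below uses a synthetic power-law spectrum (hypothetical numbers, not VEX-MAG data) to show the fit recovering a Kolmogorov-like -5/3 index exactly.

```python
import math

# Sketch of spectral-index estimation: least-squares slope of log(PSD)
# versus log(frequency). The synthetic spectrum here is hypothetical.

def spectral_index(freqs, psd):
    lx = [math.log(f) for f in freqs]
    ly = [math.log(p) for p in psd]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    return sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / sum(
        (x - mx) ** 2 for x in lx
    )
```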
Mouser, Paula J; Rizzo, Donna M; Röling, Wilfred F M; Van Breukelen, Boris M
2005-10-01
Managers of landfill sites are faced with enormous challenges when attempting to detect and delineate leachate plumes with a limited number of monitoring wells, assess spatial and temporal trends for hundreds of contaminants, and design long-term monitoring (LTM) strategies. Subsurface microbial ecology is a unique source of data that has been historically underutilized in LTM groundwater designs. This paper provides a methodology for utilizing qualitative and quantitative information (specifically, multiple water quality measurements and genome-based data) from a landfill leachate contaminated aquifer in Banisveld, The Netherlands, to improve the estimation of parameters of concern. We used a principal component analysis (PCA) to reduce nonindependent hydrochemistry data, Bacteria and Archaea community profiles from 16S rDNA denaturing gradient gel electrophoresis (DGGE), into six statistically independent variables, representing the majority of the original dataset variances. The PCA scores grouped samples based on the degree or class of contamination and were similar over considerable horizontal distances. Incorporation of the principal component scores with traditional subsurface information using cokriging improved the understanding of the contaminated area by reducing error variances and increasing detection efficiency. Combining these multiple types of data (e.g., genome-based information, hydrochemistry, borings) may be extremely useful at landfill or other LTM sites for designing cost-effective strategies to detect and monitor contaminants.
Spatio-statistical analysis of temperature fluctuation using Mann-Kendall and Sen's slope approach
NASA Astrophysics Data System (ADS)
Atta-ur-Rahman; Dawood, Muhammad
2016-04-01
This article deals with the spatio-statistical analysis of temperature trends using the Mann-Kendall trend model (MKTM) and Sen's slope estimator (SSE) in the eastern Hindu Kush, north Pakistan. Climate change has a strong relationship with the trend in temperature and the resultant changes in rainfall pattern and river discharge. In the present study, temperature is selected as the meteorological parameter for trend analysis and slope magnitude. In order to achieve the objectives of the study, temperature data were collected from the Pakistan Meteorological Department for all seven meteorological stations that fall in the eastern Hindu Kush region. The temperature data were analysed and simulated using the MKTM, whereas, for the determination of temperature trend and slope magnitude, the SSE method was applied to exhibit the type of fluctuation. The analysis reveals that a positive (increasing) trend in mean maximum temperature has been detected for the Chitral, Dir and Saidu Sharif met stations, whereas a negative (decreasing) trend in mean minimum temperature has been recorded for the Saidu Sharif and Timergara met stations. The analysis further reveals that the observed variation in temperature trend and slope magnitude is attributed to the climate change phenomenon in the region.
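The MKTM/SSE pair named above can be written compactly: the Mann-Kendall test counts concordant minus discordant pairs and standardizes the result, and Sen's slope is the median of all pairwise slopes. This minimal sketch omits the tie correction and uses a made-up series; |Z| > 1.96 indicates a significant trend at the 5% level.

```python
import math
import statistics

# Minimal Mann-Kendall trend test and Sen's slope estimator (no tie
# correction), on a hypothetical annual series.

def mann_kendall_z(y):
    n = len(y)
    s = sum(
        (y[j] > y[i]) - (y[j] < y[i])
        for i in range(n) for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s == 0:
        return 0.0
    return (s - 1) / math.sqrt(var_s) if s > 0 else (s + 1) / math.sqrt(var_s)

def sens_slope(y):
    """Median of all pairwise slopes; robust estimate of trend magnitude."""
    return statistics.median(
        (y[j] - y[i]) / (j - i)
        for i in range(len(y)) for j in range(i + 1, len(y))
    )
```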
NASA Astrophysics Data System (ADS)
Sherwood, S. C.; Fuchs, D.; Bony, S.; Jean-Louis, D.
2014-12-01
Earth's climate sensitivity has been the subject of heated debate for decades, and recently spurred renewed interest after the latest IPCC assessment report suggested a downward adjustment of the most likely range of climate sensitivities. Here, we present an observation-based study based on the time period 1964 to 2010, which is unique in that it does not rely on global climate models (GCMs) in any way. The study uses surface observations of temperature and incoming solar radiation from approximately 1300 surface sites, along with observations of the equivalent CO2 concentration (CO2,eq) in the atmosphere, to produce a new best estimate for the transient climate sensitivity of 1.9K (95% confidence interval 1.2K - 2.7K). This is higher than other recent observation-based estimates, and is better aligned with the estimate of 1.8K and range (1.1K - 2.5K) derived from the latest generation of GCMs. The new estimate is produced by incorporating the observations in an energy balance framework, and by applying statistical methods that are standard in the field of Econometrics, but less common in climate studies. The study further suggests that about a third of the continental warming due to increasing CO2,eq was masked by aerosol cooling during the time period studied.
A statistical approach to determining energetic outer radiation belt electron precipitation fluxes
NASA Astrophysics Data System (ADS)
Simon Wedlund, Mea; Clilverd, Mark A.; Rodger, Craig J.; Cresswell-Moorcock, Kathy; Cobbett, Neil; Breen, Paul; Danskin, Donald; Spanswick, Emma; Rodriguez, Juan V.
2014-05-01
Subionospheric radio wave data from an Antarctic-Arctic Radiation-Belt (Dynamic) Deposition VLF Atmospheric Research Konsortia (AARDDVARK) receiver located in Churchill, Canada, is analyzed to determine the characteristics of electron precipitation into the atmosphere over the range 3 < L < 7. The study advances previous work by combining signals from two U.S. transmitters from 20 July to 20 August 2010, allowing error estimates of derived electron precipitation fluxes to be calculated, including the application of time-varying electron energy spectral gradients. Electron precipitation observations from the NOAA POES satellites and a ground-based riometer provide intercomparison and context for the AARDDVARK measurements. AARDDVARK radiowave propagation data showed responses suggesting energetic electron precipitation from the outer radiation belt starting 27 July 2010 and lasting ~20 days. The uncertainty in >30 keV precipitation flux determined by the AARDDVARK technique was found to be ±10%. Peak >30 keV precipitation fluxes of AARDDVARK-derived precipitation flux during the main and recovery phase of the largest geomagnetic storm, which started on 4 August 2010, were >105 el cm-2 s-1 sr-1. The largest fluxes observed by AARDDVARK occurred on the dayside and were delayed by several days from the start of the geomagnetic disturbance. During the main phase of the disturbances, nightside fluxes were dominant. Significant differences in flux estimates between POES, AARDDVARK, and the riometer were found after the main phase of the largest disturbance, with evidence provided to suggest that >700 keV electron precipitation was occurring. Currently the presence of such relativistic electron precipitation introduces some uncertainty in the analysis of AARDDVARK data, given the assumption of a power law electron precipitation spectrum.
Identification of chilling and heat requirements of cherry trees—a statistical approach
NASA Astrophysics Data System (ADS)
Luedeling, Eike; Kunz, Achim; Blanke, Michael M.
2013-09-01
Most trees from temperate climates require the accumulation of winter chill and subsequent heat during their dormant phase to resume growth and initiate flowering in the following spring. Global warming could reduce chill and hence hamper the cultivation of high-chill species such as cherries. Yet determining chilling and heat requirements requires large-scale controlled-forcing experiments, and estimates are thus often unavailable. Where long-term phenology datasets exist, partial least squares (PLS) regression can be used as an alternative, to determine climatic requirements statistically. Bloom dates of cherry cv. 'Schneiders späte Knorpelkirsche' trees in Klein-Altendorf, Germany, from 24 growing seasons were correlated with 11-day running means of daily mean temperature. Based on the output of the PLS regression, five candidate chilling periods ranging in length from 17 to 102 days, and one forcing phase of 66 days were delineated. Among three common chill models used to quantify chill, the Dynamic Model showed the lowest variation in chill, indicating that it may be more accurate than the Utah and Chilling Hours Models. Based on the longest candidate chilling phase with the earliest starting date, cv. 'Schneiders späte Knorpelkirsche' cherries at Bonn exhibited a chilling requirement of 68.6 ± 5.7 chill portions (or 1,375 ± 178 chilling hours or 1,410 ± 238 Utah chill units) and a heat requirement of 3,473 ± 1,236 growing degree hours. Closer investigation of the distinct chilling phases detected by PLS regression could contribute to our understanding of dormancy processes and thus help fruit and nut growers identify suitable tree cultivars for a future in which static climatic conditions can no longer be assumed. All procedures used in this study were bundled in an R package ('chillR') and are provided as Supplementary materials. The procedure was also applied to leaf emergence dates of walnut (cv. 'Payne') at Davis, California.
NASA Astrophysics Data System (ADS)
Peduzzi, P.
2010-04-01
The growing concern over the loss of services once provided by natural ecosystems is receiving increasing attention. However, the accelerating rate of natural resource destruction calls for rapid and global action. With often very limited budgets, environmental agencies and NGOs need cost-efficient ways to quickly convince decision-makers that sound management of natural resources can help to protect human lives and welfare. The methodology described in this paper is based on geospatial and statistical analysis, involving simple Geographical Information System (GIS) and remote sensing algorithms, and uses free or very low-cost data. It aims to scientifically assess the potential role of vegetation in mitigating landslides triggered by earthquakes, normalising for other factors such as slope and distance from the active fault. The methodology was applied to the 2005 North Pakistan/India earthquake, which generated a large number of victims and hundreds of landslides. The study shows that, although slope and proximity to the active fault are the main susceptibility factors for landslides triggered by earthquakes in this area, the results clearly revealed that areas covered by denser vegetation suffered fewer and smaller landslides than areas with thinner (or no) vegetation cover. Short distances from roads/trails and rivers also proved to be pertinent factors increasing landslide susceptibility. This project is a component of a wider initiative involving the Global Resource Information Database Europe from the United Nations Environment Programme, the International Union for Conservation of Nature, the Institute of Geomatics and Risk Analysis from the University of Lausanne and the "institut universitaire d'études du développement" from the University of Geneva.
Statistical approach to the analysis of olive long-term pollen season trends in southern Spain.
García-Mozo, H; Yaezel, L; Oteros, J; Galán, C
2014-03-01
Analysis of long-term airborne pollen counts makes it possible not only to chart pollen-season trends but also to track changing patterns in flowering phenology. Changes in higher-plant response over a long interval are considered among the most valuable bioindicators of climate change impact. Phenological-trend models can also provide information regarding crop production and pollen-allergen emission. The value of this information makes the choice of statistical method for time-series analysis essential. We analysed trends and variations in the olive flowering season over a 30-year period (1982-2011) in southern Europe (Córdoba, Spain), focussing on: annual Pollen Index (PI), Pollen Season Start (PSS), Peak Date (PD), Pollen Season End (PSE), and Pollen Season Duration (PSD). In addition to traditional linear regression analysis, a Seasonal-Trend decomposition procedure based on Loess (STL) and an ARIMA model were applied. Linear regression results indicated a trend toward delayed PSE and earlier PSS and PD, probably influenced by the rise in temperature; these changes are producing longer flowering periods in the study area. The STL technique provided a clearer picture of phenological behaviour: decomposition of the pollination dynamics enabled the trend toward an alternate bearing cycle to be distinguished from the influence of other stochastic fluctuations, and results pointed to a rising trend in pollen production. With a view toward forecasting future phenological trends, ARIMA models were constructed to predict PSD, PSS and PI until 2016. Projections displayed a better goodness of fit than those derived from linear regression. Findings suggest that the olive reproductive cycle has changed considerably over the last 30 years due to climate change. Further conclusions are that STL improves the effectiveness of traditional linear regression in trend analysis, and ARIMA models can provide reliable trend projections for future years taking into
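In Python, statsmodels offers STL and ARIMA directly; the self-contained numpy sketch below uses a classical centred moving average plus a year-parity mean as a lightweight stand-in for STL, on synthetic data loosely shaped like a rising Pollen Index with a biennial alternate-bearing cycle (all numbers invented, not the Córdoba record):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1982, 2012)
n = years.size
# Synthetic Pollen Index: rising trend + biennial alternate-bearing cycle + noise.
pi_series = 3000 + 40 * np.arange(n) + 800 * (-1.0) ** np.arange(n) + rng.normal(0, 200, n)

# Trend: a centred moving average whose weights exactly cancel a 2-year cycle.
trend = np.convolve(pi_series, [0.25, 0.5, 0.25], mode="valid")  # aligns with years[1:-1]
detrended = pi_series[1:-1] - trend
# Alternate-bearing component: mean of the detrended series by year parity.
amp_even = detrended[years[1:-1] % 2 == 0].mean()
amp_odd = detrended[years[1:-1] % 2 == 1].mean()

# Linear trend (PI change per year) estimated on the cycle-free trend component.
slope, intercept = np.polyfit(years[1:-1], trend, 1)

# Naive projection to 2016: extrapolated trend plus the biennial component.
future = np.arange(2012, 2017)
forecast = intercept + slope * future + np.where(future % 2 == 0, amp_even, amp_odd)
```

Separating the bearing cycle before fitting the trend is precisely what makes the decomposition approach cleaner than running a regression on the raw series, where the ±800 cycle inflates the residual variance.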
Garanin, S. F.; Kravets, E. M.; Mamyshev, V. I.; Tokarev, V. A.
2009-08-15
Radiation spectra from a plasma with multicharged ions, z >> N >> 1 (where z is the charge of an ion and N is the number of electrons in the ion), under coronal equilibrium conditions are considered in the quasiclassical approximation. In this case, the bremsstrahlung and recombination radiation can be described by simple quasiclassical formulas. The statistical model of an atom is used to study the high-frequency component of the line radiation spectra from ions ({h_bar}{omega} > I, where I is the ionization energy) that is produced in collisions of free plasma electrons with the electrons at deep levels of an ion and during radiative filling of the resulting hole by electrons from higher levels (X-ray terms, characteristic radiation). The intensity of this high-frequency spectral component of the characteristic radiation coincides in order of magnitude with the bremsstrahlung and recombination radiation intensities. We also consider one channel of collisions of free electrons with a multicharged ion that results in the excitation of the ion and its subsequent radiative relaxation, which contributes to the low-frequency component of the line spectrum ({h_bar}{omega} < I). The total radiation intensity of this channel correlates fairly well with the results of calculating the radiation intensity from the multilevel coronal model. An analysis of the plasma behavior in the MAGO-IX experiment by two-dimensional MHD numerical simulations, together with a description of the experimental data from a DANTE spectrometer by the spectra obtained in this study, shows that these experimental results cannot be explained if the D-T plasma is assumed to remain pure in the course of the experiment. The agreement can be improved, however, by assuming that the plasma is contaminated with impurities of copper and light elements from the wall.
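For orientation, the quasiclassical bremsstrahlung spectrum referred to here has the standard Kramers form for a Maxwellian plasma (a textbook result, not the authors' derivation):

```latex
\frac{dP}{dV\,d\omega} \;\propto\; Z^{2}\, n_e\, n_i\, T_e^{-1/2}\,
\exp\!\left(-\frac{\hbar\omega}{T_e}\right)
```

i.e. the spectrum is nearly flat for \(\hbar\omega \ll T_e\) and exponentially cut off above the electron temperature; recombination radiation adds edges at the ionization thresholds, and the characteristic (line) components discussed in the abstract sit on top of this continuum.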
Febvre, G.
1994-10-01
The problem of lidar equation inversion lies in the fact that it requires a lidar calibration or else a reference value from the studied medium. This paper presents an approach to calibrating the lidar by calculating the constant Ak (lidar constant A multiplied by the ratio of backscatter coefficient to extinction coefficient, k). The approach is based on statistical analysis of in situ measurements, which demonstrates that the extinction coefficient has a typical probability distribution in cirrus clouds. A property of this distribution, related to the attenuation of the laser beam in the cloud, is used as a constraint to calculate the value of Ak. The validity of the method is discussed and the results are compared with two other inversion methods.
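The paper's calibration uses the statistical distribution of extinction as the constraint; the numpy sketch below illustrates only the underlying identity such a constraint exploits, namely that the integrated range-corrected signal ties Ak to the cloud's two-way transmittance (synthetic profile, all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic cirrus extinction profile with lognormally distributed values,
# the kind of "typical distribution" the method relies on.
r = np.linspace(0.0, 1000.0, 2001)                        # range in cloud (m)
dr = r[1] - r[0]
sigma = rng.lognormal(mean=-7.0, sigma=0.5, size=r.size)  # extinction (1/m)

AK_TRUE = 3.0e4                                # lidar constant A times ratio k
tau = (np.cumsum(sigma) - sigma / 2) * dr      # optical depth at bin centres
signal = AK_TRUE * sigma * np.exp(-2 * tau)    # range-corrected backscatter

# Integrated signal = (Ak / 2) * (1 - T^2): a constraint on the cloud's
# attenuation therefore pins down Ak without an external calibration target.
gamma = signal.sum() * dr
T2 = np.exp(-2 * np.sum(sigma) * dr)           # two-way cloud transmittance
ak_est = 2 * gamma / (1 - T2)
```

In practice T^2 is not observed directly, which is where the assumed probability distribution of extinction enters as the statistical constraint.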
NASA Astrophysics Data System (ADS)
Del Gado, E.; Ioannidou, K.; Masoero, E.; Baronnet, A.; Pellenq, R. J.-M.; Ulm, F.-J.; Yip, S.
2014-10-01
Calcium-silicate hydrate (C-S-H) is the main binding agent in cement and concrete. It forms at the beginning of cement hydration, progressively densifies as cement hardens, and is ultimately responsible for concrete performance. This hydration product is a cohesive nano-scale gel whose structure and mechanics are still poorly understood, in spite of its practical importance. Here we review some of the open questions surrounding this fascinating material, together with a recently developed statistical physics approach that allows us to investigate gel formation under the out-of-equilibrium conditions typical of cement hydration, and the role of the nano-scale structure in C-S-H mechanics upon hardening. Our approach unveils how some distinctive features of the kinetics of cement hydration can be related to changes in the morphology of the gels, and elucidates the role of nano-scale mechanical heterogeneities in the hardened C-S-H.
Hong, Xin; Zhang, Xiaoxiao
2008-12-01
The AcrySof ReSTOR intraocular lens (IOL) is a multifocal lens with state-of-the-art apodized diffractive technology, indicated for visual correction of aphakia secondary to removal of cataractous lenses in adult patients with or without presbyopia who desire near, intermediate, and distance vision with increased spectacle independence. The multifocal design results in some reduction of optical contrast, which may be mitigated by reducing spherical aberration. A novel patent-pending approach was undertaken to investigate the optical performance of aspheric lens designs. Simulated eyes drawn from normal human population distributions were corrected with different lens designs in a Monte Carlo simulation that allowed for variability in multiple surgical parameters (e.g. positioning error, biometric variation). Monte Carlo optimized results indicated that a lens spherical aberration of -0.10 µm provided optimal distance image quality.
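The logic of such a Monte Carlo optimization can be sketched with a toy merit function: a strongly aspheric lens cancels more corneal spherical aberration (SA) but induces more coma when decentred, so the population-average optimum is a partial correction. Every number below (population statistics, coupling constant, candidate grid) is invented for illustration, not the patented model:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical population of corneal SA values and random lens decentration
# per simulated surgery.
n = 20000
cornea_sa = rng.normal(0.27, 0.09, n)       # micrometres
decenter = rng.normal(0.0, 0.3, n)          # millimetres

def mean_aberration(lens_sa):
    """Toy merit function: residual SA plus coma induced when an aspheric
    lens is decentred. Lower is better."""
    residual = cornea_sa + lens_sa
    induced_coma = 4.3 * lens_sa * decenter  # hypothetical coupling constant
    return np.mean(residual ** 2 + induced_coma ** 2)

# Monte Carlo evaluation of candidate lens SA values over the population.
candidates = np.arange(-0.30, 0.01, 0.02)
best_sa = candidates[np.argmin([mean_aberration(x) for x in candidates])]
```

With these invented parameters the optimum lands at a partial correction between full cancellation (-0.27 µm) and zero, qualitatively echoing why a compromise value such as the reported -0.10 µm can beat both extremes.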
Statistical dynamics of classical systems: A self-consistent field approach
Grzetic, Douglas J.; Wickham, Robert A.; Shi, An-Chang
2014-06-28
We develop a self-consistent field theory for particle dynamics by extremizing the functional integral representation of a microscopic Langevin equation with respect to the collective fields. Although our approach is general, here we formulate it in the context of polymer dynamics to highlight satisfying formal analogies with equilibrium self-consistent field theory. An exact treatment of the dynamics of a single chain in a mean force field emerges naturally via a functional Smoluchowski equation, while the time-dependent monomer density and mean force field are determined self-consistently. As a simple initial demonstration of the theory, leaving an application to polymer dynamics for future work, we examine the dynamics of trapped interacting Brownian particles. For binary particle mixtures, we observe the kinetics of phase separation.
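As a concrete toy version of the paper's closing example, the sketch below integrates overdamped Langevin dynamics for a few Brownian particles in a harmonic trap with a soft Gaussian pair repulsion (Euler-Maruyama scheme; all parameters arbitrary). This is a plain particle-based simulation, not the authors' self-consistent field theory, but it is the microscopic dynamics their field equations coarse-grain:

```python
import numpy as np

rng = np.random.default_rng(3)
n, dim = 16, 2
dt, steps = 1e-3, 5000
D = 1.0                      # diffusion constant (mobility and k_B T set to 1)
k_trap = 1.0                 # harmonic trap stiffness
eps, width = 1.0, 0.5        # strength and range of a soft Gaussian repulsion

pos = rng.normal(0.0, 1.0, (n, dim))
for _ in range(steps):
    diff = pos[:, None, :] - pos[None, :, :]
    r2 = (diff ** 2).sum(axis=-1)
    np.fill_diagonal(r2, np.inf)             # exclude self-interaction
    # Pairwise repulsive force from U = eps * exp(-r^2 / (2 width^2))
    w = (eps / width ** 2) * np.exp(-r2 / (2 * width ** 2))
    force = -k_trap * pos + (w[..., None] * diff).sum(axis=1)
    # Euler-Maruyama step of the overdamped Langevin equation
    pos = pos + force * dt + np.sqrt(2 * D * dt) * rng.normal(size=pos.shape)

mean_sq_radius = float((pos ** 2).sum(axis=1).mean())
```

Without repulsion the equilibrium mean squared radius would be 2 k_B T / k_trap = 2 in these units; the repulsion swells the trapped cloud somewhat, which is the kind of density-field effect the self-consistent theory captures.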
Hu, Yang; Zhang, Ying; Ren, Jun; Wang, Yadong; Wang, Zhenzhen; Zhang, Jun
2016-01-01
The overall goal is to establish a reliable human protein-protein interaction network and to develop computational tools to characterize a protein-protein interaction (PPI) network and the role of individual proteins in the context of the network topology and their expression status. A novel feature of our approach is that we assign a confidence measure to each derived interacting pair and account for that confidence in our network analysis. We integrated experimental data to infer the human PPI network. Our model treats the true interacting status (yes versus no) for any given pair of human proteins as a latent variable whose value is not observed. The experimental data are the manifestation of interacting status, providing evidence as to the likelihood of the interaction. The confidence of an interaction depends on the strength and consistency of the evidence. PMID:27648447
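The latent-variable idea can be illustrated with a naive-Bayes combination of evidence sources: the unobserved interaction status generates hits in each assay, and the posterior given the observed hits serves as the pair's confidence. All sensitivities, false-positive rates, source names, and the prior below are invented for illustration, not taken from the paper:

```python
import math

# Hypothetical evidence sources: P(hit | interact), P(hit | no interact).
sources = {
    "yeast_two_hybrid": (0.60, 0.05),
    "co_ip":            (0.50, 0.02),
    "coexpression":     (0.70, 0.30),
}
PRIOR = 0.05   # assumed prior probability that a random pair interacts

def confidence(hits):
    """Posterior P(interact | evidence) under a naive-Bayes combination."""
    log_odds = math.log(PRIOR / (1 - PRIOR))
    for name, (sens, fpr) in sources.items():
        if hits.get(name):      # evidence observed for this source
            log_odds += math.log(sens / fpr)
        else:                   # absence of evidence also counts (weakly)
            log_odds += math.log((1 - sens) / (1 - fpr))
    return 1 / (1 + math.exp(-log_odds))

strong = confidence({"yeast_two_hybrid": True, "co_ip": True})
weak = confidence({"coexpression": True})
```

Two specific, consistent assays push the posterior high, while a single noisy source leaves it near the prior, which is exactly the "strength and consistency" behaviour the abstract describes.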
Towards asynchronous brain-computer interfaces: a P300-based approach with statistical models.
Zhang, Haihong; Wang, Chuanchu; Guan, Cuntai
2007-01-01
Asynchronous control is a critical issue in developing brain-computer interfaces for real-life applications, where the machine should be able to detect the occurrence of a mental command. In this paper we propose a computational approach for robust asynchronous control using the P300 signal, in a variant of the oddball paradigm. First, we use Gaussian models in the support vector margin space to describe the various types of EEG signals present in an asynchronous P300-based BCI. This allows us to derive a probability measure of the control state given the EEG observations. Second, we devise a recursive algorithm to detect and locate control states in ongoing EEG. Experimental results indicate that our system allows information transfer at approximately 20 bits/min with a low false-alarm rate (1/min). PMID:18003145
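A minimal sketch of the two-step idea, with one-dimensional Gaussian models over SVM margin values and a simple recursive smoother standing in for the paper's detection algorithm (all means, widths, the prior, and the threshold are invented):

```python
import math

# Hypothetical 1-D Gaussian models of SVM margin values for the two states.
MU_CTRL, SD_CTRL = 1.2, 0.6      # margins during a P300 control epoch
MU_IDLE, SD_IDLE = -0.8, 0.7     # margins during non-control EEG
P_CTRL = 0.2                     # assumed prior on the control state

def gauss(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def p_control(margin):
    """Posterior probability of the control state given one margin value."""
    num = P_CTRL * gauss(margin, MU_CTRL, SD_CTRL)
    return num / (num + (1 - P_CTRL) * gauss(margin, MU_IDLE, SD_IDLE))

def detect(margins, threshold=0.8, alpha=0.6):
    """Recursively smooth the posterior over the ongoing stream and flag
    sample indices where accumulated evidence crosses the threshold."""
    events, s = [], 0.0
    for i, m in enumerate(margins):
        s = alpha * s + (1 - alpha) * p_control(m)
        if s > threshold:
            events.append(i)
    return events

# Idle margins, then five control-epoch margins, then idle again.
events = detect([-0.8] * 10 + [1.3] * 5 + [-0.9] * 10)
```

The smoothing is what buys the low false-alarm rate: a single outlying margin cannot trip the detector, only a sustained run of control-like evidence can.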
NASA Astrophysics Data System (ADS)
Makino, Hironori; Minami, Nariyuki
2014-07-01
The theory of the quantal level statistics of a classically integrable system, developed by Makino et al. to investigate the non-Poissonian behaviors of the level-spacing distribution (LSD) and the level-number variance (LNV) [H. Makino and S. Tasaki, Phys. Rev. E 67, 066205 (2003); H. Makino and S. Tasaki, Prog. Theor. Phys. Suppl. 150, 376 (2003); H. Makino, N. Minami, and S. Tasaki, Phys. Rev. E 79, 036201 (2009); H. Makino and S. Tasaki, Prog. Theor. Phys. 114, 929 (2005)], is successfully extended to the study of the E(K,L) function, a fundamental measure from which most statistical observables of quantal levels, in addition to the LSD and LNV, can be determined. In the theory of Makino et al., the eigenenergy spectrum is regarded as a superposition of infinitely many components, whose formation is supported by the Berry-Robnik approach in the far semiclassical limit [M. Robnik, Nonlinear Phenom. Complex Syst. 1, 1 (1998)]. We derive the limiting E(K,L) function in the limit of infinitely many components and elucidate its properties when energy levels show deviations from Poisson statistics.